Human Factors

Why modernize 21 CFR 820 to ISO 13485?

The FDA patches the regulations with guidance documents, but there is a desperate need to modernize 21 CFR 820 to ISO 13485.

FDA Proposed Amendment to 21 CFR 820

On February 23, 2022, the FDA published a proposed rule to amend the medical device Quality System Regulation (QSR), 21 CFR 820. The FDA planned to implement the amended regulations within 12 months, but the consensus of the device industry is that a transition of several years will be necessary. In the proposed rule, the FDA justifies the need for amended regulations based on the “redundancy of effort to comply with two substantially similar requirements,” which creates inefficiencies. In public presentations, the FDA’s supporting arguments for the proposed quality system rule change rely heavily upon comparing the similarities between 21 CFR 820 and ISO 13485. However, the comparison table provided is quite vague (see the table from page 2 of the FDA’s presentation reproduced below). The FDA also provided estimates of the projected cost savings resulting from the proposed rule. What is completely absent from the discussion of the proposed rule is any mention of the need to modernize 21 CFR 820.

[Table from page 2 of the FDA’s presentation: Overview of Similarities and Differences between the QSR and ISO 13485]

Are the requirements “substantively similar”?

The above table provided by the FDA claims that the requirements of 21 CFR 820 are substantively similar to the requirements of ISO 13485. However, there are aspects of ISO 13485 that would modernize 21 CFR 820. The areas of impact are 1) software, 2) risk management, 3) human factors/usability engineering, and 4) post-market surveillance. The paragraphs below identify the applicable clauses of ISO 13485 where each of the four areas is covered.

Modernize 21 CFR 820 to include software and software security

Despite the limited proliferation of software in medical devices during the 1990s, 21 CFR 820 includes seven references to software. However, there are clauses of ISO 13485 that reference software but are not covered in the QSR, and modernizing 21 CFR 820 to reference ISO 13485 would incorporate these additional areas of applicability. Clause 4.1.6 includes a requirement for the validation of quality system software. Clause 7.6 includes a requirement for the validation of software used to manage calibrated devices used for monitoring and measurement. Clause 7.3 includes a requirement for the validation of software embedded in devices, but that requirement is already included in 21 CFR 820.30. The FDA could modernize 21 CFR 820 further by defining Software as a Medical Device (SaMD), referencing IEC 62304 for management of the software development lifecycle, referencing IEC/TR 80002-1 for hazard analysis of software, referencing AAMI TIR57 for cybersecurity, and referencing ISO 27001 for network security. Currently, the FDA’s strategy is to implement guidance documents for cybersecurity and software validation requirements, whereas ISO 13485 only references IEC 62304. The only aspect of 21 CFR 820 that appears to be adequate with regard to software is the validation of software used to automate processes (21 CFR 820.70(i)). This requirement is similar to Clause 7.5.6 (i.e., validation of processes for production and service provision).

Does 21 CFR 820 adequately cover risk management?

The FDA already recognizes ISO 14971:2019 as the standard for the risk management of medical devices. However, risk is only mentioned once in 21 CFR 820. In order to modernize 21 CFR 820, the FDA will need to identify how risk should be integrated throughout the quality system requirements. The FDA recently conducted two webinars related to the risk management of medical devices, but implementing a risk-based approach to quality systems is a struggle even for companies that already have ISO 13485 certification. Therefore, a guidance document with examples of how to implement a risk-based approach to a quality system would be very helpful to the medical device industry.

Modernize 21 CFR 820 to include Human Factors and Usability Engineering

ISO 13485 references IEC 62366-1 as the applicable standard for usability engineering requirements, but there is no similar requirement in 21 CFR 820. Therefore, human factors are an area where 21 CFR 820 needs to be modernized. The FDA released a guidance document in 2016 for the human factors content to be included in a 510k pre-market notification, but that guidance no longer reflects the FDA’s current thinking on human factors/usability engineering best practices. The FDA recently released a draft guidance for the format and content of human factors testing in a pre-market 510k submission, but that document is not yet a final guidance, and there is no mention of human factors, usability engineering, or even use errors in 21 CFR 820. Device manufacturers should be creating work instructions for use-related risk analysis (URRA) and fault-tree analysis to estimate the risks associated with use errors, as described in the draft guidance. These work instructions will also need to be linked with the design and development process and the post-market surveillance process.

Modernize 21 CFR 820 to include Post-Market Surveillance

ISO/TR 20416:2020 is a new standard specific to post-market surveillance, but it is not recognized by the FDA. There is also no section of 21 CFR 820 that includes a post-market surveillance requirement. The FDA QSR focuses on reactive elements such as:

  • 21 CFR 820.100 – CAPA
  • 21 CFR 820.198 – Complaint Handling
  • 21 CFR 803 – Medical Device Reporting
  • 21 CFR 820.200 – Servicing
  • 21 CFR 820.250 – Statistical Techniques

The FDA does occasionally require Section 522 post-market surveillance studies for devices whose risks warrant post-market safety studies. In addition, most Class III devices are required to conduct post-approval studies (PAS): the FDA requires the submitter to provide a plan for a post-market study, and once the study plan is accepted by the FDA, the manufacturer must report on the progress of the study. Upon completion of the study, however, most manufacturers are not required to continue post-market surveillance.

How will the FDA enforce compliance with ISO 13485?

It is not clear how the FDA would enforce compliance with Clause 8.2.1 of ISO 13485 (feedback) because there is no substantively equivalent requirement in the current 21 CFR 820 regulations. The QSR is 26 years old, and the regulation does not mention cybersecurity, human factors, or post-market surveillance. Risk is only mentioned once in the regulation, and software is only mentioned seven times. The FDA has “patched” the regulations through guidance documents, but there is a desperate need for new regulations that include these critical elements. The transition of quality system requirements in the USA from 21 CFR 820 to ISO 13485:2016 will force regulators to establish policies for compliance with all of the quality system elements that are not in 21 CFR 820.

Companies that do not already have ISO 13485 certification should be proactive by 1) updating their quality system to comply with the ISO 13485 standard and 2) adopting the best practices outlined in the following related standards:

  • AAMI TIR57:2016 – Principles for medical device security – Risk management
  • IEC 62366-1:2015 – Medical devices – Part 1: Application of usability engineering to medical devices
  • ISO/TR 20416:2020 – Medical devices – Post-market surveillance for manufacturers
  • ISO 14971:2019 – Medical devices – Application of risk management to medical devices
  • IEC 62304:2006/AMD1:2015 – Medical device software – Software life cycle processes
  • IEC/TR 80002-1:2009 – Medical device software – Part 1: Guidance on the application of ISO 14971 to medical device software
  • IEC/TR 80002-2:2017 – Medical device software – Part 2: Validation of software for medical device quality systems

What is the potential impact of the US FDA requiring software, risk management, cybersecurity, human factors, and post-market surveillance as part of a medical device company’s quality system?

Best human factors questions?

What are the best human factors questions to ask the FDA during a pre-submission meeting, and what human factors content do you need in a 510k?

Best human factors questions to ask the FDA?

The FDA did not start enforcing the requirement to apply human factors and usability engineering to medical device design until 2017 because the final version of the human factors guidance document was not released until February 3, 2016. Approximately ninety percent of the human factors testing reports submitted to the FDA in 510k pre-market submissions are deficient because the 510k submission only includes the final summative testing report. The FDA needs a complete usability engineering file, and the human factors information needs to comply with the FDA guidelines for the format and content of a 510k pre-market submission, not just IEC 62366-1:2015.

What human factors information does the FDA want?

For several years, FDA submission deficiency letters indicated that you should not include the frequency of occurrence in your estimation of use-related risks, but the FDA never provided this information in a guidance document. On December 9, 2022, the FDA finally released a draft human factors guidance regarding the format and content of a 510k pre-market submission. The new draft guidance includes the requirement for a use-related risk analysis (URRA) in Table 2 (reproduced below).

[Figure: Table 2 from the draft guidance – example of a tabular format for the URRA]

In this new draft guidance, the FDA identifies three different human factors submission categories. For the first category, only a conclusion and high-level summary are needed. For the second category, a user specification is also needed. For the third category, you need a comprehensive human factors engineering report with the following elements described in Section IV of the draft FDA guidance:

Submission Categories 1, 2, and 3

  • Conclusion and high-level summary

Submission Categories 2 and 3

  • Descriptions of intended device users, uses, use environments, and training
  • Description of the device-user interface
  • Summary of known use problems

Submission Category 3 only

  • Summary of preliminary analyses and evaluations
  • Use-related risk analysis to analyze hazards and risks associated with the use of the device
  • Identification and description of critical tasks
  • Details of validation testing of the final design

Before you spend tens of thousands or hundreds of thousands of dollars on human factors testing, you want to make sure the FDA agrees with your human factors testing plan. Otherwise, you will pay for the testing twice: once for your initial submission and a second time in your response to the FDA’s request for additional information to address deficiencies. Human factors testing can cost more than your electrical safety testing: the facility needs the right equipment and space for the testing, you need support personnel to set up the equipment, you need to recruit and compensate participants, and you need device samples.

When can you ask the FDA human factors questions?

The FDA cannot provide consulting advice on a submission, and the agency will not review data during pre-submission meetings. The FDA can, however, provide feedback on protocols, specifications, and scientific justifications. Therefore, you should submit questions to the FDA in a pre-submission when you have a draft protocol, a draft specification, or a draft justification for why a task is not critical. Pre-submissions are “non-binding,” so you can still change your design and your approach to human factors. Don’t wait until your information is 100% finalized; share your documentation at the draft stage, during the development phase and before your design freeze. You need these answers when you are planning a study and obtaining quotes.

What are the best human factors questions to ask in a pre-sub?

In its guidance on pre-submission meetings, the FDA provides suggested questions to ask:

  • Does the Agency have comments on our proposed human factors engineering process?
  • Is the attached use-related risk analysis plan adequate? Does the Agency agree that we have identified all the critical tasks?
  • Does the Agency agree with our proposed test participant recruitment plan for the human factors validation testing?

The above examples are only suggestions, but the best approach is to provide a brief example of what the human factors information will look like and ask the FDA to comment on the examples. The FDA does not have time to review data during a pre-sub meeting, but the FDA can review a few rows extracted from your URRA and comment on your proposed approach to the human factors process.

Human factors questions that are not appropriate

The FDA pre-submission guidance cautions you to ask only 3-4 questions per meeting request because the FDA has difficulty answering more questions in a 60-minute teleconference. Therefore, you should not ask questions that are already answered in FDA guidance. The new draft guidance includes examples of when a device modification can leverage existing human factors information and when new information is needed to support a premarket submission. Instead of asking a question specific to leveraging existing human factors information, provide your rationale for leveraging existing data and ask if the FDA has any concerns with your overall approach to human factors.

Recommended human factors action items

Create a procedure for your human factors process that includes detailed instructions for creating the information required in a usability engineering report, as well as templates for each document.

Formative usability testing – Frequently Asked Questions?

Formative usability testing is not a regulatory requirement, but it is necessary if you want to successfully develop medical devices.


What is the difference between formative and summative usability testing?

“Formative” tests are any usability tests that you perform during the development process, while “summative” testing is the final usability testing you perform to validate that your chosen user interface is effective. Many design teams perform formative testing of one kind or another without even realizing that is what they are doing. Unfortunately, design teams often forget to document the testing they performed during prototyping and product development. Formative usability testing has probably always existed as part of product development, but not everyone recognizes the term or identifies the work they have done as “formative.” The most important reason for documenting formative usability testing is to identify which user interface designs failed and why, so that future design teams can learn from your failures.

Why don’t more companies do usability testing?

Everyone likes to believe they can skip steps in the learning process, but some lessons can only be learned the hard way. When a medical device design team is developing a user interface for a new product, they need to learn which designs will fail and why before they can fully understand how to design the best user interface for the device. Instead, most design and development teams select a user interface that they are familiar with or one they have seen used by a competitor’s product. The team will not always test the proposed design solution because they have no reason to believe that the chosen interface will fail. Unfortunately, this can lead to failure later in the design process, and the team will need to backtrack and repeat the evaluation of various interface designs.

What is the best approach?

“Fail small and fail fast” is the best advice for anyone performing formative usability testing. Instead of writing a lengthy protocol and recruiting 10 subjects to evaluate your proposed user interface, consider building a couple of different prototypes and asking two or three people which prototype they prefer and why. Another simple prompt is, “Tell me what you think of this design.” Iterative formative testing over time with different users is better than a single testing session with a lot of users. It is also better to start collecting formative usability testing data as early in the development process as possible. Gathering data earlier in the process ensures that users direct the development of your device, instead of the design team taking the device in a direction that users do not prefer.

When during the design process should formative testing be planned?

Formative testing should be planned during the development phase of the design process. During this phase, medical device manufacturers evaluate multiple design solutions as risk controls for their devices. Use-related risks should be included in this evaluation, and formative usability testing is intended to identify which user interface does the best job of eliminating use errors. It is important to evaluate these potential user interfaces and to verify that there are no use errors that the design team overlooked during this phase of the design. This is also the design phase in which the instructions for use and the user training are developed. All of this formative usability testing should be completed prior to your design freeze and the start of verification and validation testing.

What are the different types of formative testing?

Formative usability testing can be used as a pilot for your summative usability testing protocol prior to scheduling the final testing. However, there are many other types of formative testing. The most common reason for this testing is to identify any potential use errors that were not originally identified in your use-related risk analysis (URRA). Another type of testing is to simulate use of the device to make sure that every user task is identified in the instructions for use. Finally, design teams conduct formative usability testing to develop materials for training new users on how to use the device properly.

Which types of formative tests are the most useful?

Use-related risks are difficult to identify unless you conduct simulated use testing with your device. Therefore, you need to get your device into the hands of your intended users, in the intended use environment, and ask them to simulate the use of the device. It is not critical to evaluate a specific number of users. Two or three users might be enough, but simulated use by intended users in the intended use environment is essential to give you the information you need regarding potential use errors. It is also important to avoid “leading” the users. Instead of asking users to perform a specific task, ask them to show you how they would use the device. Ask them what they like about the device and what they don’t like about it. Ask them what they think about the device and how it compares to other devices they are already using.

Who should you recruit for your formative usability testing?

You should start your human factors process by defining the intended user of your device and determining whether there is more than one user group. You should then recruit subjects from this user group (or groups). You can use employees or friends to help you with initial feedback about the usability of your device’s user interface. However, what seems intuitive to one person may not be to other people with different experiences. Even the sequence of steps in which users perform the same tasks can impact usability. Therefore, be very cautious about relying upon data collected only from subjects outside your intended user group. Most companies disregard this advice because they are unsure of how to recruit their intended users. However, if your company has difficulty identifying intended users for testing, you will also have difficulty marketing and selling your device later. This struggle may be an indicator that you need to involve marketing and salespeople who can get your prototypes into the hands of the intended users.

How should you document formative studies?

When you are performing summative usability testing, you already know exactly what your use-related risks are, and you have a list of critical tasks that you are trying to verify users can perform without use errors. Because these tasks are clearly defined, it is easier to write a protocol and to design data collection forms for study moderators to use. In contrast, when you are conducting formative usability testing, you are trying to identify use errors that you are not already aware of. Therefore, it is much harder to write a detailed protocol and design a data collection form. For this reason, it is critical to capture the data with video recordings. Video is a safeguard to ensure that you will not miss valuable use errors or use tasks that you had not already identified, and it allows the moderator to focus on observing and interviewing users with open-ended questions. This will generate the most value for your design team during the development process.

Where is testing performed?

While the design team is developing the list of design inputs for your new device, the team must define the intended users and the intended use environment. Formative and summative usability testing should be conducted in the intended use environment, or you will need to simulate that environment. If you are struggling to figure out how to simulate the intended use environment, systematically identify its characteristics: temperature, humidity, ambient noise, other equipment that is present, the number of people present, and the dimensions of the space. If you have a room available with temperature and humidity control, you can add ambient noise by playing a recording of the intended use environment. You can rent equipment, or you can place objects of the same size in the space. You can also replicate workspace restrictions by taping the floor to establish boundaries for the simulation. By adding these characteristics to a simulated environment, you expand the range of places that can be used for formative usability testing.

What will happen if you skip formative testing?

If you skip formative usability testing, you increase the possibility of failing your summative usability testing. If this happens, your summative testing becomes your formative usability testing: after you fail, you will need to revise your testing protocol and repeat the study. Another possibility is that you will fail to identify a potential use error. If the FDA identifies this use error, you will need to repeat your testing. If the use error is never identified, you may end up with complaints or medical device reports of use errors. In extreme cases, this could result in serious injuries or death.
