Archive for Usability

Why modernize 21 CFR 820 to ISO 13485?

The FDA patches the regulations with guidance documents, but there is a desperate need to modernize 21 CFR 820 to ISO 13485.

FDA Proposed Amendment to 21 CFR 820

On February 23, 2022, the FDA published a proposed rule for medical device quality system regulation amendments. The FDA planned to implement the amended regulations within 12 months, but the consensus of the device industry is that a transition of several years will be necessary. In the proposed rule, the FDA justifies the need for amended regulations based on the “redundancy of effort to comply with two substantially similar requirements,” which creates inefficiencies. In public presentations, the FDA’s supporting arguments for the proposed quality system rule change rely heavily upon comparing the similarities between 21 CFR 820 and ISO 13485. However, the comparison table provided is quite vague (see the table from page 2 of the FDA’s presentation, reproduced below). The FDA also provided estimates of the projected cost savings resulting from the proposed rule. What is completely absent from the discussion of the proposed rule is any mention of the need to modernize 21 CFR 820.

Overview of similarities and differences between the QSR and ISO 13485 (table reproduced from page 2 of the FDA’s presentation)

Are the requirements “substantively similar”?

The above table provided by the FDA claims that the requirements of 21 CFR 820 are substantively similar to the requirements of ISO 13485. However, there are some aspects of ISO 13485 that will modernize 21 CFR 820. The areas of impact are: 1) software, 2) risk management, 3) human factors or usability engineering, and 4) post-market surveillance. The paragraphs below identify the applicable clauses of ISO 13485 where each of the four areas is covered.

Modernize 21 CFR 820 to include software and software security

Despite the limited proliferation of software in medical devices during the 1990s, 21 CFR 820 includes seven references to software. However, several clauses of ISO 13485 reference software in ways that are not covered in the QSR, and modernizing 21 CFR 820 to reference ISO 13485 would incorporate these additional areas of applicability. Clause 4.1.6 requires validation of quality system software. Clause 7.6 requires validation of software used to manage calibrated devices used for monitoring and measurement. Clause 7.3 requires validation of software embedded in devices, but that requirement was already included in 21 CFR 820.30. The FDA could modernize 21 CFR 820 further by defining Software as a Medical Device (SaMD), referencing IEC 62304 for management of the software development lifecycle, referencing IEC/TR 80002-1 for hazard analysis of software, referencing AAMI TIR57 for cybersecurity, and referencing ISO 27001 for network security. Currently, the FDA strategy is to implement guidance documents for cybersecurity and software validation requirements, while ISO 13485 only references IEC 62304. The only aspect of 21 CFR 820 that appears to be adequate with regard to software is validation of software used for automation in 21 CFR 820.70(i). This requirement is similar to Clause 7.5.6 (i.e., validation of processes for production and service provision).

Does 21 CFR 820 adequately cover risk management?

The FDA already recognizes ISO 14971:2019 as the standard for risk management of medical devices. However, risk is only mentioned once in 21 CFR 820. In order to modernize 21 CFR 820, it will be necessary for the FDA to identify how risk should be integrated throughout the quality system requirements. The FDA recently conducted two webinars related to risk management of medical devices, but implementing a risk-based approach to quality systems is a struggle for companies that already have ISO 13485 certification. Therefore, a guidance document with examples of how to implement a risk-based approach to quality system implementation would be very helpful to the medical device industry. 

Modernize 21 CFR 820 to include Human Factors and Usability Engineering

ISO 13485 references IEC 62366-1 as the applicable standard for usability engineering requirements, but there is no similar requirement found in 21 CFR 820. Therefore, human factors is an area where 21 CFR 820 needs to be modernized. The FDA has released guidance on the human factors content to be included in a 510k pre-market notification, but that guidance was released in 2016 and does not reflect the FDA’s current thoughts on human factors and usability engineering best practices. The FDA recently released a draft guidance for the format and content of human factors testing in a pre-market 510k submission, but that document is not yet a final guidance, and there is no mention of human factors, usability engineering, or even use errors in 21 CFR 820. Device manufacturers should be creating work instructions for use-related risk analysis (URRA) and fault-tree analysis to estimate the risks associated with use errors, as identified in the draft guidance. These work instructions will also need to be linked with the design and development process and the post-market surveillance process.

Modernize 21 CFR 820 to include Post-Market Surveillance

ISO/TR 20416:2020 is a new standard specific to post-market surveillance, but it is not recognized by the FDA. There is also no section of 21 CFR 820 that includes a post-market surveillance requirement. The FDA QSR focuses on reactive elements such as:

  • 21 CFR 820.100 – CAPA
  • 21 CFR 820.198 – Complaint Handling
  • 21 CFR 803 – Medical Device Reporting
  • 21 CFR 820.200 – Servicing
  • 21 CFR 820.250 – Statistical Techniques

The FDA does occasionally require 522 Post-Market Surveillance Studies for devices whose risks warrant post-market safety studies. In addition, most Class III devices are required to conduct post-approval studies (PAS). For Class III devices, the FDA requires the submitter to provide a plan for a post-market study. Once the study plan is accepted by the FDA, the manufacturer must report on the progress of the study. Upon completion of the study, most manufacturers are not required to continue post-market surveillance.

How will the FDA enforce compliance with ISO 13485?

It is not clear how the FDA would enforce compliance with Clause 8.2.1 in ISO 13485, because there is no substantively equivalent requirement in the current 21 CFR 820 regulations. The QSR is 26 years old, and the regulation does not mention cybersecurity, human factors, or post-market surveillance. Risk is only mentioned once by the regulation, and software is only mentioned seven times. The FDA has “patched” the regulations through guidance documents, but there is a desperate need for new regulations that include critical elements. The transition of quality system requirements for the USA from 21 CFR 820 to ISO 13485:2016 will force regulators to establish policies for compliance with all of the quality system elements that are not in 21 CFR 820.

Companies that do not already have ISO 13485 certification should be proactive by 1) updating their quality system to comply with the ISO 13485 standard and 2) adopting the best practices outlined in the following related standards:

  • AAMI TIR57:2016 – Principles for medical device security – Risk management
  • IEC 62366-1:2015 – Medical devices – Part 1: Application of usability engineering to medical devices
  • ISO/TR 20416:2020 – Medical devices – Post-market surveillance for manufacturers
  • ISO 14971:2019 – Medical devices – Application of risk management to medical devices
  • IEC 62304:2006/AMD1:2015 – Medical device software – Software life cycle processes
  • IEC/TR 80002-1:2009 – Medical device software – Part 1: Guidance on the application of ISO 14971 to medical device software
  • ISO/TR 80002-2:2017 – Medical device software – Part 2: Validation of software for medical device quality systems

What is the potential impact of the US FDA requiring software, risk management, cybersecurity, human factors, and post-market surveillance as part of a medical device company’s quality system?

Posted in: FDA, Human Factors, ISO 13485:2016, Post-Market Surveillance, Quality Management System, Software Verification and Validation

The best human factors questions in every successful FDA meeting are?

What are the best human factors questions to ask the FDA during a pre-submission meeting, and what information content do you need in a 510k?

Talk to the FDA before human factors validation

The FDA did not start enforcing the requirement to apply human factors and usability engineering to medical device design until 2017 because the final version of the human factors guidance document was not released until February 3, 2016. Approximately ninety percent of the human factors testing reports submitted to the FDA in 510k pre-market submissions are deficient because the 510k submission content only includes the final summative testing report. The FDA needs a complete usability engineering file, and the human factors information needs to comply with FDA guidelines for the format and content of a 510k pre-market submission–not just IEC 62366-1:2015.

What human factors information does the FDA want?

For several years, FDA submission deficiency letters indicated that you should not include the frequency of occurrence in your estimation of use-related risks, but the FDA never provided this information in a guidance document. On December 9, 2022, the FDA finally released a draft human factors guidance regarding the format and content of a 510k pre-market submission. The new draft guidance includes the requirement for a use-related risk analysis (URRA) in table 2 (copied below).

Table 2: Example of a tabular format for the URRA (reproduced from the FDA draft guidance)
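
To make the structure of a URRA more concrete, here is a minimal sketch in Python. The field names are an assumption for illustration only (task, potential use error, potential harm, risk control, and a reference to validation evidence); the exact column headings are defined in Table 2 of the FDA draft guidance, so treat this as a sketch rather than a template.

    from dataclasses import dataclass

    @dataclass
    class URRARow:
        """One illustrative row of a use-related risk analysis (URRA) table."""
        task: str                  # user task or step in the use scenario
        potential_use_error: str   # how the task could be performed incorrectly or omitted
        potential_harm: str        # clinical harm that could result from the use error
        risk_control: str          # design feature, labeling, or training that addresses the error
        validation_reference: str  # human factors validation evidence that the control is effective
        # Note: there is no probability-of-occurrence field, because the FDA expects you to
        # assume the use error will occur rather than estimate how often it will occur.

    # Hypothetical example row for an injection device
    example = URRARow(
        task="Set the prescribed dose",
        potential_use_error="User sets an incorrect dose",
        potential_harm="Over- or under-dose of medication",
        risk_control="Dose window with tactile detents and a confirmation step",
        validation_reference="Summative protocol HF-VAL-001, critical task 3",
    )
    print(example)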

In this new draft FDA guidance, the FDA identifies three different human factors submission categories. For the first category, only a conclusion and high-level summary are needed. For the second category, a user specification is also needed. For the third category, you need a comprehensive human factors engineering report with the following elements described in Section IV of the draft FDA guidance:

Submission Categories 1, 2, and 3

  • Conclusion and high-level summary

Submission Categories 2 and 3

  • Descriptions of intended device users, uses, use environments, and training
  • Description of the device-user interface
  • Summary of known use problems

Submission Category 3 only

  • Summary of preliminary analyses and evaluations
  • Use-related risk analysis to analyze hazards and risks associated with the use of the device
  • Identification and description of critical tasks
  • Details of validation testing of the final design
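
As a rough illustration of how the three submission categories build on one another, the sketch below (Python) accumulates the report elements listed above for each category. The category numbers and element labels come from the list above; this is a simplification of the draft guidance, not a substitute for reading it.

    # Human factors report elements for each submission category (per the list above).
    CATEGORY_1 = ["Conclusion and high-level summary"]
    CATEGORY_2 = CATEGORY_1 + [
        "Descriptions of intended device users, uses, use environments, and training",
        "Description of the device-user interface",
        "Summary of known use problems",
    ]
    CATEGORY_3 = CATEGORY_2 + [
        "Summary of preliminary analyses and evaluations",
        "Use-related risk analysis (URRA)",
        "Identification and description of critical tasks",
        "Details of validation testing of the final design",
    ]

    def required_elements(category: int) -> list:
        """Return the report elements assumed for a human factors submission category."""
        return {1: CATEGORY_1, 2: CATEGORY_2, 3: CATEGORY_3}[category]

    for element in required_elements(3):
        print("-", element)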

Before you spend tens of thousands or hundreds of thousands of dollars on human factors testing, you want to make sure the FDA agrees with your human factors testing plan. Otherwise, you will pay for the testing twice: once for your initial submission and a second time in your response to the FDA request for additional information to address deficiencies. Testing can cost more than your electrical safety testing. The facility needs to have the right equipment and space for the testing, you need support personnel to set up the equipment, you need to recruit participants, you need to compensate participants, and you need device samples.

When can you ask the FDA human factors questions?

The FDA cannot provide consulting advice on a submission, and the agency will not review data during pre-submission meetings. The FDA can provide feedback on protocols, specifications, and scientific justifications. Therefore, you should submit questions to the FDA in a pre-submission when you have a draft protocol, a draft specification, or a draft justification for why a task is not critical. Pre-submissions are “non-binding.” You can change your design and approach to human factors. Therefore, don’t wait until your information is 100% finalized. Share your documentation at the draft stage during the development phase and before your design freeze. You need these answers when you are planning a study and obtaining quotes. 

What are the best human factors questions to ask in a pre-sub?

In the FDA guidance for pre-submission meetings, the FDA provides suggested questions to ask:

  • Does the Agency have comments on our proposed human factors engineering process?
  • Is the attached use-related risk analysis plan adequate? Does the Agency agree that we have identified all the critical tasks?
  • Does the Agency agree with our proposed test participant recruitment plan for the human factors validation testing?

The above examples are only suggestions, but the best approach is to provide a brief example of what the human factors information will look like and ask the FDA to comment on the examples. The FDA does not have time to review data during a pre-sub meeting, but the FDA can review a few rows extracted from your URRA and comment on your proposed approach to the human factors process.

Human factors questions that are not appropriate

The FDA pre-submission guidance cautions you to ask only 3-4 questions for each meeting request because the FDA has difficulty answering more questions in a 60-minute teleconference. Therefore, you should not ask questions that are already answered in FDA guidance documents. The new draft guidance includes examples of when a device modification can leverage existing human factors information and when new information is needed to support a premarket submission. Instead of asking a question specific to leveraging existing human factors information, provide your rationale for leveraging existing data and ask if the FDA has any concerns with your overall approach to human factors.

Recommended human factors action items

Create a procedure for your human factors process that includes detailed instructions for creating the information required in a usability engineering report and templates for each document.

Posted in: 510(k), FDA, Human Factors, Usability

Human factors process, can we make this easy to understand?

90% of usability testing submitted to the FDA is unacceptable and the root cause is simply a failure to understand the human factors process.

If you submitted no usability testing to the FDA in your 510(k) submission, it would be obvious why the FDA reviewer identified usability as a major deficiency. However, you spent tens of thousands of dollars on usability testing that delayed the 510(k) submission by six months. Despite all of the time and money your company invested in the human factors process, it appears that you need to start over and repeat the entire process. The CEO is furious, and he wants you to show him where in the 49-page FDA guidance it says that you have to do things differently.

Benefits from the human factors process

  1. Preventing use errors that result in serious injuries and death
  2. Easy-to-use products sell
  3. You will prevent delays in regulatory approval

Why was your rationale for no usability testing rejected?

Unlike CE Marking technical files, the FDA does not require a usability engineering file for all products. Instead, the FDA determines if usability testing is required based upon a comparison of your device’s user interface and a competitor’s user interface (i.e. predicate device user interface). If the user interface is identical, then usability testing may not be required. Instead, your company should be able to write a rationale for not doing usability testing based upon equivalence with the predicate device. If there are differences in your user interface, you will need to provide use-related risk analysis (URRA), identify critical tasks, implement risk controls, and provide verification testing to demonstrate the effectiveness of the risk controls. Even if your device is “easier to use” or “simpler”, you still need to provide the documentation to support this claim in your submission. The FDA also does not allow comparative claims in your marketing for 510(k) cleared devices. Comparative claims require the support of clinical data.
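
The decision described above can be summarized in a short sketch. This is only an illustration of the reasoning in this post, assuming a simple yes/no comparison of user interfaces against the predicate; it is not an FDA decision tool.

    def usability_documentation_needed(ui_identical_to_predicate: bool) -> list:
        """Illustrative summary of the documentation path described in this post."""
        if ui_identical_to_predicate:
            # Identical user interface: a written rationale may replace usability testing.
            return ["Rationale for no usability testing based on equivalence with the predicate"]
        # Any user interface differences trigger the full human factors documentation set.
        return [
            "Use-related risk analysis (URRA)",
            "Identification of critical tasks",
            "Implementation of risk controls",
            "Verification testing demonstrating that the risk controls are effective",
        ]

    print(usability_documentation_needed(ui_identical_to_predicate=False))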

What is the 10-step human factors process?

  1. Define human factors for your device or IVD
  2. Identify use errors
  3. Conduct a URRA
  4. Perform a critical task analysis
  5. Conduct a risk control option analysis
  6. Conduct formative usability testing
  7. Implement risk controls
  8. Conduct summative usability testing
  9. Prepare HFE/UE documentation
  10. Collect post-market surveillance data specific to use errors

There is a YouTube video describing these 10 steps at the bottom of this blog posting.

Why is formative testing needed?

  • Observational study to identify unforeseen use errors
  • Observational study to evaluate risk control options
  • Other types of formative studies, such as development of indications for use and development of training materials

Why is the human factors process crazy expensive to outsource?

  • Human factors consultants need time to learn about your device
  • Consultants are more conservative because they cannot afford to fail
  • Justifying your choice of risk controls is difficult because you started too late
  • Your instructions for use (IFU) are inadequate
  • Consultants need to explain the human factors process to you
  • Recruiting subjects is marketing (which may not be their expertise)
  • You are paying for infrastructure (specialized testing facilities)
  • This is a team effort that requires many consulting hours collectively

Why was your Usability Engineering File refused?

  1. Your company provided an application failure modes and effects analysis (aFMEA) to support your justification that residual risks are acceptable. The FDA guidance suggests using risk analysis tools such as an FMEA or fault-tree analysis, but deficiency letters from FDA reviewers recommend a use-related risk analysis (URRA) format that is totally different.

    Example of a URRA Table provided by the FDA for the Human Factors Process

    The primary problem with using an FMEA or fault-tree risk analysis tool is that these tools involve estimating the severity of harm and the probability of occurrence of harm, while the FDA does not feel it is appropriate to estimate the probability of occurrence of harm for use errors. Instead, the FDA instructs companies to assume that use errors will occur and to implement risk controls to mitigate those risks (see the URRA example above). Although full “mitigation” is unlikely and use-related risks will only be reduced, this is the approach the FDA wants companies to use. In addition, the FDA expects your company to provide traceability of risk control implementation to each use-related risk you identified, and the FDA expects documentation of verification testing (i.e., usability testing) that shows your risk controls are effective. Finally, the FDA (and ISO 14971, Clause 10) expects you to collect use error data and perform trend analysis. Any use errors that are reported should be evaluated to determine whether additional corrective actions are needed to prevent future use errors. Blaming “user error” is not an acceptable approach.

  2. You provided risk analysis and human factors testing in your 510(k) submission, but the FDA reviewer said you need to identify critical tasks and provide traceability to each critical task in your summative validation report. – Critical tasks are specifically mentioned in section 3.2 of the FDA guidance on applying human factors and usability engineering–and a total of 49 times throughout the guidance. However, “critical tasks” are not mentioned even once in ISO 14971:2019 or ISO/TR 24971:2020. The term “critical task” is not even found in IEC 62366-1:2015. There is mention of “tasks,” and “task” has a formal definition (i.e., Definition 3.14: “Task – one or more USER interactions with a MEDICAL DEVICE to achieve a desired result”). Therefore, companies that are familiar with the ISO standards and the CE Marking process frequently need training on the FDA requirements for the human factors process. After receiving training, your company will be prepared to modify your usability engineering file documentation to comply with the FDA requirements for human factors.
  3. You completed a summative validation protocol, but the FDA disagrees with your definition of user groups. – Each user has a different level of experience, training, and competency. Therefore, if you define the intended user population too broadly (e.g. healthcare practitioners), the FDA may not accept your summative usability testing. This is the reason that the human factors process begins with defining the human factors for your IVD or device. Radiologists, for example, have the following training pathway:
    • graduate from medical school;
    • complete an internship;
    • pass state licensing exam;
    • complete a residency in radiology;
    • become board certified; and
    • complete an optional fellowship.

Therefore, if you are developing imaging software, you need to make sure your user group includes radiologists that cover the entire range of competencies. In addition, most radiology images are taken by radiology technicians and then reviewed by the radiologist. Therefore, radiology technicians should be considered a completely different user group due to the differences in experience, training, and competency when compared to a radiologist. This simple example doubles the number of users needed because you have two user groups instead of one.

  4. You evaluated 15 users, but the FDA reviewer is asking you to evaluate a larger number of users based upon a special controls guidance document. – The FDA guidance on human factors testing specifies a minimum of 15 users for each user group–not a minimum of 15 users in total. Therefore, for a device that is intended for both Rx and OTC use, you will have at least two user groups that need to be evaluated independently. In addition, some devices have special controls guidance documents that specify usability testing requirements. For example, an OTC blood glucose meter must pass a 350-person lay-user study, and COVID-19 self-tests are expected to pass a 30-person lay-user study (see the sample-size sketch after this list).
  5. Your usability study was conducted in Australia, but the FDA insists that your usability study must be repeated in the USA. – Most people think of language as the primary difference between two countries, and therefore the author of a study protocol may not perceive any difference between the USA and Australia, Ireland, Canada, or the UK. However, this inability to identify differences in cultural norms reflects our own ignorance of cultural differences. International travelers learn quickly about the differences in the interface used for electrical outlets between the USA and other countries. There are also more subtle differences between cultures, such as which direction you toggle a light switch to turn on a light (up or down). For devices that are used in a hospital environment, it is critical to understand how your device will interact with other devices and how different hospital protocols might impact human factors.
  6. The FDA reviewer indicated that your usability engineering file does not assess the ability of laypersons to self-select whether your OTC device is appropriate for them. – Devices and IVDs may have contraindications or indications for use that are specific to an intended patient population or intended user population. In these cases, the user of the device or IVD needs to be able to “self-select” as included or excluded from use. The ability to self-select should be assessed as part of any OTC usability study. The ability to identify suitable and unsuitable patients for treatment is also a common criterion for a usability study involving prescription devices where a physician is the subject of the study.
  7. The FDA reviewer indicated that you did not provide the raw data collected by the study moderator. – Data collected during a human factors study is usually subjective in nature, and the FDA may want to conduct its own review and analysis of your data. Therefore, you cannot provide only a testing report that summarizes the results of your study; you must also provide the raw data for the study. It is permitted to provide the data in a tabular format that has been transcribed from paper case report forms or recorded electronically. You should also consider scanning any paper forms for permanent retention, or retaining the paper forms, in case there is any question about the accuracy of the transcription of the data collected. Finally, it is best practice to record videos of the study participants performing each task and answering interview questions. This helps fill any gaps in the notes recorded by the moderator, and the recording provides additional objective evidence of the study results.
  8. The FDA reviewer indicated that your study is not valid because the training provided by moderators was not scripted and training decay was not considered in the design of the study. – Summative usability testing requires that users complete all of the critical tasks identified in your critical task analysis without assistance. It is permitted to provide training to the user prior to conducting the study if the device or IVD is for prescription use and healthcare practitioners are responsible for providing instruction to the user. However, any training provided must be scripted in advance and approved as part of the summative usability testing protocol. This ensures that every subject in the study receives consistent training. Unfortunately, the FDA may still not be satisfied with the design of your study if you do not allow sufficient time to pass between when training is provided and when the subject uses the device or IVD for the first time. In general, one hour is the minimum amount of time that should pass between user training and first use of the device or IVD. This is referred to as “training decay,” and the duration of time between your scripted training and the user performing critical tasks for the first time should be specified in your summative usability protocol. One solution that addresses both issues is to provide a video of the instructions to each subject 24 hours in advance of participation in the study.
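
To make the sample-size arithmetic in item 4 concrete, here is a minimal sketch. The 15-participants-per-user-group minimum and the special controls examples (a 350-person lay-user study for OTC blood glucose meters, a 30-person lay-user study for COVID-19 self-tests) are the numbers cited in this post; the numbers that apply to your device come from the relevant FDA guidance documents.

    MIN_PER_USER_GROUP = 15  # FDA human factors guidance: minimum participants per distinct user group

    def minimum_participants(user_groups, special_controls_minimum=0):
        """Rough lower bound on summative study size for the scenarios described above."""
        human_factors_minimum = MIN_PER_USER_GROUP * len(user_groups)
        # A special controls guidance document (e.g., a 350-person lay-user study for an
        # OTC blood glucose meter) can require far more participants than 15 per group.
        return max(human_factors_minimum, special_controls_minimum)

    # A device intended for both Rx and OTC use has at least two user groups.
    print(minimum_participants(["healthcare practitioners", "lay users"]))        # 30
    print(minimum_participants(["lay users"], special_controls_minimum=350))      # 350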

Additional resources for the human factors process and usability testing

Posted in: 510(k), Design Control, Usability

Integrating usability testing into your design process

This article explains how you should be integrating usability testing into your design control process–especially formative usability testing.

Diagram of integrating usability engineering and risk management into your design control process

Why you should be integrating usability testing into the design

We recently recorded an updated usability webinar and released a usability procedure (SYS-048) with help from Research Collective–a firm specializing in human factors testing. After listening carefully to the webinar, and reading through the new usability procedure, I felt we needed to update our combined design/risk management plan to specify formative testing during phase 3 and summative (validation) testing during phase 4 of the design process. This is necessary to ensure your usability testing is interwoven with your risk management process. Integrating usability testing into all phases of your design process is critical–especially design planning (phase 1), feasibility (phase 2), and development (phase 3).

Integrating usability testing into your design plan helps identify issues earlier

During the usability training webinar, Research Collective provided a diagram showing the various steps in the usability engineering process. The first five steps should be included in Phases 1 and 2 of your design process. Phase 1 of the design process is planning. In that phase, you should identify all of the usability engineering tasks that need to be performed during the design process and estimate when each activity will be performed. The first of these usability activities is the identification of usability factors related to your device. Identifying usability factors is performed during Phase 2 of your design process before hazard identification.

Diagram of the steps for identifying usability issues

Before performing hazard identification, which should include identifying potential use errors, you need to identify five key usability elements associated with your device:

  1. prospective device users during all stages of use must be defined
  2. use environments must be identified
  3. user interfaces must be identified
  4. known use errors with similar devices and previous generations of your device must be researched
  5. critical tasks must be described in detail and analyzed for potential use errors

Defining users must address characteristics such as physical condition, education, literacy, dexterity, and experience. Use environment considerations may include low lighting, extreme temperatures or humidity, or excessive uncontrolled motion (e.g., ambulatory devices). User interfaces may include keyboards, knobs, buttons, switches, remote controllers, or even a touch screen display.
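
One way to keep these five elements organized before hazard identification is to capture them in a single usability specification record. The sketch below is a generic illustration with hypothetical field values; it simply mirrors the five elements listed above.

    from dataclasses import dataclass, field

    @dataclass
    class UsabilitySpecification:
        """The five key usability elements identified before hazard identification."""
        users: list = field(default_factory=list)             # prospective users during all stages of use
        use_environments: list = field(default_factory=list)  # e.g., low lighting, ambulance, home use
        user_interfaces: list = field(default_factory=list)   # e.g., touch screen, buttons, knobs
        known_use_errors: list = field(default_factory=list)  # from adverse event databases and literature
        critical_tasks: list = field(default_factory=list)    # tasks analyzed for potential use errors

    # Hypothetical example for a home-use infusion pump
    spec = UsabilitySpecification(
        users=["patients", "lay caregivers", "home health nurses"],
        use_environments=["home use", "low lighting", "travel"],
        user_interfaces=["touch screen display", "audible alarms"],
        known_use_errors=["tubing misconnection", "incorrect dose entry"],
        critical_tasks=["program the infusion rate", "respond to an occlusion alarm"],
    )
    print(spec)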

Often the best reason for developing a new device is to address an everyday use error that is inherent to the design of your current device model or a competitor’s product. Therefore, a thorough review of adverse event databases and literature searches for potential use errors is an important task to perform before hazard identification. This review of adverse event data and searches of the clinical literature are key elements of post-market surveillance, and ISO 13485:2016 now requires post-market surveillance to be an input to your design process.

Finally, the step-by-step process of using your device should be analyzed carefully to identify each critical user task. The FDA defines a critical task as “a user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.” Not every task is critical, but all critical tasks must be identified, and ultimately you need to verify that each critical task is performed correctly during your summative (validation) usability testing.

Evaluating Risk Control Options – Formative Usability Testing in Phase 3 (Development)

Once your design team has conducted hazard identification and identified your design inputs (i.e., design phase 2), you will begin to evaluate risks and compare various risk control options. Risk control option analysis requires testing multiple prototype versions to assess which design has the optimum benefit/risk ratio. This is an iterative process that involves screening tests. For any use risks you identify, formative usability testing should be performed. Sometimes the risk controls you implement will create new use errors or new risks of other types. In this case, you must compare the risks before implementing a risk control with risks created by the risk control.

Diagram of the formative usability testing process

Ideally, each design iteration will reduce the risks further until all risks have been eliminated. The international risk management standard (ISO 14971) states that risks shall be reduced as low as reasonably practicable (ALARP). However, the European medical devices regulations require risks to be reduced as far as possible, considering the state-of-the-art. For example, all small-bore connectors in the USA are now required to have unique connectors that are incompatible with IV tubing Luer lock connections to prevent potential use errors. That requirement is considered “state-of-the-art.” If your device is marketed in both the USA and Europe, you will need to reduce errors as far as possible–before writing warnings and precautions in your instructions for use.

Reaching the point where use errors cannot be reduced any further may require many design iterations, and each iteration should be subsequently evaluated with formative usability testing. Formative testing can be performed with prototypes, rather than production equivalents, but the formative testing conditions should also address factors such as the use environment and users with different levels of education and/or experience. Ultimately, if the formative testing is done well, summative (validation) testing will be a formality.

Risk Control Effectiveness During Phase 4 – Summative Usability Testing during Verification

Once your team freezes the design, you will need to conduct verification testing. This includes integrating usability testing into the verification testing process. Summative (validation) testing must be performed once your design is “frozen.” If you are developing an electrical medical device, then you will need to provide evidence of usability testing as part of your documentation for submission to an electrical safety testing lab for IEC 60601-1 testing. There is a collateral standard for usability (i.e., IEC 60601-1-6). For software as a medical device (SaMD), you will also be expected to conduct usability testing to demonstrate that the user interface does not create any user errors.

Diagram of the summative usability testing process

When you conduct summative (validation) testing, it is critical to make sure that you are using samples that are production equivalents rather than prototypes. It is also crucial to have your instructions for use (IFU) finalized. Any residual risks for use errors should be identified in the precautions section of your IFU, and the use of video is encouraged as a training aid to ensure that users recognize potential use errors and understand any potential harm. When the summative testing is performed, there should be no deviations and no use errors. Inadequate identification of usability factors during Phase 2, or inadequate formative testing during Phase 3, is usually the root cause of failed summative testing. If your team prepared sufficiently in Phases 2 and 3, the Phase 4 results should be unsurprising and successful.
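
Because summative (validation) testing is pass/fail on critical tasks, the acceptance check can be stated very simply. The sketch below assumes a list of per-task results and treats any use error or protocol deviation on a critical task as a failure, mirroring the “no deviations and no use errors” expectation described above; the field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class CriticalTaskResult:
        task: str
        use_errors: int   # observed use errors for this critical task
        deviations: int   # protocol deviations recorded for this critical task

    def summative_test_passes(results):
        """Summative testing should show no use errors and no deviations on critical tasks."""
        return all(r.use_errors == 0 and r.deviations == 0 for r in results)

    results = [
        CriticalTaskResult("Set the prescribed dose", use_errors=0, deviations=0),
        CriticalTaskResult("Confirm the dose before injection", use_errors=1, deviations=0),
    ]
    print(summative_test_passes(results))  # False: the use error must be investigated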

Additional Training Resources for Usability Engineering

The following additional training resources for usability engineering may be helpful to you:

Posted in: Design Control, Usability
