
Human factors process, can we make this easy to understand?

90% of usability testing submitted to the FDA is unacceptable, and the root cause is simply a failure to understand the human factors process.

If you submitted no usability testing with your 510(k), it would be obvious why the FDA reviewer identified usability as a major deficiency. But suppose your company spent tens of thousands of dollars on usability testing that delayed the 510(k) submission by six months. Despite all of the time and money invested in the human factors process, it appears that you need to start over and repeat the entire process. The CEO is furious, and he wants you to show him where in the 49-page FDA guidance it says that you have to do things differently.

Benefits from the human factors process

  1. You will prevent use errors that result in serious injuries and death
  2. Easy-to-use products sell
  3. You will prevent delays in regulatory approval

Why was your rationale for no usability testing rejected?

Unlike CE Marking technical files, the FDA does not require a usability engineering file for all products. Instead, the FDA determines whether usability testing is required based upon a comparison of your device's user interface with a competitor's user interface (i.e., the predicate device's user interface). If the user interface is identical, then usability testing may not be required; instead, your company should be able to write a rationale for not doing usability testing based upon equivalence with the predicate device. If there are differences in your user interface, you will need to provide a use-related risk analysis (URRA), identify critical tasks, implement risk controls, and provide verification testing to demonstrate the effectiveness of the risk controls. Even if your device is "easier to use" or "simpler," you still need to provide documentation to support this claim in your submission. The FDA also does not allow comparative claims in your marketing for 510(k)-cleared devices; comparative claims require the support of clinical data.
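
To make the decision flow above concrete, here is a minimal sketch in Python. The function name and inputs are our own hypothetical illustration of the logic, not an FDA-defined format.

```python
# A minimal sketch of the decision logic described above, assuming you
# have already documented the user-interface comparison with the
# predicate device. Names and return strings are illustrative only.

def usability_documentation_path(ui_differences: list) -> str:
    """Map the UI comparison with the predicate device to the
    documentation path described in the paragraph above."""
    if not ui_differences:
        # Identical user interface: a written rationale based upon
        # equivalence with the predicate device may be sufficient.
        return "Rationale for no usability testing (UI equivalence)"
    # Any difference triggers the full human factors process.
    return ("URRA + critical task identification + risk controls + "
            "verification (usability) testing")

print(usability_documentation_path([]))
print(usability_documentation_path(["new touchscreen workflow"]))
```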

What is the 10-step human factors process?

  1. Define human factors for your device or IVD
  2. Identify use errors
  3. Conduct a URRA
  4. Perform a critical task analysis
  5. Conduct a risk control option analysis
  6. Conduct formative usability testing
  7. Implement risk controls
  8. Conduct summative usability testing
  9. Prepare HFE/UE documentation
  10. Collect post-market surveillance data specific to use errors

There is a YouTube video describing these 10 steps at the bottom of this blog posting.

Why is formative testing needed?

  • Observational study to identify unforeseen use errors
  • Observational study to evaluate risk control options
  • Development of indications for use
  • Development of training materials

Why is the human factors process crazy expensive to outsource?

  • Human factors consultants need time to learn about your device
  • Consultants are more conservative because they cannot afford to fail
  • Justifying your choice of risk controls is difficult because you started too late
  • Your instructions for use (IFU) are inadequate
  • Consultants need to explain the human factors process to you
  • Recruiting subjects is marketing (which may not be their expertise)
  • You are paying for infrastructure (specialized testing facilities)
  • This is a team effort that requires many consulting hours collectively

Why was your Usability Engineering File refused?

  1. Your company provided an application failure modes and effects analysis (aFMEA) to support your justification that residual risks are acceptable. The FDA guidance suggests using risk analysis tools such as an FMEA or fault-tree analysis, but deficiency letters from FDA reviewers recommend a use-related risk analysis (URRA) format that is totally different.

    [Figure: Example of a URRA table provided by the FDA for the human factors process]

    The primary problem with using an FMEA or fault-tree risk analysis tool is that these tools involve estimating the severity of harm and the probability of occurrence of harm, while the FDA does not feel it is appropriate to estimate the probability of occurrence of harm. Instead, the FDA instructs companies to assume that use errors will occur and to implement risk controls to mitigate those risks (see the URRA example above). Although complete mitigation is unlikely, and use-related risks will only be reduced, this is the approach the FDA wants companies to use. In addition, the FDA expects your company to provide traceability of risk control implementation to each use-related risk you identified, and the FDA expects documentation of verification testing (i.e., usability testing) that shows your risk controls are effective. Finally, the FDA (and ISO 14971, Clause 10) expects you to collect and perform a trend analysis of use errors. Any use errors that are reported should be evaluated for the need to implement additional corrective actions to prevent future use errors. Blaming "user error" is not an acceptable approach. (A minimal sketch of this risk-to-control traceability appears after item 3 below.)

  2. You provided risk analysis and human factors testing in your 510(k) submission, but the FDA reviewer said you need to identify critical tasks and provide traceability to each critical task in your summative validation report. Critical tasks are specifically mentioned in section 3.2 of the FDA guidance on applying human factors and usability engineering, and a total of 49 times throughout the guidance. However, "critical tasks" are not mentioned even once in ISO 14971:2019 or ISO/TR 24971:2020, and the term is not found in IEC 62366-1:2015 either. IEC 62366-1 does mention "tasks," and "task" is formally defined (Definition 3.14: "one or more USER interactions with a MEDICAL DEVICE to achieve a desired result"). Therefore, companies that are familiar with the ISO standards and the CE Marking process frequently need training on the FDA requirements for the human factors process. After receiving training, your company will be prepared to modify your usability engineering file documentation to comply with the FDA requirements for human factors.
  3. You completed a summative validation protocol, but the FDA disagrees with your definition of user groups. – Each user has a different level of experience, training, and competency. Therefore, if you define the intended user population too broadly (e.g. healthcare practitioners), the FDA may not accept your summative usability testing. This is the reason that the human factors process begins with defining the human factors for your IVD or device. Radiologists, for example, have the following training pathway:
    • graduate from medical school;
    • complete an internship;
    • pass state licensing exam;
    • complete a residency in radiology;
    • become board certified; and
    • complete an optional fellowship.

Therefore, if you are developing imaging software, you need to make sure your user group includes radiologists who cover the entire range of competencies. In addition, most radiology images are taken by radiology technicians and then reviewed by the radiologist. Therefore, radiology technicians should be considered a completely different user group due to the differences in experience, training, and competency when compared to a radiologist. This simple example doubles the number of users needed because you have two user groups instead of one.
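
As promised in deficiency example 1 above, here is a minimal sketch of a URRA row and the traceability check the FDA expects: every use-related risk, assumed to occur, must trace to at least one risk control and to verification testing. The column names approximate the FDA example table; the data and field names are hypothetical.

```python
# A minimal URRA sketch. Note there is no probability-of-occurrence
# column: per the FDA's approach, assume the use error WILL occur.
from dataclasses import dataclass, field

@dataclass
class URRARow:
    task: str                       # user task being analyzed
    use_error: str                  # assumed to occur; no probability estimate
    potential_harm: str
    severity: str
    risk_controls: list = field(default_factory=list)
    verification: str = ""          # usability test showing control effectiveness

rows = [
    URRARow(
        task="Set infusion rate",
        use_error="User enters rate in the wrong units",
        potential_harm="Overdose",
        severity="Critical",
        risk_controls=["Units locked to mL/h", "Confirmation screen"],
        verification="Summative study, task 4",
    ),
]

# Traceability check: every identified use-related risk must have at
# least one risk control AND documented verification of effectiveness.
for row in rows:
    if not row.risk_controls or not row.verification:
        print(f"GAP: '{row.use_error}' lacks a risk control or verification")
```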

  4. You evaluated 15 users, but the FDA reviewer is asking you to evaluate a larger number of users based upon a special controls guidance document. The FDA guidance on human factors testing specifies a minimum of 15 users for each user group, not a minimum of 15 users in total. Therefore, for a device that is for both Rx-only and OTC use, you will have at least two user groups that need to be evaluated independently (see the participant-count sketch after this list). In addition, some devices have special controls guidance documents that specify usability testing requirements. For example, an OTC blood glucose meter must pass a 350-person lay-user study, and COVID-19 self-tests are expected to pass a 30-person lay-user study.
  5. Your usability study was conducted in Australia, but the FDA insists that your usability study must be repeated in the USA. Most people think of language as the primary difference between two countries, so the author of a study protocol may not perceive any difference between the USA and Australia, Ireland, Canada, or the UK. However, this inability to identify differences in cultural norms is itself a sign of how little we know about them. International travelers learn quickly about the differences in electrical outlets between the USA and other countries, but there are also more subtle differences between cultures, such as which direction you toggle a light switch to turn on a light: up or down? For devices that are used in a hospital environment, it is critical to understand how your device will interact with other devices and how different hospital protocols might affect human factors.
  6. The FDA reviewer indicated that your usability engineering file does not assess the ability of laypersons to self-select whether your OTC device is appropriate for them. Devices and IVDs may have contraindications or indications for use that are specific to an intended patient population or intended user population. In these cases, the user of the device or IVD needs to be able to "self-select" as included in or excluded from use. The ability to self-select should be assessed as part of any OTC usability study. The ability to identify suitable and unsuitable patients for treatment is also a common criterion for a usability study involving prescription devices where a physician is the subject of the study.
  7. The FDA reviewer indicated that you did not provide raw data collected by the study moderator. Data collected during a human factors study is usually subjective in nature, and the FDA may want to conduct its own review and analysis of your data. Therefore, you cannot provide only a testing report that summarizes the results of your study; you must also provide the raw data. It is permitted to provide the data in a tabular format that has been transcribed from paper case report forms or recorded electronically. You should also consider scanning any paper forms for permanent retention, or retaining the paper forms, in case there is any question about the accuracy of the transcription. Finally, it is best practice to record videos of the study participants performing each task and answering interview questions. This will help fill any gaps in the notes recorded by the moderator, and the recording provides additional objective evidence of the study results.
  8. The FDA reviewer indicated that your study is not valid because the training provided by moderators was not scripted and training decay was not considered in the design of the study. Summative usability testing requires that users complete all of the critical tasks identified in your critical task analysis without assistance. It is permitted to provide training to the user prior to conducting the study if the device or IVD is for prescription use and healthcare practitioners are responsible for providing instruction to the user. However, any training provided must be scripted in advance and approved as part of the summative usability testing protocol. This ensures that every subject in the study receives consistent training. Unfortunately, the FDA may still not be satisfied with the design of your study if you do not allow sufficient time to pass between when training is provided and when the subject uses the device or IVD for the first time. In general, one hour is the minimum amount of time that should pass between user training and first use. This is referred to as "training decay," and the duration of time between your scripted training and the user performing critical tasks for the first time should be specified in your summative usability protocol. One solution that addresses both issues is to provide a video of the instructions to each subject 24 hours in advance of participation in the study.
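
A back-of-the-envelope sketch of the participant math from item 4: the 15-per-group floor comes from the FDA guidance, and the special-controls override values are the examples cited above. The function and variable names are our own.

```python
# Participant count for summative usability testing: each distinct
# user group needs its own cohort of at least 15 participants, and a
# special controls guidance document can raise the total further.

FDA_MINIMUM_PER_GROUP = 15

def minimum_participants(user_groups: list, special_controls_minimum: int = 0) -> int:
    """Return the minimum total participants for a summative study."""
    baseline = FDA_MINIMUM_PER_GROUP * len(user_groups)
    return max(baseline, special_controls_minimum)

# Rx-only + OTC device: two user groups, evaluated independently.
print(minimum_participants(["healthcare practitioner", "lay user"]))      # 30

# OTC blood glucose meter: special controls call for a 350-person study.
print(minimum_participants(["lay user"], special_controls_minimum=350))   # 350
```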

Additional resources for the human factors process and usability testing

Posted in: 510(k), Design Control, Usability


Implementing Design Controls – 10 Steps

The article explains ten steps of implementing design controls, including design plans, design inputs, design review, verification protocols, and risk management.

FDA Guidance for Implementing Design Controls

The diagram above is called the “Application of Design Controls to Waterfall Design Process.” The FDA introduced this diagram in 1997 in the design controls guidance document. However, the original source of the diagram was Health Canada.

This diagram is one of the first slides I use for every design control course that I teach because it visually displays the design controls process. The design controls process, as defined by Health Canada and the US FDA, is equivalent to the design and development section of ISO 13485 and ISO 9001 (i.e., Clause 7.3). Seven sub-clauses comprise the requirements of these ISO standards:

  • 7.3.1 – Design Planning
  • 7.3.2 – Design Inputs
  • 7.3.3 – Design Outputs
  • 7.3.4 – Design Reviews
  • 7.3.5 – Design Verification
  • 7.3.6 – Design Validation
  • 7.3.7 – Design Changes

In addition to the seven sub-clauses found in these ISO standards, the FDA Quality System Regulation (QSR) also includes additional requirements in the following sub-sections of 21 CFR 820.30: (a) general, (h) design transfer, and (j) Design History File (DHF).

Implementing Design Controls: A Complex Process

Even though the requirement for Design Controls has been in place for 16 years, there are still far too many design teams that struggle with understanding these requirements. Medical device regulations are complex, but design controls are the most complex process in any quality system. The reason for this is that each of the seven sub-clauses represents a mini-process that is equivalent in complexity to CAPA root cause analysis. Many companies choose to create separate work instructions for each sub-clause.

Medical Device Academy's training philosophy is to distill processes down to discrete steps that can be absorbed and implemented quickly. We use independent forms to support each step and develop training courses with practical examples, instead of writing detailed procedures. The approach we teach removes complexity from your design control procedure. Instead, we rely upon the structure of step-by-step forms completed at each stage of the design process.

10 Ways for Implementing Design Controls

1. Design plans are just a plan. You can and should change that plan. This is stated in both Clause 7.3.1 of the ISO standards and 21 CFR 820.30(b) of the FDA QSR. You can make your plan as detailed as you need to, but I recommend starting simple and adding detail. Your first version of a design plan should include the following tasks (sketched as structured data after the list):

  • Identification of the regulatory pathway, based upon the device risk classification and applicable harmonized standards
  • Development of a risk management plan
  • Approval of your design plan (1st design review) 
  • Initial hazard identification
  • Documentation and approval of design inputs (2nd design review) 
  • Risk control option analysis
  • Reiterative development of the product design
  • Risk analysis 
  • Documentation and approval of design outputs (3rd design review) 
  • Design verification and validation and risk control verification 
  • Clinical evaluation and risk/benefit analysis
  • Development of post-market surveillance plan with a post-market risk management plan
  • Development of a draft Device Master Record (DMR)/Technical File (TF) index
  • Commercial release (4th and final design review)
  • Regulatory approval and closure of the Design History File (DHF)
  • Review lessons learned and initiate actions to improve the design process 
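
A minimal sketch of the first-version design plan above as structured data, with the four design review gates marked. The task wording follows the list; the data structure itself is our own illustration, not a required format.

```python
# A condensed design plan as structured data. The plan is just a plan:
# revise it as the project evolves, and keep each revision controlled.

design_plan = [
    {"task": "Identify regulatory pathway and applicable standards", "review_gate": None},
    {"task": "Develop risk management plan", "review_gate": None},
    {"task": "Approve design plan", "review_gate": "1st design review"},
    {"task": "Initial hazard identification", "review_gate": None},
    {"task": "Document and approve design inputs", "review_gate": "2nd design review"},
    {"task": "Risk control option analysis", "review_gate": None},
    {"task": "Iterative product design and risk analysis", "review_gate": None},
    {"task": "Document and approve design outputs", "review_gate": "3rd design review"},
    {"task": "Design verification, validation, risk control verification", "review_gate": None},
    {"task": "Commercial release", "review_gate": "4th and final design review"},
]

for step in design_plan:
    gate = f"  <-- {step['review_gate']}" if step["review_gate"] else ""
    print(step["task"] + gate)
```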

2. Design inputs need to be requirements that are verified through the use of a verification protocol. If you identify external standards for each design input, you will have an easier time completing the verification activities because verification tests will be easier to identify. Medical Device Academy has written more on this topic in a previous blog posting.
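
Here is a minimal sketch of that traceability, assuming hypothetical design inputs, standards, and protocol IDs. The point is that inputs without an identified external standard are the ones that make verification planning harder, so flag them early.

```python
# Traceability sketch: each design input maps to an external standard
# (where one exists) and to the verification protocol that will test it.
# All IDs and entries below are hypothetical examples.

design_inputs = {
    "DI-001 Dielectric strength": {
        "standard": "IEC 60601-1",
        "verification": "VP-014 Electrical safety protocol",
    },
    "DI-002 Biocompatibility of patient-contact materials": {
        "standard": "ISO 10993-1",
        "verification": "VP-021 Biocompatibility evaluation",
    },
    "DI-003 Audible alarm level": {
        "standard": None,   # no external standard identified yet
        "verification": None,
    },
}

# Inputs without an identified standard usually need a custom test
# method, which makes verification harder to plan.
for di, trace in design_inputs.items():
    if trace["standard"] is None:
        print(f"{di}: no external standard identified; define a test method")
```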

3. Design outputs are drawings and specifications. Ensure you keep them updated and control the changes. When you finally approve the design, this is the “design freeze.” 

4. Design reviews should have defined deliverables. I recommend designing a form for documenting the design review, which identifies the deliverables for each design review. The form should also define the minimum required attendees by function. Other design review attendees should be identified as optional—rather than required reviewers and approvers. If your design review process requires too many people, this will have a long-term impact upon review and approval of design changes. 
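
A minimal sketch of a design review form as structured data, showing defined deliverables and required attendees by function. All names are hypothetical; the quorum check illustrates why keeping the required-attendee list short speeds up later review and approval of design changes.

```python
# Design review record sketch: deliverables per gate, plus required
# vs. optional attendees by function. Entries are illustrative only.

design_review_form = {
    "review": "2nd design review (design inputs)",
    "deliverables": ["Approved design input requirements",
                     "Updated risk management file"],
    "required_attendees": ["R&D lead", "Quality", "Regulatory",
                           "Independent reviewer"],
    "optional_attendees": ["Marketing", "Manufacturing"],
}

def review_quorum_met(form: dict, present: set) -> bool:
    """Only the required functions block approval; optional attendees do not."""
    return set(form["required_attendees"]) <= present

print(review_quorum_met(
    design_review_form,
    {"R&D lead", "Quality", "Regulatory", "Independent reviewer", "Marketing"},
))  # True
```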

5. Design verification protocols should be standardized instead of being project-specific. Information such as lot numbers, calibrated equipment IDs, and test methods should be included as variables that are entered manually into blank spaces when the protocol is executed. The philosophy behind this approach is to create a protocol once and repeat it forever. This results in a verification process that is consistent and predictable, and it also eliminates the need for review and approval of the protocol for each new project.
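
A minimal sketch of the write-once, execute-forever idea: the protocol text is fixed and approved once, and only the blanks (lot number, equipment ID, result) are filled in at execution. All identifiers below are hypothetical.

```python
# Standardized (project-agnostic) verification protocol: the method is
# approved once; only the blanks change each time it is executed.

PROTOCOL_TEMPLATE = """\
Protocol VP-014 rev A: Electrical safety verification
Test method: IEC 60601-1 dielectric strength test
Device lot number: {lot_number}
Calibrated equipment ID: {equipment_id}
Result (pass/fail): {result}
"""

# Execution record for one project run; only the variables are entered.
executed = PROTOCOL_TEMPLATE.format(
    lot_number="LOT-2024-0032",
    equipment_id="CAL-HIPOT-07",
    result="pass",
)
print(executed)
```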

6. Design validation should be more than bench testing. Ensure that animal models, simulated anatomical models, finite element analysis, and human clinical studies are considered. 

7. Design transfer is not a single event in time. Transfer begins with the release of your first drawing or specification to purchasing and ends with the commercial release of the product. 

8. Do not keep the DHF open after commercial release. All changes after that point should be under production controls, and changes should be documented in the Device Master Record (DMR)/Technical File (TF).

9. Your DMR index should perform the dual function of also meeting technical documentation requirements for other countries, such as Canada and Europe.

10. Audit your design control process to identify opportunities for improvement and preventive actions. Audits should include a review of the design process metrics, and you may consider establishing quality objectives for the improvement of the design process. This last step, and the standardization of design verification protocols in step 5, are discussed in further detail in another blog by Medical Device Academy.

More Information Regarding Implementing Design Controls

If you are interested in design control training, Rob Packard will be teaching a Design Controls Training Webinar on November 2, 2018.

Posted in: Design Control


Auditing Design Controls – 7 Step Process

This blog reviews seven steps for effectively auditing design controls using the ISO 13485 standard and the process approach to auditing.

[Figure: Turtle diagram for auditing the design controls process]

Third-party auditors (i.e., Notified Body auditors) don't always practice what we preach. I know this may come as a huge shock to everyone, but sometimes we don't use the process approach. Auditing design controls is a good example of my own failure to follow what is true and pure. Instead, I use NB-MED 2.5.1/Rec 5 as a checklist, and I sample Technical Files to identify any weaknesses. The reason I do this is that I want to provide as much value to the auditing client as possible without falling behind in my audit schedule.

Often, I would sample a new Technical File for a new product family that had not been sampled by the Technical Reviewer yet. My reason for doing this is that I could often find elements that were missing from the Technical File before the Technical Reviewer saw the file. This gives the client an opportunity to fix the deficiency before submission and potentially shortens the approval process. Since NB-MED documents are guidance documents, I could not write the client up for a nonconformity unless they were missing a required element of the M5 version of the MDD (93/42/EEC as modified by 2007/47/EC). This skirts the edge of consulting for a third-party reviewer, but I found it was a 100% objective way to review Technical Files. I also found I could review an entire Technical File in about an hour.

What’s wrong with this approach to auditing design controls?

This approach only tells you if the elements of a Technical File are present, but it doesn’t evaluate the design process. Therefore, I supplemented my element approach with a process audit of the design change process by picking a few recent design changes that I felt were high-risk issues. During the process audit of the design change process, I sampled the review of risk management documentation, any associated process validation documentation, and the actual design change approval records. If I had time, I looked for the following types of changes: 1) vendor change, 2) specification change, and 3) process change. By doing this, I covered the following clauses in ISO 13485:2016: 7.4 (purchasing), 7.3.9 (design changes), 7.5.6 (process validation), 7.1 (risk management), and 4.2.5 (control of records).

So what is my bastardized process approach to auditing design controls missing? Clauses 7.3.1 through 7.3.10 of ISO 13485:2016 are missing. These clauses are the core of the design and development process. To address this, I would like to suggest the following process approach:

Step 1 – Define the Design Process

Identify the process owner and interview them. Do this in their office–not in the conference room. Get your answers for steps 2-7 directly from them. Ask lots of open-ended questions to prevent “yes/no” responses.

Step 2 – Process Inputs

Identify how design projects are initiated. Look for a record of a meeting where various design projects were vetted and approved for internal funding. These are inputs into the design process. There should be evidence of customer focus, and some examples of corrective actions taken based upon complaints or service trend analysis.

Step 3 – Process Outputs

Identify where Design History Files (DHF) are stored physically or electronically, and determine how the DHF is updated as the design projects progress.

Step 4 – What Resources

This is typically the step of a process audit where the auditor needs to identify "what resources" are used in the process. However, only companies that have software systems for design controls have resources dedicated to design and development. I have indicated this in the "Turtle Diagram" presented above.

Step 5 – With Whom, Auditing Training Records

Identify which people are assigned to the design team for a design project. Sometimes companies assign large teams. In this case, the auditor should focus on the team members that must review and approve design inputs (see Clause 7.3.2) and design outputs (see Clause 7.3.3). All of these team members should have training records for design control procedures and risk management procedures.
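
A minimal sketch of this Step 5 check, with hypothetical team members and procedure numbers: compare each approver's training records against the required design control and risk management training.

```python
# Step 5 sketch: every team member who reviews and approves design
# inputs/outputs should have training records for the design control
# and risk management procedures. All names below are hypothetical.

required_training = {"SOP-007 Design Controls", "SOP-012 Risk Management"}

training_records = {
    "A. Engineer": {"SOP-007 Design Controls", "SOP-012 Risk Management"},
    "B. Marketing": {"SOP-007 Design Controls"},
}

for member, records in training_records.items():
    missing = required_training - records
    if missing:
        print(f"Potential audit finding: {member} missing {sorted(missing)}")
```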

Step 6 – Auditing Design Controls Procedures and Forms

Identify the design control procedures and forms. Do not read and review these procedures; auditors never have the time to do this. Instead, ask the process owner to identify specific procedures or clauses within procedures where clauses in the ISO standard are addressed. If the process owner knows exactly where to find what you are looking for, their training was effective, or they may have written the procedure(s). If the process owner has trouble locating the clauses you are requesting, spend more time sampling training records.

Step 7 – Process Metrics

Ask the process owner to identify some metrics or quality objectives they are using to monitor and improve the design and development process. This is a struggle for many process owners–not just design. If any metrics are not performing up to expectations, there should be evidence of actions being taken to address this. If no metrics are being tracked by the process owner, you might review schedule compliance.

Many design projects are behind schedule, and therefore this is an important metric for most companies. Now that you have completed your "Turtle Diagram," if you have more time to audit the design process, you can interview team members to review their roles in the design process. You could also sample specific Technical Files, as I indicated above. If you are performing a thorough internal audit, I recommend doing both. To learn more about using the process approach to auditing, you can register for our webinar on the topic.
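
To close the loop on Step 7, here is a minimal sketch of a schedule-compliance metric: the share of design milestones completed on or before their planned dates. The milestone data is hypothetical.

```python
# Schedule-compliance sketch for Step 7: percentage of design
# milestones completed on or before the planned date.
from datetime import date

milestones = [  # (planned, actual) completion dates; hypothetical data
    (date(2024, 1, 15), date(2024, 1, 10)),
    (date(2024, 3, 1),  date(2024, 3, 20)),
    (date(2024, 5, 1),  date(2024, 4, 28)),
]

on_time = sum(actual <= planned for planned, actual in milestones)
print(f"Schedule compliance: {on_time / len(milestones):.0%}")  # 67%
```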

Posted in: Design Control, ISO Auditing
