
Predicate selection guidance proposes controversial additions

The FDA released a new draft 510k predicate selection guidance on September 7, 2023, but the draft proposes controversial additions.

Download the Draft FDA Predicate Selection Guidance

On September 7, 2023, the FDA released a draft predicate selection guidance document. Normally, the release of a new draft FDA guidance document is anticipated, and there is an obvious need for it. This new draft, however, includes some controversial additions that I feel should be removed from the guidance. This specific guidance was developed to help submitters use best practices in selecting a predicate, and it offers some useful advice regarding the need to review the FDA databases for evidence of use-related and design-related safety issues associated with a potential predicate. Unfortunately, the last section of the guidance makes some controversial recommendations that I strongly disagree with.

Please submit comments to the FDA regarding this draft guidance

This guidance is a draft. Your comments and feedback to the FDA will have an impact on FDA policy. We are preparing a redlined draft of the guidance with specific comments and recommended changes. We will make the comments and feedback available for download from our website on our predicate selection webinar page. We are also creating a download button for the original draft in Word (.docx) format, and sharing the FDA instructions for how to respond.

Section 1 – Introduction to the guidance

The FDA indicates that this new draft predicate selection guidance document was created to provide recommendations for implementing four (4) best practices when selecting a predicate device to support a 510k submission. This first objective is something our consulting firm recommended in a training webinar. The FDA also created the guidance in an attempt to improve the predictability, consistency, and transparency of the 510k pre-market review process. This second objective is not accomplished by the draft guidance and needs to be addressed before the guidance is released as a final guidance.

Section 2 – Background

This section of the guidance is divided into two parts: A) The 510k Process, and B) 510k Modernization.

A. The 510k Process

The FDA released a Substantial Equivalence guidance document that explains how to demonstrate substantial equivalence. The guidance document includes a new decision tree that summarizes each of the six questions that 510k reviewers are required to answer in the process of evaluating your 510k submission for substantial equivalence. The evidence of substantial equivalence must be summarized in the Predicates and Substantial Equivalence section of the FDA eSTAR template in your 510k submission, and the guidance document reviews the content that should be provided.

Substantial equivalence is evaluated against a predicate device or multiple predicates. To be considered substantially equivalent, the subject device of your 510k submission must have the same intended use AND the same technological characteristics as the predicate device. Therefore, you cannot use two different predicates if one predicate has the same intended use (but different technological characteristics) and the second predicate has the same technological characteristics (but a different intended use). That’s called a “split predicate,” and that term is defined in the guidance. This does not prohibit you from using a secondary predicate, but you must meet the requirements of this guidance document to receive 510k clearance. The guidance document reviews five examples of multiple predicates being used correctly to demonstrate substantial equivalence.

B. 510k Modernization

The second part of this section refers to the FDA’s Safety Action Plan issued in April 2018. The announcement of the Safety Action Plan is connected with the FDA’s announcement of actions to modernize the 510k process as well. The goals of the FDA Safety Action Plan consist of:

  1. Establish a robust medical device patient safety net in the United States
  2. Explore regulatory options to streamline and modernize timely implementation of postmarket mitigations
  3. Spur innovation towards safer medical devices
  4. Advance medical device cybersecurity
  5. Integrate the Center for Devices and Radiological Health’s (CDRH’s) premarket and postmarket offices and activities to advance the use of a TPLC approach to device safety 

Examples of modernization efforts include the following:

  • Conversion of the remaining Class III devices that were designated for the 510k clearance pathway to the PMA approval process instead
  • Use of objective performance standards when bringing new technology to the market
  • Use of more modern predicate devices (i.e., < 10 years old)

In this draft predicate selection guidance, the FDA states that feedback submitted to the docket in 2019 has persuaded the FDA to acknowledge that focusing only on modern predicate devices may not result in optimal safety and effectiveness. Therefore, the FDA is now proposing the approach of encouraging best practices in predicate selection. In addition, the draft guidance proposes increased transparency by identifying the technological characteristics of the predicate devices used to support a 510k submission.

The FDA did not mention an increased emphasis on risk analysis or risk management in the guidance, but the FDA is modernizing the quality system regulations (i.e., 21 CFR 820) to incorporate ISO 13485:2016 by reference. Since ISO 13485:2016 requires the application of a risk-based approach to all processes, the application of a risk-based approach will also impact the 510k process in multiple ways, such as design controls, supplier controls, process validation, post-market surveillance, and corrective actions. 

Section 3 – Scope of the predicate selection guidance

The draft predicate selection guidance indicates that it is to be used in conjunction with the FDA’s 510k program guidance. The guidance is also not intended to change applicable statutory or regulatory standards.

Section 4 – How to use the FDA’s predicate selection guidance

The FDA’s intended use of the predicate selection guidance is to provide submitters with a tool to help them during the predicate selection process. This guidance suggests a specific process for predicate selection. First, the submitter should identify all of the possible legally marketed devices that also have similar indications for use. Second, the submitter should exclude any devices with different technological characteristics if the differences raise new or different issues of risk. The remaining sub-group is referred to in the guidance as “valid predicate device(s).” The third, and final, step of the selection process is to use the four (4) best practices for predicate selection proposed in the guidance. The diagram below provides a visual depiction of the terminology introduced in this guidance.

Visual diagram of the terminology introduced in the predicate selection guidance
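The three-step narrowing process described above can be sketched as a simple filter pipeline. This is an illustrative sketch only; the device records and attribute names (`legally_marketed`, `similar_indications`, `raises_new_risk_questions`) are hypothetical and do not correspond to any FDA data format.

```python
# Sketch of the guidance's narrowing process: candidate devices are
# filtered down to the sub-group the draft guidance calls "valid
# predicate device(s)". All field names are hypothetical.

def valid_predicates(candidates):
    """Step 1: keep legally marketed devices with similar indications.
    Step 2: exclude devices whose technological differences raise
    new or different questions of safety and effectiveness."""
    step1 = [d for d in candidates
             if d["legally_marketed"] and d["similar_indications"]]
    return [d for d in step1 if not d["raises_new_risk_questions"]]

candidates = [
    {"k_number": "K200001", "legally_marketed": True,
     "similar_indications": True, "raises_new_risk_questions": False},
    {"k_number": "K200002", "legally_marketed": True,
     "similar_indications": True, "raises_new_risk_questions": True},
    {"k_number": "K200003", "legally_marketed": False,
     "similar_indications": True, "raises_new_risk_questions": False},
]

# Step 3 (applying the four best practices) would then rank this sub-group.
print([d["k_number"] for d in valid_predicates(candidates)])
```

The value of writing the process down this way is that it makes the guidance's terminology concrete: only devices surviving both filters are "valid predicates," and the four best practices are applied afterward.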

Section 5 – Best Practices (for predicate selection)

The FDA predicate selection guidance has four (4) best practices recommended for submitters to use when narrowing their list of valid predicate devices to a final potential predicate(s). Prior to using these best practices, you need to create a list of legally marketed devices that could be potential predicates. The following FDA Databases are the most common sources for generating a list of legally marketed devices:

  • Registration & Listing Database
    • Trade names of similar devices (i.e., proprietary name)
    • Manufacturer(s) of similar devices (i.e., owner operator name)
  • 510k Database
    • 510k number of similar devices
    • Applicant Name (i.e., owner operator name) of similar devices
    • Device Name (i.e., trade name) of similar device
  • Device Classification Database
    • Device classification name of similar devices
    • Product Code of similar devices
    • Regulation Number of similar devices
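As a supplement to the manual searches listed above, a list of cleared devices for a product code can also be pulled programmatically from the openFDA device/510(k) API. The sketch below is hedged: it only builds the query URL and filters a small local sample of records, rather than making a live request. The field names (`k_number`, `applicant`, `product_code`, `decision_date`) follow the openFDA 510(k) schema, but you should verify them against the current API documentation before relying on them.

```python
from datetime import date
from urllib.parse import quote

def build_510k_query(product_code, limit=100):
    """Build an openFDA device/510k query URL for one product code."""
    search = quote(f'product_code:"{product_code}"')
    return f"https://api.fda.gov/device/510k.json?search={search}&limit={limit}"

def cleared_within(records, years, today=None):
    """Filter 510k records to clearances within the last `years` years."""
    today = today or date.today()
    cutoff = date(today.year - years, today.month, today.day)
    return [r for r in records
            if date.fromisoformat(r["decision_date"]) >= cutoff]

# A small local sample in the openFDA record shape (values invented).
sample = [
    {"k_number": "K221234", "applicant": "Acme Devices",
     "product_code": "FLL", "decision_date": "2022-06-15"},
    {"k_number": "K080123", "applicant": "Oldco Medical",
     "product_code": "FLL", "decision_date": "2008-03-01"},
]

url = build_510k_query("FLL")
recent = cleared_within(sample, years=10, today=date(2023, 9, 7))
print(url)
print([r["k_number"] for r in recent])
```

In practice, you would issue an HTTP GET against the generated URL and page through the results; the date filter mirrors the FDA's past emphasis on predicates cleared within the last ten years.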

Our team usually uses the Basil Systems Regulatory Database to perform our searches. Basil Systems uses data downloaded directly from the FDA, but the software gives us four advantages over the FDA public databases:

  1. The search engine uses a natural-language algorithm rather than a Boolean search.
  2. The database is much faster than the FDA databases.
  3. The results include analytics regarding the review timelines and a “predicate tree.”
  4. Basil Systems also has a post-market surveillance database that includes all of the FDA adverse events and recall data, but it also includes access to data from Health Canada and the Australian TGA.

A. Predicate devices cleared using well-established methods

Some 510k submissions use the same test methods as the predicate device cited in their substantial equivalence comparison, while other submissions use well-established methods. The reason for this may be that the predicate’s 510k submission preceded the release of an FDA product-specific, special controls guidance document. In other cases, the FDA may not have recognized an international standard for the device classification at the time. You can search for recognized international standards associated with a specific device classification by using the FDA’s recognized consensus standards database. An example is provided below.

How to search for FLL recognized standards

FLL recognized standards

New 510k submissions should always use the methods identified in FDA guidance documents and refer to recognized international standards instead of copying the methods used to support older 510k submissions that predate the current FDA guidance or recognized standards. The problem with the FDA’s proposed approach is that the FDA is implying that a device that was not tested to the current FDA guidance or recognized standards is inherently not as safe or effective as another device that was tested to the current FDA guidance or recognized standards. This inference may not be true. Therefore, even though this may be a consideration, it is not appropriate to require manufacturers to include this as a predicate selection criterion. The FDA is already taking this into account by requiring companies to comply with the current FDA guidance and recognized standards for device description, labeling, non-clinical performance testing, and other performance testing. An example of how the FDA PreSTAR automatically notifies you of the appropriate FDA special controls guidance for a product classification is provided below.

Screen capture of the PreSTAR classification section

B. Predicate devices meet or exceed expected safety and performance

This best practice identified in the FDA predicate selection guidance recommends that you search three different FDA databases to identify any reported injuries, deaths, or malfunctions associated with the predicate device. Those three databases are:

  1. MAUDE Database
  2. MDR Database
  3. MedSun Database

All of these databases are helpful, but each has limitations. In general, adverse events are underreported, and a more thorough post-market surveillance review is needed to accurately assess the safety and performance of any device. The MAUDE data represents reports of adverse events involving medical devices and is updated weekly. The data consists of all voluntary reports since June 1993, user facility reports since 1991, distributor reports since 1993, and manufacturer reports since August 1996. The MDR data is no longer updated, but the MDR database allows you to search CDRH information on medical devices that may have malfunctioned or caused a death or serious injury during the years 1992 through 1996. The Medical Product Safety Network (MedSun) is an adverse event reporting program launched in 2002 by CDRH. The primary goal of MedSun is to work collaboratively with the clinical community to identify, understand, and solve problems with the use of medical devices. The FDA predicate selection guidance, however, does not mention the Total Product Life Cycle (TPLC) database, which is a more efficient way to search all of the FDA databases, including the recall database and the 510k database.

The biggest problem with this best practice as a basis for selecting a predicate is that the number of adverse events depends upon the number of devices used each year. For a small manufacturer, the number of adverse events will be very small because there are very few devices in use. For a larger manufacturer, the number of adverse events will be larger, even though it may represent less than 0.1% of sales. Finally, not all companies report adverse events when they are required to, while some companies may over-report adverse events. None of these possibilities is taken into consideration in the FDA’s draft predicate selection guidance.
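The scale problem described above can be made concrete with a normalized rate. The figures below are invented for illustration: the large manufacturer reports far more events in absolute terms, but has a much lower rate per device in use, which is why raw MAUDE-style counts alone can mislead a predicate comparison.

```python
def events_per_10k(event_count, devices_in_use):
    """Normalize an adverse event count by the install base."""
    return 10_000 * event_count / devices_in_use

# Hypothetical figures for two manufacturers of similar devices.
small = {"events": 4, "devices_in_use": 2_000}
large = {"events": 150, "devices_in_use": 1_500_000}

small_rate = events_per_10k(small["events"], small["devices_in_use"])
large_rate = events_per_10k(large["events"], large["devices_in_use"])
print(small_rate, large_rate)  # 20.0 vs 1.0 events per 10,000 devices
```

Even this normalization is a simplification, because install-base data is rarely public and reporting compliance varies by company, which reinforces the argument that adverse event counts are a weak criterion for predicate selection.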

C. Predicate devices without unmitigated use-related or design-related safety issues

For the third best practice, the FDA predicate selection guidance recommends that submitters search the Medical Device Safety database and the CBER Safety & Availability (Biologics) database to identify any “emerging signals” that may indicate a new causal association between a device and an adverse event(s). As with all of the FDA database searches, this information is useful as an input to the design process, because it helps to identify known hazards associated with similar devices. However, a more thorough post-market surveillance review is needed to accurately assess the safety and performance of any device, including searching databases from other countries where similar devices are marketed.

D. Predicate devices without an associated design-related recall

For the fourth best practice, the FDA predicate selection guidance recommends that submitters search the FDA recalls database. As stated above, the TPLC database includes this information for each product classification. Of the four best practices recommended by the FDA, this is the most significant, because any predicate device that was subject to a design-related recall is unlikely to be accepted by the FDA as a suitable predicate device. Therefore, this search should be conducted during the design planning phase or while design inputs are being identified. If you are unable to identify another predicate device that was not the subject of a design-related recall, then you should request a pre-submission meeting with the FDA and provide a justification for the use of the predicate device that was recalled. Your justification will need to include an explanation of the risk controls that were implemented to prevent a similar malfunction or use error with your device. Often recalls result from quality problems associated with a supplier that did not make a product to specifications or some other non-conformity associated with the assembly, test, packaging, or labeling of a device. None of these problems should automatically exclude the use of a predicate because they are not specific to the design.

Section 6 – Improving Transparency

This section of the FDA predicate selection guidance contains the most controversial recommendations. The FDA is proposing that the 510k summary in 510k submissions should include a narrative explaining the submitter’s selection of the predicate device(s) used to support the 510k clearance. This would be a new requirement for the completion of a 510k summary because that information is not currently included in 510k summaries. The new FDA eSTAR can automatically generate a 510k summary as part of the submission (see example below), but the 510k summary generated by the eSTAR does not include a section for a narrative explaining the reasons for predicate selection.

Sections of the 510k summary that are automatically populated in the eSTAR

The FDA added this section to the draft guidance with the goals of improving the predictability, consistency, and transparency of the 510k pre-market review process. However, the proposed addition of a narrative explaining the reasons for predicate selection is not the best way to achieve those goals. Transparency is best achieved by eliminating the option of a 510k statement (i.e., 21 CFR 807.93). Currently, the 510k process allows for submitters to provide a 510k statement or a 510k summary. The 510k statement prevents the public from gaining access to any of the information that would be provided in a 510k summary. Therefore, if the narrative explaining the reasons for predicate selection is going to be required in a 510k submission, that new requirement should be added to the substantial equivalence section of the eSTAR instead of only including it in the 510k summary. If the 510k statement is eliminated as an option for submitters, then all submitters will be required to provide a 510k summary and the explanation for the predicate selection can be copied from a text box in the substantial equivalence section.

The FDA eSTAR ensures consistency of the 510k submission contents and format, and tracking of FDA performance has improved the consistency of the FDA 510k review process. Adding an explanation for predicate selection will not impact either of these goals for improving the 510k process. In addition, companies do not select predicates only for the reasons indicated in this FDA predicate selection guidance. One of the most common reasons for selecting a predicate is the cost of purchasing samples of predicate devices for side-by-side performance testing. This only relates to cost, not safety or performance, and forcing companies to purchase more expensive devices for testing would not align with the least burdensome approach. Another flaw in this proposed additional information to be included in the 510k summary is that there is a huge variation in the number of predicates that can be selected for different product classifications. For example, 319 devices were cleared in the past 10 years for the FLL product classification (i.e., clinical electronic thermometer), while 35 devices were cleared in the past 10 years for the LCX product classification (i.e., pregnancy test). Therefore, the approach to selecting a predicate for these two product classifications would be significantly different due to the number of valid predicates to choose from. This makes it very difficult to create a predictable or consistent process for predicate selection across all product classifications. There may also be confidential, strategic reasons for predicate selection that would not be appropriate for a 510k summary.

Section 7 – Examples

The FDA predicate selection guidance provides three examples. In each example, the FDA suggests that the submitter should provide a table that lists the valid predicate devices and compares those devices using the four best practices as criteria for the final selection. The FDA positions this as providing more transparency to the public, but the information, presented in the way the FDA proposes, would not be useful to the public. It creates more documentation for companies to submit to the FDA without making devices safer or more effective. This approach would change the required content of a 510k summary and introduce post-market data as criteria for 510k clearance. That is a significant deviation from current FDA policy.

Example 1 from predicate selection guidance

In this example, the submitter included a table in their 510k submission, along with their rationale for selecting one of the four potential predicates as the predicate device used to support their 510k submission. This example is the most concerning because the summary doesn’t have any details regarding the volume of sales for the potential predicates being evaluated. The number of adverse events and recalls is usually correlated with the volume of sales. The proposed table doesn’t account for this information.

Example 2 from predicate selection guidance

In this example, the submitter was only able to identify one potential, valid predicate device. The submitter provided a table showing that the predicate did not present concerns for three of the four best practices, but the predicate was the subject of a design-related recall. The submitter also explained the measures taken to reduce the risk of those safety concerns in the subject device. As stated above, using the occurrence of a recall as the basis for excluding a predicate is not necessarily appropriate. Most recalls are initiated due to reasons other than the design. Therefore, you need to make sure that the reason for the recall is design-related rather than a quality system compliance issue or a vendor quality issue.

Example 3 from predicate selection guidance

In this example, the submitter identified two potential, valid predicate devices. No safety concerns were identified using any of the four best practices, but the two potential devices have different market histories. One device has 15 years of history, and the second device has three years of history. The submitter chose the device with 15 years of history because the subject device had a longer regulatory history. The problem with this approach is that years since clearance is not an indication of regulatory history. A device can be cleared in 2008, but it might not be launched commercially until several years later. In addition, the number of devices used may be quite small for a small company. In contrast, if the product with three years since the 510k clearance is distributed by a major medical device company, there may be thousands of devices in use every year. 

Medical Device Academy’s recommendations for predicate selection

The following information consists of recommendations our consulting firm provides to clients regarding predicate selection.

Try to use only one predicate (i.e., a primary predicate)

Once you have narrowed down a list of predicates, we generally recommend only using one of the options as a primary predicate and avoiding the use of a second predicate unless absolutely necessary. If you are unsure of whether a second predicate or reference device is needed, this is an excellent question to ask the FDA during a pre-submission teleconference under the topic of “regulatory strategy” (see image below). In your PreSTAR you can ask the following question, “[Your company name] is proposing to use [primary predicate] as a primary predicate. A) Does the FDA have any concerns with the predicate selection? B) Does the FDA feel that a secondary predicate or reference device is needed?”

PreSTAR topic selection

When and how to use multiple predicates

Recently a client questioned me about the use of a secondary predicate in a 510k submission that I was preparing. They were under the impression that only one predicate was allowed for a 510k submission because the FDA considers the two predicate devices to be a “split predicate.” The video provided above explains the definition of a “split predicate,” and the definition refers to more than the use of two predicates. Many of the 510k submissions we prepared and obtained clearance for used secondary predicates. An even more common strategy is to use a second device as a reference device. The second device may only have technological characteristics in common with the subject device, but the methods of safety and performance testing used can be adopted as objective performance standards for your 510k submission.

When you are trying to use multiple predicate devices to demonstrate substantial equivalence to your subject device in a 510k submission, you have three options for the correct use of multiple predicate devices:

  1. Two predicates with different technological characteristics, but the same intended use.
  2. A device with more than one intended use.
  3. A device with more than one indication under the same intended use.

If you use “option 1”, then your subject device must have the technological characteristics of both predicate devices. For example, your device has Bluetooth capability, and it uses infrared technology to measure temperature, while one of the two predicates has Bluetooth but uses a thermistor, and the other predicate uses infrared measurement but does not have Bluetooth.

If you use “option 2”, you are combining the features of two different devices into one device. For example, one predicate device is used to measure temperature, and the other predicate device is used to measure blood pressure. Your device, however, can perform both functions. You might have chosen another multi-parameter monitor on the market as your predicate, however, you may not be able to do that if none of the multi-parameter monitors have the same combination of intended uses and technological characteristics. This scenario is quite common when a new technology is introduced for monitoring, and none of the multi-parameter monitors are using the new technology yet.

If you use “option 3”, you need to be careful that the ability of your subject device to be used for a second indication does not compromise the performance of the device for the first indication. For example, bone fixation plates are designed for the fixation of bone fractures. If the first indication is for long bones, and the second indication is for small bones in the wrist, the size and strength of the bone fixation plate may not be adequate for long bones, or the device may be too large for the wrist.


Risk Management Training Webinar for ISO 14971:2019

Medical Device Academy’s ISO 14971:2019 risk management training webinar is being expanded from a single webinar to a two-part webinar.


What’s new in this risk management training webinar?

Our previous version of the ISO 14971 risk management training webinar was recorded on October 19, 2018. Although that webinar is 100% compliant with the 2019 version of the ISO 14971 standard, everyone considering the purchase of that webinar asks me to confirm this. The webinar remained compliant because we used the draft version of the 2019 standard when it was recorded. Over time, we have observed clients struggling with the implementation of the risk management process. It is a complex process, and covering the topic in a single webinar is not really feasible. Therefore, we now provide additional training webinars on hazard identification and benefit-risk analysis.

When is the risk management training webinar?

This webinar was recorded, and you will receive access to the content as a Dropbox link. You will also receive the native slide deck. Any person who completes our training quiz will receive a training certificate, a corrected quiz, and the answer key to our quiz.

Register for the Risk Management Training Webinar for $129.00 (USD)

Risk Management Training for ISO 14971:2019
Two-part Risk Management Training Webinar for ISO 14971:2019 - Part 1 of this webinar will be presented live on Tuesday, March 29 @ 9-10:30 am EDT. Part 2 of this webinar series will be presented live on Tuesday, April 5 at 9-10:30 am EDT. Purchase of this webinar series will grant the customer access to both live webinars. They will also receive the native slide decks and recording for the two webinars.
Price: $129.00

20-Question Quiz for Risk Management Training Webinar on ISO 14971:2019
20-Question Quiz for Risk Management Training Webinar on ISO 14971:2019 - We updated our quiz for training effectiveness from a 10-question quiz to a 20-question quiz that is more comprehensive. Any person who purchases the 20-question quiz will receive a training certificate template and the answer key to our quiz.
Price: $49.00

A risk management procedure compliant with ISO 14971 is the best practice for meeting the requirement in ISO 13485:2016, Clause 7.1. If you are unfamiliar with the ISO 13485 standard, please visit our page on “What is ISO 13485?”


VIEW OUR RISK MANAGEMENT PROCEDURE

CLICK HERE:

About Your Instructor


Rob Packard is a regulatory consultant with ~25 years of experience in the medical device, pharmaceutical, and biotechnology industries. He is a graduate of UConn in Chemical Engineering. Rob was a senior manager at several medical device companies, including serving as President/CEO of a laparoscopic imaging company. His Quality Management System expertise covers all aspects of developing, training, implementing, and maintaining ISO 13485 and ISO 14971 certifications. From 2009 to 2012, he was a lead auditor and instructor for one of the largest Notified Bodies. Rob’s specialty is regulatory submissions for high-risk medical devices, such as implants and drug/device combination products for CE marking applications, Canadian medical device applications, and 510k submissions. His favorite part of the job is training others. He can be reached via phone at +1.802.281.4381 or by email. You can also follow him on YouTube, LinkedIn, or Twitter.


Design Controls Implementation

Design controls can be overwhelming, but you can learn the process using this step-by-step guide to implementing design controls.
Design and development process diagram

You can implement design controls at any point during the development process, but the earlier you implement them, the more useful design controls will be. The first step of implementing design controls is to create a design controls procedure. You will also need at least two of the following additional quality system procedures:

  1. Risk Management Procedure (SYS-010)
  2. Software Development and Validation (SYS-044)
  3. Usability Procedure (SYS-048)
  4. Cybersecurity Work Instruction (WI-007)

A risk management file (in accordance with ISO 14971:2019) is required for all medical devices, and usability engineering or human factors engineering (in accordance with IEC 62366-1) is required for all medical devices. The software and cybersecurity procedures listed above are only required for products with 1) software and/or firmware, and 2) wireless functionality or an access point for removable media (e.g., USB flash drive or SD card).

Step 2: Design controls training

Even though the requirement for design controls has been in place for more than 25 years, there are still far too many design teams that struggle with understanding these requirements. Medical device regulations are complex, but design controls are the most complex process in any quality system. The reason for this is that each of the seven sub-clauses represents a mini-process that is equivalent in complexity to CAPA root cause analysis. Many companies choose to create separate work instructions for each sub-clause.

Medical Device Academy’s training philosophy is to distill processes down to discrete steps that can be absorbed and implemented quickly. We use independent forms to support each step and develop training courses with practical examples, instead of writing a detailed procedure(s). The approach we teach removes complexity from your design control procedure (SYS-008). Instead, we rely upon the structure of step-by-step forms completed at each stage of the design process.

If you are interested in design control training, Rob Packard will be hosting the 3rd edition of our Design Controls Training Webinar on Friday, August 11, 2023, @ 9:30 am EDT.

Step 3: Gathering post-market surveillance data

Post-market surveillance is not currently required by the FDA in 21 CFR 820, but it is required by ISO 13485:2016 in Clause 7.3.3c) (i.e., “[Design and development inputs] shall include…applicable output(s) of risk management”). The FDA is expected to release the plans for the transition to ISO 13485 in FY 2024, but most companies mistakenly assume that the FDA does not require consideration of post-market surveillance when they are designing new devices. There are three ways the FDA expects post-market surveillance to be considered when you are developing a new device:

  1. Complaints and adverse events associated with previous versions of the device and competitor devices should be identified as input to the risk management process for hazard identification.
  2. If the device incorporates software, existing vulnerabilities of the off-the-shelf software (including operating systems) should be identified as part of the cybersecurity risk assessment process.
  3. During the human factors process, you should search for known use errors associated with previous versions of the device and competitor devices; known use-related risks should also include any potential use errors identified during formative testing.

Even though the FDA does not currently require compliance with ISO 13485, the FDA does recognize ISO 14971:2019, and post-market surveillance is identified as an input to the risk management process in Clause 4.2 (see note 2), Clause 10.4, and Annex A.2.10. 
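As an illustration of the first input above, adverse event data for competitor devices can be pulled from the FDA MAUDE database via the openFDA device adverse event endpoint. The sketch below only builds a query URL; the product code "FRN" and the date range are hypothetical examples, and no request is actually sent.

```python
# Hypothetical sketch: construct an openFDA query URL for device adverse
# events (MAUDE data) to use as post-market surveillance input for
# hazard identification. Product code and dates are illustrative.
BASE = "https://api.fda.gov/device/event.json"

def maude_query_url(product_code: str, start: str, end: str, limit: int = 100) -> str:
    """Build an openFDA search for adverse events by product code and date range."""
    search = (
        f"device.device_report_product_code:{product_code}"
        f"+AND+date_received:[{start}+TO+{end}]"
    )
    return f"{BASE}?search={search}&limit={limit}"

url = maude_query_url("FRN", "20200101", "20231231")
print(url)
```

The resulting JSON records could then be screened for hazards that belong in your preliminary hazard identification.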

Step 4: Creating a design plan 

You are required to update your design plan as the development project progresses. Most design and development projects take a year before the company is ready to submit a 510k submission to the FDA. Therefore, don’t worry about making your first version of the plan perfect. You have a year to make lots of improvements to your design plan. At a minimum, you should be updating your design plan during each design review. One thing that is important to capture in your first version, however, is the correct regulatory pathway for your intended markets. If you aren’t sure which markets you plan to launch in, you can select one market and add more later, or you can select a few and delete one or more later. Your design plan should identify the resources needed for the development project, and you should estimate when you expect to conduct each of your design reviews.

Contents of your design plan

The requirement for design plans is stated in both Clause 7.3.1 of ISO 13485:2016 and 21 CFR 820.30(b) of the FDA QSR. You can make your plan as detailed as you need to, but I recommend starting simple and adding detail later. Your first version of a design plan should include the following tasks:

  • Identification of the regulatory pathway based on the device risk classification and applicable harmonized standards.
  • Development of a risk management plan
  • Approval of your design plan (1st design review) 
  • Initial hazard identification
  • Documentation and approval of user needs and design inputs (2nd design review) 
  • Risk control option analysis
  • Reiterative development of the product design
  • Risk analysis 
  • Documentation and approval of design outputs and implementation of risk control measures (3rd design review) 
  • Design verification and verification of the effectiveness of risk control measures (4th design review)
  • Design validation and verification of the effectiveness of risk control measures that could not be verified with verification testing alone
  • Clinical evaluation and benefit/risk analysis (5th design review)
  • Development of a post-market surveillance plan with a post-market risk management plan
  • Development of a draft Device Master Record/Technical File (DMR/TF) Index
  • Regulatory approval (e.g., 510k clearance) and closure of the Design History File (DHF)
  • Commercial release (6th and final design review)
  • Review lessons learned and initiate actions to improve the design process
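The task list above pairs each major deliverable with a design review milestone. As a purely illustrative sketch (not a regulatory requirement), those milestones could be tracked in a simple structure that flags which tasks are still open before a given review; the task names and fields are assumptions for the example:

```python
from dataclasses import dataclass

# Illustrative sketch: track design plan tasks against the design
# review at which each task is approved. Names are hypothetical.
@dataclass
class DesignPlanTask:
    name: str
    design_review: int      # review milestone for the task
    complete: bool = False

plan = [
    DesignPlanTask("Approve design plan", 1),
    DesignPlanTask("Approve user needs and design inputs", 2),
    DesignPlanTask("Approve design outputs and risk controls", 3),
    DesignPlanTask("Design verification", 4),
    DesignPlanTask("Clinical evaluation and benefit/risk analysis", 5),
    DesignPlanTask("Commercial release", 6),
]

def open_tasks_before_review(plan, review):
    """List incomplete tasks gating a given design review."""
    return [t.name for t in plan if t.design_review <= review and not t.complete]

plan[0].complete = True  # design plan approved at the 1st design review
print(open_tasks_before_review(plan, 2))
```

A spreadsheet works just as well; the point is that each design review has defined deliverables that can be checked off as the plan is updated.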

Step 5: Create a detailed testing plan

Your testing plan must indicate which recognized standards you plan to conform with, and any requirements that are not applicable should be identified and documented with a justification for the non-applicability. The initial version of your testing plan will be an early version of your user needs and design inputs. However, you should expect the design inputs to change several times. You may need to change design inputs after you receive feedback from regulators, and you may also need to make changes when you fail testing (i.e., preliminary testing, verification testing, or validation testing). If your company is following “The Lean Startup” methodology, your initial version of the design inputs will be for a minimum viable product (i.e., MVP). As you progress through your iterative development process, you will add and delete design inputs based on customer feedback and preliminary testing. Your goal should be to fail early and fail fast because you don’t want to get to your verification testing and fail. That’s why we conduct a “design freeze” prior to starting the design verification testing and design transfer activities.
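One simple way to maintain a testing plan is as a traceability matrix linking each design input to a recognized standard and a verification protocol. The sketch below is a minimal, hypothetical example (the inputs, standards, and protocol numbers are assumptions) that flags design inputs without an assigned protocol:

```python
# Illustrative sketch: a minimal design input traceability check.
# The inputs, standards, and protocol IDs are hypothetical examples.
testing_plan = {
    "Biocompatibility":   {"standard": "ISO 10993-1", "protocol": "VER-001"},
    "Electrical safety":  {"standard": "IEC 60601-1", "protocol": "VER-002"},
    "Usability":          {"standard": "IEC 62366-1", "protocol": None},  # gap
}

# Design inputs that still need a verification protocol assigned
missing = [name for name, row in testing_plan.items() if not row["protocol"]]
print(missing)
```

Running this kind of gap check at each design review helps catch design inputs that were added during iteration but never traced to a test.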


Step 6: Request a pre-submission meeting with the FDA

Design inputs need to be requirements that can be verified through the use of a verification protocol. If you identify external standards for each design input, you will have an easier time completing the verification activities because verification tests will be easier to identify. Some standards do not include testing requirements, and there are requirements that do not correspond to any external standard. For example, IEC 62366-1 is an international standard for usability engineering, but the standard does not include specific testing requirements. Therefore, manufacturers have to develop their own test protocols for validation of the usability engineering controls implemented. If your company is developing a novel sterilization process (e.g., UV sterilization), you will also need to develop your own verification testing protocols. In these cases, you should submit the draft protocols to the FDA (along with the associated risk analysis documentation) to obtain feedback and agreement with your testing plan. The method for obtaining written feedback and agreement with a proposed testing plan is to submit a pre-submission meeting request to the FDA (i.e., a PreSTAR).

Step 7: Iterative development is how design controls really work

Design controls became a legal requirement in the USA in 1996 when the FDA updated the quality system regulations. At that time, the “V-diagram” was quite new and limited to software development. Therefore, the FDA requested permission from Health Canada to reprint the “Waterfall Diagram” in the design control guidance that the FDA released. Both diagrams are models. They do not represent best practices, and they do not claim to represent how the design process is done in most companies. The primary information communicated by the “Waterfall Diagram” is that user needs are validated while design inputs are verified. The diagram is not intended to communicate that the design process is linear or must proceed from user needs, to design inputs, and then to design outputs. The “V-Diagram” is meant to communicate that there are multiple levels of verification and validation testing, and that the development process is iterative as software bugs are identified. Both models help teach design and development concepts, but neither is meant to imply legal requirements. One of the best lessons to teach design and development teams is that there is a need to develop simple tests to screen design concepts so that design concepts can fail early and fail fast, before the design is frozen. This process is called “risk control option analysis,” and it is required in Clause 7.1 of ISO 14971:2019.

Step 8: “Design Freeze”

Design outputs are drawings and specifications. Ensure you keep them updated and control the changes. When you finally approve the design, this is approval of your design outputs (i.e., selection of risk control options). The final selection of design outputs or risk control measures is often conducted as a formal design review meeting because the cost of design verification is significant. There is no regulatory or legal requirement for a “design freeze.” In fact, there are many examples where changes are anticipated, but the team decides to proceed with the verification testing anyway. The best practice developed by the medical device industry is to conduct a “design freeze”: the design outputs are “frozen,” and no further changes are permitted. The act of freezing the design is simply intended to reduce the business risk of paying for verification testing twice because the design outputs were changed during the testing process. If a device fails testing, it will be necessary to change the design and repeat the testing, but if every person on the design team agrees that the need for changes is remote and the company should begin testing, it is less likely that changes will be made after the testing begins.

Step 9: Begin the design transfer process

Design transfer is not a single event in time. Transfer begins with the release of your first drawing or specification to purchasing and ends with the commercial release of the product. The most common example of a design transfer activity is the approval of prototype drawings as final released drawings. This is common for molded parts. Several iterations of a plastic part might be evaluated using 3D-printed parts and machined parts, but in order to consistently make the component for the target cost, an injection mold is typically needed. The cost of the mold may be $40-100K, but it is difficult to change the design once the mold is built. The lead time for injection molds is often 10-14 weeks. Therefore, a design team may begin the design transfer process for molded parts prior to conducting a design freeze. Another component that may be released earlier as a final design is a printed circuit board (PCB). Electronic components such as resistors, capacitors, and integrated circuits (ICs) may be available off-the-shelf, but the raw PCB has a longer lead time and is customized for your device.

Step 10: Verification of Design Controls

Design verification testing requires pre-approved protocols and pre-defined acceptance criteria. Whenever possible, design verification protocols should be standardized instead of being project-specific. Traceability information, such as the calibrated equipment identification and the test methods used, should be treated as variables that are entered manually into blank spaces when the protocol is executed. The philosophy behind this approach is to create a protocol once and repeat it forever. This results in a verification process that is consistent and predictable, and it eliminates the need to review and approve the protocol for each new project. Standardized protocols do not need to specify a vendor or dates for the testing, but you might consider documenting the vendor(s) and duration of the testing in your design inputs to help with project management and planning. You might also want to use a standardized template for the format and content of your protocol and report. The FDA provides a guidance document specifically for the report format and content for non-clinical performance testing.
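The "write once, fill in at execution" idea above can be sketched with a simple text template: the protocol body is fixed and approved once, while run-specific details are substituted as variables each time the protocol is executed. The field names and values below are illustrative assumptions, not a prescribed format:

```python
from string import Template

# Sketch of a standardized verification protocol record. The protocol
# text is written and approved once; run-specific details (equipment
# ID, calibration due date, test method) are entered at execution time.
protocol = Template(
    "Protocol VER-$number rev $rev\n"
    "Equipment ID: $equipment_id (calibration due: $cal_due)\n"
    "Test method: $method\n"
    "Acceptance criterion: $criterion"
)

executed = protocol.substitute(
    number="014", rev="A",
    equipment_id="EQ-0072", cal_due="2024-03-01",
    method="ASTM F88 seal peel test",
    criterion="Seal strength >= 1.5 N/15 mm",
)
print(executed)
```

In practice, the same pattern is usually implemented as blank fields on a controlled form rather than in code, but the principle is identical: the approved content never changes, only the execution-specific entries do.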

Step 11: Validation of Design Controls

Design validation is required to demonstrate that the device meets the user’s and patient’s needs. User needs are typically the indications for use–including safety and performance requirements. Design validation should be more than bench testing. Ensure that animal models, simulated anatomical models, finite element analysis, and human clinical studies are considered. One purpose of design validation is to demonstrate performance for the indications for use, but validating that risk controls implemented are effective at preventing use-related risks is also important. Therefore, human factors summative validation testing is one type of design validation. Human factors testing will typically involve simulated use with the final version of the device and intended users. Validation testing usually requires side-by-side non-clinical performance testing with a predicate device for a 510k submission, while CE Marking submissions typically require human clinical data to demonstrate safety and performance.

Step 12: FDA 510k Submission

An FDA pre-market notification, or 510k submission, is the most common type of regulatory approval required for medical devices in the USA. A 510k can usually be submitted earlier than submissions in other countries because the FDA does not require quality system certification or summary technical documents, and the performance testing data is usually non-clinical benchtop testing. FDA 510k submissions also do not require submission of process validation for manufacturing. Therefore, most verification and validation testing is conducted on “production equivalents” that were made in small volumes before the commercial manufacturing process is validated. The quality system and manufacturing process validation may be completed during the FDA 510k review.

Step 13: The Final Design Review 

Design reviews should have defined deliverables. We recommend designing a form for documenting each design review, which identifies the deliverables for that review. The form should also define the minimum required attendees by function. Other design review attendees should be identified as optional rather than required reviewers and approvers. If your design review process requires too many people, this will have a long-term impact on the review and approval of design changes.

The only required design review is a final design review to approve the commercial release of your product. Do not keep the DHF open after commercial release. All changes after that point should be under production controls, and changes should be documented in the Device Master Record (DMR)/Technical File (TF). If device modifications require a new 510k submission, then you should create a new design project and DHF for the device modification. The new DHF might have no changes to the user needs and design inputs, or you might have minor changes (e.g., a change in the sterilization method requires testing to revised design inputs).

Step 14: FDA Registration

Within 30 days of initial product distribution in the USA, you are required to register your establishment with the FDA. Registration must be renewed annually between October 1 and December 31, and registration is required for each facility. If your company is located outside the USA, you will need an initial importer that is registered, and you will need to register before you can ship product to the USA. Non-US companies must also designate a US Agent that resides in the USA. At the time of FDA registration, your company is expected to be compliant with all regulations for the quality system, UDI, medical device reporting, and corrections/removals.

Step 15: Post-market surveillance is the design control input for the next design project

One of the required outputs of your final design review is your DMR Index. The DMR Index should perform a dual function of also meeting technical documentation requirements for other countries, such as Canada and Europe. A Technical File Index, however, includes additional documents that are not required in the USA. One of those documents is your post-market surveillance plan and the results of post-market surveillance. That post-market surveillance is an input to your design process for the next generation of products. Any use errors, software bugs, or suggestions for new functionality should be documented as post-market surveillance and considered as potential inputs to the design process for future design projects.

Step 16: Monitoring your design controls process

Audit your design controls process to identify opportunities for improvement and preventive actions. Audits should include a review of the design process metrics, and you may consider establishing quality objectives for the improvement of the design process. This last step, and the standardization of design verification protocols in step five (5), are discussed in further detail in another blog by Medical Device Academy.


Auditing MDSAP and QSR Requirements – a 4-part webinar series

Rob Packard is hosting the Auditing MDSAP and QSR requirements four-part webinar series from August 9th to 30th, 2023.



Register for the 4-part Webinar Series on Auditing MDSAP and QSR requirements for $299

This is a 4-part webinar series that will be conducted live on August 9, 17, 23, and 30 (2023) via Zoom. The second webinar was rescheduled from the 16th to the 17th due to a scheduling conflict. This course assumes that the participant already has experience auditing to ISO 13485 and/or ISO 9001.
Price: $299.00

Outline for the Auditing MDSAP and QSR requirements webinar series

Registrants will receive a confirmation email because we deliver content and notification of updates through AWeber as an email subscription. After confirmation, you will receive login information for the four live Zoom webinars. Each of the four webinars will be approximately 45 minutes in duration, and the training content is organized as follows:

Session 1 – Auditing MDSAP and QSR requirements Kick-off – recorded on August 9, 2023

  1. What are the MDSAP and QSR regulatory requirements that you need to audit?
  2. Where do you find the MDSAP and QSR regulatory requirements?
  3. How are regulatory requirements different from quality system requirements (i.e., ISO 13485:2016)?
  4. How do you modify an audit schedule and your agenda to include regulatory requirements?
  5. Which documents do you need for audit preparation?
  6. How do you document your credentials (i.e., training competence) for regulatory requirements?

Session 2 – Auditing Management Processes & CAPA Process – recorded on August 17, 2023

  1. MDSAP Chapter 1 – Management
  2. MDSAP Chapter 3 – Measurement, Analysis & Improvement
  3. FDA QSIT – Management (except Purchasing & Supplier Controls)
  4. FDA QSIT – CAPA

Session 3 – Wednesday, August 23, 2023, @ 11:00 am EDT (live)

  1. MDSAP Chapter 2 – Device Market Authorization & Facility Registration
  2. MDSAP Chapter 4 – Adverse Event Reporting & Advisory Notices
  3. MDSAP Chapter 5 – Design Controls
  4. FDA QSIT – Design Controls

Session 4 – Auditing QSR & MDSAP requirements Finale – Wednesday, August 30, 2023, @ 11:00 am EDT (live)

  1. MDSAP Chapter 6 – Production & Service Controls
  2. MDSAP Chapter 7 – Purchasing
  3. FDA QSIT – Production & Process Controls
  4. FDA QSIT – Purchasing & Supplier Controls (sub-section of Management)

About Your Medical Device Academy Webinar Instructor Rob Packard


Rob Packard is a regulatory consultant with ~25 years of experience in the medical device, pharmaceutical, and biotechnology industries. He is a graduate of UConn in Chemical Engineering. Rob was a senior manager at several medical device companies, including President/CEO of a laparoscopic imaging company. His quality management system expertise covers all aspects of developing, training, implementing, and maintaining ISO 13485 and ISO 14971 certifications. From 2009 to 2012, he was a lead auditor and instructor for one of the largest Notified Bodies. Rob’s specialty is regulatory submissions for high-risk medical devices, such as implants and drug/device combination products, for CE marking applications, Canadian medical device applications, and 510(k) submissions. His favorite part of the job is training others. He can be reached via phone at 802.281.4381 or by email. You can also follow him on YouTube, LinkedIn, or Twitter.


Auditor shadowing as an effective auditor training technique

This article reviews auditor shadowing as an effective auditor training technique, but we also identify five common auditor shadowing mistakes.

How do you evaluate auditor competency?

Somewhere in your procedure for quality audits, I’ll bet there is a section on auditor competency. Most companies require that the auditor has completed either an internal auditor course or a lead auditor course. If the course had an exam, you might even have evidence of training effectiveness. Demonstrating competence, however, is much more challenging. One way is to review internal audit reports, but writing reports is only part of what an auditor does. How can you evaluate an auditor’s ability to interview people, take notes, follow audit trails, and manage their time? The most common solution is to require that the auditor “shadow” a more experienced auditor several times, and then have the trainer “shadow” the trainee.

If you are shadowing, you are taking notes, so you can discuss your observations with the person you are shadowing later. 

Auditor shadowing in 1st party audits

ISO 19011:2018 defines first-party audits as internal audits. When first-party auditors are being shadowed by a trainer or vice versa, there are many opportunities for training. The key to the successful training of auditors is to recognize teachable moments.

When the trainer is auditing, the trainer should look for opportunities to ask the trainee, “What should I do now?” or “What information do I need to record?” In these situations, the trainer asks the trainee what they should do BEFORE doing it. If the trainee is unsure, the trainer should immediately explain what, why, and how with real examples.

When the trainer is shadowing, the trainer should watch and wait for a missed opportunity to gather important information. In these situations, the trainer must resist guiding the trainee until after the trainee appears to be done. When it happens, sometimes the best tool is simply asking, “Are you sure you got all the information you came for?”

Here are five (5) mistakes that I have observed trainers make when they were shadowing:

1. Splitting up, instead of staying together, is one of the more common mistakes I have observed. This happens when people are more interested in completing an audit than in taking advantage of training opportunities. The trainee may be capable of auditing independently, but splitting up is unfair to the trainee because they need feedback on their auditing technique. It is also unfair to the auditee because it is challenging to support multiple auditors simultaneously, and when splitting up is unplanned, guides may not be available for both auditors. If an audit is running behind schedule, this is the perfect time to teach a trainee how to recover some time in their schedule. Time management is, after all, one of the most challenging skills for auditors to master.

2. Staying in the conference room instead of going to where the work is done is a common criticism of auditors. If the information you need to audit can be found in a conference room, you could have completed the audit remotely. This type of audit only teaches new auditors how to review records and take notes, and those are skills that auditors should master in a classroom before shadowing.

3. Choosing an administrative process is a mistake because administrative processes limit the number of aspects of the process approach that an auditor-in-training can practice. Administrative processes rarely have equipment that requires validation or calibration, and the process inputs and outputs consist only of paperwork, forms, or computer records. When a process has raw materials and finished goods, the auditor’s job is more challenging because there is more to be aware of.

4. Not providing honest feedback is a huge mistake. Auditors need to be thick-skinned, or they don’t belong in a role where they will criticize others. Before you begin telling others how to improve, you must self-reflect and identify your strengths and weaknesses. Understanding your perspective, strengths, weaknesses, and prejudices is critical to being a practical assessor. As a trainer, it is your job to help new auditors to self-reflect and accurately rate their performance against objective standards.

5. “Silent shadowing” has no value at all. By this, I mean shadowing another auditor without asking questions. If you are a trainee, you should mentally pretend you are doing the audit. Whenever the trainer does something differently from how you would do it, you should make a note to ask, “Why did you do that?” If you are the trainer, you should also mentally pretend you are doing the audit. It is not enough to be present; your job is to identify opportunities for the trainee to improve. The better the trainee, the more challenging it becomes to identify areas for improvement. This is why training other auditors has helped me improve my own auditing skills.

Auditor shadowing in 2nd party audits


Auditors responsible for supplier auditing are critical to supplier selection, supplier evaluation, re-evaluation, and the investigation of the root cause of any non-conformities related to a supplier. Auditor shadowing is a great tool to teach supplier auditors and other people responsible for supply-chain management what to look at and what to look for when they audit a supplier. If you are developing a new supplier quality engineer who will be responsible for performing supplier audits, having them observe an experienced auditor during some actual supplier audits is recommended. Supplier audits are defined as second-party audits in the ISO 19011 Standard. The purpose of these audits is not to verify conformity to every aspect of ISO 13485. Instead, the primary purpose of these audits is to verify that the supplier has adequate controls to consistently manufacture conforming products for your company. Therefore, processes such as Management Review (Clause 5.6) and Internal Auditing (Clause 8.2.2) are not typically sampled during a second-party audit.

The two most valuable processes for a second-party auditor to sample are 1) incoming inspection and 2) production controls. Using the process approach to auditing, the second-party auditor will have an opportunity to verify that the supplier has adequate controls for documents and records for both of these processes. Training records for personnel performing these activities can be sampled. The adequacy of raw material storage can be evaluated by following the flow of accepted raw materials, leaving the incoming inspection area. Calibration records can be sampled by gathering equipment numbers from calibrated equipment used by both processes. Even process validation procedures can be assessed by comparing the actual process parameters being used in manufacturing with the documented process parameters in the most recent validation or re-validation reports.

I recommend having the trainee shadow the trainer during the process audit of the incoming inspection process and for the trainer to shadow the trainee during the process audit of production processes. The trainee should ask questions between the two process audits to help them fully understand the process approach to auditing. Supplier auditors should also be coached on techniques for overcoming resistance to observing processes involving trade secrets or where competitor products may also be present. During the audit of production processes, the trainer may periodically prompt the trainee to gather the information that will be needed for following audit trails to calibration records, document control, or for comparison with the validated process parameters. The “teachable moment” is immediately after the trainee misses an opportunity, but while the trainee is still close enough to go back and capture the missing details.

Are you allowed to shadow a 3rd party auditor or FDA inspector?


Consider using 3rd party audits and inspections as an opportunity to shadow experienced auditors to learn what they are looking at and what they are looking for. In addition to shadowing an expert within your own company or an auditor/consultant you hire for an internal audit, you can also shadow a 3rd party auditor or an FDA inspector. This concept was the subject of a discussion thread I ran across on Elsmar Cove from 2005. The comments in the discussion thread supported the idea of shadowing a 3rd party auditor. The process owner (i.e., the manager responsible for that process) should be the guide for whichever process is being audited, and the process owner is responsible for addressing any non-conformities found in the area. The process owner should be present during interviews, but the process owner should refrain from commenting. Both the 3rd party auditor and the process owner need to know if the person being interviewed was effectively trained and if they can explain the process under the pressure of an audit or FDA inspection. If you are interested in implementing this idea, I recommend using one of the following two strategies (or both):

  1. Consider having the internal auditor who audited each process shadow the certification body auditor for the processes they audited during their internal audit. This approach will teach your internal auditor what they might have missed, and they will learn what 3rd party auditors look for so they can simulate a 3rd party audit more effectively when conducting internal audits.
  2. Consider having the internal auditor who is assigned to conduct the next process audit of each process shadow the certification body auditor for that process. This approach will ensure that any nonconformities observed during the 3rd party audit are checked for the effectiveness of corrective actions during the next internal audit. Your internal auditor will know precisely how the original nonconformity was identified and the context of the finding.


Seven ways to improve quality auditor training

A five-day lead auditor course is never enough. Effective quality auditor training must include practical feedback from an expert.

What is required for quality auditor training?

The key to training auditors to audit is consistent follow-up over a long period of time (1-2 years, depending upon the frequency of audits). I recommend following the same training process that accredited auditors must complete. I have adapted that process and developed seven (7) specific recommendations.

Training the trainer

One of my clients asked me to create a training course on how to train operators. I could have taught the operators myself, but so many people needed training that we felt it would be more cost-effective to train the trainers. Usually, I have multiple presentations archived that I can draw upon, but this time I had nothing. I had never formally trained engineers on how to be trainers before. I thought about the problems other quality managers have had in training internal auditors and how I have helped those auditors improve. The one theme I recognized was that effective quality auditor training needs to include practical feedback from an experienced auditor. An expert auditor who is training new auditors needs to identify systematic ways to provide feedback, and setting a benchmark for the number of times feedback will be provided is really helpful.

Improve by observing yourself and other quality auditors

Observing someone else is a great way to learn any new skill. Interns often do this, and it is also a technique used to train new auditors. This technique is called shadowing. You can learn by watching, but eventually, you need to try tasks that are beyond your comfort level, and it is best to practice auditing with an expert watching you.

Practice team member audit preparation

Many of the internal auditing procedures we see require new auditors to conduct three audits as team members before they can audit independently. In contrast, notified body auditors join as team members for 10-20 audits before they can act as lead auditors. During the training period, auditors in training observe multiple lead auditors and multiple quality systems. Each audit allows auditors in training to write nonconformities and receive feedback from a lead auditor. At the beginning of quality auditor training, the focus must be on audit preparation. What are the areas of importance, what are the results of previous audits, are there any previous audit findings to close, etc? This preparation can even be done as practice for a hypothetical audit.

During quality auditor training, practice the opening and closing meetings

Opening and closing meetings are one of the first things to teach a new lead auditor. Have new lead auditors rehearse their first few opening and closing meetings with you in private before conducting the opening and closing meetings. Ensure the lead auditor has an opening/closing meeting checklist to help them. Recording practice sessions is enormously helpful because the trainee can watch and observe their mistakes. As trainees get more experience, the opening and closing meetings should have time limits. Finally, you might ask members of top management to challenge the lead auditor with questions. The lead auditor needs to be comfortable with their decisions and the grading of the audit findings.

How to practice audit team leadership

Have new lead auditors conduct team audits with another qualified lead auditor for 10-20 audits before you allow them to conduct an audit alone. Leading the opening and closing meetings is usually the first area new lead auditors master. The most complicated area to learn is managing a team of auditors. Team members will fall behind schedule during audits, or someone will forget to audit a process. As a lead auditor, you must complete the audits for your assigned processes and communicate with the entire team to ensure everyone else is on schedule. As an observer, you must let lead auditors make mistakes and then help them recognize those mistakes. Initially, a trainer will encourage new lead auditors to give themselves more than enough time. As their training progresses, the timing needs to be shorter and more challenging. Ultimately, you have to push the team beyond its capability to teach new lead auditors to recognize the signs of a problem and how to fix it.

Shadow auditors virtually with recordings

Live shadowing is challenging for experts and trainees because you are distracted by listening to the auditee and observing the auditor. However, if an audit is recorded, the person shadowing can watch the recording. The audit is already completed, and there is little need to concentrate on the auditee. A recording allows the observer to focus on the auditor. If a new auditor is conducting their first audit, an expert should shadow the trainee for 100% of the audit. Gradually the observation can decrease with each subsequent audit.

Practice note-taking with recorded audits

Taking detailed notes is something that experts take for granted, but I learned a lot by watching FDA inspectors take notes during an inspection. Have a new auditor observe a few audits before they are allowed to participate. Make sure they take notes, and explain what you are doing and why as you conduct audits. Review the notes of new auditors periodically throughout the audit to provide suggestions for improvement and identify missing information. You can also record a supplier audit or internal audit and let a new trainee take notes on the recording. This eliminates the need to coordinate schedules to involve the trainee.

Quality auditor training should include practicing audit agenda creation

Have new lead auditors submit a draft audit agenda to you before sending it to the supplier or department manager. Usually, the first audit agenda will need revision and possibly multiple revisions. Make sure you train the person to include enough detail in the agenda, and using a checklist or template is recommended. The agenda creation will be part of the audit preparation, and it can be done without time pressure.

How do you audit the auditing process?

Most quality managers are experienced and have little trouble planning an audit schedule. The next step is to conduct the audit. The problem is that there is very little objective oversight of the auditing process. The ISO 13485 standard for medical devices requires that “Auditors shall not audit their own work.” Therefore, most companies will opt for one of two solutions for auditing the internal audit process: 1) hire a consultant or 2) ask the Director of Regulatory Affairs to audit the internal auditing process.

Both of the above strategies for auditing the internal audit process meet the requirements of ISO 13485, but neither approach helps to improve an internal auditor’s performance. I have interviewed hundreds of audit program managers over the years, and the most common feedback audit program managers give is “Change the wording of this finding” or “You forgot to close this previous finding.” This type of feedback is related to the report-writing phase of the audit process. I rarely hear program managers explain how they help auditors improve at the other parts of the process.

When auditors are first being trained, we typically provide examples of best practices for audit preparation, checklists, interviewing techniques, AND reports. After auditors are “shadowed” by the audit program manager for an arbitrary three times, the auditors are now miraculously “trained.” Let’s see if I can draw an analogy to make my point.

That kind of sounds like watching your 16-year-old drive the family car three times and then giving them a license.

About the Author

Robert Packard is a regulatory consultant with 25+ years of experience in the medical device, pharmaceutical, and biotechnology industries. He is a graduate of UConn in Chemical Engineering. Robert was a senior manager at several medical device companies, including the President/CEO of a laparoscopic imaging company. His Quality Management System expertise covers all aspects of developing, training, implementing, and maintaining ISO 13485 and ISO 14971 certifications. From 2009 to 2012, he was a lead auditor and instructor for one of the largest Notified Bodies. Robert's specialty is regulatory submissions for high-risk medical devices, such as implants and drug/device combination products for CE marking applications, Canadian medical device applications, and 510(k) submissions. His favorite part of the job is training others. He can be reached via phone at 802.258.1881 or by email. You can also follow him on LinkedIn, Twitter, and YouTube.

Seven ways to improve quality auditor training Read More »

CAPA – Corrective/Preventative Action

What is a CAPA? How do you evaluate the need to open a new CAPA, and who should be assigned to work on it when you do?

What is a CAPA?

“CAPA” is the acronym for corrective action and preventive action. It’s a systematic process for identifying the root cause of quality problems and implementing actions for containment, correction, and corrective action. In the special case of preventive actions, the actions taken prevent quality problems from ever happening, while the corrective actions prevent quality problems from happening again. The US FDA requires a CAPA procedure, and an inadequate CAPA process is the most common reason for FDA 483 inspection observations and warning letters. When I teach courses on the CAPA process, 100% of the people can tell me what the acronym CAPA stands for. If everyone understands what a CAPA is, why is the CAPA process the most common source of FDA 483 inspection observations and auditor nonconformities?

Most of the 483 inspection observations identify one of the following seven problems:

  1. the procedure is inadequate
  2. records are incomplete
  3. actions planned did not include corrections
  4. actions planned did not include corrective actions
  5. actions planned were not taken or delayed
  6. training is inadequate
  7. actions taken were not effective

CAPA Resources – Procedures, Forms, and Training

Medical device companies are required to have a CAPA procedure. Medical Device Academy offers a CAPA procedure for sale as an individual procedure or as part of our turnkey quality systems. Purchase of the procedure includes a form for your CAPA records and a CAPA log for monitoring and measuring the CAPA process effectiveness. You can also purchase our risk-based CAPA webinar, which is included in the turnkey quality system.

What’s special about preventive action?

I have completed hundreds of audits of CAPA processes over the years. Surprisingly, this seems to be a process with more variation from company to company than almost any other process I review. This also seems to be a significant source of non-conformities. In the ISO 13485 Standard, clauses 8.5.2 (Corrective Action) and 8.5.3 (Preventive Action) have almost identical requirements. Third-party auditors, however, emphasize that these are two separate clauses. I like to refer to certification body auditors as purists. Although certification body auditors acknowledge that companies may implement preventive actions as an extension of corrective action, they also expect to see examples of strictly preventive actions.

You may be confused between corrective actions and preventive actions, but there is an easy way to avoid confusion. Ask yourself one question: “Why did you initiate the CAPA?” If the reason was: 1) a complaint, 2) audit non-conformity, or 3) rejected components—then your actions are corrective. You can always extend your actions to include other products, equipment, or suppliers that were not involved in the events that triggered the CAPA. However, for a CAPA to be purely preventive in nature, you need to initiate the CAPA before complaints, non-conformities, and rejects occur.
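The decision rule above can be expressed as a short sketch. This is purely illustrative (the trigger names and function are hypothetical, not part of ISO 13485 or any FDA regulation):

```python
# Illustrative classification of a CAPA as corrective or preventive.
# A CAPA that reacts to an actual quality event is corrective; one
# initiated before any event occurs is preventive.

REACTIVE_TRIGGERS = {"complaint", "audit nonconformity", "rejected components"}

def classify_capa(trigger: str) -> str:
    """Return 'corrective' if the CAPA was triggered by an actual
    quality event, otherwise 'preventive'."""
    if trigger.lower() in REACTIVE_TRIGGERS:
        return "corrective"
    return "preventive"

print(classify_capa("Complaint"))                      # a reactive trigger
print(classify_capa("trend analysis of near-misses"))  # initiated proactively
```

The useful point of the sketch is that the classification depends only on the trigger, not on the actions taken afterward.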

How do you evaluate the need to open a CAPA?

If the estimated risk is low and the probability of occurrence is known, then alert limits and action limits can be statistically derived. These quality issues are candidates for continued trend analysis—although the alert or action limits may be modified in response to an investigation. If the trend analysis results in identifying events that require action, then that is the time when a formal CAPA should be opened. No formal CAPA is needed if the trend remains below your alert limit.

If the estimated risk is moderate or the probability of occurrence is unknown, then a formal CAPA should be considered. Ideally, you can establish a baseline for the occurrence and demonstrate that frequency decreases upon implementing corrective actions. If you can demonstrate a significant drop in frequency, this verifies the effectiveness of actions taken. If you need statistics to show a difference, then your actions are not effective.

A quality improvement plan may be more appropriate if the estimated risk is high or multiple causes require multiple corrective actions. Two clauses in the Standard apply. Clause 5.4.2 addresses the planning of changes to the Quality Management System. For example, if you correct problems with your incoming inspection process—this addresses 5.4.2. Clause 7.1 addresses the planning of product realization. For example, if you correct problems with a component specification where the incoming inspection process is not effective, this addresses 7.1. Depending on the number of contributing causes and the complexity of implementing solutions, the plan could be longer or shorter. If implementing corrective action takes more than 90 days, you might consider the following approach.

Step 1 – open a CAPA

Step 2 – identify the initiation of a quality plan as one of your corrective actions

Step 3 – close the CAPA when your quality plan is initiated (i.e., documented and approved)

Step 4 – verify effectiveness by reviewing the progress of the quality plan in management reviews and other meeting forums. You can cross-reference the CAPA with the appropriate management review meeting minutes in your effectiveness section.

If the corrective action required is installing and validating new equipment, the CAPA can be closed as soon as a validation plan is created. The effectiveness of the CAPA is verified when the validation protocol is successfully implemented, and a positive conclusion is reached. The same approach also works for implementing software solutions to manage processes better. The basic strategy is to start long-term improvement projects with the CAPA system but monitor the status of these projects outside the CAPA system.

Best practices would be implementing six-sigma projects with formal charters for each long-term improvement project.

NOTE: I recommend closing CAPAs when actions are implemented and tracking the effectiveness checks for CAPAs as a separate quality system metric. If closure takes over 90 days, the CAPA should probably be converted to a Quality Plan. This is NOT intended to be a “workaround” to give companies a way to extend CAPAs that are not making progress on time.

Who should be assigned to work on a CAPA?

Personnel in quality assurance are usually assigned to CAPAs, while managers in other departments are less frequently assigned to CAPAs. This is a mistake. Each process should have a process owner, who should be assigned to conduct the root cause investigation, develop the CAPA plan, and manage the planned actions. If the manager is not adequately trained, someone from the quality assurance department should use this as an opportunity to conduct on-the-job training to help them with the CAPA, not do the work for them. This will increase the number of people in the company with CAPA competency. It will also ensure that the process owner takes a leadership role in revising and updating procedures and training on the processes that need improvement. Finally, the process will teach the process owner the importance of monitoring and measuring the process to identify when the process is out of control or needs improvement. The best practice is to establish a CAPA Board to monitor the CAPA process, expedite actions when needed, and ensure that adequate resources are made available.

What is a root cause investigation?

If you are investigating the root cause of a complaint, people will sample additional records to estimate the frequency of the quality issue. I describe this as investigating the depth of a problem. The FDA emphasizes the need to review other product lines, or processes, to determine if a similar problem exists. I describe this as investigating the breadth of a problem. Most companies describe actions taken on other product lines and/or processes as “preventive actions.” This is not always accurate. If a problem is found elsewhere, actions taken are corrective. If potential problems are found elsewhere, actions taken are preventive. You could have both types of actions, but most people incorrectly identify corrective actions as preventive actions.

Another common mistake is to characterize corrections as corrective actions.

The most striking difference between companies seems to be the number of CAPAs they initiate. There are many reasons, but the primary reason is the failure to use a risk-based approach to CAPAs. Not every quality issue should result in the initiation of a formal CAPA. The first step is to investigate the root cause of a quality issue. The FDA requires that the root cause investigation be documented, but if you already have an open CAPA for the same root cause…DO NOT OPEN A NEW CAPA!!!

What should you do if you do not have a CAPA open for the root cause you identify?

The image below gives you my basic philosophy.

Most CAPA investigations document the estimated probability of occurrence of a quality issue. This is only half of the necessary risk analysis. Another aspect of an investigation is documenting the severity of potential harm resulting from the quality issue. If a quality issue affects customer satisfaction, safety, or efficacy, the severity is significant. Risk is the product of severity and probability of occurrence.
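That relationship can be written as a one-line calculation. The 1-5 scales and the threshold below are purely illustrative; your CAPA procedure would define the actual scales and decision criteria:

```python
def risk_score(severity: int, occurrence: int) -> int:
    """Risk estimated as the product of severity and probability of
    occurrence, each scored here on an illustrative 1-5 scale."""
    return severity * occurrence

def needs_formal_capa(severity: int, occurrence: int, threshold: int = 10) -> bool:
    # Hypothetical threshold; a real procedure defines its own criteria.
    return risk_score(severity, occurrence) >= threshold

# A moderate-severity, moderate-occurrence issue exceeds the threshold,
# while a low-severity, low-occurrence issue stays in trend analysis:
print(needs_formal_capa(severity=4, occurrence=3))  # True
print(needs_formal_capa(severity=2, occurrence=2))  # False
```

The point of scoring both factors is that a high-severity issue can warrant a CAPA even when its occurrence estimate alone would not.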

How much detail is needed in your CAPAs?

One of the most common reasons for an FDA 483 inspection observation related to CAPAs is a lack of detail. You may be doing all of the planned tasks, but you must also document your activity. Investigations will often include a lot of detail identifying how the root cause was identified, but you need an equal level of detail for planned containment, corrections, corrective actions, and effectiveness checks. Who is responsible, when will it be completed, how will it be done, what will the records be, and how will you monitor progress? Make sure you include copies of records in the CAPA file as well, because this eliminates the need for inspectors and auditors to request additional records related to the CAPA. Ideally, the person reviewing the CAPA file will not need to request any additional records. For example, a copy of the revised process procedure, a copy of training records, and a copy of graphed metrics for the process are frequently missing from a CAPA file, but auditors will request this information to verify that all actions were completed and that the CAPA is effective.

What is the difference between corrections and corrective actions?

Every nonconformity identified in the original finding requires correction. By reviewing records, FDA inspectors and auditors will verify that each correction was completed. In addition, several new nonconformities may be identified during the investigation of the root cause. Corrections must be documented for the newly found nonconformities as well. Corrective actions are actions you take to prevent nonconformities from recurring. Examples of the most common corrective actions include: revising procedures, revising forms, retraining personnel, and creating new process metrics to monitor and measure the effectiveness of a process. Firing someone who did not follow a procedure is not a corrective action. Better employee recruiting, onboarding, and management oversight should prevent employees from making serious mistakes. The goal is to have a near-perfect process that catches human error rather than a near-perfect employee who has to compensate for weak processes.

Implementing timely corrective actions

Every correction and corrective action in your CAPA plan should include a target completion date, and a specific person should be assigned to each task. Once your plan is approved, you need a mechanism for monitoring the on-time completion of each task. There should be a top management group or a CAPA board that is responsible for reviewing and expediting CAPAs. If CAPAs are being completed on schedule, regular meetings are short. If CAPAs are behind schedule, management or the CAPA board needs the authority and responsibility to expedite actions and make additional resources available when needed. Identifying lead and lag metrics is essential to manage the CAPA process successfully, as well as all other quality system processes.
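A minimal sketch of such a monitoring mechanism, assuming each task record carries an owner, a target date, and a completion flag (all field names here are hypothetical):

```python
from datetime import date

def overdue_tasks(tasks, today):
    """Return incomplete tasks past their target completion date,
    for escalation at the next CAPA board review."""
    return [t for t in tasks if not t["done"] and t["due"] < today]

# Illustrative CAPA plan with one completed and one overdue task.
capa_plan = [
    {"task": "Revise SOP-007", "owner": "QA", "due": date(2024, 3, 1), "done": True},
    {"task": "Retrain assemblers", "owner": "Production", "due": date(2024, 3, 15), "done": False},
]

for t in overdue_tasks(capa_plan, today=date(2024, 4, 1)):
    print(f"OVERDUE: {t['task']} (owner: {t['owner']}, due {t['due']})")
```

Running a check like this before each CAPA board meeting keeps the meeting short when everything is on schedule, which is exactly the behavior described above.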

What is an effectiveness check?

Implementation of actions and effectiveness of actions are frequently confused. An action is implemented when the action you planned is completed. Usually, this is documented with the approval of revised documents and training records. The effectiveness of actions is more challenging to demonstrate, and therefore it is critical to identify lead and lag metrics for each process. Lead metrics measure the routine activities that are necessary for a process, while lag metrics measure the results of those activities. For example, monitoring the frequency of cleaning in a controlled environment is a lead metric, while monitoring bioburden and particulates is a lag metric. Effectiveness checks should be quantitative whenever possible. Your effectiveness is weak if you need statistics to show a difference before and after implementing your CAPA plan. If a graph of the process metrics is noticeably improved after implementing your CAPA plan, then the effectiveness is strong.
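As a sketch of a quantitative effectiveness check, you might compare a lag metric before and after the corrective actions; a drop that is obvious without formal statistics suggests a strong result. The monthly counts below are illustrative only:

```python
def effectiveness_summary(before, after):
    """Compare average monthly counts of a lag metric (e.g., complaints)
    before and after corrective actions were implemented."""
    before_avg = sum(before) / len(before)
    after_avg = sum(after) / len(after)
    reduction_pct = 100 * (before_avg - after_avg) / before_avg
    return {"before_avg": before_avg, "after_avg": after_avg,
            "reduction_pct": reduction_pct}

# An 80% reduction is visible on any graph; no hypothesis test needed.
summary = effectiveness_summary(before=[10, 9, 11, 10], after=[2, 3, 2, 1])
print(summary)
```

If the before/after averages were close enough that only a hypothesis test could distinguish them, that would be the "weak effectiveness" case described above.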

About Your Instructor


Rob Packard is a regulatory consultant with approximately 25 years of experience in the medical device, pharmaceutical, and biotechnology industries. He is a graduate of UConn in Chemical Engineering. Rob was a senior manager at several medical device companies, including the President/CEO of a laparoscopic imaging company. His Quality Management System expertise covers all aspects of developing, training, implementing, and maintaining ISO 13485 and ISO 14971 certifications. From 2009 to 2012, he was a lead auditor and instructor for one of the largest Notified Bodies. Rob's specialty is regulatory submissions for high-risk medical devices, such as implants and drug/device combination products for CE marking applications, Canadian medical device applications, and 510(k) submissions. His favorite part of the job is training others. He can be reached via phone at 802.281.4381 or by email. You can also follow him on YouTube, LinkedIn, or Twitter.

CAPA – Corrective/Preventative Action Read More »

How do you demonstrate training competence?

Anyone can sign and date training records, but how do you demonstrate the effectiveness of training and competence?

What are the requirements for training?

The requirements for training are found in ISO 13485:2016, Clause 6.2 (i.e., Human Resources). Auditors and inspectors usually only ask for training records, but the requirement for records is the last item in the clause (i.e., “6.2(e) – maintain appropriate records of education, training, skills, and experience (see 4.2.5).”). The first four items in Clause 6.2 all include one word, but it’s not “records.” The most critical word in the requirements is “competence.”

What is the difference between training requirements, training records, training effectiveness, and training competence?

  • Training requirements define the “appropriate education, training, skills, and experience.” Specifically, what degree(s) is required, if any? What training curriculum needs to be completed? What skills are needed to perform the work? And how many years of experience are needed?
  • Training records are documented evidence that all training requirements have been met. Records may include a job application, resume, individual training records, group training records, or a training certificate. Records may also include the results of any quizzes to demonstrate training effectiveness and any evaluation of training competence.
  • Training effectiveness is how well a person understands the information communicated during training. Using terms from human factors engineering (i.e., the PCA Process), the person must have the correct perception and cognition. Cognition can be evaluated by giving people quizzes or asking them questions. Written and verbal exams may also include pictures and/or video. These types of quiz or exam tasks are “knowledge tasks.”
  • Training competence is possessing the skill or ability to perform tasks. To demonstrate competence, someone who is already competent (i.e., a subject matter expert) must observe the trainee performing tasks. A person who once possessed the skill or ability can also judge competency, and a person who does not possess the skill or ability can be trained to evaluate skill and ability (e.g., an Olympic judge or referee). Skills and abilities can be physical or cognitive, but there are also social skills. Examples of each are provided in the table below:

List of abilities and skills

What keeps me awake at night? (a story from April 2, 2011)

I am in Canada, it’s almost midnight, and I can’t sleep. I’m here to teach a Canadian client about ISO 14971—the ISO Standard for Risk Management of medical devices. Most companies requesting this training are doing so for one of two reasons: 1) some design engineers have no risk management training, or 2) the engineers have previous training on risk management, but the training is out-of-date. This Canadian company falls into the second category, and engineers with previous training ask the most challenging questions. This group of engineers forced me to re-read the Standard several times and reflect on the nuances of almost every single phrase. While teaching this risk management course, I learned more about risk management than I ever knew.

The four levels of the Learning Pyramid

The image at the beginning of this blog is a learning model that explains my experiences training Canadian design engineers. I call this model the “Learning Pyramid.” At the base of the pyramid, there are “Newbies.”

This is the first of four levels. At the base, students read policies and procedures, hoping to understand.

In the second level of the pyramid, the student is asked to watch someone else demonstrate proper procedures. One of my former colleagues has a saying that explains the purpose of this process well: “A picture tells a thousand words, but a demonstration is like a thousand pictures.” Our children call this “sharing time,” but everyone over 50 remembers it as “show and tell.”

In the third level of the pyramid, the student is asked to perform the tasks themselves. This is described as “doing,” but in my auditing courses, I refer to this process as “shadowing.” Trainees will first read the procedures for internal auditing (level 1). Next, trainees will shadow the trainer during an audit so the trainer can demonstrate the proper technique (level 2). During subsequent audits, the trainees will audit, and the trainer will shadow the trainee (level 3). During this “doing” phase, the trainer must watch, listen, and wait for what I call the “teachable moment.” This is a moment when the trainee makes a mistake, and you can use that mistake as an opportunity to teach a complex subject.

Finally, in the fourth level of the Learning Pyramid, we now allow the trainee to become a trainer. This is where I am at. I am an instructor, but I am still learning. I am learning what I don’t know.

Teaching forces you back to the bottom of the Learning Pyramid

After teaching, the next step in the learning process is to return to the first level. I re-read the Standards and procedures until I understood the nuances I had been unaware of. Then I searched for real-world examples that demonstrate the complex concepts I was learning. After finding examples, I tested my knowledge by applying it to an FDA 510(k) or De Novo submission for a medical device client. Finally, I prepared to teach again. This iterative process reminds me of the game Chutes and Ladders, but one key difference is that we never really reach the level of “Guru.” We continue to improve but never reach our goal of perfection.

Where is training competence in the Learning Pyramid?

Most people feel that a person is competent in a position when they can consistently perform a task. However, the ISO 13485 standard uses the phrase “necessary competence” to suggest that competency for any given task might not be the same for all people. For example, in some cases, it may be sufficient to perform a task with written instructions in front of you, while an instructor might be expected to perform the task without written instructions. The speed at which a person performs a task might also differ. For example, a secretary might be required to touch type at a speed of 60+ wpm with a QWERTY keyboard, while a stenographer must be able to use a STENO keyboard and write at 225 wpm. The accuracy requirements may differ for those two positions as well. Therefore, your company may decide that the training competence requirement for a design engineer is that they can pass an exam on risk management. However, the training competency requirement for the design project lead or Engineering Manager might include teaching inexperienced design engineers to apply the basic risk management principles. The ability to teach inexperienced design engineers might be demonstrated by an auditor interviewing members of your design team.

How do you demonstrate training competence? Read More »

Audit schedule and an audit agenda, what’s the difference?

Internal audit and supplier audit programs both require an audit schedule and an audit agenda, but what’s the difference between them?

What is an audit schedule?

An “audit schedule” is not formally defined in ISO 19011:2018. However, section 5.1 of that standard states that your audit program should include nine different requirements. Item “d” is “d) schedule (number/duration/frequency) of the audits.” Typically, the audit program manager will maintain an annual audit schedule with a date indicating when the schedule was last revised. The most common example in lead auditor training is a matrix like the one shown below. The left-hand column lists all of the individual processes identified in the company’s process interaction diagram, and the top row of the matrix indicates the month when each process audit will be conducted. Typically, the expectation is to complete the audit sometime during that month, but some quality auditing procedures specify that the audit may be completed the month before or the month after to accommodate the process owner. The regulations only require that you document and maintain an audit schedule; the standard is only considered guidance.

Example of an Internal Audit Schedule

I use a slide in my lead auditor courses that shows the example internal audit schedule provided above. On the surface, this example seems like a good audit schedule: twelve auditors each perform two audits per year. If each audit requires approximately two days, each auditor spends less than two percent of their work year auditing. Unfortunately, a two-percent allocation of time is insufficient to become or remain proficient at auditing. An improvement to the audit schedule would be to assign fewer auditors so that each auditor gets more experience. There is no perfect number, but assigning a few specialists will improve the chances of becoming and remaining proficient at auditing.
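The arithmetic behind the "less than two percent" estimate, assuming a 250-workday year (the workday count is an assumption, not from the original schedule):

```python
# Back-of-the-envelope check of the auditing time allocation.
audits_per_auditor_per_year = 2
days_per_audit = 2
workdays_per_year = 250  # assumed; excludes vacation and holidays

fraction = audits_per_auditor_per_year * days_per_audit / workdays_per_year
print(f"{fraction:.1%}")  # 1.6% of each auditor's work year
```

Halving the auditor pool to six would double each auditor's audits, which still leaves the time allocation at only about three percent; the real gain is concentrated repetition and feedback for fewer people.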

What is an audit agenda?

An “audit agenda” is not a formal definition in ISO 19011:2018 either. In fact, the word “agenda” is not even used in ISO 19011. Instead, section 5.5.5 of ISO 19011 states that “The assignment [of the individual audit] should be made in sufficient time before the scheduled date of the audit, in order to ensure the effective planning of the audit.” The audit plan must also be part of the records [i.e., Clause 5.5.7(b)]. Therefore, “agenda” and “plan” may be used interchangeably. Details of audit planning are provided in 6.3.2 of ISO 19011.

6 Steps to Creating an Audit Schedule

There are six steps to creating an audit schedule:

  1. What were the results of previous audits?
  2. Which processes are the most important to audit?
  3. Who should conduct your internal audit?
  4. How long should your internal audit be?
  5. Should you conduct one full quality system audit or several audits?
  6. Is a remote audit good enough?

We will address each of the six steps below.

How do the results of previous audits impact your audit schedule?

The results of an audit include nonconformities, opportunities for improvement (OFIs), and a conclusion regarding whether the quality system is effective. Usually, most processes are effective, and there are no nonconformities or OFIs. Therefore, any process that had a nonconformity or OFI should be prioritized in future audit schedules and audit planning. For internal and supplier audits, a best practice is for the auditor and the process owner to discuss the planned corrective actions and agree on an appropriate timeline for their implementation. The auditor can then indicate a timeframe for re-auditing the nonconforming process after the corrective actions are implemented. This strategy allows the auditor to be part of the effectiveness check. This approach is appropriate for individual process audits but not for a full quality system audit.
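One way to feed prior results back into the schedule is to rank processes by their most recent findings, auditing nonconforming processes first. A minimal sketch, where the process names and findings are hypothetical:

```python
# Prioritize processes for the next audit schedule based on previous
# audit results. The findings below are hypothetical examples.
PRIORITY = {"nonconformity": 0, "OFI": 1, "none": 2}  # lower = audit sooner

previous_results = {
    "Purchasing": "nonconformity",
    "Document Control": "none",
    "Design Controls": "OFI",
    "Production": "none",
}

def schedule_order(results):
    """Sort processes so those with nonconformities are audited first."""
    return sorted(results, key=lambda p: PRIORITY[results[p]])

print(schedule_order(previous_results))
# → ['Purchasing', 'Design Controls', 'Document Control', 'Production']
```

In practice, the re-audit date would also be tied to the corrective action implementation timeline agreed upon with the process owner.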

Which processes are the most important to audit?

The primary factor in deciding which processes are most important to audit is the risk to product quality associated with each process. Usually, support processes are of lower importance because they do not directly impact product quality. In contrast, core processes directly involved in a device’s design, manufacture, and distribution are critical. Most auditors and audit program managers emphasize design controls and production process controls as important areas to audit; however, the distribution area is often neglected. Other core processes include purchasing, sales, customer service, and servicing. Not every process is equally important when comparing two companies. For example, device manufacturers that only make software as a medical device (SaMD) often have very limited purchasing and incoming inspection activities to audit.

Who should the audit program manager assign to each internal audit?

The example of a revised audit schedule provided above uses color coding to identify the department where each auditor works. This ensures that auditors are not assigned to audit processes where they might have a conflict of interest (i.e., auditing their own work), which is the most important aspect of assigning auditors. The second most important aspect is making sure the auditor has the technical knowledge to audit the process; it is challenging to audit manufacturing if you have never spent time in manufacturing. If an auditor is new and their training is in progress, the audit program manager may assign them to a process specifically to give them more experience with that type of process. Inexperienced auditors are often assigned less important processes that have not changed recently. However, a better approach to training auditors is to give them a challenge with support. Having the new auditor prepare a detailed sampling plan and a list of questions before the audit can prepare them for auditing a more challenging, important process that is likely to have one or more nonconformities. Auditing processes that have nonconformities is also the best way to teach a new auditor how to write audit findings.

What should be the duration of each internal audit in your schedule?

The duration of an audit should be based on the results of previous audits, but other important factors include: 1) the number of personnel involved in the process, 2) the complexity of the process, and 3) the risk to product quality associated with the process. The MDSAP program uses a procedure for audit time determination (i.e., MDSAP AU P0008.007: Audit Time Determination Procedure), and the MDSAP audit approach document (i.e., MDSAP AU P0002.008 Audit Approach) classifies processes as having either a “direct” or “indirect” impact upon product quality based upon the applicable clause (i.e., Clauses 0-6.3 are indirect, and Clauses 6.4-8.5.3 are direct). For example, the production and design and development processes both involve a large number of people in most organizations, are complex, and directly impact product quality. Therefore, I typically allocate 3-4 hours to each of those processes during an audit. In comparison, incoming inspection often involves one or two people and only one procedure. Incoming inspection is a “direct” process, but less time (e.g., 1 hour) should be allocated to auditing it than the other two processes, unless there was a nonconformity in the incoming inspection process during a recent audit or the process was recently changed.
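The factors above can be combined into a rough duration heuristic. To be clear, this is not the MDSAP AU P0008 calculation; it is an assumption-laden sketch whose weights were chosen only to reproduce the example allocations discussed in this section:

```python
# Simplified audit-duration heuristic illustrating the factors discussed
# above: headcount, process complexity (number of procedures), "direct"
# impact on product quality (in the MDSAP sense), and recent
# nonconformities. The weights are illustrative assumptions, NOT the
# MDSAP AU P0008 audit time determination procedure.
def estimate_hours(headcount, procedures, direct_impact, recent_nc=False):
    hours = 1.5 if headcount > 5 else 0.25   # number of people involved
    hours += 0.5 * min(procedures, 3)         # complexity of the process
    hours += 0.5 if direct_impact else 0.0    # direct impact on quality
    hours += 1.0 if recent_nc else 0.0        # re-audit after nonconformity
    return hours

# Production: many people, several procedures, direct impact
print(estimate_hours(headcount=20, procedures=4, direct_impact=True))   # → 3.5
# Incoming inspection: 1-2 people, one procedure, direct impact
print(estimate_hours(headcount=2, procedures=1, direct_impact=True))    # → 1.25
```

A recent nonconformity adds an hour in this sketch, which matches the advice to allocate extra time when a process was recently changed or had findings.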

Should you conduct one full quality system audit or several audits?

Both approaches have strengths and weaknesses, and there is no single best way. If I am using employees to conduct an audit, I typically restrict the scope of the audit to a single process. Alternatively, when I use a consultant, I typically conduct a full quality system audit to minimize travel costs. Another strategy I have recommended is to identify the processes that are most important to audit first (e.g., processes with recent changes and/or nonconformities) and schedule them for individual process audits during the first half of the audit schedule. Then I schedule a full quality system audit in the second half. This strategy ensures that the most important processes are audited twice in one year and that every process is audited at least once.
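The combined strategy can be verified by counting audits per process. In this minimal sketch the process names and priorities are hypothetical:

```python
# Sketch of the combined strategy: high-priority processes (e.g., recent
# changes or nonconformities) get individual audits in the first half of
# the year, and a full quality system audit in the second half covers
# every process at least once. Process names are hypothetical.
all_processes = ["Design", "Production", "Purchasing", "Document Control",
                 "Customer Service", "CAPA"]
high_priority = ["Design", "CAPA"]

first_half = [("individual audit", p) for p in high_priority]
second_half = [("full quality system audit", p) for p in all_processes]
plan = first_half + second_half

audit_counts = {p: sum(1 for _, q in plan if q == p) for p in all_processes}
print(audit_counts)
# high-priority processes are counted twice; all others once
```

Counting confirms the claim: every high-priority process appears in both halves of the schedule, while the rest are covered once by the full quality system audit.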

Remote audits vs On-site audits

Prior to the Covid-19 pandemic, remote audits were rare in the medical device industry. Many notified bodies (NBs) insisted that remote audits were not permitted or not effective. The pandemic forced the entire industry to create policies for remote auditing and to use it whenever possible. Now that the pandemic has ended, many companies continue to conduct remote audits to save money. Even NBs are conducting more remote Stage 1 readiness audits during the ISO 13485 certification process. ISO 19011 includes an appendix outlining the differences between remote and on-site audits. However, there is minimal advantage to an on-site audit if the auditor is expected to spend all of their time in a conference room; if the audit is going to be done in a conference room, why not conduct it remotely? The one exception is when most records are paper-based and unavailable electronically. In contrast, an on-site audit is generally more effective when the process involves observing inspection activities or assembly operations. Remote audits of those processes should be reserved for re-audits of processes that have already been audited on-site, and even then an on-site audit would be more effective.

How many times should a process be audited annually?

Many notified bodies expect companies to audit all processes at least once during the year. However, the regulations do not expressly state this requirement, and some companies justify skipping processes that are functioning well and have not changed in the past year. Our team is seeing this more frequently as lead auditors worldwide have become scarce due to the demands of MDSAP, MDR/IVDR implementation, and unannounced audits. However, I almost never see the opposite justification (i.e., auditing a process more than once a year). If a process has changed significantly, or there were nonconformities, re-auditing the process can verify the effectiveness of corrective actions or verify that personnel comply with the revised process.

How to take advantage of the process approach to auditing

Another improvement that can be made to the revised example of an audit schedule is to use the process approach to auditing. Instead of performing an independent document control and training audit, these two clauses/procedures can be incorporated into every audit. The same is true of maintenance and calibration support processes. Wherever maintenance and calibration are relevant, these clauses should be investigated as part of auditing that area. For example, when the incoming inspection process is audited, it makes sense to look for evidence of calibration for any devices used to perform measurements in that area. When production process controls are being audited, maintenance records of production equipment should also be sampled.

If the concept of process auditing is fully implemented, the following ISO 13485 clauses can easily be audited in the regular course of reviewing other processes:

  • 4.2.1 – Quality System Documentation;
  • 4.2.3 – Document Control;
  • 4.2.4 – Record Control;
  • 5.3 – Quality Policy;
  • 5.4.1 – Quality Objectives;
  • 6.2.2 – Training;
  • 6.3 – Maintenance;
  • 6.4 – Work Environment;
  • 7.1 – Planning of Product Realization & Risk Management;
  • 7.6 – Calibration;
  • 8.2.3 – Monitoring & Measurement of Processes;
  • 8.5.2 – Corrective Action; and
  • 8.5.3 – Preventive Action.

This strategy reduces the number of process audits needed by more than half.
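The reduction can be illustrated with simple counting. The total of 24 standalone audits below is a hypothetical example of a clause-by-clause audit plan; the 13 absorbed clauses are the ones listed above:

```python
# Counting sketch of the process-approach savings. The total of 24
# standalone audits is a hypothetical clause-by-clause plan; the 13
# support clauses below are those listed in the article.
absorbed_clauses = [
    "4.2.1", "4.2.3", "4.2.4", "5.3", "5.4.1", "6.2.2", "6.3",
    "6.4", "7.1", "7.6", "8.2.3", "8.5.2", "8.5.3",
]
total_standalone_audits = 24  # hypothetical example
remaining = total_standalone_audits - len(absorbed_clauses)
print(remaining)  # → 11, i.e., more than half of the audits are absorbed
```

With 13 of 24 potential standalone audits folded into other process audits, the number of separate audits drops by more than half.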

Internal Auditing: Upstream/Downstream Examples

Another way to embrace the process approach to auditing is to assign auditors to processes that are upstream or downstream of their own area in the product realization process. For example, Manufacturing can audit Customer Service to better understand how customer requirements are confirmed during the order confirmation process. This is an example of auditing upstream because Manufacturing receives orders from Customer Service—often indirectly through an MRP system. This approach allows someone from Manufacturing to identify opportunities for miscommunication between the two departments. If Regulatory Affairs audits the engineering process, this is an example of auditing downstream: Regulatory Affairs often defines the requirements for the Technical Files and Design History Files that Engineering creates. If someone from Regulatory Affairs audits these processes, the auditor will learn which aspects of technical documentation are poorly understood by Engineering and quickly identify retraining opportunities.


Iterative design is real, waterfalls are illusions

The Waterfall Diagram was copied by the FDA from Health Canada and ISO 9001:1994, but everyone actually uses an iterative design process.

Iterative Design – What is it?

The FDA first mandated that medical device manufacturers implement design controls in 1996. Unfortunately, in 1996 the design process was described as a linear process. In reality, the development of almost every product, especially medical devices, involves an iterative design process. The V-diagram from IEC 62304 is closer to the real design control process, but even that process is oversimplified.

[Figure: Software verification and validation V-diagram]

What is the design control process?

The design control process is the collection of methods used by a team of people and companies to ensure that a new medical device will meet the requirements of customers, regulators, recognized standards, and stakeholders. With so many required inputs, it is highly unlikely that a new medical device could ever be developed in a linear process. The design control process must also integrate the risk management and human factors disciplines. ISO 14971:2019, the international risk management standard, requires a risk control option analysis: evaluating multiple risk control options and selecting the best combination of risk controls for implementation. The human factors process involves formative testing, where you evaluate different solutions for user interfaces, directions for use, and training. This always requires multiple revisions before the user specifications are ready to be validated in summative usability testing. Design success is confirmed by conducting verification and validation testing, and the process ends when the team agrees that all design transfer activities are complete and regulatory approval is received.

[Figure: Application of Design Controls to Waterfall Design Process]

Where did design controls come from?

The diagram above is called the “Application of Design Controls to Waterfall Design Process.” The FDA introduced this diagram in 1997 in the design controls guidance document. However, the original source of the diagram was Health Canada.

This diagram is one of the first slides I use in every design control course that I teach because it visually displays the design control process. The design controls process, as defined by Health Canada and the US FDA, is equivalent to the design and development section found in ISO 13485 and ISO 9001 (i.e., Clause 7.3). Seven sub-clauses comprise the requirements of these ISO Standards:

  • 7.3.1 – Design Planning
  • 7.3.2 – Design Inputs
  • 7.3.3 – Design Outputs
  • 7.3.4 – Design Reviews
  • 7.3.5 – Design Verification
  • 7.3.6 – Design Validation
  • 7.3.7 – Design Changes

In addition to the seven sub-clauses found in these ISO Standards, the FDA Quality System Regulation (QSR) includes additional requirements in the following sub-sections of 21 CFR 820.30: a) General, h) Design Transfer, and j) Design History File (DHF). If you need procedures to comply with design controls, we offer two:

  1. Design Controls (SYS-008)
  2. Change Control (SYS-006)

The change control process was separated from the design controls process because it is specific to changes that occur after a device is released to the market. We also have a training on Change Control.

Free Download – Overview of the Design & Development Process

What are the phases of the Design Control Process?

Normally, we finish the design control process by launching your device in the USA, because this is when you should close your Design History File (DHF). However, if you are going to expand to other markets, there is a specific order we recommend. We recommend the US market first because no quality system certification is required, and the FDA 510(k) process is easier than the CE Marking process since the implementation of the MDR and the IVDR. The Canadian market is the second market we recommend because the Medical Device Licence application process for Health Canada is even easier than the 510(k) process for Class II devices. Canada is not recommended as the first country to launch because Health Canada requires MDSAP certification for your quality system, and the Canadian market is roughly 10% of the size of the US market. The European market should probably be your last market because of the high cost and long timeline for obtaining CE Marking. Each of the phases of design and development is outlined in the first column of our free download, “Overview of Regulatory Process, Medical Device Development & Quality System Planning for Start-ups.”

Which regulatory filings are required during each phase of design and development?

The second column of our free download lists the regulatory filings required by the FDA for each phase. Generally, we see companies waiting too long to have their first pre-submission meeting with the FDA or skipping the pre-submission meeting altogether. This is a strategic mistake. Pre-submission requests are free to submit, and the purpose of the meeting is to answer your questions. Even if you are 100% confident of the regulatory pathway, know precisely which predicate you plan to use, and know which verification tests you need to complete, you still have intelligent questions you can ask the FDA. Critical questions fall into three categories: 1) selection of your test articles, 2) sample size justification, and 3) acceptance criteria. Even if you don’t have a complete testing protocol prepared for the FDA to review, you can propose a rationale for your test article (e.g., the smallest size in your product family), provide a paragraph explaining the statistical justification for your sample size, and present a paragraph explaining the data analysis method you plan to use.

The following example illustrates how discussing the details of your testing plan with the FDA can help you avoid requests for additional information and retesting. Many of the surgical mask companies that submitted devices during the Covid-19 pandemic found that the FDA had changed the sample size requirements and was now requiring three non-consecutive lots with a 4% AQL sample size calculation. A company that made a lot of 50,000 masks would be forced to sample a large number of masks, while a lot size of 250 masks allowed the company to sample only 32 masks.
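The effect of lot size on sample size comes from the acceptance sampling tables in ANSI/ASQ Z1.4. The sketch below reproduces the sample-size column for General Inspection Level II, single sampling, normal inspection; the accept/reject numbers, which depend on the AQL, are omitted:

```python
# Sample sizes from ANSI/ASQ Z1.4, General Inspection Level II,
# single sampling, normal inspection (sample-size column only).
# Each tuple is (maximum lot size, sample size).
Z14_LEVEL_II = [
    (8, 2), (15, 3), (25, 5), (50, 8), (90, 13), (150, 20),
    (280, 32), (500, 50), (1200, 80), (3200, 125), (10000, 200),
    (35000, 315), (150000, 500), (500000, 800),
]

def sample_size(lot_size):
    """Return the Z1.4 Level II sample size for a given lot size."""
    for max_lot, n in Z14_LEVEL_II:
        if lot_size <= max_lot:
            return n
    return 1250  # lots over 500,000

print(sample_size(250))    # → 32 masks per lot
print(sample_size(50000))  # → 500 masks per lot
```

This is why the lot size decision mattered so much: a 250-mask lot falls in the 151-280 band (32 samples), while a 50,000-mask lot falls in the 35,001-150,000 band (500 samples per lot, across three lots).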

If the regulatory pathway for your device is unclear, you might start by submitting a 513(g) Request for Information during the first phase of design and development. After you have written confirmation of the correct regulatory pathway from the FDA, you can submit a pre-submission meeting request. If the pathway for your device is a De Novo Classification Request, you might hold a preliminary pre-submission meeting to reach agreement with the FDA on which recognized standards should be applied as special controls for your device. While waiting 70+ days for your pre-submission meeting, you can obtain quotes from testing labs and prepare draft testing protocols. After the pre-submission meeting, you can submit a pre-submission supplement that includes detailed testing protocols, including your rationale for the selection of test articles, your sample size justification, and the acceptance criteria.

What quality assurance documentation is required during each phase?

In addition to the testing reports for your verification and validation testing, there are many other supporting documents you will need to prepare. We refer to these documents generically as “quality assurance documentation” because they verify that you meet specific customer and regulatory requirements. However, the documents should be prepared by the person or people responsible for that portion of your design project. For example, every device needs a user manual and draft labeling. Even though someone from quality will complete a regulatory checklist to ensure that all of the required symbols and general label content are included, you will also need an electrical engineer to prepare the sections of the manual covering EMC labeling requirements from the FDA’s EMC guidance document. In your submission’s non-clinical performance testing section, you must include human factors documentation in addition to your summative usability testing report: a use specification, the results of your systematic search of adverse events for use errors, a task analysis, and a Use-Related Risk Analysis (URRA). Software and cybersecurity documentation likewise includes several documents beyond the testing reports.

Which procedures do you need to implement during each development phase?

The last column of our free download lists the procedures we recommend implementing during each phase of the design process. In Canada and Europe, you must complete the implementation of your entire quality system before submitting a Medical Device Licence application or a CE Marking application. In the USA, however, you can finish implementing your quality system during the FDA review of your 510k submission or even after 510k clearance is received. The requirement is that your quality system be fully implemented by the time you register your establishment with the FDA and begin distribution.

