Design Validation

The process of confirming that the medical device meets user needs.

Software Design Validation – FDA Requirements

What are the FDA software design validation requirements for software as a medical device (SaMD) and software in a medical device (SiMD)?

If your product contains software, the FDA QSIT Inspection Manual instructs the investigator to consider reviewing software validation. Since inadequate software validation causes many quality problems with devices, you should be shocked if an investigator doesn’t review the software validation of a device containing software. Software-containing devices are also the only devices for which manufacturers are required to submit a risk analysis when submitting premarket notifications (i.e., 510(k) submissions).

Software Design Validation

Validation confirms that a device meets user needs, and software validation is no different. Unfortunately, “software design validation” is also the term we use to mean software design and development, which includes both software verification activities and software validation activities. The software verification activities consist of unit testing, integration testing, and system testing. In software verification, we verify that each element of the software design specification (SDS) satisfies the corresponding requirement of the software requirements specification (SRS). In contrast, software validation involves simulated-use or actual-use testing of the software to confirm that it meets user needs. The “device” is the final, complete software program in the operating environment in which it is intended to be used (i.e., operating system and hardware), and the “user needs” may be defined as system-level requirements in the SRS or as the intended purpose described in the software description.

To facilitate the construction of validation protocols, a traceability matrix is typically used. The traceability matrix identifies each requirement in the left-hand column of the matrix. The columns to the right of the requirements should include the following (a minimal code representation is sketched after the list):

  1. hazard identification
  2. the potential severity of harm
  3. P1 – the probability of occurrence
  4. P2 – the probability of occurrence resulting in harm
  5. risk controls
  6. design outputs or references to the code modules that are responsible for each requirement
  7. references to verification and validation testing for each risk control
  8. estimation of residual risks
  9. risk/benefit analysis of each risk and overall risk
  10. traceability to information about residual risks disclosed to users and patients
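
A minimal sketch of how such a traceability matrix might be represented in code is shown below. The field names mirror the columns listed above; the example requirement, hazard, probabilities, and document references are hypothetical and only illustrate the structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TraceabilityRow:
    """One row of a software traceability matrix; fields follow the columns listed above."""
    requirement_id: str                # e.g., "SRS-014"
    requirement: str                   # the software requirement being traced
    hazard: str                        # 1. hazard identification
    severity: int                      # 2. potential severity of harm (e.g., 1-5 scale)
    p1: float                          # 3. probability of occurrence
    p2: float                          # 4. probability that occurrence results in harm
    risk_controls: List[str] = field(default_factory=list)    # 5. risk controls
    design_outputs: List[str] = field(default_factory=list)   # 6. code modules implementing the requirement
    vv_references: List[str] = field(default_factory=list)    # 7. verification/validation testing for each risk control
    residual_risk: Optional[str] = None                       # 8. estimation of residual risk
    risk_benefit: Optional[str] = None                         # 9. risk/benefit analysis conclusion
    disclosure_refs: List[str] = field(default_factory=list)  # 10. information disclosed to users and patients

# Hypothetical example row
row = TraceabilityRow(
    requirement_id="SRS-014",
    requirement="Stop infusion when occlusion pressure exceeds the alarm limit",
    hazard="Over-infusion due to undetected occlusion",
    severity=4,
    p1=1e-4,
    p2=0.1,
    risk_controls=["Occlusion pressure monitor", "Audible alarm"],
    design_outputs=["pressure_monitor.c", "alarm_manager.c"],
    vv_references=["VER-PROT-021", "VAL-PROT-007"],
    residual_risk="Acceptable per risk management plan",
    risk_benefit="Clinical benefit outweighs residual risk",
    disclosure_refs=["IFU section 6.2"],
)
```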

Since the failure of each module can easily result in multiple failure modes, the above approach to documenting design requirements and risk analysis is generally more effective than using an FMEA. This approach also has the benefit of lending itself to assessing risk each time new complaints, service reports, and other post-market surveillance information is gathered.

The use of a traceability matrix also lends itself to the early stages of debugging software modules and unit validation. Each software design requirement will typically have a section of code (i.e., a software module) associated with it. That module is initially validated as a standalone unit operation to verify that it performs its intended function. In addition to verifying the correct function, the software validation protocol should also verify that the embedded risk controls for that module catch incorrect inputs: the correct error code should be generated, and applicable alarms should be triggered.
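
As an illustration of that kind of unit-level protocol, the sketch below exercises a hypothetical set_infusion_rate module: one case verifies the intended function, and a second verifies that the embedded risk control catches an out-of-range input, returns the expected error code, and triggers the alarm. The module name, error code, and limits are invented for the example and are not a real device interface.

```python
import unittest

# Hypothetical software unit with an embedded risk control (illustrative values)
ERROR_RATE_OUT_OF_RANGE = 0x21
MAX_RATE_ML_PER_HR = 999.0

class Alarm:
    """Minimal stand-in for an alarm manager."""
    def __init__(self):
        self.triggered = False

    def trigger(self):
        self.triggered = True

def set_infusion_rate(rate_ml_per_hr, alarm):
    """Return (accepted, error_code); the risk control rejects out-of-range rates."""
    if rate_ml_per_hr <= 0 or rate_ml_per_hr > MAX_RATE_ML_PER_HR:
        alarm.trigger()
        return False, ERROR_RATE_OUT_OF_RANGE
    return True, 0

class TestSetInfusionRate(unittest.TestCase):
    def test_valid_rate_is_accepted(self):
        # Verify the intended function of the unit
        accepted, code = set_infusion_rate(125.0, Alarm())
        self.assertTrue(accepted)
        self.assertEqual(code, 0)

    def test_out_of_range_rate_rejected_with_error_and_alarm(self):
        # Verify the embedded risk control: the incorrect input is caught,
        # the correct error code is generated, and the alarm is triggered
        alarm = Alarm()
        accepted, code = set_infusion_rate(1500.0, alarm)
        self.assertFalse(accepted)
        self.assertEqual(code, ERROR_RATE_OUT_OF_RANGE)
        self.assertTrue(alarm.triggered)

if __name__ == "__main__":
    unittest.main()
```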

Finally, after each requirement has been verified, the entire software program must be validated as well. When changes are made, both the changed module and the program as a whole must be re-validated. Inspectors and auditors will specifically review changes made in recent versions to verify that revalidation of the entire program was performed, not just unit testing. You must also comply with IEC 62304, Medical device software – Software life cycle processes, which is a harmonized standard required for CE Marking and is recognized by the US FDA. One of the implications of applying IEC 62304 is that you must consider the risk of using software of unknown pedigree or provenance (SOUP).

Software Risk Analysis

Each software design requirement will typically have a risk associated with failure of the software to perform that requirement. These risks are quantified with respect to the severity of harm and the probability of occurrence of harm. The probability of occurrence of harm has two factors: P1 and P2, as defined in Annex E of ISO 14971:2007 (see our updated risk management training).

P1 is the probability of occurrence. For software, P1 involves two considerations: first, a situation must occur that triggers a failure of the software; second, whether the software has a design risk control that prevents harm or provides a warning of the potential for harm. P2 is the probability that the occurrence will result in harm, and it has one factor: P2 is determined by evaluating the likelihood that the failure will result in harm when the risk control is not 100% effective.
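
As a simple numerical illustration of how the two factors combine, the sketch below estimates the probability of harm as P1 × P2 and compares it against a severity-dependent acceptance limit. The probabilities and limits are hypothetical; in practice, they would come from your own risk management plan rather than from ISO 14971 itself.

```python
# Probability of harm = P1 (the hazardous situation/failure occurs)
#                     x P2 (the occurrence results in harm, given the risk controls)

def probability_of_harm(p1: float, p2: float) -> float:
    """Combine P1 and P2 into an overall probability of occurrence of harm."""
    return p1 * p2

def risk_acceptable(p_harm: float, severity: int, limits: dict) -> bool:
    """Compare the estimated probability of harm to a severity-dependent limit."""
    return p_harm <= limits[severity]

# Hypothetical acceptance limits by severity level (1 = negligible, 5 = catastrophic)
limits = {1: 1e-2, 2: 1e-3, 3: 1e-4, 4: 1e-5, 5: 1e-6}

p1 = 1e-5   # e.g., an occlusion occurs and the monitoring routine fails to detect it
p2 = 0.2    # e.g., the undetected occlusion leads to patient harm
p_harm = probability_of_harm(p1, p2)

print(f"P(harm) = {p_harm:.1e}; acceptable at severity 4: {risk_acceptable(p_harm, 4, limits)}")
```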

An investigator reviewing the risk assessment should verify that risk has been estimated for each software design requirement. There should be harm identified for each software design requirement, or the traceability matrix should indicate that no harm can result from failure to meet the software design requirement. Next, the risk assessment should indicate what the risk controls are for each requirement identified with the potential for harm. In accordance with ISO 14971, design risk controls should be implemented first to eliminate the possibility of harm. Wherever it is impossible to eliminate the possibility of harm, a protective measure (e.g., an alarm) should be used.

Each risk control must be verified for effectiveness as part of the software validation. Also, the residual risk for each potential harm is subject to a risk/benefit analysis in accordance with EN ISO 14971:2012, Annex ZA Deviation #4. The international version, ISO 14971:2007 (which is recognized by the US FDA and Health Canada), allows companies to limit a risk/benefit analysis to only unacceptable risks. Therefore, the European requirement (i.e., EN ISO 14971:2012) is more stringent. Companies that intend to CE Mark medical devices should comply with the EN version of the risk management standard instead of the international version.


FDA QSIT Inspection of Design Validation: Part I-Non-Software

This article reviews FDA QSIT inspection requirements for design validation and is specific to devices that do not contain software.

In the FDA QSIT Manual (http://bit.ly/QSITManual), the word “validation” appears 78 times. This exceeds the frequency of the words “verification,” “production,” “corrective,” and the acronym “CAPA.” The word “validation” is almost as frequent as the word “management,” which appears 80 times in the QSIT Manual. The section of the QSIT Manual specific to design validation is pages 35-40.

The FDA selects only one product or product family when inspecting design controls. Therefore, if you keep track of which products have already been inspected by the agency, you can often predict the most likely product for the investigator to select during the next inspection. The number of MDRs and recalls reported will impact the investigator’s selection. Class I devices, most of which are exempt from design controls, are not selected.

The QSIT Manual instructs inspectors to verify that acceptance criteria were specified before conducting design validation activities and that the validation meets the user’s needs and intended uses. There should also be no remaining discrepancies from the design validation. Inspectors must verify that all validation activities were performed using initial production devices or production equivalents. The last item to verify is that design changes were controlled–including performing design validation of the changes.

Risk Analysis

Risk analysis is seldom reviewed in detail, except for software risk analysis. However, when a nonconforming product is reworked, you are required to review the adverse effects of the rework. QSIT inspectors will expect you to document this review of risks. Investigators will also expect risks to be reviewed and updated per trend analysis of complaints, service reports, and non-conformities. Finally, when companies assess the need to report recalls, the FDA expects a health hazard evaluation to be completed (http://bit.ly/HHE-Form). A detailed risk analysis review is uncommon in QSIT inspections but receives greater emphasis in reviewing CE marking applications.

Predetermined Acceptance Criteria

Investigators reviewing your design validation protocols will specifically look at the acceptance criteria for the testing you perform. Investigators are looking for two things. First, were the acceptance criteria met without deviation? Second, was the protocol approved before the results were known (i.e., was this a prospective design validation protocol)? In certain areas, there are also known risks associated with products that the investigators will look for. For example, in sterilization validation, the investigator will verify that the validation was performed to the most current version of the standard and that the validation has addressed the most common pitfalls of sterilization, such as:

  • Have the most challenging devices been identified?
  • Has performance been validated at the maximum sterilization dose?

User Needs & Intended Uses are Met

In the area of user needs and intended uses, there are generally few problems with the initial launch of a device for its original intended use. Issues typically arise when companies expand the intended use to new patient populations and new intended uses. When this occurs, unique user needs and risks may need to be evaluated. Therefore, the FDA periodically reviews claims made by companies in marketing communications to ensure that claims do not stray beyond the cleared intended use of the device. This will sometimes be identified as a 483 inspection observation. Sometimes, the FDA will issue a warning letter to a company that continues to market a device for uncleared indications.

Initial Production Devices or Production Equivalents

When investigators review validation protocols and reports, the documentation must include traceability to the device’s production lot(s). Investigators may even request a copy of the Device History Record (DHR) for the production lot used for validation. If a production lot is not used, the design validation documentation must disclose how the product differs from production lots and why the results are acceptable. The samples used should be subjected to the final test/inspection requirements. If final test/inspection requirements are not yet established, samples should be retained so that they can be inspected at a later date. Without this traceability, you may have to repeat your design validation with a production lot.

Validation of Design Changes

Far too many hours are wasted writing justifications for why re-validation is unnecessary. I recommend performing design re-validation for any design change unless all three of the following criteria are met (the decision rule is sketched after the list):

  1. a sound scientific rationale can be provided with references
  2. the logic does not require a subject matter expert to understand it
  3. quantitative analysis is possible to analyze the risk impact
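
Expressed as a simple decision rule, the recommendation looks like the sketch below; it is an illustration of the three criteria above, not a regulatory requirement.

```python
def revalidation_required(sound_rationale_with_references: bool,
                          understandable_without_sme: bool,
                          quantitative_risk_analysis_possible: bool) -> bool:
    """Re-validate the design change unless all three criteria are met."""
    return not (sound_rationale_with_references
                and understandable_without_sme
                and quantitative_risk_analysis_possible)

# Example: the rationale requires a subject matter expert to follow it,
# so re-validation of the design change is required.
print(revalidation_required(True, False, True))  # True
```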

Many design validations require simulated use with a physician. Companies should obtain as much user feedback as possible before launching a device. Therefore, any re-validation that requires simulated use and user feedback should be a priority over writing a rationale for not conducting re-validation.

