FDA QSIT Inspection of Design Validation: Part 2 – Software

This article reviews FDA QSIT inspection requirements for design validation, specific to devices containing software.

If the product selected has software, the FDA QSIT Inspection Manual instructs the investigator to consider reviewing software validation. Since inadequate software validation causes many quality problems with devices, you should be shocked if an investigator doesn't review software validation of a device containing software. Software-containing devices are also the only devices for which manufacturers are required to submit a risk analysis when submitting premarket notifications (i.e., 510(k) submissions).

Software Validation

Validation confirms that a device meets the user needs, and software validation is no different. In the case of software validation, the "device" is the final, complete software program in the operating environment in which it is intended to be used (i.e., operating system and hardware), and the "user needs" are captured in the software design requirements document.

To facilitate the validation of software, a traceability matrix is typically used to construct validation protocols. The traceability matrix identifies each requirement in the left-hand column of the matrix. The columns to the right of the requirements should include the following:

  1. hazard identification
  2. potential severity of harm
  3. P1 – probability of occurrence
  4. P2 – probability of occurrence resulting in harm
  5. risk controls
  6. design outputs or references to the code modules that are responsible for each requirement
  7. references to verification and validation testing for each risk control
  8. estimation of residual risks
  9. risk/benefit analysis of each risk and overall risk
  10. traceability to information disclosed to users and patients regarding residual risks

Since failure of a single module can easily result in multiple failure modes, the above approach to documenting design requirements and risk analysis is generally more effective than using an FMEA. This approach also has the benefit of lending itself to reassessing risk each time new complaints, service reports, and other post-market surveillance information are gathered.
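As a rough illustration, one row of such a traceability matrix can be modeled as a record. The field names below simply mirror the columns listed above; the requirement, hazard, and file names are invented for the example and are not from any standard or real device:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceabilityRow:
    """One row of a hypothetical traceability matrix (fields mirror the columns above)."""
    requirement: str                                          # software design requirement
    hazard: str                                               # hazard identification
    severity: int                                             # potential severity of harm (e.g., 1-5)
    p1: float                                                 # probability of occurrence
    p2: float                                                 # probability of occurrence resulting in harm
    risk_controls: List[str] = field(default_factory=list)    # risk controls
    design_outputs: List[str] = field(default_factory=list)   # code modules implementing the requirement
    vv_tests: List[str] = field(default_factory=list)         # verification/validation test references
    residual_risk: str = ""                                   # estimation of residual risk

# Example row (all values illustrative).
row = TraceabilityRow(
    requirement="REQ-012: display temperature within 0.1 C",
    hazard="erroneous temperature reading",
    severity=4,
    p1=1.0,
    p2=0.05,
    risk_controls=["range check", "out-of-range alarm"],
    design_outputs=["temp_module.c"],
    vv_tests=["VAL-012"],
)
```

Keeping each requirement, its risk data, and its test references in one record is what makes the later steps (unit validation, revalidation after changes, post-market reassessment) traceable from a single source.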

The use of a traceability matrix also lends itself to the early stages of debugging software modules and unit validation. Each software design requirement will typically have a section of code (i.e., a software module) associated with it. That module is initially validated as a standalone unit to verify that it performs the intended function. In addition to verifying correct function, the software validation protocol should also verify that incorrect inputs to the module are caught by the embedded risk controls for that module: the correct error code should be generated, and applicable alarms should be triggered.
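A minimal sketch of that unit-level check, using a made-up temperature-parsing module and error code (none of these names or values come from a real device or standard):

```python
ERR_OUT_OF_RANGE = 17  # illustrative error code, invented for this example

def read_temperature(raw_value: float):
    """Return (value, error_code); the range check is the embedded risk control."""
    if not (-40.0 <= raw_value <= 125.0):
        # Incorrect input caught by the risk control; an alarm would be raised here.
        return None, ERR_OUT_OF_RANGE
    return raw_value, 0

# Unit-level protocol: verify the intended function...
assert read_temperature(37.0) == (37.0, 0)
# ...and verify that an incorrect input triggers the correct error code.
assert read_temperature(500.0) == (None, ERR_OUT_OF_RANGE)
```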

Finally, after each individual requirement has been verified, the entire software program must be validated as well. When changes are made, both the changed module and the entire program must be re-validated. Inspectors and auditors will specifically review changes made in recent versions to verify that revalidation of the entire program was performed, not just unit testing. You must also comply with IEC 62304, Medical device software – Software life cycle processes, which is required for CE Marking as a harmonized standard and is recognized by the US FDA. One of the implications of applying IEC 62304 is that you must consider the risk of using software of unknown pedigree or provenance (SOUP).

Software Risk Analysis

Each requirement of the software design requirements document will typically have a risk associated with failure of the software to perform that requirement. These risks are quantified with respect to severity of harm and probability of occurrence of harm. Probability of occurrence of harm has two factors, P1 and P2, as defined in Annex E of ISO 14971:2007.

P1 is the probability of occurrence, and for software it has two components: first, a situation must occur that triggers a failure of the software; second, the software may or may not have a design risk control that prevents harm or provides a warning of the potential for harm. P2 is the probability that an occurrence will result in harm; it has one factor. P2 is determined by evaluating the likelihood that the failure will result in harm if the risk control is not 100% effective.
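The arithmetic behind this two-factor estimate can be sketched as follows. The severity scale and probability values are invented for illustration; the formula (severity x P1 x P2) is the one discussed in the comments below the article:

```python
def risk(severity: int, p1: float, p2: float) -> float:
    """Point estimate of risk as severity x P1 x P2 (scales are illustrative)."""
    return severity * p1 * p2

# Per IEC 62304 clause 4.3 a), the probability of a software failure is
# assumed to be 100%, so P1 = 1.0 and the estimate reduces to severity x P2.
software_risk = risk(severity=4, p1=1.0, p2=0.05)

# For a non-software system, P1 is almost always less than 1.0.
hardware_risk = risk(severity=4, p1=0.01, p2=0.05)
```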

An investigator reviewing the risk assessment should verify that risk has been estimated for each software design requirement. There should be a harm identified for each software design requirement, or the traceability matrix should indicate that no harm can result from failure to meet that requirement. Next, the risk assessment should indicate the risk controls for each requirement identified with a potential for harm. In accordance with ISO 14971, design risk controls should be implemented first in order to eliminate the possibility of harm. Wherever it is impossible to eliminate the possibility of harm, a protective measure (e.g., an alarm) should be used.

Each risk control must be verified for effectiveness as part of the software validation. In addition, the residual risk for each potential harm is subject to a risk/benefit analysis in accordance with EN ISO 14971:2012, Annex ZA, Deviation #4. The international version, ISO 14971:2007 (which is recognized by the US FDA and Health Canada), allows companies to limit a risk/benefit analysis to only those risks that are unacceptable. Therefore, the European requirement (i.e., EN ISO 14971:2012) is more stringent, and companies that intend to CE Mark medical devices should comply with the EN version of the risk management standard instead of the international version.


Comments (2)


  1. Brian Acheson October 6, 2014

    Thanks for this interesting blog. I'm puzzled by the interpretation of the probability of occurrence of harm, though. Given that 4.3 a) of 62304 states that "If the hazard could arise from the failure of the software system to behave as specified, the probability of such failure shall be assumed to be 100%," then surely P1 and P2 are redundant? Meaning we assume that failure will always occur and evaluate the risk based purely on severity.

    • Rob Packard October 6, 2014

      Hi Brian,

      P1 is the probability of occurrence of a failure. In accordance with 4.3 a) of 62304, P1 would always be 1.00 for software; it is not the same as P2. For non-software systems, P1 is almost always less than 1.00. P2, however, is the probability of occurrence of harm when the failure occurs. For example, if a temperature measurement system gives an erroneous reading 100% of the time, this is P1. However, the erroneous reading will not always result in harm, so we need clinical history to estimate P2. The risk is severity x P1 x P2. For software we can drop P1 since it equals 1.00, but we still need to estimate risk using P2.

