Software Verification and Validation

Data Hazards for SaMD and SiMD

What are data hazards?

Data hazards occur when there is missing data, incorrect data, or a delay in data delivery to a database or user interface. This can occur with both software as a medical device (SaMD) and software in a medical device (SiMD). Because missing data, incorrect data, and delays in data delivery do not cause physical harm directly, many medical device and IVD manufacturers state in their risk management file that their device or IVD has low risk or no risk. This is an incorrect software risk analysis, because failure to meet software data or database requirements results in a hazardous situation, such as 1) an incorrect diagnosis or treatment, or 2) a delay in diagnosis or treatment. These hazardous situations compromise the standard of care at best; at worst, they result in physical harm, including death.

Where do you document these hazards?

Data hazards are documented in your software risk management file, but they are referenced in multiple documents: usually the software hazard identification, the software risk analysis, software verification and validation test cases, and the software risk management report. Security risk assessments will also identify potential data hazards resulting from cybersecurity vulnerabilities that could be exploited.

How do you identify data hazards?

IEC 62304 is completely useless for the purpose of identifying data hazards. In Clause 5.2.2 of the standard for software life-cycle processes, the only examples of data definition and database requirements provided are form, fit, and function. In contrast, IEC/TR 80002-1, Guidance on the application of ISO 14971 to medical device software, is extremely useful. Specifically, Table B1 of that guidance identifies the following potential data hazards:

  • Mix-up of data, such as:
    • Associating data with the wrong patient
    • Associating the wrong device/instrument with a patient
    • Associating measurements with the wrong analyte
  • Loss of data resulting from events such as:
    • Connectivity failure or Quality of Service (QoS) issues
    • Incorrect data acquisition timing or sampling rates (i.e., measurement)
    • Capacity limitations during peak loads
    • Missing data fields (i.e., incorrect database configuration or database mapping)
  • Modification of data caused by:
    • Data entry errors during manual entry
    • Automated preventive maintenance
    • Reset of data
    • Rounding of data
    • Averaging of data 

Patient data also presents a security risk, so access to data must be controlled for data entry, viewing, and editing. Other potential causes of data integrity issues include power loss, division by zero, overflow/underflow, floating-point rounding, improper range/bounds checking, off-by-one errors, hardware failure, timing, and watchdog timeouts. In Table B2, the guidance provides additional examples of causes, and risk control measures are recommended for each cause.
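For illustration, a minimal sketch of how a few of these causes (division by zero, missing data, and improper range/bounds checking) might be guarded against in application code is shown below. The function names and numeric limits are hypothetical assumptions, not requirements from the guidance.

```python
# Hypothetical guards against a few data-integrity failure causes
# (division by zero, missing data, and out-of-range values). Names and
# limits are illustrative only, not taken from any standard or guidance.

MAX_PLAUSIBLE_RATE = 1000.0  # assumed upper bound for the measured rate


def mean_rate(total_count: float, elapsed_seconds: float) -> float:
    """Compute an average rate while rejecting inputs that would corrupt the result."""
    if elapsed_seconds <= 0:
        # Guard against division by zero (and negative timing errors)
        raise ValueError("elapsed_seconds must be positive")
    rate = total_count / elapsed_seconds
    if not (0.0 <= rate <= MAX_PLAUSIBLE_RATE):
        # Range/bounds check: implausible values are flagged, not silently stored
        raise ValueError(f"rate {rate} is outside the plausible range")
    return rate


def average(samples: list[float]) -> float:
    """Average a list of samples with an explicit check for missing data."""
    if not samples:
        raise ValueError("no samples received; possible loss of data")
    return sum(samples) / len(samples)
```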

Data hazards associated with artificial intelligence (AI) and machine learning (ML)

There are also data hazards associated with AI/ML software. When an algorithm is developed, there is potential for improving the algorithm or for making it worse. There is always data bias resulting from the patient population selected for data collection and from the clinical users who assign a ground truth to that data. Sometimes the data entered is subjective or qualitative rather than objective and quantitative. The sequence or timing of data collection can also impact the validity of data used for training an AI algorithm. AAMI CR 34971:2023 is a guide on the application of ISO 14971 to machine learning and artificial intelligence. That guidance identifies additional hazards associated with data and databases.

How are these hazards addressed in software requirements?

IEC 62304, Clause 5.2.2, lists the content for twelve different types of software requirements (i.e., items a-l):

  • a) functional and capability requirements;
  • b) SOFTWARE SYSTEM inputs and outputs;
  • c) interfaces between the SOFTWARE SYSTEM and other SYSTEMS;
  • d) software-driven alarms, warnings, and operator messages;
  • e) SECURITY requirements;
  • f) user interface requirements implemented by software;
  • g) data definition and database requirements;
  • h) installation and acceptance requirements of the delivered MEDICAL DEVICE SOFTWARE at the operation and maintenance site or sites;
  • i) requirements related to methods of operation and maintenance;
  • j) requirements related to IT-network aspects;
  • k) user maintenance requirements; and
  • l) regulatory requirements.

These software requirements may overlap because any specific cause of failure can result in multiple types of software hazards. For example, a loss of connectivity can result in a mix-up of data, incomplete data, or modification of the data. Therefore, to ensure your software design is safe, you must carefully analyze software risks and evaluate as many test cases as possible to verify the effectiveness of your software risk controls.
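For example, a common risk control for loss of data caused by a connectivity failure is to buffer records locally and retransmit them once the connection is restored. The sketch below is a minimal, hypothetical illustration of that pattern; the send_to_server function, retry count, and back-off interval are assumptions rather than a prescribed design.

```python
import time
from collections import deque

# Hypothetical risk control for loss of data during a connectivity failure:
# buffer each record locally and retransmit once the connection recovers.
# send_to_server() is a placeholder for the device's actual transport layer.

_buffer: deque = deque()


def send_to_server(record: dict) -> bool:
    """Placeholder transport call; returns False while the network is unavailable."""
    return False  # replace with the real transmission logic


def transmit(record: dict, max_attempts: int = 3) -> None:
    _buffer.append(record)              # persist first, so nothing is lost on failure
    for _ in range(max_attempts):
        while _buffer and send_to_server(_buffer[0]):
            _buffer.popleft()           # remove only after confirmed delivery
        if not _buffer:
            return                      # everything delivered
        time.sleep(1)                   # back off before retrying
    # Undelivered records stay buffered for the next attempt; the failure should
    # also be logged so the loss-of-data hazard is visible to the user.
```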

What is the best way to analyze data risks?

There are multiple risk analysis tools available to device and IVD manufacturers (e.g., preliminary hazard analysis, failure modes and effects analysis, and fault-tree analysis). A design failure modes and effects analysis (i.e., dFMEA) is the most common risk analysis tool, but a dFMEA is not the best tool for software risk analysis. There are two reasons for this. First, the dFMEA is a bottom-up approach that assumes you know all of the software functions that are needed–but you won’t. Second, the dFMEA will have multiple rows of effects for each failure mode because each cause of software failure can overlap with multiple software functions. Therefore, the best way to analyze data risks is a fault-tree analysis (i.e., FTA). The FTA is the best tool for analyzing software data hazards because you only need three fault trees: 1) mix-up of data, 2) loss of data, and 3) modification of data. In each of these FTAs, all of the potential causes of software failure are identified in the branches of the fault tree. Analyzing the fault tree structure, specifically the position of the OR gates, can assist in software design. OR gates that can result in critical failures need additional software risk controls to prevent a single cause of software failure from creating a serious hazardous situation.

[Fault tree example copied from Section 6.2.1.5 of AAMI TIR 80002-1:2009]

How to build a fault tree

The first step of your software risk analysis should be to identify data hazards. Once you identify the data hazards, you can build a fault tree for each of the three possible software failures: 1) mix-up of data, 2) incomplete data, and 3) modification of data. Each data hazard can cause multiple software failure modes, but the type of logic gate will determine the outcome of that data hazard. An OR gate results in a software failure if just one hazardous event occurs, while an AND gate requires at least two hazardous events to occur before the software failure occurs. The position of the OR/AND gates also impacts the potential for software failures.
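A minimal sketch of how a fault tree's OR/AND logic might be represented and evaluated in code is shown below. The tree structure and event names are hypothetical examples for a "loss of data" failure, not content from TIR 80002-1.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical fault-tree model: leaf events are basic causes,
# and gates combine child branches with AND/OR logic. The events shown
# are illustrative examples for a "loss of data" tree, not a complete analysis.

@dataclass
class Event:
    name: str
    occurred: bool = False

@dataclass
class Gate:
    name: str
    logic: str                      # "AND" or "OR"
    children: list = field(default_factory=list)

    def occurred(self) -> bool:
        results = [c.occurred() if isinstance(c, Gate) else c.occurred
                   for c in self.children]
        return all(results) if self.logic == "AND" else any(results)


loss_of_data = Gate("Loss of data", "OR", [
    Event("Connectivity failure", occurred=True),
    Event("Database field not mapped"),
    Gate("Capacity exceeded", "AND", [
        Event("Peak load"),
        Event("No load shedding or queueing"),
    ]),
])

print(loss_of_data.occurred())  # True: a single OR-gate cause is enough
```

Because the top-level gate is an OR gate, a single basic event is enough to produce the failure, which is exactly why branches feeding an OR gate that can lead to a critical failure need additional risk controls.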


Software security, what is the best time to test cybersecurity?

The new US FDA draft cybersecurity guidance requires you to test cybersecurity, but when should you conduct software security testing?

The 2022 draft cybersecurity guidance from the FDA emphasizes the need to design devices to be secure and capable of reducing emerging cybersecurity risks throughout the total product lifecycle. Security must be built into your original design plan, or you will need to modify your device for improved security just to obtain initial 510(k) clearance from the FDA. What is not clear from the guidance or standards is when you need to conduct security testing or repeat tests.

Planning Cybersecurity Tests

As with all quality system processes, cybersecurity testing should begin with a plan. There are two models typically used for the design and development process: Waterfall Diagram (typical of hardware development) and V-Diagram (typical of software development).

[Waterfall Diagram]

[V-Diagram]

How are design plans for SaMD different from other design plans?

Most of the verification testing for software as a medical device (SaMD) is 1) conducted virtually, 2) tests software code in a “sandbox,” and 3) involves internally developed testing protocols. In contrast, verification testing for other types of devices involves 1) physical devices, 2) testing at a 3rd party lab, and 3) involves international standards and testing methods. The biggest differences between SaMD verification testing and other device verification testing are the speed and cost of the testing. SaMD verification is much faster and less expensive. Therefore, if your software design documentation is efficient, you can complete more design iterations. This is why software developers use the V-diagram to model the design and development process instead of the “waterfall” diagram.

Where do the requirements to test cybersecurity belong in your design plan?

A design plan documents the design and development process for your device. You must establish, maintain, and update the plan as the project progresses. There is no required format, but auditors and the FDA will audit your Design History File (DHF) for compliance with your plan. You are required to document the following content in your plan:

  • Stages of development
  • Reviews at each design and development stage
  • Verification, validation, and design transfer activities at each stage
  • Responsibilities and authorities for the design project
  • Methods you are using to ensure traceability of user needs, software hazards, software requirements, software design specifications, and software testing reports
  • Human resources needed for your design project, including competency

Software Design Inputs

In the early stages of the software development lifecycle, you must select an appropriate threat model and perform a hazard analysis for software security. These security hazards need to be included as design inputs in your software requirements specification (SRS). The need for updateability and patchability should also be included as design inputs. 

In parallel with your SRS, you will need to create a User Specification. The SRS and User Specification will determine the use cases and call-flow views that require verification testing later in your software development process. After the SRS has been approved, you will need to create a software design specification (SDS). Each item in the SDS should be traceable to an item in the SRS. The SDS items that trace to security hazards are your risk controls. Each risk control will require you to test cybersecurity to verify risk control effectiveness. At this point, you will need to create your testing protocols for security.
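A minimal, hypothetical sketch of that traceability chain (security hazard to SRS item to SDS risk control to verification protocol) is shown below; all of the identifiers and descriptions are made up for illustration.

```python
# Hypothetical traceability records linking a security hazard to an SRS
# requirement, an SDS risk control, and the verification test that proves
# the control is effective. Identifiers are illustrative only.

trace_matrix = [
    {
        "hazard": "HAZ-007: Unauthorized user views patient data",
        "srs": "SRS-112: Application shall require authentication before "
               "displaying patient records",
        "sds": "SDS-112.1: Session token validated on every page request",
        "verification": "TP-112: Attempt access with an expired or absent token "
                        "and confirm access is denied",
    },
]

def untested_risk_controls(matrix):
    """Return SDS items that do not yet trace to a verification protocol."""
    return [row["sds"] for row in matrix if not row.get("verification")]

assert untested_risk_controls(trace_matrix) == []
```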

System Testing Protocols to Test Cybersecurity

Testing protocols should include a boundary analysis and rationale for boundary assumptions. Testing protocols should also include vulnerability testing. The FDA recommends the following vulnerability testing:

  1. Abuse cases, malformed, and unexpected inputs,
    1. Robustness
    2. Fuzz testing (see the sketch after this list)
  2. Attack surface analysis,
  3. Vulnerability chaining,
  4. Closed box testing of known vulnerability scanning,
  5. Software composition analysis of binary executable files, and
  6. Static and dynamic code analysis, including testing for credentials that are “hardcoded,” default, easily guessed, and easily compromised.
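As an illustration of the robustness and fuzz testing items above, the sketch below feeds random, malformed inputs to a hypothetical message parser and confirms that it rejects them safely instead of crashing or silently accepting bad data. The parser, message format, and limits are assumptions, not content from the FDA guidance.

```python
import json
import random
import string

# Hypothetical fuzz test: throw malformed and unexpected inputs at a message
# parser and confirm it fails safely (raises ValueError) rather than crashing
# or silently accepting bad data. The parser and format are illustrative only.

def parse_measurement(raw: str) -> dict:
    """Parse a JSON measurement message; reject anything malformed or out of range."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("not valid JSON") from exc
    if not isinstance(msg, dict) or "patient_id" not in msg or "value" not in msg:
        raise ValueError("missing required fields")
    if not isinstance(msg["value"], (int, float)) or not (0 <= msg["value"] <= 500):
        raise ValueError("value out of range")
    return msg

def fuzz(iterations: int = 1000) -> None:
    random.seed(0)
    for _ in range(iterations):
        junk = "".join(random.choices(string.printable, k=random.randint(0, 64)))
        try:
            parse_measurement(junk)
        except ValueError:
            pass                      # expected: malformed input rejected safely
        # any other exception propagates and fails the test run

fuzz()
```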

Does your development budget include security testing? 

Design control training traditionally emphasizes the importance of “freezing” design outputs before starting verification testing to prevent the need for repeating any of the verification testing. The reason for this is that verification testing is expensive, and it is time-consuming to produce additional verification samples. In contrast, SaMD is guaranteed to be changed multiple times during the verification testing process as software bugs are identified. Therefore, software developers focus on the velocity of developing code and testing that code. One exception to this is penetration testing. Penetration testing is usually conducted once your code is final because it is more expensive than other software verification and validation testing and it would need to be repeated each time the software is updated or patched.

Penetration Testing

Penetration testing is another method used to test cybersecurity that would probably be conducted in parallel with simulated use testing to validate performance and the effectiveness of human factors risk controls. Penetration testing could be at the system level in a sandbox environment, or it can be performed on a sample device in a simulated use environment. Your penetration testing documentation should include the following:

  1. independence and technical expertise
  2. scope of testing
  3. duration of testing
  4. testing employed, and
  5. test results, findings, and observations

Postmarket cybersecurity management

For CE Marked products, there is a requirement for a postmarket surveillance plan (i.e., PMS plan) to be submitted as part of your technical file. The US FDA does not currently have this requirement for Class 1 and Class 2 devices, but Class 3 devices (i.e., PMA) and devices with humanitarian device exemptions (HDE) are required to submit a PMS plan as part of the premarket submission. The US FDA also requires a postmarket cybersecurity management plan to be submitted for premarket submissions of Class 2 and Class 3 devices. You should create your postmarket cybersecurity management plan during your verification and validation activities, and the final version should be approved at the time of product release.



Why modernize 21 CFR 820 to ISO 13485?

The FDA patches the regulations with guidance documents, but there is a desperate need to modernize 21 CFR 820 to ISO 13485.

FDA Proposed Amendment to 21 CFR 820

On February 23, 2022, the FDA published a proposed rule for medical device quality system regulation amendments. The FDA planned to implement amended regulations within 12 months, but the consensus of the device industry is that a transition of several years would be necessary. In the proposed rule, the FDA justifies the need for amended regulations based on the “redundancy of effort to comply with two substantially similar requirements,” creating inefficiencies. In public presentations, the FDA’s supporting arguments for the proposed quality system rule change rely heavily upon comparing similarities between 21 CFR 820 and ISO 13485. However, the comparison table provided is quite vague (see the table from page 2 of the FDA’s presentation reproduced below). The FDA also provided estimates of projected cost savings resulting from the proposed rule. What is completely absent from the discussion of the proposed rule is any mention of the need to modernize 21 CFR 820.

[Table: Overview of Similarities and Differences between the QSR and ISO 13485, reproduced from page 2 of the FDA's presentation]

Are the requirements “substantively similar”?

The above table provided by the FDA claims that the requirements of 21 CFR 820 are substantively similar to the requirements of ISO 13485. However, there are some aspects of ISO 13485 that would modernize 21 CFR 820. The areas of impact are 1) software, 2) risk management, 3) human factors or usability engineering, and 4) post-market surveillance. The paragraphs below identify the applicable clauses of ISO 13485 where each of the four areas is covered.

Modernize 21 CFR 820 to include software and software security

Despite the limited proliferation of software in medical devices during the 1990s, 21 CFR 820 includes seven references to software. However, there are some clauses of ISO 13485 that reference software that are not covered in the QSR. Modernizing 21 CFR 820 to reference ISO 13485 will incorporate these additional areas of applicability. Clause 4.1.6 includes a requirement for the validation of quality system software. Clause 7.6 includes a requirement for the validation of software used to manage calibrated devices used for monitoring and measurement. Clause 7.3 includes a requirement for validation of software embedded in devices, but that requirement was already included in 21 CFR 820.30. The FDA can modernize 21 CFR 820 further by defining Software as a Medical Device (SaMD), referencing IEC 62304 for management of the software development lifecycle, referencing IEC/TR 80002-1 for hazard analysis of software, referencing AAMI TIR57 for cybersecurity, and referencing ISO 27001 for network security. Currently, the FDA strategy is to implement guidance documents for cybersecurity and software validation requirements, but ISO 13485 only references IEC 62304. The only aspect of 21 CFR 820 that appears to be adequate with regard to software is the validation of software used for automation in 21 CFR 820.70(i). This requirement is similar to Clause 7.5.6 (i.e., validation of processes for production and service provision).

Does 21 CFR 820 adequately cover risk management?

The FDA already recognizes ISO 14971:2019 as the standard for the risk management of medical devices. However, risk is only mentioned once in 21 CFR 820. In order to modernize 21 CFR 820, it will be necessary for the FDA to identify how risk should be integrated throughout the quality system requirements. The FDA recently conducted two webinars related to the risk management of medical devices, but implementing a risk-based approach to quality systems is a struggle even for companies that already have ISO 13485 certification. Therefore, a guidance document with examples of how to apply a risk-based approach to quality system implementation would be very helpful to the medical device industry.

Modernize 21 CFR 820 to include Human Factors and Usability Engineering

ISO 13485 references IEC 62366-1 as the applicable standard for usability engineering requirements, but there is no similar requirement in 21 CFR 820. Therefore, human factors are another area where 21 CFR 820 needs to be modernized. The FDA released a guidance document in 2016 for the human factors content to be included in a 510k pre-market notification, but that guidance does not reflect the FDA's current thinking on human factors and usability engineering best practices. The FDA recently released a draft guidance for the format and content of human factors testing in a pre-market 510k submission, but that document is not yet final, and there is still no mention of human factors, usability engineering, or even use errors in 21 CFR 820. Device manufacturers should be creating work instructions for use-related risk analysis (URRA) and fault-tree analysis to estimate the risks associated with use errors, as identified in the draft guidance. These work instructions will also need to be linked with the design and development process and the post-market surveillance process.

Modernize 21 CFR 820 to include Post-Market Surveillance

ISO/TR 20416:2020 is a new standard specific to post-market surveillance, but it is not recognized by the FDA. There is also no section of 21 CFR 820 that includes a post-market surveillance requirement. The FDA QSR focuses on reactive elements such as:

  • 21 CFR 820.100 – CAPA
  • 21 CFR 820.198 – Complaint Handling
  • 21 CFR 803 – Medical Device Reporting
  • 21 CFR 820.200 – Servicing
  • 21 CFR 820.250 – Statistical Techniques

The FDA does occasionally require 522 Post-Market Surveillance Studies for devices that demonstrate risks that require post-market safety studies. In addition, most Class 3 devices are required to conduct post-approval studies (PAS). For Class 3 devices, the FDA requires the submitter to provide a plan for a post-market study. Once the study plan is accepted by the FDA, the manufacturer must report on the progress of the study. Upon completion of the study, most manufacturers are not required to continue PMS.

How will the FDA enforce compliance with ISO 13485?

It is not clear how the FDA would enforce compliance with Clause 8.2.1 of ISO 13485 because there is no substantively equivalent requirement in the current 21 CFR 820 regulations. The QSR is 26 years old, and the regulation does not mention cybersecurity, human factors, or post-market surveillance. Risk is only mentioned once in the regulation, and software is only mentioned seven times. The FDA has "patched" the regulations through guidance documents, but there is a desperate need for new regulations that include these critical elements. The transition of quality system requirements for the USA from 21 CFR 820 to ISO 13485:2016 will force regulators to establish policies for compliance with all of the quality system elements that are not in 21 CFR 820.

Companies that do not already have ISO 13485 certification should be proactive by 1) updating their quality system to comply with the ISO 13485 standard and 2) adopting the best practices outlined in the following related standards:

  • AAMI/TIR57:2016 – Principles For Medical Device Security – Risk Management
  • IEC 62366-1:2015 – Medical devices — Part 1: Application of usability engineering to medical devices
  • ISO/TR 20416:2020 – Medical devices — Post-market surveillance for manufacturers
  • ISO 14971:2019 – Medical Devices – Application Of Risk Management To Medical Devices
  • IEC 62304:2015 – Medical Device Software – Software Life Cycle Processes
  • ISO/TR 80002-1:2009 – Medical device software — Part 1: Guidance on the application of ISO 14971 to medical device software
  • ISO/TR 80002-2:2017 – Medical device software — Part 2: Validation of software for medical device quality systems

What is the potential impact of the US FDA requiring software, risk management, cybersecurity, human factors, and post-market surveillance as part of a medical device company’s quality system?


Software validation documentation for a medical device

Learn why you need to start with software validation documentation before you jump into software development.

When do you create software validation documentation for a medical device or IVD?

At least once a week, I speak with the founder of a new MedTech company that developed a new software application as a medical device (SaMD). The founder will ask me to explain the process for obtaining a 510(k), and they want help with software validation documentation. Many people I speak with have never even heard of IEC 62304.

Even though they already have a working application, validation documentation usually has not even been started. Although you can create all of your software validation documentation after you create a working application, certain tasks are important to perform before you develop software code. Jumping into software development without the foundational documentation will not get your device to market faster. Instead, you will struggle to create documentation retroactively, and the process will be slower. In the end, the result will be a frustrating delay in the launch of your device.

What are the 11 software validation documents required by the FDA?

In 2005 the FDA released a guidance document outlining software validation documentation content required for a premarket submission. There were 11 documents identified in that guidance:

[Figure: the 11 software validation documents identified in the FDA guidance]

What the FDA guidance fails to explain is that some of these documents need to be created before software development begins, or your software validation documentation will be missing critical design elements. Therefore, it is important to create a software development plan that schedules activities that result in those documents at the right time. In contrast, four of the eleven documents can wait until your software development is complete.

Which of the software validation documents can wait until the end?

The level of concern only determines which documents the FDA wants to review in a submission, not which documents are needed for your design history file. In fact, the level of concern (LOC) document is no longer required as a separate document in premarket submissions using the FDA eSTAR template, because the template already incorporates the questions that document your LOC. The revision level history document is simply a summary of revisions made to the software during the development process; it can be created manually or automatically at the end of the process, or it can be a living document that is updated as changes are made. The traceability matrix can also be a living document created as changes are made, but its only purpose is to act as a tool that provides traceability from hazards to software requirements, to design specifications, and finally to verification and validation reports. Other software tools, such as Application Lifecycle Management (ALM) software, are designed to ensure the traceability of every hazard and requirement throughout the entire development process. Finally, the list of unresolved anomalies only needs to be compiled at the time of submission. The list may be incomplete until all verification and validation testing is completed, and the list should be at its shortest at the time of submission.

What documentation will be created near the end of development?

The software design specification (SDS) is typically a living document until your development process is completed, and you may need to update the SDS after the initial software release to add new features, maintain interoperability with software accessories, or change security controls. The SDS cannot begin, however, until you have software requirements and the basic architecture defined. The verification and validation reports are discrete documents created after each revision of the SDS and must therefore be among the last documents created–especially when provided to the FDA as a summary of the verification and validation efforts.

Which validation documents do you need first?

At the beginning of software development, you need a procedure (or procedures) that defines your software development process. That procedure should have a section that explains the software development environment, including how patches and upgrades will be controlled and released. If you don’t have a quality system procedure that defines your development process, then each developer may document their coding and validation activities differently. That does not mean that you can’t improve or change the procedure once development has begun, but we recommend limiting the implementation of a revised procedure to major software changes and deciding in advance how the revisions will apply to any work that remains in progress or has already been completed.

When do the remaining software validation documents get created?

The remaining four software validation documents required for a premarket submission to the FDA are:

  1. Software description
  2. Software hazard analysis
  3. Software requirements specification (SRS)
  4. Architecture design chart

Your development process will be iterative, and therefore, you should be building and refining these four documents iteratively in parallel with your software code. At the beginning of your project, your design plan will need a brief software description. Your initial software description needs to include the indications for use, a list of the software’s functional elements, and the elements of your user specification (i.e., intended patient population, intended users, and user interface). If you are using lean startup methodology, the first version of your device description will be limited to a minimal viable product (MVP). The target performance of the MVP should be documented as an initial software requirements specification (SRS). This initial SRS might consist of only one requirement, but the SRS will expand quickly. Next, you need to perform an initial software hazard analysis to identify the possible hazards. It is important to remember that software hazards typically result in hazardous situations and are not limited to direct physical harm. For each potential hazard you identify in your hazard analysis, you will need a software requirement to address that hazard, and each requirement needs to be added to your SRS.

As your software becomes more complex through added features, your device description needs to be updated. As you add functions and requirements to your software application, your SRS will need updates too. Finally, your development team will need a tool to track data flow and calculations from one software function to the next. That tool is your architecture design chart, and you will want to organize your SRS to match the various software modules identified in your architecture diagram. This phase is iterative and non-linear; you will always have failures, and typically a team of developers will collaborate virtually. Maintaining a current version of these four software documents is critical to keeping your development team on track.

How do you perform a software hazard analysis?

One of the most important prerequisite tasks for software developers is conducting a hazard analysis. You can develop an algorithm before you write any code, but if you start developing your application to execute an algorithm before you perform a software hazard analysis, you will be missing critical software requirements. Software hazard analysis is different from traditional device hazard analysis because software hazards are unique to software. A traditional device hazard analysis consists of three steps: 1) answering the 37 questions in Annex A of ISO/TR 24971:2020, 2) systematically identifying hazards by using Table C1 in Annex C of ISO 14971:2019, and 3) reviewing the risks associated with previous versions of the device and similar competitor devices. A software hazard analysis will have very few hazards identified from steps 1 and 2 above. Instead, the best resource for software hazard analysis is IEC/TR 80002-1:2009. You should still use the other two standards, especially if you are developing software in a medical device (SiMD) or firmware, but IEC/TR 80002-1 has a wealth of tables that can be used to populate your initial hazard analysis and to update your hazard analysis when you add new features.

How do you document your hazard analysis?

Another key difference between a traditional hazard analysis and a software hazard analysis is how you document the hazards. Most devices use a design FMEA (dFMEA) to document hazards. The dFMEA is a bottom-up method for documenting your risk analysis by starting with device failure modes. Another tool for documenting hazards is a fault tree diagram.

[Fault tree example copied from Section 6.2.1.5 of AAMI/IEC TIR 80002-1:2009]

A fault tree is a top-down method for documenting your risk analysis, where you identify all of the potential causes that contribute to a specific failure mode. Fault tree diagrams lend themselves to complaint investigations because complaint investigations begin with the identification of the failure (i.e., complaint) at the top of the diagram. For software, the FDA will not allow you to use the probability of occurrence to estimate risks. Instead, software risk estimation should be limited to the severity of the potential harm. Therefore, a fault tree diagram is generally a better tool for documenting software risk analysis and organizing your list of hazards. You might even consider creating a separate fault tree diagram for each module of your software identified in the architecture diagram. This approach will also help you identify the potential impact of any software hazard by looking at the failure at the top of the fault tree. The higher the potential severity of the software failure, the more resources the software team needs to apply to developing software risk controls and verifying risk control effectiveness for the associated fault tree.


Software Service Provider Qualification and Management

What is your company’s approach to qualifying a software service provider and managing software-as-a-service (SaaS) for cybersecurity?

The need for qualifying and managing your software service provider

Most of the productivity gains of the past decade are related to the integration of software tools into our business processes. In the past, software licenses were a small part of corporate budgets, and the most critical software tools helped to manage material requirements planning (MRP) functions and customer relationship management (CRM). Today, there are software applications to automate every business process. Failure of a single software service provider, also known as "Software-as-a-Service" (SaaS), can paralyze your entire business. In the past, business continuity plans focused on labor, power, inventory, records, and logistics. Today, our business continuity plans also need to expand to include software service providers, internet bandwidth, websites, email, and cybersecurity. This new paradigm is not specific to the medical device industry. The medical device industry has become more dependent upon its supply chain due to the ubiquity of outsourcing, and what happens to other industries will eventually filter its way into this little collective niche we share. With that in mind, how do we qualify and manage a software service provider?

Threats to software service providers (Kaseya Case Study)

In 2017, the WannaCry ransomware attack affected 200,000 computers across 150 countries, including more than 80 hospitals.

[Screenshot of the Wana Decrypt0r ransomware lock screen]

Kaseya isn’t a hospital. Kaseya is a software service provider company. So why is this example relevant to the medical device industry?

The ransomware attack on Kaseya was severe enough that both CISA and the FBI got involved, and it compromised some Managed Service Providers (MSPs) and downstream customers. This supply chain ransomware attack even has its own Wikipedia page. The attack prompted Kaseya to shut down servers temporarily. None of this is a critique of Kaseya or their actions. They were merely the latest high-profile victim of a cyberattack in the news. Now cybercriminals are attacking your supply chain. We want to emphasize the concepts and considerations of this type of attack as it pertains to your business.

What supplier controls do you require for a software service provider?

If you are a manufacturer selling a medical device under the jurisdiction of the U.S. FDA, you need to comply with 21 CFR 820.50 (i.e., purchasing controls). The FDA requires an established and maintained procedure to ensure that what your company buys meets your specified requirements. Many device manufacturers only consider suppliers that are making physical components, but a software service provider may be critical to your device if your device is software as a medical device (SaMD), includes software, or interacts with a software accessory. A software service provider may also be involved with quality system software, clinical data management, or your medical device files. Do you purchase software-as-a-service or rely upon an MSP for cloud storage?

You need to determine if your software service provider is involved in document review or approval, controlling quality records, Protected Health Information (PHI), or electronic signature requirements. You don’t need a supplier quality agreement for all of the off-the-shelf items your company purchases. For example, it would be silly to have Sharpie sign a supplier quality agreement because you occasionally purchase a package of highlighters. On the other hand, if you are relying upon Docusign to manage 100% of your signed quality records, you need to know when Docusign updates its software or has a security breach. You should also be validating Docusign as a software tool, and there should be a backup of your information.

21 CFR 820.50 requires that you document supplier evaluations to meet specified and quality requirements per your “established and maintained” procedure. The specified requirements for this supplier might include the following:

  • How much data storage do you need?
  • How many user accounts do you need?
  • Do you need unique electronic IDs for each user?
  • Do you need tech support for the software service?
  • Is the software accessed with an internet browser, is the software application-based, or both?
  • How much does this software service cost?
  • Is the license a one-time purchase? Or is it a subscription?

The quality requirements for a supplier like this may look more like these questions:

  • How is my information backed up?
  • Can I restore previous file revisions in the case of corruption?
  • How can I control access to my information?
  • Can I sign electronic documents? If yes, is it 21 CFR Part 11 compliant?
  • Does this supplier have downstream access to my information? (can the supplier’s suppliers see my stuff?)
  • Do I manage PHI? If so, can this system be made HIPAA compliant? What about HITECH?
  • What cybersecurity practices does this supplier utilize?
  • How are routine patches and updates communicated to me?

A risk-based approach to supplier quality management

ISO 13485:2016 requires that you apply a risk-based approach to all processes, including supplier quality management. A risk-based approach should be applied to suppliers providing both goods and services. For example, you may order shipping boxes and contract sterilization services. Both companies are suppliers, but in this example, the services provided by the contract sterilizer are associated with a much higher risk than the shipping box supplier. Therefore, it makes sense that you would need to exercise greater control over the sterilizer. Software service providers are much like contract sterilizers. SaaS is not tangible but the service provided may have a high level of risk and potential impact on your quality management system. Therefore, you need to determine the risk associated with SaaS before you can evaluate, control, and monitor a software service supplier.

First, you need to document the qualification of a new supplier. It would be nice if your cloud service provider had a valid ISO 13485:2016 certification. You would then have an objectively demonstrable record of their process controls and know that they are routinely audited to maintain that certification. They would also understand and expect to undergo 2nd party supplier audits because they operate in the medical device industry. Alternatively, a software service provider may have an ISO 9001:2015 certification. This is a general quality system certification that may be applied to all products or services. In the absence of quality system certification, you can audit a potential supplier. For some suppliers, this makes sense. However, many companies outside of the medical device industry do not even have a quality system because it is not required or typical of their industry. For the ones that do, though, you can likely leverage their existing certifications and accreditations.

Cybersecurity standards you should know

Most cloud service providers will not have ISO 13485 certification, because it is a quality management standard specific to the medical device industry. However, you might look for some combination of the following ISO standards that may be relevant to a software service provider:

  • ISO/IEC 27001 Information Technology – Security Techniques – Information Security Management Systems – Requirements
  • ISO/IEC 27002:2013 Information Technology. Security Techniques. Code Of Practice For Information Security Controls
  • ISO/IEC 27017:2015 Information Technology. Security Techniques. Code Of Practice For Information Security Controls Based On ISO/IEC 27002 For Cloud Services
  • ISO/IEC 27018:2019 Information Technology – Security Techniques – Code Of Practice For Protection Of Personally Identifiable Information (PII) In Public Clouds Acting As PII Processors
  • ISO 22301:2019 Security And Resilience – Business Continuity Management Systems – Requirements
  • ISO/IEC 27701:2019 Security Techniques. Extension to ISO/IEC 27001 and ISO/IEC 27002 For Privacy Information Management. Requirements And Guidelines

Does your software service provider have SOC reports?


The acronym "SOC" stands for Service Organization Control, and these reports were established by the American Institute of Certified Public Accountants. SOC reports document the internal controls that an organization utilizes, and each report covers a specific subject. SOC reports apply to varying degrees to SaaS and MSP suppliers.

The SOC 1 Report focuses on Internal Controls over Financial Reporting. Depending on what information you need to store on the cloud, this report could be more applicable to the continuity of your overall business than specifically to your quality management system.

The SOC 2 Report addresses what level of control an organization places on the five Trust Service Criteria: 1) Security, 2) Availability, 3) Processing Integrity, 4) Confidentiality, and 5) Privacy. As a medical device manufacturer, these areas would touch on control of documents, control of records, and process validation, among other areas of your quality system. Some suppliers may not share a SOC 2 report with you, because of the amount of confidential detail provided in the report.

The SOC 3 Report contains much of the same information as the SOC 2 Report; both address the five Trust Service Criteria. The difference is the intended audience of each report. The SOC 3 is a general-use report expected to be shared with others or made publicly available. Therefore, it does not go into the same intimate level of detail as the SOC 2 Report. Specifically, information regarding what controls a system utilizes is very brief, if identified at all, compared to the description and itemized list of controls in the SOC 2 Report.

Other ways to qualify and manage your software service provider

SOC reports will help paint a picture of the organization you are trying to qualify. You will also need to evaluate the supplier on an ongoing basis. It is essential to know whether the supplier is subject to routine audits and inspections to maintain applicable certifications and accreditations. For example, if their ISO certificate is valid for three years, you know to follow up with your supplier for a new certificate at least every three years. On the other hand, if they lose certification, it may be a sign that the supplier can no longer meet your needs and you should find a new supplier.
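A minimal, hypothetical sketch of that kind of ongoing monitoring is shown below: each supplier certificate is recorded with its expiration date, and any certificate that has lapsed or is coming due is flagged for follow-up. The supplier names, certificate types, and dates are illustrative only.

```python
from datetime import date, timedelta

# Hypothetical supplier-monitoring check: record each supplier certificate
# with its expiration date and flag any certificate that is expired or due
# for follow-up within the next 90 days. Names and dates are illustrative.

certificates = [
    {"supplier": "Cloud Host A", "certificate": "ISO/IEC 27001", "expires": date(2025, 3, 31)},
    {"supplier": "eSignature B", "certificate": "SOC 2 Type II report", "expires": date(2024, 11, 15)},
]

def follow_up_needed(records, lead_time_days: int = 90):
    cutoff = date.today() + timedelta(days=lead_time_days)
    return [r for r in records if r["expires"] <= cutoff]

for record in follow_up_needed(certificates):
    print(f"Follow up with {record['supplier']}: {record['certificate']} "
          f"expires {record['expires']}")
```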

There is a long list of standards, certifications, accreditations, attestations, and registries that you can use to help qualify a SaaS or MSP supplier. One such registry is maintained by Cloud Security Alliance (i.e. the CSA STAR registry). “STAR” is an acronym standing for Security, Trust, Assurance, and Risk. CSA describes the STAR registry in their own words:

“STAR encompasses the key principles of transparency, rigorous auditing, and harmonization of standards outlined in the Cloud Controls Matrix (CCM) and CAIQ. Publishing to the registry allows organizations to show current and potential customers their security and compliance posture, including the regulations, standards, and frameworks they adhere to. It ultimately reduces complexity and helps alleviate the need to fill out multiple customer questionnaires.”

Some of the questions your supplier qualification process should be asking about your SaaS and MSP suppliers include:

  • Why do I need this software service?
  • Which standards, regulations, or process controls need to be met?
  • What is required for qualifying suppliers providing SaaS or an MSP?
  • How will you monitor a software service provider?

ISO certification, SOC reports, and the CSA STAR registry are supplier evaluation tools you can use for supplier qualification and monitoring. When you use these tools, make sure that you ask open-ended questions instead of closed-ended questions. Our webinar on supplier qualification provides several examples of how to convert your "antique" yes/no questions into value-added questions.


Your software service provider should be able to provide records and metrics demonstrating the effectiveness of their cybersecurity plans. Below are three examples of other types of records you might request:

  • Cloud Computing Compliance Controls Catalogue or “C5 Attestation Report”
  • System Security Plan for Controlled Unclassified Information in accordance with NIST publication SP 800-171
  • Privacy Shield Certification to EU-U.S. Privacy Shield or Swiss-U.S. Privacy Shield

The privacy shield certification may be especially important for companies with CE Marked devices in order to comply with the European Union’s General Data Protection Regulation (GDPR) or Regulation 2016/679.

A final consideration for supplier qualification is, “Who are the upstream suppliers?” It is essential to know if your new supplier or their suppliers will have access to Protected Health Information (PHI). Since you have less control of your supplier’s subcontractors, you may need to evaluate how your supplier manages their supply chain and which general cybersecurity practices your supplier’s subcontractors adhere to.

Additional cybersecurity, software validation, and supplier quality resources

For more resources on cybersecurity, software validation, and supplier quality management, please check out the following:

[Webinar: Learn how to quickly perfect your 510(k) cybersecurity documentation]


Cybersecurity FDA Guidance for Devices with Software and Firmware

This article reviews the 2014 FDA guidance for premarket and post-market cybersecurity of medical devices with software and firmware—including requirements for reporting field corrections and removals.


Hospitals, home health systems, and medical devices are more connected now than ever. The automatic communication between medical devices and network systems is improving efficiency and accuracy in the world of healthcare. Medical devices are capable of more computing, analysis, reporting, and automation to improve the speed and quality of patient care. There are even devices that consist only of software (i.e. software as a medical device or SaMD). Along with technological advances, new risks and concerns are also introduced. The risk of hackers exploiting vulnerabilities in networks and software is inevitable. The FDA introduced guidance for both pre-market and post-market cybersecurity to assist manufacturers in developing effective controls to protect patients and users. Cybersecurity protection requires Identification, Protection, Detection, Response, and Recovery.

The first step is incorporating processes and procedures to improve device cybersecurity into your quality management system. You should have a specific cybersecurity plan (i.e., a security risk management plan) that outlines the steps necessary to ensure a safe and secure medical device. In addition, your software development team will need cybersecurity training. The only standard specific to medical device cybersecurity risk management is currently AAMI TIR57:2016.

Identify Cybersecurity Risks

Understanding and assessing the cybersecurity risks involved with your device begins in the early stages of design development. At the start of the risk management process, you need to identify the essential safety and performance requirements of the device. You need to identify any potential cybersecurity vulnerabilities that could impact safety or performance, as well as the specific harms that could result if the vulnerability were exploited. In assessing specific vulnerabilities, the FDA recommends using the Common Vulnerability Scoring System (CVSS). There is a CVSS calculator available online through NIST. The overall score is calculated based on factors such as attack vector (local, adjacent network, network), access complexity (high, medium, low), authentication (multiple, single, none), the impact on confidentiality (none, partial, complete), exploitability (unproven that exploit exists, proof-of-concept code, functional exploit exists), remediation level (official fix, temporary fix, workaround, unavailable), and collateral damage potential (low, medium, high). This score is used in the hazard analysis to determine the level of risk.
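As a hedged illustration, the sketch below shows one way a CVSS base score (calculated with the online calculator) might be recorded for each vulnerability and mapped to a qualitative rating for use in the hazard analysis. The mapping follows the standard CVSS qualitative severity scale; the vulnerability entries themselves are hypothetical.

```python
# Hypothetical record of cybersecurity vulnerabilities in a hazard analysis.
# The base score would come from the NIST CVSS calculator mentioned above;
# the mapping below is the standard CVSS qualitative severity rating scale.

def cvss_rating(base_score: float) -> str:
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

vulnerabilities = [
    {"id": "VULN-01", "description": "Hard-coded service password", "cvss": 9.8},
    {"id": "VULN-02", "description": "Session not terminated on timeout", "cvss": 5.4},
]

for v in vulnerabilities:
    v["rating"] = cvss_rating(v["cvss"])
    print(v["id"], v["rating"])   # VULN-01 Critical, VULN-02 Medium
```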

Cybersecurity Protection

The process of assessing the exploitability and harms can also assist in determining mitigations that can be implemented to reduce the cybersecurity risk. During the design process, the FDA expects you to implement as many protections as practicable. Protections include:

  • Limit Access to Trusted Users
    • Password protection and strengthened password requirements
    • User authentication
    • Layered privileges based on user role (see the sketch after this list)
  • Limit Access and Prevent Tampering
    • Physical locks on devices and/or communication ports
    • Automatic timed methods to terminate sessions
  • Ensure Trusted Content
    • Restrict software or firmware updates to authenticated code
    • Systematic procedures for authorized users to download software and firmware only from the manufacturer
    • Ensure capability of secure data transfer, use of encryption
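A minimal, hypothetical sketch of two of the protections listed above (layered privileges based on user role and automatic session termination) follows; the role names, permissions, and timeout value are assumptions for illustration, not values from the guidance.

```python
import time

# Hypothetical sketch of two protections: layered privileges based on user
# role and automatic session termination after inactivity. Role names,
# permissions, and the timeout value are illustrative only.

ROLE_PERMISSIONS = {
    "operator":  {"view_results"},
    "clinician": {"view_results", "edit_treatment"},
    "admin":     {"view_results", "edit_treatment", "manage_users", "install_update"},
}

SESSION_TIMEOUT_SECONDS = 15 * 60


class Session:
    def __init__(self, user: str, role: str):
        self.user = user
        self.role = role
        self.last_activity = time.monotonic()

    def is_expired(self) -> bool:
        return time.monotonic() - self.last_activity > SESSION_TIMEOUT_SECONDS

    def authorize(self, action: str) -> bool:
        """Allow an action only for an unexpired session with the right privilege."""
        if self.is_expired():
            return False
        self.last_activity = time.monotonic()
        return action in ROLE_PERMISSIONS.get(self.role, set())


session = Session("j.smith", "operator")
print(session.authorize("view_results"))    # True
print(session.authorize("install_update"))  # False: privilege limited by role
```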

Cybersecurity Detection

The FDA also requires you to implement features that allow for security compromises to be detected, recognized, logged, timed, and acted upon during regular use. You should develop and provide information to the end-user concerning appropriate actions to take upon the detection of a cybersecurity event. Methods for retention and recovery should be provided to allow recovery of device configuration by an authenticated privileged user.
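A minimal sketch of the "detected, recognized, logged, timed" expectation is shown below: security-relevant events are timestamped and appended to an audit log that a privileged user can review and act upon. The event types and log location are hypothetical.

```python
import json
import time

# Hypothetical audit log for security events: each event is recognized by
# type, timestamped, and appended to a log that an authenticated privileged
# user can review. The event types and file path are illustrative only.

AUDIT_LOG_PATH = "security_audit.log"

def log_security_event(event_type: str, detail: str) -> None:
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event": event_type,
        "detail": detail,
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

log_security_event("LOGIN_FAILED", "3 consecutive failures for user j.smith")
log_security_event("FIRMWARE_UPDATE_REJECTED", "signature verification failed")
```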

If you include off-the-shelf (OTS) software in your device, you are responsible for the performance of the software as part of the device. All software changes to address cybersecurity vulnerabilities of the OTS software need to be validated. You need to maintain a formal business relationship with the OTS vendor to ensure timely notification of any information concerning quality problems or corrective actions. Sometimes you will need to involve the OTS vendor to correct cybersecurity vulnerabilities.

Post-Market Surveillance

Once you complete the hazard analysis, mitigation implementation, and validations, and have deployed your device for use, your activities shift to post-market management. Several QMS tools can assist in post-market cybersecurity processes, including complaint handling, quality audits, corrective and preventive action, ongoing risk analysis, and servicing. A critical component of every cybersecurity program is the monitoring of cybersecurity information sources to assist in the identification and detection of risk. You should maintain contact with third-party software suppliers for the identification of new vulnerabilities, updates, and patches that become available.

There are many sources that companies should follow for information relating to cybersecurity, including independent security researchers, in-house testing, software or hardware suppliers, healthcare facilities, and Information Sharing and Analysis Organizations (ISAO). Involvement in ISAOs is strongly recommended by the FDA and reduces your reporting burden if an upgrade or patch is required post-market. ISAOs share vulnerabilities and threats that impact medical devices with their members. They share and disseminate cybersecurity information and intelligence pertaining to vulnerabilities and threats spanning many technology sectors, and are seen as an integral part of your post-market cybersecurity surveillance program.

Response and Recovery

If you identify a cybersecurity vulnerability, there are remediation and reporting steps that need to occur. Remediation may involve a software update, bug fixes, patches, “defense-in-depth” strategies to remove malware, or covering an access port to reduce the vulnerability. Uncontrolled risks should be remediated as soon as possible and must be reported to the FDA according to 21 CFR 806. Certain circumstances remove the reporting requirement. The decision flowchart below can be used to determine the reporting requirements.

[Decision flowchart for determining whether a cybersecurity correction or removal must be reported under 21 CFR 806]

In addition to reporting corrections and removals, the FDA identifies specific content to be included in PMA periodic reports regarding vulnerabilities and risks. If you have a Class III device, you should review that section thoroughly to ensure annual report compliance.

If a device contains software or firmware, cybersecurity will be an important component of the risk management process, and continual cybersecurity management will be necessary to ensure the ongoing safety and effectiveness of your device. If you need more help with cybersecurity risk management of your medical device, please schedule a free 30-minute call with Medical Device Academy.



What are the software verification and validation (V&V) requirements?

This article defines software verification and validation for medical devices and provides an overview of CE Marking and 510k requirements. We also provide a link to our free download of a webinar on 510k software documentation.


Software verification and validation is an essential tool for ensuring that medical device software is safe. Software is not a piece of metal that can be put into a strain gauge to see if the code is strong enough not to break. That's because software is intangible. You can't see that it is in the process of failing until it fails. The FDA is concerned about software safety because many medical devices now include software, and software failure can result in serious injury or even death to a patient. This places significant liability on the device manufacturer to ensure their software is safe. One way to ensure software safety is to perform software verification and validation (V&V).

What is software verification and validation (V&V)?

Definitions of software verification and validation confuse most people. Which tasks are software verification? And which tasks are software validation? Sometimes the terms are used interchangeably. Even the FDA does not clearly define the meaning of these two terms for software. For example, in the FDA’s design control guidance document the following definitions are used:

“Verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.”

“Validation means confirmation by examination and provision of objective evidence that the particular requirements for specific intended use can be consistently fulfilled.”

Specific intended use requirement…specified requirements…what is the difference? To understand the difference between the two terms, the key is understanding “Intended Use.” It is asking the question: “What is the software’s intended use?”

“Intended Use” is not just about a bunch of engineers sitting around a table coming up with really fresh ideas. “Intended Use” refers specifically to the patient/customer of the software and how it fulfills their needs (i.e., “User Needs”). Systematic identification of user needs is required, and the software must address the user needs. Identification of user needs is done through customer focus groups, rigorous usability studies, and consultation with subject matter experts such as doctors and clinicians providing expert insight.

“Intended Use” also ensures safety through the process of “Hazard Analysis,” whereby any hazard that could potentially cause harm to the patient/customer is identified. For each identified hazard, software requirements, software design, and other risk controls are used to make sure the hazard does not result in harm, or if it does, that the severity of the harm is reduced as far as possible.

So if “Validation” ensures user needs are met, what is “Verification,” and how does it apply to the software development process? “Verification” ensures that the software is built correctly based on the software requirements (i.e., design inputs): for each task the software must perform (i.e., unit testing), for communication between software modules (i.e., integration testing), and within the overall system architecture (i.e., system-level testing). This is accomplished by rigorous and thorough software testing using prospectively approved software verification protocols.
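For example, a single unit-level verification test might look like the hedged sketch below, which confirms that one hypothetical software requirement (an out-of-range check) is fulfilled. The requirement identifier, function, and limits are illustrative only; in practice, each test traces to a prospectively approved protocol and a specific SRS item.

```python
import unittest

# Hypothetical unit-level verification test. The requirement, function, and
# limits are illustrative; in practice each test traces to an approved
# protocol and to a specific item in the software requirements specification.

def flag_out_of_range(spo2_percent: float) -> bool:
    """SRS-042 (hypothetical): flag SpO2 readings outside 70-100%."""
    return not (70.0 <= spo2_percent <= 100.0)

class TestSrs042(unittest.TestCase):
    def test_in_range_not_flagged(self):
        self.assertFalse(flag_out_of_range(95.0))

    def test_out_of_range_flagged(self):
        self.assertTrue(flag_out_of_range(55.0))
        self.assertTrue(flag_out_of_range(101.0))

if __name__ == "__main__":
    unittest.main()
```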

CE Marking requirements for software verification and validation (V&V)

European CE Marking applications include the submission of a technical file that summarizes the technical documentation for the medical device. To be approved for CE Marking by a Notified Body, the device must meet the essential requirements defined in the applicable EU directive. The technical file must also include performance testing of the medical device in accordance with the “State of the Art.” For software, IEC/EN 62304:2006, medical device software – software life cycle processes, is considered “State of the Art” for the development and maintenance of software for medical devices. This standard applies to stand-alone software and embedded software alike. The standard also identifies specific areas of concern, such as software of unknown pedigree (SOUP). As with most medical device standards, the standard provides a risk-based approach for the evaluation of SOUP acceptability and defines testing requirements for SOUP.

FDA requirements for software verification and validation (V&V)

For 510k submissions to the US FDA, section 16 of the 510k submission describes the software verification and validation (V&V) activities that have been conducted to ensure the software is safe and effective. There are 11 documents that are typically included in this section of the submission for software with a moderate level of concern:

  1. Level of Concern
  2. Software Description
  3. Device Hazard Analysis
  4. Software Requirement Specification (SRS)
  5. Architecture Design Chart
  6. Software Design Specification (SDS)
  7. Traceability Analysis
  8. Software Development Environment Description
  9. Verification and Validation Documentation
  10. Revision Level History
  11. Unresolved Anomalies (Bugs or Defects)

The FDA does not require compliance with IEC 62304 as the European regulations do, but IEC 62304 is an FDA-recognized consensus standard, and manufacturers must comply with all applicable parts of the standard if they claim conformity with it. The FDA also provides a guidance document on the general principles of software validation. The above requirements for software verification and validation documentation also apply to software as a medical device (SaMD).

Additional Resources

If you are interested in learning more about the documentation requirements for a 510k submission of a software medical device, please click here to download a free recording of our 510k software documentation webinar.

Medical Device Academy also has a new live webinar scheduled for Tuesday, January 5, 2016, @ Noon (EST). The topic is “Planning Your 2016 Annual Audit Schedule.” We are also offering this live webinar as a bundle with our auditor toolkit.

About the Author

Nancy Knettell is the newest member of the Medical Device Academy’s consulting team, and this is her first blog contribution to our website. Nancy is an IEC 62304 subject matter expert. To learn more about Nancy, please click here. If you have suggestions for future blogs or webinars on the topic of medical device software, please submit your requests to our updated suggestion portal.

