CSV vs. CSA: Moving from Software Validation to Assurance

John Todd, Sr. Business Consultant/Product Researcher, Total Resource Management (TRM), Inc.

Posted 2/29/2024

Oh, the exciting world of software validation! Hours and hours of white-knuckled test development, execution, and… as if those are not enough… resolution and documentation of the results! Then, of course, there are the continued change management efforts needed to keep the system in a validated state over its lifecycle. It takes truly special people to function in this arena.

The FDA has recognized that the degree of effort and documentation required to maintain validated systems has been standing in the way of industry adopting software tools to automate processes and make moves into the cloud. Based on feedback from those impacted by validation methods and requirements, the FDA has made a course change.

The approach and guidelines provided by the FDA, last updated back in 2002, went under the term “Computer Software Validation (CSV).” The accompanying guidance document, General Principles of Software Validation, is deeply familiar to those who operate as regulated entities. It refers to the original quality system regulation (Title 21 CFR, Part 820) and other applicable regulations, and it is where the concepts of requirements definition and installation, operational, and performance qualification activities have been used to describe the validation process.

The new approach, as outlined in the draft guidance Computer Software Assurance for Production and Quality System Software (CSA), acknowledges that advances in manufacturing technologies (including software) can reduce sources of error, optimize resources, and reduce consumer and patient risk. With this new draft guidance, the FDA wants manufacturers to take a more risk-based approach: assure that the software systems being implemented function as expected, focusing on the areas and functions of the tools that present the most risk.


From then to now…

In the past, testing and other verification activities at each stage of the development lifecycle have been the workhorses used to validate software. That kind of testing does not provide the confidence either the FDA or the regulated entity is looking for when the goal is to determine the fitness of the software for its intended use in the context of the business and process operation.

CSA takes a risk-based approach intended to prevent defects from being introduced across the software development and deployment lifecycle. Validation is still important and still required by 21 CFR 820, but the hope is that this approach will let manufacturers pay more attention to product quality and adopt new technologies: focus on the features/functions of the software whose use presents the most risk, rather than on those that present little or no risk at all.

What does the CSA framework look like?

The CSA framework is easily digestible in that it includes only a few elements. While the details under each are considerable (determining the risk approach, for example), the guidance leaves much up to the regulated entity to decide what applies and how to approach the tasks given its context.

The elements are:

  • Identify the intended use (of the software features/functions)
  • Determine the risk-based approach
  • Determine assurance activities
  • Establish appropriate records

The guidance begins by asking whether the software even needs to be placed under the assurance umbrella. If the software is not used for production purposes or as part of the quality system, assurance may not be necessary. If the software is used directly in production or in support of it, it will likely need to fall under the assurance program.
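To make that intake question concrete, here is a minimal sketch in Python. The function name and its two arguments are hypothetical labels for the two questions above, not terms from the guidance.

```python
# Illustrative sketch only: the intake question expressed as a simple check.
# The function name and arguments are hypothetical, not terms from the guidance.

def needs_assurance(used_in_production: bool, part_of_quality_system: bool) -> bool:
    """Return True if the software should fall under the assurance program."""
    return used_in_production or part_of_quality_system

# Example: a data-capture tool that only feeds a quality record
print(needs_assurance(used_in_production=False, part_of_quality_system=True))  # True
```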

Then comes understanding the intended use of the software as a whole and of the features/functions it provides. A reasonable list of how each feature/function is intended to be used provides the foundation for deciding what the best risk-based approach would look like.

For example: If the software is simply used to document a reading in support of a quality record, this may be a low-risk situation. The initial assessment of the vendor, the installation, and any configuration may be sufficient to state the software is fit for use.

However, if customization has been performed to suit process needs, such as formulas and other potential sources of error, then the risk is increased and needs to be addressed.

Determining the risk approach begins with matching potential failures to the features/functions that are intended to be used. Given that a certain feature is going to be used in a certain way, the goal is to answer two questions: How might it fail? And what risks would that failure present? High-risk features/functions are those whose failure impacts the process and/or product quality in a way that compromises safety. (The CSA document mentioned earlier provides several examples of high- and low-risk failures of software features/functions.)
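As a concrete illustration of this step, the sketch below pairs each feature/function that will be used with a potential failure and a resulting risk level. The feature names, failure descriptions, and the two-level risk scale are hypothetical examples loosely based on the scenarios above, not content from the guidance.

```python
# Hypothetical example of matching intended uses to potential failures and a
# risk level. Feature names, failures, and the two-level scale are illustrative.

FEATURE_RISK_ASSESSMENT = [
    {
        "feature": "record temperature reading",
        "intended_use": "document a reading in support of a quality record",
        "potential_failure": "reading saved against the wrong quality record",
        "risk": "low",   # no direct impact on the process or product quality
    },
    {
        "feature": "custom fill-volume formula",
        "intended_use": "calculate fill volume used on the production line",
        "potential_failure": "formula error produces an out-of-spec fill volume",
        "risk": "high",  # failure compromises product quality and patient safety
    },
]

def high_risk_features(assessment):
    """Return the features/functions that warrant the most rigorous assurance."""
    return [item["feature"] for item in assessment if item["risk"] == "high"]

print(high_risk_features(FEATURE_RISK_ASSESSMENT))  # ['custom fill-volume formula']
```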

Once a software feature/function has been determined to present a high process risk, assurance activities must be identified. Those activities need to be in line with the risk level: high-risk items will get more rigorous testing and documentation than those deemed low risk.

What do assurance activities look like? Mostly they are various approaches to testing the features/functions of interest. Automated and scripted testing sit at one end of the spectrum, while unscripted or exploratory testing sits at the other. No matter the activity, the results and the resolution of any issues discovered remain important, required outcomes that must be documented.
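One way to picture the link between risk level and rigor is a simple lookup: high-risk features get scripted or automated testing with detailed evidence, while low-risk features can rely on unscripted or exploratory testing with a lighter record. The mapping below is a hypothetical illustration, not a prescription from the guidance.

```python
# Hypothetical mapping of risk level to assurance activity and expected evidence.
# The two levels mirror the spectrum described above; the wording is illustrative.

ASSURANCE_BY_RISK = {
    "high": {
        "activity": "scripted or automated testing of the feature/function",
        "evidence": "step-by-step results, issue log, and resolution records",
    },
    "low": {
        "activity": "unscripted or exploratory testing",
        "evidence": "brief record that the feature/function performs as intended",
    },
}

def plan_assurance(risk_level: str) -> dict:
    """Look up the assurance activity and evidence expected for a risk level."""
    return ASSURANCE_BY_RISK[risk_level]

print(plan_assurance("high")["activity"])
```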

The final piece of the framework is defining appropriate records that capture the evidence that, after the assurance activities, the software can be deemed ‘fit for use’ or shown to ‘perform as intended.’ These records would include elements of the framework such as the intended use of the feature/function, the risks its use could present, the results of the testing, who did the testing and when, and how issues were resolved. Electronic audit trails, logs, and automated test result sets are preferred to manual or paper-based testing results.
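As one way to capture such a record electronically, the sketch below defines a minimal assurance record with the elements listed above and writes it out as JSON. The field names, sample values, and file layout are assumptions for illustration, not a required format.

```python
# Minimal sketch of an electronic assurance record using the elements listed
# above. Field names, values, and the JSON layout are assumptions, not mandated.

import json
from dataclasses import dataclass, asdict

@dataclass
class AssuranceRecord:
    feature: str                 # feature/function under assurance
    intended_use: str            # how the feature/function will be used
    risk_level: str              # outcome of the risk determination
    assurance_activity: str      # e.g., automated scripted test
    result: str                  # pass/fail outcome
    tested_by: str               # who performed the activity
    tested_on: str               # when it was performed (ISO date)
    issues_and_resolution: str   # what was found and how it was resolved

record = AssuranceRecord(
    feature="custom fill-volume formula",
    intended_use="calculate fill volume used on the production line",
    risk_level="high",
    assurance_activity="automated scripted test",
    result="pass",
    tested_by="J. Smith",
    tested_on="2024-02-29",
    issues_and_resolution="rounding defect found in dry run; fixed and retested",
)

# Persist alongside the automated test result set as an electronic record
with open("assurance_record.json", "w") as fh:
    json.dump(asdict(record), fh, indent=2)
```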

Tools available for Maximo/MAS Manage Assurance

IBM has long provided test scripts (and results) for the out-of-the-box features/functions of Maximo via the Life Sciences add-on. With the release of MAS Manage, IBM has continued to release these foundational test scripts, now with a focus on the well-known Selenium test automation toolset. This is known as the Maximo Testing Automation Framework (TAF). While there is a learning curve to setting up and running the tests within Selenium, they do produce clear and acceptable electronic records of the pass/fail results. Selenium also has built-in change management and data governance functions.
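For a sense of what an automated, self-documenting check looks like, here is a minimal Selenium sketch using the Python bindings. It is not TAF code: the URL, element IDs, and the page-title assertion are hypothetical placeholders. The point is simply that a scripted test can log a timestamped pass/fail line that can be retained as an electronic record.

```python
# Minimal Selenium sketch (Python bindings). The URL and element locators are
# hypothetical placeholders and are NOT the Maximo TAF scripts themselves.

from datetime import datetime
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
result = "FAIL"
try:
    driver.get("https://example.internal/maximo/login")           # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")  # hypothetical IDs
    driver.find_element(By.ID, "password").send_keys("********")
    driver.find_element(By.ID, "loginbutton").click()
    # A simple fitness-for-use check: did we land on the expected page?
    if "Start Center" in driver.title:
        result = "PASS"
except Exception:
    result = "FAIL"
finally:
    driver.quit()

# Append a timestamped pass/fail line as a simple electronic test record
with open("test_results.log", "a") as log:
    log.write(f"{datetime.now().isoformat()} login_smoke_test {result}\n")
```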

Further, TRM RulesManager Studio has a feature set called RampUp for developing test scripts. Scripts can be developed traditionally, with limited code required, or visually, by recording and then playing back simulated users exercising the features/functions of interest. RulesManager Studio is utilized on most Maximo/Manage instances we stand up for clients, so it is well known and easily adopted for your software assurance testing.

Wrap up – Software Validation

This change in guidance from the FDA for software validation has been greatly anticipated by regulated entities. (Remember, the guidance is still in draft form and has not yet been finalized.) By promoting and supporting a risk-based approach, the FDA is helping manufacturers take advantage of new technologies, move their computing systems to the cloud, and turn their focus more toward product quality, patient safety, and innovation.

TRM has worked with many clients over the years in support of their validation efforts around Maximo, and now MAS, implementations, on-premises and in the cloud. We are uniquely qualified to work with your Quality Assurance and Operations teams not only to maintain validation of your current systems but also to assist in your CSV/CSA activities.



John Q. Todd

John Q. Todd has nearly 30 years of business and technical experience in the Project Management, Process Development/Improvement, Quality/ISO/CMMI Management, Technical Training, Reliability Engineering, Maintenance, Application Development, Risk Management, and Enterprise Asset Management fields. His experience includes work as a Reliability Engineer and RCM implementer for the NASA/JPL Deep Space Network, as well as numerous customer projects and consulting activities as a reliability and spares analysis expert. He is a Sr. Business Consultant and Product Researcher with Total Resource Management, an IBM Gold Business Partner focused on the market-leading EAM solution, Maximo, which specializes in improving asset and operational performance by delivering strategic consulting services with world-class functional and technical expertise.


