Metrics provide proof of effectiveness for GxP training
GXP/Non-Commercial – Danielle Duran
Learning professionals understand the importance of demonstrating our value to the business. Like many other enabling or supporting roles, however, we do not generate revenue directly. Across industries, our job security is not always stable, yet we are increasingly confident in our ability to drive strong performance.
We are also increasingly aware of how critical it is to use data (the Kirkpatrick model or another chosen framework) to show how learning and training investments have driven performance. In my experience, however, we are not always aligned across the business on the best data to leverage.
On the GxP side of the business, the most common data presented when effectiveness is questioned are training metrics: a key performance indicator (KPI) that often carries heavy emotional baggage, bursting at the seams with complaints about relevance, timing and system errors, and filled to the brim with lengthy procedures that are often not written to serve as tools for learning.
Training metrics are most commonly requested during an inspection by a health authority, an internal audit, or an internal business process such as the quality review or a leader ensuring the readiness and compliance of their team. While the leader's actions may be among the most critical to ensuring strong performance of personnel, and in turn an uninterrupted, safe supply of products to patients, it is inspections and audits that carry the higher-stakes requests, and those are the ones we will address here.
An asset I am proud to offer as a learning professional is my understanding of how learning works and my ability to use that understanding to ensure knowledge or skill transfer takes place during a training activity. I also know how to create measurable objectives to demonstrate effectiveness, and I know that the completion of a training activity does not equate to that activity being effective, particularly if I have no evidence to support the claim.
In a moderated panel conversation at the 2022 LTEN Annual Conference, Peter Baker, a former investigator with the FDA, outlined what he and his colleagues seek as evidence of effective training. It was a reminder that we are ultimately quite aligned on what matters most: the proof is in the pudding, and if we are ensuring a safe, effective and uninterrupted supply of product to our patients, we all win.
It also reinforced the context that learning and training is an enabling function. Its ultimate goal is not to prove that specific tasks happened (although documentation of those tasks is part of the requirement). Its purpose is to enable the quality management system and its personnel to accomplish their objectives.
Baker focused on two primary areas that show training is effective: the creation and revision of standard operating procedures (SOPs), and the preventive management of quality events (deviations or out-of-specification results) and corrective and preventive actions (CAPAs). Central to his message about both areas was the criticality of collaboration and continuous improvement, core principles of quality culture.
Creation and Revision of SOPs
Effective SOPs are at the heart of effective training, since an SOP is the primary source of knowledge about a procedure and the tool often used to transfer that knowledge. SOPs must be written well, improved based on feedback before their deployment and revised based on their performance on a regular cadence.
Baker stated that for an SOP to be truly effective, its authorship must include all those involved in the process, including those performing the task itself. Ensuring those performing the task participate in its definition prior to deployment will make the procedure accurate and performable, preventing errors and supporting right-first-time execution and efficiency in both time and cost.
Prior to a revision, the owner of the document — along with other collaborators as appropriate, similar to its design team — should evaluate errors or issues with its performance, ensuring those who execute the procedure are involved. The guidance in ICH Q10 calls for continuous improvement, and newer recommendations from the Parenteral Drug Association (PDA) on quality culture include it as a measure in their assessment tool.
The health authorities want to know that SOP revisions include deliberate and evidence-based improvements.
Management of Quality Events and CAPAs
Investigators will review trends within deviations, including trends in root causes. A marker of an effective training system is that it produces fewer deviations caused by human error. Since most procedures involve humans, there will likely always be some human error, but where errors exist, it is how they are treated that matters.
How investigations of human-error deviations are conducted, the extent to which the causes are understood, and how a meaningful CAPA is defined are all aspects an investigator will explore.
Was the reason for the error identified? If the error was due to ineffective training, what was ineffective about it? If the error was due to some other behavioral aspect, to what extent is it understood so that an effective CAPA can be created?
CAPA effectiveness will of course be evaluated, but it matters most for training when CAPAs are based on human-error root causes. If training is named as both a cause and a CAPA, how different is the training in the CAPA from the original training? What is being done about the original training? Are the qualifications of trainers reviewed? To what extent is the learning or training function engaged?
In my own experience, owners of CAPAs are very happy to receive support from a training representative. How can we as learning professionals ensure we have relationships in place to enable successful CAPAs?
Regarding training metrics, Baker did not mention them until asked, and spoke of them more as a minimal documentation requirement than as a meaningful measure of effective training. He did warn that many internal audit programs focus primarily on documentation requirements and fail to investigate the more robust evidence that a training program is effective, and he encouraged those in the audience to work with their businesses to make these shifts and become more inspection-ready.
As with most topics, training is mentioned in the regulations — more so in European Union legislation for GMP — and it is required to be effective, but a definition of effectiveness is not given. It is for each developer and producer of drugs and therapeutics to define effectiveness for themselves, and to hold themselves accountable to it.
Basic and foundational requirements are laid out, but they are very much the floor, not the ceiling, for how we should be operating. We should be able to easily produce compliant metrics showing that personnel are completing required training on time and that departments or roles have defined qualification requirements per their responsibilities, even if only in the form of a matrix.
Much more importantly, the GxP learning and training functions should support the business in collaborating effectively on the design and improvement of procedure documents, and in creating other documents when necessary to support knowledge or skill transfer. We should partner closely when investigating deviations to understand the meaningful causes of human error, and in turn when defining CAPAs to ensure they are effective.
We must contribute to ensuring and strengthening our quality cultures. All of this requires collaboration and partnership across all business functions.
Consider now which markers of effective training you want to refine further, and which need more improvement. Also consider which relationships you need to build or strengthen along the way.
Our colleagues and patients deserve our collective best, and it’s why we work so hard through such complexity.
Danielle Duran is director, GxP learning, for Aimmune Therapeutics, a Nestle Health Science company. Email her at email@example.com.