On-site monitoring: triggered or random?


How can we prioritise sites for on-site monitoring to target those that present the highest risk, and use trial resources most efficiently? Can we use data collected centrally with an algorithm to trigger on-site monitoring (OSM)?

This is what the UCL MRC Clinical Trials Unit’s TEMPER team looked at in a SWAT1 study. Susan Wagland and Anna Liew watched a webinar given by Will Cragg of TEMPER, broadcast on 17 October 2017, which described the study.

In the TEMPER trial, researchers used pre-defined site-related prognostic triggers, such as low rate of data return, and identified the sites that scored highly on these triggers (so-called “triggered” sites). They matched these sites with others that had not scored highly (so-called “non-triggered” sites) and performed OSM at both groups of sites. The results were interesting.

Selected methods
• 3 large multi-centre clinical trials were included, each with more than 80 sites; all were CTIMPs (Clinical Trials of Investigational Medicinal Products)
• 84 sites received OSM (42 matched pairs)
• Primary outcome: proportion of sites with one or more critical (C) or major (M) findings, compared between triggered and non-triggered sites. With 42 matched pairs, there was 80% power to detect a 30% absolute difference (e.g. 70% vs 40%)
• Secondary outcome: total number of C and M findings, compared between triggered and non-triggered sites
• ‘major’ and ‘critical’ as defined by MHRA
• Sites were matched by similar number of randomisations and similar date of first randomisation
• 38 triggers were used; data were collected in a SQL database and an algorithm was used to select sites
• An example of a trigger: “more than 0.5% of the values in the open forms are missing or queried based on total number of fields to be entered”.
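The trigger-and-threshold approach described above can be sketched in code. The trigger definitions, weights and threshold below are illustrative assumptions (only the missing/queried-data rule comes from the webinar); this is not the actual TEMPER algorithm or its values.

```python
# Sketch of a trigger-based site-selection algorithm.
# Only the 0.5% missing/queried-data rule is taken from the webinar;
# the second trigger, the threshold and the site data are hypothetical.

def missing_or_queried_trigger(missing_or_queried: int, total_fields: int) -> bool:
    """Fires when more than 0.5% of values on open forms are missing or queried."""
    return total_fields > 0 and missing_or_queried / total_fields > 0.005

def score_site(site: dict, triggers) -> int:
    """Count how many triggers fire for a site (unweighted, for simplicity)."""
    return sum(1 for trig in triggers if trig(site))

# Hypothetical site data
sites = {
    "Site A": {"missing": 60, "total": 8000, "crf_return_rate": 0.70},
    "Site B": {"missing": 10, "total": 9000, "crf_return_rate": 0.95},
}

triggers = [
    lambda s: missing_or_queried_trigger(s["missing"], s["total"]),
    lambda s: s["crf_return_rate"] < 0.80,  # assumed poor-CRF-return trigger
]

# Sites whose trigger score meets an assumed threshold become "triggered"
THRESHOLD = 1
triggered = [name for name, s in sites.items() if score_site(s, triggers) >= THRESHOLD]
print(triggered)  # Site A fires both triggers (60/8000 = 0.75% > 0.5%; 0.70 < 0.80)
```

In practice each trigger would be computed centrally from the trial database, and the scores reviewed before a visit is scheduled.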

Interestingly, there was no significant difference in the primary outcome between triggered and non-triggered sites: 88% and 81% respectively had one or more C or M findings (7% difference; p = 0.365).
However, a large number of M findings related to failure to reconsent patients after a patient information sheet had changed. If these findings were removed from the totals (you may have your own methodological views on this; future work will look at central monitoring of reconsent), then significantly more triggered than non-triggered sites had one or more C or M findings (86% vs 60%; 26% difference, p = 0.007).
For the secondary outcome (total number of M and C findings per site), there was no significant difference between triggered and non-triggered sites (triggered: median 3 per site, range 0-24; non-triggered: median 1, range 0-33; p = 0.19) unless the findings associated with the reconsent issue were removed (triggered: median 1.5 per site, range 0-14; non-triggered: median 0, range 0-6; p = 0.04).
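The write-up above does not say which test produced these p-values; one standard choice for comparing a binary outcome across matched pairs is McNemar's exact test on the discordant pairs. As a hedged illustration (the discordant-pair counts below are hypothetical, not the TEMPER data), a minimal stdlib-only version looks like this:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test on the discordant pairs of a
    matched-pair binary outcome: b = pairs where only the triggered
    site had a finding, c = pairs where only the non-triggered site did.
    Concordant pairs carry no information and are ignored."""
    n = b + c
    k = min(b, c)
    # Exact binomial tail under H0 that discordance is equally likely
    # in either direction (probability 0.5), doubled for two sides.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical counts (NOT the TEMPER data): of 42 pairs, suppose 10
# were discordant, with the triggered site worse in 8 and the
# non-triggered site worse in 2.
p_value = mcnemar_exact(8, 2)
print(round(p_value, 3))  # 0.109
```

With only 10 discordant pairs, even an 8-vs-2 split is not significant, which illustrates why the study needed 42 pairs to detect a 30% absolute difference.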

However, the team made other observations. Some of the prognostic factors that fed into the algorithm may be more indicative than others, and the team will refine the algorithm for future use. For instance, triggers common to many sites were:

• Sites where the PI was heavily involved in training staff, completing and reviewing questionnaires, screening and talking to patients, as opposed to sites where these roles were largely carried out by research nurses
• High recruitment
• Poor CRF return rate
• Slow data query resolution rate

A pre-visit subjective assessment of “concern” by the central study team had no prognostic value. Will also noted that site visits have an important communication role, and that while we all want to do SWATs, they take up the trial team’s time and specific resources must be allocated.

This was a well-conducted study of a risk-adapted monitoring strategy in one trials unit and confirmed the overall benefit of OSM, in that more than 80% of all sites visited had one or more major or critical findings. If anything, it supports allocating resources to OSM at all sites. However, we need to work on selecting sites for triggered OSM more smartly, and on allocating resources to target the greatest risk.
The study had its own large cost implication in (among other things) setting up the database, specifying relevant prognostic factors and algorithm, collecting data, oversight and quality.

The study was sponsored by the MRC and funded by Cancer Research UK PRC grant with additional support from the MRC London Hub for Trial Methodology Research.
Note: Other studies have looked at selecting sites algorithmically: see ADAMON in Germany and OPTIMON in Bordeaux.
1 SWAT: Study Within a Trial

Link: http://discovery.ucl.ac.uk/1557693/ Journal article in preparation.
Email address for TEMPER, given here with Will’s permission: mrcctu.temper@ucl.ac.uk

A recording of Will Cragg’s webinar “Triggered or routine site monitoring visits for randomised controlled trials? Results of TEMPER, a prospective, matched-pair study”, broadcast on 17 Oct 2017, is available at https://www.mrc-phru.ox.ac.uk/webinars/