Conference on Statistical Practice
New Orleans, Louisiana
February 14-16, 2019
Non-inferiority (NI) clinical trials aim to show that an experimental treatment is therapeutically no worse than the standard of care, particularly when the new treatment is preferred for reasons such as cost, convenience, or safety. NI trials are less conservative than superiority or placebo-controlled studies: non-compliance and missing data may increase bias toward the alternative hypothesis.
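To make the NI framing concrete, the sketch below tests whether an experimental arm is no worse than an active control within a pre-specified margin on a continuous endpoint (higher scores better). The margin, sample sizes, and simulated data are illustrative assumptions, not values from the study.

```python
# Hypothetical NI test sketch: all numbers (margin, means, n) are
# illustrative assumptions, not taken from the study described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
margin = 2.0  # pre-specified NI margin (delta)

# Simulated outcomes: experimental arm truly equivalent to active control
experimental = rng.normal(loc=50.0, scale=10.0, size=2000)
control = rng.normal(loc=50.0, scale=10.0, size=2000)

# NI hypotheses: H0: mu_exp - mu_ctl <= -delta  vs  H1: mu_exp - mu_ctl > -delta
diff = experimental.mean() - control.mean()
se = np.sqrt(experimental.var(ddof=1) / len(experimental)
             + control.var(ddof=1) / len(control))
z = (diff + margin) / se      # shift the null hypothesis by the margin
p_value = stats.norm.sf(z)    # one-sided p-value

# Equivalent CI formulation: declare NI if the lower confidence bound
# for the treatment difference lies entirely above -delta
lower_bound = diff - stats.norm.ppf(0.975) * se
non_inferior = lower_bound > -margin
```

The one-sided test and the confidence-bound rule are two views of the same decision; the CI version makes visible how missing data that shrinks the apparent difference can push a truly inferior treatment across the margin.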
Our objective was to compare multiple imputation (MI) and other methods for analyzing trials with missing data in intention-to-treat (ITT) and per-protocol (PP) populations. We simulated trials with missing data and non-compliance due to treatment inefficacy under varying trial conditions (trajectory of treatment effects, correlation between repeated measures, and missing data type) and assessed these methods by estimating bias, type I error, and power. We found that an MI model with an auxiliary non-compliance variable performs better than other methods in controlling type I error rates in ITT analyses. A hybrid ITT/PP approach that imputed for non-compliant subjects yielded low type I error and was unbiased, offering an alternative estimator under a hypothetical assumption of full compliance.
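The key idea of an MI model with an auxiliary non-compliance variable can be sketched in a few lines: outcomes missing disproportionately among non-compliers are imputed from a model that includes the compliance indicator as a predictor, and results are pooled across imputations. The simulation design, variable names, and effect sizes below are illustrative assumptions only, not the study's actual scenarios, and proper MI would also perturb the regression coefficients between imputations.

```python
# Minimal numpy-only sketch of multiple imputation with an auxiliary
# non-compliance indicator. All parameters and the missingness rule
# are illustrative assumptions, not the study's simulation design.
import numpy as np

rng = np.random.default_rng(0)
n, M = 500, 20                           # subjects, number of imputations

baseline = rng.normal(50, 10, n)         # baseline score
comply = rng.random(n) < 0.8             # ~80% compliant
# Outcome tracks baseline; non-compliers fare worse (treatment inefficacy)
outcome = baseline + 5.0 - 4.0 * (~comply) + rng.normal(0, 5, n)
# Non-compliers are far more likely to have a missing outcome
missing = rng.random(n) < np.where(comply, 0.05, 0.60)

# Fit the imputation model on observed cases; note compliance is a predictor
obs = ~missing
X_obs = np.column_stack([np.ones(obs.sum()), baseline[obs], comply[obs]])
y_obs = outcome[obs]
beta, *_ = np.linalg.lstsq(X_obs, y_obs, rcond=None)
resid_sd = np.std(y_obs - X_obs @ beta, ddof=3)

X_mis = np.column_stack([np.ones(missing.sum()),
                         baseline[missing], comply[missing]])
means = []
for _ in range(M):
    y_imp = outcome.copy()
    # Draw each imputation from the predictive distribution
    y_imp[missing] = X_mis @ beta + rng.normal(0, resid_sd, missing.sum())
    means.append(y_imp.mean())

pooled_mean = np.mean(means)             # Rubin's rules: average the estimates
```

Because non-compliers have systematically worse outcomes, an imputation model that omits the compliance indicator would impute values that are too favorable, which is exactly the direction of bias that inflates type I error in an NI analysis.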
Abstract for Lay Audience
In drug and device clinical trials, patient withdrawal, loss to follow-up, and non-compliance with treatment protocols complicate analysis. When the data planned for collection are compromised or incomplete, estimates of treatment effect may be biased and trial conclusions may not be generalizable. Non-inferiority (NI) trials form a class of designs that aim to show an experimental treatment is therapeutically no worse than existing treatments. If a new treatment may be preferred for reasons such as lower cost, convenience, or an improved safety profile, an NI trial may be used to test whether the treatment is as efficacious as an active control within some pre-determined margin. NI trials are by nature less conservative than superiority and placebo-controlled studies, and many of the challenges in their analysis and interpretation are exacerbated by missing or incomplete data. Although missing data problems have been extensively studied, there is a dearth of research on their effects and on the best approaches in NI trials. This is important because clinical trial statisticians must ensure the methodologies used in the analysis of NI trials adequately control the statistical risk of false conclusions. We addressed this gap in knowledge by conducting a simulation experiment to characterize the effects of missing data in NI trials. We evaluated common approaches to missing data handling as well as statistically principled methods under various missing data mechanisms. Our results identified one missing data scenario that is particularly problematic in NI trials, and we offered recommendations for researchers that address some of these special cases. Given the increasing popularity of the NI design, the persistent challenge of missing data and patient compliance, and the reliance of regulators and clinicians on trial results, there is a critical need to improve the rigor and reproducibility of NI analyses.
Better practices could give patients easier access to new treatments while minimizing the risk of exposure to treatments that are ineffective.