Short Courses

Short courses will be held on the afternoon of August 12, from 1:00 pm to 4:30 pm.

Short Course 1: An introduction to graphical testing procedures for group-sequential designs

Instructors: Michael Grayling (Johnson & Johnson), Yevgen Tymofyeyev (Johnson & Johnson)

Multiple testing problems arise regularly in the design of clinical trials due to the presence of diverse sets of research hypotheses posed by multiple endpoints, treatment arms, subgroups, and combinations of these factors. Over the past decade, there has been a great expansion in the availability of methodology for performing sequential tests of multiple hypotheses. Amongst such methods, graphical testing in a group-sequential setting (see, e.g., Maurer and Bretz, 2013) has found particular utility, having now been leveraged in numerous studies. In this course, we will provide attendees with the necessary information to evaluate, select, and implement such a design in practice. This will include discussion of nuances related to planning the timing and triggering of interim analyses and comprehensive pragmatic detailing of analysis criteria.

A brief description of graphical testing in the fixed-sample setting and of group-sequential design for a single hypothesis will be provided, alongside a recap of how to implement these approaches in R; however, some familiarity with these methods will be helpful. The primary focus of the course will then be on how to identify and implement the stopping rules of a group-sequential trial under a graphical testing procedure, describing the minimal information that must be specified for a design to be determined. Key options within this methodology will then be covered, including the utility of ‘look back’ analyses, how one can modify the alpha spending function for a hypothesis upon updating the graph, and different alternatives for triggering interim analyses. We discuss both purely statistical and real-world considerations when selecting a design, and also detail how simulation can be used to estimate key marginal power quantities while accounting for the correlation between all test statistics.
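
As a schematic aid (not taken from the course materials), the fixed-sample Bonferroni-based graphical procedure of Bretz et al. (2009) that underlies this approach can be sketched as follows; the function name, variable names, and default alpha are all illustrative:

```python
# Schematic sketch of the Bonferroni-based graphical procedure
# (Bretz et al., 2009); names and structure are illustrative only.

def graphical_test(pvalues, weights, G, alpha=0.025):
    """Return the set of indices of rejected hypotheses.

    pvalues : raw p-values, one per hypothesis
    weights : initial alpha weights (non-negative, summing to at most 1)
    G       : transition matrix; G[i][j] is the fraction of H_i's weight
              propagated to H_j when H_i is rejected
    """
    m = len(pvalues)
    active = set(range(m))
    w = list(weights)
    g = [row[:] for row in G]
    rejected = set()
    while True:
        # Find any active hypothesis rejectable at its local level w_i * alpha.
        i = next((h for h in active if pvalues[h] <= w[h] * alpha), None)
        if i is None:
            return rejected
        rejected.add(i)
        active.discard(i)
        # Graph update: redistribute H_i's weight and rewire the edges.
        w_new, g_new = list(w), [row[:] for row in g]
        for j in active:
            w_new[j] = w[j] + w[i] * g[i][j]
            for k in active:
                if k == j:
                    g_new[j][k] = 0.0
                else:
                    denom = 1.0 - g[j][i] * g[i][j]
                    g_new[j][k] = (g[j][k] + g[j][i] * g[i][k]) / denom if denom > 0 else 0.0
        w_new[i] = 0.0
        w, g = w_new, g_new
```

For instance, with two hypotheses, equal weights, and full alpha recycling between them, rejecting one hypothesis raises the other's local level from alpha/2 to alpha. Production analyses would, of course, use a validated implementation such as {gMCP} or {graphicalMCP}.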

To help implement such approaches in practice, we also discuss a Quarto template that leverages the popular {gsDesign} and {gMCP} packages to dynamically and efficiently produce a design and its operating characteristics in a form directly usable in a protocol, for arbitrarily complex multiple hypothesis testing and interim analysis strategies. Throughout the course, we use several recent trials (e.g., Burtness et al (2019) from the KEYTRUDA program) as elucidating examples, to cover use cases with multiple arms, multiple endpoints, and multiple populations.

Burtness B, et al. Pembrolizumab alone or with chemotherapy versus cetuximab with chemotherapy for recurrent or metastatic squamous cell carcinoma of the head and neck (KEYNOTE-048): A randomised, open-label, phase 3 study. Lancet 2019;394:1915-28.

Maurer W, Bretz F. Multiple testing in group sequential trials using graphical approaches. Stat Biopharm Res 2013;5:311-20.

Course outline

15 mins: Refresher on group-sequential design for a single hypothesis
• Information fractions
• Canonical joint distribution
• Error spending
15 mins: Refresher on graphical testing procedures in fixed sample designs
• As a special case of closed testing
• ‘Epsilon’ edges
• Power calculation
10 mins: Break
20 mins: Short practical on group-sequential design / graphical testing using R
• Using {gsDesign}/{rpact} and {gMCP}/{graphicalMCP} to reproduce study design(s)
60 mins: Graphical testing in group-sequential designs
• Spending function updating (i.e., ‘delayed’ alpha recycling)
• ‘Look back’ analyses
• Analysis triggers (including in multi-arm event driven studies)
• Simulation accounting for all correlations
5 mins: Break
45 mins: Software demonstration and practical
• Demonstration of the functionality of a dynamic Quarto template
• Practical on using this template to reproduce recent study design(s)
10 mins: Q&A
• Opportunity to ask questions, from theoretical issues to implementation problems
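
As a standalone illustration of the ‘error spending’ bullet in the outline above (this is not course material; the function names are ours and the one-sided alpha of 0.025 is illustrative), the two classic Lan-DeMets spending functions can be sketched using only the Python standard library:

```python
# Standalone illustration (not course code): the two classic error-spending
# functions of Lan and DeMets (1983). Function names here are our own.
from math import e, log
from statistics import NormalDist

_N = NormalDist()

def obf_spend(t, alpha=0.025):
    """O'Brien-Fleming-type spending: cumulative one-sided alpha spent
    by information fraction t (0 < t <= 1)."""
    z = _N.inv_cdf(1 - alpha / 2)
    return 2 * (1 - _N.cdf(z / t ** 0.5))

def pocock_spend(t, alpha=0.025):
    """Pocock-type spending: alpha * ln(1 + (e - 1) * t)."""
    return alpha * log(1 + (e - 1) * t)

# The alpha available at each look is the increment of the cumulative
# spend between successive information fractions.
fractions = [0.5, 0.75, 1.0]
cumulative = [obf_spend(t) for t in fractions]
increments = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
```

Both functions spend the full alpha at t = 1; the O'Brien-Fleming-type function spends far less of it early, which is why it yields more stringent interim boundaries.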

About Instructors:

Dr. Michael Grayling is a Senior Principal Statistician within the Statistical Modelling and Methodology group at Johnson & Johnson, primarily supporting issues in Oncology and Immunology. Before joining J&J, Michael worked as a Research Fellow in Biostatistics at Newcastle University, where he developed significant teaching experience and mentored multiple graduate students. His research interests include multi-arm multi-stage trials, crossover studies, and small sample sizes. He has published more than 50 papers in peer-reviewed journals and has authored a number of R packages and Stata modules related to trial design. He has a long record of teaching similar courses and delivering invited presentations on adaptive design and multiple testing procedures, including running a two-day course on this topic in five countries; attendees will particularly benefit from his well-developed materials and presentation experience on the subject matter.

Dr. Yevgen Tymofyeyev is a Senior Scientific Director in the Statistical Modelling and Methodology group at Johnson & Johnson. In his current role, he serves as the statistical modelling lead for the Oncology Therapeutic Area, implementing innovative designs and methods, including programs that utilize complex multi-stage designs with multiple hypothesis testing objectives, which have contributed to the successful submission of several clinical trials. He is actively involved in scientific collaborations in the fields of randomization, adaptive design methodology, and software, which have led to an extensive list of publications, presentations, and implementation tools. Recent presentations on the topic of this course include one at the Adaptive Designs and Multiple Testing Procedures Workshop (IBS) in 2024 and an invited presentation at the NJ ASA Chapter in 2023. With 20 years of experience in pharmaceutical development, Yevgen brings attendees extensive expertise in employing complex methods in practice.

Short Course 2: Adaptive sequential design for phase 2/3 seamless combination and for multiple comparisons

Instructor: Ping Gao, Innovatio Statistics, Inc.

Topic 1: Adaptive sequential design for phase 2/3 seamless combination

We propose an adaptive sequential testing procedure for the selection and testing of multiple treatment options, such as doses/regimens, different drugs, sub-populations, endpoints, or a mixture of these, in a seamlessly combined phase 2/3 trial. The selection is made at the end of the phase 2 stage. Unlike much of the published literature, the selection rule is not required to be "select the best" and does not need to be pre-specified, which provides flexibility and allows the trial investigators to use any efficacy and safety information/criteria, or a surrogate or intermediate endpoint, to make the selection. Sample size and power calculations are provided; the calculations have been confirmed to be accurate by simulation. Interim analyses can be performed after the selection, and the sample size can be modified if the observed efficacy deviates from what was assumed. Inference after the trial, including the p-value, median-unbiased point estimate, and confidence intervals, is provided. By applying a dominance theorem, the procedure can be applied to normal, binary, Poisson, negative binomial, and time-to-event endpoints, and to a mixture of these distributions (in trials involving endpoint selection).

Article:

Gao, P., & Li, Y. (2024). Adaptive two-stage seamless sequential design for clinical trials. Journal of Biopharmaceutical Statistics, 1–23.

Article download (open access):

https://doi.org/10.1080/10543406.2024.2342518

Features:

  • Adaptive sequential design for seamless phase 2/3 combinations. The method includes
    • Sample size/power calculation
    • Critical boundary determination 
    • Interim analysis (treatment [e.g., dose/regimen] selection, etc., sample size re-estimation, futility stopping)
    • Final analysis (point estimate, confidence interval, p-value)
    • Simulations (type I error control confirmation simulations [with/without adaptive features], power simulations)

Software support: The procedure is supported by the DACT (Design and Analysis of Clinical Trials) software at

https://www.innovatiostat.com/software.html

Selection of optimal design:

A seamless design requires assumptions about the effect size for each treatment option being tested, and available knowledge may limit the accuracy of such assumptions. A major motivation of adaptive design is to mitigate these inaccuracies. Operating characteristics (OCs), such as power and mean sample size (when adaptive measures are included in the trial design), are of major interest to trial designers and investigators. Factors that impact the OCs include: planned sample size, timing of interim analyses, critical boundary selection, rules of interim analysis, and the futility threshold. We suggest that the optimal combination of these factors can be evaluated with simulations (supported by DACT). The strategy for selecting an optimal design is discussed by:

Ping Gao & Weidong Zhang (2024) A systematic approach to adaptive sequential design for clinical trials: using simulations to select a design with desired operating characteristics, Journal of Biopharmaceutical Statistics, 34:5, 737-752, DOI: 10.1080/10543406.2024.2358796

Download article (open access):

https://www.tandfonline.com/doi/full/10.1080/10543406.2024.2358796
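
Schematically, this kind of simulation-based OC evaluation can be sketched as follows (a self-contained Python illustration of the idea only; it is not DACT code, and the design parameters, sample sizes, and boundaries are all hypothetical):

```python
# Illustration of the idea only (not DACT code): Monte Carlo estimation of
# power and mean sample size for a hypothetical two-stage design with a
# binding futility stop at the interim analysis.
import random

def simulate_oc(drift, t_interim=0.5, z_futility=0.0, z_final=1.96,
                n_max=200, n_sim=20000, seed=1):
    """Estimate (power, mean sample size) by simulation.

    drift     : expected final-look z-statistic (e.g., ~2.8 gives roughly
                80% power in a fixed one-sided design at z_final = 1.96)
    t_interim : information fraction of the interim analysis
    """
    rng = random.Random(seed)
    n_interim = round(n_max * t_interim)
    rejections, total_n = 0, 0
    for _ in range(n_sim):
        # Canonical joint distribution of group-sequential z-statistics,
        # simulated via independent score increments.
        z1 = rng.gauss(drift * t_interim ** 0.5, 1.0)
        if z1 < z_futility:          # stop for futility at the interim
            total_n += n_interim
            continue
        inc = rng.gauss(drift * (1 - t_interim) ** 0.5, 1.0)
        z2 = t_interim ** 0.5 * z1 + (1 - t_interim) ** 0.5 * inc
        total_n += n_max
        if z2 > z_final:
            rejections += 1
    return rejections / n_sim, total_n / n_sim
```

Repeating such runs over a grid of interim timings, boundaries, and futility thresholds lets a designer trade mean sample size against power, which is exactly the simulation-based selection strategy described above.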

Software use demonstration:

We’ll demonstrate how to use DACT to conduct design, interim analysis, final analysis, and simulations.

Topic 2: Adaptive sequential design for multiple comparisons

We present an adaptive sequential testing procedure for clinical trials that test the efficacy of multiple treatment options, such as doses/regimens, different drugs, sub-populations, endpoints, or a mixture of these, in one trial. At any interim analysis, sample size re-estimation can be conducted, and any option can be dropped for lack of efficacy or an unsatisfactory safety profile. Inference after the trial, including the p-value, conservative point estimate, and confidence intervals, is provided.

Article:

Gao, P., & Li, Y. (2023). Adaptive Multiple Comparison Sequential Design (AMCSD) for clinical trials. Journal of Biopharmaceutical Statistics, 34(3), 424–440.

Article download (open access):

https://doi.org/10.1080/10543406.2023.2233590

Features:

  • Adaptive sequential design for multiple comparisons. The method includes
    • Sample size/power calculation
    • Critical boundary determination 
    • Interim analysis (treatment [e.g., dose/regimen] selection, etc., sample size re-estimation, futility stopping)
    • Final analysis (point estimate, confidence interval, p-value)
    • Simulations (type I error control confirmation simulations [with/without adaptive features], power simulations)

Software support: The procedure is supported by the DACT (Design and Analysis of Clinical Trials) software at

https://www.innovatiostat.com/software.html

Selection of optimal design:

The design of an AMCSD requires assumptions about the effect size for each treatment option being tested, and available knowledge may limit the accuracy of such assumptions. A major motivation of adaptive design is to mitigate these inaccuracies. Operating characteristics (OCs), such as power and mean sample size (when adaptive measures are included in the trial design), are of major interest to trial designers and investigators. Factors that impact the OCs include: planned sample size, timing of interim analyses, critical boundary selection, rules of interim analysis, and the futility threshold. We suggest that the optimal combination of these factors can be evaluated with simulations (supported by DACT). The strategy for selecting an optimal design is discussed by:

Ping Gao & Weidong Zhang (2024) A systematic approach to adaptive sequential design for clinical trials: using simulations to select a design with desired operating characteristics, Journal of Biopharmaceutical Statistics, 34:5, 737-752, DOI: 10.1080/10543406.2024.2358796

Download article (open access):

https://www.tandfonline.com/doi/full/10.1080/10543406.2024.2358796

Software use demonstration:

We’ll demonstrate how to use DACT to conduct design, interim analysis, final analysis, and simulations.

About DACT:

  • Applying many statistical methodologies, and evaluating their operating characteristics, requires sophisticated software or extensive simulations.
  • DACT is designed to serve a wide range of innovative statistical designs and analyses.
  • The primary objective of the DACT software is to promote the understanding and application of cutting-edge statistical solutions in clinical trials. For this reason, the software is free for non-commercial scientific research, including but not limited to academic researchers and research/teaching institutions.
  • Computing codes are available upon request.
  • DACT is currently free for all users until further notice.

About the Instructor:

Ping Gao, Ph.D., is the founder of Innovatio Statistics, Inc. Ping’s research covers a wide range of innovative statistical solutions for clinical trials:

  • Non-inferiority
  • Adaptive sequential design
  • Adaptive sequential design for phase 2/3 seamless combination trials
  • Adaptive sequential design for multiple comparisons
  • Optimizing adaptive sequential designs
  • Hybrid frequentist-Bayesian approach for sample size re-estimation
  • Dynamic borrowing of external control in rare disease trials
  • Population enrichment
  • Dynamic Bayesian design
  • Adaptive design for oncology phase 2 trials (an extension of Simon’s design)

Details of Ping’s research are provided at:

https://www.innovatiostat.com/research.html

Short Course 3: Good Software Engineering Practice for R Packages

Instructor: Daniel Sabanés Bové, RCONIS

Short Course Description:
The vast majority of statisticians in academia and industry alike write statistical software daily. Nonetheless, software engineering principles are often neglected in biostatistics: most biostatisticians know a programming language (such as R) but lack formal training in writing reusable and reliable code.


This course aims to equip participants with the essential software engineering practices required to develop and maintain robust R packages. With the growing demand for reproducible research and the increasing complexity of statistical methods developed, e.g., for MCP methodology, writing high-quality R packages has become a critical skill for statisticians to prototype, develop, and disseminate novel methods (including for MCP) and push their adoption in practice. The course will focus on the key principles of software engineering, such as workflows, modular design, version control, testing, documentation, and quality indicators. Focusing on these aspects ensures the reliability and sustainability of R packages. Examples will be given from R packages implementing MCP, such as rpact.

Participants will learn how to structure their R packages following best practices and how to make use of tools that streamline the development process. A significant emphasis will be placed on writing and running unit tests, ensuring that packages are error-free and behave as expected across different environments and over time. In the full-day version, the course would also cover version control using Git, allowing participants to manage code changes effectively and collaborate with others.

By the end of the course, participants will have a solid understanding of good software engineering principles tailored to R package development, enabling them to build packages that are not only functional but also reliable, reusable, and easy to maintain.

About the Instructor:

Dr. Daniel Sabanés Bové studied statistics at LMU Munich, Germany, and obtained his PhD at the University of Zurich, Switzerland, in 2013 for his research on Bayesian model selection. He started his career at Roche as a biostatistician for 5 years, then continued at Google as a data scientist for 2 years, before rejoining Roche as a statistical software engineering lead for 4 years. In 2024, Daniel co-founded RCONIS (Research Consulting and Innovative Solutions). He is (co-)author of multiple R packages published on CRAN and Bioconductor, as well as the book “Likelihood and Bayesian Inference: With Applications in Biology and Medicine”. He is currently a co-chair of the openstatsware working group on Software Engineering in Biostatistics (see https://openstatsware.org).