
Validation Report for adoptr package

Introduction

This work is licensed under the CC-BY-SA 4.0 license

Preliminaries

R package validation for regulatory environments can be a tedious endeavour. The authors firmly believe that under the current regulation, there is no such thing as a ‘validated R package’: validation is by definition a process conducted by the user. This validation report merely aims at facilitating the validation of adoptr as much as possible. No warranty whatsoever as to the correctness of adoptr or the completeness of the validation report is given by the authors.

We assume that the reader is familiar with the notation and theoretical background of adoptr. Otherwise, the following resources might be of help:

Scope

adoptr itself already makes extensive use of unit testing to ensure the correctness of all implemented functions. Yet, due to constraints on the build time for an R package, the range of scenarios covered by the unit tests of adoptr is rather limited. Furthermore, the current R unit-testing framework does not permit easy generation of a human-readable report of the test cases to ascertain coverage and test quality.

Therefore, adoptr splits testing into two parts: technical correctness is ensured via an extensive unit-testing suite in adoptr itself (aiming to maintain 100% code coverage). The validation report, however, runs through a wide range of possible application scenarios and ensures plausibility of results as well as consistency with existing methods wherever possible. The report itself is implemented as a collection of Rmarkdown documents, allowing both the underlying code and the corresponding output to be shown in a human-readable format.

The online version of the report is dynamically re-generated on a weekly basis from the respective most current version of adoptr on CRAN. The latest result of these builds is available at https://optad.github.io/adoptr-validation-report/. To ensure early warning in case of any test-case failures, formal tests are implemented using the testthat package (Wickham, RStudio, and R Core Team 2018). That is, the combination of a unit-testing framework with a continuous integration and continuous deployment service leads to an always up-to-date validation report (built on the current R release on Linux). Any failure of the integrated formal tests will cause the build status of the validation report to switch from ‘passing’ to ‘failed’, and the respective maintainer will be notified immediately.
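As an illustration of the shape such a formal test takes (the variable names and numeric values below are placeholders, not taken from the report itself), a tolerance check on a simulated operating characteristic might read:

```r
library(testthat)

# Placeholder values: in the report, 'simulated_toer' would come from a
# simulation run, and 'alpha' would be the nominal one-sided level.
alpha          <- 0.025
simulated_toer <- 0.0251

# Fail the build if the relative deviation exceeds 1%
test_that("simulated type one error rate matches the nominal level", {
    expect_lte(abs(simulated_toer - alpha) / alpha, 0.01)
})
```

A failing `expect_*` call aborts the report build, which in turn flips the continuous-integration badge from ‘passing’ to ‘failed’.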

Validating a local installation of adoptr

Note that, strictly speaking, the online version of the validation report only provides evidence of correctness on the respective Travis-CI cloud virtual machine infrastructure, using the respective most recent release of R and the most recent versions of the dependencies available on CRAN. In some instances it might therefore be desirable to conduct a local validation of adoptr.

To do so, one should install adoptr with the INSTALL_opts option to include tests and invoke the test suite locally via

install.packages("adoptr", INSTALL_opts = c("--install-tests"))
tools::testInstalledPackage("adoptr", types = c("examples", "tests"))

Once the test suite has passed, the validation report can be built locally. To do so, first clone the source repository and switch to the newly created folder

git clone https://github.com/optad/adoptr-validation-report.git
cd adoptr-validation-report

Make sure that all packages required for building the report are available, i.e., install all dependencies listed in the top-level DESCRIPTION file, e.g.,

install.packages(c(
    "adoptr",
    "tidyverse",
    "bookdown",
    "rpact",
    "testthat",
    "pwr"
))

The book can then be built using the terminal command

Rscript -e 'bookdown::render_book("index.Rmd", output_format = "all")'

or directly from R via

bookdown::render_book("index.Rmd", output_format = "all")

This produces a new folder _book containing the HTML and PDF versions of the report.

Validation Scenarios

Scenario I: Large effect, point prior

This is the default scenario.

Variant I.1: Minimizing Expected Sample Size under the Alternative

Variant I.2: Minimizing Expected Sample Size under the Null Hypothesis

Variant I.3: Conditional Power Constraint

Scenario II: Large effect, Gaussian prior

Similar scope to Scenario I, but with a continuous Gaussian prior on \(\delta\).

Variant II.1: Minimizing Expected Sample Size

Variant II.2: Minimizing Expected Sample Size under the Null Hypothesis

Variant II.3: Conditional Power Constraint

Scenario III: Large effect, uniform prior

Variant III.1: Convergence under Prior Concentration

Additionally, the designs are compared graphically; inspect the plot to see the convergence pattern.

Scenario IV: Smaller effect size, larger trials

Variant IV.1: Minimizing Expected Sample Size under the Alternative

Variant IV.2: Increasing Power

Variant IV.3: Increasing Maximal Type One Error Rate

Scenario V: Single-arm design, medium effect size

Variant V.1: Sensitivity to Integration Order

Variant V.2: Utility Maximization

Variant V.3: \(n_1\) penalty

Variant V.4: \(n_2\) penalty

Scenario VI: Binomial distribution

This scenario investigates the implementation of the binomial distribution.

Variant VI.1: Minimizing Expected Sample Size under the Alternative

Variant VI.2: Minimizing Expected Sample Size under the Null

Variant VI.3: Conditional Power Constraint

Scenario VII: Binomial Distribution, Gaussian Prior

Variant VII.1: Minimizing Expected Sample Size under Continuous Prior

Variant VII.2: Minimizing Expected Sample Size under Continuous Prior

Scenario VIII: Large Effect, Unknown Variance

Due to the large effect size, the sample size is low. Thus, the designs are computed using a \(t\)-distribution.

Variant VIII.1: Minimizing Expected Sample Size under Point Prior

Variant VIII.2: Comparison to Normal Distribution

A design is computed under the same constraints as in VIII.1, but a normal distribution is assumed.

Scenario IX: Time-to-Event Endpoints

Let \(\theta\) be the hazard ratio.

Variant IX.1: Minimizing Expected Number of Events under Point Prior

Scenario X: Chi-Squared Distribution

Variant X.1: Contingency Table with Binary Endpoints

Let \(\delta_0 = (x, y, z)\) with \(x=y=z\), and \(\delta_1 = (0.4, 0.5, 0.6)\), where each entry denotes the rate in one of three groups.

Variant X.2: Two-Sided Z-Test

Scenario XI: F-Distribution

Variant XI.1: ANOVA

Let \(\delta_0 = (x, y, z)\) with \(x=y=z\), and \(\delta_1 = (0.4, 0.5, 0.6)\), where each entry denotes the group mean in one of three groups.

Scenario XII: Further Constraints

Variant XII.1: Maximal Sample Size Constraint

Technical Setup

All scenarios are run in a single, shared R session. Required packages are loaded here, the random seed is defined and set centrally, and the default number of iterations is increased to make sure that all scenarios converge properly. Additional R scripts with convenience functions are sourced here as well. There are three additional functions for this report: rpact_design creates a two-stage design via the package rpact (Wassmer and Pahlke 2018) in the notation of adoptr; sim_pr_reject and sim_n allow rejection probabilities and expected sample sizes, respectively, to be simulated via the adoptr routine simulate. Furthermore, global tolerances for the validation are set. For error rates, a relative deviation of \(1\%\) from the target value is accepted. (Expected) sample size deviations are accepted more liberally, up to an absolute deviation of \(0.5\).

library(adoptr)
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.5
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.5.1     ✔ tibble    3.2.1
## ✔ lubridate 1.9.3     ✔ tidyr     1.3.1
## ✔ purrr     1.0.2     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ✖ dplyr::n()      masks adoptr::n()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(rpact)
## 
## Attaching package: 'rpact'
## 
## The following object is masked from 'package:dplyr':
## 
##     pull
library(pwr)
library(testthat)
## 
## Attaching package: 'testthat'
## 
## The following object is masked from 'package:dplyr':
## 
##     matches
## 
## The following object is masked from 'package:purrr':
## 
##     is_null
## 
## The following objects are masked from 'package:readr':
## 
##     edition_get, local_edition
## 
## The following object is masked from 'package:tidyr':
## 
##     matches
## 
## The following object is masked from 'package:adoptr':
## 
##     expectation
library(tinytex)

# load custom functions from the subfolder 'R/'
for (nm in list.files("R", pattern = "\\.[RrSsQq]$"))
   source(file.path("R", nm))

# define seed value
seed  <- 42

# define relative tolerance for error rates
tol   <- 0.01

# define absolute tolerance for sample sizes
tol_n <- 0.5

# define custom tolerance and iteration limit for nloptr
opts <- list(
    algorithm = "NLOPT_LN_COBYLA",
    xtol_rel  = 1e-5,
    maxeval   = 100000
)
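For orientation, a minimal sketch of what the sim_pr_reject convenience function could look like is given below. This is a hypothetical reconstruction, not the actual definition (which lives in the repository's R/ subfolder), and it assumes that adoptr's simulate routine returns a data frame with a logical reject column:

```r
# Hypothetical sketch of 'sim_pr_reject' (assumed interface, see above):
# simulate 'nsim' trials under effect size 'theta' and return the
# empirical rejection probability.
sim_pr_reject <- function(design, theta, dist, nsim = 10^6) {
    sims <- adoptr::simulate(
        design, nsim = nsim, dist = dist, theta = theta, seed = seed
    )
    mean(sims$reject)  # proportion of simulated trials that rejected
}
```

With \(10^6\) replications, the Monte Carlo standard error of such a rejection probability is small enough for the \(1\%\) relative tolerance used throughout the report.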
References

Bauer, P., F. Bretz, V. Dragalin, F. König, and G. Wassmer. 2015. “Twenty-Five Years of Confirmatory Adaptive Designs: Opportunities and Pitfalls.” Statistics in Medicine 35 (3): 325–47. https://doi.org/10.1002/sim.6472.

Pilz, M., K. Kunzmann, C. Herrmann, G. Rauch, and M. Kieser. 2019. “A Variational Approach to Optimal Two-Stage Designs.” Statistics in Medicine 38 (21): 4159–71. https://doi.org/10.1002/sim.8291.

Wassmer, G., and W. Brannath. 2016. Group Sequential and Confirmatory Adaptive Designs in Clinical Trials. Springer Series in Pharmaceutical Statistics. Springer International Publishing.

Wassmer, G., and F. Pahlke. 2018. rpact: Confirmatory Adaptive Clinical Trial Design and Analysis. https://www.rpact.org.

Wickham, H., RStudio, and R Core Team. 2018. testthat: Unit Testing for R. https://cran.r-project.org/web/packages/testthat/index.html.

