Title: | Processing and Analyzing of Diagnostics Trials |
---|---|
Description: | Provides methods and functions to analyze the quantitative or qualitative performance of diagnostic assays; outlier detection, reader precision and reference ranges are also covered. Most of the methods and algorithms refer to CLSI (Clinical & Laboratory Standards Institute) recommendations and NMPA (National Medical Products Administration) guidelines. In addition, relevant plots are constructed with 'ggplot2'. |
Authors: | Kai Gu [aut, cre, cph] |
Maintainer: | Kai Gu <[email protected]> |
License: | GPL (>= 3) |
Version: | 1.1.1.9000 |
Built: | 2024-11-04 02:48:41 UTC |
Source: | https://github.com/kaigu1990/mcradds |
mcradds
Package mcradds
Processing and analyzing of In Vitro Diagnostic Data.
Maintainer: Kai Gu [email protected] [copyright holder]
Useful links:
Report bugs at https://github.com/kaigu1990/mcradds/issues
This ADSL data set was created by subsetting the CDISC ADSL to 60 subjects in each of two treatments, placebo and Xanomeline (the latter corresponds to the high dose level in the original ADSL).
adsl_sub
The adsl_sub data set contains 120 observations and 14 variables. The description of each variable is stored as a label in the data set.
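As a quick orientation, here is a minimal sketch for inspecting the data set (base R only; the variable names TRTP, AGE and SEX are taken from the descfreq()/descvar() examples elsewhere on this page):
library(mcradds)
data(adsl_sub)
head(adsl_sub)                        # first rows, with variable descriptions stored as labels
nrow(adsl_sub)                        # 120 subjects, 60 per treatment arm (TRTP)
table(adsl_sub$TRTP, adsl_sub$SEX)    # counts by treatment and sex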
A copy from VCA::anovaVCA in VCA
package
anovaVCA(...)
... |
Arguments passed on to
|
a class of VCA
for downstream analysis.
data(glucose) anovaVCA(value ~ day / run, glucose)
This function compares the AUC of two paired two-sample diagnostic assays using the standardized difference method, which differs slightly in the SE calculation from an unpaired design. In order to compare the two assays, this function provides three assessments: 'difference', 'non-inferiority' and 'superiority'. The comparison method follows Liu (2006); see the reference section below.
aucTest( x, y, response, h0 = 0, conf.level = 0.95, method = c("difference", "non-inferiority", "superiority"), ... )
x |
( |
y |
( |
response |
( |
h0 |
( |
conf.level |
( |
method |
( |
... |
other arguments to be passed to |
If the samples are not considered independent, such as in a paired design,
the SE cannot be computed by the DeLong method provided in the pROC
package.
Here the aucTest
function uses the standardized difference approach from
the Liu (2006) publication to compute the SE and the corresponding hypothesis test
statistic for a paired design study.
difference
tests the difference between two diagnostic tests; the
default h0 is zero.
non-inferiority
tests whether the new diagnostic test is no worse than the
standard diagnostic test within a specific margin, while at the same time it may be
safer, easier to administer or cost less.
superiority
tests whether the new diagnostic test is better than the
standard diagnostic test by a specific margin (default is zero), i.e. having better efficacy.
A tpROC
object contains relevant results in comparing the paired
ROC of two-sample assays.
The test of significance for the difference is not equal to the result of EP24A2 Appendix D, Table D2, because Table D2 uses the method of Hanley & McNeil (1982), whereas this function uses the method of DeLong et al. (1988), which results in a different SE. Thus the corresponding Z statistic and P value will not be equal either.
Jen-Pei Liu (2006) "Tests of equivalence and non-inferiority for diagnostic accuracy based on the paired areas under ROC curves". Statist. Med. , 25:1219–1238. DOI: 10.1002/sim.2358.
data("ldlroc") # H0 : Difference between areas = 0: aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis) # H0 : Superiority margin <= 0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = "superiority", h0 = 0.1 ) # H0 : Non-inferiority margin <= -0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = "non-inferiority", h0 = -0.1 )
data("ldlroc") # H0 : Difference between areas = 0: aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis) # H0 : Superiority margin <= 0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = "superiority", h0 = 0.1 ) # H0 : Non-inferiority margin <= -0.1: aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = "non-inferiority", h0 = -0.1 )
ggplot
for Bland-Altman Plot and Regression Plot
Draws a ggplot-based difference Bland-Altman plot of reference assay vs. test assay
for a BAsummary
object, and a regression plot for an MCResult
object. It also provides
the necessary and useful optional arguments for presentation.
autoplot(object, ...) ## S4 method for signature 'BAsummary' autoplot( object, type = c("absolute", "relative"), color = "black", fill = "lightgray", size = 1.5, shape = 21, jitter = FALSE, ref.line = TRUE, ref.line.params = list(col = "blue", linetype = "solid", size = 1), ci.line = FALSE, ci.line.params = list(col = "blue", linetype = "dashed"), loa.line = TRUE, loa.line.params = list(col = "blue", linetype = "dashed"), label = TRUE, label.digits = 4, label.params = list(col = "black", size = 4), x.nbreak = NULL, y.nbreak = NULL, x.title = NULL, y.title = NULL, main.title = NULL ) ## S4 method for signature 'MCResult' autoplot( object, color = "black", fill = "lightgray", size = 1.5, shape = 21, jitter = FALSE, identity = TRUE, identity.params = list(col = "gray", linetype = "dashed"), reg = TRUE, reg.params = list(col = "blue", linetype = "solid"), equal.axis = FALSE, legend.title = TRUE, legend.digits = 2, x.nbreak = NULL, y.nbreak = NULL, x.title = NULL, y.title = NULL, main.title = NULL )
object |
( |
... |
not used. |
type |
( |
color , fill
|
( |
size |
( |
shape |
( |
jitter |
( |
ref.line |
( |
ref.line.params , ci.line.params , loa.line.params
|
( |
ci.line |
( |
loa.line |
( |
label |
( |
label.digits |
( |
label.params |
( |
x.nbreak , y.nbreak
|
( |
x.title , y.title , main.title
|
( |
identity |
( |
identity.params , reg.params
|
( |
reg |
( |
equal.axis |
( |
legend.title |
( |
legend.digits |
( |
A ggplot
based Bland-Altman plot or regression plot that can be
easily customized using additional ggplot
functions.
If you'd like to alter any part that this autoplot
function hasn't
provided, adding other ggplot
statements is suggested.
h_difference()
to see the type details.
mcr::mcreg()
to see the regression parameters.
# Specify the type for difference plot data("platelet") object <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) autoplot(object) autoplot(object, type = "relative") # Set the addition parameters for `geom_point` autoplot(object, type = "relative", jitter = TRUE, fill = "lightblue", color = "grey", size = 2 ) # Set the color and line type for reference and limits of agreement lines autoplot(object, type = "relative", ref.line.params = list(col = "red", linetype = "solid"), loa.line.params = list(col = "grey", linetype = "solid") ) # Set label color, size and digits autoplot(object, type = "absolute", ref.line.params = list(col = "grey"), loa.line.params = list(col = "grey"), label.digits = 2, label.params = list(col = "grey", size = 3, fontface = "italic") ) # Add main title, X and Y axis titles, and adjust X ticks. autoplot(object, type = "absolute", x.nbreak = 6, main.title = "Bland-Altman Plot", x.title = "Mean of Test and Reference Methods", y.title = "Reference - Test" ) # Using the default arguments for regression plot data("platelet") fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, method.reg = "Deming", method.ci = "jackknife" ) autoplot(fit) # Only present the regression line and alter the color and shape. autoplot(fit, identity = FALSE, reg.params = list(col = "grey", linetype = "dashed"), legend.title = FALSE, legend.digits = 4 )
The BAsummary
class is used to display the Bland-Altman analysis and outliers.
BAsummary(call, data, stat, param)
call |
( |
data |
( |
stat |
( |
param |
( |
An object of class BAsummary
.
call
call
data
data
outlier
outlier
param
param
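A BAsummary object is normally produced by blandAltman() rather than by calling the constructor directly. A minimal sketch, reusing the platelet example data from this page and standard S4 slot access (the slot names follow the blandAltman() return value described below):
library(mcradds)
data("platelet")
ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate)
class(ba)                     # "BAsummary"
ba@stat$tab                   # summary statistics table
ba@stat$absolute_diff[1:5]    # first few absolute differences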
Calculates the Bland-Altman related statistics for a specific difference type, such as the difference, limits of agreement and confidence interval. The outlier detection and plotting functions obtain their difference results from this output.
blandAltman(x, y, sid = NULL, type1 = 3, type2 = 5, conf.level = 0.95)
x |
( |
y |
( |
sid |
( |
type1 |
( |
type2 |
( |
conf.level |
( |
An object of BAsummary
class that contains the Bland-Altman analysis.
data
a data frame contains the raw data from the input.
stat
a list containing the summary table (tab
) of the Bland-Altman analysis, a vector (absolute_diff
) of absolute differences and a vector (relative_diff
) of relative differences.
h_difference()
to see the type details.
data("platelet") blandAltman(x = platelet$Comparative, y = platelet$Candidate) # with sample id as input sid blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) # Specifiy the type for difference blandAltman(x = platelet$Comparative, y = platelet$Candidate, type1 = 1, type2 = 4)
data("platelet") blandAltman(x = platelet$Comparative, y = platelet$Candidate) # with sample id as input sid blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) # Specifiy the type for difference blandAltman(x = platelet$Comparative, y = platelet$Candidate, type1 = 1, type2 = 4)
A copy from mcr::calcBias in mcr
package
calcBias(...)
... |
Arguments passed on to
|
Bias and corresponding confidence interval for the specific medical
decision levels (x.levels
).
data(platelet) fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, method.reg = "Deming", method.ci = "jackknife" ) calcBias(fit, x.levels = c(30, 200)) calcBias(fit, x.levels = c(30, 200), type = "proportional") calcBias(fit, x.levels = c(30, 200), type = "proportional", percent = FALSE)
This example calcium
can be used to compute the reference range of
Calcium in 240 medical students by sex.
calcium
A calcium data set contains 240 observations and 3 variables.
Sample id
Measurements from target subjects
Sex group of target subjects
CLSI-EP28A3 Table 4. is cited in this data set.
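A minimal sketch for loading and inspecting the data (the Value column name is taken from the nonparRI()/refInterval() examples on this page; the remaining column names are shown by head()):
library(mcradds)
data("calcium")
head(calcium)             # sample id, measurement value and sex group
summary(calcium$Value)    # the measurements used in the reference interval examples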
This function concatenates inputs like cat()
and prints them with a newline.
cat_with_newline(...)
... |
inputs to concatenate. |
None, only used for the side effect of producing the concatenated output in the R console.
This is similar to cli::cat_line()
.
cat_with_newline("hello", "world")
cat_with_newline("hello", "world")
The Desc
class serves as the store for results from frequency and univariate
statistics analysis.
Desc(func, mat, stat)
func |
( |
mat |
( |
stat |
( |
An object of class Desc
.
func
func
mat
mat
stat
stat
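A Desc object is normally produced by descfreq() or descvar() rather than by calling the constructor directly. A minimal sketch using the adsl_sub example data; slot access with @ and the long/wide mapping of mat and stat are assumptions based on the descriptions on this page, so inspect both slots:
library(mcradds)
data(adsl_sub)
res <- adsl_sub %>% descfreq(var = "SEX", bygroup = "TRTP", format = "xx (xx.x%)")
class(res)   # "Desc"
res@mat      # intermediate data (long form, per the descfreq() return value)
res@stat     # final data (wide form) used for presentation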
Create a summary table for one or more variables by one group, as well as a total column if necessary.
descfreq( data, denom = NULL, var, bygroup, format, fctdrop = FALSE, addtot = FALSE, na_str = NULL )
data |
( |
denom |
( |
var |
( |
bygroup |
( |
format |
( |
fctdrop |
( |
addtot |
( |
na_str |
( |
A Desc
object contains intermediate data in long format for
post-processing and final data in wide format for presentation.
By default, each category is sorted based on the corresponding factor
level of the var
variable. If the variable is not a factor, it will be sorted
alphabetically.
data(adsl_sub) # Count the age group by treatment with 'xx (xx.x%)' format adsl_sub %>% descfreq( var = "AGEGR1", bygroup = "TRTP", format = "xx (xx.x%)" ) # Count the race by treatment with 'xx (xx.xx)' format and replace NA with '0' adsl_sub %>% descfreq( var = "RACE", bygroup = "TRTP", format = "xx (xx.xx)", na_str = "0" ) # Count the sex by treatment adding total column adsl_sub %>% descfreq( var = "SEX", bygroup = "TRTP", format = "xx (xx.x%)", addtot = TRUE ) # Count multiple variables by treatment and sort category by corresponding factor levels adsl_sub %>% dplyr::mutate( AGEGR1 = factor(AGEGR1, levels = c("<65", "65-80", ">80")), SEX = factor(SEX, levels = c("M", "F")), RACE = factor(RACE, levels = c( "WHITE", "AMERICAN INDIAN OR ALASKA NATIVE", "BLACK OR AFRICAN AMERICAN" )) ) %>% descfreq( var = c("AGEGR1", "SEX", "RACE"), bygroup = "TRTP", format = "xx (xx.x%)", addtot = TRUE, na_str = "0" )
Create a summary table with a set of descriptive statistics for one or more variables by one group, as well as a total column if necessary.
descvar( data, var, bygroup, stats = getOption("mcradds.stats.default"), autodecimal = TRUE, decimal = 1, addtot = FALSE, .perctype = 2 )
data |
( |
var |
( |
bygroup |
( |
stats |
( |
autodecimal |
( |
decimal |
( |
addtot |
( |
.perctype |
( |
A Desc
object contains intermediate data in long format for
post-processing and final data in wide format for presentation.
The decimal precision is based on two aspects: one is the original precision
of the variable or the decimal
argument, and the other is the common-use precision that
has been defined in getOption("mcradds.precision.default")
. So if you want to
change the second decimal precision, you can alter it manually with options()
.
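As a hedged illustration of the Details above, the two option names can be inspected and, if needed, overridden with options(); the exact structure of the mcradds.precision.default value depends on the installed mcradds version, so inspect it before changing it:
library(mcradds)
getOption("mcradds.stats.default")       # default statistics reported by descvar()
getOption("mcradds.precision.default")   # common-use decimal precision defaults
# To change the common-use precision, assign a modified copy back, e.g.:
# options(mcradds.precision.default = <edited copy of the value printed above>)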
data(adsl_sub) # Compute the default statistics of AGE by TRTP group adsl_sub %>% descvar( var = "AGE", bygroup = "TRTP" ) # Compute the specific statistics of BMI by TRTP group, adding total column adsl_sub %>% descvar( var = "BMIBL", bygroup = "TRTP", stats = c("N", "MEANSD", "MEDIAN", "RANGE", "IQR"), addtot = TRUE ) # Set extra decimal to define precision adsl_sub %>% descvar( var = "BMIBL", bygroup = "TRTP", stats = c("N", "MEANSD", "MEDIAN", "RANGE", "IQR"), autodecimal = FALSE, decimal = 2, addtot = TRUE ) # Set multiple variables together adsl_sub %>% descvar( var = c("AGE", "BMIBL", "HEIGHTBL"), bygroup = "TRTP", stats = c("N", "MEANSD", "MEDIAN", "RANGE", "IQR"), autodecimal = TRUE, addtot = TRUE )
Creates a 2x2 contingency table from a data frame or matrix for qualitative performance and reader precision in downstream analysis.
diagTab( formula = ~., data, bysort = NULL, dimname = NULL, levels = NULL, rep = FALSE, across = NULL )
formula |
( |
data |
( |
bysort |
( |
dimname |
( |
levels |
( |
rep |
( |
across |
( |
An MCTab
object containing the 2x2 contingency table.
Note that if you would like to generate the 2x2 contingency table for a reproducibility analysis, the original data should be in long format and the corresponding formula should be used.
Summary()
for object to calculate diagnostic accuracy criteria.
# For qualitative performance with wide data structure data("qualData") qualData %>% diagTab(formula = ~ CandidateN + ComparativeN) qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) # For qualitative performance with long data structure dummy <- data.frame( id = c("1001", "1001", "1002", "1002", "1003", "1003"), value = c(1, 0, 0, 0, 1, 1), type = c("Test", "Ref", "Test", "Ref", "Test", "Ref") ) dummy %>% diagTab( formula = type ~ value, bysort = "id", dimname = c("Test", "Ref"), levels = c(1, 0) ) # For Between-Reader precision performance data("PDL1RP") reader <- PDL1RP$btw_reader reader %>% diagTab( formula = Reader ~ Value, bysort = "Sample", levels = c("Positive", "Negative"), rep = TRUE, across = "Site" )
Helper function that detects potential outliers with the Dixon method, following the rules of EP28A3 and the NMPA guideline for establishing reference ranges.
dixon_outlier(x)
x |
( |
A list containing the outliers and the vector without outliers.
x <- c(13.6, 44.4, 45.9, 11.9, 41.9, 53.3, 44.7, 95.2, 44.1, 50.7, 45.2, 60.1, 89.1) dixon_outlier(x)
Perform Rosner's generalized extreme Studentized deviate (ESD) test, which assumes that the distribution is normal (Gaussian), can be used when the number of outliers is unknown, and becomes more robust as the number of samples increases.
ESD_test(x, alpha = 0.05, h = 5)
x |
( |
alpha |
( |
h |
( |
A list class containing the results of the ESD test.
stat
a data frame containing several statistics about the ESD test, including
the index (i
), Mean, SD, raw data (x
), the location (Obs
) in x
, the ESD statistic (ESDi),
Lambda and Outlier (TRUE
or FALSE
).
ord
a vector with the order index of outliers that is equal to Obs
in
the stat
data frame.
The algorithm for determining the number of outliers is as follows:
Compare ESDi with Lambda. If ESDi > Lambda, then the observation is regarded as an outlier.
The order index corresponds to the available x
data after the missing (NA) values have been removed.
Because ESD(h) and ESD(h+1) must be compared for equality, h+1 ESD values will be shown. If they are identical, neither of them can be regarded as an outlier.
CLSI EP09A3 Appendix B. Detecting Aberrant Results (Outliers).
data("platelet") res <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) ESD_test(x = res@stat$relative_diff)
data("platelet") res <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) ESD_test(x = res@stat$relative_diff)
A helper function to find the lambda for all potential outliers in each iteration.
esd.critical(alpha, N, i)
alpha |
( |
N |
( |
i |
( |
a lambda value calculated from the formula.
esd.critical(alpha = 0.05, N = 100, i = 1)
MCTab
Objects
Provides a concise summary of the content of MCTab
objects. Computes
sensitivity, specificity, positive and negative predictive values and positive
and negative likelihood ratios for a diagnostic test with a reference/gold standard.
Computes positive/negative percent agreement, overall percent agreement and Kappa
when the new test is evaluated by comparison to a non-reference standard. Computes
average positive/negative agreement when both tests are not the
reference, such as in paired reader precision.
getAccuracy(object, ...) ## S4 method for signature 'MCTab' getAccuracy( object, ref = c("r", "nr", "bnr"), alpha = 0.05, r_ci = c("wilson", "wald", "clopper-pearson"), nr_ci = c("wilson", "wald", "clopper-pearson"), bnr_ci = "bootstrap", bootCI = c("perc", "norm", "basic", "stud", "bca"), nrep = 1000, rng.seed = NULL, digits = 4, ... )
object |
( |
... |
other arguments to be passed to DescTools::BinomCI. |
ref |
( |
alpha |
( |
r_ci |
( |
nr_ci |
( |
bnr_ci |
( |
bootCI |
( |
nrep |
( |
rng.seed |
( |
digits |
( |
A data frame contains the qualitative diagnostic accuracy criteria with three columns for estimated value and confidence interval.
sens: Sensitivity refers to how often the test is positive when the condition of interest is present.
spec: Specificity refers to how often the test is negative when the condition of interest is absent.
ppv: Positive predictive value refers to the percentage of subjects with a positive test result who have the target condition.
npv: Negative predictive value refers to the percentage of subjects with a negative test result who do not have the target condition.
plr: Positive likelihood ratio refers to the true positive rate divided by the false positive rate, i.e. sensitivity / (1 - specificity).
nlr: Negative likelihood ratio refers to the false negative rate divided by the true negative rate, i.e. (1 - sensitivity) / specificity.
ppa: Positive percent agreement, equals to sensitivity when the candidate method is evaluated by comparison with a comparative method, not reference/gold standard.
npa: Negative percent agreement, equals to specificity when the candidate method is evaluated by comparison with a comparative method, not reference/gold standard.
opa: Overall percent agreement.
kappa: Cohen's kappa coefficient to measure the level of agreement.
apa: Average positive agreement refers to the positive agreements and can be regarded as weighted ppa.
ana: Average negative agreement refers to the negative agreements and can be regarded as weighted npa.
# For qualitative performance data("qualData") tb <- qualData %>% diagTab( formula = ~ CandidateN + ComparativeN, levels = c(1, 0) ) getAccuracy(tb, ref = "r") getAccuracy(tb, ref = "nr", nr_ci = "wilson") # For Between-Reader precision performance data("PDL1RP") reader <- PDL1RP$btw_reader tb2 <- reader %>% diagTab( formula = Reader ~ Value, bysort = "Sample", levels = c("Positive", "Negative"), rep = TRUE, across = "Site" ) getAccuracy(tb2, ref = "bnr") getAccuracy(tb2, ref = "bnr", rng.seed = 12306)
A copy from mcr::getCoefficients in mcr
package
getCoefficients(...)
... |
Arguments passed on to
|
data(platelet) fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, method.reg = "Deming", method.ci = "jackknife" ) getCoefficients(fit)
BAsummary
Object
Detects the potential outliers from the absolute and relative differences in a
BAsummary
object with the 4E and ESD methods.
getOutlier(object, ...) ## S4 method for signature 'BAsummary' getOutlier( object, method = c("ESD", "4E"), difference = c("abs", "rel"), alpha = 0.05, h = 5 )
object |
( |
... |
not used. |
method |
( |
difference |
( |
alpha |
( |
h |
( |
A list containing the statistical results (stat
), the outliers' order index (ord
),
sample id (sid
), the matrix with outliers (outmat
) and the matrix without outliers (rmmat
).
Bland-Altman analysis is used as the input data for both the 4E and ESD methods because the absolute and relative differences must be determined beforehand. For the 4E method, both the absolute and relative differences are required, and an observation is treated as an outlier when its bias exceeds the four-fold limits of both the absolute and relative differences. For the ESD method, only one of them is necessary (the relative difference is recommended), and an observation is treated as an outlier when it is flagged by the ESD test.
data("platelet") # Using `blandAltman` function with default arguments ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba, method = "ESD", difference = "rel") # Using sample id as input ba2 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) getOutlier(ba2, method = "ESD", difference = "rel") # Using `blandAltman` function when the `tyep2` is 2 with `X vs. (Y-X)/X` difference ba3 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, type2 = 4) getOutlier(ba3, method = "ESD", difference = "rel") # Using "4E" as the method input ba4 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba4, method = "4E")
data("platelet") # Using `blandAltman` function with default arguments ba <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba, method = "ESD", difference = "rel") # Using sample id as input ba2 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, sid = platelet$Sample) getOutlier(ba2, method = "ESD", difference = "rel") # Using `blandAltman` function when the `tyep2` is 2 with `X vs. (Y-X)/X` difference ba3 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate, type2 = 4) getOutlier(ba3, method = "ESD", difference = "rel") # Using "4E" as the method input ba4 <- blandAltman(x = platelet$Comparative, y = platelet$Candidate) getOutlier(ba4, method = "4E")
This data set consists of the Glucose intermediate precision data in the CLSI EP05-A3 guideline.
glucose
A glucose data set contains 80 observations and 3 variables.
day number
run number
measurement value
CLSI-EP05A3 Table A1. Glucose Precision Evaluation Measurements (mg/dL) is cited in this data set.
EP05A3: Evaluation of Precision of Quantitative Measurement Procedures.
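A minimal sketch for loading and inspecting the data (the day, run and value columns are the ones used in the anovaVCA() example above):
library(mcradds)
data(glucose)
head(glucose)    # day, run and value
str(glucose)     # 80 observations and 3 variables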
Helper function that computes the difference with a specific type.
h_difference(x, y, type)
x |
( |
y |
( |
type |
( |
a matrix contains the x and y measurement data and corresponding difference.
h_difference(x = c(1.1, 1.2, 1.5), y = c(1.2, 1.3, 1.4), type = 5)
Helper function that factors inputs in order of appearance, or per the levels that you provide.
h_factor(df, var, levels = NULL, ...)
df |
( |
var |
( |
levels |
( |
... |
other arguments to be passed to |
A factor variable
df <- data.frame(a = c("aa", "a", "aa")) h_factor(df, var = "a") h_factor(df, var = "a", levels = c("aa", "a"))
Helper function to format the count and percent into one string.
h_fmt_count_perc(cnt, perc = NULL, format, ...)
cnt |
( |
perc |
( |
format |
( |
... |
other arguments to be passed to formatters::format_value. |
A character vector of formatted counts and percents.
h_fmt_count_perc(cnt = c(5, 9, 12, 110, 0), format = "xx") h_fmt_count_perc( cnt = c(5, 9, 12, 110, 0), perc = c(0.0368, 0.0662, 0.0882, 0.8088, 0), format = "xx (xx.x%)" )
Helper function to format numeric data as strings and concatenate them into a single character string.
h_fmt_est(num1, num2, digits = c(2, 2), width = c(6, 6))
num1 |
( |
num2 |
( |
digits |
( |
width |
( |
A single character.
h_fmt_est(num1 = 3.14, num2 = 3.1415, width = c(4, 4))
Helper function to format numeric data with the formatC
function.
h_fmt_num(x, digits, width = digits + 4)
x |
( |
digits |
( |
width |
( |
A character object with specific digits and width.
h_fmt_num(pi * 10^(-2:2), digits = 2, width = 6)
Helper function to format numeric data as strings and concatenate them into a single character range.
h_fmt_range(num1, num2, digits = c(2, 2), width = c(6, 6))
num1 |
( |
num2 |
( |
digits |
( |
width |
( |
A single character.
h_fmt_range(num1 = 3.14, num2 = 3.14, width = c(4, 4))
Helper function that summarizes the statistics as needed.
h_summarize(x, conf.level = 0.95)
x |
( |
conf.level |
( |
a vector containing several statistics, such as n, mean, median, min, max, q25, q75, sd, se, limits of agreement and confidence interval.
h_summarize(1:50)
This data set consists of the measurements of low-density lipoprotein (LDL), oxidized low-density lipoprotein (OxLDL) and the corresponding diagnosis. OxLDL is thought to be the active molecule in the process of atherosclerosis, so its proponents believe that its serum concentration should provide more accurate risk stratification than the traditional LDL assay.
ldlroc
A ldlroc data set contains 50 observations and 3 variables.
the diagnosis: 1 indicates the subject has the disease or condition of interest (present), 0 indicates it is absent
oxidized low-density lipoprotein (OxLDL) measurement value
low-density lipoprotein (LDL) measurement value
CLSI-EP24A2 Table D1. OxLDL and LDL Assay Values (in U/L) for 50 Subjects.
EP24A2 Assessment of the Diagnostic Accuracy of Laboratory Tests Using Receiver Operating Characteristic Curves.
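A minimal sketch for inspecting the data (the Diagnosis, OxLDL and LDL columns are the ones used in the aucTest() examples on this page):
library(mcradds)
data("ldlroc")
head(ldlroc)
table(ldlroc$Diagnosis)   # 1 = condition present, 0 = absent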
A copy from mcr::mcreg in mcr
package
mcreg(...)
... |
Arguments passed on to
|
A regression fit model.
data(platelet) fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, method.reg = "Deming", method.ci = "jackknife" ) printSummary(fit) getCoefficients(fit)
The MCTab
class serves as the store for the 2x2 contingency table.
MCTab(data, tab, levels)
data |
( |
tab |
( |
levels |
( |
An object of class MCTab
.
data
data
tab
candidate
levels
levels
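An MCTab object is normally produced by diagTab() rather than by calling the constructor directly. A minimal sketch reusing the qualData example from this page:
library(mcradds)
data("qualData")
tb <- qualData %>% diagTab(formula = ~ CandidateN + ComparativeN, levels = c(1, 0))
class(tb)                    # "MCTab"
getAccuracy(tb, ref = "r")   # downstream diagnostic accuracy summary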
This data set shows the rank numbers for computing the confidence interval of the nonparametric reference limit when the sample size is within 119-1000 values. Note that the reference interval must be 95% and the confidence interval 90%.
nonparRanks
A nonparRanks data set contains 882 observations and 3 variables.
sample size
lower rank
upper rank
CLSI-EP28A3 Table 8. is cited in this data set.
EP28-A3c: Defining, Establishing, and Verifying Reference Intervals in the Clinical Laboratory.
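A minimal sketch for inspecting the lookup table (the column order is assumed to match the format listed above; the table is consulted internally by the nonparametric confidence interval method):
library(mcradds)
data("nonparRanks")
head(nonparRanks)           # sample size with corresponding lower and upper ranks
range(nonparRanks[[1]])     # sample sizes covered (119 to 1000)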
This nonparametric method is used to calculate the reference interval when the distribution is skewed and the sample size is at least 120 observations.
nonparRI(x, ind = 1:length(x), conf.level = 0.95)
x |
( |
ind |
( |
conf.level |
( |
a vector of nonparametric reference interval
data("calcium") x <- calcium$Value nonparRI(x)
data("calcium") x <- calcium$Value nonparRI(x)
This dummy data set is from a PD-L1 HE stained study to estimate the reproducibility
of one assay in determining the PD-L1 status of NSCLC tissue specimens. It
contains three sub data sets to compute the reproducibility within reader (one pathologist,
also called a reader here, scores one specimen three times), between readers (three
readers score the same specimen) and between sites (one reader at each of three sites
scores the same specimens). These data sets don't have a reference for each
score, so they can only be used in pairwise comparisons to calculate the APA
,
ANA
and OPA
, which don't rely on a reference.
PDL1RP
The PDL1RP data set contains 3 subsets; each subset includes 150 specimens, 450 observations and 4 variables.
Sample id
Site id
Order of reader scoring
Reader id, the first character represents the site id, and the second character is the reader number
Result of scoring, Positive
or Negative
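A minimal sketch for inspecting the three subsets (the btw_reader element name comes from the diagTab() example on this page; the other element names are listed by names()):
library(mcradds)
data("PDL1RP")
names(PDL1RP)              # the three reproducibility subsets
head(PDL1RP$btw_reader)    # between-reader subset used in the diagTab() example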
Adjusts the cor.test
function so that it can define the specific H0 as per
your request, based on Fisher's Z transformation of the correlation.
pearsonTest( x, y, h0 = 0, conf.level = 0.95, alternative = c("two.sided", "less", "greater"), ... )
x |
( |
y |
( |
h0 |
( |
conf.level |
( |
alternative |
( |
... |
other arguments to be passed to |
a named vector containing the correlation coefficient (cor
), confidence
interval (lowerci
and upperci
), Z statistic (Z
) and p-value (pval
).
NCSS correlation document
cor.test()
to see the detailed arguments.
x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) pearsonTest(x, y, h0 = 0.5, alternative = "greater")
This example platelet
can be used to create a data set comparing
Platelet results from two analyzers in cells.
platelet
A platelet data set contains 120 observations and 3 variables.
Sample id
Measurements from comparative analyzer
Measurements from candidate analyzer
CLSI-EP09 A3 Appendix H, Table H2 is cited in this data set.
From the mcr
package, the mcr::creatinine data set contains serum and plasma
creatinine measurements in mg/dL for each sample.
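A minimal sketch for inspecting the platelet data (the Sample, Comparative and Candidate columns are the ones used throughout the blandAltman() and mcreg() examples on this page):
library(mcradds)
data("platelet")
head(platelet)
str(platelet)    # 120 observations and 3 variables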
A copy from mcr::printSummary in mcr
package
printSummary(...)
... |
Arguments passed on to
|
data(platelet) fit <- mcreg( x = platelet$Comparative, y = platelet$Candidate, method.reg = "Deming", method.ci = "jackknife" ) printSummary(fit)
This simulated data qualData
can be used to calculate the
qualitative performance such as sensitivity and specificity.
qualData
A qualData data set contains 200 observations and 3 variables.
Sample id
Measurements from comparative analyzer with 1=positive
and 0=negative
Measurements from candidate analyzer with 1=positive
and 0=negative
platelet that contains quantitative data comparing platelet results from two analyzers.
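A minimal sketch for inspecting the data and its raw 2x2 counts (the ComparativeN and CandidateN columns are the ones used in the diagTab() examples on this page):
library(mcradds)
data("qualData")
head(qualData)
table(qualData$CandidateN, qualData$ComparativeN)   # raw counts behind diagTab()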
The RefInt
class serves as the store for results in reference
interval calculation.
RefInt(call, method, n, data, outlier, refInt, confInt)
call |
( |
method |
( |
n |
( |
data |
( |
outlier |
( |
refInt |
( |
confInt |
( |
An object of class RefInt
.
call
call
method
method
n
n
data
data
outlier
outlier
refInt
refInt
confInt
confInt
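A RefInt object is normally produced by refInterval() rather than by calling the constructor directly. A minimal sketch reusing the calcium example data (slot access with @ is an assumption based on the slots listed above):
library(mcradds)
data("calcium")
ri <- refInterval(calcium$Value, RI_method = "nonparametric", CI_method = "nonparametric")
class(ri)     # "RefInt"
ri@refInt     # estimated reference interval
ri@confInt    # confidence intervals of the reference limits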
This function is used to establish the reference interval for a target population with parametric, non-parametric and robust methods that follow the CLSI-EP28A3 and NMPA guidelines. In addition, it also provides the corresponding confidence interval for the lower/upper reference limit if needed. Given that outliers should be identified beforehand, the Tukey and Dixon methods can be applied depending on the distribution of the data.
refInterval( x, out_method = c("doxin", "tukey"), out_rm = FALSE, RI_method = c("parametric", "nonparametric", "robust"), CI_method = c("parametric", "nonparametric", "boot"), refLevel = 0.95, bootCI = c("perc", "norm", "basic", "stud", "bca"), confLevel = 0.9, rng.seed = NULL, tol = 1e-06, R = 10000 )
x |
( |
out_method |
( |
out_rm |
( |
RI_method |
( |
CI_method |
( |
refLevel |
( |
bootCI |
( |
confLevel |
( |
rng.seed |
( |
tol |
( |
R |
( |
A RefInt
object contains relevant results in establishing of reference interval.
There are some conditions of use to be aware of:
If the parametric method is used to calculate the reference interval, the confidence interval should use the same method as well.
If the non-parametric method is used to calculate the reference interval and
the sample size is at least 120 observations, the non-parametric method is suggested for the
confidence interval. Otherwise, if the sample size is below 120, the bootstrap
method is the better choice. Besides, the non-parametric method for the confidence
interval only allows the refLevel=0.95
and confLevel=0.9
arguments;
otherwise the bootstrap method will be used automatically.
If the robust method is used to calculate the reference interval, the method for the confidence interval must be bootstrap.
data("calcium") x <- calcium$Value refInterval(x, RI_method = "parametric", CI_method = "parametric") refInterval(x, RI_method = "nonparametric", CI_method = "nonparametric") refInterval(x, RI_method = "robust", CI_method = "boot", R = 1000)
data("calcium") x <- calcium$Value refInterval(x, RI_method = "parametric", CI_method = "parametric") refInterval(x, RI_method = "nonparametric", CI_method = "nonparametric") refInterval(x, RI_method = "robust", CI_method = "boot", R = 1000)
This robust method is used to calculate the reference interval for a small sample size (below 120 observations).
robustRI(x, ind = 1:length(x), conf.level = 0.95, tol = 1e-06)
x |
( |
ind |
( |
conf.level |
( |
tol |
( |
a vector of robust reference interval
This robust algorithm refers to the CLSI document EP28A3.
# This example data is taken from EP28A3 Appendix B. to ensure the result is in accordance. x <- c(8.9, 9.2, rep(9.4, 2), rep(9.5, 3), rep(9.6, 4), rep(9.7, 5), 9.8, rep(9.9, 2), 10.2) robustRI(x)
The SampleSize
class serves as the store for results and parameters in sample
size calculation.
SampleSize(call, method, n, param)
call |
( |
method |
( |
n |
( |
param |
( |
An object of class SampleSize
.
call
call
method
method
n
n
param
param
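A SampleSize object is normally produced by the size_*() functions rather than by calling the constructor directly. A minimal sketch reusing an example from this page (slot access with @ is an assumption based on the slots listed above):
library(mcradds)
res <- size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8)
class(res)   # "SampleSize"
res@n        # required sample size
res@param    # parameters used in the calculation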
A show method that displays essential information of objects.
## S4 method for signature 'SampleSize' show(object) ## S4 method for signature 'MCTab' show(object) ## S4 method for signature 'BAsummary' show(object) ## S4 method for signature 'RefInt' show(object) ## S4 method for signature 'tpROC' show(object) ## S4 method for signature 'Desc' show(object)
object |
( |
None (invisible NULL
), only used for the side effect of printing to
the console.
# Sample size calculation size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8) size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = "greater") # Get 2x2 Contingency Table qualData %>% diagTab(formula = ~ CandidateN + ComparativeN) # Bland-Altman analysis data("platelet") blandAltman(x = platelet$Comparative, y = platelet$Candidate) # Reference Interval data("calcium") refInterval(x = calcium$Value, RI_method = "nonparametric", CI_method = "nonparametric") # Comparing the Paired ROC when Non-inferiority margin <= -0.1 data("ldlroc") aucTest( x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis, method = "non-inferiority", h0 = -0.1 ) data(adsl_sub) # Count multiple variables by treatment adsl_sub %>% descfreq( var = c("AGEGR1", "SEX", "RACE"), bygroup = "TRTP", format = "xx (xx.x%)", addtot = TRUE, na_str = "0" ) # Summarize multiple variables by treatment adsl_sub %>% descvar( var = c("AGE", "BMIBL", "HEIGHTBL"), bygroup = "TRTP", stats = c("N", "MEANSD", "MEDIAN", "RANGE", "IQR"), autodecimal = TRUE, addtot = TRUE )
This function performs sample size computation for testing Pearson's correlation when a lower confidence interval is provided.
size_ci_corr( r, lr, alpha = 0.05, interval = c(10, 1e+05), tol = 1e-05, alternative = c("two.sided", "less", "greater") )
r |
( |
lr |
( |
alpha |
( |
interval |
( |
tol |
( |
alternative |
( |
an object of size
class that contains the sample size and relevant parameters.
Fisher (1973, p. 199).
size_one_prop()
size_ci_one_prop()
size_corr()
size_ci_corr(r = 0.9, lr = 0.85, alpha = 0.025, alternative = "greater")
This function performs sample size computation for testing a given lower confidence interval of one proportion, using the Simple Asymptotic (Wald), Wilson score, Clopper-Pearson and other methods.
size_ci_one_prop( p, lr, alpha = 0.05, interval = c(1, 1e+05), tol = 1e-05, alternative = c("two.sided", "less", "greater"), method = c("simple-asymptotic", "wilson", "wald", "clopper-pearson") )
p |
( |
lr |
( |
alpha |
( |
interval |
( |
tol |
( |
alternative |
( |
method |
( |
an object of size
class that contains the sample size and relevant parameters.
Newcombe, R. G. 1998. 'Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods.' Statistics in Medicine, 17, pp. 857-872.
size_one_prop()
size_corr()
size_ci_corr()
size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = "wilson") size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = "simple-asymptotic") size_ci_one_prop(p = 0.85, lr = 0.8, alpha = 0.05, method = "wald")
This function performs sample size computation for testing Pearson's correlation, using Fisher's classic z-transformation to normalize the distribution of Pearson's correlation coefficient.
size_corr( r1, r0, alpha = 0.05, power = 0.8, alternative = c("two.sided", "less", "greater") )
r1 |
( |
r0 |
( |
alpha |
( |
power |
( |
alternative |
( |
an object of size
class that contains the sample size and relevant parameters.
Fisher (1973, p. 199).
size_one_prop()
size_ci_one_prop()
size_ci_corr()
size_corr(r1 = 0.95, r0 = 0.9, alpha = 0.025, power = 0.8, alternative = "greater")
This function performs sample size computation for testing one proportion in accordance with Chinese NMPA's IVD guideline.
size_one_prop( p1, p0, alpha = 0.05, power = 0.8, alternative = c("two.sided", "less", "greater") )
p1 |
( |
p0 |
( |
alpha |
( |
power |
( |
alternative |
( |
an object of size
class that contains the sample size and relevant parameters.
Chinese NMPA's IVD technical guideline.
size_ci_one_prop()
size_corr()
size_ci_corr()
size_one_prop(p1 = 0.95, p0 = 0.9, alpha = 0.05, power = 0.8)
Provides the confidence interval of Spearman's rank correlation by bootstrap, and defines the specific H0 as per your request, based on Fisher's Z transformation of the correlation but with the variance recommended by Bonett and Wright (2000), which is not the same as Pearson's.
spearmanTest( x, y, h0 = 0, conf.level = 0.95, alternative = c("two.sided", "less", "greater"), nrep = 1000, rng.seed = NULL, ... )
x |
( |
y |
( |
h0 |
( |
conf.level |
( |
alternative |
( |
nrep |
( |
rng.seed |
( |
... |
other arguments to be passed to |
a named vector containing the correlation coefficient (cor
), confidence
interval (lowerci
and upperci
), Z statistic (Z
) and p-value (pval
).
NCSS correlation document
cor.test()
boot::boot()
to see the detailed arguments.
x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c(2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) spearmanTest(x, y, h0 = 0.5, alternative = "greater")
The tpROC
class serves as the store for results in testing the AUC of paired
two-sample assays.
tpROC(testROC, refROC, method, H0, stat)
testROC |
( |
refROC |
( |
method |
( |
H0 |
( |
stat |
( |
An object of class tpROC
.
testROC
testROC
refROC
refROC
method
method
stat
stat
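A tpROC object is normally produced by aucTest() rather than by calling the constructor directly. A minimal sketch reusing the ldlroc example data (slot access with @ is an assumption based on the slots listed above):
library(mcradds)
data("ldlroc")
res <- aucTest(x = ldlroc$LDL, y = ldlroc$OxLDL, response = ldlroc$Diagnosis)
class(res)   # "tpROC"
res@stat     # test statistics for the paired AUC comparison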
Helper function that detects potential outliers with the Tukey method, where values
below Q1-1.5*IQR
or above Q3+1.5*IQR
are flagged.
tukey_outlier(x)
x |
( |
A list containing the outliers and the vector without outliers.
x <- c(13.6, 44.4, 45.9, 14.9, 41.9, 53.3, 44.7, 95.2, 44.1, 50.7, 45.2, 60.1, 89.1) tukey_outlier(x)
A copy from VCA::VCAinference in VCA
package
VCAinference(...)
... |
Arguments passed on to
|
An object of VCAinference
class containing a series of statistics.
data(glucose) fit <- anovaVCA(value ~ day / run, glucose) VCAinference(fit)