The auc_ci function takes an S3 object generated by evalmod and calculates confidence intervals (CIs) of AUCs when multiple test datasets are specified.

auc_ci(curves, alpha = NULL, dtype = NULL)

# S3 method for aucs
auc_ci(curves, alpha = 0.05, dtype = "normal")

Arguments

curves

An S3 object generated by evalmod. The auc_ci function accepts the following S3 objects.

S3 object    # of models    # of test datasets
smcurves     single         multiple
mmcurves     multiple       multiple

See the Value section of evalmod for more details.

alpha

A numeric value of the significance level (default: 0.05).

dtype

A string to specify the distribution used for CI calculation.

dtype               distribution
normal (default)    Normal distribution
z                   Normal distribution
t                   t-distribution
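For example, both arguments can be set explicitly to request 99% intervals based on the t-distribution, which may be preferable when only a few test datasets are available. The sketch below reuses the sample-data helpers from the Examples section; the specific settings are purely illustrative.

## Create an smcurves object (single model, multiple test datasets)
samps <- create_sim_samples(4, 100, 100, "good_er")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]], dsids = samps[["dsids"]])
smcurves <- evalmod(mdat)

## 99% CIs based on the t-distribution instead of the defaults
auc_ci(smcurves, alpha = 0.01, dtype = "t")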

Value

The auc_ci function returns a data frame of AUC CIs, with one row per combination of model name and curve type (see the Examples for the column layout).

See also

evalmod for generating S3 objects with performance evaluation measures. auc for retrieving a dataset of AUCs.

Examples

##################################################
### Single model & multiple test datasets ###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(4, 100, 100, "good_er")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]], dsids = samps[["dsids"]])

## Generate an smcurves object that contains ROC and Precision-Recall curves
smcurves <- evalmod(mdat)

## Calculate CIs of AUCs
sm_auc_cis <- auc_ci(smcurves)

## Show the result
sm_auc_cis
#>   modnames curvetypes      mean      error lower_bound upper_bound n
#> 1  good_er        ROC 0.8032500 0.03138050   0.7718695   0.8346305 4
#> 2  good_er        PRC 0.8336947 0.03031903   0.8033757   0.8640138 4
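Since the result is a plain data frame with the columns shown above, it can be filtered with base R. For instance, to keep only the ROC rows and the interval bounds from the first example:

## Extract the ROC intervals from the result above
subset(sm_auc_cis, curvetypes == "ROC",
       select = c(modnames, mean, lower_bound, upper_bound))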
##################################################
### Multiple models & multiple test datasets ###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(4, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]], dsids = samps[["dsids"]])

## Generate an mmcurves object that contains ROC and Precision-Recall curves
mmcurves <- evalmod(mdat)

## Calculate CIs of AUCs
mm_auc_ci <- auc_ci(mmcurves)

## Show the result
mm_auc_ci
#>    modnames curvetypes      mean      error lower_bound upper_bound n
#> 1    random        ROC 0.4636250 0.02802810   0.4355969   0.4916531 4
#> 2    random        PRC 0.4836572 0.02555646   0.4581007   0.5092137 4
#> 3   poor_er        ROC 0.7943250 0.04363583   0.7506892   0.8379608 4
#> 4   poor_er        PRC 0.7484883 0.05112118   0.6973671   0.7996095 4
#> 5   good_er        ROC 0.8134000 0.01687562   0.7965244   0.8302756 4
#> 6   good_er        PRC 0.8469727 0.01544813   0.8315246   0.8624208 4
#> 7     excel        ROC 0.9795500 0.01153232   0.9680177   0.9910823 4
#> 8     excel        PRC 0.9787671 0.01327986   0.9654872   0.9920469 4
#> 9      perf        ROC 1.0000000 0.00000000   1.0000000   1.0000000 4
#> 10     perf        PRC 1.0000000 0.00000000   1.0000000   1.0000000 4
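The intervals can also be cross-checked against the per-dataset AUCs returned by auc (see See also). The sketch below recomputes a 95% normal-approximation CI by hand for the first example; it assumes auc returns a data frame with curvetypes and aucs columns, and the formula shown (mean +/- qnorm(1 - alpha/2) * sd / sqrt(n)) is an assumption about what dtype = "normal" does, to be verified against the package source.

## Recompute the ROC CI of the first example by hand
## (assumed formula: mean +/- qnorm(1 - alpha/2) * sd / sqrt(n))
roc_aucs <- subset(auc(smcurves), curvetypes == "ROC")[["aucs"]]
m   <- mean(roc_aucs)
err <- qnorm(1 - 0.05 / 2) * sd(roc_aucs) / sqrt(length(roc_aucs))
c(mean = m, lower_bound = m - err, upper_bound = m + err)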