The auc function takes an S3 object generated by evalmod and returns a data frame with the Area Under the Curve (AUC) scores of the ROC and Precision-Recall curves.

Usage

auc(curves)

# S3 method for aucs
auc(curves)

Arguments

curves

An S3 object generated by evalmod. The auc function accepts the following S3 objects.

S3 object    # of models    # of test datasets
sscurves     single         single
mscurves     multiple       single
smcurves     single         multiple
mmcurves     multiple       multiple

See the Value section of evalmod for more details.
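Which of these classes a given evalmod result belongs to can be checked with base R's class(). A minimal sketch, assuming the precrec package and its bundled P10N10 dataset are available:

```r
library(precrec)

## P10N10 ships with precrec: scores and labels for
## 10 positives and 10 negatives
data(P10N10)

## One model evaluated on one test dataset yields an sscurves object,
## so auc() returns two rows: one ROC score and one PRC score
curves <- evalmod(scores = P10N10$scores, labels = P10N10$labels)
class(curves)
</imports>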

Value

The auc function returns a data frame with AUC scores.
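Because the return value is an ordinary data frame (columns modnames, dsids, curvetypes, and aucs), standard base-R tools apply directly. A minimal sketch using a hypothetical data frame that only mimics that shape (aucs_df and its values are made up for illustration):

```r
## Hypothetical stand-in for a data frame returned by auc()
aucs_df <- data.frame(
  modnames   = rep("good_er", 4),
  dsids      = rep(1:2, each = 2),
  curvetypes = rep(c("ROC", "PRC"), 2),
  aucs       = c(0.76, 0.79, 0.82, 0.85)
)

## Mean AUC per curve type across test datasets
## (PRC: (0.79 + 0.85) / 2 = 0.82, ROC: (0.76 + 0.82) / 2 = 0.79)
aggregate(aucs ~ curvetypes, data = aucs_df, FUN = mean)
```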

See also

evalmod for generating S3 objects with performance evaluation measures. pauc for retrieving a dataset of pAUCs.

Examples

##################################################
### Single model & single test dataset ###

## Load a dataset with 10 positives and 10 negatives
data(P10N10)

## Generate an sscurves object that contains ROC and Precision-Recall curves
sscurves <- evalmod(scores = P10N10$scores, labels = P10N10$labels)

## Show AUCs
auc(sscurves)
#>   modnames dsids curvetypes      aucs
#> 1       m1     1        ROC 0.7200000
#> 2       m1     1        PRC 0.7397716
##################################################
### Multiple models & single test dataset ###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(1, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]])

## Generate an mscurves object that contains ROC and Precision-Recall curves
mscurves <- evalmod(mdat)

## Show AUCs
auc(mscurves)
#>    modnames dsids curvetypes      aucs
#> 1    random     1        ROC 0.4841000
#> 2    random     1        PRC 0.5000530
#> 3   poor_er     1        ROC 0.7893000
#> 4   poor_er     1        PRC 0.7541289
#> 5   good_er     1        ROC 0.8090000
#> 6   good_er     1        PRC 0.8397466
#> 7     excel     1        ROC 0.9931000
#> 8     excel     1        PRC 0.9935980
#> 9      perf     1        ROC 1.0000000
#> 10     perf     1        PRC 1.0000000
##################################################
### Single model & multiple test datasets ###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(4, 100, 100, "good_er")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]], dsids = samps[["dsids"]])

## Generate an smcurves object that contains ROC and Precision-Recall curves
smcurves <- evalmod(mdat, raw_curves = TRUE)

## Get AUCs
sm_aucs <- auc(smcurves)

## Show AUCs
sm_aucs
#>   modnames dsids curvetypes      aucs
#> 1  good_er     1        ROC 0.7618000
#> 2  good_er     1        PRC 0.7945817
#> 3  good_er     2        ROC 0.8238000
#> 4  good_er     2        PRC 0.8490360
#> 5  good_er     3        ROC 0.8230000
#> 6  good_er     3        PRC 0.8378528
#> 7  good_er     4        ROC 0.8378000
#> 8  good_er     4        PRC 0.8696040
## Get AUCs of Precision-Recall
sm_aucs_prc <- subset(sm_aucs, curvetypes == "PRC")

## Show AUCs
sm_aucs_prc
#>   modnames dsids curvetypes      aucs
#> 2  good_er     1        PRC 0.7945817
#> 4  good_er     2        PRC 0.8490360
#> 6  good_er     3        PRC 0.8378528
#> 8  good_er     4        PRC 0.8696040
##################################################
### Multiple models & multiple test datasets ###

## Create sample datasets with 100 positives and 100 negatives
samps <- create_sim_samples(4, 100, 100, "all")
mdat <- mmdata(samps[["scores"]], samps[["labels"]],
               modnames = samps[["modnames"]], dsids = samps[["dsids"]])

## Generate an mmcurves object that contains ROC and Precision-Recall curves
mmcurves <- evalmod(mdat, raw_curves = TRUE)

## Get AUCs
mm_aucs <- auc(mmcurves)

## Show AUCs
mm_aucs
#>    modnames dsids curvetypes      aucs
#> 1    random     1        ROC 0.5224000
#> 2    random     1        PRC 0.4987108
#> 3   poor_er     1        ROC 0.8280000
#> 4   poor_er     1        PRC 0.7841504
#> 5   good_er     1        ROC 0.7673000
#> 6   good_er     1        PRC 0.8041642
#> 7     excel     1        ROC 0.9791000
#> 8     excel     1        PRC 0.9840001
#> 9      perf     1        ROC 1.0000000
#> 10     perf     1        PRC 1.0000000
#> 11   random     2        ROC 0.4047000
#> 12   random     2        PRC 0.4245858
#> 13  poor_er     2        ROC 0.7877000
#> 14  poor_er     2        PRC 0.7278862
#> 15  good_er     2        ROC 0.8363000
#> 16  good_er     2        PRC 0.8643072
#> 17    excel     2        ROC 0.9954000
#> 18    excel     2        PRC 0.9953327
#> 19     perf     2        ROC 1.0000000
#> 20     perf     2        PRC 1.0000000
#> 21   random     3        ROC 0.5214000
#> 22   random     3        PRC 0.5358556
#> 23  poor_er     3        ROC 0.7347000
#> 24  poor_er     3        PRC 0.6564273
#> 25  good_er     3        ROC 0.8045000
#> 26  good_er     3        PRC 0.8288849
#> 27    excel     3        ROC 0.9726000
#> 28    excel     3        PRC 0.9723809
#> 29     perf     3        ROC 1.0000000
#> 30     perf     3        PRC 1.0000000
#> 31   random     4        ROC 0.5420000
#> 32   random     4        PRC 0.5116964
#> 33  poor_er     4        ROC 0.8175000
#> 34  poor_er     4        PRC 0.8214220
#> 35  good_er     4        ROC 0.8373000
#> 36  good_er     4        PRC 0.8711125
#> 37    excel     4        ROC 0.9809000
#> 38    excel     4        PRC 0.9768571
#> 39     perf     4        ROC 1.0000000
#> 40     perf     4        PRC 1.0000000
## Get AUCs of Precision-Recall
mm_aucs_prc <- subset(mm_aucs, curvetypes == "PRC")

## Show AUCs
mm_aucs_prc
#>    modnames dsids curvetypes      aucs
#> 2    random     1        PRC 0.4987108
#> 4   poor_er     1        PRC 0.7841504
#> 6   good_er     1        PRC 0.8041642
#> 8     excel     1        PRC 0.9840001
#> 10     perf     1        PRC 1.0000000
#> 12   random     2        PRC 0.4245858
#> 14  poor_er     2        PRC 0.7278862
#> 16  good_er     2        PRC 0.8643072
#> 18    excel     2        PRC 0.9953327
#> 20     perf     2        PRC 1.0000000
#> 22   random     3        PRC 0.5358556
#> 24  poor_er     3        PRC 0.6564273
#> 26  good_er     3        PRC 0.8288849
#> 28    excel     3        PRC 0.9723809
#> 30     perf     3        PRC 1.0000000
#> 32   random     4        PRC 0.5116964
#> 34  poor_er     4        PRC 0.8214220
#> 36  good_er     4        PRC 0.8711125
#> 38    excel     4        PRC 0.9768571
#> 40     perf     4        PRC 1.0000000
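Long-format AUC results like the ones above can also be pivoted into a model-by-dataset matrix with base R's xtabs, which is often easier to scan. A sketch, not part of precrec, using a small made-up data frame in the same shape as an auc() result (values are illustrative only):

```r
## Hypothetical ROC subset of an auc() result, long format
mm_roc <- data.frame(
  modnames = rep(c("random", "excel"), each = 2),
  dsids    = rep(1:2, 2),
  aucs     = c(0.52, 0.40, 0.98, 0.99)
)

## One ROC AUC per model/dataset pair, arranged as a matrix
## (xtabs sums per cell; with one row per cell the sum is the AUC itself)
xtabs(aucs ~ modnames + dsids, data = mm_roc)
```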