I often forget the names, aliases, and formulas of confusion matrix rates and have to look them up. Eventually I had enough and went looking for a single function that could calculate the most commonly used rates, like sensitivity and precision, but I couldn't find one that didn't require installing an R package. So I wrote my own, called table_metrics, and this post briefly describes it.
I have had this Simple guide to confusion matrix terminology bookmarked for many years and I keep referring back to it. It does a great job of explaining the list of rates that are often calculated from a confusion matrix for a binary classifier. If you need a refresher on the confusion matrix rates/metrics, check it out.
We can generate the same confusion matrix as the Simple guide with the following code.
generate_example <- function(){
  # 165 cases: 60 true "no" and 105 true "yes";
  # 10 of the "no" cases and 5 of the "yes" cases are misclassified
  dat <- data.frame(
    n = 1:165,
    truth = c(rep("no", 60), rep("yes", 105)),
    pred = c(rep("no", 50), rep("yes", 10), rep("no", 5), rep("yes", 100))
  )
  table(dat$truth, dat$pred)
}
confusion <- generate_example()
confusion
no yes
no 50 10
yes 5 100
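As a quick refresher, the rates covered in the Simple guide are just ratios of the four cell counts. Below is a minimal sketch using the counts above, with yes as the positive label (TP = 100, TN = 50, FP = 10, FN = 5).
TP <- 100; TN <- 50; FP <- 10; FN <- 5
recall      <- TP / (TP + FN)                                # sensitivity / true positive rate, ~0.952
specificity <- TN / (TN + FP)                                # true negative rate, ~0.833
precision   <- TP / (TP + FP)                                # positive predictive value, ~0.909
accuracy    <- (TP + TN) / (TP + TN + FP + FN)               # ~0.909
f1          <- 2 * precision * recall / (precision + recall) # ~0.93
round(c(recall = recall, specificity = specificity, precision = precision, accuracy = accuracy, f1 = f1), 3)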
I wrote the function confusion_matrix() to generate a confusion matrix from the four case counts. The same confusion matrix can be generated by sourcing the function from GitHub.
source("https://raw.githubusercontent.com/davetang/learning_r/main/code/confusion_matrix.R")
eg <- confusion_matrix(TP=100, TN=50, FN=5, FP=10)
eg$cm
no yes
no 50 10
yes 5 100
To use the table_metrics function I wrote, source it directly from GitHub as well.
source("https://raw.githubusercontent.com/davetang/learning_r/main/code/table_metrics.R")
The function's four main parameters are described below using roxygen2 syntax (copied from the source code of table_metrics).
#' @param tab Confusion matrix of class table
#' @param pos Name of the positive label
#' @param neg Name of the negative label
#' @param truth Where the truth/known set is stored, `row` or `col`
To use table_metrics() on the example data we generated, we provide arguments for these four parameters. The first is the confusion matrix stored as a table. The second and third are the names of the positive and negative labels; the example used yes and no, so those are our arguments. The fourth indicates where the truth labels are stored: ours are in the rows, so 'row' is specified. If your confusion matrix has the predictions in the rows and the truth labels in the columns, use 'col' instead.
table_metrics(confusion, 'yes', 'no', 'row')
$accuracy
[1] 0.909
$misclassifcation_rate
[1] 0.0909
$error_rate
[1] 0.0909
$true_positive_rate
[1] 0.952
$sensitivity
[1] 0.952
$recall
[1] 0.952
$false_positive_rate
[1] 0.167
$true_negative_rate
[1] 0.833
$specificity
[1] 0.833
$precision
[1] 0.909
$prevalance
[1] 0.636
$f1_score
[1] 0.9300032
The function returns a list with the confusion matrix rates/metrics. You can save the list and subset for the rate/metric you are interested in.
my_metrics <- table_metrics(confusion, 'yes', 'no', 'row')
my_metrics$sensitivity
[1] 0.952
Finally, if you want more significant digits (default is set to 3), supply it as the fifth argument.
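For example, something like the call below should return the sensitivity with seven significant digits (a sketch; output not shown).
table_metrics(confusion, 'yes', 'no', 'row', 7)$sensitivity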
I have some additional notes on machine learning evaluation that may also be of interest. And that's it for the main post; the sections below are extra notes on multiclass metrics and on checking table_metrics() against {yardstick}.
Generate labels.
true_label <- factor(c(rep(1, 80), rep(2, 10), rep(3, 10)), levels = 1:3)
true_label
[1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[38] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[75] 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
Levels: 1 2 3
Predictions.
pred_label <- factor(c(2, 3, rep(1, 98)), levels = 1:3)
pred_label
[1] 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[38] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[75] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Levels: 1 2 3
Generate confusion matrix.
cm <- table(truth = true_label, predict = pred_label)
cm
predict
truth 1 2 3
1 78 1 1
2 10 0 0
3 10 0 0
Using yardstick::f_meas.
if(!require("yardstick")){
install.packages("yardstick")
}
Loading required package: yardstick
Attaching package: 'yardstick'
The following object is masked from 'package:readr':
spec
yardstick::f_meas(cm)
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 f_meas macro 0.292
Using f_meas_vec().
yardstick::f_meas_vec(truth = true_label, estimate = pred_label)
[1] 0.2921348
High accuracy but low \(F_1\).
yardstick::accuracy(cm)
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy multiclass 0.78
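To see why the macro \(F_1\) is so low, here is a minimal sketch that computes the per-class \(F_1\) by hand from cm (truth in rows, predictions in columns) and averages it, treating an undefined per-class \(F_1\) as zero.
per_class_f1 <- sapply(seq_len(nrow(cm)), function(k){
  tp        <- cm[k, k]
  precision <- tp / sum(cm[, k])  # cases predicted as class k
  recall    <- tp / sum(cm[k, ])  # cases truly of class k
  f1        <- 2 * precision * recall / (precision + recall)
  ifelse(is.nan(f1), 0, f1)       # classes 2 and 3 have no correct predictions
})
mean(per_class_f1)                # ~0.292, matching the macro estimate above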
Double-check that the table_metrics() calculations are correct.
true_label <- factor(c(rep(1, 90), rep(2, 10)), levels = 1:2)
pred_label <- factor(rep(1, 100), levels = 1:2)
cm <- table(truth = true_label, predict = pred_label)
cm
predict
truth 1 2
1 90 0
2 10 0
Calculate metrics.
cm_metrics <- table_metrics(cm, 1, 2, 'row')
Compare with {yardstick}, starting with accuracy.
cm_metrics$accuracy
[1] 0.9
yardstick::accuracy(cm)$.estimate
[1] 0.9
F1 score.
cm_metrics$f1_score
[1] 0.9473684
yardstick::f_meas(cm)$.estimate
[1] 0.9473684
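A quick check by hand: with class 1 as the positive label, precision is 90/100 = 0.9 and recall is 90/90 = 1.
2 * 0.9 * 1 / (0.9 + 1)
[1] 0.9473684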
Specificity.
cm_metrics$specificity
[1] 0
yardstick::specificity(cm)$.estimate
Warning: While computing binary `spec()`, no true negatives were detected (i.e.
`true_negative + false_positive = 0`).
Specificity is undefined in this case, and `NA` will be returned.
Note that 10 predicted negatives(s) actually occurred for the problematic event
level, 1
[1] NA
Note the specificity discrepancy above (0 versus NA) and the sensitivity difference below; both arise because {yardstick} expects the true class results to be in the columns of the table, whereas we have them in the rows.
cm_metrics$recall
[1] 1
yardstick::recall(cm)$.estimate
[1] 0.9
yardstick::sensitivity(cm)$.estimate
[1] 0.9
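As a sketch, transposing the table so that the truth ends up in the columns (the orientation {yardstick} expects) should give the same answer as table_metrics().
yardstick::sensitivity(as.table(t(cm)))$.estimate  # expected to be 1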
If we provide the label vectors directly instead of the table, the sensitivity is calculated correctly.
yardstick::sensitivity_vec(true_label, pred_label)
[1] 1
Same for precision.
cm_metrics$precision
[1] 0.9
yardstick::precision_vec(true_label, pred_label)
[1] 0.9
Install Palmer Archipelago (Antarctica) Penguin Data.
if(!require("palmerpenguins")){
install.packages("palmerpenguins")
}
Loading required package: palmerpenguins
library(dplyr)
library(palmerpenguins)
palmerpenguins::penguins |>
select(contains("_"), species) |>
na.omit() |>
group_by(species) |>
mutate(species_n = row_number()) -> dat
head(dat)
# A tibble: 6 × 6
# Groups: species [1]
bill_length_mm bill_depth_mm flipper_length_mm body_mass_g species species_n
<dbl> <dbl> <int> <int> <fct> <int>
1 39.1 18.7 181 3750 Adelie 1
2 39.5 17.4 186 3800 Adelie 2
3 40.3 18 195 3250 Adelie 3
4 36.7 19.3 193 3450 Adelie 4
5 39.3 20.6 190 3650 Adelie 5
6 38.9 17.8 181 3625 Adelie 6
Number of penguins per species.
table(dat$species)
Adelie Chinstrap Gentoo
151 68 123
Calculate 80% of each species' count to use as the training/testing threshold.
dat |>
group_by(species) |>
summarise(thres = floor(.8 * n())) -> thres
thres
# A tibble: 3 × 2
species thres
<fct> <dbl>
1 Adelie 120
2 Chinstrap 54
3 Gentoo 98
Training and testing data.
dat |>
group_by(species) |>
inner_join(y = thres, by = "species") |>
filter(species_n < thres) |>
select(-c(species_n, thres)) -> training
dat |>
group_by(species) |>
inner_join(y = thres, by = "species") |>
filter(species_n >= thres) |>
select(-c(species_n, thres)) -> testing
stopifnot(nrow(rbind(training, testing)) == nrow(dat))
Decision tree.
if(!require("tree")){
install.packages("tree")
}
Loading required package: tree
library(tree)
fit <- tree(species ~ ., data = training)
pred <- predict(fit, testing, type = "class")
tab <- table(predict = pred, truth = testing$species)
tab
truth
predict Adelie Chinstrap Gentoo
Adelie 29 1 0
Chinstrap 3 14 1
Gentoo 0 0 25
Sensitivity, where the estimator is described in the {yardstick} documentation as:
One of: “binary”, “macro”, “macro_weighted”, or “micro” to specify the type of averaging to be done. “binary” is only relevant for the two class case. The other three are general methods for calculating multiclass metrics. The default will automatically choose “binary” or “macro” based on estimate.
yardstick::sensitivity(tab, estimator = "macro")$.estimate
[1] 0.9337073
yardstick::sensitivity(tab, estimator = "macro_weighted")$.estimate
[1] 0.9315068
yardstick::sensitivity(tab, estimator = "micro")$.estimate
[1] 0.9315068
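Here is a minimal sketch of the three averaging schemes computed by hand from tab (predictions in rows, truth in columns). Note that for sensitivity, weighting the per-class values by class size gives the same result as the micro average.
per_class <- diag(tab) / colSums(tab)   # per-species sensitivity (recall)
mean(per_class)                         # macro
weighted.mean(per_class, colSums(tab))  # macro_weighted (weights are the class sizes)
sum(diag(tab)) / sum(tab)               # micro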
See vignette("multiclass", "yardstick").
yardstick::f_meas(tab)
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 f_meas macro 0.921
sessionInfo()
R version 4.4.0 (2024-04-24)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so; LAPACK version 3.10.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
time zone: Etc/UTC
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] tree_1.0-43 palmerpenguins_0.1.1 yardstick_1.3.1
[4] lubridate_1.9.3 forcats_1.0.0 stringr_1.5.1
[7] dplyr_1.1.4 purrr_1.0.2 readr_2.1.5
[10] tidyr_1.3.1 tibble_3.2.1 ggplot2_3.5.1
[13] tidyverse_2.0.0 workflowr_1.7.1
loaded via a namespace (and not attached):
[1] sass_0.4.9 utf8_1.2.4 generics_0.1.3 stringi_1.8.4
[5] hms_1.1.3 digest_0.6.35 magrittr_2.0.3 timechange_0.3.0
[9] evaluate_0.23 grid_4.4.0 fastmap_1.2.0 rprojroot_2.0.4
[13] jsonlite_1.8.8 processx_3.8.4 whisker_0.4.1 ps_1.7.6
[17] promises_1.3.0 httr_1.4.7 fansi_1.0.6 scales_1.3.0
[21] jquerylib_0.1.4 cli_3.6.2 rlang_1.1.3 munsell_0.5.1
[25] withr_3.0.0 cachem_1.1.0 yaml_2.3.8 tools_4.4.0
[29] tzdb_0.4.0 colorspace_2.1-0 httpuv_1.6.15 vctrs_0.6.5
[33] R6_2.5.1 lifecycle_1.0.4 git2r_0.33.0 fs_1.6.4
[37] pkgconfig_2.0.3 callr_3.7.6 pillar_1.9.0 bslib_0.7.0
[41] later_1.3.2 gtable_0.3.5 glue_1.7.0 Rcpp_1.0.12
[45] xfun_0.44 tidyselect_1.2.1 rstudioapi_0.16.0 knitr_1.46
[49] htmltools_0.5.8.1 rmarkdown_2.27 compiler_4.4.0 getPass_0.2-4