Last updated: 2026-04-28

Checks: 7 passed, 0 failed

Knit directory: muse/

This reproducible R Markdown analysis was created with workflowr (version 1.7.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version f77a50a. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rproj.user/
    Ignored:    data/1M_neurons_filtered_gene_bc_matrices_h5.h5
    Ignored:    data/293t/
    Ignored:    data/293t_3t3_filtered_gene_bc_matrices.tar.gz
    Ignored:    data/293t_filtered_gene_bc_matrices.tar.gz
    Ignored:    data/5k_Human_Donor1_PBMC_3p_gem-x_5k_Human_Donor1_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
    Ignored:    data/5k_Human_Donor2_PBMC_3p_gem-x_5k_Human_Donor2_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
    Ignored:    data/5k_Human_Donor3_PBMC_3p_gem-x_5k_Human_Donor3_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
    Ignored:    data/5k_Human_Donor4_PBMC_3p_gem-x_5k_Human_Donor4_PBMC_3p_gem-x_count_sample_filtered_feature_bc_matrix.h5
    Ignored:    data/97516b79-8d08-46a6-b329-5d0a25b0be98.h5ad
    Ignored:    data/Parent_SC3v3_Human_Glioblastoma_filtered_feature_bc_matrix.tar.gz
    Ignored:    data/brain_counts/
    Ignored:    data/cl.obo
    Ignored:    data/cl.owl
    Ignored:    data/jurkat/
    Ignored:    data/jurkat:293t_50:50_filtered_gene_bc_matrices.tar.gz
    Ignored:    data/jurkat_293t/
    Ignored:    data/jurkat_filtered_gene_bc_matrices.tar.gz
    Ignored:    data/pbmc20k/
    Ignored:    data/pbmc20k_seurat/
    Ignored:    data/pbmc3k.csv
    Ignored:    data/pbmc3k.csv.gz
    Ignored:    data/pbmc3k.h5ad
    Ignored:    data/pbmc3k/
    Ignored:    data/pbmc3k_bpcells_mat/
    Ignored:    data/pbmc3k_export.mtx
    Ignored:    data/pbmc3k_matrix.mtx
    Ignored:    data/pbmc3k_seurat.rds
    Ignored:    data/pbmc4k_filtered_gene_bc_matrices.tar.gz
    Ignored:    data/pbmc_1k_v3_filtered_feature_bc_matrix.h5
    Ignored:    data/pbmc_1k_v3_raw_feature_bc_matrix.h5
    Ignored:    data/refdata-gex-GRCh38-2020-A.tar.gz
    Ignored:    data/seurat_1m_neuron.rds
    Ignored:    data/t_3k_filtered_gene_bc_matrices.tar.gz
    Ignored:    r_packages_4.5.2/

Untracked files:
    Untracked:  .claude/
    Untracked:  CLAUDE.md
    Untracked:  analysis/.claude/
    Untracked:  analysis/aucc.Rmd
    Untracked:  analysis/bimodal.Rmd
    Untracked:  analysis/bioc.Rmd
    Untracked:  analysis/bioc_scrnaseq.Rmd
    Untracked:  analysis/chick_weight.Rmd
    Untracked:  analysis/likelihood.Rmd
    Untracked:  analysis/modelling.Rmd
    Untracked:  analysis/sampleqc.Rmd
    Untracked:  analysis/wordpress_readability.Rmd
    Untracked:  bpcells_matrix/
    Untracked:  data/Caenorhabditis_elegans.WBcel235.113.gtf.gz
    Untracked:  data/GCF_043380555.1-RS_2024_12_gene_ontology.gaf.gz
    Untracked:  data/SC3pv3_GEX_Human_PBMC_filtered_feature_bc_matrix.h5
    Untracked:  data/SC3pv3_GEX_Human_PBMC_raw_feature_bc_matrix.h5
    Untracked:  data/SeuratObj.rds
    Untracked:  data/arab.rds
    Untracked:  data/astronomicalunit.csv
    Untracked:  data/davetang039sblog.WordPress.2026-02-12.xml
    Untracked:  data/femaleMiceWeights.csv
    Untracked:  data/lung_bcell.rds
    Untracked:  m3/
    Untracked:  output/decontx_corrected.rds
    Untracked:  output/dropletutils_cells.rds
    Untracked:  output/soupx_corrected.rds
    Untracked:  women.json

Unstaged changes:
    Modified:   analysis/isoform_switch_analyzer.Rmd

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/cell_ranger_summary.Rmd) and HTML (docs/cell_ranger_summary.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
Rmd f77a50a Dave Tang 2026-04-28 Cell Ranger web summary

Introduction

The first thing to look at after cellranger count finishes is the web summary (web_summary.html) and the metrics CSV (metrics_summary.csv) it is built from. Almost every conclusion you can draw from a 10x experiment depends on whether the upstream metrics are in good shape; a poorly QC’d library will silently undermine every downstream step — from cell calling, to ambient correction (see SoupX, DecontX, and the DropletUtils notebook), to clustering and differential expression.

This notebook is a reference for what each metric in the web summary means, what value you should expect from a good-quality library, why a metric might come in lower than expected, and what to do about it. It is built from the official 10x technical notes plus the patterns we have seen interpreting the SC3pv3_GEX_Human_PBMC dataset across the SoupX/DecontX/DropletUtils notebooks in this project.

The expected ranges below are guidelines, not hard cutoffs. They are calibrated for standard 10x Chromium 3’/5’ Single Cell Gene Expression on whole cells from a well-prepared mammalian sample (human or mouse PBMC is the cleanest case). Tissues, nuclei, and non-standard chemistries shift several of these numbers — those differences are flagged where they matter.

The web summary at a glance

A Cell Ranger web summary has three groups of metrics that should be read together:

  1. Sequencing metrics — how good is the raw sequencing data feeding the pipeline?
  2. Mapping metrics — where in the genome did the reads end up?
  3. Cell metrics — how many cells were called, and how concentrated is the read mass in those cells?

Cell Ranger colour-codes any metric outside its expected range (yellow warning, red alert) and lists every failed metric in the Alerts banner at the top. A green report is rare in practice — most real samples have at least one yellow flag — but how many yellow flags and which ones is a strong signal of overall library quality.

Sequencing metrics

These metrics describe the raw sequencing data before any biology is done.

Q30 Bases in Barcode / UMI / RNA Read

The fraction of base calls with a Phred quality score ≥ 30 (one error in 1000 bases) on each of the three reads (cell barcode, UMI, and RNA read).

Metric Good Borderline Concerning
Q30 in Barcode ≥ 95% 90–95% < 90%
Q30 in UMI ≥ 95% 90–95% < 90%
Q30 in RNA Read ≥ 85% 75–85% < 75%

The RNA read (typically R2) is allowed to drop a little because the second read of a paired-end sequencing run usually has lower quality. The barcode and UMI reads (typically R1) are short and at the start of the run, so they should be very high.
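The Phred scale behind these Q30 figures is easy to sanity-check by hand. A minimal sketch (the helper name is ours, not part of any package):

```r
# Phred quality: Q = -10 * log10(P_error), so Q30 means a 1-in-1000 error rate.
phred_to_perror <- function(q) 10^(-q / 10)

phred_to_perror(c(20, 30, 40))
#> [1] 1e-02 1e-03 1e-04

# Probability that a 16 bp barcode carries at least one sequencing error
# when every base is exactly Q30:
1 - (1 - phred_to_perror(30))^16
#> ~0.016, i.e. roughly 1.6% of barcodes need correction or discarding
```

This is why even a "good" 95% Q30 library still leans on Cell Ranger's one-mismatch barcode correction.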

Why might Q30 be low? Old flow cell, sequencer maintenance issue, library overloaded onto the lane, very low cluster density, or a bad sequencing run upstream. Cell Ranger’s barcode-correction step recovers some of the loss, but if Q30 in Barcode is truly low you will also see a low Valid Barcodes percentage.

Valid Barcodes

Fraction of reads whose 16 bp cell barcode matches the 10x inclusion list (a known list of barcodes for the chemistry in use), after a one-mismatch correction.

Good Borderline Concerning
≥ 90% 75–90% < 75%

10x’s documented warning threshold is 75%. Good libraries typically clear 90%.

Why might Valid Barcodes be low? Sequencing quality issue (correlated with low Q30 in Barcode), a chemistry mismatch (wrong --chemistry argument), or a library prep problem.

Valid UMIs

Fraction of reads whose 12 bp UMI does not contain an N and is not a homopolymer.

Good Borderline Concerning
≥ 99% 95–99% < 95%

This is almost always close to 100%; a low value is a sequencing-quality red flag.

Sequencing Saturation

The fraction of confidently mapped reads that originate from a UMI that has already been seen. High saturation means most unique transcripts have been captured and additional sequencing yields diminishing returns; low saturation means deeper sequencing would still discover new molecules.

Saturation Interpretation
< 30% Severely under-sequenced; sequence deeper before drawing conclusions.
30–60% Acceptable for exploratory or comparative analyses.
60–80% Typical mature library — most studies aim here.
> 80% Saturated; sequencing more is essentially wasted spend.

There is no universally correct value: saturation should be high enough that the median UMI count per cell is close to its asymptote for your tissue, which usually means somewhere in 60–80% for whole-cell prep.
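Per 10x's documentation, the metric itself is computed as one minus the ratio of unique (barcode, UMI, gene) combinations to total confidently mapped reads. A sketch with illustrative numbers:

```r
# Sequencing saturation as 10x documents it:
# 1 - (n_deduped_reads / n_reads), where n_deduped_reads is the number of
# unique (cell barcode, UMI, gene) combinations among confidently mapped reads.
saturation <- function(n_reads, n_deduped) 1 - n_deduped / n_reads

# e.g. 100M confidently mapped reads collapsing to 35M unique molecules:
saturation(n_reads = 100e6, n_deduped = 35e6)
#> 0.65, inside the typical 60-80% band
```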

Why might saturation be low? Insufficient sequencing depth (the most common cause), library complexity higher than expected (rich tissue with many distinct transcripts), or under-loaded cell count (fewer cells means each one gets more reads).

Mean Reads per Cell

Total confidently mapped read pairs divided by the number of called cells.

Good Borderline Concerning
≥ 20,000 10,000–20,000 < 10,000

10x’s documented minimum is 20,000 read pairs per cell. Most published work targets 25,000–50,000; some applications (rare-variant detection, isoform analysis) target 100,000+. Mean reads per cell ties directly to sequencing saturation and to the median UMI/gene counts you can expect downstream.
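Working the arithmetic the other way is useful when planning a run. A back-of-envelope sketch (the target and cell numbers are illustrative, not recommendations):

```r
# How many read pairs to order for a target per-cell depth:
target_reads_per_cell <- 50000   # mid-range published target
expected_cells        <- 10000   # planned recovery

total_read_pairs <- target_reads_per_cell * expected_cells
format(total_read_pairs, big.mark = ",", scientific = FALSE)
#> "500,000,000"
```

Half a billion read pairs for a single 10k-cell channel at 50k reads per cell: this is why Mean Reads per Cell is usually the first metric to suffer when a sequencing budget is split across too many samples.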

Mapping metrics

These metrics describe where the reads aligned in the reference genome and transcriptome.

Reads Mapped to Genome

Fraction of all reads that aligned somewhere in the genome.

Good Borderline Concerning
≥ 90% 80–90% < 80%

10x’s documented threshold for this metric is 85% for human/mouse. Anything significantly lower indicates a serious upstream issue.

Why might this be low? Wrong reference genome, contamination from a different species, residual rRNA in the library, adapter or polyA contamination that survived trimming, or a fundamentally low-quality library.

Reads Mapped Confidently to Genome

Fraction of all reads aligned uniquely to the genome (Cell Ranger uses MAPQ = 255 as its “confident” cutoff). Multi-mapping reads — to repetitive elements, paralogues, etc. — are excluded.

Good Borderline Concerning
≥ 80% 70–80% < 70%

The gap between “Mapped” and “Confidently Mapped” tells you what fraction of reads went to repetitive sequence; on a typical mammalian library it is 5–10%.

Reads Mapped Confidently to Transcriptome

Fraction of all reads that aligned uniquely to a known transcript on the correct strand. This is the metric that determines how much of your sequencing budget actually contributed to gene quantification.

Good Borderline Concerning
≥ 50% 30–50% < 30%

10x lists 30% as the floor. Typical good libraries land in 50–80% for whole-cell preps from well-annotated genomes; nuclei preps run lower because most reads come from pre-mRNA.

Confidently Mapped to Exonic / Intronic / Intergenic

These three sum to “Confidently Mapped to Genome”. They tell you what kind of sequence the reads represent.

Region Whole-cell good Whole-cell concerning Nuclei good
Exonic 50–80% < 50% 30–60%
Intronic 5–15% > 25% (for cells) 30–60%
Intergenic 2–5% > 8% 2–5%

From Cell Ranger 7 onwards --include-introns=true is the default, so intronic reads are now included in the gene count — this means whole-cell intronic percentages of 10–15% are normal for current pipelines. They were lower in older versions because intronic reads were being thrown away.

Why might intergenic be elevated? Genomic DNA contamination of the library, mis-annotated reference, non-poly(A) priming artefacts, or fragmented transcripts whose UTRs were lost.

Why might intronic be elevated unexpectedly (in a whole-cell prep)? Some pre-mRNA / nuclear leakage from damaged cells, or inclusion of intronic counts where you were not expecting it. Some cell types — neutrophils for example — naturally have unusually high intron retention.

Antisense Reads

Fraction of confidently mapped reads that aligned on the opposite strand of a known gene with no sense-strand alignment.

Good Borderline Concerning
1–3% 3–6% > 6%

10x 3’ libraries are stranded; antisense reads are not expected. Some always sneak through.

Why might antisense be elevated?

  • Internal poly(T) priming on genomic regions with A-rich stretches in introns or 3’ UTRs of antisense genes — common in stressed or damaged cells with degraded mRNA.
  • Template-switching artefacts during reverse transcription.
  • Genuine antisense transcription in some tissues, but at most a couple of percent.
  • Compromised input — a damaged/lysing cell suspension produces more degradation and therefore more antisense.

Elevated antisense (>6–8%) combined with elevated intergenic (>5–7%) is a recognisable signature of a sample where some fraction of the input was damaged before the chip was run.

Cell metrics

These metrics describe Cell Ranger’s call about which barcodes are cells and what those cells look like.

Estimated Number of Cells

The number of barcodes Cell Ranger called as cells. This should match what you expected from the loading.

For non-HT 3’ v3 / v3.1 chemistry, the practical recovery ranges are:

Loaded cells Expected recovered cells Expected multiplet rate
1,000 ~600 < 1%
5,000 ~3,000 ~4%
10,000 ~6,000 ~8%
20,000 ~12,000 ~15%
> 30,000 hard ceiling > 25% (data unusable)

For the HT chemistries (SC3Pv3HT, SC5PHT) the ceiling is much higher — up to ~60,000 recovered cells per channel.

If the cell count is much higher than expected: the algorithm may be over-calling (often a high-ambient signature). Inspect the barcode rank plot — see the DropletUtils notebook. Cells far below the inflection point are suspicious.

If the cell count is much lower than expected: the algorithm may be under-calling, or the prep produced fewer healthy cells than loaded. The barcode rank plot tells you which: a clean knee at a low cell count means the cells weren’t there; a messy curve means the algorithm struggled.

Fraction Reads in Cells

Of the confidently mapped reads, the fraction that came from barcodes called as cells.

Good Borderline Concerning
≥ 80% 70–80% < 70%

10x’s documented warning threshold is 70%. Good libraries from clean preps typically clear 80% and often 90%+.

A low Fraction Reads in Cells means a non-trivial share of read mass landed on barcodes Cell Ranger called as empty. There are two possible causes (Cell Ranger’s own alert text lists both):

  1. High ambient RNA — the soup is rich enough that empty droplets carry meaningful read mass.
  2. A real population of low-RNA cells the algorithm missed — those reads then count as “non-cell” by construction.

Disambiguating these matters, because the corrective actions are opposite: option 2 calls for --force-cells to capture more barcodes; option 1 calls for not doing that, because the missing read mass is in barcodes with ambient-like profiles. The diagnostics in the DropletUtils notebook (barcode rank plot, mitochondrial percentage of disagreement sets, gene-count distributions) are the cheapest way to tell which is which.
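A toy simulation (entirely made-up numbers, base R only) makes the ambient mechanism concrete: the same called cells lose Fraction Reads in Cells as the soup gets richer, with no change in the cells themselves.

```r
set.seed(20200712)
n_cells <- 5000
n_empty <- 50000

# Called cells: overdispersed counts around 8,000 UMIs each
cell_counts <- rnbinom(n_cells, mu = 8000, size = 2)

# Fraction Reads in Cells for a given mean ambient load per empty droplet
frac_in_cells <- function(empty_mu) {
  empty_counts <- rnbinom(n_empty, mu = empty_mu, size = 1)
  sum(cell_counts) / (sum(cell_counts) + sum(empty_counts))
}

round(c(clean = frac_in_cells(20), leaky = frac_in_cells(200)), 3)
#> clean ~0.98, leaky ~0.80: a 10x richer soup alone drags the metric
#> from comfortably green to below the 70% alert threshold's neighbourhood
```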

Median UMI Counts per Cell

The median total UMI count among called cells.

Sample type Good Concerning
PBMC (whole-cell) 3,000–10,000 < 1,000
Solid tissue (cells) 1,000–5,000 < 500
Nuclei (snRNA-seq) 1,000–5,000 < 500
Stressed or damaged input of any prep type typically falls below 1,000 regardless of depth.

This metric scales with sequencing depth, so always read it together with Mean Reads per Cell and Sequencing Saturation. Low UMI per cell at low saturation means “sequence deeper”; low UMI per cell at high saturation means “the cells genuinely don’t have much mRNA”.
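That decision rule is small enough to write down. The thresholds below are the illustrative ones used in this notebook, not 10x cutoffs, and the function name is ours:

```r
# Rough triage for a low median UMI count (thresholds are illustrative):
diagnose_low_umi <- function(median_umi, saturation) {
  if (median_umi >= 1000) return("UMI count acceptable for this prep")
  if (saturation < 0.6) {
    "under-sequenced: deeper sequencing should raise UMI per cell"
  } else {
    "near saturation: the cells genuinely carry little mRNA"
  }
}

diagnose_low_umi(median_umi = 800, saturation = 0.35)
diagnose_low_umi(median_umi = 800, saturation = 0.85)
```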

Median Genes per Cell

The median number of genes with at least one UMI in each called cell.

Sample type Good Concerning
PBMC (whole-cell) 1,000–3,000+ < 500
Solid tissue 500–2,500 < 300
Nuclei (snRNA-seq) 500–2,500 < 300

Median genes per cell scales sub-linearly with UMI per cell (the relationship saturates). Drops here without a corresponding drop in UMI per cell suggest the data is dominated by a small set of highly-expressed genes — typical of stressed or apoptotic cells producing lots of mitochondrial / heat-shock / immediate-early gene transcripts.

Total Genes Detected

The number of genes detected in any cell, across the whole sample.

Good for human/mouse Concerning
15,000–25,000+ < 10,000

This is one of the more forgiving metrics — almost any well-prepared mammalian sample at reasonable depth will detect 15–25k genes.

Reading a web summary holistically

Single metrics rarely tell you what is going on; combinations do. Here is a working example using the metrics from a real sample we discussed in this project.

Metric Value Expected Verdict
Reads Mapped to Genome 97.0% ≥ 90% Excellent
Confidently Mapped to Genome 87.2% ≥ 80% Good
Confidently Mapped to Transcriptome 70.2% ≥ 50% Good
Confidently Mapped to Exonic 69.1% 50–80% Good
Confidently Mapped to Intronic 10.1% 5–15% Borderline
Confidently Mapped to Intergenic 7.9% 2–5% Elevated
Antisense 8.7% 1–3% Elevated
Fraction Reads in Cells 64.7% ≥ 70% Low alert
Estimated Number of Cells 11,245 matches loading OK

Read in isolation, each metric tells a slightly different story; read together, they are a coherent signature of one underlying problem. The mapping rates (genome, transcriptome, exonic) are all healthy, so the sequencing and upstream alignment are fine. But two metrics push consistently in the same direction — antisense at 8.7% and intergenic at 7.9% — and together they point at compromised input: stressed or partially-degraded cells contribute reads that are stranded inconsistently and prime in non-standard places. The Fraction Reads in Cells at 64.7% then closes the loop: 35% of read mass sits on non-cell barcodes, which is exactly what you would expect when (a) the soup is rich because damaged cells have leaked RNA into the supernatant, and/or (b) some fraction of dying cells were rejected by the cell-calling step but still carry significant read mass.

The same dataset showed emptyDrops() and CellBender each adding roughly 35,000 “extra” cells on top of Cell Ranger’s 11,245 — a 4× inflation. That is consistent with the same underlying issue: the EmptyDrops-style test asks “does this barcode differ from ambient?”, and damaged-but-not-fully-empty barcodes pass that test even though they are not viable cells. The web summary signature (elevated antisense + elevated intergenic + low Fraction Reads in Cells) is the upstream confirmation that this is what is happening.

The general lesson is: pair web-summary metrics with downstream diagnostics. A low Fraction Reads in Cells alert can mean either “missed cells” or “leaked RNA”, and Cell Ranger’s alert text lists both. The web summary alone cannot disambiguate them; the per-cell mitochondrial percentage and gene-count distributions of the disagreement sets can.

Common alert patterns

Alert pattern Most likely cause First thing to do
Low Fraction Reads in Cells alone High ambient or genuinely missed cells Inspect barcode rank plot and disagreement-set mito-%.
Low Q30 in Barcode + Low Valid Barcodes Sequencing-quality issue Check the sequencer run report; consider re-sequencing.
Low Reads Mapped to Genome Wrong reference, contamination, or adapter/polyA leak Verify the reference and the chemistry; check FastQC for adapter content.
Elevated antisense + elevated intergenic Damaged or partially-degraded input Tighten dissociation; inspect mitochondrial-% per cell.
Estimated cells far above expected loading Over-calling driven by ambient Re-call with a stricter lower bound, or use emptyDropsCellRanger() with an FDR cutoff.
Estimated cells far below expected loading, with sharp knee Cells didn’t make it onto the chip Recount input cell concentration; tighten viability gating.
Low Sequencing Saturation Under-sequenced library Sequence deeper (more reads per cell).
Low Median Genes per Cell with high UMI/Cell Stressed cells dominated by a few highly-expressed genes Inspect mito-%, ribosomal-%, immediate-early genes.
Low Mean Reads per Cell Either too many cells loaded or under-sequenced Decide which by checking saturation: low saturation = sequence more.

Troubleshooting workflow

When the web summary has multiple alerts, work through them in this order:

  1. Sequencing first. If Q30 or Valid Barcodes is bad, every downstream metric is suspect. Fix or re-sequence before doing anything else.
  2. Mapping second. If Reads Mapped to Genome is below 80%, you have a reference, contamination, or adapter problem. Confirm the reference matches the species, check the chemistry argument matches the kit, and look at FastQC of the raw reads.
  3. Cell calling third. Once sequencing and mapping are healthy, the cell-calling alerts are the part you can usually do something about post-hoc. The barcode rank plot in the DropletUtils notebook is the right tool here; in particular, the comparison between the knee, the inflection, and what emptyDrops() returns will tell you whether the cell call is conservative, aggressive, or about right.
  4. Per-cell QC fourth. Within the called cells, inspect mitochondrial percentage (pct_mito), ribosomal percentage, and the joint distribution of UMI count and gene count. This is where you separate genuinely viable cells from debris, doublets, and stressed cells. This step belongs in the downstream Seurat / SingleCellExperiment notebook, not in the web summary.

A useful guard rail: never ramp --force-cells to make an alert go away without first confirming the underlying biology supports the change. --force-cells simply takes the top N barcodes by UMI; if those barcodes have ambient-like profiles, you are converting a yellow alert into a quietly-corrupted dataset.

Reading metrics_summary.csv programmatically

The web summary is built from metrics_summary.csv in the Cell Ranger output directory. Reading it into R and checking the values against the ranges above is a useful pre-flight step before launching downstream analyses, especially across batches of samples.

# Replace with the path to a real metrics_summary.csv
metrics_path <- "data/metrics_summary.csv"

if (file.exists(metrics_path)) {
  metrics <- read.csv(metrics_path, check.names = FALSE)
  metrics <- as.list(metrics[1, ])

  pct <- function(x) as.numeric(sub("%", "", x))

  checks <- list(
    valid_barcodes              = pct(metrics[["Valid Barcodes"]]) >= 75,
    q30_barcode                 = pct(metrics[["Q30 Bases in Barcode"]]) >= 90,
    q30_rna                     = pct(metrics[["Q30 Bases in RNA Read"]]) >= 75,
    reads_mapped_to_genome      = pct(metrics[["Reads Mapped to Genome"]]) >= 85,
    confident_to_transcriptome  = pct(metrics[["Reads Mapped Confidently to Transcriptome"]]) >= 30,
    intergenic_under_8          = pct(metrics[["Reads Mapped Confidently to Intergenic Regions"]]) <= 8,
    antisense_under_6           = pct(metrics[["Reads Mapped Antisense to Gene"]]) <= 6,
    fraction_reads_in_cells     = pct(metrics[["Fraction Reads in Cells"]]) >= 70
  )

  data.frame(
    check = names(checks),
    pass  = unlist(checks)
  )
}

Wrap this in a function over a directory of metrics_summary.csv files and you have a batch QC sweep that flags samples deviating from the cohort.
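One way that sweep could look. The runs/ directory layout and the qc_check() name are assumptions about how your outputs are organised; extend the checks to cover the full set above:

```r
# Batch QC sweep over a directory tree of Cell Ranger outputs, assuming one
# metrics_summary.csv per sample directory under runs/ (layout is an assumption).
pct <- function(x) as.numeric(sub("%", "", x))

qc_check <- function(path) {
  m <- as.list(read.csv(path, check.names = FALSE)[1, ])
  data.frame(
    sample                  = basename(dirname(path)),
    valid_barcodes          = pct(m[["Valid Barcodes"]]) >= 75,
    fraction_reads_in_cells = pct(m[["Fraction Reads in Cells"]]) >= 70
  )
}

files <- list.files("runs", pattern = "metrics_summary\\.csv$",
                    recursive = TRUE, full.names = TRUE)
do.call(rbind, lapply(files, qc_check))
```

Any row with a FALSE is a sample to open the full web summary for before it enters the cohort.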

Sources and further reading

Session info

sessionInfo()
R version 4.5.2 (2025-10-31)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.4 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

time zone: Etc/UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] workflowr_1.7.2

loaded via a namespace (and not attached):
 [1] vctrs_0.7.3       httr_1.4.8        cli_3.6.6         knitr_1.51       
 [5] rlang_1.2.0       xfun_0.57         stringi_1.8.7     otel_0.2.0       
 [9] processx_3.9.0    promises_1.5.0    jsonlite_2.0.0    glue_1.8.1       
[13] rprojroot_2.1.1   git2r_0.36.2      htmltools_0.5.9   httpuv_1.6.17    
[17] ps_1.9.3          sass_0.4.10       rmarkdown_2.31    jquerylib_0.1.4  
[21] tibble_3.3.1      evaluate_1.0.5    fastmap_1.2.0     yaml_2.3.12      
[25] lifecycle_1.0.5   whisker_0.4.1     stringr_1.6.0     compiler_4.5.2   
[29] fs_2.1.0          pkgconfig_2.0.3   Rcpp_1.1.1-1.1    rstudioapi_0.18.0
[33] later_1.4.8       digest_0.6.39     R6_2.6.1          pillar_1.11.1    
[37] callr_3.7.6       magrittr_2.0.5    bslib_0.10.0      tools_4.5.2      
[41] cachem_1.1.0      getPass_0.2-4    
