Last updated: 2025-02-14

Checks: 7 passed, 0 failed

Knit directory: muse/

This reproducible R Markdown analysis was created with workflowr (version 1.7.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20200712) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 588f6ff. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rproj.user/
    Ignored:    data/1M_neurons_filtered_gene_bc_matrices_h5.h5
    Ignored:    data/brain_counts/
    Ignored:    data/seurat_1m_neuron.rds
    Ignored:    r_packages_4.4.1/

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/seurat_bpcells.Rmd) and HTML (docs/seurat_bpcells.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
Rmd 588f6ff Dave Tang 2025-02-14 Seurat workflow on 1.3M neurons
html be30f63 Dave Tang 2025-02-12 Build site.
Rmd d106cde Dave Tang 2025-02-12 Seurat workflow
html c5e8ee1 Dave Tang 2025-02-12 Build site.
Rmd fa1564d Dave Tang 2025-02-12 BPCells with Seurat

https://github.com/satijalab/seurat/blob/9755c164d99828dbc5dd9c8364389766cd4ff7fd/vignettes/seurat5_bpcells_interaction_vignette.Rmd

BPCells is an R package that allows for computationally efficient single-cell analysis. It utilizes bit-packing compression to store counts matrices on disk and C++ code to cache operations.

We leverage the high-performance capabilities of BPCells to work with Seurat objects in memory while accessing the counts on disk. In this vignette, we show how to use BPCells to load data, work with Seurat objects in a more memory-efficient way, and write out Seurat objects with BPCells matrices.

We will show methods for interacting with either a single dataset stored in one file or multiple datasets spread across multiple files using BPCells. BPCells makes it practical to analyze these large datasets while keeping the counts on disk, and we encourage users to check out the other Seurat vignettes for further applications.

remotes::install_github("bnprks/BPCells/r")
suppressPackageStartupMessages(library(BPCells))
suppressPackageStartupMessages(library(Seurat))

We use BPCells functionality to both load in our data and write the counts layers to bitpacked compressed binary files on disk to improve computation speeds. BPCells has multiple functions for reading in files.
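
For reference, here are a few BPCells readers relevant to this workflow (a minimal sketch; the file paths are hypothetical and open_matrix_anndata_hdf5() is listed from memory, so check the BPCells documentation for the full list of readers and their exact arguments):

# 10x Genomics HDF5 feature matrix (used below)
mat_10x <- BPCells::open_matrix_10x_hdf5(path = "data/1M_neurons_filtered_gene_bc_matrices_h5.h5")

# AnnData .h5ad file (hypothetical example file)
mat_h5ad <- BPCells::open_matrix_anndata_hdf5(path = "data/example.h5ad")

# a matrix previously written to disk with write_matrix_dir()
mat_dir <- BPCells::open_matrix_dir(dir = "data/brain_counts")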

Load Data

Download 1.3 Million Brain Cells from E18 Mice (3.93 GB).

my_url <- 'https://cf.10xgenomics.com/samples/cell-exp/1.3.0/1M_neurons/1M_neurons_filtered_gene_bc_matrices_h5.h5'
my_file <- paste0("data/", basename(my_url))

if(!file.exists(my_file)){
  options(timeout = 10000)
  download.file(url = my_url, destfile = my_file)
}

Load Data from one h5 file

In this section, we will load the 1.3 million brain cells dataset. We will use BPCells::open_matrix_10x_hdf5(), which reads feature matrices from 10x HDF5 files. We then write the matrix to a directory on disk, load it back, and create a Seurat object.

brain.data <- BPCells::open_matrix_10x_hdf5(path = my_file)
brain.data
27998 x 1306127 IterableMatrix object with class 10xMatrixH5

Row names: ENSMUSG00000051951, ENSMUSG00000089699 ... ENSMUSG00000095742
Col names: AAACCTGAGATAGGAG-1, AAACCTGAGCGGCTTC-1 ... TTTGTCATCTGAAAGA-133

Data type: uint32_t
Storage order: column major

Queued Operations:
1. 10x HDF5 feature matrix in file /home/rstudio/muse/data/1M_neurons_filtered_gene_bc_matrices_h5.h5
# Write the matrix to a directory
my_outdir <- "data/brain_counts"
if(!dir.exists(my_outdir)){
  BPCells::write_matrix_dir(
    mat = brain.data,
    dir = my_outdir
  )
}

# Now that we have the matrix on disk, we can load it
brain.mat <- open_matrix_dir(dir = my_outdir)
brain.mat
27998 x 1306127 IterableMatrix object with class MatrixDir

Row names: ENSMUSG00000051951, ENSMUSG00000089699 ... ENSMUSG00000095742
Col names: AAACCTGAGATAGGAG-1, AAACCTGAGCGGCTTC-1 ... TTTGTCATCTGAAAGA-133

Data type: uint32_t
Storage order: column major

Queued Operations:
1. Load compressed matrix from directory /home/rstudio/muse/data/brain_counts
# Create Seurat Object
brain <- CreateSeuratObject(
  counts = brain.mat,
  project = '1m_neurons'
)
brain
An object of class Seurat 
27998 features across 1306127 samples within 1 assay 
Active assay: RNA (27998 features, 0 variable features)
 1 layer present: counts
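
As a quick sanity check (a minimal sketch, not part of the original output), the counts layer should still be a BPCells IterableMatrix backed by the directory on disk rather than an in-memory dgCMatrix:

# inspect the class of the counts layer; expect a BPCells matrix class such as MatrixDir
class(brain[["RNA"]]$counts)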

What if I already have a Seurat Object?

You can use BPCells to convert the matrices in your existing Seurat objects to on-disk matrices. Note that this is only possible for v5 assays. As an example, if you’d like to convert the counts matrix of your RNA assay to a BPCells matrix, you can use the following:

obj <- readRDS("/path/to/reference.rds")

# Write the counts layer to a directory
write_matrix_dir(mat = obj[["RNA"]]$counts, dir = '/brahms/hartmana/vignette_data/bpcells/brain_counts')
counts.mat <- open_matrix_dir(dir = "/brahms/hartmana/vignette_data/bpcells/brain_counts")

obj[["RNA"]]$counts <- counts.mat

Example Analysis

Use the fix by Ben Parks, the author of BPCells, to overcome the error Cannot convert BPCells matrix to dgCMatrix. The error below occurs because RunPCA() tries to convert the IterableMatrix to a dgCMatrix, which cannot hold more than 2^31 non-zero entries; the patched PrepDR5() computes feature variances directly on the IterableMatrix with BPCells::matrix_stats(), avoiding the conversion.

Error in (function (cond) : error in evaluating the argument 'x' in selecting a method for function 'as.matrix': Error converting IterableMatrix to dgCMatrix
* dgCMatrix objects cannot hold more than 2^31 non-zero entries
* Input matrix has 2612254000 entries
fixed_PrepDR5 <- function(object, features = NULL, layer = 'scale.data', verbose = TRUE) {
  layer <- layer[1L]
  olayer <- layer
  layer <- SeuratObject::Layers(object = object, search = layer)
  if (is.null(layer)) {
    abort(paste0("No layer matching pattern '", olayer, "' found. Please run ScaleData and retry"))
  }
  data.use <- SeuratObject::LayerData(object = object, layer = layer)
  features <- features %||% VariableFeatures(object = object)
  if (!length(x = features)) {
    stop("No variable features, run FindVariableFeatures() or provide a vector of features", call. = FALSE)
  }
  if (is(data.use, "IterableMatrix")) {
    # compute per-feature variances with BPCells, avoiding conversion to a dense or dgCMatrix
    features.var <- BPCells::matrix_stats(matrix = data.use, row_stats = "variance")$row_stats["variance", ]
  } else {
    features.var <- apply(X = data.use, MARGIN = 1L, FUN = var)
  }
  features.keep <- features[features.var > 0]
  if (!length(x = features.keep)) {
    stop("None of the requested features have any variance", call. = FALSE)
  } else if (length(x = features.keep) < length(x = features)) {
    exclude <- setdiff(x = features, y = features.keep)
    if (isTRUE(x = verbose)) {
      warning(
        "The following ",
        length(x = exclude),
        " features requested have zero variance; running reduction without them: ",
        paste(exclude, collapse = ', '),
        call. = FALSE,
        immediate. = TRUE
      )
    }
  }
  features <- features.keep
  features <- features[!is.na(x = features)]
  features.use <- features[features %in% rownames(data.use)]
  if(!isTRUE(all.equal(features, features.use))) {
    missing_features <- setdiff(features, features.use)
    if(length(missing_features) > 0) {
      warning_message <- paste("The following features were not available: ",
                               paste(missing_features, collapse = ", "),
                               ".", sep = "")
      warning(warning_message, immediate. = TRUE)
    }
  }
  data.use <- data.use[features.use, ]
  return(data.use)
}

assignInNamespace('PrepDR5', fixed_PrepDR5, 'Seurat')
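
To confirm the patch took effect, you can check that the function in the Seurat namespace now matches the fixed version (a minimal sketch, not part of the original analysis):

# should return TRUE after assignInNamespace()
identical(body(Seurat:::PrepDR5), body(fixed_PrepDR5))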

Once the counts are stored as a BPCells matrix, you can run typical Seurat functions on the object. For example, we can normalize the data and run the standard clustering workflow below, with the on-disk counts accessed automatically.

debug_flag <- FALSE                              # set to TRUE for verbose output from each step
options(future.globals.maxSize = 1.5 * 1024^3)   # allow up to 1.5 GiB of globals for parallel workers
start_time <- Sys.time()                         # time the full workflow

brain <- NormalizeData(brain, normalization.method = "LogNormalize")
Normalizing layer: counts
brain <- FindVariableFeatures(brain, selection.method = 'vst', nfeatures = 2000, verbose = debug_flag)
brain <- ScaleData(brain, verbose = debug_flag)
brain <- RunPCA(brain, verbose = debug_flag)
brain <- RunUMAP(brain, dims = 1:30, verbose = debug_flag)
Warning: The default method for RunUMAP has changed from calling Python UMAP via reticulate to the R-native UWOT using the cosine metric
To use Python UMAP via reticulate, set umap.method to 'umap-learn' and metric to 'correlation'
This message will be shown once per session
brain <- FindNeighbors(brain, dims = 1:30, verbose = debug_flag)
brain <- FindClusters(brain, resolution = 0.5, verbose = debug_flag)
brain
An object of class Seurat 
27998 features across 1306127 samples within 1 assay 
Active assay: RNA (27998 features, 2000 variable features)
 3 layers present: counts, data, scale.data
 2 dimensional reductions calculated: pca, umap
end_time <- Sys.time()
end_time - start_time
Time difference of 1.945015 hours

Saving Seurat objects with on-disk layers

If you save your object and load it in the future, Seurat will access the on-disk matrices by their path, which is stored in the assay-level data. To make it easy to ensure these are saved in the same place, SeuratObject provides the SaveSeuratRds() function: you specify your filename, and the on-disk matrix directory is moved alongside the saved object, with the path stored in the Seurat object updated accordingly.

This also makes it easy to share your Seurat objects with BPCells matrices by sharing a folder that contains both the object and the BPCells directory.

SaveSeuratRds(
  object = brain,
  file = "data/seurat_1m_neuron.rds"
)
Warning: Trying to move '/home/rstudio/muse/data/brain_counts' to itself,
skipping
Trying to move '/home/rstudio/muse/data/brain_counts' to itself,
skipping
Trying to move '/home/rstudio/muse/data/brain_counts' to itself,
skipping
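
To pick the analysis up again later, the saved object can be loaded back with LoadSeuratRds() (a minimal sketch):

brain <- LoadSeuratRds("data/seurat_1m_neuron.rds")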

If needed, a layer with an on-disk matrix can be converted to an in-memory matrix using the as() function. For the purposes of this demo, we’ll subset the object so that it takes up less space in memory.

brain_subset <- subset(brain, downsample = 1000)
brain_subset[["RNA"]]$counts <- as(object = brain_subset[["RNA"]]$counts, Class = "dgCMatrix")
brain_subset
An object of class Seurat 
27998 features across 29011 samples within 1 assay 
Active assay: RNA (27998 features, 2000 variable features)
 3 layers present: counts, data, scale.data
 2 dimensional reductions calculated: pca, umap
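
The reverse direction also works if you later want the subset stored on disk again; the sketch below uses the as(., "IterableMatrix") coercion from BPCells and a hypothetical output directory data/brain_counts_subset:

# convert the in-memory dgCMatrix back to a BPCells matrix and write it to disk
write_matrix_dir(
  mat = as(brain_subset[["RNA"]]$counts, "IterableMatrix"),
  dir = "data/brain_counts_subset"
)
brain_subset[["RNA"]]$counts <- open_matrix_dir(dir = "data/brain_counts_subset")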

sessionInfo()
R version 4.4.1 (2024-06-14)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.5 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so;  LAPACK version 3.10.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

time zone: Etc/UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] Seurat_5.1.0       SeuratObject_5.0.2 sp_2.1-4           BPCells_0.3.0     
[5] workflowr_1.7.1   

loaded via a namespace (and not attached):
  [1] RColorBrewer_1.1-3     rstudioapi_0.17.1      jsonlite_1.8.9        
  [4] magrittr_2.0.3         spatstat.utils_3.1-0   farver_2.1.2          
  [7] rmarkdown_2.28         fs_1.6.4               vctrs_0.6.5           
 [10] ROCR_1.0-11            spatstat.explore_3.3-3 htmltools_0.5.8.1     
 [13] sass_0.4.9             sctransform_0.4.1      parallelly_1.38.0     
 [16] KernSmooth_2.23-24     bslib_0.8.0            htmlwidgets_1.6.4     
 [19] ica_1.0-3              plyr_1.8.9             plotly_4.10.4         
 [22] zoo_1.8-12             cachem_1.1.0           whisker_0.4.1         
 [25] igraph_2.1.1           mime_0.12              lifecycle_1.0.4       
 [28] pkgconfig_2.0.3        Matrix_1.7-0           R6_2.5.1              
 [31] fastmap_1.2.0          MatrixGenerics_1.18.1  fitdistrplus_1.2-1    
 [34] future_1.34.0          shiny_1.9.1            digest_0.6.37         
 [37] colorspace_2.1-1       patchwork_1.3.0        ps_1.8.1              
 [40] rprojroot_2.0.4        tensor_1.5             RSpectra_0.16-2       
 [43] irlba_2.3.5.1          progressr_0.15.0       fansi_1.0.6           
 [46] spatstat.sparse_3.1-0  httr_1.4.7             polyclip_1.10-7       
 [49] abind_1.4-8            compiler_4.4.1         fastDummies_1.7.4     
 [52] MASS_7.3-60.2          tools_4.4.1            lmtest_0.9-40         
 [55] httpuv_1.6.15          future.apply_1.11.3    goftest_1.2-3         
 [58] glue_1.8.0             callr_3.7.6            nlme_3.1-164          
 [61] promises_1.3.0         grid_4.4.1             Rtsne_0.17            
 [64] getPass_0.2-4          cluster_2.1.6          reshape2_1.4.4        
 [67] generics_0.1.3         gtable_0.3.6           spatstat.data_3.1-2   
 [70] tidyr_1.3.1            data.table_1.16.2      utf8_1.2.4            
 [73] spatstat.geom_3.3-3    RcppAnnoy_0.0.22       ggrepel_0.9.6         
 [76] RANN_2.6.2             pillar_1.9.0           stringr_1.5.1         
 [79] spam_2.11-0            RcppHNSW_0.6.0         later_1.3.2           
 [82] splines_4.4.1          dplyr_1.1.4            lattice_0.22-6        
 [85] survival_3.6-4         deldir_2.0-4           tidyselect_1.2.1      
 [88] miniUI_0.1.1.1         pbapply_1.7-2          knitr_1.48            
 [91] git2r_0.35.0           gridExtra_2.3          scattermore_1.2       
 [94] xfun_0.48              matrixStats_1.4.1      stringi_1.8.4         
 [97] lazyeval_0.2.2         yaml_2.3.10            evaluate_1.0.1        
[100] codetools_0.2-20       tibble_3.2.1           cli_3.6.3             
[103] uwot_0.2.2             xtable_1.8-4           reticulate_1.39.0     
[106] munsell_0.5.1          processx_3.8.4         jquerylib_0.1.4       
[109] Rcpp_1.0.13            globals_0.16.3         spatstat.random_3.3-2 
[112] png_0.1-8              spatstat.univar_3.0-1  parallel_4.4.1        
[115] ggplot2_3.5.1          dotCall64_1.2          listenv_0.9.1         
[118] viridisLite_0.4.2      scales_1.3.0           ggridges_0.5.6        
[121] leiden_0.4.3.1         purrr_1.0.2            rlang_1.1.4           
[124] cowplot_1.1.3