scrna4/6

Analyze a collection in memory

Here, we’ll analyze the growing collection by loading it into memory. This is only possible if it’s not too large. If your data is large, you’ll likely want to iterate over the collection to train a model, the topic of the next page (scrna5/6).

import lamindb as ln
import bionty as bt
💡 connected lamindb: testuser1/test-scrna
ln.settings.transform.stem_uid = "mfWKm8OtAzp8"
ln.settings.transform.version = "1"
ln.track()
💡 notebook imports: bionty==0.47.1 lamindb==0.75.0 scanpy==1.9.6
💡 saved: Transform(uid='mfWKm8OtAzp85zKv', version='1', name='Analyze a collection in memory', key='scrna4', type='notebook', created_by_id=1, updated_at='2024-08-05 13:24:52 UTC')
💡 saved: Run(uid='0uB6nr4NENywMcOBObWK', transform_id=4, created_by_id=1)
Run(uid='0uB6nr4NENywMcOBObWK', started_at='2024-08-05 13:24:52 UTC', is_consecutive=True, transform_id=4, created_by_id=1)
ln.Collection.df()
| id | uid | version | name | description | hash | reference | reference_type | visibility | transform_id | meta_artifact_id | run_id | created_by_id | updated_at |
|----|-----|---------|------|-------------|------|-----------|----------------|------------|--------------|------------------|--------|---------------|------------|
| 2 | A22kL5r80OubMlqzD8fj | 2 | My versioned scRNA-seq collection | None | Umjxg4HR1wkZqKROsyz1sw | None | None | 1 | 2 | None | 2 | 1 | 2024-08-05 13:24:42.517118+00:00 |
| 1 | A22kL5r80OubMlqzJkZb | 1 | My versioned scRNA-seq collection | None | exJtsBYH53iiebYH-Qx0sw | None | None | 1 | 1 | None | 1 | 1 | 2024-08-05 13:24:05.007141+00:00 |
collection = ln.Collection.filter(
    name="My versioned scRNA-seq collection", version="2"
).one()
collection.ordered_artifacts.df()
| id | uid | version | description | key | suffix | type | _accessor | size | hash | _hash_type | n_objects | n_observations | visibility | _key_is_virtual | storage_id | transform_id | run_id | created_by_id | updated_at |
|----|-----|---------|-------------|-----|--------|------|-----------|------|------|------------|-----------|----------------|------------|-----------------|------------|--------------|--------|---------------|------------|
| 2 | nwfXKFQAHARNDPqv35hZ | None | 10x reference adata | None | .h5ad | dataset | AnnData | 857752 | PnpU6XI5Fbzwc49XgrgdNg | md5 | None | 70 | 1 | True | 1 | 2 | 2 | 1 | 2024-08-05 13:24:39.726671+00:00 |
| 1 | XIosp8pmjXnNO56K7MfO | None | Human immune cells from Conde22 | None | .h5ad | dataset | AnnData | 57612943 | 9sXda5E7BYiVoDOQkTC0KB | sha1-fl | None | 1648 | 1 | True | 1 | 1 | 1 | 1 | 2024-08-05 13:24:03.950118+00:00 |
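A single artifact of the collection can also be loaded on its own rather than loading everything; here's a minimal sketch, assuming the artifact descriptions listed above:

# load just one artifact of the collection instead of the full collection
# (a sketch; the description matches the artifact listed above)
artifact_10x = ln.Artifact.filter(description="10x reference adata").one()
adata_10x = artifact_10x.load()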

If the collection isn’t too large, we can now load it into memory.

Under the hood, the AnnData objects are concatenated during loading. The amount of time this takes depends on a variety of factors. If loading happens often, one might consider storing a concatenated version of the collection rather than the individual pieces (see the sketch further below).

adata = collection.load()

Concatenation uses an outer join by default, as in pandas:

adata
AnnData object with n_obs × n_vars = 1718 × 36508
    obs: 'cell_type', 'n_genes', 'percent_mito', 'louvain', 'donor', 'tissue', 'assay', 'artifact_uid'
    obsm: 'X_pca', 'X_umap'
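
As noted above, if loading the full collection happens often, one might store the concatenated object as an artifact of its own. A minimal sketch, assuming the ln.Artifact.from_anndata constructor used elsewhere in this guide series:

# persist the concatenated AnnData as its own artifact so future runs can load it directly
# (a sketch; the description is illustrative)
concat_artifact = ln.Artifact.from_anndata(
    adata, description="Concatenated scRNA-seq collection, outer join"
)
concat_artifact.save()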

The AnnData object references the individual artifacts in its .obs annotations:

adata.obs.artifact_uid.cat.categories
Index(['nwfXKFQAHARNDPqv35hZ', 'XIosp8pmjXnNO56K7MfO'], dtype='object')
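
Conceptually, loading the collection is similar to loading the individual artifacts and concatenating them with anndata, labeling each observation with the artifact it came from. A rough sketch (not lamindb's actual implementation):

import anndata as ad

# load each artifact of the collection and concatenate manually -- roughly what
# collection.load() does for us: an outer join plus an artifact_uid label in .obs
artifacts = list(collection.ordered_artifacts)
adatas = [artifact.load() for artifact in artifacts]
adata_manual = ad.concat(
    adatas,
    join="outer",
    label="artifact_uid",
    keys=[artifact.uid for artifact in artifacts],
)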

We can easily obtain Ensembl IDs for gene symbols using the lookup object:

genes = bt.Gene.lookup(field="symbol")
genes.itm2b.ensembl_gene_id
'ENSG00000136156'
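
The same pattern works for any symbol present in the registry; a small sketch mapping a few illustrative symbols to their Ensembl IDs:

# map a handful of gene symbols to Ensembl IDs via the lookup object
# (symbols are illustrative and assumed to be registered; lookup attributes are lower-cased)
symbols = ["ITM2B", "CD8A", "PTPRC"]
ensembl_ids = {s: getattr(genes, s.lower()).ensembl_gene_id for s in symbols}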

Let us create a plot:

import scanpy as sc

sc.pp.pca(adata, n_comps=2)
sc.pl.pca(
    adata,
    color=genes.itm2b.ensembl_gene_id,
    title=(
        f"{genes.itm2b.symbol} / {genes.itm2b.ensembl_gene_id} /"
        f" {genes.itm2b.description}"
    ),
    save="_itm2b",
)
WARNING: saving figure to file figures/pca_itm2b.pdf
[PCA plot colored by ITM2B / ENSG00000136156 expression]

We could register the saved PDF as an artifact and then see it in the data lineage graph:

artifact = ln.Artifact("./figures/pca_itm2b.pdf", description="My result on ITM2B")
artifact.save()
artifact.view_lineage()
[Lineage graph for the pca_itm2b.pdf artifact]

But given that the image is part of the notebook, we can also rely on the report that is created when saving the notebook:

ln.finish()

To see the current notebook, visit: lamin.ai/laminlabs/lamindata/transform/mfWKm8OtAzp8z8