# Integrating spatial data with scRNA-seq using scanorama

Author: Giovanni Palla

This tutorial shows how to work with multiple Visium datasets and integrate them with an scRNA-seq dataset using Scanpy. It follows the previous tutorial on analysis and visualization of spatial transcriptomics data.

We will use Scanorama (see the paper and accompanying code) to perform integration and label transfer. It has a convenient interface with scanpy and anndata.

To install the required libraries, type the following:

pip install git+https://github.com/theislab/scanpy.git
pip install git+https://github.com/theislab/anndata.git
pip install scanorama


[1]:

import scanpy as sc
import anndata as an
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import scanorama

[2]:

sc.logging.print_versions()
sc.set_figure_params(facecolor="white", figsize=(8, 8))
sc.settings.verbosity = 3

WARNING: If you miss a compact list, please try print_header!
-----
anndata     0.7.8
scanpy      1.8.2
sinfo       0.3.1
-----
PIL                         9.0.1
anndata                     0.7.8
annoy                       NA
appnope                     0.1.2
asttokens                   NA
backcall                    0.2.0
beta_ufunc                  NA
binom_ufunc                 NA
cffi                        1.15.0
colorama                    0.4.4
cycler                      0.10.0
cython_runtime              NA
dateutil                    2.8.2
debugpy                     1.5.1
decorator                   5.1.1
defusedxml                  0.7.1
entrypoints                 0.4
executing                   0.8.3
fbpca                       NA
h5py                        3.6.0
hypergeom_ufunc             NA
igraph                      0.9.9
intervaltree                NA
ipykernel                   6.9.1
ipython_genutils            0.2.0
jedi                        0.18.1
joblib                      1.1.0
kiwisolver                  1.3.2
leidenalg                   0.8.9
llvmlite                    0.36.0
matplotlib                  3.5.1
matplotlib_inline           NA
mpl_toolkits                NA
natsort                     8.1.0
nbinom_ufunc                NA
numba                       0.53.1
numexpr                     2.8.0
numpy                       1.22.3
packaging                   21.3
pandas                      1.4.1
parso                       0.8.3
pexpect                     4.8.0
pickleshare                 0.7.5
pkg_resources               NA
prompt_toolkit              3.0.27
ptyprocess                  0.7.0
pure_eval                   0.2.2
pydev_ipython               NA
pydevconsole                NA
pydevd                      2.6.0
pydevd_concurrency_analyser NA
pydevd_file_utils           NA
pydevd_plugins              NA
pydevd_tracing              NA
pygments                    2.11.2
pyparsing                   3.0.7
pytz                        2021.3
scanorama                   1.7.1
scanpy                      1.8.2
scipy                       1.8.0
seaborn                     0.11.2
setuptools                  60.9.3
sinfo                       0.3.1
six                         1.16.0
sklearn                     1.0.2
sortedcontainers            2.4.0
sphinxcontrib               NA
stack_data                  0.2.0
statsmodels                 0.13.2
tables                      3.7.0
texttable                   1.6.4
traitlets                   5.1.1
typing_extensions           NA
wcwidth                     0.2.5
zipp                        NA
zmq                         22.3.0
-----
IPython             8.1.1
jupyter_client      7.1.2
jupyter_core        4.9.2
notebook            6.4.8
-----
Python 3.9.10 | packaged by conda-forge | (main, Feb  1 2022, 21:27:48) [Clang 11.1.0 ]
macOS-11.6.2-x86_64-i386-64bit
16 logical CPU cores, i386
-----
Session information updated at 2022-03-08 14:41



We will use two Visium spatial transcriptomics datasets of the mouse brain (Sagittal), which are publicly available from the 10x Genomics website.

The function datasets.visium_sge() downloads the dataset from 10x Genomics and returns an AnnData object that contains counts, images and spatial coordinates. We will calculate standard QC metrics with pp.calculate_qc_metrics and visualize them.

When using your own Visium data, use Scanpy’s read_visium() function to import it.

[3]:

adata_spatial_anterior = sc.datasets.visium_sge(
    sample_id="V1_Mouse_Brain_Sagittal_Anterior"
)
adata_spatial_posterior = sc.datasets.visium_sge(
    sample_id="V1_Mouse_Brain_Sagittal_Posterior"
)

reading /Users/isaac/github/scanpy-tutorials/spatial/data/V1_Mouse_Brain_Sagittal_Anterior/filtered_feature_bc_matrix.h5
 (0:00:00)
reading /Users/isaac/github/scanpy-tutorials/spatial/data/V1_Mouse_Brain_Sagittal_Posterior/filtered_feature_bc_matrix.h5
 (0:00:00)

[4]:

adata_spatial_anterior.var_names_make_unique()
adata_spatial_posterior.var_names_make_unique()

[5]:

for name, adata in [
    ("anterior", adata_spatial_anterior),
    ("posterior", adata_spatial_posterior),
]:
    adata.var["mt"] = adata.var_names.str.startswith("mt-")
    sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
    fig, axs = plt.subplots(1, 4, figsize=(12, 3))
    fig.suptitle(f"Covariates for filtering: {name}")
    sns.distplot(adata.obs["total_counts"], kde=False, ax=axs[0])
    sns.distplot(
        adata.obs["total_counts"][adata.obs["total_counts"] < 20000],
        kde=False,
        bins=40,
        ax=axs[1],
    )
    sns.distplot(adata.obs["n_genes_by_counts"], kde=False, bins=60, ax=axs[2])
    sns.distplot(
        adata.obs["n_genes_by_counts"][adata.obs["n_genes_by_counts"] < 4000],
        kde=False,
        bins=60,
        ax=axs[3],
    )


sc.datasets.visium_sge downloads the filtered Visium dataset, the output of Space Ranger that contains only spots within the tissue slice. Indeed, looking at the standard QC metrics, we can observe that the samples do not contain empty spots.
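As a toy illustration (synthetic counts, not the actual Visium data), "no empty spots" simply means every spot has a nonzero total count, which is what pp.calculate_qc_metrics summarizes in obs["total_counts"]:

```python
import numpy as np

# toy spots x genes count matrix standing in for adata.X
counts = np.array([[3, 0, 5],
                   [1, 2, 0],
                   [0, 4, 1]])

# per-spot totals, analogous to obs["total_counts"]
total_counts = counts.sum(axis=1)

# a filtered matrix should contain no empty spots
assert (total_counts > 0).all()
print(total_counts)  # [8 3 5]
```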

We proceed to normalize Visium counts data with the built-in normalize_total method from Scanpy, and detect highly-variable genes (for later). As discussed previously, note that there are more sensible alternatives for normalization (see discussion in sc-tutorial paper and more recent alternatives such as SCTransform or GLM-PCA).

[6]:

for adata in [adata_spatial_anterior, adata_spatial_posterior]:
    sc.pp.normalize_total(adata, inplace=True)
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, flavor="seurat", n_top_genes=2000, inplace=True)

normalizing counts per cell
finished (0:00:00)
If you pass n_top_genes, all cutoffs are ignored.
extracting highly variable genes
finished (0:00:00)
normalizing counts per cell
finished (0:00:00)
If you pass n_top_genes, all cutoffs are ignored.
extracting highly variable genes
finished (0:00:00)


## Data integration

We are now ready to perform integration of the two datasets. As mentioned before, we will be using Scanorama for that. Scanorama returns two lists: one with the integrated embeddings and one with the corrected counts, each containing one entry per dataset. We would like to note that in this context using BBKNN or Ingest is also possible.

[7]:

adatas = [adata_spatial_anterior, adata_spatial_posterior]
adatas_cor = scanorama.correct_scanpy(adatas, return_dimred=True)

Found 32285 genes among all datasets
[[0.         0.50104322]
[0.         0.        ]]
Processing datasets (0, 1)


We will concatenate the two datasets and save the integrated embeddings in adata_spatial.obsm['X_scanorama']. Furthermore, we will compute UMAP to visualize the results and qualitatively assess the data integration task.

Notice that we are concatenating the two datasets with the uns_merge="unique" strategy, in order to keep both images from the Visium datasets in the concatenated AnnData object.

[8]:

adata_spatial = sc.concat(
    adatas_cor,
    label="library_id",
    uns_merge="unique",
    keys=[
        k
        for d in [adatas_cor[0].uns["spatial"], adatas_cor[1].uns["spatial"]]
        for k, v in d.items()
    ],
    index_unique="-",
)

[9]:

sc.pp.neighbors(adata_spatial, use_rep="X_scanorama")
sc.tl.umap(adata_spatial)
sc.tl.leiden(adata_spatial, key_added="clusters")

computing neighbors
finished: added to .uns['neighbors']
.obsp['distances'], distances for each pair of neighbors
.obsp['connectivities'], weighted adjacency matrix (0:00:05)
computing UMAP
running Leiden clustering
finished: found 22 clusters and added
'clusters', the cluster labels (adata.obs, categorical) (0:00:00)

[10]:

sc.pl.umap(
    adata_spatial, color=["clusters", "library_id"], palette=sc.pl.palettes.default_20
)

WARNING: Length of palette colors is smaller than the number of categories (palette length: 20, categories length: 22. Some categories will have the same color.


We can also visualize the clustering result in spatial coordinates. For that, we first need to save the cluster colors in a dictionary. We can then plot the Visium tissue of the Anterior and Posterior Sagittal views alongside each other.

[11]:

clusters_colors = dict(
zip([str(i) for i in range(18)], adata_spatial.uns["clusters_colors"])
)

[12]:

fig, axs = plt.subplots(1, 2, figsize=(15, 10))

for i, library in enumerate(
    ["V1_Mouse_Brain_Sagittal_Anterior", "V1_Mouse_Brain_Sagittal_Posterior"]
):
    ad = adata_spatial[adata_spatial.obs.library_id == library, :].copy()
    sc.pl.spatial(
        ad,
        img_key="hires",
        library_id=library,
        color="clusters",
        size=1.5,
        palette=[
            v
            for k, v in clusters_colors.items()
            if k in ad.obs.clusters.unique().tolist()
        ],
        legend_loc=None,
        show=False,
        ax=axs[i],
    )

plt.tight_layout()

WARNING: Length of palette colors is smaller than the number of categories (palette length: 16, categories length: 19. Some categories will have the same color.
WARNING: Length of palette colors is smaller than the number of categories (palette length: 13, categories length: 17. Some categories will have the same color.


From the clusters, we can clearly see the stratification of the cortical layer in both of the tissues (see the Allen brain atlas for reference). Furthermore, it seems that the dataset integration worked well, since there is a clear continuity between clusters in the two tissues.

## Data integration and label transfer from scRNA-seq dataset

We can also perform data integration between one scRNA-seq dataset and one spatial transcriptomics dataset. Such a task is particularly useful because it allows us to transfer cell type labels, identified from the scRNA-seq dataset, to the Visium dataset.

For this task, we will be using a dataset from Tasic et al., where the mouse cortex was profiled with the Smart-seq technology.

The dataset can be downloaded from [GEO](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE115746) (counts and metadata). Conveniently, you can also download the pre-processed dataset in h5ad format from here.

Since the dataset was generated from the mouse cortex, we will subset the Visium dataset in order to select only the spots that are part of the cortex. Note that the integration can also be performed on the whole brain slice, but it would give rise to false positive cell type assignments and should therefore be interpreted with more care.

The integration task will be performed with Scanorama: each Visium dataset will be integrated with the smart-seq cortex dataset.

[13]:

adata_cortex = sc.read("./adata_processed.h5ad")


Subset the spatial AnnData to (approximately) select only spots belonging to the cortex.

[14]:

# approximate spatial cutoffs selecting the cortex region
adata_anterior_subset = adata_spatial_anterior[
    adata_spatial_anterior.obsm["spatial"][:, 1] < 6000, :
]
adata_posterior_subset = adata_spatial_posterior[
    (adata_spatial_posterior.obsm["spatial"][:, 1] < 4000)
    & (adata_spatial_posterior.obsm["spatial"][:, 0] < 6000),
    :,
]


Run integration with Scanorama

[15]:

adatas_anterior = [adata_cortex, adata_anterior_subset]
adatas_posterior = [adata_cortex, adata_posterior_subset]

# Integration.
adatas_cor_anterior = scanorama.correct_scanpy(adatas_anterior, return_dimred=True)
adatas_cor_posterior = scanorama.correct_scanpy(adatas_posterior, return_dimred=True)

Found 20538 genes among all datasets
[[0.         0.24327122]
[0.         0.        ]]
Processing datasets (0, 1)
Found 20538 genes among all datasets
[[0.         0.40765766]
[0.         0.        ]]
Processing datasets (0, 1)


Notice that we are concatenating datasets with the join="outer" and uns_merge="first" strategies. This is because we want to keep the obsm['coords'] as well as the images of the Visium datasets.

[16]:

adata_cortex_anterior = sc.concat(
    adatas_cor_anterior,
    label="dataset",
    keys=["smart-seq", "visium"],
    join="outer",
    uns_merge="first",
)
adata_cortex_posterior = sc.concat(
    adatas_cor_posterior,
    label="dataset",
    keys=["smart-seq", "visium"],
    join="outer",
    uns_merge="first",
)


At this step, we have integrated each Visium dataset into a common embedding with the scRNA-seq dataset. In this embedding space, we can compute distances between samples and use those distances as weights for propagating labels from the scRNA-seq dataset to the Visium dataset.

This approach is very similar to the TransferData function in Seurat (see the paper). Here, we re-implement the label transfer with a simple Python function, see below.

First, let's compute cosine distances between the Visium dataset and the scRNA-seq dataset, in the common embedding space.

[17]:

from sklearn.metrics.pairwise import cosine_distances

distances_anterior = 1 - cosine_distances(
    adata_cortex_anterior[adata_cortex_anterior.obs.dataset == "smart-seq"].obsm[
        "X_scanorama"
    ],
    adata_cortex_anterior[adata_cortex_anterior.obs.dataset == "visium"].obsm[
        "X_scanorama"
    ],
)
distances_posterior = 1 - cosine_distances(
    adata_cortex_posterior[adata_cortex_posterior.obs.dataset == "smart-seq"].obsm[
        "X_scanorama"
    ],
    adata_cortex_posterior[adata_cortex_posterior.obs.dataset == "visium"].obsm[
        "X_scanorama"
    ],
)


Then, let's propagate labels from the scRNA-seq dataset to the Visium dataset.

[18]:

def label_transfer(dist, labels):
    # one-hot encode the reference labels: (n_cell_types, n_reference_cells)
    lab = pd.get_dummies(labels).to_numpy().T
    # aggregate similarities per cell type: (n_cell_types, n_spots)
    class_prob = lab @ dist
    # L2-normalize each spot's scores across cell types
    norm = np.linalg.norm(class_prob, 2, axis=0)
    class_prob = class_prob / norm
    # min-max scale each cell type's scores to [0, 1]
    class_prob = (class_prob.T - class_prob.min(1)) / class_prob.ptp(1)
    return class_prob

[19]:

class_prob_anterior = label_transfer(distances_anterior, adata_cortex.obs.cell_subclass)
class_prob_posterior = label_transfer(
    distances_posterior, adata_cortex.obs.cell_subclass
)


The class_prob_[anterior-posterior] objects are numpy arrays of shape (n_visium_spots, n_cell_types) that contain the assigned weight of each spot for each cell type. This value essentially tells us how similar each spot's expression profile is to each of the annotated cell types from the scRNA-seq dataset.
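To see what label_transfer produces, here is a minimal run on synthetic data: three reference cells, two spots, and two hypothetical cell types "A" and "B" (none of which appear in the real datasets). The only change from the function above is using np.ptp(...) instead of the ndarray .ptp() method, which was removed in NumPy 2.0:

```python
import numpy as np
import pandas as pd


def label_transfer(dist, labels):
    lab = pd.get_dummies(labels).to_numpy().T  # (n_types, n_reference_cells)
    class_prob = lab @ dist                    # (n_types, n_spots)
    class_prob = class_prob / np.linalg.norm(class_prob, 2, axis=0)
    # min-max scale per cell type; np.ptp(x, 1) == x.max(1) - x.min(1)
    return (class_prob.T - class_prob.min(1)) / np.ptp(class_prob, 1)


# toy similarity matrix: rows = 3 reference cells, columns = 2 spots
dist = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.1, 0.9]])
labels = pd.Series(["A", "A", "B"])  # reference cell annotations

probs = label_transfer(dist, labels)
print(probs.shape)  # (2, 2): one row per spot, one column per cell type
print(probs)        # spot 0 scores highest for "A", spot 1 for "B"
```

The two reference "A" cells are most similar to spot 0, so spot 0 gets the maximum scaled weight for type "A", and likewise spot 1 for type "B".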

We convert the class_prob_[anterior-posterior] objects to dataframes and assign them to the respective AnnData objects.

[20]:

cp_anterior_df = pd.DataFrame(
    class_prob_anterior, columns=np.sort(adata_cortex.obs.cell_subclass.unique())
)
cp_posterior_df = pd.DataFrame(
    class_prob_posterior, columns=np.sort(adata_cortex.obs.cell_subclass.unique())
)

cp_anterior_df.index = adata_anterior_subset.obs.index
cp_posterior_df.index = adata_posterior_subset.obs.index


[21]:

adata_anterior_subset_transfer = adata_anterior_subset.copy()
adata_anterior_subset_transfer.obs = pd.concat(
    [adata_anterior_subset.obs, cp_anterior_df], axis=1
)

adata_posterior_subset_transfer = adata_posterior_subset.copy()
adata_posterior_subset_transfer.obs = pd.concat(
    [adata_posterior_subset.obs, cp_posterior_df], axis=1
)

We are then able to explore how cell types are propagated from the scRNA-seq dataset to the Visium dataset. Let's first visualize the cortical layers of neurons.

[22]:

sc.pl.spatial(
    adata_anterior_subset_transfer,
    img_key="hires",
    color=["L2/3 IT", "L4", "L5 PT", "L6 CT"],
    size=1.5,
)
sc.pl.spatial(
    adata_posterior_subset_transfer,
    img_key="hires",
    color=["L2/3 IT", "L4", "L5 PT", "L6 CT"],
    size=1.5,
)


Interestingly, it seems that this approach worked, since sequential layers of cortical neurons could be correctly identified, both in the anterior and posterior sagittal slices.

We can go ahead and visualize astrocytes and oligodendrocytes as well.

[23]:

sc.pl.spatial(
    adata_anterior_subset_transfer, img_key="hires", color=["Oligo", "Astro"], size=1.5
)
sc.pl.spatial(
    adata_posterior_subset_transfer, img_key="hires", color=["Oligo", "Astro"], size=1.5
)