--- title: "Seurat Integration Example" output: html_document: df_print: paged toc: true toc_float: true --- This is an example of a workflow to process data in Seurat v4. Here we're using a dataset consisting of a cells from two conditions which we'll examine and then analyse together. In this exercise we will: * Load in the data * Do some basic QC and Filtering * Select genes which we believe are going to be informative * Perform dimensionality reduction * Decide on whether we need to Integrate, and then run integration if needed * Detect clusters within the data * Find genes which define the clusters * Count the number of cells in each cluster in the two conditions * Compare gene expression between conditions within each cluster * Examine how robust the clusters and genes are Setup ===== We're going to start with some basic housekeeping. We're going to load the packages we're going to use, these will be: * Seurat (for general single cell loading and processing) * Tidyverse (for non-standard data manipulation and plotting) * SCINA (for cell type prediction) We're also going to import a load of helper functions from SeuratExtras to give us some shortcuts for some of the additional plots and QC which we're going to perform. Finally, We'll set a nicer theme for ggplot in tidyverse so our graphs look nicer. ```{r message=FALSE} library(Seurat) library(SCINA) library(tidyverse) source("seurat_helpers_functions.R") theme_set(theme_bw(base_size = 14)) ``` Loading Data ============ In this case the data we're loading arrives in the form of two gzipped text matrices. We're therefore going to load these into dataframes initially and then convert them to Seurat objects. If the data had come directly out of cellranger we would have h5 files and we could load the matrices using ```Read10x_h5``` from the Seurat package. We initially load the data into a standard data frame, but them we'll immediately load this into a Seurat object which will also turn the data into a sparse matrix which will use much less memory. ```{r} Read10X("Control/") -> control CreateSeuratObject( counts=control, project="control" ) -> control Read10X("Stim/") -> stimulated CreateSeuratObject( counts=stimulated, project="stimulated" ) -> stimulated control stimulated ``` Initial Filtering ----------------- Before we go any further we can look at the full knee plot for these samples, as we'll have to do some basic clean up. ```{r fig.width=4, fig.height=4} knee_plot(control) + ggtitle("Control Knee Plot") + geom_hline(yintercept = 1000, colour="red") knee_plot(stimulated) + ggtitle("Stimulated Knee Plot") + geom_hline(yintercept = 1000, colour="red") ``` The data look very nice and we can set a cutoff of 1000 UMIs per cell as a conservative filter to start from and make the data smaller. ```{r} subset(control, nCount_RNA > 1000) -> control subset(stimulated, nCount_RNA > 1000) -> stimulated control stimulated ``` We've dropped to about 6000 barcodes per sample as opposed to over 1 million before. Now we can join the data together for the two conditions. Initiall we'll just merge the samples - no clever integration. ```{r} merge( control, stimulated, add.cell.ids=c("control","stimulated"), project="immune" ) -> data ``` If we had more than two datasets to merge then the second argument to ```merge``` would be a vector of datasets. We no longer need the individual datasets so we can delete these. 
We no longer need the individual datasets, so we can delete them to free up some memory.

```{r}
rm(control)
rm(stimulated)
```

QC
==

Before we do any analysis it's really important to do some quality control and filtering to make sure we're working with good data. Seurat automatically creates two metrics we can use:

1. ```nCount_RNA``` - the total number of reads (or more correctly UMIs) in the dataset
2. ```nFeature_RNA``` - the number of observed genes (anything with a nonzero count)

We can supplement these with other metrics which we calculate ourselves. We can use the ```add_qc_metrics``` function from Seurat Helpers to add information on:

1. The amount of mitochondrial sequence - cells undergoing apoptosis drastically upregulate mitochondrial genes, so a high value is a sign of sick or dying cells.
2. The amount of MALAT1. This is a nuclear transcript which is retained even if the cell lyses and the cytoplasm, along with most of the RNA, leaks out. It can help identify lysed cells.
3. The amount of ribosomal proteins. These aren't necessarily indicative of a problem, but they can vary a lot between cell types so they are useful to visualise.
4. The percentage of counts in the largest gene. Some cell types are dominated by the production of a single protein, so we can capture this. We also flag the name of the most highly expressed gene.

```{r}
add_qc_metrics(data) -> data
```

We can now see these additional metrics in the sample metadata.

```{r}
data[[]] %>%
  as_tibble() %>%
  head()
```

Seurat QC Plots
---------------

Seurat comes with some convenience methods for plotting out certain types of visualisation, such as the distribution of certain QC metrics. We can view these on both a linear and a log scale to see which looks more helpful.

```{r fig.width=12, fig.height=8}
VlnPlot(data, features=c("nCount_RNA","percent_MT",
                         "percent_Ribosomal","percent_Largest_Gene",
                         "percent_Malat"))
```

For some metrics it's better to view the distribution on a log scale.

```{r fig.width=12, fig.height=4}
# VlnPlot with multiple features returns a patchwork object, so we use
# the & operator to apply the log scale to every panel rather than just
# the last one
VlnPlot(data, features=c("nCount_RNA","percent_MT",
                         "percent_Largest_Gene")) &
  scale_y_log10()
```

We can also plot metrics against each other to see what the relationship between them is. There is a built-in Seurat plot to do this which is easy to use, but isn't especially pretty.

```{r}
FeatureScatter(data, feature1 = "percent_Malat", feature2 = "percent_MT")
```

GGplot QC plots
---------------

Rather than using Seurat's built-in plots, you also have the option to extract the data yourself and plot it using any of R's conventional plotting systems.

```{r}
as_tibble(
  data[[]],
  rownames="Cell.Barcode"
) -> qc.metrics

head(qc.metrics)
```

We can then plot out with ggplot if we want to do more.

```{r fig.height=7, fig.width=9}
qc.metrics %>%
  arrange(percent_MT) %>%
  ggplot(aes(nCount_RNA,nFeature_RNA,colour=percent_MT)) +
  geom_point(size=1) +
  scale_color_gradientn(colors=c("black","blue","green2","red","yellow")) +
  ggtitle("Example of plotting QC metrics") +
  scale_x_log10() +
  scale_y_log10()
```

From these basic data we can do some simple filtering to pick out the cells with low counts or high mitochondrial content, removing the worst offenders before we look at some of the other properties.
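As a sketch of what such a filter might look like - the cutoffs here are illustrative only, and we'll choose real thresholds from the QC plots in the following sections:

```{r eval=FALSE}
# Illustrative cutoffs only (not run) - the real filtering happens
# later, once the thresholds have been justified by the QC plots
subset(data, nCount_RNA > 1000 & percent_MT < 20) -> data.filtered
```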
We can also see that there are potentially a couple of different populations here, with different relationships between the read (UMI) count and the number of genes detected. We can try to quantitate this by calculating a complexity value which relates the number of genes to the number of UMIs. Higher values indicate that we're getting shallower coverage of more genes, and lower values mean that we're seeing fewer genes overall. This can often link to the percent largest gene value from before, but the effect can be more widespread than that.

Plotting complexity
-------------------

The standard way of calculating this is ```log10(genes)/log10(counts)```; however, this gives absolute values which are difficult to judge. A possibly better approach is to fit a line through the cloud and then calculate the difference between the observed and expected values.

```{r}
calculate_complexity(data) -> data
```

Now we can plot this. Since ```calculate_complexity``` stores its results in the metadata, we first refresh our ```qc.metrics``` tibble so it contains the new ```complexity_diff``` column, and re-fit the trend line so we can draw it on the scatterplot.

```{r}
plot_complexity(data)

# Refresh qc.metrics to pick up the new complexity_diff column
as_tibble(
  data[[]],
  rownames="Cell.Barcode"
) -> qc.metrics

# Re-fit the log-log trend line so we can draw it on the scatterplot
lm(log10(qc.metrics$nFeature_RNA) ~ log10(qc.metrics$nCount_RNA)) -> complexity.lm

qc.metrics %>%
  ggplot(aes(x=complexity_diff)) +
  geom_density(fill="yellow")

qc.metrics %>%
  mutate(complexity_diff=replace(complexity_diff,complexity_diff< -0.1,-0.1)) %>%
  ggplot(aes(x=log10(nCount_RNA), y=log10(nFeature_RNA), colour=complexity_diff)) +
  geom_point(size=0.5) +
  geom_abline(slope=complexity.lm$coefficients[2], intercept = complexity.lm$coefficients[1]) +
  scale_colour_gradient2(low="blue2",mid="grey",high="red2")

qc.metrics %>%
  ggplot(aes(x=complexity_diff, y=percent_Largest_Gene)) +
  geom_point()
```

Examining Largest Genes
-----------------------

Some of the unusual populations in these plots can derive from the activity of a single gene, so we can look into this more closely. First, let's see what the largest genes are.

```{r}
qc.metrics %>%
  group_by(largest_gene) %>%
  count() %>%
  arrange(desc(n)) -> largest_gene_list

largest_gene_list
```

We can see what the big genes are doing in any of the previous plots.

```{r}
largest_gene_list %>%
  filter(n>140) %>%
  pull(largest_gene) -> largest_genes_to_plot

qc.metrics %>%
  filter(largest_gene %in% largest_genes_to_plot) %>%
  mutate(largest_gene=factor(largest_gene, levels=largest_genes_to_plot)) %>%
  arrange(largest_gene) %>%
  ggplot(aes(x=log10(nCount_RNA), y=log10(nFeature_RNA), colour=largest_gene)) +
  geom_point(size=1) +
  scale_colour_manual(values=c("grey",RColorBrewer::brewer.pal(9,"Set1")))

qc.metrics %>%
  filter(largest_gene %in% largest_genes_to_plot) %>%
  mutate(largest_gene=factor(largest_gene, levels=largest_genes_to_plot)) %>%
  arrange(largest_gene) %>%
  ggplot(aes(x=complexity_diff, y=percent_Largest_Gene, colour=largest_gene)) +
  geom_point() +
  scale_colour_manual(values=c("grey",RColorBrewer::brewer.pal(9,"Set1")))
```

We have some super-outliers which are being driven by IGKC. For the remainder, it looks like the lower complexity cells are mostly either mitochondrial, dominated by MT-CO1, or ribosomal, with either RPL10 or RPS18 as the largest gene. Let's colour by those metrics to see this more clearly.

```{r}
qc.metrics %>%
  arrange(percent_MT) %>%
  ggplot(aes(x=complexity_diff, y=percent_Largest_Gene, colour=percent_MT)) +
  geom_point() +
  scale_colour_gradient(low="grey", high="red2")
```

```{r}
qc.metrics %>%
  arrange(percent_Ribosomal) %>%
  ggplot(aes(x=complexity_diff, y=percent_Largest_Gene, colour=percent_Ribosomal)) +
  geom_point() +
  scale_colour_gradient(low="grey", high="red2")
```

That seems to fit with the rest of the story. It's maybe not surprising that cells which have a lot of their reads taken up by highly active ribosomes or mitochondria show less diversity overall.
Setting QC Cutoffs
==================

In general it's a good idea to be fairly permissive when filtering your initial data. Depending on the source of your counts and the way they were imported, you'll probably already have removed the cells with very low counts, and the genes represented in only one or two cells.

Here we'll set cutoffs on two of the metrics we calculated, but you will need to look at the QC of your own data to help decide. Remember, we will look at QC again after quantitating and clustering the data, so we can always come back and filter more harshly later if we wish.

```{r}
qc.metrics %>%
  ggplot(aes(percent_MT)) +
  geom_histogram(binwidth = 0.5, fill="yellow", colour="black") +
  ggtitle("Distribution of Percentage Mitochondrial Sequence") +
  geom_vline(xintercept = 10)
```

```{r}
qc.metrics %>%
  ggplot(aes(percent_Largest_Gene)) +
  geom_histogram(binwidth = 0.7, fill="yellow", colour="black") +
  ggtitle("Distribution of Percentage Largest Gene") +
  geom_vline(xintercept = 10)
```

Filtering
=========

From the QC we can then filter the data to get rid of cells with unusual QC metrics. We've set cutoffs based on the plots we made before.

```{r}
subset(
  data,
  nFeature_RNA>750 &
    nFeature_RNA < 2000 &
    percent_MT < 10 &
    percent_Largest_Gene < 10
) -> data

data
```

Ideally, after filtering we should re-plot to make sure that the data really do look better.

Normalisation, Selection and Scaling
====================================

Normalisation
-------------

Before we do any analysis with the data we need to normalise the raw counts we currently have, to get values which are more comparable between cells. The default normalisation in Seurat is pretty simple - it scales the counts by the total counts in each cell, multiplies by 10,000 and then log transforms.

```{r}
NormalizeData(data, normalization.method = "LogNormalize") -> data
```

We can now access the normalised data in ```data@assays$RNA@data```. We can use this to get a list of the most highly expressed genes overall.

```{r}
# rowMeans from the Matrix package works efficiently on the sparse data matrix
Matrix::rowMeans(data@assays$RNA@data) -> gene.expression

sort(gene.expression, decreasing = TRUE) -> gene.expression

head(gene.expression, n=50)
```

We can already see that there may be some issues to address in these data. MALAT1 is a nuclear expressed transcript which tends to persist when cells have lysed and the cytoplasm has gone. It is generally highly expressed anyway, but cells with a very high level might indicate a problem.

We can also see high amounts of ribosomal proteins. Again, these are generally highly expressed, but their enrichment in specific subsets of cells might be a concern for the accuracy of quantitation in the data.

We can look in various ways at how well the data have been normalised. We can pick out a specific gene:

```{r}
ggplot(mapping = aes(x=data@assays$RNA@data["GAPDH",])) +
  geom_histogram(binwidth = 0.05, fill="yellow", colour="black") +
  ggtitle("GAPDH expression")
```

So even for a so-called housekeeping gene we still see a significant proportion of dropout cells, and expression values which spread over three orders of magnitude.

We can also go a bit wider and pick the first 100 cells and look at the distributions of their expression values.

```{r}
# The data slot is a sparse matrix, so we convert the subset we want
# to a dense matrix before turning it into a tibble
as_tibble(
  as.matrix(data@assays$RNA@data[,1:100])
) %>%
  pivot_longer(
    cols=everything(),
    names_to="cell",
    values_to="expression"
  ) %>%
  ggplot(aes(x=expression, group=cell)) +
  geom_density() +
  coord_cartesian(ylim=c(0,0.6), xlim=c(0,3))
```

So we can see that this simplistic normalisation doesn't actually normalise the quantitative data very well, because it is so biased by the proportion of zero values in the dataset.
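To make the zero-inflation problem concrete, this is the arithmetic which ```LogNormalize``` performs, sketched by hand for a single cell (this mirrors the documented calculation, not Seurat's actual internal code). Every zero count stays exactly zero after the transformation, so the shape of each cell's distribution is dominated by its dropout rate.

```{r eval=FALSE}
# LogNormalize by hand for the first cell (a sketch):
# value = ln(1 + 10,000 * count / total counts in that cell)
counts <- data@assays$RNA@counts[, 1]
log.norm <- log1p(10000 * counts / sum(counts))
head(sort(log.norm, decreasing = TRUE))
```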
We can try the normalisation again, this time using a centered log ratio (CLR) transformation - more similar to the sort of size-factor-based normalisation which is used for many RNA-Seq experiments. The ```margin=2``` option means that it normalises per cell instead of per gene.

```{r}
NormalizeData(data, normalization.method = "CLR", margin = 2) -> data
```

We can now re-plot the distributions to see whether they look any better.

```{r}
as_tibble(
  as.matrix(data@assays$RNA@data[,1:100])
) %>%
  pivot_longer(
    cols=everything(),
    names_to="cell",
    values_to="expression"
  ) %>%
  ggplot(aes(x=expression, group=cell)) +
  geom_density() +
  coord_cartesian(ylim=c(0,0.6), xlim=c(0,3))
```

This method clearly gives us much better matched distributions, which will make quantitative comparisons between cells easier, so we'll stick with it.

We can also look at some overall metrics. Here we compare the expression value at the 95th percentile of each cell to the percentage of genes which were measured (had a non-zero count) in that cell.

```{r fig.height=5, fig.width=6}
tibble(
  pc95 = apply(data[["RNA"]]@data,2,quantile,0.95),
  measured = apply(data[["RNA"]]@data,2,function(x)(100*sum(x!=0))/length(x))
) -> normalisation.qc

normalisation.qc %>%
  ggplot(aes(x=measured,y=pc95))+
  geom_point()+
  ggtitle("Normalisation of data")
```

We can see that the CLR normalisation works pretty well, with the 95th percentile being largely stable for cells with more than 5% measured genes (below that, the 95th percentile will be zero by definition).

Cell Cycle Scoring
------------------

Now that we have quantitated the data we can have a look at whether the cell cycle is having any effect on the data. Seurat comes with a set of marker genes for different cell cycle stages which we can use.

```{r}
cc.genes.updated.2019
```

We can use these to try to predict the cell cycle phase of each cell.

```{r}
CellCycleScoring(data,
                 s.features = cc.genes.updated.2019$s.genes,
                 g2m.features = cc.genes.updated.2019$g2m.genes,
                 set.ident = TRUE) -> data
```

We should now have a set of new metadata columns giving the scores for S and G2M, along with the predicted phase.

```{r}
data[[]]
```

We can look at the spread of the cells in different states.

```{r}
as_tibble(data[[]]) %>%
  ggplot(aes(Phase)) +
  geom_bar()
```

```{r}
as_tibble(data[[]]) %>%
  ggplot(aes(x=S.Score, y=G2M.Score, color=Phase)) +
  geom_point() +
  coord_cartesian(xlim=c(-0.15,0.15), ylim=c(-0.15,0.15))
```

Although the tool has made a phase prediction for each cell, there isn't a huge separation between the groups it has picked, so we have some hope that this will have a relatively minor influence on the overall expression patterns we see. We'll pick this up later, once we've clustered the data and can see what the content of the different clusters looks like.
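If the cell cycle had looked like a major confounder, the standard Seurat remedy is to regress the phase scores out when scaling the data. We don't need it here, but a sketch would be:

```{r eval=FALSE}
# Only needed if the cell cycle dominates the structure - not run here
ScaleData(
  data,
  vars.to.regress = c("S.Score", "G2M.Score"),
  features = rownames(data)
) -> data
```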
Gene Selection
--------------

Before going on to do the dimensionality reduction we're going to do some filtering of genes to remove those which are likely to be uninformative about the overall structure of the data. The main method for this is to find unusually variable genes - the variability is assessed in the context of each gene's expression level, since lowly expressed genes look more variable by standard measures.

Seurat provides a method to calculate a normalised intensity for each gene, and can then select the top 'n' most variable features. In this case we're selecting the 500 most variable genes.

```{r}
FindVariableFeatures(
  data,
  selection.method = "vst",
  nfeatures=500
) -> data
```

The variability information can be accessed using the ```HVFInfo``` method, and the names of the selected variable features with ```VariableFeatures()```.

```{r}
as_tibble(HVFInfo(data),rownames = "Gene") -> variance.data

variance.data %>%
  mutate(hypervariable=Gene %in% VariableFeatures(data)) -> variance.data

head(variance.data, n=10)
```

We can plot out a graph of the variance vs mean and highlight the selected genes, so we can see whether we think we're likely to capture what we need.

```{r}
variance.data %>%
  ggplot(aes(log(mean),log(variance),color=hypervariable)) +
  geom_point() +
  scale_color_manual(values=c("black","red"))
```

Scaling
-------

Before putting the data into PCA for dimensionality reduction we will scale the genes so that they have a mean of 0 and a variance of 1. This is claimed to make the PCA less biased by absolute expression level.

```{r}
ScaleData(data,features=rownames(data)) -> data
```

Dimensionality Reduction
========================

Now we've got to the stage where we can do the reduction. We're going to use two methods - PCA and tSNE.

PCA
---

We can start by actually running the PCA. We will only use the variable features which we previously selected. The PCA will calculate all of our PCs and will also give us a list of the genes which were most highly (and lowly) weighted in the different PCs.

```{r}
RunPCA(data,features=VariableFeatures(data)) -> data
```

We can use the ```DimPlot``` function to plot any of our projections - we just need to tell it which one to use. Here we're going to just plot the first two PCs from our PCA. Since we classified our cells by cell cycle before, DimPlot will pick this up and colour the points by phase, so we can see whether the cell cycle is having a big effect on the structure we're picking out.

```{r fig.height=6, fig.width=8}
DimPlot(data,reduction="pca")
```

We can use the ```group.by``` option to colour by any other metadata column. We can also add labels to the plot. Finally, we can add a call to the ```NoLegend()``` function to suppress the automatic colour legend which is drawn.

```{r fig.height=6, fig.width=6}
DimPlot(data,reduction="pca", group.by = "largest_gene", label = TRUE, label.size = 3) +
  NoLegend()
```

We can look at later PCs by passing the ```dims``` argument.

```{r fig.height=6, fig.width=8}
DimPlot(data,reduction="pca", dims=c(3,4))
```

This nicely shows us the power, but also the limitations, of PCA: not all of the useful information is captured in the first two principal components. The question then becomes how far down the set of PCs we need to go to capture all of the biologically relevant information.

We can start with a simple plot called the elbow plot, which quantitates the amount of variance captured by each PC.

```{r fig.height=4, fig.width=8}
ElbowPlot(data)
```

From this we can see that fairly high amounts of information are captured in the first 10 PCs, and that maybe we gain some additional information up to around 15 PCs, but beyond that the plot is very flat. Taking somewhere between 10 and 15 PCs should therefore capture what we want to see.
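If you'd rather have a quantitative rule than eyeballing the elbow, one common heuristic (our addition here, not part of the core workflow) is to ask where the cumulative variance across the computed PCs passes a threshold such as 90%.

```{r}
# Percentage of variance captured by each computed PC, from the
# standard deviations stored in the reduction
pct <- 100 * data@reductions$pca@stdev^2 / sum(data@reductions$pca@stdev^2)

# The first PC at which we have accumulated 90% of the variance
which(cumsum(pct) > 90)[1]
```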
For a more detailed view we can draw dimensionality heatmaps. These are plots of PCA weightings for the most highly and lowly weighted genes, shown against the set of cells which are most strongly influenced by the PC. The idea is that as long as we're seeing clear structure in one of these plots, then we're still adding potentially useful information to the analysis.

```{r fig.height=15,fig.width=8}
DimHeatmap(data,dims=1:15, cells=500)
```

We can see that there is still clear structure right up to PC15, so we will keep all of these PCs, but we probably don't need to go any further.

tSNE
----

To try to capture more information in a single 2D plot we're going to take the first 15 dimensions of the PCA - which were calculated on only the 500 most variable genes - forward into a tSNE projection. We can run this in a very similar way to the PCA, except that we specify the number of dimensions we want to use.

Since tSNE uses a randomised starting position, if we want to be able to reproduce the plot we see then we'll need to know the random 'seed' which was used to create it. We could capture the current state of the random number generator (from the ```.Random.seed``` variable) and report it; this would change every time we ran, but at least the result would be recorded. In our case, because we want everyone to get the same answer, I've saved the seed from when I prepared this tutorial and we'll re-use that.

```{r}
8482 -> saved.seed
set.seed(saved.seed)
```

We are now going to run the tSNE. The one parameter we might need to play around with is the perplexity value (the expected number of nearest neighbours). By default this is set (somewhat arbitrarily) to 30. Setting this to a low value will help resolve small clusters, but at the expense of large clusters becoming more diffuse. Setting it to a higher value will make the larger clusters more distinct, but may lose smaller clusters.

```{r fig.width=6, fig.height=5}
RunTSNE(
  data,
  dims=1:15,
  seed.use = saved.seed,
  perplexity=10
) -> data

DimPlot(data,reduction = "tsne", pt.size = 1) + ggtitle("tSNE with Perplexity 10")
```

```{r fig.width=6, fig.height=5}
RunTSNE(
  data,
  dims=1:15,
  seed.use = saved.seed,
  perplexity=200
) -> data

DimPlot(data,reduction = "tsne", pt.size = 1) + ggtitle("tSNE with Perplexity 200")
```

```{r fig.width=6, fig.height=5}
RunTSNE(
  data,
  dims=1:15,
  seed.use = saved.seed
) -> data

DimPlot(data,reduction = "tsne", pt.size = 1) + ggtitle("tSNE with default Perplexity (30)")
```

We can see the differences between the perplexities - the structures aren't completely different, but their compactness and the emphasis on smaller clusters certainly change. We can also see that there isn't a huge effect of the cell cycle, in that all phases are generally represented in all clusters, with maybe one cluster being somewhat depleted for cells in S phase.
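As an aside, UMAP is the other widely used non-linear embedding, and it can be interesting to compare it with tSNE on the same PCs. A minimal sketch (not part of this course's workflow) would be:

```{r eval=FALSE}
# An alternative embedding for comparison - not run here
RunUMAP(data, dims = 1:15) -> data
DimPlot(data, reduction = "umap", pt.size = 1)
```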
Defining Cell Clusters
======================

At the moment in our PCA and tSNE we can see that there are clusters of cells, but we haven't tried to identify what they are. We will come to this problem now.

We're going to use a graph-based method to detect clusters. This finds the 'k' nearest neighbours to each cell and makes this into a graph. It then looks for highly inter-connected subgraphs within the graph and uses these to define clusters.

In the first instance we just define the graph. We can control the number of neighbours used with the ```k.param``` value; the default is 20. As before, we use the first 15 dimensions of the PCA to calculate the neighbours.

```{r}
FindNeighbors(data,dims=1:15) -> data
```

Since each cell is only connected to its 20 nearest neighbours, the resulting graph is stored as another sparse matrix.

```{r}
data@graphs$RNA_snn[1:10,1:10]
```

We can then segment the graph using the ```FindClusters``` method. The resolution controls how fragmented the graph will be: larger values give more, smaller clusters, while smaller values give fewer, larger clusters.

```{r}
FindClusters(data,resolution = 0.5) -> data
```

The clusters are stored in the "seurat_clusters" metadata annotation, so they can be used in any way the previous QC data were used. They will also be picked up automatically when projections are plotted.

```{r}
head(data$seurat_clusters, n=50)
```

If we go back and plot our PCA we can see the clusters, but we can also see that some of the clusters don't resolve very well in PC1 vs PC2.

```{r fig.height=6, fig.width=7}
DimPlot(data,reduction="pca",label = TRUE)+ggtitle("PC1 vs PC2 with Clusters")
```

If we look further through the PCs we can see that some of the clusters which are overlaid in PC1 start to separate. These differences represent a small proportion of the overall variance, but can be important in resolving changes. In PC4 we get a clear resolution of cluster 8, which was previously conflated with 9 and 10. In PC9 we separate out clusters 6 and 9.

```{r fig.height=6, fig.width=7}
DimPlot(data,reduction="pca", dims=c(4,9), label=TRUE)+ggtitle("PC4 vs PC9 with Clusters")
```

If we look at the same thing with the tSNE plot we can see that the information across all 15 PCs used is preserved, and we see the overall similarity of the cells.

```{r fig.height=6, fig.width=7}
DimPlot(data,reduction="tsne",pt.size = 1, label = TRUE, label.size = 7)
```

Examining the properties of the clusters
========================================

Now that we have our clusters we can look to see if they are being influenced by any of the QC metrics we calculated earlier. We can see that some of the clusters are skewed in one or more of these metrics, so we will want to take note of this. Some of these skews could be biological in nature, but they could equally be noise coming from the data.

Number of reads
---------------

```{r}
VlnPlot(data,features="nCount_RNA")
```

Number of genes
---------------

```{r}
VlnPlot(data,features="nFeature_RNA")
```

It might be tempting to think that clusters 8, 10 and 12 could be from GEMs where two or more cells were captured, since they all have unusually high coverage and diversity. They are also small and tightly clustered away from the main groups of points.
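If we wanted to follow up on this doublet hypothesis we could use a dedicated tool. As a sketch - assuming the Bioconductor ```scDblFinder``` package, which is not part of this course - the check might look like:

```{r eval=FALSE}
# Hypothetical doublet check with scDblFinder (not run here)
library(scDblFinder)

scDblFinder(as.SingleCellExperiment(data)) -> sce

# Singlet/doublet calls per cell, which we can copy back into the metadata
table(sce$scDblFinder.class)
data$doublet_class <- sce$scDblFinder.class
```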
Percent Mitochondrion
---------------------

```{r}
VlnPlot(data,features="percent_MT")
```

MALAT1
------

```{r}
VlnPlot(data,features="MALAT1")
```

Cell Cycle
----------

```{r}
data@meta.data %>%
  group_by(seurat_clusters,Phase) %>%
  count() %>%
  group_by(seurat_clusters) %>%
  mutate(percent=100*n/sum(n)) %>%
  ungroup() %>%
  ggplot(aes(x=seurat_clusters,y=percent, fill=Phase)) +
  geom_col() +
  ggtitle("Percentage of cell cycle phases per cluster")
```

Percent Largest Gene
--------------------

```{r}
VlnPlot(data,features="percent_Largest_Gene")
```

We can also see which gene is the largest in each cluster.

```{r}
data[[]] %>%
  group_by(seurat_clusters, largest_gene) %>%
  count() %>%
  arrange(desc(n)) %>%
  group_by(seurat_clusters) %>%
  slice(1:2) %>%
  ungroup() %>%
  arrange(seurat_clusters, desc(n))
```

```{r fig.height=12, fig.width=12}
data@reductions$tsne@cell.embeddings %>%
  as_tibble() %>%
  add_column(seurat_clusters=data$seurat_clusters, largest_gene=data$largest_gene) %>%
  filter(largest_gene %in% largest_genes_to_plot) %>%
  ggplot(aes(x=tSNE_1, y=tSNE_2, colour=seurat_clusters)) +
  geom_point() +
  facet_wrap(vars(largest_gene))
```

That's already quite nice for explaining some of the functionality of the clusters, but there's more in there than just the behaviour of the most expressed gene, so let's do a more systematic search for markers.

Finding Markers for each Cluster
================================

Now that we have defined the different clusters we can start to evaluate them. One way to do this is to identify genes whose expression defines each cluster.

Seurat provides the ```FindMarkers``` function to identify genes which are specific to a given cluster. This is a somewhat generic function which can run a number of different tests; we are only going to focus on two of these, but you can find the others in the Seurat documentation.

The two tests we are going to use are:

1. The Wilcoxon rank sum test. This identifies genes which are differentially regulated between two groups of cells. It is a non-parametric test which makes very few assumptions about the behaviour of the data, and just looks for genes whose expression is consistently ranked more highly in one group of cells compared to another.

2. The ROC test. This is a measure of how specifically a gene can predict membership of two groups. It gives a value between 0.5 (no predictive value) and 1 (perfectly predictive on its own) to say how useful each gene is at predicting membership. Again, this is a non-parametric test which only cares about the ranked expression measures for each gene.

Single Prediction
-----------------

In the simplest case we can find genes which appear to be upregulated in a specific cluster compared to all cells not in that cluster. The additional ```min.pct``` parameter says that the gene must be measured in at least 25% of the cells in either cluster 0 or all of the other cells in order to be tested. This cuts down on testing genes which are effectively unexpressed.

```{r}
FindMarkers(data,ident.1 = 0, min.pct = 0.25)
```

We can then use the convenience plotting method ```VlnPlot``` to show the expression levels of these genes in the cells in each cluster.

```{r}
VlnPlot(data,features="VCAN")
```

We can indeed see that the VCAN gene is more highly expressed in cluster 0 than in any of the other clusters, but we can also see that it is reasonably highly expressed in clusters 10 and 11.
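Before we script this up for every cluster ourselves, it's worth knowing that Seurat also has a built-in wrapper, ```FindAllMarkers```, which runs the same comparison for each cluster in a single call. In the next section we build the table manually so that each step is explicit, but the one-liner alternative is:

```{r eval=FALSE}
# Built-in equivalent of the manual loop in the next section
FindAllMarkers(data, min.pct = 0.25) -> all.markers
head(all.markers)
```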
Multiple Prediction
-------------------

We can extend the same methodology to make predictions for all of the clusters. Here we're calling ```FindMarkers``` for each of the clusters and combining the results into a single table, adding an additional column which records which cluster each hit initially came from. This will take a little while to run, but at the end we'll have predictions for all clusters.

```{r}
# This loop just runs the FindMarkers function on all of the clusters
lapply(
  levels(data[["seurat_clusters"]][[1]]),
  function(x)FindMarkers(data,ident.1 = x,min.pct = 0.25)
) -> cluster.markers

# This simply adds the gene name and cluster number to the results of FindMarkers
sapply(0:(length(cluster.markers)-1),function(x) {
  cluster.markers[[x+1]]$gene <<- rownames(cluster.markers[[x+1]])
  cluster.markers[[x+1]]$cluster <<- x
})

# Finally we collapse the list of hits down to a single table and sort it
# by FDR to put the most significant ones first
as_tibble(do.call(rbind,cluster.markers)) %>%
  arrange(p_val_adj) -> cluster.markers

cluster.markers
```

We can extract from this list the most upregulated gene for each cluster.

```{r}
cluster.markers %>%
  group_by(cluster) %>%
  slice(1) %>%
  pull(gene) -> best.wilcox.gene.per.cluster

best.wilcox.gene.per.cluster
```

We can then plot these out.

```{r fig.width=20, fig.height=12}
VlnPlot(data,features=best.wilcox.gene.per.cluster)
```

We can see that for some clusters (eg cluster 8 - CDKN1C) we really do have a gene which can uniquely predict membership, but for many others (eg cluster 5 - IL7R) we have a hit which also picks up other clusters (clusters 1 and 4 in this case).

We can try to clean this up for any individual pair of clusters by using the ROC analysis.

```{r}
FindMarkers(data,ident.1 = 5, ident.2 = 4, test.use = "roc", only.pos = TRUE)
```

We want to look at the power value here. A value of 1 is perfectly separating, and a value of 0 is random. Our best positive hit (more expressed in cluster 5) has a power of 0.808. We can see what that looks like.

```{r}
VlnPlot(data,features="LTB")
```

That does indeed do a slightly better job of separating cluster 5 from cluster 4, but it also comes up all over the place in other clusters. Let's look at a hit lower down the scale.

```{r}
VlnPlot(data,features="TPT1")
```

This could actually be a better option to use as a marker for this cluster.

Automated Cell Type Annotation
==============================

We can use knowledge of cell type marker genes to classify our cells. Lots of systems exist to do this; we're going to use SCINA. This analysis requires a list of marker genes for each of the cell types we want to find. We're using a small set distributed with SCINA, but you can make a larger collection relevant to the cell types you're working with.

```{r}
as.data.frame(data@assays$RNA[,]) -> scina.data

load(system.file('extdata','example_signatures.RData', package = "SCINA"))

signatures
```
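If you need annotations tailored to your own tissue, a SCINA signature set is just a named list of marker gene vectors, so you can construct your own. The markers below are illustrative examples of commonly cited PBMC genes, not a validated signature set:

```{r eval=FALSE}
# Illustrative custom signature list - the marker choices here are
# examples only, not a curated set
my.signatures <- list(
  cd14_monocytes = c("CD14", "LYZ", "VCAN"),
  b_cells        = c("MS4A1", "CD79A"),
  nk_cells       = c("GNLY", "NKG7")
)
```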
```{r}
SCINA(
  scina.data,
  signatures,
  max_iter = 100,
  convergence_n = 10,
  convergence_rate = 0.999,
  sensitivity_cutoff = 0.9,
  rm_overlap=TRUE,
  allow_unknown=TRUE
) -> scina.results

data$scina_labels <- scina.results$cell_labels
```

Now we can plot out the tSNE projection coloured by the automatic annotation.

```{r}
DimPlot(data,reduction = "tsne", pt.size = 1, label = TRUE, group.by = "scina_labels", label.size = 5)
```

We can also relate this to the clusters which we automatically detected.

```{r fig.height=8, fig.width=8}
tibble(
  cluster = data$seurat_clusters,
  cell_type = data$scina_labels
) %>%
  group_by(cluster,cell_type) %>%
  count() %>%
  group_by(cluster) %>%
  mutate(
    percent=(100*n)/sum(n)
  ) %>%
  ungroup() %>%
  mutate(
    cluster=paste("Cluster",cluster)
  ) %>%
  ggplot(aes(x="",y=percent, fill=cell_type)) +
  geom_col(width=1) +
  coord_polar("y", start=0) +
  facet_wrap(vars(cluster)) +
  theme(axis.text.x=element_blank()) +
  xlab(NULL) +
  ylab(NULL)
```

Colouring by genes
------------------

The other view we might want, once we have picked out some genes we like, is to colour the entirety of a projection by that gene's expression. Sometimes we do this to look at genes we expect to be diagnostic for different cell subtypes, but sometimes we use the same method to explore hits which come out of our own analysis.

We can use the ```FeaturePlot``` function to colour the tSNE projection with the expression level of a number of different genes.

```{r fig.width=15, fig.height=15}
FeaturePlot(data,features=best.wilcox.gene.per.cluster)
```

We can see that some of these genes very specifically isolate to their own cluster, but for others we see expression which is spread over a number of clusters.

This leads to a larger problem: how do we evaluate the clustering which has been done in our data? If you remember, the tSNE is built on pairwise distances between cells, and these in turn come from the PCA distances over (in our case) 15 PCs, which were themselves calculated on (in our case) 500 hypervariable genes. Trying to work our way back from the final plot positions to this more abstract set of relationships is a challenging task.

Exploring relationships
=======================

To try to make this more approachable we're going to use a visualisation system called Sleepwalk.

Sleepwalk
---------

Sleepwalk is a visualisation and exploration tool which lets you construct an interactive projection where you can mouse over any point and have the plot coloured by the strength of that point's relationship to all of the other points. What you hope to see is that your groups all interact strongly with each other and not with anything else, but you will also see clusters where this isn't the case.

Here we'll start a Sleepwalk session using our tSNE projection, providing the PCA results to calculate the distances. For reference, this is the plot with the colours of the groups we identified overlaid on it.

```{r fig.height=7, fig.width=8}
DimPlot(data,reduction = "tsne", pt.size = 1, label = TRUE, label.size = 8)
```

Now we'll start the interactive plot. For the distances we're only going to provide the same 15 PCs which we actually used when defining the tSNE projection, so that it's a fair comparison.

```{r}
# The sleepwalk function comes from the sleepwalk package, which wasn't
# loaded in the setup section
library(sleepwalk)

sleepwalk(data@reductions$tsne@cell.embeddings, data@reductions$pca@cell.embeddings[,1:15])
```

Look at cluster 11. How similar are the points in the cluster, and how different is it from the clusters around it? Do the same thing for cluster 12, which looks similarly isolated on the plot.

Have a look at the relationships between clusters 1, 3, 4 and 8. Should these be separated into different clusters? Look at the results of finding markers for these clusters and see if that helps inform your decision.
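This is also a natural point to save the processed object, so that the analysis can be picked up later without re-running everything.

```{r eval=FALSE}
saveRDS(data, "immune_analysis.rds")

# and in a later session:
# readRDS("immune_analysis.rds") -> data
```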
Export to Loupe
---------------

Finally, we can go back from the analysis we've done here to the Loupe browser, to try to take advantage of both the flexibility of R and the interactivity of Loupe. To do this we're going to write out the tSNE coordinates from R into a file which we can then open in Loupe.

One oddity here is that during import Seurat removes the "-1" from the end of the names of all of the cell barcodes. If we don't put them back then Loupe won't recognise them, so we have to do a bit of data manipulation before saving the file.

```{r}
data@reductions$tsne@cell.embeddings[1:10,]

data@reductions$tsne@cell.embeddings %>%
  as_tibble(rownames = "barcode") %>%
  mutate(barcode=paste0(barcode,"-1")) %>%
  write_csv("for_loupe_import.csv")
```
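Session Info
============

Finally, we record the versions of R and the packages we used, which is good practice for reproducibility.

```{r}
sessionInfo()
```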