# Herr Strathmann.

## I am interested in:

• Machine Learning and Computational Statistics, in particular
  • Kernel Methods
  • Bayesian Statistics and MCMC
• Bioinformatics (a little)
• Open-source software

## A list of my publications:

Marianne Ruhl, Patrick Chhatwal, Heiko Strathmann, Thomas Kuntzen, Dorothea Bankwitz, Kathrin Skibbe, Andreas Walker, Falko M Heinemann, Peter Horn, Todd M Allen, Daniel Hoffmann, Thomas Pietschmann and Jörg Timm. Escape from a dominant HLA-B*15-restricted CD8+ T cell response against hepatitis C virus requires compensatory mutations outside the epitope. Journal of Virology (November), 2011.

@article{Ruhl2011,
abstract = "Antiviral CD8+ T cells are a key component of the adaptive immune system against HCV. For the development of immune therapies it is essential to understand how CD8+ T cells contribute to clearance of infection and why they fail so often. A mechanism for secondary failure is mutational escape of the virus. However, some substitutions in viral epitopes are associated with fitness costs and often require compensatory mutations. We hypothesized that compensatory mutations may point towards epitopes under particularly strong selection pressure that may be beneficial for vaccine design because of a higher genetic barrier to escape. We previously identified two HLA-B*15-restricted CD8+ epitopes in NS5B (LLRHHNMVY(2450-2458) and SQRQKKVTF(2466-2474)) based on sequence analysis of a large HCV genotype 1b outbreak. Both epitopes are targeted in about 70\% of HLA-B*15 positive individuals exposed to HCV. Reproducible selection of escape mutations was now confirmed in an independent multi-center cohort. Interestingly, mutations were also selected in the epitope flanking region, suggesting that compensatory evolution may play a role. Co-variation analysis of sequences from the database confirmed a significant association between escape mutations inside one of the epitopes (H2454R and M2456L) and substitutions in the epitope flanking region (S2439T and K2440Q). Functional analysis in the subgenomic replicon Con1 confirmed that the primary escape mutations impaired viral replication, while fitness was restored by the additional substitutions in the epitope flanking region.Conclusion: Selection of escape mutations inside a HLA-B*15 epitope requires secondary substitutions in the epitope flanking region that compensate for fitness costs.",
author = {Ruhl, Marianne and Chhatwal, Patrick and Strathmann, Heiko and Kuntzen, Thomas and Bankwitz, Dorothea and Skibbe, Kathrin and Walker, Andreas and Heinemann, Falko M and Horn, Peter A and Allen, Todd M and Hoffmann, Daniel and Pietschmann, Thomas and Timm, J\"{o}rg},
doi = "10.1128/JVI.05603-11",
issn = "1098-5514",
journal = "Journal of virology",
month = "",
number = "November",
pmid = 22072759,
title = "{Escape from a dominant HLA-B*15-restricted CD8+ T cell response against hepatitis C virus requires compensatory mutations outside the epitope.}",
url = "http://www.ncbi.nlm.nih.gov/pubmed/22072759",
year = 2011
}


Arthur Gretton, Bharath Sriperumbudur, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil and Kenji Fukumizu. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems. 2012.

@inproceedings{Gretton2012a,
author = "Gretton, Arthur and Sriperumbudur, Bharath and Sejdinovic, Dino and Strathmann, Heiko and Balakrishnan, Sivaraman and Pontil, Massimiliano and Fukumizu, Kenji",
booktitle = "Advances in Neural Information Processing Systems",
title = "{Optimal kernel choice for large-scale two-sample tests}",
year = 2012
}


## Russian Roulette for Intractable Likelihoods


While I was working at UCL's Statistics Department in winter, I got involved in a very exciting project in the group of Mark Girolami. It is based around the Pseudo-Marginal Metropolis-Hastings algorithm. In 2003, a paper in Genetics [1] described an approach to sampling from a distribution with the standard Metropolis-Hastings algorithm when the density function is not available, by simply replacing it with an unbiased estimate.

For a standard Bayesian inference problem with likelihood $$\pi(y|\theta)$$, prior $$\pi(\theta)$$, and a proposal $$Q$$, the likelihood in the standard M-H ratio $\frac{\pi(y|\theta^{\text{new}})}{\pi(y|\theta)}\times\frac{\pi(\theta^{\text{new}})}{\pi(\theta)}\times \frac{Q(\theta|\theta^{\text{new}})}{Q(\theta^{\text{new}}|\theta)}$ is replaced by an unbiased estimator, giving

$\frac{\hat{\pi}(y|\theta^{\text{new}})}{\hat{\pi}(y|\theta)}\times\frac{\pi(\theta^{\text{new}})}{\pi(\theta)}\times \frac{Q(\theta|\theta^{\text{new}})}{Q(\theta^{\text{new}}|\theta)}.$ Remarkably, the resulting Markov chain converges to the same posterior distribution as the exact algorithm. The approach was later formalised and popularised in [2].
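
To make the mechanics concrete, here is a minimal sketch of pseudo-marginal Metropolis-Hastings (my illustration, not code from the project), assuming a symmetric proposal so that the $$Q$$ ratio cancels; `lik_hat`, `log_prior` and `propose` are placeholder names for the user-supplied pieces. The one detail that matters is that the likelihood estimate of the current state is recycled until the next acceptance, which is what keeps the chain targeting the exact posterior.

```python
import numpy as np

def pseudo_marginal_mh(lik_hat, log_prior, propose, theta0, num_iter, seed=0):
    """Pseudo-marginal M-H with a symmetric proposal.

    lik_hat(theta)   -- non-negative unbiased estimate of the likelihood pi(y|theta)
    log_prior(theta) -- log prior density
    propose(theta)   -- draw from a symmetric proposal Q(.|theta)
    """
    rng = np.random.default_rng(seed)
    theta, lik = theta0, lik_hat(theta0)      # current state and its recycled estimate
    samples = []
    for _ in range(num_iter):
        theta_new = propose(theta)
        lik_new = lik_hat(theta_new)          # fresh estimate only for the proposal
        log_ratio = (np.log(lik_new) + log_prior(theta_new)
                     - np.log(lik) - log_prior(theta))
        if np.log(rng.uniform()) < log_ratio:
            theta, lik = theta_new, lik_new   # accept: keep the new estimate
        samples.append(theta)
    return np.array(samples)
```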

In our project, we exploited this idea to perform inference over models whose likelihood functions are intractable. Examples of such intractable likelihoods are Ising models or, even simpler, very large Gaussian models; in both cases the normalising constant is very hard to compute. We came up with a way of producing unbiased estimators of these likelihoods, based on writing the likelihood as an infinite sum and then truncating it stochastically.

Producing unbiased estimators for the Pseudo-Marginal approach is a very challenging task: estimates have to be strictly positive. This can be achieved by pulling the sign of the estimates out into the final Monte Carlo integral estimate and adding a correction term (which increases the variance of the estimator); this issue is studied under the name sign problem. The next step is to write the likelihood function as an infinite sum. In our paper, we do this for a geometrically tilted correction of a biased estimator obtained from an approximation such as importance sampling estimates, upper bounds, or deterministic approximations, and for likelihoods based on the exponential function.

In particular, I worked on the exponential function estimate. We took a very nice example from spatial statistics: a worldwide grid of ozone measurements from a satellite, consisting of about 173,405 measurements. We fitted a simple Gaussian model whose covariance matrices are massive (and sparse). In such models, with log-density $\log (\mathcal{N}_x(\mu,\Sigma))=-\frac{1}{2}\log(\det(\Sigma)) - \frac{1}{2}(\mu-x)^T \Sigma^{-1}(\mu-x) + C,$ the normalising constant involves the log-determinant of such a large matrix. Computing it with classical methods such as the Cholesky factorisation $\Sigma=LL^T \Rightarrow \log(\det(\Sigma))=2\sum_i\log(L_{ii})$ is impossible due to memory constraints: the Cholesky factor $$L$$ cannot be stored, since it is in general not sparse. We therefore constructed an unbiased estimator using a very neat method based on graph colourings and Krylov methods from [3].
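
For small dense matrices the Cholesky route is of course perfectly fine; here is a tiny sketch of it (my own illustration with dense NumPy arrays), mainly to contrast with the regime described above, where the factor $$L$$ of a huge sparse $$\Sigma$$ cannot even be stored.

```python
import numpy as np

def logdet_cholesky(Sigma):
    """log det(Sigma) via Sigma = L L^T  =>  log det(Sigma) = 2 * sum_i log(L_ii).
    Works for small dense SPD matrices; infeasible when the generally non-sparse
    factor L of a very large sparse Sigma does not fit into memory."""
    L = np.linalg.cholesky(Sigma)
    return 2.0 * np.sum(np.log(np.diag(L)))

# tiny sanity check against NumPy's slogdet
A = np.random.randn(5, 5)
Sigma = A @ A.T + 5.0 * np.eye(5)   # make it symmetric positive definite
print(logdet_cholesky(Sigma), np.linalg.slogdet(Sigma)[1])
```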

This unbiased estimator of the log-determinant, and hence of the log-likelihood, is then turned into an unbiased estimator of the likelihood itself by writing the exponential function as an infinite series $\exp(\log(\det(\Sigma)))=1+\sum_{i=1}^\infty \frac{\log(\det(\Sigma))^i}{i!}.$

We then construct an unbiased estimator of this series by playing Russian Roulette: we evaluate the terms of the series, plugging in a different estimate of $$\log(\det(\Sigma))$$ for every $$i$$; once those terms are small, we start flipping a coin at every term to decide whether to continue the series or not. If we do continue, we apply weights that ensure unbiasedness, and we make continuing less likely at every iteration so that the procedure eventually stops. This basic idea (borrowed from Physics papers from some 20 years ago), together with some technical details and computational tricks, gives an unbiased estimator of the likelihood of our Gaussian model, which can therefore be plugged into Pseudo-Marginal M-H. This allows one to perform Bayesian inference over models of sizes where it has been impossible before.
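
Here is a rough sketch of the roulette step, assuming we can draw independent unbiased estimates of $$D=\log(\det(\Sigma))$$ (for instance from the graph-colouring/Krylov estimator mentioned above). The burn-in length and the schedule of continuation probabilities are illustrative choices of mine, not the settings from the paper, and the sign correction discussed earlier still applies if the result comes out negative.

```python
import numpy as np

def russian_roulette_exp(estimate_D, q0=0.95, decay=0.9, burn_in=5, seed=0):
    """Unbiased estimate of exp(D) from independent unbiased estimates of D.

    Uses exp(D) = 1 + sum_{i>=1} D^i / i!, estimating D^i by a product of i
    fresh estimates, and truncating the series with coin flips whose
    continuation probabilities shrink; surviving terms are reweighted by the
    inverse survival probability so the overall estimate stays unbiased.
    """
    rng = np.random.default_rng(seed)
    total = 1.0            # the i = 0 term of the series
    prod = 1.0             # product of i independent estimates of D (unbiased for D^i)
    factorial = 1.0
    weight = 1.0           # inverse probability of having survived up to this term
    i = 1
    while True:
        prod *= estimate_D()                 # one fresh unbiased estimate per term
        factorial *= i
        total += weight * prod / factorial
        if i >= burn_in:                     # start the roulette after a few sure terms
            q = q0 * decay ** (i - burn_in)  # continuation probability, decreasing
            if rng.uniform() >= q:
                return total                 # stop: series truncated here
            weight /= q                      # reweight future terms for unbiasedness
        i += 1
```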

More details can be found on our project page (link, see ozone link), and in our paper draft on arXiv (link). One of my Google Summer of Code projects this year for the Shogun machine-learning toolbox is about producing a sophisticated implementation of log-determinant estimators (link). Pretty exciting!

[1]: Beaumont, M. A. (2003). Estimation of population growth or decline in genetically monitored populations. Genetics 164 1139–1160.
[2]: Andrieu, C., & Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 37(2), 697–725.
[3]: Aune, E., Simpson, D., & Eidsvik, J. (2012). Parameter Estimation in High Dimensional Gaussian Distributions.

## NIPS paper: Optimal kernel choice for large-scale two-sample tests

NIPS 2012 is already over. Unfortunately, I could not go due to a lack of travel funding. However, as mentioned a few weeks ago, I contributed to one paper, Optimal kernel choice for large-scale two-sample tests, which is closely related to my Master's project with Arthur Gretton and Massi Pontil. We recently set up a page for the paper where you can download my Matlab implementation of the paper's methods. Feel free to play around with it. I am currently finishing the implementation of most of the methods in the SHOGUN toolbox. We also have a poster which was presented at NIPS. See below for all links.
Update: I have completed the kernel selection framework for SHOGUN; it will be included in the next release. See the base class interface here, and examples of how to use it: single kernel (link) and combined kernels (link). All methods mentioned in the paper are included. I also updated the SHOGUN tutorial (link).

At its core, the paper describes a method for selecting the best kernel for two-sample testing with the linear time MMD. Given a kernel $$k$$ and terms
$h_k((x_{i},y_{i}),(x_{j},y_{j}))=k(x_{i},x_{j})+k(y_{i},y_{j})-k(x_{i},y_{j})-k(x_{j},y_{i}),$
the linear time MMD is their empirical mean,

$\hat\eta_k=\frac{1}{m}\sum_{i=1}^{m}h_k((x_{2i-1},y_{2i-1}),(x_{2i},y_{2i})),$
which is a linear time estimate of the squared distance between the mean embeddings of the distributions that the $$x_i, y_i$$ come from. This quantity allows one to perform a two-sample test, i.e. to tell whether the underlying distributions are different.
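
To make the estimator concrete, here is a minimal NumPy sketch of the linear time MMD with a Gaussian kernel; the kernel choice and the function names are mine, not code from the paper's Matlab implementation.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """k(a_i, b_i) = exp(-||a_i - b_i||^2 / (2 sigma^2)), row-wise."""
    return np.exp(-np.sum((A - B) ** 2, axis=1) / (2.0 * sigma ** 2))

def h_terms(X, Y, sigma):
    """h_k terms over disjoint sample pairs (x_{2i-1}, y_{2i-1}), (x_{2i}, y_{2i})."""
    X1, X2, Y1, Y2 = X[0::2], X[1::2], Y[0::2], Y[1::2]
    n = min(len(X2), len(Y2))                     # drop a leftover sample, if any
    X1, X2, Y1, Y2 = X1[:n], X2[:n], Y1[:n], Y2[:n]
    return (gaussian_kernel(X1, X2, sigma) + gaussian_kernel(Y1, Y2, sigma)
            - gaussian_kernel(X1, Y2, sigma) - gaussian_kernel(X2, Y1, sigma))

def linear_time_mmd(X, Y, sigma):
    """The empirical mean of the h_k terms: the linear-time MMD estimate."""
    return np.mean(h_terms(X, Y, sigma))
```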

Given a finite family of kernels $$\mathcal{K}$$, we select the optimal kernel by maximising the ratio of the MMD statistic to a linear time estimate of the standard deviation of the $$h_k$$ terms,
$k_*=\arg \sup_{k\in\mathcal{K}}\frac{ \hat\eta_k}{\hat \sigma_k},$
where $$\hat\sigma_k^2$$ is an empirical estimate of the variance $$\sigma_k^2=\mathbb{E}[h_k^2] - (\mathbb{E}[h_k])^2$$. Both the statistic and this variance estimate, and hence the whole ratio, can be computed in linear time and constant space. We establish consistency of this empirical estimate in the sense that
$\left| \sup_{k\in\mathcal{K}}\hat\eta_k \hat\sigma_k^{-1} -\sup_{k\in\mathcal{K}}\eta_k\sigma_k^{-1}\right|=O_P(m^{-\frac{1}{3}}).$
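
The selection rule itself is then a one-liner per kernel. The sketch below scans a grid of Gaussian bandwidths and keeps the one with the largest ratio; it is again illustrative code of mine, with a small constant added to the standard deviation purely to avoid division by zero.

```python
import numpy as np

def gaussian_h_terms(X, Y, sigma):
    """h_k terms of the linear-time MMD for a Gaussian kernel of width sigma."""
    k = lambda A, B: np.exp(-np.sum((A - B) ** 2, axis=1) / (2.0 * sigma ** 2))
    X1, X2, Y1, Y2 = X[0::2], X[1::2], Y[0::2], Y[1::2]
    n = min(len(X2), len(Y2))
    X1, X2, Y1, Y2 = X1[:n], X2[:n], Y1[:n], Y2[:n]
    return k(X1, X2) + k(Y1, Y2) - k(X1, Y2) - k(X2, Y1)

def select_bandwidth(X, Y, sigmas, eps=1e-8):
    """Return the bandwidth maximising eta_hat / sigma_hat over the grid sigmas."""
    ratios = []
    for sigma in sigmas:
        h = gaussian_h_terms(X, Y, sigma)
        ratios.append(np.mean(h) / (np.std(h) + eps))  # eps avoids division by zero
    best = int(np.argmax(ratios))
    return sigmas[best], ratios[best]

# example: pick among a geometric grid of bandwidths
# sigma_star, _ = select_bandwidth(X, Y, 2.0 ** np.arange(-3, 4))
```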

In addition, we describe an MKL-style generalisation in which we select the weights of convex combinations of a finite number of baseline kernels,
$\mathcal{K}:=\{k : k=\sum_{u=1}^d\beta_uk_u,\sum_{u=1}^d\beta_u\leq D,\beta_u\geq0, \forall u\in\{1,...,d\}\},$
by solving the convex program
$\min \{ \beta^T\hat{Q}\beta : \beta^T \hat{\eta}=1, \beta\succeq0\},$
where $$\hat{Q}$$ is the positive definite empirical covariance matrix of the $$h$$ terms over all pairs of kernels.
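
A minimal sketch of solving this program with a generic constrained optimiser (SciPy's SLSQP): `Q_hat` and `eta_hat` stand for the empirical covariance matrix and the vector of per-kernel linear-time MMD estimates, however they were obtained, and the final normalisation to unit sum is my choice so the solution can be read as convex-combination weights.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_kernel_weights(Q_hat, eta_hat):
    """Solve  min beta^T Q beta  s.t.  beta^T eta = 1, beta >= 0  (SLSQP sketch)."""
    d = len(eta_hat)
    res = minimize(
        fun=lambda b: b @ Q_hat @ b,
        x0=np.full(d, 1.0 / d),
        method="SLSQP",
        bounds=[(0.0, None)] * d,                                   # beta >= 0
        constraints=[{"type": "eq", "fun": lambda b: b @ eta_hat - 1.0}],
    )
    beta = np.clip(res.x, 0.0, None)
    return beta / beta.sum()            # normalise to convex-combination weights
```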

We then describe three experiments to illustrate

• That our criterion outperforms existing methods on synthetic and real-life datasets which correspond to hard two-sample problems
• Why multiple kernels can be an advantage in two-sample testing

Supplementary page

## Master's dissertation: Adaptive Large-Scale Kernel Two-Sample Testing

I recently finished working on my Master's project here at UCL. It was supervised by Arthur Gretton and Massimiliano Pontil and was about kernel selection for a linear-time version of the Maximum Mean Discrepancy (MMD), a kernel-based two-sample test. Download the report here.

Given sets of samples of size $$m$$ from two probability distributions, a two-sample test decides whether the distributions are the same or different with some confidence level. The linear time MMD statistic is defined as

$\text{MMD}_l^2=\frac{1}{m}\sum_{i=1}^{m}h((x_{2i-1},y_{2i-1}),(x_{2i},y_{2i}))$
where
$h((x_{i},y_{i}),(x_{j},y_{j}))=k(x_{i},x_{j})+k(y_{i},y_{j})-k(x_{i},y_{j})-k(x_{j},y_{i})$
and $$k$$ is an RKHS reproducing kernel (I used the Gaussian kernel only).
A two-sample test then works as follows: if this statistic (computed on the sample data) is larger than a threshold (also computed on the data), it is likely that the two sets of samples come from different distributions.

(Null and alternative distributions of MMD statistic, red area represents type I error, blue area represents type II error)
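
As a rough illustration of that decision rule, here is a sketch that computes the statistic with a Gaussian kernel and a threshold from a normal approximation to the null distribution of the linear time statistic; the helper names and the use of the normal approximation are my simplifications, not the exact procedure from the report.

```python
import numpy as np
from scipy.stats import norm

def linear_time_mmd_test(X, Y, sigma, alpha=0.05):
    """Two-sample test with the linear-time MMD, a Gaussian kernel of width sigma,
    and a threshold from a normal approximation to the null distribution."""
    k = lambda A, B: np.exp(-np.sum((A - B) ** 2, axis=1) / (2.0 * sigma ** 2))
    X1, X2, Y1, Y2 = X[0::2], X[1::2], Y[0::2], Y[1::2]
    n = min(len(X2), len(Y2))
    X1, X2, Y1, Y2 = X1[:n], X2[:n], Y1[:n], Y2[:n]
    h = k(X1, X2) + k(Y1, Y2) - k(X1, Y2) - k(X2, Y1)
    statistic = np.mean(h)
    # under H0, the statistic is approximately N(0, var(h) / n)
    threshold = norm.ppf(1.0 - alpha) * np.std(h) / np.sqrt(n)
    return statistic > threshold, statistic, threshold
```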

The linear time version is highly interesting for large-scale problems since one does not have to store the data in order to compute it. Instead, it is possible to compute both the statistic and the test threshold in an on-line way.
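
One way such an on-line computation could look (my illustration, not the implementation from the report): the running mean and variance of the $$h$$ terms are updated pair by pair in the style of Welford's algorithm, so memory use stays constant no matter how much data streams in.

```python
import numpy as np
from scipy.stats import norm

class OnlineLinearTimeMMD:
    """Streaming linear-time MMD: one update per pair of sample pairs, constant memory."""

    def __init__(self, kernel):
        self.kernel = kernel   # kernel(a, b) -> scalar
        self.n = 0
        self.mean = 0.0        # running mean of h terms (the MMD estimate)
        self.m2 = 0.0          # running sum of squared deviations (Welford)

    def update(self, x1, y1, x2, y2):
        h = (self.kernel(x1, x2) + self.kernel(y1, y2)
             - self.kernel(x1, y2) - self.kernel(x2, y1))
        self.n += 1
        delta = h - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (h - self.mean)

    def test(self, alpha=0.05):
        """Reject H0 if the running statistic exceeds the normal-approximation threshold."""
        std = np.sqrt(self.m2 / max(self.n - 1, 1))
        return self.mean > norm.ppf(1.0 - alpha) * std / np.sqrt(self.n)
```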

The work contains three main contributions:

1. An on-line version of the already existing linear time two-sample test. More importantly, experiments showed that in some situations the linear time test is a better choice than the current state-of-the-art quadratic time MMD test. This happens, for example, when problems are so hard that the amount of data necessary to solve them does not fit into computer memory. On the blobs dataset described in the work,

a quadratic time test on the maximum processable amount of data reached a poor type II error, while the linear time version with much more data reached almost zero type II error. Another case is when effectively infinite data (but finite computation time) is available: the (adaptive) linear time test reaches a lower type II error than its quadratic counterpart.

2. A description of a criterion that can be used for kernel selection for the linear time MMD. The null and alternative distributions of the statistic have appealing properties that can be exploited in order to select the optimal kernel, in the sense that a test's type II error is minimised. The criterion is the ratio of the MMD statistic to its standard deviation; maximising it pulls the null and alternative distributions apart while minimising their variances.

In experiments, this criterion performed better than, or at least as well as, existing methods for kernel selection. This is especially true when the length scale at which the probability distributions differ is different from the overall length scale, as for example in the blobs dataset shown above.

(Opt and MaxRat methods are based on the criterion and perform better than existing methods, X-Val-Type II is another newly described method, blobs dataset)

3. An MKL-style generalisation of two-sample testing based on finite linear combinations of baseline kernels of the form
$\mathcal{K}:=\{k : k=\sum_{u=1}^d\beta_uk_u,\sum_{u=1}^d\beta_u\leq D,\beta_u\geq0, \forall u\in\{1,...,d\}\}$
Optimal weights of these combinations can be computed by solving the convex program
$\min \{ \beta^T\hat{Q}\beta : \beta^T \hat{\eta}=1, \beta\succeq0\}$
where $$\hat{Q}$$ is a linear time estimate of the covariance of the MMD estimates and $$\hat{\eta}$$ is a linear time estimate of the MMD.

Whenever combined kernels may capture more information relevant to distinguishing the probability distributions than a single kernel, this method leads to better results.

(Opt and $$L2$$ use combined kernels)
It also has an interesting feature selection interpretation, in the sense that the two-sample test provides information on which kernels (and therefore which domains) are useful for locating the distinguishing characteristics of the distributions.

All of the above plots and results can be found in the report (link). Many results are joint work and went into the article "Optimal kernel choice for large-scale two-sample tests", which was accepted at NIPS 2012 while my report was being written. Many thanks to the other authors, in particular to Arthur Gretton, Dino Sejdinovic, Bharath Sriperumbudur, and Massimiliano Pontil, for all the fruitful discussions and guidance.

## Bachelor's dissertation: Adaptive Kernel Methods for Sequence Classification in Bioinformatics

I wrote my Bachelor's thesis in 2009/2010 in Duisburg, supervised by Prof. Hoffmann (Bioinformatics, Essen) and Prof. Pauli (Intelligent Systems, Duisburg).

The work is about the classification of amino acid sequences using SVMs and string kernels. In particular, I compared the Distant Segments kernel by Sébastien Boisvert to a standard Spectrum kernel on an HIV dataset. In addition, I described a bisection-based method for searching for the soft-margin parameter of the underlying SVM, which outperformed a standard grid search.