Kernel Machines and Estimators
There are two types of "kernel methods": kernel machines (such as support vector machines, where the kernel acts as a similarity function defining an implicit feature space) and kernel estimators (such as kernel density estimators, where the kernel acts as a local smoothing weight).
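
To make the distinction concrete, here is a minimal sketch (assuming scikit-learn and numpy; the toy data and parameter choices are illustrative only, not from any paper below). The same Gaussian kernel plays two different roles: as a similarity function inside an SVM, and as a smoothing weight inside a density estimator.

```python
import numpy as np
from sklearn.svm import SVC                      # kernel machine
from sklearn.neighbors import KernelDensity      # kernel estimator

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Kernel machine: the Gaussian kernel defines similarity in an implicit
# feature space; the SVM learns a separating hyperplane there.
svm = SVC(kernel="rbf", gamma=0.5).fit(X, y)
print("SVM prediction:", svm.predict([[0.0, 3.0]]))

# Kernel estimator: the same Gaussian kernel acts as a local smoothing
# weight; the density at a point is an average of kernels on the data.
kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(X)
print("log-density:", kde.score_samples([[0.0, 3.0]]))
```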

The Generative and Latent Mean Map Kernels
Nishant Mehta and Alexander Gray
Georgia Institute of Technology Technical Report, 2010


New kinds of kernels between distributions, which can yield improved classification performance for non-iid and other structured data problems. [pdf]

Abstract: We introduce two kernels that extend the mean map, which embeds distributions in Hilbert spaces. The generative mean map kernel (MMK) measures similarity between probabilistic models of structured data such as sequences. The latent mean map kernel extends the non-iid data formulation of the empirical mean map to handle latent variable models. We present classification results on synthetic and DNA data, comparing support vector machines (SVMs) using these two kernels to a Bayes classifier and SVMs using other generative kernels. The generative MMK outperformed all other methods, while the latent MMK was competitive for the synthetic data. We also demonstrate the generative MMK as a similarity measure between kernel density estimators for a manifold visualization of biodiversity data.

@techreport{mehta2010gmmk,
  title       = "{The Generative and Latent Mean Map Kernels}",
  author      = "Nishant Mehta and Alexander G. Gray",
  institution = "{Georgia Institute of Technology}",
  type        = "{College of Computing Technical Report}",
  year        = "2010"
}
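
As a point of reference for the abstract above, the empirical mean map embeds a sample as the average of kernel feature maps, and the inner product of two such embeddings gives a kernel between samples. The sketch below (assuming numpy; the generative MMK in the paper computes an analogous quantity between fitted probabilistic models rather than raw samples) shows this empirical version with a Gaussian base kernel.

```python
import numpy as np

def mean_map_kernel(X, Y, gamma=1.0):
    """Empirical mean map kernel: the inner product of the mean
    embeddings of two samples, <mu_X, mu_Y> = mean_{i,j} k(x_i, y_j),
    here with a Gaussian base kernel k(x, y) = exp(-gamma ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq).mean()

rng = np.random.default_rng(0)
P = rng.normal(0.0, 1.0, (100, 2))   # sample from one distribution
Q = rng.normal(0.5, 1.0, (100, 2))   # sample from a nearby distribution
print(mean_map_kernel(P, P), mean_map_kernel(P, Q))
```
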
See also


High-dimensional Kernel Estimation
We have developed a variant of kernel estimation that is efficient in high dimensions when the data has lower-dimensional structure. [see webpage here]
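
A generic illustration of the premise (a hedged sketch using PCA followed by KDE; this is not the method on the linked page, and the dimensions and bandwidth below are arbitrary): when the data lie near a low-dimensional subspace, estimating the density in recovered low-dimensional coordinates sidesteps the curse of dimensionality that a direct high-dimensional KDE would face.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

# Toy data: 3-dimensional structure embedded in 50 ambient dimensions.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 3))                    # latent low-dimensional factors
A = rng.normal(size=(3, 50))                     # embedding into high dimensions
X = Z @ A + 0.01 * rng.normal(size=(500, 50))    # observed high-dimensional data

# Estimate in the recovered low-dimensional coordinates, where a
# fixed-bandwidth KDE behaves far better than in 50 dimensions.
pca = PCA(n_components=3).fit(X)
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(pca.transform(X))
print(kde.score_samples(pca.transform(X[:3])))
```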


Multiple Kernel Density Estimation
We demonstrate a way to learn a combination of kernels for kernel estimation, as an alternative to bandwidth selection.
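
A hedged sketch of the idea (the fixed bandwidth grid and leave-one-out likelihood objective below are my illustration, not necessarily the formulation in the paper): instead of picking one bandwidth, form a convex combination of Gaussian kernels at several bandwidths and learn the mixture weights by maximizing the leave-one-out likelihood.

```python
import numpy as np

def loo_density_matrix(X, bandwidths):
    """D[i, b] = leave-one-out KDE density at x_i using bandwidth b."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    D = np.empty((n, len(bandwidths)))
    for b, h in enumerate(bandwidths):
        K = np.exp(-sq / (2 * h**2)) / (2 * np.pi * h**2) ** (d / 2)
        np.fill_diagonal(K, 0.0)          # leave x_i out of its own estimate
        D[:, b] = K.sum(axis=1) / (n - 1)
    return D

def learn_kernel_weights(D, n_iter=200):
    """EM-style updates for simplex weights maximizing the
    leave-one-out log-likelihood sum_i log sum_b w_b D[i, b]."""
    n, m = D.shape
    w = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        r = (w * D) / (w * D).sum(axis=1, keepdims=True)  # responsibilities
        w = r.mean(axis=0)
    return w

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 0.3, 60), rng.normal(4, 1.5, 60)])[:, None]
bandwidths = [0.1, 0.3, 1.0, 3.0]
w = learn_kernel_weights(loo_density_matrix(X, bandwidths))
print(dict(zip(bandwidths, np.round(w, 3))))
```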


In preparation


Kernels for Measurement Error
We are developing ways to incorporate measurement errors into kernel methods.
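
One standard route, shown as a hedged sketch below: if inputs carry isotropic Gaussian measurement noise, the expectation of a Gaussian kernel over the noise is again a Gaussian kernel with an inflated bandwidth and a dimension-dependent scale factor. This closed form is a known identity; whether it matches our in-preparation approach is an assumption.

```python
import numpy as np

def expected_rbf(x, y, h, s, t):
    """E[k(x + eps_x, y + eps_y)] for the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 h^2)) with independent isotropic
    noise eps_x ~ N(0, s^2 I), eps_y ~ N(0, t^2 I).  Closed form: a
    Gaussian kernel with inflated squared bandwidth h^2 + s^2 + t^2
    and a (h^2 / (h^2 + s^2 + t^2))^(d/2) scale factor."""
    d = x.shape[0]
    v2 = h**2 + s**2 + t**2
    scale = (h**2 / v2) ** (d / 2)
    return scale * np.exp(-np.sum((x - y) ** 2) / (2 * v2))

# Monte Carlo check of the closed form on hypothetical toy inputs.
rng = np.random.default_rng(0)
x, y, h, s, t = np.array([0.0, 1.0]), np.array([1.0, 0.0]), 1.0, 0.5, 0.5
mc = [np.exp(-np.sum((x + rng.normal(0, s, 2) - y - rng.normal(0, t, 2)) ** 2)
             / (2 * h**2)) for _ in range(100_000)]
print(expected_rbf(x, y, h, s, t), np.mean(mc))
```
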
Isometric Separation Maps
Nikolaos Vasiloglou, Alexander Gray, and David Anderson
Machine Learning for Signal Processing (MLSP) 2009


An approach to learning the kernel for support vector machines, which can guarantee linear separability in the kernel space. [pdf]

Abstract: Maximum Variance Unfolding (MVU) and its variants have been very successful in embedding data-manifolds in lower dimensional spaces, often revealing the true intrinsic dimension. In this paper we show how to also incorporate supervised class information into an MVU-like method without breaking its convexity. We call this method the Isometric Separation Map and we show that the resulting kernel matrix can be used as a binary/multiclass Support Vector Machine-like method in a semi-supervised (transductive) framework. We also show that the method always finds a kernel matrix that linearly separates the training data exactly without projecting them in infinite dimensional spaces. In traditional SVMs we choose a kernel and hope that the data become linearly separable in the kernel space. In this paper we show how the hyperplane can be chosen ad hoc and the kernel is trained so that data are always linearly separable. Comparisons with Large Margin SVMs show comparable performance.

@inproceedings{vasiloglou2009ism,
  author    = "Nikolaos Vasiloglou and Alexander G. Gray and David Anderson",
  title     = "{Learning Isometric Separation Maps}",
  booktitle = "{IEEE International Workshop on Machine Learning for Signal Processing (MLSP)}",
  year      = "2009"
}
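
To illustrate the kind of optimization involved, here is a minimal MVU-style semidefinite program with an added class-separation surrogate, written with cvxpy. The margin constraint on different-class pairs is my simplification, not the paper's exact linear-separability formulation, and the toy data and parameters are arbitrary.

```python
import numpy as np
import cvxpy as cp

def ism_sketch(X, y, n_neighbors=10, margin=1.0):
    """MVU-style kernel learning with a toy class-separation constraint:
    maximize trace(K) over centered PSD Gram matrices K that preserve
    local distances and keep different-class pairs at least `margin`
    apart in squared distance."""
    n = X.shape[0]
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # squared distances
    nbrs = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]      # k nearest neighbors
    pairs = {(i, int(j)) for i in range(n) for j in nbrs[i]}
    K = cp.Variable((n, n), PSD=True)
    cons = [cp.sum(K) == 0]                                 # center the embedding
    for i, j in pairs:
        # Local isometry, as in Maximum Variance Unfolding.
        cons.append(K[i, i] + K[j, j] - 2 * K[i, j] == D[i, j])
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] != y[j] and (i, j) not in pairs and (j, i) not in pairs:
                # Separation surrogate for different-class, non-neighbor pairs.
                cons.append(K[i, i] + K[j, j] - 2 * K[i, j] >= margin)
    cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()
    return K.value  # learned Gram matrix, usable by a transductive SVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.4, (10, 2)), rng.normal(2, 0.4, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
K = ism_sketch(X, y)
```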