Semi-supervised learning (SSL) involves training classifiers using a small amount of labeled data and a relatively large amount of unlabeled data. As annotating training data in a majority of applications is time-consuming, tedious, and error-prone, SSL has received a lot of interest in the machine learning community.
Graph-based SSL algorithms are an important class of SSL techniques that have attracted much attention of late. Here one assumes that the data (both labeled and unlabeled) is embedded within a low-dimensional manifold expressed by a graph. In other words, each data sample is represented by a vertex within a weighted graph, with the weights providing a measure of similarity between vertices.
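To make the graph construction concrete, here is a minimal sketch of one common choice (not prescribed by this page): a symmetric k-nearest-neighbor graph with Gaussian edge weights. The function name, `k`, and `sigma` are illustrative assumptions.

```python
import numpy as np

def similarity_graph(X, k=3, sigma=1.0):
    """Build a symmetric k-NN graph with Gaussian edge weights.

    X: (n, d) array of data points, labeled and unlabeled alike.
    Returns an (n, n) weight matrix W where W[i, j] > 0 iff j is among
    i's k nearest neighbors (or i is among j's).

    Illustrative sketch only; many other graph constructions are possible.
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        # indices of the k nearest neighbors, excluding i itself
        nbrs = np.argsort(d2[i])[1 : k + 1]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma**2))
    # symmetrize: keep an edge if it appears in either direction
    return np.maximum(W, W.T)
```

The Gaussian kernel makes nearby samples strongly connected and distant samples weakly connected, so the graph encodes the similarity structure the manifold assumption relies on.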
A majority of the previously proposed objectives in SSL are based on minimizing squared error. While squared error is ideal for regression under a Gaussian error model, it is not necessarily ideal for classification. In addition, squared loss penalizes absolute error, which is less natural when the quantities being compared are probability distributions.
Measure Propagation is a new approach to graph-based SSL that is based on minimizing a KL-divergence-based loss. Here we associate with each vertex a probability distribution that encodes class-membership probabilities, and minimize the KL-divergence between the distributions at neighboring vertices using a regularizer derived from the graph. This web page is meant as a means to disseminate the latest information and developments on measure propagation.
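The idea above can be sketched in a few lines. This is a simplified illustration, not the exact alternating-minimization update of measure propagation: each iteration replaces a vertex's distribution with the weight-normalized geometric mean of its neighbors' distributions, which is the minimizer of the KL term sum_j W[i,j] * KL(p_i || p_j) over the simplex, while labeled vertices are additionally pulled toward their one-hot label distribution. The function name and the `mu` anchoring weight are assumptions for this sketch.

```python
import numpy as np

def kl_propagation(W, labels, n_classes, mu=1.0, n_iters=50, eps=1e-12):
    """Simplified KL-divergence-based propagation on a weighted graph.

    W        : (n, n) symmetric non-negative weight matrix.
    labels   : length-n int array; class index for labeled vertices, -1 otherwise.
    Returns  : (n, n_classes) array of class-membership distributions.

    Illustrative sketch only; see the measure propagation paper for the
    exact objective and update equations.
    """
    n = W.shape[0]
    p = np.full((n, n_classes), 1.0 / n_classes)  # uniform initialization
    # one-hot "reference" distributions for the labeled vertices
    r = np.zeros((n, n_classes))
    labeled = labels >= 0
    r[labeled, labels[labeled]] = 1.0
    for _ in range(n_iters):
        log_p = np.log(p + eps)
        # weighted geometric mean over neighbors, computed in log space
        num = W @ log_p
        den = W.sum(axis=1, keepdims=True)
        # labeled vertices are also anchored to their label distribution
        num[labeled] += mu * np.log(r[labeled] + eps)
        den[labeled] += mu
        log_new = num / np.maximum(den, eps)
        # renormalize each row onto the probability simplex
        p = np.exp(log_new - log_new.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
    return p
```

On a small chain graph with a labeled vertex at each end, the labels diffuse inward, with each unlabeled vertex ending up closest in distribution to its nearer labeled neighbor.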