Classification and supervised learning problems in general aim to choose a function that best describes the relation between a set of observed attributes and their corresponding outputs. We focus on binary classification, where the output is a binary response variable. In this dissertation, we draw motivation from statistical learning theory, which estimates, in a probabilistic setting, how well a classification function generalizes to unseen data.
We study linear programming formulations for finding a hyperplane that separates two sets of points. Such formulations were initially given by Mangasarian (1965) for the separable case, and have more recently been extended by "soft margin" formulations that maximize the margin of separation subject to a penalty proportional to the sum of margin violations. LPBoost is a boosting algorithm that solves the resulting large-scale linear program in a high-dimensional feature space by column generation.
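For concreteness, one common form of such a soft-margin LP over a pool of base classifiers h_1, ..., h_n, as used in LPBoost (the notation here is illustrative and not necessarily that of the dissertation), is
\[
\begin{aligned}
\max_{a,\,\rho,\,\xi}\quad & \rho - D \sum_{i=1}^{m} \xi_i \\
\text{s.t.}\quad & y_i \sum_{j=1}^{n} a_j h_j(x^i) \ge \rho - \xi_i, \qquad i = 1,\dots,m, \\
& \sum_{j=1}^{n} a_j = 1, \quad a \ge 0, \quad \xi \ge 0,
\end{aligned}
\]
where D > 0 penalizes margin violations. Column generation starts with a small subset of the columns (base classifiers) and repeatedly adds the column whose reduced cost, computed from the dual variables of the restricted LP, is most favorable; generating that column is the pricing problem delegated to the base learner.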
While many authors have developed boosting algorithms that assume the existence of an effective base learning algorithm for generating a feature or base classifier (that is, for solving the pricing problem in a column generation approach), relatively little work has proposed such algorithms. In this dissertation, we propose a branch-and-bound algorithm for finding a Boolean monomial that best agrees with a given set of data. This problem is known as maximum monomial agreement and has mostly been studied in the computational complexity and learning theory communities, in the context of negative complexity results. We show that our algorithm is computationally efficient in practice on UCI datasets.
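A standard formalization (following the computational learning literature; the weights below would be supplied, for example, by a boosting outer loop) is: given points x^1, ..., x^m in {0,1}^n with labels y_i in {-1,+1} and nonnegative weights w_i, find a monomial M, i.e., a conjunction of literals, maximizing
\[
f(M) \;=\; \Bigl|\, \sum_{i:\, M(x^i)=1,\ y_i=+1} w_i \;-\; \sum_{i:\, M(x^i)=1,\ y_i=-1} w_i \,\Bigr|,
\]
the absolute difference between the positive and negative weight covered by M. A branch-and-bound search over monomials fixes literals into or out of M one at a time and prunes any subtree whose upper bound on f cannot exceed the best monomial found so far.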
Revisiting the problem of finding weighted voting classifiers, we consider the problem of balancing the sparsity of the vector of weights against accuracy on the given training data. Several authors have suggested minimizing the L1-norm of the normal vector of the separating hyperplane in order to find sparse solutions. In contrast, we formulate the discrete optimization problem of minimizing the sum of disagreements of a weighted voting classifier plus a penalty proportional to the number of nonzero components of the hyperplane's normal vector. This problem extends the NP-hard Minimum Disagreement Halfspace problem studied in computational learning theory. We formulate it as a Mixed Integer Linear Program (MILP) and show that the continuous relaxation of the MILP is equivalent to the well-known L1-norm-minimizing LP for finding weighted voting classifiers. We then propose novel cutting planes to tighten the continuous relaxation. Finally, we propose and test a novel boosting algorithm that solves the relaxation by dynamic generation of columns and cuts; the column generation subroutine in this procedure generalizes our algorithm for maximum monomial agreement. Our experiments demonstrate that the formulation and solution technique provide an effective approach for constructing sparse and accurate classifiers and for balancing the tradeoff between the two.
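A schematic version of such a MILP (our notation, with a big-M constant B; the dissertation's formulation may differ in its details) is
\[
\begin{aligned}
\min_{a,\,z,\,v}\quad & \sum_{i=1}^{m} z_i + C \sum_{j=1}^{n} v_j \\
\text{s.t.}\quad & y_i \sum_{j=1}^{n} a_j h_j(x^i) \ge 1 - B z_i, \qquad i = 1,\dots,m, \\
& 0 \le a_j \le v_j, \qquad j = 1,\dots,n, \\
& z_i \in \{0,1\}, \quad v_j \in \{0,1\},
\end{aligned}
\]
where z_i indicates a disagreement on point i, v_j indicates a nonzero weight a_j, and C > 0 controls the sparsity-accuracy tradeoff. Relaxing z and v to [0,1] forces v_j = a_j at optimality (for nonnegative weights scaled so that a_j \le 1), so the penalty term collapses to the L1-norm of a; this is the sense in which the continuous relaxation coincides with the L1-norm LP.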