On the relevance of linear discriminative features
Abstract
Linear Discriminant Analysis (LDA) has been widely used to extract linear features for classification. In real applications, the usefulness of the extracted features is usually confirmed only indirectly, through the classification error rate of a classifier built on them. Little attention has been paid to whether, and how, the discriminative features themselves can be interpreted as indicators of usefulness. We refer to this property as relevance, i.e., the capability of discriminative features to characterize the contribution of the original variables to classification. We approach relevance by examining how it can be lost in the course of extracting optimal discriminative features. The discrepancy between the relevance and the optimality of discriminative features is shown to originate from the “angle” between the space spanned by the eigenvectors of the within-class scatter matrix and the primary space in which the original variables reside: for a given dataset, the larger this “angle”, the less relevance can be recovered from the optimal discriminative features. Relevance and optimality are then treated as two constraints, or a tradeoff, under which relevant yet discriminative features are extracted. Finally, a simulation experiment illustrates how relevance is lost as the “angle” varies. Experimental results on the USPS handwritten digit and PIE face databases show that a maximum margin criterion is a reasonable compromise between relevance and optimality, since it approximates the averaged class margin using Euclidean distances measured in the primary space.
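For orientation, the two criteria contrasted above can be stated in their standard forms; the notation (projection matrix W, within-class scatter matrix S_w, between-class scatter matrix S_b) is introduced here for illustration and is not taken from the abstract itself:

$$ J_{\mathrm{LDA}}(W) \;=\; \operatorname{tr}\!\big( (W^{\top} S_w W)^{-1}\, W^{\top} S_b W \big), \qquad J_{\mathrm{MMC}}(W) \;=\; \operatorname{tr}\!\big( W^{\top} (S_b - S_w)\, W \big). $$

The Fisher criterion J_LDA maximizes a ratio and therefore requires whitening by S_w, whereas the maximum margin criterion J_MMC maximizes a difference of scatters, which is the sense in which it keeps distances tied to the primary space of the original variables.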