Evaluation metrics in classification: A quantification of distance-bias

Abstract

This article provides a characterization of bias for evaluation metrics in classification (e.g., Information Gain, Gini, and χ²). Our characterization gives a uniform representation for all traditional evaluation metrics, which leads naturally to a measure of the distance between the biases of two evaluation metrics. We give the measure practical value by observing how the distance between the biases of two evaluation metrics correlates with differences in predictive accuracy when we compare two versions of the same learning algorithm that differ only in the evaluation metric. Experiments on real-world domains show that the accuracy differences the distance-bias measure leads us to expect correlate with actual differences when the learning algorithm is simple (e.g., searching for the best single feature or the best single rule). The correlation weakens, however, with more complex algorithms (e.g., learning decision trees). Our results show that interaction among learning components is a key factor in understanding learning performance.
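
The abstract does not spell out the uniform representation or the distance-bias measure themselves, so neither is reproduced here. As a point of reference only, the minimal Python sketch below (all function names are ours, not the paper's) computes the three split-evaluation metrics the abstract names, Information Gain, Gini reduction, and the χ² statistic, on a single binary split; these standard definitions are the quantities whose biases the paper compares.

    import math

    def entropy(counts):
        # Shannon entropy (in bits) of a class-count vector.
        n = sum(counts)
        return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

    def gini(counts):
        # Gini impurity of a class-count vector.
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    def information_gain(left, right):
        # Entropy reduction from splitting a parent node into two
        # branches, each given as per-class counts.
        parent = [l + r for l, r in zip(left, right)]
        n, nl, nr = sum(parent), sum(left), sum(right)
        return entropy(parent) - (nl / n) * entropy(left) - (nr / n) * entropy(right)

    def gini_reduction(left, right):
        # Gini-impurity reduction for the same split.
        parent = [l + r for l, r in zip(left, right)]
        n, nl, nr = sum(parent), sum(left), sum(right)
        return gini(parent) - (nl / n) * gini(left) - (nr / n) * gini(right)

    def chi_squared(left, right):
        # Pearson chi-squared statistic of the branch-by-class
        # contingency table induced by the split.
        n = sum(left) + sum(right)
        stat = 0.0
        for branch in (left, right):
            for k, observed in enumerate(branch):
                expected = sum(branch) * (left[k] + right[k]) / n
                stat += (observed - expected) ** 2 / expected
        return stat

    # Example: 80 instances (40 per class) split into two branches of 40.
    left, right = [30, 10], [10, 30]
    print(information_gain(left, right))  # ~0.189 bits
    print(gini_reduction(left, right))    # 0.125
    print(chi_squared(left, right))       # 20.0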

Publication

Computational Intelligence
