A framework for monitoring classifiers’ performance: when and why failure occurs?

Authors: David A. Cieslak, Nitesh V. Chawla

Abstract

Classifier error is the product of model bias and data variance. While it is important to understand the bias introduced by the choice of learning algorithm, it is similarly important to understand the variability in data over time, since even the One True Model might perform poorly when training and evaluation samples diverge. The ability to identify distributional divergence is therefore critical to pinpointing when fracture points in classifier performance will occur, particularly since contemporary methods such as tenfold cross-validation and hold-out are poor predictors in divergent circumstances. This article implements a comprehensive evaluation framework to proactively detect breakpoints in classifiers’ predictions and shifts in data distributions through a series of statistical tests. We outline and utilize three scenarios under which data changes: sample selection bias, covariate shift, and shifting class priors. We evaluate the framework with a variety of classifiers and datasets.
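To make the idea of detecting distributional divergence concrete, the sketch below compares each feature's training and evaluation distributions with a two-sample Kolmogorov-Smirnov test and flags features whose distributions differ. This is only an illustration of the general approach of statistical divergence testing, not the paper's actual framework; the function name, the 0.05 significance level, and the toy data are assumptions made for the example.

```python
# Minimal sketch: flag per-feature divergence between training and
# evaluation samples with a two-sample Kolmogorov-Smirnov test.
# Illustrative only; not the framework described in the paper.
import numpy as np
from scipy.stats import ks_2samp

def divergent_features(X_train, X_eval, alpha=0.05):
    """Return indices of features whose train/eval distributions differ."""
    flagged = []
    for j in range(X_train.shape[1]):
        stat, p_value = ks_2samp(X_train[:, j], X_eval[:, j])
        if p_value < alpha:  # reject "same distribution" at level alpha
            flagged.append(j)
    return flagged

# Toy usage: feature 1 of the evaluation sample is shifted (covariate shift).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
X_eval = rng.normal(size=(500, 3))
X_eval[:, 1] += 1.0
print(divergent_features(X_train, X_eval))  # typically [1]
```

A drop in a classifier's accuracy on such flagged evaluation data, relative to its cross-validation estimate, is the kind of fracture point the abstract refers to.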

Keywords: Machine Learning, Friedman Test, Predictive Distribution, Class Prior, Sample Selection Bias

DOI: https://doi.org/10.1007/s10115-008-0139-1