Cost-Sensitive Active Visual Category Learning

Authors: Sudheendra Vijayanarasimhan, Kristen Grauman

Abstract

We present an active learning framework that predicts the tradeoff between the effort and information gain associated with a candidate image annotation, thereby ranking unlabeled and partially labeled images according to their expected “net worth” to an object recognition system. We develop a multi-label multiple-instance approach that accommodates realistic images containing multiple objects and allows the category-learner to strategically choose what annotations it receives from a mixture of strong and weak labels. Since the annotation cost can vary depending on an image’s complexity, we show how to improve the active selection by directly predicting the time required to segment an unlabeled image. Our approach accounts for the fact that the optimal use of manual effort may call for a combination of labels at multiple levels of granularity, as well as accurate prediction of manual effort. As a result, it is possible to learn more accurate category models with a lower total expenditure of annotation effort. Given a small initial pool of labeled data, the proposed method actively improves the category models with minimal manual intervention.
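The core selection criterion described above can be summarized as ranking candidate annotations by their expected "net worth": predicted information gain minus predicted manual effort. The sketch below illustrates that idea only; the function and variable names (`rank_candidates`, `expected_gains`, `predicted_costs`) and the toy numbers are assumptions for illustration, not the paper's implementation, in which the gain comes from a multi-label multiple-instance risk estimate and the cost from a learned predictor of segmentation time.

```python
import numpy as np

def rank_candidates(expected_gains, predicted_costs):
    """Rank candidate annotations by expected net worth.

    Hypothetical stand-in for the paper's selection step:
    net worth = expected information gain - predicted annotation cost,
    and the highest-scoring candidates are requested first.
    """
    net_worth = np.asarray(expected_gains) - np.asarray(predicted_costs)
    # Indices of candidates, best (largest net worth) first.
    return np.argsort(-net_worth)

if __name__ == "__main__":
    # Toy example: three candidate annotations at different granularities
    # (e.g., an image-level tag, a region label, a full segmentation),
    # with made-up gain and cost values.
    gains = [0.8, 0.5, 0.9]   # expected reduction in model risk
    costs = [0.6, 0.1, 1.2]   # predicted annotation time (normalized)
    print(rank_candidates(gains, costs))  # -> [1 0 2]
```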

Keywords: Visual category learning, Active learning, Multi-label, Multiple-instance learning, Cost prediction, Cost-sensitive learning, Object recognition

Paper URL: https://doi.org/10.1007/s11263-010-0372-4