Jörg Simon Wicker Senior Lecturer | School of Computer Science | The University of Auckland

Hitting the Target: Stopping Active Learning at the Cost-Based Optimum

Zac Pullar-Strecker, Katharina Dost, Eibe Frank, Jörg Wicker: Hitting the Target: Stopping Active Learning at the Cost-Based Optimum. In: Machine Learning, 2022.

Abstract

Active learning allows machine learning models to be trained using fewer labels while retaining similar performance to traditional supervised learning. An active learner selects the most informative data points, requests their labels, and retrains itself. While this approach is promising, it raises the question of how to determine when the model is ‘good enough’ without the additional labels required for traditional evaluation. Previously, different stopping criteria have been proposed aiming to identify the optimal stopping point. Yet, optimality can only be expressed as a domain-dependent trade-off between accuracy and the number of labels, and no criterion is superior in all applications. As a further complication, a comparison of criteria for a particular real-world application would require practitioners to collect additional labelled data they are aiming to avoid by using active learning in the first place. This work enables practitioners to employ active learning by providing actionable recommendations for which stopping criteria are best for a given real-world scenario. We contribute the first large-scale comparison of stopping criteria for pool-based active learning, using a cost measure to quantify the accuracy/label trade-off, public implementations of all stopping criteria we evaluate, and an open-source framework for evaluating stopping criteria. Our research enables practitioners to substantially reduce labeling costs by utilizing the stopping criterion which best suits their domain.
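To make the loop described above concrete, here is a minimal illustrative sketch of pool-based active learning with uncertainty sampling and a naive plateau-based stopping rule. This is not the paper's framework or any criterion it evaluates; the dataset, model, thresholds, and variable names are all hypothetical choices for the example.

```python
# Illustrative sketch only (not the paper's method): train, query the most
# uncertain pool point, retrain, and stop once validation accuracy plateaus.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_pool, y_pool = X[:500], y[:500]  # "unlabelled" pool (labels hidden)
X_val, y_val = X[500:], y[500:]    # small held-out set, for the sketch only

# Seed with five labelled examples per class.
seed0 = np.where(y_pool == 0)[0][:5]
seed1 = np.where(y_pool == 1)[0][:5]
labelled = list(np.concatenate([seed0, seed1]))

history = []
for _ in range(40):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_pool[labelled], y_pool[labelled])
    history.append(accuracy_score(y_val, model.predict(X_val)))

    # Naive stopping criterion: improvement < 0.5% over the last 5 rounds.
    if len(history) >= 6 and history[-1] - history[-6] < 0.005:
        break

    # Uncertainty sampling: query the pool point closest to the boundary.
    probs = model.predict_proba(X_pool)[:, 1]
    candidates = np.setdiff1d(np.arange(len(X_pool)), labelled)
    query = candidates[np.argmin(np.abs(probs[candidates] - 0.5))]
    labelled.append(query)  # "request" its label

print(f"stopped after {len(labelled)} labels, accuracy={history[-1]:.3f}")
```

In practice the held-out validation set used here is exactly the labelled data a practitioner wants to avoid collecting, which is why the paper compares stopping criteria that do not require it.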

BibTeX (Download)

@article{pullar-strecker2022hitting,
title = {Hitting the Target: Stopping Active Learning at the Cost-Based Optimum},
author = {Zac Pullar-Strecker and Katharina Dost and Eibe Frank and J\"{o}rg Wicker},
editor = {Yu-Feng Li and Prateek Jain},
url = {https://doi.org/10.1007/s10994-022-06253-1
https://arxiv.org/abs/2110.03802},
doi = {10.1007/s10994-022-06253-1},
year = {2022},
date = {2022-10-14},
urldate = {2022-12-01},
journal = {Machine Learning},
abstract = {Active learning allows machine learning models to be trained using fewer labels while retaining similar performance to traditional supervised learning. An active learner selects the most informative data points, requests their labels, and retrains itself. While this approach is promising, it raises the question of how to determine when the model is ‘good enough’ without the additional labels required for traditional evaluation. Previously, different stopping criteria have been proposed aiming to identify the optimal stopping point. Yet, optimality can only be expressed as a domain-dependent trade-off between accuracy and the number of labels, and no criterion is superior in all applications. As a further complication, a comparison of criteria for a particular real-world application would require practitioners to collect additional labelled data they are aiming to avoid by using active learning in the first place. This work enables practitioners to employ active learning by providing actionable recommendations for which stopping criteria are best for a given real-world scenario. We contribute the first large-scale comparison of stopping criteria for pool-based active learning, using a cost measure to quantify the accuracy/label trade-off, public implementations of all stopping criteria we evaluate, and an open-source framework for evaluating stopping criteria. Our research enables practitioners to substantially reduce labeling costs by utilizing the stopping criterion which best suits their domain.},
keywords = {active learning, cost analysis, data labelling, machine learning, stopping criteria},
pubstate = {published},
tppubtype = {article}
}