On Actively Teaching the Crowd to Classify

Is it possible to teach workers while crowdsourcing classification tasks? Among the challenges: (a) workers have different (unknown) skills, competence levels, and learning rates, to which the teaching must adapt; (b) feedback on each worker's progress is limited; (c) we may not have informative features for our data (otherwise crowdsourcing might be unnecessary). We propose a natural Bayesian model of the workers, treating each as a learning entity with an initial skill, a competence level, and learning dynamics. We then show how a teaching system can exploit this model to teach workers interactively. Our model uses feedback to adapt the teaching process to each worker, based on priors over hypotheses elicited from the crowd. Experiments on both simulated workers and real image-annotation tasks on Amazon Mechanical Turk demonstrate the effectiveness of crowd-teaching systems.
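To make the setup concrete, here is a minimal sketch of the kind of teaching loop the abstract describes: a simulated worker whose accuracy rises from an initial skill toward a competence level at some learning rate, and a teacher that maintains a Beta posterior over the worker's current accuracy from limited binary feedback and keeps teaching until it is confident enough. All class names, parameter values, and the specific learning-curve and stopping rules below are illustrative assumptions, not the paper's actual model.

```python
import random


class SimulatedWorker:
    """Toy worker model (hypothetical, not the paper's): accuracy starts at
    `initial_skill` and moves toward `competence` at rate `learning_rate`
    each time a labeled teaching example is revealed."""

    def __init__(self, initial_skill=0.55, competence=0.95,
                 learning_rate=0.1, seed=0):
        self.skill = initial_skill
        self.competence = competence
        self.learning_rate = learning_rate
        self.rng = random.Random(seed)

    def answer(self, true_label):
        # Answer a binary classification task correctly with prob. `skill`.
        correct = self.rng.random() < self.skill
        return true_label if correct else 1 - true_label

    def learn(self):
        # After seeing the true label, skill moves toward competence.
        self.skill += self.learning_rate * (self.competence - self.skill)


class BayesianTeacher:
    """Keeps a Beta posterior over the worker's accuracy and teaches
    (shows labeled examples) until the posterior mean passes a threshold,
    adapting the amount of teaching to the individual worker."""

    def __init__(self, threshold=0.85):
        self.correct = 1  # Beta(1, 1) uniform prior over accuracy
        self.wrong = 1
        self.threshold = threshold

    def posterior_mean(self):
        return self.correct / (self.correct + self.wrong)

    def teach(self, worker, max_rounds=100):
        rounds = 0
        while self.posterior_mean() < self.threshold and rounds < max_rounds:
            true_label = rounds % 2          # alternate the two classes
            if worker.answer(true_label) == true_label:
                self.correct += 1            # observed correct answer
            else:
                self.wrong += 1              # observed mistake
            worker.learn()                   # reveal label; worker improves
            rounds += 1
        return rounds
```

A fast learner drives the posterior above the threshold in few rounds, so teaching stops early; a slow or weak worker receives more examples, which is the adaptive behavior the abstract refers to.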

Presented at:
NIPS Workshop on Data Driven Education

 Record created 2016-10-07, last modified 2019-03-17
