TY - JOUR
T1 - CrowdWT: Crowdsourcing via Joint Modeling of Workers and Tasks
AU - Tu, Jinzheng
AU - Yu, Guoxian
AU - Wang, Jun
AU - Domeniconi, Carlotta
AU - Guo, Maozu
AU - Zhang, Xiangliang
N1 - KAUST Repository Item: Exported on 2021-02-01
Acknowledgements: We appreciate the anonymous reviewers for their comments on improving our work, and the authors who kindly shared their source code and datasets with us for the experiments.
PY - 2021/1/6
Y1 - 2021/1/6
N2 - Crowdsourcing is a relatively inexpensive and efficient mechanism to collect annotations of data from the open Internet. Crowdsourcing workers are paid for the provided annotations, but the task requester usually has a limited budget. It is desirable to wisely assign the appropriate task to the right workers, so the overall annotation quality is maximized while the cost is reduced. In this article, we propose a novel task assignment strategy (CrowdWT) to capture the complex interactions between tasks and workers, and properly assign tasks to workers. CrowdWT first develops a Worker Bias Model (WBM) to jointly model the worker's bias, the ground truths of tasks, and the task features. WBM constructs a mapping between task features and worker annotations to dynamically assign the task to a group of workers, who are more likely to give correct annotations for the task. CrowdWT further introduces a Task Difficulty Model (TDM), which builds a kernel ridge regressor based on task features to quantify the intrinsic difficulty of tasks and thus to assign the difficult tasks to more reliable workers. Finally, CrowdWT combines WBM and TDM into a unified model to dynamically assign tasks to a group of workers and recall more reliable and even expert workers to annotate the difficult tasks. Our experimental results on two real-world datasets and two semi-synthetic datasets show that CrowdWT achieves high-quality answers within a limited budget, and has the best performance against competitive methods.
AB - Crowdsourcing is a relatively inexpensive and efficient mechanism to collect annotations of data from the open Internet. Crowdsourcing workers are paid for the provided annotations, but the task requester usually has a limited budget. It is desirable to wisely assign the appropriate task to the right workers, so the overall annotation quality is maximized while the cost is reduced. In this article, we propose a novel task assignment strategy (CrowdWT) to capture the complex interactions between tasks and workers, and properly assign tasks to workers. CrowdWT first develops a Worker Bias Model (WBM) to jointly model the worker's bias, the ground truths of tasks, and the task features. WBM constructs a mapping between task features and worker annotations to dynamically assign the task to a group of workers, who are more likely to give correct annotations for the task. CrowdWT further introduces a Task Difficulty Model (TDM), which builds a kernel ridge regressor based on task features to quantify the intrinsic difficulty of tasks and thus to assign the difficult tasks to more reliable workers. Finally, CrowdWT combines WBM and TDM into a unified model to dynamically assign tasks to a group of workers and recall more reliable and even expert workers to annotate the difficult tasks. Our experimental results on two real-world datasets and two semi-synthetic datasets show that CrowdWT achieves high-quality answers within a limited budget, and has the best performance against competitive methods.
UR - http://hdl.handle.net/10754/667125
UR - https://dl.acm.org/doi/10.1145/3421712
UR - http://www.scopus.com/inward/record.url?scp=85099344094&partnerID=8YFLogxK
U2 - 10.1145/3421712
DO - 10.1145/3421712
M3 - Article
SN - 1556-472X
VL - 15
SP - 1
EP - 24
JO - ACM Transactions on Knowledge Discovery from Data
JF - ACM Transactions on Knowledge Discovery from Data
IS - 1
ER -