

Learning from multiple annotators with varying expertise

Published: 01 June 2014

Abstract

Learning from multiple annotators or knowledge sources has become an important problem in machine learning and data mining. This is due in part to the ease with which data can now be shared and collected among entities pursuing a common goal, task, or data source, and to the need to aggregate and make inferences about the collected information. This paper focuses on the development of probabilistic approaches for statistical learning in this setting. It specifically considers the case where annotators may be unreliable, and where their expertise varies depending on the data they observe. That is, annotators may have better knowledge about different parts of the input space and therefore be inconsistently accurate across the task domain. The models developed address both the supervised and the semi-supervised settings and produce classification and annotator models that provide estimates of the true labels and of annotator expertise when no ground truth is available. In addition, we analyze the proposed models, tasks, and related practical problems under various scenarios; in particular, we address how to evaluate annotators and how to handle cases where some ground truth is available. We show experimentally that annotator expertise can indeed vary in real tasks and that the presented approaches provide clear advantages over previously introduced multi-annotator methods, which consider only input-independent annotator characteristics, and over alternative approaches that do not model multiple annotators.




    Published In

Machine Learning, Volume 95, Issue 3
June 2014
210 pages

    Publisher

    Kluwer Academic Publishers

    United States


    Author Tags

    1. Adversarial annotators
    2. Classification
    3. Crowdsourcing
    4. Graphical models
    5. Multiple labelers
    6. Opinion aggregation

    Qualifiers

    • Article

    Cited By

    • (2024) BadLabel: A Robust Perspective on Evaluating and Enhancing Label-Noise Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(6):4398-4409. DOI: 10.1109/TPAMI.2024.3355425. Online publication date: 18-Jan-2024
    • (2024) Learning From Noisy Correspondence With Tri-Partition for Cross-Modal Matching. IEEE Transactions on Multimedia, 26:3884-3896. DOI: 10.1109/TMM.2023.3318002. Online publication date: 1-Jan-2024
    • (2024) No regret sample selection with noisy labels. Machine Learning, 113(3):1163-1188. DOI: 10.1007/s10994-023-06478-8. Online publication date: 1-Mar-2024
    • (2023) Label correction of crowdsourced noisy annotations with an instance-dependent noise transition model. Proceedings of the 37th International Conference on Neural Information Processing Systems, 347-386. DOI: 10.5555/3666122.3666140. Online publication date: 10-Dec-2023
    • (2023) Mitigating memorization of noisy labels by clipping the model prediction. Proceedings of the 40th International Conference on Machine Learning, 36868-36886. DOI: 10.5555/3618408.3619942. Online publication date: 23-Jul-2023
    • (2023) ProMix. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 4442-4450. DOI: 10.24963/ijcai.2023/494. Online publication date: 19-Aug-2023
    • (2023) Co-Training-Teaching: A Robust Semi-Supervised Framework for Review-Aware Rating Regression. ACM Transactions on Knowledge Discovery from Data, 18(2):1-16. DOI: 10.1145/3625391. Online publication date: 26-Sep-2023
    • (2023) From Labels to Decisions: A Mapping-Aware Annotator Model. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 5404-5415. DOI: 10.1145/3580305.3599828. Online publication date: 6-Aug-2023
    • (2023) A Parametrical Model for Instance-Dependent Label Noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):14055-14068. DOI: 10.1109/TPAMI.2023.3301876. Online publication date: 1-Dec-2023
    • (2023) Toward Facial Expression Recognition in the Wild via Noise-Tolerant Network. IEEE Transactions on Circuits and Systems for Video Technology, 33(5):2033-2047. DOI: 10.1109/TCSVT.2022.3220669. Online publication date: 1-May-2023
    • Show More Cited By
