
Mistakes Hold the Key: Reducing Errors in a Crowdsourced Tumor Annotation Task by Optimizing the Training Strategy

  • Conference paper
  • In: Human-Computer Interaction (HCI-COLLAB 2023)

Abstract

Accurate tumor identification is crucial for diagnosing and treating various diseases. Nevertheless, the limited availability of expert pathologists delays reliable and timely tumor identification. Crowdsourcing can help by harnessing the collective intelligence of crowdworkers through consensus-based opinion aggregation. However, rapidly training crowdworkers to perform complex tasks remains an open problem and currently yields inaccurate results. To improve crowdworker performance, we redesign the training strategy to address the errors crowdworkers make most frequently. By identifying error patterns through a study, we optimize the design of the training strategy for an exemplary tumor identification crowdsourcing task.
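The abstract does not detail the aggregation scheme beyond consensus-based opinion aggregation. As a minimal illustrative sketch only (not the authors' implementation), per-pixel majority voting over binary tumor masks from several crowdworkers could look like the following; the names majority_vote and threshold are hypothetical and introduced here for illustration:

    import numpy as np

    def majority_vote(masks, threshold=0.5):
        """Aggregate binary tumor masks (0/1 arrays of equal shape) from
        several crowdworkers into one consensus mask. A pixel is kept as
        tumor if at least `threshold` of the workers marked it."""
        stacked = np.stack(masks, axis=0)    # shape: (n_workers, H, W)
        agreement = stacked.mean(axis=0)     # per-pixel fraction of "tumor" votes
        return (agreement >= threshold).astype(np.uint8)

    # Example: three workers annotate the same 4x4 image patch
    worker_masks = [np.random.randint(0, 2, size=(4, 4)) for _ in range(3)]
    consensus = majority_vote(worker_masks)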

We conduct a comparative analysis between a baseline version of the training strategy and a version optimized based on the identified error patterns. Our findings show that optimizing the training strategy significantly reduces annotation mistakes during crowdsourced tumor identification, which we attribute to increased retention. Moreover, it noticeably improves the accuracy with which correct tumor regions are annotated.
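The abstract does not name the metric behind these improvements. A common way to quantify annotation quality in segmentation tasks (an assumption here, not necessarily the authors' measure) is the Dice overlap between an aggregated crowd mask and an expert reference mask:

    import numpy as np

    def dice_score(predicted, reference):
        """Dice overlap between a crowd-derived tumor mask and an expert
        reference mask (both binary arrays of the same shape)."""
        predicted = predicted.astype(bool)
        reference = reference.astype(bool)
        denom = predicted.sum() + reference.sum()
        if denom == 0:          # both masks empty: treat as perfect agreement
            return 1.0
        return 2.0 * np.logical_and(predicted, reference).sum() / denom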

This research contributes to the field by testing the effectiveness of training strategy optimization in crowdsourcing tasks, specifically for tumor annotation. By addressing crowdworkers' training needs and leveraging their collective intelligence, our approach enhances the reliability of tumor identification and offers alternatives for healthcare decision-making.


Notes

  1. https://digitalslidearchive.github.io/digital_slide_archive/. Accessed June 2023.

  2. www.microworkers.com. Accessed June 2023.


Author information


Corresponding author

Correspondence to Jose Alejandro Libreros.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Libreros, J.A., Gamboa, E., Hirth, M. (2024). Mistakes Hold the Key: Reducing Errors in a Crowdsourced Tumor Annotation Task by Optimizing the Training Strategy. In: Ruiz, P.H., Agredo-Delgado, V., Mon, A. (eds) Human-Computer Interaction. HCI-COLLAB 2023. Communications in Computer and Information Science, vol 1877. Springer, Cham. https://doi.org/10.1007/978-3-031-57982-0_17

