Abstract
Accurate tumor identification is crucial for diagnosing and treating various diseases. Nevertheless, the limited availability of expert pathologists delays reliable and timely tumor identification. Crowdsourcing can assist by leveraging the collective intelligence of crowdworkers through consensus-based opinion aggregation (sketched below). However, rapidly training crowdworkers to perform complex tasks remains an open problem, and current approaches often yield inaccurate results. To improve crowdworker performance, we redesign the training strategy to address the errors crowdworkers make most frequently. By identifying error patterns through a study, we optimize the design of the training strategy for an exemplary tumor identification crowdsourcing task.
We conduct a comparative analysis between a baseline version of the training strategy and a version optimized based on the identified error patterns. Our findings demonstrate that optimizing the training strategy significantly reduces annotation mistakes during the crowdsourced tumor identification process, which we attribute to increased retention of the training content. Moreover, it noticeably improves the annotation of correct tumor regions.
This research contributes to the field by testing the effectiveness of training strategy optimization in crowdsourcing tasks, specifically for tumor annotation. By addressing crowdworkers’ training needs and leveraging their collective intelligence, our approach enhances the reliability of tumor identification, providing an alternative to support healthcare decision-making.
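The consensus-based opinion aggregation mentioned in the abstract is commonly realized as majority voting over individual annotations. As a minimal, hypothetical sketch in Python (the `majority_vote` helper and the binary-mask representation are our illustrative assumptions, not the authors' actual pipeline), pixel-wise majority voting over crowdworker tumor masks could look like this:

```python
# Minimal sketch of consensus-based opinion aggregation: pixel-wise
# majority voting over binary tumor masks submitted by crowdworkers.
# Function name and mask representation are illustrative assumptions,
# not the paper's implementation.
import numpy as np

def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """Combine binary masks (1 = tumor, 0 = background) from several workers.

    A pixel is labeled tumor in the consensus mask only if more than
    half of the workers marked it as tumor.
    """
    stacked = np.stack(masks)        # shape: (n_workers, height, width)
    votes = stacked.sum(axis=0)      # per-pixel count of "tumor" votes
    return (votes > len(masks) / 2).astype(np.uint8)

# Example: three workers annotate a 2x2 patch; the consensus keeps
# only pixels marked by at least two of the three workers.
worker_masks = [
    np.array([[1, 0], [1, 0]], dtype=np.uint8),
    np.array([[1, 1], [0, 0]], dtype=np.uint8),
    np.array([[1, 0], [1, 1]], dtype=np.uint8),
]
print(majority_vote(worker_masks))  # [[1 0] [1 0]]
```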
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Libreros, J.A., Gamboa, E., Hirth, M. (2024). Mistakes Hold the Key: Reducing Errors in a Crowdsourced Tumor Annotation Task by Optimizing the Training Strategy. In: Ruiz, P.H., Agredo-Delgado, V., Mon, A. (eds) Human-Computer Interaction. HCI-COLLAB 2023. Communications in Computer and Information Science, vol 1877. Springer, Cham. https://doi.org/10.1007/978-3-031-57982-0_17
DOI: https://doi.org/10.1007/978-3-031-57982-0_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-57981-3
Online ISBN: 978-3-031-57982-0
eBook Packages: Biomedical and Life Sciences (R0)