Abstract
Large-scale datasets play an important role in applying deep learning methods to practical tasks. Many crowdsourcing tools have been proposed for annotation tasks; however, most of them target relatively simple tasks. Non-obvious annotation tasks, such as medical image annotation, require professional knowledge, and non-expert annotators must be trained before they can perform them. In this paper, we propose Trafne, a framework for effectively training non-expert annotators by combining feedback from the system (auto validation) and from human experts (expert validation). We then present a prototype implementation designed for brain tumor image annotation. We conducted a user study to evaluate the effectiveness of our framework against a traditional training method. The results demonstrate that the proposed approach helps non-expert annotators complete non-obvious annotation tasks more accurately than the traditional method. In addition, we discuss the requirements for training non-experts on non-obvious annotation tasks and potential applications of the framework.
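The abstract does not specify how auto validation scores a trainee's annotation before expert validation takes over. For segmentation-style tasks such as brain tumor image annotation, one common choice is an overlap metric (e.g., the Dice coefficient) computed against an expert-labeled reference mask. The sketch below is a minimal illustration under that assumption; the function names (dice_score, auto_validate) and the pass threshold are hypothetical and not taken from the paper.

import numpy as np

def dice_score(pred: np.ndarray, reference: np.ndarray) -> float:
    # Overlap between a trainee's binary mask and an expert reference mask.
    pred = pred.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(pred, reference).sum()
    denom = pred.sum() + reference.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def auto_validate(pred: np.ndarray, reference: np.ndarray,
                  pass_threshold: float = 0.8) -> dict:
    # Hypothetical auto-validation step: score the trainee's annotation and
    # decide whether it should be routed to an expert for written feedback.
    score = dice_score(pred, reference)
    return {
        "dice": score,
        "passed": score >= pass_threshold,
        "needs_expert_feedback": score < pass_threshold,
    }

# Usage: a trainee annotates half of a 4-pixel tumor region.
expert = np.zeros((4, 4), dtype=np.uint8); expert[1:3, 1:3] = 1
trainee = np.zeros((4, 4), dtype=np.uint8); trainee[1:3, 1:2] = 1
print(auto_validate(trainee, expert))  # dice = 2*2/(2+4) ~ 0.67 -> expert feedback

In such a setup, automatic scoring gives trainees immediate feedback on each practice annotation, while low-scoring results are escalated to an expert, which mirrors the combination of auto validation and expert validation described above.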
Acknowledgements
This work was supported by JST CREST Grant Number JPMJCR17A1, Japan.