Trafne: A Training Framework for Non-expert Annotators with Auto Validation and Expert Feedback

  • Conference paper
Artificial Intelligence in HCI (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13336)


Abstract

Large-scale datasets play an important role in applying deep learning methods to practical tasks. Many crowdsourcing tools have been proposed for annotation tasks; however, these tools target relatively easy tasks. Non-obvious annotation tasks require professional knowledge (e.g., medical image annotation), so non-expert annotators must be trained before performing them. In this paper, we propose Trafne, a framework for effectively training non-expert annotators by combining feedback from the system (auto validation) and from human experts (expert validation). We then present a prototype implementation designed for brain tumor image annotation. We conducted a user study to evaluate the effectiveness of our framework against a traditional training method. The results demonstrate that the proposed approach helps non-expert annotators complete a non-obvious annotation task more accurately than the traditional method. In addition, we discuss the requirements of training non-experts on non-obvious annotation tasks and potential applications of the framework.
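The abstract does not spell out how "auto validation" works in the prototype. As a rough illustration only, the sketch below scores a trainee's segmentation mask against an expert reference mask with the Dice coefficient and returns pass/fail feedback; the function names, the set-of-pixels mask representation, and the 0.8 threshold are all assumptions, not details from the paper.

```python
def dice_score(pred, ref):
    """Dice similarity between two binary masks, each given as a set
    of (row, col) foreground-pixel coordinates."""
    if not pred and not ref:
        return 1.0  # two empty masks agree perfectly
    inter = len(pred & ref)
    return 2.0 * inter / (len(pred) + len(ref))

def auto_validate(pred, ref, threshold=0.8):
    """Compare a trainee's mask against an expert reference mask and
    return (passed, score, feedback message)."""
    score = dice_score(pred, ref)
    if score >= threshold:
        return True, score, "Annotation accepted."
    return False, score, (
        f"Dice score {score:.2f} is below {threshold}; "
        "refine the region boundary and try again."
    )

# Example: trainee and expert masks share 3 of their 4 pixels each.
trainee = {(0, 0), (0, 1), (1, 0), (1, 1)}
expert = {(0, 1), (1, 0), (1, 1), (2, 1)}
passed, score, msg = auto_validate(trainee, expert)
print(passed, round(score, 2))  # → False 0.75
```

In a real pipeline this automatic check could be the fast first stage, with borderline or failed annotations escalated to the expert-validation stage the abstract describes.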



Acknowledgements

This work was supported by JST CREST Grant Number JPMJCR17A1, Japan.

Author information


Corresponding author

Correspondence to Chia-Ming Chang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Miyata, S., Chang, C.M., Igarashi, T. (2022). Trafne: A Training Framework for Non-expert Annotators with Auto Validation and Expert Feedback. In: Degen, H., Ntoa, S. (eds.) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol. 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_31


  • DOI: https://doi.org/10.1007/978-3-031-05643-7_31


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05642-0

  • Online ISBN: 978-3-031-05643-7

