DOI: 10.1007/978-3-642-32541-0_23

Article

Crowd-sourced knowledge bases

Published: 05 September 2012

Abstract

Crowdsourcing is a low-cost way of obtaining human judgements on a large number of items, but the knowledge in those judgements is not reusable: each further item to be processed requires further human judgement. Ideally one could also capture the reasons people have for their judgements, so that the ability to make the same judgements could be incorporated into a crowd-sourced knowledge base. This paper reports on experiments in which 27 students each built a knowledge base to classify the same set of 1000 documents. We assessed the performance of the resulting knowledge bases by having the same students evaluate each other's knowledge bases on a set of test documents, and we explored simple techniques for combining the knowledge from the students. The results suggest that although people vary in how they classify documents, simple merging may produce reasonable consensus knowledge bases.
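The paper merges rule-based knowledge bases rather than raw labels, but the underlying idea of deriving a consensus from multiple judges can be sketched as a simple majority vote over per-annotator classifications. The following is an illustrative sketch only, not the paper's method; the function name, annotator ids, and labels are hypothetical:

```python
from collections import Counter

def merge_judgements(judgements):
    """Merge per-annotator document labels by majority vote.

    judgements: dict mapping annotator id -> dict of doc id -> class label.
    Returns a dict mapping doc id -> consensus label; ties are broken by
    the lexicographically smallest label so the result is deterministic.
    """
    votes = {}
    for labels in judgements.values():
        for doc, label in labels.items():
            votes.setdefault(doc, Counter())[label] += 1
    consensus = {}
    for doc, counter in votes.items():
        # Pick the label with the highest count; on a tie, the smallest label.
        consensus[doc] = min(counter.items(), key=lambda kv: (-kv[1], kv[0]))[0]
    return consensus

# Three hypothetical annotators classifying three documents:
example = {
    "s1": {"d1": "sports", "d2": "politics", "d3": "tech"},
    "s2": {"d1": "sports", "d2": "tech",     "d3": "tech"},
    "s3": {"d1": "arts",   "d2": "politics", "d3": "tech"},
}
print(merge_judgements(example))  # d1 -> sports, d2 -> politics, d3 -> tech
```

Majority voting discards the reasons behind each judgement, which is exactly the limitation the paper's knowledge-base approach aims to address.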

References

[1]
Snow, R., O'Connor, B., Jurafsky, D., Ng, A.Y.: Cheap and fast, but is it good? Evaluating non-expert annotations for natural language tasks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 254-263. Association for Computational Linguistics, Honolulu (2008).
[2]
Chen, K.-T.: Human Computation: Experience and Thoughts. In: CHI 2011 Workshop on Crowdsourcing and Human Computation Systems, Studies and Platforms (2011).
[3]
von Ahn, L., Maurer, B., McMillen, C., Abraham, D., Blum, M.: reCAPTCHA: Human-Based Character Recognition via Web Security Measures. Science 321, 1465-1468 (2008).
[4]
Brew, A., Greene, D., Cunningham, P.: Using Crowdsourcing and Active Learning to Track Sentiment in Online Media. In: Proceedings of ECAI 2010: 19th European Conference on Artificial Intelligence, pp. 145-150. IOS Press (2010).
[5]
Lin, H., Davis, J., Zhou, Y.: Ontological Services Using Crowdsourcing. In: 21st Australasian Conference on Information Systems (2010).
[6]
O'Leary, D.E.: Knowledge Acquisition from Multiple Experts: An Empirical Study. Management Science 44(8), 1049-1058 (1998).
[7]
Medsker, L., Tan, M., Turban, E.: Knowledge acquisition from multiple experts: Problems and issues. Expert Systems with Applications 9(1), 35-40 (1995).
[8]
Turban, E.: Managing knowledge acquisition from multiple experts. In: IEEE/ACM International Conference on Developing and Managing Expert System Programs, Washington, DC, USA, pp. 129-138 (1991).
[9]
La Salle, A.J., Medsker, L.R.: Computerized conferencing for knowledge acquisition from multiple experts. Expert Systems with Applications 3(4), 517-522 (1991).
[10]
Kim, Y.S., Park, S.S., Deards, E., Kang, B.H.: Adaptive Web Document Classification with MCRDR. In: International Conference on Information Technology: Coding and Computing (ITCC 2004), pp. 476-480 (2004).
[11]
Park, S.S., Kim, Y.S., Kang, B.H.: Personalized Web Document Classification using MCRDR. In: The Pacific Knowledge Acquisition Workshop, Auckland, New Zealand (2004).
[12]
Doan, A., Ramakrishnan, R., Halevy, A.Y.: Crowdsourcing systems on the World-Wide Web. Communications of the ACM 54(4), 86-96 (2011).
[13]
Zhang, L., Zhang, H.: Research of Crowdsourcing Model based on Case Study. In: 8th International Conference on Service Systems and Service Management (ICSSSM), pp. 1-5. IEEE, Tianjin (2011).
[14]
Das, R., Vukovic, M.: Emerging theories and models of human computation systems: a brief survey. In: Proceedings of the 2nd International Workshop on Ubiquitous Crowdsourcing, pp. 1-4. ACM, Beijing (2011).
[15]
Heymann, P., Garcia-Molina, H.: Turkalytics: analytics for human computation. In: Proceedings of the 20th International Conference on World Wide Web, pp. 477-486. ACM, Hyderabad (2011).
[16]
Little, G., Chilton, L.B., Goldman, M., Miller, R.C.: TurKit: human computation algorithms on mechanical turk. In: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, pp. 57-66. ACM, New York (2010).
[17]
Winchester, S.: The Surgeon of Crowthorne: A Tale of Murder, Madness and the Oxford English Dictionary. Penguin (1999).
[18]
Buhrmester, M., Kwang, T., Gosling, S.D.: Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science 6(1), 3-5 (2011).
[19]
Davis, J.G.: From Crowdsourcing to Crowdservicing. IEEE Internet Computing 15(3), 92-94 (2011).
[20]
Geiger, D., Seedorf, S., Schulze, T., Nickerson, R.C., Schader, M.: Managing the Crowd: Towards a Taxonomy of Crowdsourcing Processes. In: Americas Conference on Information Systems, AMCIS 2011 (2011).
[21]
Schenk, E., Guittard, C.: Towards a Characterization of Crowdsourcing Practices. Journal of Innovation Economics 7(1), 93-107 (2011).
[22]
Mittal, S., Dym, C.L.: Knowledge Acquisition from Multiple Experts. AI Magazine 6(2), 32-36 (1985).
[23]
Richardson, M., Domingos, P.: Building large knowledge bases by mass collaboration. In: Proceedings of the 2nd International Conference on Knowledge Capture, pp. 129-137. ACM, Sanibel Island (2003).
[24]
Puuronen, S., Terziyan, V.Y.: Knowledge Acquisition from Multiple Experts Based on Semantics of Concepts. In: Fensel, D., Studer, R. (eds.) EKAW 1999. LNCS (LNAI), vol. 1621, pp. 259-273. Springer, Heidelberg (1999).
[25]
Park, S.S., Kim, S.K., Kang, B.H.: Web Information Management System: Personalization and Generalization. In: The IADIS International Conference WWW/Internet 2003, Algarve, Portugal, pp. 523-530 (2003).
[26]
Bagno, E., Eylon, B.-S.: From problem solving to a knowledge structure: An example from the domain of electromagnetism. American Journal of Physics 65(8), 726-736 (1997).
[27]
Dumais, S., Chen, H.: Hierarchical classification of Web content. In: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 256-263. ACM, Athens (2000).
[28]
Wibowo, W., Williams, H.E.: Simple and accurate feature selection for hierarchical categorisation. In: Proceedings of the 2002 ACM Symposium on Document Engineering, pp. 111-118. ACM, McLean (2002).
[29]
Liu, T.-Y., Yang, Y., Wan, H., Zeng, H.-J., Chen, Z., Ma, W.-Y.: Support vector machines classification with a very large-scale taxonomy. SIGKDD Explor. Newsl. 7(1), 36-43 (2005).


Published In

PKAW'12: Proceedings of the 12th Pacific Rim Conference on Knowledge Management and Acquisition for Intelligent Systems
September 2012
372 pages
ISBN:9783642325403
  • Editors: Deborah Richards, Byeong Ho Kang

Sponsors

  • AGRO-KNOW: AGRO-KNOW Technologies, Greece
  • SARAWAK: Sarawak Convention Bureau, Malaysia
  • MIMOS BERHAD
  • agINFRA: A Data Infrastructure for Agriculture
  • The Japanese Society for Artificial Intelligence
  • SIG-MACC: SIG-MACC, Japan Society for Software Science and Technology, Japan
  • MOSTI: Ministry of Science, Technology and Innovation
  • ZBW: Leibniz Information Centre for Economics, Germany
  • FRANZ: FRANZ INC.

Publisher

Springer-Verlag

Berlin, Heidelberg

Author Tags

  1. crowd-sourcing
  2. document classification
  3. knowledge acquisition
  4. re-useable knowledge
