Characterising volunteers' task execution patterns across projects on multi-project citizen science platforms

Published: 22 October 2019
DOI: 10.1145/3357155.3358441

Abstract

Citizen science projects engage people in activities that are part of a scientific research effort. On multi-project citizen science platforms, scientists can create projects consisting of tasks, and volunteers, in turn, participate by executing those tasks. Such platforms seek to connect volunteers and scientists' projects, adding value to both. However, little is known about volunteers' cross-project engagement patterns and the benefits of such patterns for scientists and volunteers. This work proposes a Goal, Question, and Metric (GQM) approach to analyse volunteers' cross-project task execution patterns and employs the Semiotic Inspection Method (SIM) to analyse the communicability of the platforms' cross-project features. In doing so, it investigates which platform features foster volunteers' cross-project engagement, to what extent multi-project platforms help attract volunteers to perform tasks in new projects, and to what extent multi-project participation increases engagement on the platforms. Results from analyses of real platforms show that volunteers tend to explore multiple projects but perform tasks regularly in only a few of them; a small number of projects attracts most of the volunteers' attention; and volunteers recruited from other projects on the platform tend to become more engaged than those recruited outside it. System inspection shows that the platforms still lack personalised and explainable recommendations of projects and tasks.
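The GQM approach described in the abstract ultimately comes down to computing engagement metrics over a platform's task-execution log. The Python sketch below is one illustrative way to compute two metrics of the kind involved: how many projects each volunteer explores versus contributes to regularly, and how task executions concentrate across projects. The log format (a `tasks.csv` with `volunteer` and `project` columns) and the threshold defining "regular" participation are assumptions made for illustration, not the paper's actual definitions.

```python
# Minimal sketch of GQM-style cross-project engagement metrics.
# Assumes a task-execution log with one row per executed task and
# columns "volunteer" and "project". The file name, column names, and
# the "regular" threshold are illustrative, not taken from the paper.
from collections import defaultdict
import csv

def load_log(path):
    """Read (volunteer, project) task-execution records from a CSV log."""
    with open(path, newline="") as f:
        return [(row["volunteer"], row["project"]) for row in csv.DictReader(f)]

def cross_project_metrics(records, regular_threshold=10):
    """Per volunteer: projects explored (>= 1 task) vs. projects with
    regular participation (>= regular_threshold tasks)."""
    counts = defaultdict(lambda: defaultdict(int))  # volunteer -> project -> tasks
    for volunteer, project in records:
        counts[volunteer][project] += 1
    metrics = {}
    for volunteer, per_project in counts.items():
        metrics[volunteer] = {
            "projects_explored": len(per_project),
            "projects_regular": sum(1 for n in per_project.values()
                                    if n >= regular_threshold),
        }
    return metrics

def project_attention(records):
    """Share of all task executions received by each project, to check
    whether attention concentrates on a few projects."""
    totals = defaultdict(int)
    for _, project in records:
        totals[project] += 1
    grand_total = sum(totals.values())
    return {p: n / grand_total for p, n in totals.items()}

if __name__ == "__main__":
    records = load_log("tasks.csv")  # hypothetical log file
    print(cross_project_metrics(records))
    print(project_attention(records))
```

A finding such as "volunteers explore many projects but work regularly in few" would then correspond to `projects_explored` being much larger than `projects_regular` for most volunteers, and concentrated attention to a skewed `project_attention` distribution.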


Cited By

  • (2024) Exploring Project Connections Across the Citizen Science Landscape: A Social Network Analysis of Shared Volunteers. Sage Open 14(4). DOI: 10.1177/21582440241298424. Online publication date: 19-Nov-2024.
  • (2022) Generating Recommendations with Post-Hoc Explanations for Citizen Science. Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 69-78. DOI: 10.1145/3503252.3531290. Online publication date: 4-Jul-2022.
  • (2022) Citizen Science as an Ecosystem of Engagement: Implications for Learning and Broadening Participation. BioScience 72(7), 651-663. DOI: 10.1093/biosci/biac035. Online publication date: 22-Jun-2022.
  • (2022) Modeling and Evaluating Personas with Software Explainability Requirements. Human-Computer Interaction, 136-149. DOI: 10.1007/978-3-030-92325-9_11. Online publication date: 1-Jan-2022.
  • (2021) Relevance of non-activity representation in traveling user behavior profiling for adaptive gamification. Proceedings of the XXI International Conference on Human Computer Interaction, 1-7. DOI: 10.1145/3471391.3471431. Online publication date: 22-Sep-2021.


    Published In

    IHC '19: Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems
    October 2019
    679 pages
    ISBN: 9781450369718
    DOI: 10.1145/3357155

    Sponsors

    • SBC: Brazilian Computer Society

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 October 2019


    Author Tags

    1. GQM
    2. citizen science
    3. engagement
    4. human computation
    5. projects

    Qualifiers

    • Research-article

    Conference

    IHC '19
    Sponsor:
    • SBC
    IHC '19: XVIII Brazilian Symposium on Human Factors in Computing Systems
    October 22 - 25, 2019
    Vitória, Espírito Santo, Brazil

    Acceptance Rates

    IHC '19 Paper Acceptance Rate: 56 of 165 submissions, 34%
    Overall Acceptance Rate: 331 of 973 submissions, 34%
