DOI: 10.1145/3173574.3173644 · CHI Conference Proceedings

What's at Stake: Characterizing Risk Perceptions of Emerging Technologies

Published: 19 April 2018

Abstract

One contributing factor to how people choose to use technology is their perceptions of associated risk. In order to explore this influence, we adapted a survey instrument from risk perception literature to assess mental models of users and technologists around risks of emerging, data-driven technologies (e.g., identity theft, personalized filter bubbles). We surveyed 175 individuals for comparative and individual assessments of risk, including characterizations using psychological factors. We report our observations around group differences (e.g., expert versus non-expert) in how people assess risk, and what factors may structure their conceptions of technological harm. Our findings suggest that technologists see these risks as posing a bigger threat to society than do non-experts. Moreover, across groups, participants did not see technological risks as voluntarily assumed. Differences in how people characterize risk have implications for the future of design, decision-making, and public communications, which we discuss through a lens we call risk-sensitive design.




Published In

CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
April 2018, 8489 pages
ISBN: 9781450356206
DOI: 10.1145/3173574

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. design
    2. ethics
    3. risk

    Qualifiers

    • Research-article

Conference

CHI '18

    Acceptance Rates

CHI '18 paper acceptance rate: 666 of 2,590 submissions, 26%
Overall acceptance rate: 6,199 of 26,314 submissions, 24%

Article Metrics

• Downloads (last 12 months): 93
• Downloads (last 6 weeks): 8
Reflects downloads up to 13 Feb 2025

Cited By
• (2025) Students as AI literate designers: a pedagogical framework for learning and teaching AI literacy in elementary education. Journal of Research on Technology in Education, 1-22. DOI: 10.1080/15391523.2025.2449942. Online publication date: 21-Jan-2025.
• (2024) Do Crowdsourced Fairness Preferences Correlate with Risk Perceptions? Proceedings of the 29th International Conference on Intelligent User Interfaces, 304-324. DOI: 10.1145/3640543.3645209. Online publication date: 18-Mar-2024.
• (2023) Understanding user interactions and perceptions of AI risk in Singapore. Big Data & Society 10, 2. DOI: 10.1177/20539517231213823. Online publication date: 15-Nov-2023.
• (2023) Divergences in Blame Attribution after a Security Breach based on Compliance Behavior: Implications for Post-breach Risk Communication. Proceedings of the 2023 European Symposium on Usable Security, 27-47. DOI: 10.1145/3617072.3617117. Online publication date: 16-Oct-2023.
• (2023) The methodology of studying fairness perceptions in Artificial Intelligence. International Journal of Human-Computer Studies 170. DOI: 10.1016/j.ijhcs.2022.102954. Online publication date: 8-Feb-2023.
• (2023) Influence of Privacy Knowledge on Privacy Attitudes in the Domain of Location-Based Services. Privacy and Identity Management, 118-132. DOI: 10.1007/978-3-031-31971-6_10. Online publication date: 1-Jun-2023.
• (2022) A Scoping Review of Ethics Across SIGCHI. Proceedings of the 2022 ACM Designing Interactive Systems Conference, 137-154. DOI: 10.1145/3532106.3533511. Online publication date: 13-Jun-2022.
• (2022) Limits of Individual Consent and Models of Distributed Consent in Online Social Networks. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2251-2262. DOI: 10.1145/3531146.3534640. Online publication date: 21-Jun-2022.
• (2020) Does My Smart Device Provider Care About My Privacy? Investigating Trust Factors and User Attitudes in IoT Systems. Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, 1-12. DOI: 10.1145/3419249.3420108. Online publication date: 25-Oct-2020.
• (2020) When Are Search Completion Suggestions Problematic? Proceedings of the ACM on Human-Computer Interaction 4, CSCW2, 1-25. DOI: 10.1145/3415242. Online publication date: 15-Oct-2020.
