

To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis

Published: 01 January 2022

Abstract

Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Yet, gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major U.S. hospital where AI tools were used in three departments by diagnostic radiologists making breast cancer, lung cancer, and bone age determinations. The study illustrates the hindering effects of opacity that professionals experienced when using AI tools and explores how they grappled with this opacity in practice. In all three departments, opacity increased professionals' uncertainty because AI tool results often diverged from their initial judgment without providing underlying reasoning. In only one of the three departments did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals invested in AI interrogation practices: practices enacted by human experts to relate their own knowledge claims to AI knowledge claims. Professionals in the other two departments did not enact such practices and did not incorporate AI inputs into their final decisions, which we call unengaged augmentation. Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to the literature on AI adoption in knowledge work.




Published In

Organization Science, Volume 33, Issue 1
January-February 2022
496 pages
ISSN: 1526-5455
DOI: 10.1287/orsc.2022.33.issue-1

Publisher

INFORMS

Linthicum, MD, United States

Publication History

Published: 01 January 2022
Accepted: 02 October 2021
Received: 15 January 2020

Author Tags

  1. artificial intelligence
  2. opacity
  3. explainability
  4. transparency
  5. augmentation
  6. technology adoption and use
  7. uncertainty
  8. innovation
  9. professional judgment
  10. expertise
  11. decision making
  12. medical diagnosis

Qualifiers

  • Research-article


Cited By

  • (2024) The Short-Term Effects of Generative Artificial Intelligence on Employment. Organization Science 35(6):1977–1989. https://doi.org/10.1287/orsc.2023.18441. Online publication date: 1-Nov-2024.
  • (2024) The Crowdless Future? Generative AI and Creative Problem-Solving. Organization Science 35(5):1589–1607. https://doi.org/10.1287/orsc.2023.18430. Online publication date: 1-Sep-2024.
  • (2024) The Algorithm and the Org Chart: How Algorithms Can Conflict with Organizational Structures. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2):1–31. https://doi.org/10.1145/3686903. Online publication date: 8-Nov-2024.
  • (2024) “I Want It That Way”: Enabling Interactive Decision Support Using Large Language Models and Constraint Programming. ACM Transactions on Interactive Intelligent Systems 14(3):1–33. https://doi.org/10.1145/3685053. Online publication date: 1-Aug-2024.
  • (2024) Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642621. Online publication date: 11-May-2024.
  • (2024) Ignoring and collective passivity in relation to information systems. Information and Organization 34(3). https://doi.org/10.1016/j.infoandorg.2024.100523. Online publication date: 1-Sep-2024.
  • (2024) Decoding algorithm appreciation. Decision Support Systems 179(C). https://doi.org/10.1016/j.dss.2024.114168. Online publication date: 1-Apr-2024.
  • (2023) Unpacking Human and AI Complementarity: Insights from Recent Works. ACM SIGMIS Database: the DATABASE for Advances in Information Systems 54(3):6–10. https://doi.org/10.1145/3614178.3614180. Online publication date: 4-Aug-2023.
  • (2023) Designing Data Science Software for Social Care Organisations. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3544549.3577062. Online publication date: 19-Apr-2023.
  • (2022) Applying ethics to AI in the workplace: the design of a scorecard for Australian workplace health and safety. AI & Society 38(2):919–935. https://doi.org/10.1007/s00146-022-01460-9. Online publication date: 13-May-2022.
