
DOI: 10.1145/3581641.3584079

An Investigation into an Always Listening Interface to Support Data Exploration

Published: 27 March 2023

Abstract

Natural language interfaces that facilitate data exploration tasks are rapidly gaining interest in the research community because they let users focus on the task of inquiry rather than the mechanics of chart construction. Yet current systems rely solely on processing the user’s explicit commands to generate the user’s intended chart, and these commands can be ambiguous due to natural language tendencies such as speech disfluency and underspecification. In this paper, we developed an always listening interface and studied how it can help contextualize imprecise queries. Our study revealed that an always listening interface is able to use an ongoing conversation to fill in missing properties for imprecise commands, disambiguate inaccurate commands without asking the user for clarification, and generate charts without being explicitly asked.
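The paper itself details how Articulate+ does this; as a rough illustration of the core idea the abstract describes — completing an underspecified chart command from the surrounding conversation — here is a minimal sketch. All class names, the attribute vocabulary, and the example dataset are hypothetical, not taken from the paper:

```python
from collections import deque

# Hypothetical attribute vocabulary for an example dataset.
KNOWN_ATTRIBUTES = {"state", "population", "year", "income"}

class ConversationContext:
    """Tracks data attributes mentioned in the ongoing conversation,
    most recent first, so underspecified chart commands can be completed."""

    def __init__(self, max_len=10):
        self.mentions = deque(maxlen=max_len)

    def observe(self, utterance):
        # Record any known attribute the speaker mentions, most recent first.
        for word in utterance.lower().split():
            token = word.strip(".,?!")
            if token in KNOWN_ATTRIBUTES:
                if token in self.mentions:
                    self.mentions.remove(token)
                self.mentions.appendleft(token)

    def complete_command(self, chart_type, fields):
        # Fill missing chart fields from the most recently mentioned attributes.
        filled = list(fields)
        for attr in self.mentions:
            if len(filled) >= 2:
                break
            if attr not in filled:
                filled.append(attr)
        return {"chart": chart_type, "fields": filled}

ctx = ConversationContext()
ctx.observe("How did the population change by year for each state?")
# An explicit but underspecified command: "show me a bar chart" names no fields.
spec = ctx.complete_command("bar", [])
# → {'chart': 'bar', 'fields': ['state', 'year']}
```

A real system would of course use proper entity extraction and chart-type inference rather than keyword matching, but the recency-ordered context buffer captures how an ongoing conversation can stand in for properties the user never stated.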

References

[1]
Andrew Abela. 2013. Andrew Abela’s chart chooser. https://datavizblog.com/2013/04/29/andrew-abelas-chart-chooser/
[2]
Shashank Ahire and Michael Rohs. 2020. Tired of Wake Words? Moving Towards Seamless Conversations with Intelligent Personal Assistants. In Proceedings of the 2nd Conference on Conversational User Interfaces (Bilbao, Spain) (CUI ’20). Association for Computing Machinery, New York, NY, USA, Article 20, 3 pages. https://doi.org/10.1145/3405755.3406141
[3]
Salvatore Andolina, Valeria Orso, Hendrik Schneider, Khalil Klouche, Tuukka Ruotsalo, Luciano Gamberini, and Giulio Jacucci. 2018. Investigating Proactive Search Support in Conversations. In Proceedings of the 2018 Designing Interactive Systems Conference (Hong Kong, China) (DIS ’18). Association for Computing Machinery, New York, NY, USA, 1295–1307. https://doi.org/10.1145/3196709.3196734
[4]
Jillian Aurisano, Abhinav Kumar, Alberto Gonzalez, Jason Leigh, Barbara Di Eugenio, and Andrew Johnson. 2016. Articulate2: Toward a conversational interface for visual data exploration. In IEEE Visualization. InfoVis, Baltimore, MD.
[5]
Axa-Group. 2022. AXA-Group/nlp.js: An NLP library for building bots, with entity extraction, sentiment analysis, automatic language identification, and more. https://github.com/axa-group/nlp.js/
[6]
Daizoru. 2019. Daizoru/node-thesaurus: A thesaurus of words; it contains English by default but can be used with your own data file. https://github.com/daizoru/node-thesaurus
[7]
Tong Gao, Mira Dontcheva, Eytan Adar, Zhicheng Liu, and Karrie G. Karahalios. 2015. DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (Charlotte, NC, USA) (UIST ’15). Association for Computing Machinery, New York, NY, USA, 489–500. https://doi.org/10.1145/2807442.2807478
[8]
Wiqas Ghai and Navdeep Singh. 2012. Literature Review on Automatic Speech Recognition. International Journal of Computer Applications 41 (2012), 42–50.
[9]
Awni Y. Hannun. 2021. The History of Speech Recognition to the Year 2030. ArXiv abs/2108.00084 (2021).
[10]
M. Hearst, M. Tory, and V. Setlur. 2019. Toward Interface Defaults for Vague Modifiers in Natural Language Interfaces for Visual Analysis. In 2019 IEEE Visualization Conference (VIS). IEEE Computer Society, Los Alamitos, CA, USA, 21–25. https://doi.org/10.1109/VISUAL.2019.8933569
[11]
Enamul Hoque, Vidya Setlur, Melanie Tory, and Isaac Dykeman. 2018. Applying Pragmatics Principles for Interaction with Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 309–318. https://doi.org/10.1109/TVCG.2017.2744684
[12]
Umar Iqbal, Pouneh Nikkhah Bahrami, Rahmadi Trimananda, Hao Cui, Alexander Gamero-Garrido, Daniel Dubois, David R. Choffnes, Athina Markopoulou, Franziska Roesner, and Zubair Shafiq. 2022. Your Echos are Heard: Tracking, Profiling, and Ad Targeting in the Amazon Smart Speaker Ecosystem. ArXiv abs/2204.10920 (2022).
[13]
Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Angeliki Metallinou, Anushree Venkatesh, Raefer Gabriel, and Arindam Mandal. 2018. Contextual topic modeling for dialog systems. In 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 892–899.
[14]
Jelena Krivokapić, Will Styler, and Benjamin Parrell. 2020. Pause postures: The relationship between articulation and cognitive processes during pauses. Journal of Phonetics 79 (Mar 2020), 100953. https://doi.org/10.1016/j.wocn.2019.100953
[15]
Abhinav Kumar, Jillian Aurisano, Barbara Di Eugenio, Andrew Johnson, Abeer Alsaiari, Nigel Flowers, Alberto Gonzalez, and Jason Leigh. 2017. Multimodal Coreference Resolution for Exploratory Data Visualization Dialogue: Context-Based Annotation and Gesture Identification. In SEMDIAL 2017 (SaarDial) Workshop on the Semantics and Pragmatics of Dialogue. ISCA, 41–51. https://doi.org/10.21437/SemDial.2017-5
[16]
Abhinav Kumar, Jillian Aurisano, Barbara Di Eugenio, and Andrew E. Johnson. 2020. Intelligent Assistant for Exploring Data Visualizations. In FLAIRS Conference. 538–543. https://aaai.org/ocs/index.php/FLAIRS/FLAIRS20/paper/view/18496
[17]
Linda Liu, Yile Gu, Aditya Gourav, Ankur Gandhe, Shashank Kalmane, Denis Filimonov, Ariya Rastrow, and Ivan Bulyko. 2021. Domain-Aware Neural Language Models for Speech Recognition. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2021), 7373–7377.
[18]
Nathan Malkin, Serge Egelman, and David Wagner. 2019. Privacy Controls for Always-Listening Devices. In Proceedings of the New Security Paradigms Workshop (San Carlos, Costa Rica) (NSPW ’19). Association for Computing Machinery, New York, NY, USA, 78–91. https://doi.org/10.1145/3368860.3368867
[19]
Donald McMillan, Antoine Loriette, and Barry Brown. 2015. Repurposing Conversation: Experiments with the Continuous Speech Stream. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 3953–3962. https://doi.org/10.1145/2702123.2702532
[20]
Mariella Moon. 2021. Tesla is working on an AI-powered humanoid robot. https://www.engadget.com/tesla-bot-humanoid-robot-033635103.html
[21]
Arpit Narechania, Arjun Srinivasan, and John T. Stasko. 2021. NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries. IEEE Transactions on Visualization and Computer Graphics 27 (2021), 369–379.
[22]
Michael Power, Greg Fell, and Michael Wright. 2013. Principles for high-quality, high-value testing. BMJ Evidence-Based Medicine 18, 1 (2013), 5–10. https://doi.org/10.1136/eb-2012-100645 arXiv:https://ebm.bmj.com/content/18/1/5.full.pdf
[23]
Arvind Satyanarayan, Dominik Moritz, Kanit Wongsuphasawat, and Jeffrey Heer. 2017. Vega-Lite: A Grammar of Interactive Graphics. IEEE Transactions on Visualization and Computer Graphics 23, 1 (2017), 341–350. https://doi.org/10.1109/TVCG.2016.2599030
[24]
Vidya Setlur, Sarah E. Battersby, Melanie Tory, Rich Gossweiler, and Angel X. Chang. 2016. Eviza: A Natural Language Interface for Visual Analysis. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (Tokyo, Japan) (UIST ’16). Association for Computing Machinery, New York, NY, USA, 365–377. https://doi.org/10.1145/2984511.2984588
[25]
Vidya Setlur, Melanie Tory, and Alex Djalali. 2019. Inferencing Underspecified Natural Language Utterances in Visual Analysis. In Proceedings of the 24th International Conference on Intelligent User Interfaces (Marina del Ray, California) (IUI ’19). Association for Computing Machinery, New York, NY, USA, 40–51. https://doi.org/10.1145/3301275.3302270
[26]
William Seymour, Mark Coté, and Jose M. Such. 2022. When It’s Not Worth the Paper It’s Written On: A Provocation on the Certification of Skills in the Alexa and Google Assistant Ecosystems. Proceedings of the 4th Conference on Conversational User Interfaces (2022).
[27]
Leixian Shen, Enya Shen, Yuyu Luo, Xiaocong Yang, Xuming Hu, Xiongshuai Zhang, Zhiwei Tai, and Jianmin Wang. 2022. Towards Natural Language Interfaces for Data Visualization: A Survey. IEEE Transactions on Visualization and Computer Graphics PP (2022), 1–1. https://doi.org/10.1109/TVCG.2022.3148007
[28]
Yang Shi, Yang Wang, Ye Qi, John Chen, Xiaoyao Xu, and Kwan-Liu Ma. 2017. IdeaWall: Improving Creative Collaboration through Combinatorial Visual Stimuli. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (Portland, Oregon, USA) (CSCW ’17). Association for Computing Machinery, New York, NY, USA, 594–603. https://doi.org/10.1145/2998181.2998208
[29]
Ben Shneiderman. 2000. The Limits of Speech Recognition. Commun. ACM 43, 9 (sep 2000), 63–65. https://doi.org/10.1145/348941.348990
[30]
Arjun Srinivasan, Nikhila Nyapathy, Bongshin Lee, Steven M. Drucker, and John Stasko. 2021. Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 464, 10 pages. https://doi.org/10.1145/3411764.3445400
[31]
Yiwen Sun, Jason Leigh, Andrew Johnson, and Barbara Di Eugenio. 2013. Articulate: Creating meaningful visualizations from natural language. (01 2013), 218–235. https://doi.org/10.4018/978-1-4666-4309-3.ch011
[32]
Yiwen Sun, Jason Leigh, Andrew Johnson, and Sangyoon Lee. 2010. Articulate: A Semi-automated Model for Translating Natural Language Queries into Meaningful Visualizations. Vol. 6133. Springer Berlin Heidelberg, Berlin, Heidelberg, 184–195. https://doi.org/10.1007/978-3-642-13544-6_18
[33]
Roderick Tabalba, Nurit Kirshenbaum, Jason Leigh, Abari Bhatacharya, Andrew Johnson, Veronica Grosso, Barbara Di Eugenio, and Moira Zellner. 2022. Articulate+: An Always-Listening Natural Language Interface for Creating Data Visualizations. In Proceedings of the 4th Conference on Conversational User Interfaces (Glasgow, United Kingdom) (CUI ’22). Association for Computing Machinery, New York, NY, USA, Article 38, 6 pages. https://doi.org/10.1145/3543829.3544534
[34]
Madiha Tabassum, Tomasz Kosiński, Alisa Frik, Nathan Malkin, Primal Wijesekera, Serge Egelman, and Heather Richter Lipford. 2019. Investigating Users’ Preferences and Expectations for Always-Listening Voice Assistants. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3, 4, Article 153 (dec 2019), 23 pages. https://doi.org/10.1145/3369807
[35]
Nick Yee, Jeremy N Bailenson, and Kathryn Rickertsen. 2007. A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, San Jose California USA, 1–10. https://doi.org/10.1145/1240624.1240626
[36]
Bowen Yu and Cláudio T. Silva. 2020. FlowSense: A Natural Language Interface for Visual Data Exploration within a Dataflow System. IEEE Transactions on Visualization and Computer Graphics 26 (2020), 1–11.

Cited By

  • (2024) Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 31, 1, 941–951. https://doi.org/10.1109/TVCG.2024.3456335. Online publication date: 9-Sep-2024.
  • (2024) The future of PIM: pragmatics and potential. Human–Computer Interaction, 1–28. https://doi.org/10.1080/07370024.2024.2356155. Online publication date: 25-Jun-2024.
  • (2024) A Conversational Assistant for Democratization of Data Visualization: A Comparative Study of Two Approaches of Interaction. Statistical Analysis and Data Mining: The ASA Data Science Journal 17, 6. https://doi.org/10.1002/sam.11714. Online publication date: 24-Dec-2024.
  • (2023) Using Personal Situated Analytics (PSA) to Interpret Recorded Meetings. Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–3. https://doi.org/10.1145/3586182.3616697. Online publication date: 29-Oct-2023.
  • (2023) PSA: A Cross-Platform Framework for Situated Analytics in MR and VR. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 92–96. https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00027. Online publication date: 16-Oct-2023.


Published In

IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
March 2023
972 pages
ISBN:9798400701061
DOI:10.1145/3581641
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Arti
  2. Articulate+
  3. always listening
  4. charts
  5. collaborative digital assistants
  6. data exploration
  7. data visualization
  8. digital collaborator
  9. visualization

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • National Science Foundation

Conference

IUI '23

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%


Article Metrics

  • Downloads (Last 12 months)76
  • Downloads (Last 6 weeks)8
Reflects downloads up to 16 Feb 2025

