20th SIGdial 2019: Stockholm, Sweden
- Satoshi Nakamura, Milica Gasic, Ingrid Zukerman, Gabriel Skantze, Mikio Nakano, Alexandros Papangelis, Stefan Ultes, Koichiro Yoshino: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, SIGdial 2019, Stockholm, Sweden, September 11-13, 2019. Association for Computational Linguistics 2019, ISBN 978-1-950737-61-1
- Chinnadhurai Sankar, Sujith Ravi: Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes. 1-10
- Stefan Ultes: Improving Interaction Quality Estimation with BiLSTMs and the Impact on Dialogue Policy Learning. 11-20
- Sahisnu Mazumder, Bing Liu, Shuai Wang, Nianzu Ma: Lifelong and Interactive Learning of Factual Knowledge in Dialogues. 21-31
- Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon: Few-Shot Dialogue Generation Without Annotated Data: A Transfer Learning Approach. 32-39
- Chenguang Zhu, Michael Zeng, Xuedong Huang: SIM: A Slot-Independent Neural Model for Dialogue State Tracking. 40-45
- Arshit Gupta, John Hewitt, Katrin Kirchhoff: Simple, Fast, Accurate Intent Classification and Slot Labeling for Goal-Oriented Dialogue Systems. 46-55
- Rylan Conway, Lambert Mathias: Time Masking: Leveraging Temporal Information in Spoken Dialogue Systems. 56-61
- Dirk Väth, Ngoc Thang Vu: To Combine or Not To Combine? A Rainbow Deep Reinforcement Learning Agent for Dialog Policies. 62-67
- Bhargavi Paranjape, Graham Neubig: Contextualized Representations for Low-resource Utterance Tagging. 68-74
- Anh-Duong Trinh, Robert J. Ross, John D. Kelleher: Capturing Dialogue State Variable Dependencies with an Energy-based Neural Dialogue State Tracker. 75-84
- Samuel Louvan, Bernardo Magnini: Leveraging Non-Conversational Tasks for Low Resource Slot Filling: Does it help? 85-91
- Alexandros Papangelis, Yi-Chia Wang, Piero Molino, Gökhan Tür: Collaborative Multi-Agent Dialogue Model Training Via Reinforcement Learning. 92-102
- Vikram Ramanarayanan, Matthew Mulholland, Yao Qian: Scoring Interactional Aspects of Human-Machine Dialog for Language Learning and Assessment using Text Features. 103-109
- Lina Maria Rojas-Barahona, Pascal Bellec, Benoit Besset, Martinho Dos-Santos, Johannes Heinecke, Munshi Asadullah, Olivier Le Blouch, Jean Y. Lancien, Géraldine Damnati, Emmanuel Mory, Frédéric Herledan: Spoken Conversational Search for General Knowledge. 110-113
- Jean Léon Bouraoui, Sonia Le Meitour, Romain Carbou, Lina Maria Rojas-Barahona, Vincent Lemaire: Graph2Bots, Unsupervised Assistance for Designing Chatbots. 114-117
- Boris Galitsky, Dmitry I. Ilvovsky, Elizaveta Goncharova: On a Chatbot Conducting Dialogue-in-Dialogue. 118-121
- Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, Dilek Hakkani-Tür: DeepCopy: Grounded Response Generation with Hierarchical Pointer Networks. 122-132
- Zhuoxuan Jiang, Xian-Ling Mao, Ziming Huang, Jie Ma, Shaochun Li: Towards End-to-End Learning for Efficient Dialogue Agent by Modeling Looking-ahead Ability. 133-142
- Xinnuo Xu, Yizhe Zhang, Lars Liden, Sungjin Lee: Unsupervised Dialogue Spectrum Generation for Log Dialogue Ranking. 143-154
- Bo-Hsiang Tseng, Pawel Budzianowski, Yen-Chen Wu, Milica Gasic: Tree-Structured Semantic Encoder with Knowledge Sharing for Domain Adaptation in Natural Language Generation. 155-164
- Shikib Mehri, Tejas Srinivasan, Maxine Eskénazi: Structured Fusion Networks for Dialog. 165-177
- Lei Shu, Piero Molino, Mahdi Namazifar, Hu Xu, Bing Liu, Huaixiu Zheng, Gökhan Tür: Flexibly-Structured Model for Task-Oriented Dialogues. 178-187
- Zhengzhe Yang, Jinho D. Choi: FriendsQA: Open-Domain Question Answering on TV Show Transcripts. 188-197
- Philip R. Cohen: Foundations of Collaborative Task-Oriented Dialogue: What's in a Slot? 198-209
- Diana Kleingarn, Nima Nabizadeh, Martin Heckmann, Dorothea Kolossa: Speaker-adapted neural-network-based fusion for multimodal reference resolution. 210-214
- Guan-Lin Chao, Abhinav Rastogi, Semih Yavuz, Dilek Hakkani-Tür, Jindong Chen, Ian R. Lane: Learning Question-Guided Video Representation for Multi-Turn Video Question Answering. 215-225
- Murathan Kurfali, Robert Östling: Zero-shot transfer for implicit discourse relation classification. 226-231
- Sabita Acharya, Barbara Di Eugenio, Andrew D. Boyd, Richard Cameron, Karen Dunn Lopez, Pamela Martyn-Nemeth, Debaleena Chattopadhyay, Pantea Habibi, Carolyn Dickens, Haleh Vatani, Amer Ardati: A Quantitative Analysis of Patients' Narratives of Heart Failure. 232-238
- Aakanksha Naik, Luke Breitfeller, Carolyn P. Rosé: TDDiscourse: A Dataset for Discourse-Level Temporal Ordering of Events. 239-249
- Francesca Alloatti, Luigi Di Caro, Gianpiero Sportelli: Real Life Application of a Question Answering System Using BERT Language Model. 250-253
- Andrea Vanzo, Emanuele Bastianelli, Oliver Lemon: Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU. 254-263
- Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, Dilek Hakkani-Tür: Dialog State Tracking: A Neural Reading Comprehension Approach. 264-273
- Oleg Akhtiamov, Ingo Siegert, Alexey Karpov, Wolfgang Minker: Cross-Corpus Data Augmentation for Acoustic Addressee Detection. 274-283
- Kornel Laskowski, Marcin Wlodarczak, Mattias Heldner: A Scalable Method for Quantifying the Role of Pitch in Conversational Turn-Taking. 284-292
- Michelle Cohn, Chun-Yen Chen, Zhou Yu: A Large-Scale User Study of an Alexa Prize Chatbot: Effect of TTS Dynamism on Perceived Quality of Social Dialog. 293-306
- Andisheh Partovi, Ingrid Zukerman: Influence of Time and Risk on Response Acceptability in a Simple Spoken Dialogue System. 307-319
- Jonathan Ginzburg, Zulipiye Yusupujiang, Chuyuan Li, Kexin Ren, Pawel Lupkowski: Characterizing the Response Space of Questions: a Corpus Study for English and Polish. 320-330
- Nazia Attari, Martin Heckmann, David Schlangen: From Explainability to Explanation: Using a Dialogue Setting to Elicit Annotations with Justifications. 331-335
- Athanasios Lykartsis, Margarita Kotti: Prediction of User Emotion and Dialogue Success Using Audio Spectrograms and Convolutional Neural Networks. 336-344
- Nils Axelsson, Gabriel Skantze: Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees. 345-352
- Filip Radlinski, Krisztian Balog, Bill Byrne, Karthik Krishnamoorthi: Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences. 353-360
- Amanda Cercas Curry, Verena Rieser: A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents. 361-366
- Yiheng Zhou, He He, Alan W. Black, Yulia Tsvetkov: A Dynamic Strategy Coach for Effective Negotiation. 367-378
- Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskénazi, Jeffrey P. Bigham: Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References. 379-391
- Simon Keizer, Ondrej Dusek, Xingkun Liu, Verena Rieser: User Evaluation of a Multi-dimensional Statistical Dialogue System. 392-398
- Tatiana Anikina, Ivana Kruijff-Korbayová: Dialogue Act Classification in Team Communication for Robot Assisted Disaster Response. 399-410
- Sarah McLeod, Ivana Kruijff-Korbayová, Bernd Kiefer: Multi-Task Learning of System Dialogue Act Selection for Supervised Pretraining of Goal-Oriented Dialogue Policies. 411-417
- Mitchell Abrams, Luke Gessler, Matthew Marge: B. Rex: a dialogue agent for book recommendations. 418-421
- Dmytro Kalpakchi, Johan Boye: SpaceRefNet: a neural approach to spatial reference resolution in a real city environment. 422-431
- Charlotte Roze, Chloé Braud, Philippe Muller: Which aspects of discourse relations are hard to learn? Primitive decomposition for discourse relation classification. 432-441
- Siddharth Varia, Christopher Hidey, Tuhin Chakrabarty: Discourse Relation Prediction: Revisiting Word Pairs with Convolutional Networks. 442-452