2024
Local Topology Measures of Contextual Language Model Latent Spaces with Applications to Dialogue Term Extraction
Benjamin Matthias Ruppik | Michael Heck | Carel van Niekerk | Renato Vukovic | Hsien-chin Lin | Shutong Feng | Marcus Zibrowius | Milica Gašić
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
A common approach for sequence tagging tasks based on contextual word representations is to train a machine learning classifier directly on these embedding vectors. This approach has two shortcomings. First, such methods consider single input sequences in isolation and are unable to put an individual embedding vector in relation to vectors outside the current local context of use. Second, the high performance of these models relies on fine-tuning the embedding model in conjunction with the classifier, which may not always be feasible due to the size or inaccessibility of the underlying feature-generation model. It is thus desirable, given a collection of embedding vectors of a corpus, i.e. a datastore, to find features of each vector that describe its relation to other, similar vectors in the datastore. With this in mind, we introduce complexity measures of the local topology of the latent space of a contextual language model with respect to a given datastore. The effectiveness of our features is demonstrated through their application to dialogue term extraction. Our work continues a line of research that explores the manifold hypothesis for word embeddings, demonstrating that local structure in the space carved out by word embeddings can be exploited to infer semantic properties.
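The idea of describing an embedding vector by its relation to similar vectors in a datastore can be illustrated with a toy sketch. The snippet below computes simple k-nearest-neighbour distance statistics for a query embedding; these are a much cruder stand-in for the local topology measures the paper proposes, and the function names and 2-D toy vectors are invented for illustration only.

```python
import math

def local_features(query, datastore, k=3):
    """Distances from a query embedding to its k nearest neighbours in a
    datastore of embedding vectors.  These simple local-geometry statistics
    are a crude stand-in for the topological complexity measures described
    in the abstract."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    nearest = sorted(dist(query, v) for v in datastore)[:k]
    return {
        "knn_distances": nearest,               # raw neighbour distances
        "mean_knn_distance": sum(nearest) / k,  # proxy for local density
    }

# Toy 2-D "datastore"; real contextual embeddings would be high-dimensional.
store = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
feats = local_features((0.1, 0.1), store, k=3)
```

Because such features are computed against a fixed datastore, they can be fed to a classifier without fine-tuning the underlying embedding model, which is the setting the abstract motivates.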
Dialogue Ontology Relation Extraction via Constrained Chain-of-Thought Decoding
Renato Vukovic | David Arps | Carel van Niekerk | Benjamin Matthias Ruppik | Hsien-chin Lin | Michael Heck | Milica Gašić
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
State-of-the-art task-oriented dialogue systems typically rely on task-specific ontologies for fulfilling user queries. The majority of task-oriented dialogue data, such as customer service recordings, comes without ontology and annotation. Such ontologies are normally built manually, limiting the application of specialised systems. Dialogue ontology construction is an approach for automating that process and typically consists of two steps: term extraction and relation extraction. In this work, we focus on relation extraction in a transfer learning set-up. To improve generalisation, we propose an extension to the decoding mechanism of large language models. We adapt Chain-of-Thought (CoT) decoding, recently developed for reasoning problems, to generative relation extraction. Here, we generate multiple branches in the decoding space and select the relations based on a confidence threshold. By constraining the decoding to ontology terms and relations, we aim to decrease the risk of hallucination. We conduct extensive experimentation on two widely used datasets and find improvements in performance on the target ontology for both source fine-tuned and one-shot prompted large language models.
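A minimal sketch of the branch-selection step described above, assuming each decoding branch yields a candidate relation string together with per-token probability margins (top-1 minus top-2 probability, the confidence signal used in CoT decoding). The function name, threshold value, and toy relations are hypothetical, not the paper's implementation.

```python
def select_relations(candidates, ontology_relations, threshold=0.5):
    """Keep a decoding branch if (a) its mean per-token probability margin
    exceeds the confidence threshold and (b) the generated relation is
    licensed by the ontology (the constrained-decoding part).
    Hypothetical sketch, not the paper's implementation."""
    kept = []
    for relation, margins in candidates:
        confidence = sum(margins) / len(margins)
        if confidence >= threshold and relation in ontology_relations:
            kept.append((relation, confidence))
    return sorted(kept, key=lambda pair: -pair[1])

# Toy branches: (relation string, per-token top-1 minus top-2 margins).
ontology = {"hotel has_slot price", "hotel has_domain booking"}
candidates = [
    ("hotel has_slot price", [0.9, 0.8]),     # confident and licensed: kept
    ("hotel serves pizza", [0.95, 0.9]),      # confident but not in ontology: dropped
    ("hotel has_domain booking", [0.2, 0.1]), # licensed but low confidence: dropped
]
selected = select_relations(candidates, ontology)
```

The ontology-membership check is what filters out the confidently hallucinated second branch, which a confidence threshold alone would have kept.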
Infusing Emotions into Task-oriented Dialogue Systems: Understanding, Management, and Generation
Shutong Feng | Hsien-chin Lin | Christian Geishauser | Nurul Lubis | Carel van Niekerk | Michael Heck | Benjamin Matthias Ruppik | Renato Vukovic | Milica Gašić
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Emotions are indispensable in human communication, but are often overlooked in task-oriented dialogue (ToD) modelling, where task success is the primary focus. While existing works have explored user emotions or similar concepts in some ToD tasks, none has so far incorporated emotion modelling into a fully-fledged ToD system or conducted interaction with human or simulated users. In this work, we incorporate emotion into the complete ToD processing loop, involving understanding, management, and generation. To this end, we extend the EmoWOZ dataset (Feng et al., 2022) with system affective behaviour labels. Through interactive experimentation involving both simulated and human users, we demonstrate that our proposed framework significantly enhances the user’s emotional experience as well as the task success.
Ontology Construction for Task-oriented Dialogue
Renato Vukovic
Proceedings of the 20th Workshop of Young Researchers' Roundtable on Spoken Dialogue Systems
My research interests lie in dialogue ontology construction, which uses techniques from information extraction to extract relevant terms from task-oriented dialogue data and organise them by finding hierarchical relations between them.
2023
ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an Opportunity?
Michael Heck | Nurul Lubis | Benjamin Ruppik | Renato Vukovic | Shutong Feng | Christian Geishauser | Hsien-chin Lin | Carel van Niekerk | Milica Gašić
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated dialog state trackers and enable dynamic methods.
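To make the zero-shot setting concrete, here is a hypothetical sketch of the interface such an experiment needs: a prompt template asking for the dialogue state as `domain-slot=value` lines, and a parser that turns a free-form model response into a state dictionary. Neither the prompt wording nor the output format is taken from the paper; both are invented for illustration.

```python
import re

# Hypothetical prompt template; the paper's actual prompt is not reproduced here.
PROMPT = (
    "Extract the user's goal from the dialogue below as 'domain-slot=value' "
    "pairs, one per line.\n\n{dialogue}"
)

def parse_state(response):
    """Turn a model response consisting of 'domain-slot=value' lines into a
    flat dialogue-state dict, ignoring any surrounding chatter."""
    state = {}
    for line in response.splitlines():
        m = re.match(r"\s*([a-z]+-[a-z_ ]+?)\s*=\s*(.+?)\s*$", line)
        if m:
            state[m.group(1)] = m.group(2)
    return state

# A general purpose model tends to wrap its answer in conversational text,
# which is one reason robust post-processing is needed in practice.
state = parse_state("Sure! Here you go:\nhotel-price range=cheap\nhotel-area=north")
```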
From Chatter to Matter: Addressing Critical Steps of Emotion Recognition Learning in Task-oriented Dialogue
Shutong Feng | Nurul Lubis | Benjamin Ruppik | Christian Geishauser | Michael Heck | Hsien-chin Lin | Carel van Niekerk | Renato Vukovic | Milica Gašić
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Emotion recognition in conversations (ERC) is a crucial task for building human-like conversational agents. While substantial efforts have been devoted to ERC for chit-chat dialogues, the task-oriented counterpart is largely left unattended. Directly applying chit-chat ERC models to task-oriented dialogues (ToDs) results in suboptimal performance as these models overlook key features such as the correlation between emotions and task completion in ToDs. In this paper, we propose a framework that turns a chit-chat ERC model into a task-oriented one, addressing three critical aspects: data, features and objective. First, we devise two ways of augmenting rare emotions to improve ERC performance. Second, we use dialogue states as auxiliary features to incorporate key information from the goal of the user. Lastly, we leverage a multi-aspect emotion definition in ToDs to devise a multi-task learning objective and a novel emotion-distance weighted loss function. Our framework yields significant improvements for a range of chit-chat ERC models on EmoWOZ, a large-scale dataset for user emotions in ToDs. We further investigate the generalisability of the best resulting model to predict user satisfaction in different ToD datasets. A comparison with supervised baselines shows a strong zero-shot capability, highlighting the potential usage of our framework in wider scenarios.
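The emotion-distance weighted loss can be illustrated with a toy sketch (not the paper's exact formulation), assuming a pre-defined distance matrix over emotion labels: confusing two nearby emotions is penalised less than confusing two distant ones.

```python
import math

def emotion_distance_weighted_loss(probs, target, distance):
    """Cross-entropy on the gold emotion, up-weighted by the distance
    between the gold label and the model's argmax prediction, so that
    confusions between distant emotions cost more than confusions between
    neighbouring ones.  Illustrative only; not the paper's exact loss."""
    pred = max(range(len(probs)), key=lambda i: probs[i])
    weight = 1.0 + distance[target][pred]  # weight 1 when the prediction is correct
    return -weight * math.log(probs[target])

# Assumed toy distance matrix over three emotions: neutral, dissatisfied, excited.
DIST = [[0, 1, 2],
        [1, 0, 3],
        [2, 3, 0]]
correct = emotion_distance_weighted_loss([0.7, 0.2, 0.1], target=0, distance=DIST)
distant_confusion = emotion_distance_weighted_loss([0.1, 0.2, 0.7], target=0, distance=DIST)
```

Here the confident but distant confusion incurs triple the weight of a correct prediction, which is the qualitative behaviour the abstract describes.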
2022
Dialogue Term Extraction using Transfer Learning and Topological Data Analysis
Renato Vukovic | Michael Heck | Benjamin Ruppik | Carel van Niekerk | Marcus Zibrowius | Milica Gašić
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Goal-oriented dialogue systems were originally designed as a natural language interface to a fixed data-set of entities that users might inquire about, further described by domains, slots and values. As we move towards adaptable dialogue systems where knowledge about domains, slots and values may change, there is an increasing need to automatically extract these terms from raw dialogues or related non-dialogue data on a large scale. In this paper, we take an important step in this direction by exploring different features that can enable systems to discover realisations of domains, slots and values in dialogues in a purely data-driven fashion. The features that we examine stem from word embeddings, language modelling features, as well as topological features of the word embedding space. To examine the utility of each feature set, we train a seed model based on the widely used MultiWOZ data-set. Then, we apply this model to a different corpus, the Schema-guided dialogue data-set. Our method outperforms the previously proposed approach that relies solely on word embeddings. We also demonstrate that each of the features is responsible for discovering different kinds of content. We believe our results warrant further research towards ontology induction, and continued harnessing of topological data analysis for dialogue and natural language processing research.