
Jake Vasilakes


2024

ExU: AI Models for Examining Multilingual Disinformation Narratives and Understanding their Spread
Jake Vasilakes | Zhixue Zhao | Michal Gregor | Ivan Vykopal | Martin Hyben | Carolina Scarton
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)

Addressing online disinformation requires analysing narratives across languages to help fact-checkers and journalists sift through large amounts of data. The ExU project focuses on developing AI-based models for multilingual disinformation analysis, addressing the tasks of rumour stance classification and claim retrieval. We describe the ExU project proposal and summarise the results of a user-requirements survey on the design of tools to support fact-checking.

2022

Learning Disentangled Representations of Negation and Uncertainty
Jake Vasilakes | Chrysoula Zerva | Makoto Miwa | Sophia Ananiadou
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent of each other and of the content they modify. However, previous work on representation learning does not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.