3rd Insights@ACL 2022: Dublin, Ireland
- Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun R. Akula: Proceedings of the Third Workshop on Insights from Negative Results in NLP, Insights@ACL 2022, Dublin, Ireland, May 26, 2022. Association for Computational Linguistics 2022, ISBN 978-1-955917-40-7
- Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer: On Isotropy Calibration of Transformer Models. 1-9
- Alessandra Teresa Cignarella, Cristina Bosco, Paolo Rosso: Do Dependency Relations Help in the Task of Stance Detection? 10-17
- Sopan Khosla, Rashmi Gangadharaiah: Evaluating the Practical Utility of Confidence-score based Techniques for Unsupervised Open-world Classification. 18-23
- Chenyang Lyu, Jennifer Foster, Yvette Graham: Extending the Scope of Out-of-Domain: Examining QA models in multiple subdomains. 24-37
- Uri Shaham, Omer Levy: What Do You Get When You Cross Beam Search with Nucleus Sampling? 38-45
- Simeng Sun, Brian Dillon, Mohit Iyyer: How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge? 46-53
- Alberto Muñoz-Ortiz, Carlos Gómez-Rodríguez, David Vilares: Cross-lingual Inflection as a Data Augmentation Method for Parsing. 54-61
- Dawei Zhu, Michael A. Hedderich, Fangzhou Zhai, David Ifeoluwa Adelani, Dietrich Klakow: Is BERT Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification. 62-67
- Heather C. Lent, Emanuele Bugliarello, Anders Søgaard: Ancestor-to-Creole Transfer is Not a Walk in the Park. 68-74
- Xiaohan Yang, Eduardo Peynetti, Vasco Meerman, Chris Tanner: What GPT Knows About Who is Who. 75-81
- Goonmeet Bajaj, Vinh Nguyen, Thilini Wijesiriwardene, Hong Yung Yip, Vishesh Javangula, Amit P. Sheth, Srinivasan Parthasarathy, Olivier Bodenreider: Evaluating Biomedical Word Embeddings for Vocabulary Alignment at Scale in the UMLS Metathesaurus Using Siamese Networks. 82-87
- Itsuki Okimura, Machel Reid, Makoto Kawano, Yutaka Matsuo: On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing. 88-93
- Etsuko Ishii, Yan Xu, Samuel Cahyawijaya, Bryan Wilie: Can Question Rewriting Help Conversational Question Answering? 94-99
- Pedro Rodríguez, Phu Mon Htut, John Lalor, João Sedoc: Clustering Examples in Multi-Dataset Benchmarks with Item Response Theory. 100-112
- Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur: On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets. 113-118
- Maxim K. Surkov, Vladislav D. Mosin, Ivan P. Yamshchikov: Do Data-based Curricula Work? 119-128
- Bingyu Zhang, Nikolay Arefyev: The Document Vectors Using Cosine Similarity Revisited. 129-133
- Ionut Sorodoc, Laura Aina, Gemma Boleda: Challenges in including extra-linguistic context in pre-trained language models. 134-138
- Cecilia Ying, Stephen Thomas: Label Errors in BANKING77. 139-143
- Hanjie Chen, Guoqing Zheng, Ahmed Hassan Awadallah, Yangfeng Ji: Pathologies of Pre-trained Language Models in Few-shot Fine-tuning. 144-153
- Vinayshekhar Bannihatti Kumar, Vaibhav Kumar, Mukul Bhutani, Alexander Rudnicky: An Empirical study to understand the Compositional Prowess of Neural Dialog Models. 154-158
- Maria Alexeeva, Allegra A. Beal, Mihai Surdeanu: Combining Extraction and Generation for Constructing Belief-Consequence Causal Links. 159-164
- Margot Mieskes: Replicability under Near-Perfect Conditions - A Case-Study from Automatic Summarization. 165-171
- Dipesh Kumar, Avijit Thawani: BPE beyond Word Boundary: How NOT to use Multi Word Expressions in Neural Machine Translation. 172-179
- Philipp Koch, Matthias Aßenmacher, Christian Heumann: Pre-trained language models evaluating themselves - A comparative study. 180-187