Proceedings of the 5th Workshop on Representation Learning for NLP (RepL4NLP@ACL 2020), Online, July 9, 2020. Association for Computational Linguistics, 2020. ISBN 978-1-952148-15-6.

Editors: Spandana Gella, Johannes Welbl, Marek Rei, Fabio Petroni, Patrick S. H. Lewis, Emma Strubell, Min Joon Seo, Hannaneh Hajishirzi.

- Zihan Liu, Genta Indra Winata, Pascale Fung: Zero-Resource Cross-Domain Named Entity Recognition. 1-6
- Tyler A. Chang, Anna N. Rafferty: Encodings of Source Syntax: Similarities in NMT Representations Across Target Languages. 7-16
- Mingda Chen, Kevin Gimpel: Learning Probabilistic Sentence Representations from Paraphrases. 17-23
- Siddharth Bhat, Alok Debnath, Souvik Banerjee, Manish Shrivastava: Word Embeddings as Tuples of Feature Probabilities. 24-33
- Abhinav Gupta, Cinjon Resnick, Jakob N. Foerster, Andrew M. Dai, Kyunghyun Cho: Compositionality and Capacity in Emergent Languages. 34-38
- Pratik Jawanpuria, N. T. V. Satya Dev, Anoop Kunchukuttan, Bamdev Mishra: Learning Geometric Word Meta-Embeddings. 39-44
- Ivan Vulic, Anna Korhonen, Goran Glavas: Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces. 45-54
- Lis Pereira, Xiaodong Liu, Fei Cheng, Masayuki Asahara, Ichiro Kobayashi: Adversarial Training for Commonsense Inference. 55-60
- Riccardo Volpi, Luigi Malagò: Evaluating Natural Alpha Embeddings on Intrinsic and Extrinsic Tasks. 61-71
- Ashutosh Adhikari, Achyudh Ram, Raphael Tang, William L. Hamilton, Jimmy Lin: Exploring the Limits of Simple Learners in Knowledge Distillation for Document Classification with DocBERT. 72-77
- Cemil Cengiz, Deniz Yuret: Joint Training with Semantic Role Labeling for Better Generalization in Natural Language Inference. 78-88
- Juan Manuel Coria, Sahar Ghannay, Sophie Rosset, Hervé Bredin: A Metric Learning Approach to Misogyny Categorization. 89-94
- Lukas Lange, Heike Adel, Jannik Strötgen: On the Choice of Auxiliary Languages for Improved Sequence Tagging. 95-102
- Lukas Lange, Anastasiia Iurshina, Heike Adel, Jannik Strötgen: Adversarial Alignment of Multilingual Models for Extracting Temporal Expressions from Text. 103-109
- Alessio Miaschi, Felice Dell'Orletta: Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation. 110-119
- Shijie Wu, Mark Dredze: Are All Languages Created Equal in Multilingual BERT? 120-130
- Martin Tutek, Jan Snajder: Staying True to Your Word: (How) Can Attention Become Explanation? 131-142
- Mitchell A. Gordon, Kevin Duh, Nicholas Andrews: Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning. 143-155
- Vikas Raunak, Vaibhav Kumar, Vivek Gupta, Florian Metze: On Dimensional Linguistic Properties of the Word Embedding Space. 156-165
- Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, Kevin Gimpel: A Cross-Task Analysis of Text Span Representations. 166-176
- Yuhui Zhang, Chenghao Yang, Zhengping Zhou, Zhiyuan Liu: Enhancing Transformer with Sememe Knowledge. 177-184
- Hanoz Bhathena, Angelica Willis, Nathan Dass: Evaluating Compositionality of Sentence Representation Models. 185-193
- Aditya Bhargava, Gerald Penn: Supertagging with CCG primitives. 194-204
- Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, Sunita Sarawagi: What's in a Name? Are BERT Named Entity Representations just as Good for any other Name? 205-214