Semantic Re-tuning with Contrastive Tension

Published: 12 Jan 2021, Last Modified: 05 May 2023 · ICLR 2021 Poster · Readers: Everyone
Keywords: Semantic Textual Similarity, Transformers, Language Modelling, Sentence Embeddings, Sentence Representations, Pre-training, Fine-tuning
Abstract: Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge. We first demonstrate that pre-training objectives impose a significant task bias onto the final layers of models, using a layer-wise survey of the Semantic Textual Similarity (STS) correlations of multiple common Transformer language models. We then propose a new self-supervised method called Contrastive Tension (CT) to counter such biases. CT frames the training objective as a noise-contrastive task between the final layer representations of two independent models, in turn making the final layer representations suitable for feature extraction. Results on multiple common unsupervised and supervised STS tasks indicate that CT outperforms the previous State Of The Art (SOTA), and when combining CT with supervised data, we improve upon previous SOTA results by large margins.
One-sentence Summary: A self-supervised method for learning STS-related sentence embeddings from pre-trained language models, setting a new SOTA on STS-related embedding tasks.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [![github](/images/github_icon.svg) FreddeFrallan/Contrastive-Tension](https://github.com/FreddeFrallan/Contrastive-Tension)
Data: [BookCorpus](https://paperswithcode.com/dataset/bookcorpus), [GLUE](https://paperswithcode.com/dataset/glue)
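
As a rough illustration of the CT objective described in the abstract, the sketch below pairs two independently updated copies of a pre-trained encoder and trains them with a noise-contrastive loss over their final-layer sentence representations. The base model name, the mean-pooling `embed` helper, the choice of K = 7 negative pairs per positive, and the binary cross-entropy on the dot-product similarity are illustrative assumptions rather than the authors' exact setup; the repository linked above contains the official implementation.

```python
# Minimal PyTorch-style sketch of a Contrastive Tension (CT) style objective.
# Assumptions (not taken from this page): two identically initialized but
# independently updated encoders, mean-pooled final-layer representations,
# K random negative pairs per positive, and binary cross-entropy on the
# dot product of the two models' sentence embeddings.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"   # assumed base model, for illustration only
K = 7                              # assumed number of negative pairs per positive

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model_a = AutoModel.from_pretrained(MODEL_NAME)   # two independent copies of the
model_b = AutoModel.from_pretrained(MODEL_NAME)   # same pre-trained checkpoint

def embed(model, sentences):
    """Mean-pool the final-layer token representations into sentence embeddings."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # (B, H)

def ct_loss(sentences_a, sentences_b, labels):
    """Noise-contrastive loss: label 1 for identical sentence pairs, 0 for random pairs."""
    za = embed(model_a, sentences_a)                     # final-layer features, model A
    zb = embed(model_b, sentences_b)                     # final-layer features, model B
    logits = (za * zb).sum(-1)                           # dot-product similarity
    return F.binary_cross_entropy_with_logits(logits, labels)

# Toy batch: one positive (same sentence fed to both models) and K negatives
# (the anchor sentence in model A paired with random sentences in model B).
anchor = ["A man is playing a guitar."]
random_sents = ["The stock market fell sharply.", "Two dogs run on the beach.",
                "She is cooking dinner.", "A plane takes off.", "He reads a book.",
                "Children play in the park.", "The car is red."]
loss = ct_loss(anchor + anchor * K, anchor + random_sents,
               torch.tensor([1.0] + [0.0] * K))
loss.backward()   # gradients flow into the final layers of both encoders
```

In this framing, identical sentences fed to the two encoders act as positive pairs and randomly combined sentences act as noise, nudging the final-layer features toward representations whose dot product reflects semantic similarity rather than the pre-training task bias.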