Abstract
Text classification is one of the fundamental tasks in natural language processing, requiring an agent to determine the most appropriate category for input sentences. Recently, deep neural networks, especially pretrained language models (PLMs), have achieved impressive performance in this area. These methods usually concentrate on input sentences and the generation of corresponding semantic embeddings. However, for another essential component, the labels, most existing works either treat them as meaningless one-hot vectors or learn label representations with vanilla embedding methods along with model training, underestimating the semantic information and guidance that these labels reveal. To alleviate this problem and better exploit label information, in this article, we employ self-supervised learning (SSL) in the model learning process and design a novel self-supervised relation of relation (R²) classification task that utilizes labels from a one-hot perspective. We then propose a novel relation of relation learning network (R²-Net) for text classification, in which text classification and R² classification are treated as joint optimization targets. Meanwhile, triplet loss is employed to enhance the analysis of differences and connections among labels. Moreover, considering that one-hot usage still falls short of exploiting label information, we incorporate external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning and extend R²-Net to a novel description-enhanced label embedding network (DELE) from a label embedding perspective. Going one step further, since these fine-grained descriptions may introduce unexpected noise, we develop a mutual interaction module based on contrastive learning (CL) that selects appropriate parts from input sentences and labels simultaneously for noise mitigation. Extensive experiments on different text classification tasks reveal that R²-Net can effectively improve classification performance and that DELE can make better use of label information to improve performance further. As a byproduct, we have released the code to facilitate further research.
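The abstract describes a joint training objective that combines standard text classification, the self-supervised R² ("relation of relation") task, and a triplet loss over sentence representations. The sketch below is a minimal PyTorch-style illustration of how such a combined objective could be assembled; the encoder, the R² head design, the pair construction, and the loss weights (`alpha`, `beta`) are all illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a joint objective: text classification +
# self-supervised "relation of relation" (R^2) classification +
# triplet loss. Module shapes, margins, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class R2NetSketch(nn.Module):
    def __init__(self, encoder, hidden_dim, num_classes):
        super().__init__()
        self.encoder = encoder                        # e.g., a PLM mapping text -> vector
        self.cls_head = nn.Linear(hidden_dim, num_classes)
        # R^2 head: given two sentence pairs, predict whether the relation
        # inside the first pair matches the relation inside the second.
        self.r2_head = nn.Linear(4 * hidden_dim, 2)

    def forward(self, anchor, positive, negative):
        # Anchor/positive share a label; anchor/negative do not.
        a = self.encoder(anchor)
        p = self.encoder(positive)
        n = self.encoder(negative)

        # 1) Standard classification logits for the anchor sentence.
        cls_logits = self.cls_head(a)

        # 2) R^2 logits: compare the (a, p) relation with the (a, n) relation.
        rel_same = torch.cat([a, p], dim=-1)
        rel_diff = torch.cat([a, n], dim=-1)
        r2_logits = self.r2_head(torch.cat([rel_same, rel_diff], dim=-1))

        return cls_logits, r2_logits, (a, p, n)

def joint_loss(cls_logits, r2_logits, triplet, labels, r2_labels,
               alpha=1.0, beta=1.0, margin=1.0):
    a, p, n = triplet
    loss_cls = F.cross_entropy(cls_logits, labels)       # text classification
    loss_r2 = F.cross_entropy(r2_logits, r2_labels)      # relation-of-relation task
    loss_tri = F.triplet_margin_loss(a, p, n, margin=margin)  # pull same-label pairs closer
    return loss_cls + alpha * loss_r2 + beta * loss_tri
```

The triplet term encourages sentences with the same label to sit closer in embedding space than sentences with different labels, which is consistent with the abstract's goal of analyzing differences and connections among labels; the exact weighting between the three terms would be a tuning choice.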