DOI: 10.1145/3511808.3557562 · CIKM '22 Conference Proceedings · Short paper

Co-Training with Validation: A Generic Framework for Semi-Supervised Relation Extraction

Published: 17 October 2022

Abstract

In low-resource natural language applications, Semi-supervised Relation Extraction (SRE) plays a key role in mitigating the scarcity of labelled sentences by harnessing large unlabeled corpora. Current SRE methods are mainly built on the paradigm of Self-Training with Validation (STV), which employs two learners, each of which plays a single role: either annotator or validator. However, such a single-role setting under-utilizes the potential of the learners in promoting new labelled instances from the unlabeled corpus. In this paper, we propose a generic SRE paradigm, called Co-Training with Validation (CTV), that makes full use of both learners to benefit more from the unlabeled corpus. In CTV, each learner alternately plays the roles of annotator and validator to generate and validate pseudo-labelled instances. Thus, more high-quality instances are exploited, and the two learners reinforce each other during the learning process. Experimental results on two public datasets show that our CTV considerably outperforms state-of-the-art SRE techniques and works well with different kinds of learners for relation extraction.
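The alternating annotator/validator loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the toy `CentroidLearner`, the margin-based confidence, and the acceptance threshold are all assumptions standing in for the real relation-extraction models and their scoring.

```python
class CentroidLearner:
    """Toy 1-D nearest-centroid classifier standing in for a real RE model.

    In the paper the two learners would be heterogeneous neural models;
    this stand-in only illustrates the co-training mechanics.
    """

    def __init__(self):
        self.centroids = {}

    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            sums[label] = sums.get(label, 0.0) + x
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {l: sums[l] / counts[l] for l in sums}

    def predict_with_conf(self, x):
        # Confidence: normalized margin between the two nearest centroids.
        dists = sorted((abs(x - c), l) for l, c in self.centroids.items())
        (d1, label), (d2, _) = dists[0], dists[1]
        conf = (d2 - d1) / (d2 + d1 + 1e-9)
        return label, conf


def co_train_with_validation(learners, labelled, unlabeled, rounds=5, threshold=0.5):
    """Each round, one learner annotates the unlabeled pool and the other
    validates; the roles swap every round, so both learners play both parts."""
    pool = list(unlabeled)
    train = [list(labelled), list(labelled)]  # one training set per learner
    for r in range(rounds):
        annotator, validator = r % 2, (r + 1) % 2  # roles alternate each round
        la, lv = learners[annotator], learners[validator]
        la.fit([x for x, _ in train[annotator]], [y for _, y in train[annotator]])
        lv.fit([x for x, _ in train[validator]], [y for _, y in train[validator]])
        accepted, rest = [], []
        for x in pool:
            y_hat, conf_a = la.predict_with_conf(x)
            y_val, conf_v = lv.predict_with_conf(x)
            # Validation: keep a pseudo-label only if the validator agrees
            # and both learners are confident enough.
            if y_hat == y_val and min(conf_a, conf_v) >= threshold:
                accepted.append((x, y_hat))
            else:
                rest.append(x)
        for t in train:
            t.extend(accepted)  # both learners benefit from validated instances
        pool = rest
    return learners, train


# Two well-separated classes in 1-D; 2.6 is ambiguous and should stay unlabelled.
labelled = [(0.0, "A"), (0.2, "A"), (5.0, "B"), (5.2, "B")]
unlabeled = [0.1, 0.3, 4.9, 5.1, 2.6]
learners, train = co_train_with_validation(
    [CentroidLearner(), CentroidLearner()], labelled, unlabeled
)
```

In this sketch, the confidently agreed-upon instances near each centroid are promoted into both training sets, while the ambiguous point halfway between the classes is rejected every round; the STV baseline described in the abstract would fix the annotator and validator roles instead of swapping them.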


Cited By

  • SelfLRE: Self-refining Representation Learning for Low-resource Relation Extraction. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23), 2364-2368. https://doi.org/10.1145/3539618.3592058. Online publication date: 19 July 2023.



Published In

CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
October 2022
5274 pages
ISBN:9781450392365
DOI:10.1145/3511808
  • General Chairs:
  • Mohammad Al Hasan,
  • Li Xiong
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. low resource
  2. natural language processing
  3. relation extraction
  4. semi-supervised learning

Qualifiers

  • Short-paper

Funding Sources

  • Fundamental Research Funds for the Central Universities

Conference

CIKM '22

Acceptance Rates

CIKM '22 paper acceptance rate: 621 of 2,257 submissions (28%)
Overall acceptance rate: 1,861 of 8,427 submissions (22%)


