Semantic Relation Extraction: A Review of Approaches, Datasets, and Evaluation Methods With Looking at the Methods and Datasets in the Persian Language

Published: 20 July 2023

Abstract

A large volume of unstructured data, especially text, is generated and exchanged daily, so the importance of extracting patterns and discovering knowledge from textual data continues to grow. As the task of automatically recognizing the relations between two or more entities, semantic relation extraction plays a prominent role in exploiting raw text. This article surveys the different approaches to and types of relation extraction in English, together with the most prominent methods proposed for Persian. We also introduce, analyze, and compare the most important datasets available for relation extraction in Persian and English. Furthermore, traditional and emerging evaluation metrics for supervised, semi-supervised, and unsupervised methods are described, along with pointers to commonly used performance evaluation datasets. Finally, we briefly describe the challenges of relation extraction in Persian and English, as well as the challenges of dataset creation.
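
As a concrete illustration of the scoring setup most commonly used for supervised relation extraction (a minimal sketch, not taken from the article), systems are typically evaluated with precision, recall, and F1 computed over extracted (head, relation, tail) triples. All entity and relation names below are hypothetical examples.

```python
# Minimal illustrative sketch: micro precision/recall/F1 over extracted
# (head, relation, tail) triples. Names are hypothetical, not from any
# dataset discussed in the article.

def micro_prf(gold: set, predicted: set) -> tuple[float, float, float]:
    """Micro-averaged precision, recall, and F1 over sets of triples."""
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

gold = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "employer", "University of Paris"),
}
predicted = {
    ("Marie Curie", "born_in", "Warsaw"),   # correct extraction
    ("Marie Curie", "born_in", "Paris"),    # spurious extraction
}

precision, recall, f1 = micro_prf(gold, predicted)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# precision=0.50 recall=0.50 f1=0.50
```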



Published In

ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 7
July 2023, 422 pages
ISSN: 2375-4699
EISSN: 2375-4702
DOI: 10.1145/3610376

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 20 July 2023
Online AM: 14 April 2023
Accepted: 08 April 2023
Revised: 04 April 2023
Received: 02 May 2022
Published in TALLIP Volume 22, Issue 7

Author Tags

1. Semantic relations
2. relation extraction
3. Natural Language Processing (NLP)
4. automatic extraction
5. Persian text processing
6. linguistics
7. dataset
8. evaluation methods
9. information extraction