Abstract
Existing work on Document-Level Event Factuality Identification (DEFI) relies on the syntactic and semantic features of event triggers and sentences. However, focusing only on features related to the event trigger may omit information that is important for factuality identification, while finding the critical information in the whole document remains challenging. In this paper, our motivation is that document-level event factuality can be inferred from a complete set of evidential sentences rather than from the event trigger alone. Hence, we construct a new Evidence-Based Document-Level Event Factuality (EB-DLEF) corpus and introduce a new evidential sentence selection task for DEFI. Moreover, we propose a pipeline approach that solves the two-step task of evidential sentence selection and event factuality identification, and it outperforms various baselines.
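To make the two-step pipeline concrete, the sketch below shows one plausible way to wire it up: a sentence-level selector first scores each document sentence against the event description, and a factuality classifier then predicts a label from the selected evidence. This is a minimal illustration only; the model names, label set, and thresholding heuristic are assumptions and not the authors' actual implementation.

```python
# Hypothetical two-step pipeline for evidence-based DEFI.
# Assumptions: bert-base-uncased backbones, a binary evidence selector,
# and a 5-way factuality label set (e.g. CT+, CT-, PS+, PS-, Uu).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Step 1: evidence selector (evidential vs. non-evidential sentence).
selector = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
# Step 2: factuality classifier over the selected evidence.
classifier = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)

def select_evidence(event, sentences, threshold=0.5):
    """Keep sentences whose 'evidential' probability exceeds the threshold."""
    evidence = []
    for sent in sentences:
        inputs = tokenizer(event, sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = selector(**inputs).logits.softmax(-1)
        if probs[0, 1].item() > threshold:
            evidence.append(sent)
    return evidence

def identify_factuality(event, evidence):
    """Predict a factuality label from the concatenated evidential sentences."""
    inputs = tokenizer(event, " ".join(evidence),
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        return classifier(**inputs).logits.argmax(-1).item()
```

In practice both components would be fine-tuned on the EB-DLEF annotations, with the selector trained on sentence-level evidence labels and the classifier trained on document-level factuality labels; the heuristic threshold here merely stands in for whatever selection criterion the trained selector uses.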
Acknowledgments
The authors would like to thank the three anonymous reviewers for their comments on this paper. This research was supported by the National Natural Science Foundation of China (Nos. 61836007 and 62006167) and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, H., Qian, Z., Li, P., Zhu, X. (2022). Evidence-Based Document-Level Event Factuality Identification. In: Khanna, S., Cao, J., Bai, Q., Xu, G. (eds) PRICAI 2022: Trends in Artificial Intelligence. PRICAI 2022. Lecture Notes in Computer Science, vol 13630. Springer, Cham. https://doi.org/10.1007/978-3-031-20865-2_18
DOI: https://doi.org/10.1007/978-3-031-20865-2_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20864-5
Online ISBN: 978-3-031-20865-2