EnhEE: A Joint Learning Framework with Enhanced Decoding for Overlapping Event Extraction

Research article
DOI: 10.1145/3652628.3652740
Published: 23 May 2024

Abstract

Event extraction aims to extract event information from unstructured text and present it in a structured form. In overlapping event extraction, triggers and arguments that are shared across events directly degrade extraction performance, and traditional event extraction methods cannot effectively identify and extract sentences containing overlapping events. To address this problem, this paper presents EnhEE, a joint learning framework with enhanced decoding for overlapping event extraction, consisting of a BERT encoder and three decoders. In the decoding stage, a conditional fusion function integrates information from the preceding decoding steps, which effectively improves performance on overlapping event extraction. In addition, a BiLSTM module is added to the trigger extraction and argument extraction stages, improving the model's ability to capture contextual information. On the public FewFC dataset, EnhEE achieves F1 scores of 79.4% on trigger extraction and 72.9% on argument extraction. The experimental results show that the method is effective, especially for extracting overlapping events, where it improves significantly over previous methods.
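The abstract describes a BERT encoder feeding three decoders, with a conditional fusion function that injects information from earlier decoding steps and BiLSTM layers in the trigger and argument stages. The sketch below is a minimal, hypothetical PyTorch-style illustration of that decoder pattern, not the authors' implementation: the class names (ConditionalFusion, SpanDecoder), the scale-and-shift form of the fusion, and the start/end span-tagging heads are assumptions inferred from the abstract.

import torch
import torch.nn as nn

class ConditionalFusion(nn.Module):
    # Hypothetical conditional fusion: modulates token representations with a
    # condition vector (e.g., a predicted event-type or trigger embedding)
    # via a learned scale and shift applied after layer normalization.
    def __init__(self, hidden_size):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size, elementwise_affine=False)
        self.to_scale = nn.Linear(hidden_size, hidden_size)
        self.to_shift = nn.Linear(hidden_size, hidden_size)

    def forward(self, token_states, condition):
        # token_states: (batch, seq_len, hidden); condition: (batch, hidden)
        scale = self.to_scale(condition).unsqueeze(1)   # (batch, 1, hidden)
        shift = self.to_shift(condition).unsqueeze(1)   # (batch, 1, hidden)
        return self.norm(token_states) * (1.0 + scale) + shift

class SpanDecoder(nn.Module):
    # Hypothetical trigger/argument decoder: fuse the condition into the
    # encoder states, run a BiLSTM for extra context, then tag span
    # start/end positions so overlapping spans can be decoded independently.
    def __init__(self, hidden_size):
        super().__init__()
        self.fusion = ConditionalFusion(hidden_size)
        self.bilstm = nn.LSTM(hidden_size, hidden_size // 2,
                              batch_first=True, bidirectional=True)
        self.start_tagger = nn.Linear(hidden_size, 1)
        self.end_tagger = nn.Linear(hidden_size, 1)

    def forward(self, token_states, condition):
        fused = self.fusion(token_states, condition)
        contextual, _ = self.bilstm(fused)
        start_logits = self.start_tagger(contextual).squeeze(-1)
        end_logits = self.end_tagger(contextual).squeeze(-1)
        return start_logits, end_logits

# Hypothetical wiring of the three decoding stages, conditioned step by step:
#   1. type_logits = type_decoder(bert_states)                      # event type detection
#   2. trig_start, trig_end = trigger_decoder(bert_states, type_embedding)
#   3. arg_start, arg_end = argument_decoder(bert_states, trigger_embedding)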



Published In

ICAICE '23: Proceedings of the 4th International Conference on Artificial Intelligence and Computer Engineering
November 2023, 1263 pages
ISBN: 9798400708831
DOI: 10.1145/3652628
Publisher: Association for Computing Machinery, New York, NY, United States
