Cascaded Solution for Multi-domain Conditional Question Answering with Multiple-Span Answers

  • Conference paper
  • Part of the proceedings: CCKS 2022 - Evaluation Track (CCKS 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1711)

Abstract

This paper presents our technical solution to the CCKS 2022 task "A Dataset of Conditional Question Answering with Multiple-Span Answers". The solution consists of five stages: Data Analysis and Processing, Condition-Answer Extraction, Post-extraction Processing, Condition-Answer Relation Classification, and Post-classification Processing; the rule-based Post-extraction Processing and Post-classification Processing stages together comprise seven cascaded modules. Because the task's training data covers multiple domains and is limited in size, we design a prediction method for multi-domain scenarios in which a fine-tuned pre-trained language model extracts conditions, coarse-grained answers, and fine-grained answers. Relations among conditions, coarse-grained answers, and fine-grained answers are then identified by binary classification, while constraint extraction is rule-based. The proposed solution achieves an F1 score of 0.74487 on the test set, ranking third, which demonstrates its effectiveness in multi-domain scenarios.
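
To make the cascaded design concrete, the short Python sketch below mirrors the stage ordering described above: span extraction, rule-based post-extraction processing, pairwise binary relation classification between conditions and answers, and rule-based post-classification processing. It is an illustrative sketch only; every identifier and placeholder rule is hypothetical, and the fine-tuned pre-trained language models used for extraction and classification in the paper are replaced by trivial stand-ins so the example stays self-contained and runnable.

from dataclasses import dataclass, field
from itertools import product
from typing import List, Tuple

# Hypothetical sketch: stage names follow the pipeline described in the abstract,
# but every identifier and function body below is an illustrative stand-in, not the
# authors' released implementation.

@dataclass
class Span:
    text: str
    kind: str  # "condition", "coarse_answer", or "fine_answer"

@dataclass
class QAPrediction:
    answers: List[Span] = field(default_factory=list)
    relations: List[Tuple[str, str]] = field(default_factory=list)  # (condition, answer)

def extract_spans(question: str, document: str) -> List[Span]:
    """Condition-Answer Extraction: in the paper a fine-tuned pre-trained language
    model tags spans; a fixed placeholder keeps the sketch self-contained."""
    return [Span("subject to prior approval", "condition"),
            Span("within 30 days", "coarse_answer")]

def post_extraction_rules(spans: List[Span]) -> List[Span]:
    """Rule-based Post-extraction Processing, e.g. dropping empty or duplicate spans."""
    seen, kept = set(), []
    for span in spans:
        if span.text and span.text not in seen:
            seen.add(span.text)
            kept.append(span)
    return kept

def relation_is_linked(condition: Span, answer: Span) -> bool:
    """Condition-Answer Relation Classification as a binary decision over one
    (condition, answer) pair; the paper uses a fine-tuned classifier, stubbed here."""
    return True

def cascaded_pipeline(question: str, document: str) -> QAPrediction:
    spans = post_extraction_rules(extract_spans(question, document))
    conditions = [s for s in spans if s.kind == "condition"]
    answers = [s for s in spans if s.kind != "condition"]
    prediction = QAPrediction(answers=answers)
    # Pairwise binary classification over all (condition, answer) combinations.
    for condition, answer in product(conditions, answers):
        if relation_is_linked(condition, answer):
            prediction.relations.append((condition.text, answer.text))
    # Rule-based Post-classification Processing would prune inconsistent links here.
    return prediction

if __name__ == "__main__":
    print(cascaded_pipeline("When must the claim be filed?", "..."))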

Author information

Corresponding author

Correspondence to Junhao Zhu.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhu, J. et al. (2022). Cascaded Solution for Multi-domain Conditional Question Answering with Multiple-Span Answers. In: Zhang, N., Wang, M., Wu, T., Hu, W., Deng, S. (eds) CCKS 2022 - Evaluation Track. CCKS 2022. Communications in Computer and Information Science, vol 1711. Springer, Singapore. https://doi.org/10.1007/978-981-19-8300-9_6

  • DOI: https://doi.org/10.1007/978-981-19-8300-9_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-8299-6

  • Online ISBN: 978-981-19-8300-9
