Abstract
Natural language processing (NLP) plays a pivotal role in artificial intelligence, enabling machines to comprehend and model human language. Recent advances in NLP, particularly conversational bots, have attracted substantial attention and adoption among developers, raising demand for smaller and more efficient models. This paper explores methods for obtaining such compact NLP models for Romanian offensive language detection. Specifically, we employ three key approaches: (1) training a Transformer-based neural network to detect offensive language, (2) applying data augmentation and knowledge distillation to increase performance, and (3) combining multi-task learning with knowledge distillation and teacher annealing across diverse datasets to enhance efficiency. Together, these methods yield demonstrably improved results.
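To make the training objective the abstract describes more concrete, below is a minimal PyTorch sketch of knowledge distillation combined with teacher annealing. This is an illustration under assumptions, not the paper's implementation: the function name, the linear annealing schedule, and the temperature value are illustrative choices; the paper's exact loss weighting and hyperparameters are not given in this abstract.

```python
# Minimal sketch: knowledge distillation with teacher annealing.
# Hypothetical names and hyperparameters; the paper's exact setup is not
# specified in this abstract.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      step: int, total_steps: int,
                      temperature: float = 2.0) -> torch.Tensor:
    # Teacher annealing: lam grows from 0 to 1 over training, so the target
    # gradually shifts from the teacher's soft predictions to the gold labels.
    lam = step / total_steps

    # Hard-label loss against the gold annotations.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label loss against the teacher's tempered distribution; the T^2
    # factor keeps gradient magnitudes comparable across temperatures.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_probs, soft_targets,
                  reduction="batchmean") * temperature ** 2

    return lam * ce + (1.0 - lam) * kd

# Toy usage: a batch of 4 comments with binary offensive/non-offensive labels.
student_logits = torch.randn(4, 2, requires_grad=True)
teacher_logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels,
                         step=100, total_steps=1000)
loss.backward()
```

In a multi-task variant of this setup, one such loss would be computed per task and summed, with each task distilled from its own single-task teacher.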
Acknowledgements
This work was supported by the NUST POLITEHNICA Bucharest through the PubArt program and by a grant from the National Program for Research of the National Association of Technical Universities (GNAC ARUT 2023).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Matei, VC., Tăiatu, IM., Smădu, RA., Cercel, DC. (2024). Enhancing Romanian Offensive Language Detection Through Knowledge Distillation, Multi-task Learning, and Data Augmentation. In: Rapp, A., Di Caro, L., Meziane, F., Sugumaran, V. (eds) Natural Language Processing and Information Systems. NLDB 2024. Lecture Notes in Computer Science, vol 14762. Springer, Cham. https://doi.org/10.1007/978-3-031-70239-6_22
Print ISBN: 978-3-031-70238-9
Online ISBN: 978-3-031-70239-6
eBook Packages: Computer Science, Computer Science (R0)