research-article

Bengali text document categorization based on very deep convolution neural network

Published: 01 December 2021

Highlights

Illustrated the development of a benchmark text corpus for low-resource languages.
Presented an algorithm for optimising the hyperparameters of embedding models.
Evaluated several embedding models using semantic and syntactic similarity measures.
Integrated embedding and very deep learning models to improve text classification.
Evaluated the proposed and existing models on the built corpus for text classification.
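The second highlight refers to the Embedding Parameters Identification (EPI) algorithm for choosing embedding hyperparameters. The algorithm itself is not reproduced on this page; the sketch below shows one plausible shape of such a search, an exhaustive grid over hyperparameter combinations scored by an intrinsic evaluator. The grid values and the toy scoring function are illustrative assumptions, not the paper's actual settings.

```python
from itertools import product

def select_embedding_params(score_fn, grid):
    """Score every hyperparameter combination and return the best one.

    A hypothetical sketch of an EPI-style search; the paper's actual
    procedure may prune the grid or score configurations differently.
    """
    best_cfg, best_score = None, float("-inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = score_fn(cfg)  # e.g. mean Spearman correlation on a benchmark
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Illustrative grid: vector dimension, context window, minimum word count.
grid = {"dim": (100, 200, 300), "window": (5, 10), "min_count": (3, 5)}

# Dummy scorer standing in for a real intrinsic evaluation.
toy_score = lambda cfg: cfg["dim"] / 300 + cfg["window"] / 10 - cfg["min_count"] / 10

best, score = select_embedding_params(toy_score, grid)
```

With the toy scorer above, the search favours the largest dimension and window and the smallest minimum count; in practice the scorer would train an embedding per configuration and evaluate it on a similarity benchmark.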

Abstract

In recent years, the amount of digital text content in the Bengali language has grown enormously on online platforms, owing to effortless Internet access via electronic gadgets. As a result, an enormous amount of unstructured data has been created that demands considerable time and effort to organize, search, or manipulate. To manage such a massive number of documents effectively, this paper proposes an intelligent text document classification system. Intelligent classification of text documents in a resource-constrained language (such as Bengali) is challenging due to the unavailability of linguistic resources, intelligent NLP tools, and large text corpora. Moreover, Bengali texts appear in two morphological variants (i.e., Sadhu-bhasha and Cholito-bhasha), making the classification task more complicated. The proposed intelligent text classification model comprises a GloVe embedding and a Very Deep Convolutional Neural Network (VDCNN) classifier. Owing to the unavailability of a standard corpus, this work develops a large Embedding Corpus (EC) containing 969,000 unlabelled texts and a Bengali Text Classification Corpus (BDTC) containing 156,207 labelled documents arranged into 13 categories. Moreover, this work proposes the Embedding Parameters Identification (EPI) algorithm, which selects the best embedding parameters for low-resource languages (including Bengali). Evaluation of 165 embedding models with intrinsic evaluators (semantic and syntactic similarity measures) shows that the GloVe model is more suitable (in terms of Spearman and Pearson correlation) than other embeddings (Word2Vec, FastText, m-BERT) for Bengali text. Experimental results on the test dataset confirm that the proposed GloVe + VDCNN model outperforms the other classification models and existing methods on the Bengali text classification task, achieving the highest accuracy of 96.96%.
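The abstract's intrinsic evaluation compares embedding models by correlating model-assigned word-pair similarities with human judgements (Spearman and Pearson correlation). A minimal, dependency-free sketch of the Spearman side is shown below; the word vectors and human ratings are invented for illustration and do not come from the paper's corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank(xs):
    """1-based ranks of a sequence (assumes no ties, for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx)
                    * sum((b - my) ** 2 for b in ry))
    return num / den

# Hypothetical transliterated Bengali words with toy 3-d vectors.
vectors = {
    "nodi":  [0.9, 0.1, 0.0],   # "river"
    "sagor": [0.8, 0.2, 0.1],   # "sea"
    "boi":   [0.0, 0.9, 0.3],   # "book"
}
# Word pairs with (invented) human similarity ratings on a 0-10 scale.
pairs = [("nodi", "sagor", 8.5), ("nodi", "boi", 2.0), ("sagor", "boi", 2.5)]

model_sims = [cosine(vectors[a], vectors[b]) for a, b, _ in pairs]
human = [h for _, _, h in pairs]
rho = spearman(model_sims, human)
```

A higher rho means the embedding's notion of similarity orders word pairs the way humans do; the same machinery with raw scores instead of ranks gives the Pearson variant.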





          Published In

Expert Systems with Applications: An International Journal, Volume 184, Issue C, Dec 2021, 1533 pages

          Publisher

          Pergamon Press, Inc.

          United States


          Author Tags

          1. Intelligent systems
          2. Natural language processing
          3. Low resource language
          4. Semantic feature extraction
          5. Document categorization
          6. Deep convolution network

          Qualifiers

          • Research-article


Cited By
• (2024) AraCovTexFinder. Engineering Applications of Artificial Intelligence, 133:PA. doi:10.1016/j.engappai.2024.107987. Online publication date: 1-Jul-2024.
• (2023) A Comprehensive Roadmap on Bangla Text-based Sentiment Analysis. ACM Transactions on Asian and Low-Resource Language Information Processing, 22:4 (1-29). doi:10.1145/3572783. Online publication date: 6-Apr-2023.
• (2023) Improving news headline text generation quality through frequent POS-Tag patterns analysis. Engineering Applications of Artificial Intelligence, 125:C. doi:10.1016/j.engappai.2023.106718. Online publication date: 1-Oct-2023.
• (2023) Leveraging the meta-embedding for text classification in a resource-constrained language. Engineering Applications of Artificial Intelligence, 124:C. doi:10.1016/j.engappai.2023.106586. Online publication date: 1-Sep-2023.
• (2023) CovTiNet: Covid text identification network using attention-based positional embedding feature fusion. Neural Computing and Applications, 35:18 (13503-13527). doi:10.1007/s00521-023-08442-y. Online publication date: 14-Mar-2023.
• (2022) Named Entity Recognition for Public Interest Litigation Based on a Deep Contextualized Pretraining Approach. Scientific Programming, 2022. doi:10.1155/2022/7682373. Online publication date: 1-Jan-2022.
• (2022) A Graph Convolution Neural Network-Based Framework for Communication Network K-Terminal Reliability Estimation. Security and Communication Networks, 2022. doi:10.1155/2022/4316623. Online publication date: 1-Jan-2022.
• (2022) A dictionary based model for bengali document classification. Applied Intelligence, 53:11 (14023-14042). doi:10.1007/s10489-022-03955-w. Online publication date: 20-Oct-2022.
