An Efficient Optimized DenseNet Model for Aspect-Based Multi-Label Classification
<p>Figure 1. Visualization of multi-labeling [<a href="#B12-algorithms-16-00548" class="html-bibr">12</a>].</p>
<p>Figure 2. Proposed system model for multi-labeling.</p>
<p>Figure 3. DenseNet model.</p>
<p>Figure 4. Flow of DenseNet.</p>
<p>Figure 5. AO’s working flow.</p>
<p>Figure 6. Proposed model accuracy.</p>
<p>Figure 7. Proposed model loss.</p>
<p>Figure 8. Frequency-based features of the emotion dataset.</p>
<p>Figure 9. Frequency-based features of the medical dataset.</p>
<p>Figure 10. Frequency-based features of the news dataset.</p>
<p>Figure 11. Classifier accuracy on the news dataset.</p>
<p>Figure 12. Classifier accuracy on the movie dataset.</p>
<p>Figure 13. Classifier accuracy on the emotion dataset.</p>
<p>Figure 14. Classifier accuracy on the medical dataset.</p>
<p>Figure 15. Proposed ensembler sensitivity analysis.</p>
<p>Figure 16. CNN sensitivity analysis.</p>
<p>Figure 17. Confusion matrix of DenseNet-AO.</p>
<p>Figure 18. Confusion matrix of NB.</p>
<p>Figure 19. Confusion matrix of BERT.</p>
<p>Figure 20. Confusion matrix of CNN.</p>
Abstract
1. Introduction
- Introducing a novel approach that utilizes aspect-based multi-label classification for accurately classifying emotions in textual data, enabling more granular sentiment analysis.
- Comparative Analysis of Sentiment Classification Techniques: Conducting an extensive analysis to compare the effectiveness of different sentiment classification techniques in categorizing emotions, providing insights into their suitability for multi-label sentiment analysis tasks.
- Evaluation of State-of-the-Art Algorithms: Assessing the performance of five advanced multi-label classification algorithms on diverse emotion-based textual datasets, offering a comprehensive understanding of their strengths and weaknesses in handling multi-label sentiment analysis.
- Introduction of Ensembler: Presenting the Ensemble of DenseNet based on AO (EDAO) technique, which jointly optimizes accuracy and diversity to enhance multi-label sentiment analysis models, offering a new perspective on multi-label learning approaches.
- Development of Comprehensive Workflow: Establishing a comprehensive workflow that incorporates preprocessing, feature extraction, and model tuning techniques, elevating the performance of sentiment analysis models and ensuring the use of refined and precise data.
- Experimental Validation and Performance Comparison: Extensive experiments validate the effectiveness of the proposed EDAO approach, comparing it with existing benchmark methods and showcasing its superior ability to capture sentiment variations and handle complex multi-label datasets.
2. Related Work
2.1. Transformation-Based Schemes
2.2. Adaptation Algorithms
2.3. Ensemble Methods
3. Proposed Methodology
3.1. Dataset Collection
3.2. Preprocessing of Data
3.3. Feature Extraction
3.4. Word2Vec Representation
3.5. Swarm-Based Ensembler
3.5.1. DenseNet Classification
3.5.2. DenseNet Mechanism
3.5.3. Multi-Objective Optimization Solution
Algorithm 1 Aquila Optimizer for DenseNet parameters.
Algorithm 2 Proposed model algorithm (pseudocode).
Require: Data (input data for sentiment analysis). Ensure: SentimentResults (output classification results).
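The full procedures appear in the paper’s Algorithms 1 and 2; as a rough illustration of the idea behind Algorithm 1, the following is a minimal, simplified sketch of an AO-style metaheuristic search over DenseNet hyperparameters. The parameter names, bounds, and the surrogate fitness function are all illustrative assumptions, not values from the paper; in practice the fitness would be the validation performance of a trained DenseNet.

```python
import random

# Hypothetical search space for two DenseNet hyperparameters
# (names and bounds are illustrative, not taken from the paper).
BOUNDS = {"growth_rate": (8, 48), "learning_rate": (1e-4, 1e-1)}

def random_solution():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def clip(sol):
    # Keep every coordinate inside its search bounds.
    return {k: min(max(v, BOUNDS[k][0]), BOUNDS[k][1]) for k, v in sol.items()}

def fitness(sol):
    # Surrogate objective standing in for validation accuracy;
    # here a smooth function peaked at an arbitrary "good" setting.
    return -((sol["growth_rate"] - 32) ** 2
             + (sol["learning_rate"] - 0.01) ** 2 * 1e4)

def aquila_optimize(pop_size=20, iterations=50, seed=0):
    random.seed(seed)
    pop = [random_solution() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for t in range(iterations):
        alpha = 1 - t / iterations  # shrinking step: explore -> exploit
        new_pop = []
        for sol in pop:
            cand = {}
            for k in BOUNDS:
                span = BOUNDS[k][1] - BOUNDS[k][0]
                # Move toward the best-so-far with a decaying random
                # perturbation, a simplified stand-in for AO's expanded
                # and narrowed search phases.
                step = alpha * span * (random.random() - 0.5)
                cand[k] = sol[k] + (best[k] - sol[k]) * random.random() + step
            cand = clip(cand)
            # Greedy replacement: keep the better of old and candidate.
            new_pop.append(cand if fitness(cand) > fitness(sol) else sol)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best
```

The returned dictionary would then parameterize the DenseNet that the ensemble trains; the real AO additionally distinguishes four hunting behaviors that this sketch collapses into one update rule.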
4. Simulation Results and Discussion
4.1. Dataset Description
4.2. Metrics for Performance Evaluation
4.2.1. Accuracy
4.2.2. Precision
4.2.3. Recall
4.2.4. F-Measure
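Sections 4.2.1–4.2.4 name the standard evaluation metrics. As a sketch, the micro-averaged variants for multi-label indicator matrices can be computed as follows; the function name and the micro-averaging choice are this sketch’s assumptions (the paper may use example-based averaging instead):

```python
def multilabel_metrics(y_true, y_pred):
    """Micro-averaged accuracy, precision, recall, and F-measure for
    binary indicator label matrices (lists of 0/1 lists)."""
    tp = fp = fn = tn = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            if t and p:
                tp += 1        # label present and predicted
            elif not t and p:
                fp += 1        # label absent but predicted
            elif t and not p:
                fn += 1        # label present but missed
            else:
                tn += 1        # label absent and not predicted
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}
```

For example, `multilabel_metrics([[1, 0, 1], [0, 1, 0]], [[1, 0, 0], [0, 1, 1]])` counts 2 true positives, 1 false positive, and 1 false negative over 6 label decisions.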
4.3. Compared Models
- Proposed Ensembler: DenseNet serves as the deep neural network base learner, ensembled and tuned with AO; the model exploits DenseNet’s densely connected stack of layers.
- NB: A classical ML algorithm applied to the multi-labeling task as a baseline.
- CNN: A multi-label learning (MLL) algorithm fundamentally rooted in neural networks.
- ML-RBF [22]: An MLL algorithm built on radial basis function (RBF) neural networks; it serves as the primary base trainer in the ENL system.
- RAKEL [26]: Another multi-label learning approach in which a single-label base learner makes judgments on a randomly selected subset of the labels; this limited subset serves as the basis for decision-making, reflecting the nature of most base learners in this context.
- ECC [24]: An ensemble approach for MLL based on classifier chains, in which the original multi-label task is decomposed into a chain of linked single-label subproblems.
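To make the RAKEL idea above concrete, here is a minimal sketch: each ensemble member is trained on a random k-subset of the labels, and members vote per label at prediction time. The toy 1-nearest-neighbour base learner and all class/parameter names are illustrative assumptions of this sketch; real RAKEL applies a label-powerset transformation with an arbitrary single-label classifier.

```python
import random

class RakelSketch:
    """Minimal RAKEL-style ensemble: each member handles a random
    k-subset of the labels; members vote per label at prediction time."""

    def __init__(self, n_labels, k=3, n_models=5, seed=0):
        rng = random.Random(seed)
        self.n_labels = n_labels
        self.subsets = [sorted(rng.sample(range(n_labels), k))
                        for _ in range(n_models)]
        self.memories = []  # one (x, projected labelset) store per member

    def fit(self, X, Y):
        # Each member memorizes training points with labels projected
        # onto its own subset (a toy stand-in for label-powerset training).
        self.memories = [[(x, tuple(y[i] for i in s))
                          for x, y in zip(X, Y)]
                         for s in self.subsets]
        return self

    def predict(self, x):
        votes = [0] * self.n_labels
        counts = [0] * self.n_labels
        for s, mem in zip(self.subsets, self.memories):
            # 1-NN inside the member: nearest stored example's labelset wins.
            _, labelset = min(
                mem, key=lambda it: sum((a - b) ** 2 for a, b in zip(it[0], x)))
            for idx, lab in zip(s, labelset):
                votes[idx] += lab
                counts[idx] += 1
        # Majority vote per label; uncovered labels default to 0.
        return [1 if counts[i] and votes[i] * 2 >= counts[i] else 0
                for i in range(self.n_labels)]
```

With `n_models` random subsets of size `k`, each label is voted on by roughly `n_models * k / n_labels` members, which is the mechanism that lets a single-label learner approximate label correlations.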
4.4. Performance of Different Methods
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
AO | Aquila Optimizer |
BR | Binary Relevance |
CC | Classifier Chain |
CNN | Convolutional Neural Network |
EDAO | Ensemble of DenseNet based on Aquila Optimizer |
LB | Label Combination |
ML | Machine Learning |
MLL | Multi-Label Learning |
NN | Neural Network |
SA | Sensitivity Analysis |
SVM | Support Vector Machine |
References
- Singh, P. Money Laundering and Abuse of the Financial System. Indian JL & Legal Rsch. 2023, 5, 1. [Google Scholar]
- Wu, C.; Xiong, Q.; Yi, H.; Yu, Y.; Zhu, Q.; Gao, M.; Chen, J. Multiple-element joint detection for aspect-based sentiment analysis. Knowl.-Based Syst. 2021, 223, 107073. [Google Scholar] [CrossRef]
- Sun, J.; Lang, J.; Fujita, H.; Li, H. Imbalanced enterprise credit evaluation with DTE-SBD: Decision tree ensemble based on SMOTE and bagging with differentiated sampling rates. Inf. Sci. 2018, 425, 76–91. [Google Scholar] [CrossRef]
- Piri, S.; Delen, D.; Liu, T. A synthetic informative minority over-sampling (SIMO) algorithm leveraging support vector machine to enhance learning from imbalanced datasets. Decis. Support Syst. 2018, 106, 15–29. [Google Scholar] [CrossRef]
- Zhang, C.; Tan, K.C.; Li, H.; Hong, G.S. A cost-sensitive deep belief network for imbalanced classification. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 109–122. [Google Scholar] [CrossRef]
- Collell, G.; Prelec, D.; Patil, K.R. A simple plug-in bagging ensemble based on threshold-moving for classifying binary and multiclass imbalanced data. Neurocomputing 2018, 275, 330–340. [Google Scholar] [CrossRef] [PubMed]
- Yu, H.; Sun, D.; Xi, X.; Yang, X.; Zheng, S.; Wang, Q. Fuzzy one-class extreme autoencoder. Neural Process. Lett. 2019, 50, 701–727. [Google Scholar] [CrossRef]
- Ben, X.; Zhang, P.; Lai, Z.; Yan, R.; Zhai, X.; Meng, W. A general tensor representation framework for cross-view gait recognition. Pattern Recogn. 2019, 90, 87–98. [Google Scholar] [CrossRef]
- Ben, X.; Gong, C.; Zhang, P.; Jia, X.; Wu, Q.; Meng, W. Coupled Patch Alignment for Matching Cross-view Gaits. IEEE Trans. Image Process. 2019, 28, 3142–3157. [Google Scholar] [CrossRef]
- Wang, S.; Yang, L.Y. Adaptive bi-weighting toward automatic initialization and model selection for HMM-based hybrid meta-clustering ensembles. IEEE Trans. Cybern. 2019, 49, 1657–1668. [Google Scholar]
- Moyano, J.M.; Gibaja, E.L.; Cios, K.J.; Ventura, S. Review of ensembles of multi-label classifiers: Models, experimental study, and prospects. Inform. Fusion 2018, 44, 33–45. [Google Scholar] [CrossRef]
- García-Pablos, A.; Cuadros, M.; Rigau, G. W2VLDA: Almost unsupervised system for aspect-based sentiment analysis. Expert Syst. Appl. 2018, 91, 127–137. [Google Scholar] [CrossRef]
- Kumar, V.; Pujari, A.K.; Padmanabhan, V.; Kagita, V.R. Group preserving label embedding for multi-label classification. Pattern Recognit. 2019, 90, 23–34. [Google Scholar] [CrossRef]
- Deng, M.; Wang, C.; Tang, M.; Zheng, T. Extracting cardiac dynamics within ECG signal for human identification and cardiovascular diseases classification. Neural Netw. 2018, 100, 70–83. [Google Scholar] [CrossRef] [PubMed]
- Szymański, P.; Kajdanowicz, T. Scikit-multilearn: A scikit-based Python environment for performing multi-label classification. J. Mach. Learn. Res. 2019, 20, 209–230. [Google Scholar]
- Cevikalp, H.; Benligiray, B.; Gerek, O.N. Semi-supervised robust deep neural networks for multi-label image classification. Pattern Recognit. 2020, 100, 107164. [Google Scholar] [CrossRef]
- Charte, F.; Rivera, A.J.; Charte, D.; del Jesus, M.J.; Herrera, F. Tips, guidelines and tools for managing multi-label datasets: The mldr.datasets R package and the cometa data repository. Neurocomputing 2018, 289, 68–85. [Google Scholar] [CrossRef]
- Ning, F.; Delhomme, D.; LeCun, Y.; Piano, F.; Bottou, L.; Barbano, P.E. Toward automatic phenotyping of developing embryos from videos. IEEE Trans. Image Process. 2005, 14, 1360–1371. [Google Scholar] [CrossRef]
- Mohamed, A.; Dahl, G.E.; Hinton, G. Acoustic modeling using deep belief networks. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 14–22. [Google Scholar] [CrossRef]
- Banbhrani, S.K.; Xu, B.; Soomro, P.D.; Jain, D.K.; Lin, H. TDO-Spider Taylor ChOA: An Optimized Deep-Learning-Based Sentiment Classification and Review Rating Prediction. Appl. Sci. 2022, 12, 10292. [Google Scholar] [CrossRef]
- Liao, S.; Wang, J.; Yu, R.; Sato, K.; Cheng, Z. CNN for situations understanding based on sentiment analysis of Twitter data. Procedia Comput. Sci. 2016, 111, 376–381. [Google Scholar] [CrossRef]
- Severyn, A.; Moschitti, A. Twitter sentiment analysis with deep convolutional neural networks. In Proceedings of the SIGIR ’15: The 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, 9–13 August 2015; pp. 959–962. [Google Scholar]
- Ouyang, X.; Zhou, P.; Li, C.H.; Liu, L. Sentiment analysis using convolutional neural network. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 26–28 October 2015; pp. 2359–2364. [Google Scholar]
- Liu, S.M.; Chen, J.H. A multi-label classification-based approach for sentiment classification. Expert Syst. Appl. 2015, 42, 1083–1093. [Google Scholar] [CrossRef]
- Montañes, E.; Senge, R.; Barranquero, J.; Quevedo, J.R.; del Coz, J.J.; Hüllermeier, E. Dependent binary relevance models for multi-label classification. Pattern Recognit. 2014, 47, 1494–1508. [Google Scholar] [CrossRef]
- Wu, G.; Zheng, R.; Tian, Y.; Liu, D. Joint Ranking SVM and Binary Relevance with robust Low-rank learning for multi-label classification. Neural Netw. 2020, 122, 24–39. [Google Scholar] [CrossRef] [PubMed]
- Lin, X.; Chen, X.W. Mr.KNN: Soft relevance for multi-label classification. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, Virtual, 19–23 October 2010; pp. 349–358. [Google Scholar]
- Wu, G.; Tian, Y.; Zhang, C. A unified framework implementing linear binary relevance for multi-label learning. Neurocomputing 2018, 289, 86–100. [Google Scholar] [CrossRef]
- Yan, Y.; Wang, Y.; Gao, W.C.; Zhang, B.W.; Yang, C.; Yin, X.C. LSTM: Multi-Label Ranking for Document Classification. Neural Process. Lett. 2018, 47, 117–138. [Google Scholar] [CrossRef]
- Azarbonyad, H.; Dehghani, M.; Marx, M.; Kamps, J. Learning to rank for multi-label text classification: Combining different sources of Information. Nat. Lang. Eng. 2020, 27, 89–111. [Google Scholar] [CrossRef]
- Nguyen, T.T.; Dang, M.T.; Luong, A.V.; Liew, A.W.-C.; Liang, T.; McCall, J. Multi-label classification via incremental clustering on an evolving data stream. Pattern Recognit. 2019, 95, 96–113. [Google Scholar] [CrossRef]
- Nguyen, T.T.T.; Nguyen, T.T.; Luong, A.V.; Nguyen, Q.V.H.; Liew, A.W.-C.; Stantic, B. Multi-label classification via label correlation and first order feature dependence in a data stream. Pattern Recognit. 2019, 90, 35–51. [Google Scholar] [CrossRef]
- Reyes, O.; Ventura, S. Evolutionary strategy to perform batch-mode active learning on multi-label data. ACM Trans. Intell. Syst. Tech. 2018, 6, 46:1–46:26. [Google Scholar] [CrossRef]
- Wang, R.; Wang, X.-Z.; Kwong, S.; Xu, C. Incorporating diversity and informativeness in multiple-instance active learning. IEEE Trans. Fuzzy Syst. 2018, 25, 1460–1475. [Google Scholar] [CrossRef]
- Wang, X.-Z.; Wang, R.; Xu, C. Discovering the relationship between generalization and uncertainty by incorporating complexity of classification. IEEE Trans. Cybern. 2018, 48, 703–715. [Google Scholar] [CrossRef]
- Wei, X.; Yu, Z.; Zhang, C.; Hu, Q. Ensemble of label specific features for multi-label classification. In Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, USA, 23–27 July 2018. [Google Scholar]
- Yapp, E.K.; Li, X.; Lu, W.F.; Tan, P.S. Comparison of base classifiers for MLL. Neurocomputing 2020, 394, 51–60. [Google Scholar] [CrossRef]
- Zhang, M.L.; Li, Y.K.; Liu, X.Y.; Geng, X. Binary relevance for MLL: An overview. Front. Comput. Sci. 2018, 12, 191–202. [Google Scholar] [CrossRef]
- Zhou, J.-P.; Chen, L.; Guo, Z.-H. iATC-NRAKEL: An efficient multi-label classifier for recognizing anatomical therapeutic chemical classes of drugs. Bioinformatics 2020, 36, 1391–1396. [Google Scholar] [CrossRef]
- Xie, M.-K.; Huang, S.-J. Partial MLL. In Proceedings of the AAAI, New Orleans, LA, USA, 2–7 February 2018; pp. 4302–4309. [Google Scholar]
- Zhou, Z.-H. A brief introduction to weakly supervised learning. Nat. Sci. Rev. 2018, 5, 44–53. [Google Scholar] [CrossRef]
- Al-Smadi, M.; Qawasmeh, O.; Al-Ayyoub, M.; Jararweh, Y.; Gupta, B. Deep recurrent neural network vs. support vector machine for aspect-based sentiment analysis of Arabic hotels’ reviews. J. Comput. Sci. 2018, 27, 386–393. [Google Scholar] [CrossRef]
- Ghosh, S.; Ekbal, A.; Bhattacharyya, P. A multitask framework to detect depression, sentiment and multi-label emotion from suicide notes. Cogn. Comput. 2022, 14, 110–129. [Google Scholar] [CrossRef]
- Al-Smadi, M.; Al-Ayyoub, M.; Jararweh, Y.; Qawasmeh, O. Enhancing aspect-based sentiment analysis of Arabic hotels’ reviews using morphological, syntactic and semantic features. Inf. Process. Manag. 2019, 56, 308–319. [Google Scholar] [CrossRef]
- Da’u, A.; Salim, N.; Rabiu, I.; Osman, A. Recommendation system exploiting aspect-based opinion mining with deep learning method. Inf. Sci. 2020, 512, 1279–1292. [Google Scholar]
- Kumar, J.A.; Trueman, T.E.; Cambria, E. Gender-based multi-aspect sentiment detection using multilabel learning. Inf. Sci. 2022, 606, 453–468. [Google Scholar] [CrossRef]
- Fu, X.; Wei, Y.; Xu, F.; Wang, T.; Lu, Y.; Li, J.; Huang, J.Z. Semi-supervised aspect-level sentiment classification model based on variational autoencoder. Knowl.-Based Syst. 2019, 171, 81–92. [Google Scholar] [CrossRef]
- Gu, X.; Gu, Y.; Wu, H. Cascaded convolutional neural networks for aspect-based opinion summary. Neural Process. Lett. 2017, 46, 581–594. [Google Scholar] [CrossRef]
- Gargiulo, F.; Silvestri, S.; Ciampi, M.; De Pietro, G. Deep neural network for hierarchical extreme multi-label text classification. Appl. Soft Comput. 2019, 79, 125–138. [Google Scholar] [CrossRef]
- Dahou, A.; Ewees, A.A.; Hashim, F.A.; Al-Qaness, M.A.; Orabi, D.A.; Soliman, E.M.; Abd Elaziz, M. Optimizing fake news detection for Arabic context: A multitask learning approach with transformers and an enhanced Nutcracker Optimization Algorithm. Knowl.-Based Syst. 2023, 280, 111023. [Google Scholar] [CrossRef]
- MultiLabel Dataset. Available online: http://www.uco.es/kdis/mllresources/ (accessed on 15 June 2023).
- MultiLabel Dataset. Available online: www.booking.com/hotel-reviews (accessed on 16 June 2023).
- MultiLabel Dataset. Available online: https://sci2s.ugr.es/keel/multilabel.php (accessed on 17 June 2023).
- Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. IEEE Access 2021, 9, 107250. [Google Scholar] [CrossRef]
- Charte, F. A comprehensive and didactic review on multi-label learning software tools. IEEE Access 2020, 8, 50330–50354. [Google Scholar] [CrossRef]
- Mai, L.; Le, B. Aspect-based sentiment analysis of Vietnamese texts with deep learning. In Proceedings of the Asian Conference on Intelligent Information and Database Systems, Dong Hoi City, Vietnam, 19–21 March 2018; Springer: Cham, Switzerland, 2018; pp. 149–158. [Google Scholar] [CrossRef]
Name | Description of Algorithms |
---|---|
IPF | Interior Function of Penalty |
FIMF | Forward Sequence + Label Combination Limited Mutual Information Interaction Sequence |
MAMFS | Sequential Forward Selection + High-Order Label Combination + Mutual Information |
MDMR | One by One + Mutual Information Selecting Forward Sequences |
MFNMI | n Forward Sequence Selection + Mutual Information Determined Locally |
MIFS | Optimization Alternative |
PSO-MMI | Particle Swarm Optimization |
mRMR | One by One + Mutual Information + Next Sequence |
PMU | Forward Sequence + Label Combination Second Order + Mutual Information |
Approach | Advantages | Disadvantages |
---|---|---|
BR | Simple and fast | Does not model label correlations, affected by class imbalance, requires complete labeled data |
Pairwise (PW) | Conceptually straightforward | Time complexity concerns, ineffective for overlapping labels, needs complete labeled data |
Power Set Label | Accounts for label correlations | Computational complexity issues, prone to overfitting |
Pruned Sets Method | Handles irregular labeling, operates quickly, considers label correlations | Relies on predictive confidence function, challenges with unlabeled data |
Ensembles of Pruned Sets (EPS) | Efficient predictive performance, parallel processing | Does not utilize unlabeled data |
C4.5 | Allows attribute selection, enhances learnability | Does not consider class correlations, cannot use unlabeled data |
AdaBoost-MR | Together with AdaBoost-MH, minimizes Hamming-loss errors and improves accuracy | Poor performance, does not use unlabeled data
ML-kNN | Enhances performance; works well with text and image data | Does not exploit unlabeled data
BP-MLL | Provides optimal generalization capabilities | High computational complexity during training, does not use unlabeled data |
CNN-HF | Considers correlations between data and classes | Reduces accuracy with unlabeled data |
Dataset | Number of Labels | Number of Features | Domain |
---|---|---|---|
Hotel | 7 | 19,000 | Text |
Medical | 7 | 9000 | Biology |
Movies | 7 | 12,000 | Textual |
Proteins | 7 | 10,000 | Textual |
Automobiles | 7 | 14,000 | Textual |
Emotions | 7 | 19,000 | Textual |
Birds | 7 | 8000 | Textual |
News | 7 | 14,000 | Media |
Aspect | Description | Benefits | Notable Features | Dataset Applications | Main Contribution | Direct Relevance |
---|---|---|---|---|---|---|
Optimization Approach | EDAO employs an ensemble methodology based on an optimization algorithm to optimize two objective functions dynamically. | - Generates accurate and diverse base learners | Dynamic optimization of objective functions | Multi-label classification tasks | Enhanced generalization efficiency of DNN models | Multi-objective optimization |
Integration of Accuracy and Diversity | EDAO integrates accuracy and diversity within the optimization process, ensuring the ensemble consists of precise and diverse learners. | - Captures subtle sentiment variations effectively | Accurate and diverse base learners | Sentiment analysis | Improved accuracy and variation in sentiment analysis | Ensembling techniques |
Improved Accuracy and Variation in Sentiment Analysis | EDAO enhances accuracy and variation in sentiment analysis, particularly in multi-label datasets. | - Provides more precise and comprehensive sentiment analysis results | Captures sentiment nuances and variations | Text classification | Enhanced sentiment analysis performance | Multi-label sentiment analysis |
Utilization of DenseNet-AO | EDAO leverages DenseNet-AO, a variant of DenseNet that targets diversity-related objectives, to capture a wider range of sentiment nuances. | - Increases the diversity of predictions, improving performance | DenseNet-AO for capturing sentiment variations | Sentiment analysis, image classification | Improved sentiment diversity in predictions | Deep learning architectures, sentiment analysis |
Superior Performance | EDAO consistently outperforms other benchmark schemes and individual learning methods regarding precision, recall, and accuracy. | - Enhances the generalization efficiency and prediction performance of multi-label classification tasks | Improved precision, recall, and accuracy | Various multi-label classification tasks | State-of-the-art performance on benchmark datasets | Multi-label classification, evaluation metrics |
Sensitivity Analysis | EDAO provides a sensitivity analysis to quantify uncertainty and variability, demonstrating its reliability in handling uncertain elements. | - Offers insights into the robustness and stability of the decision-making process | Quantifies uncertainty and variability in decision-making | Model evaluation and uncertainty analysis | Robustness and stability analysis of the proposed method | Uncertainty analysis, decision-making process |
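The accuracy-and-diversity integration described above can be expressed as a combined fitness of the kind a multi-objective ensembler might maximize. This is an interpretive sketch only: the additive weighting, the disagreement-based diversity proxy, and all names are assumptions of this sketch, not the paper’s exact objective.

```python
def ensemble_fitness(member_preds, y_true, diversity_weight=0.5):
    """Combined objective sketch: mean member accuracy plus weighted
    pairwise disagreement (a simple diversity proxy).

    member_preds: list of per-member 0/1 prediction lists over the
    same examples; y_true: the 0/1 ground-truth list."""
    n = len(y_true)
    # Accuracy term: average per-member agreement with the ground truth.
    accuracies = [sum(p == t for p, t in zip(preds, y_true)) / n
                  for preds in member_preds]
    mean_acc = sum(accuracies) / len(accuracies)
    # Diversity term: average pairwise disagreement between members.
    pairs, disagreement = 0, 0.0
    for i in range(len(member_preds)):
        for j in range(i + 1, len(member_preds)):
            disagreement += sum(a != b for a, b in
                                zip(member_preds[i], member_preds[j])) / n
            pairs += 1
    diversity = disagreement / pairs if pairs else 0.0
    return mean_acc + diversity_weight * diversity
```

An optimizer such as AO could then select ensemble members (or their hyperparameters) to maximize this score, trading individual accuracy against the complementary errors that make an ensemble effective.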
Dataset | # of Labels | # of Features | Domain |
---|---|---|---|
Hotels | 7 | 19,000 | Text |
Medical | 7 | 9000 | Biology |
Movies | 7 | 12,000 | Textual |
Proteins | 7 | 10,000 | Textual |
Automobiles | 7 | 14,000 | Textual |
Emotions | 7 | 19,000 | Textual |
Birds | 7 | 8000 | Textual |
News | 7 | 14,000 | Media |
Techniques | Datasets (%) | |||||||
---|---|---|---|---|---|---|---|---|
News (%) | Emotions (%) | Medical (%) | Birds (%) | Hotel (%) | Automobiles (%) | Movies (%) | Proteins (%) | |
DenseNet-AO | 92.27 | 94.22 | 90.23 | 92.24 | 94.66 | 94.23 | 93.84 | 90.77 |
NB | 80.9 | 72.34 | 80.53 | 61.34 | 62.34 | 67.34 | 82.09 | 58.34 |
ECC | 72.12 | 72.8 | 67.67 | 70.88 | 70.03 | 73.11 | 70.78 | 73.69 |
RAKEL | 77.12 | 77.8 | 72.67 | 75.88 | 75.03 | 78.11 | 75.78 | 78.69 |
CNN | 85.27 | 88.02 | 86.23 | 84.56 | 85.74 | 86.11 | 86.84 | 81.57 |
ML-RBF | 69.12 | 69.8 | 64.67 | 67.88 | 67.03 | 70.11 | 67.78 | 70.69 |
BERT | 88.67 | 79.56 | 84.78 | 87.45 | 88.21 | 89.56 | 90.12 | 85.32 |
LSTM | 79.23 | 74.32 | 76.45 | 78.32 | 78.45 | 79.67 | 81.56 | 77.89 |
Transformer | 87.54 | 81.45 | 79.56 | 76.45 | 77.89 | 79.34 | 82.67 | 75.67 |
GAT | 81.32 | 78.23 | 75.32 | 72.56 | 73.21 | 75.78 | 80.23 | 72.34 |
ResNet | 84.21 | 79.87 | 78.21 | 75.32 | 76.45 | 78.56 | 81.34 | 75.89 |
SVM | 78.95 | 75.34 | 72.45 | 70.34 | 71.56 | 73.89 | 77.56 | 70.67 |
Logistic Regression | 83.67 | 77.89 | 75.67 | 73.21 | 74.32 | 76.45 | 80.45 | 74.21 |
Random Forest | 86.32 | 80.56 | 77.32 | 76.56 | 77.45 | 79.23 | 83.21 | 77.56 |
Techniques | Datasets (%) | |||||||
---|---|---|---|---|---|---|---|---|
News (%) | Emotions (%) | Medical (%) | Birds (%) | Hotel (%) | Automobiles (%) | Movies (%) | Proteins (%) | |
DenseNet-AO | 96.34 | 92.34 | 93.47 | 95.56 | 96.66 | 97.44 | 96.34 | 92.54 |
ML-RBF | 88.34 | 88.34 | 91.47 | 89.66 | 90.01 | 89.46 | 88.34 | 91.28 |
RAKEL | 75.21 | 82.77 | 80.24 | 78.79 | 80.56 | 78.79 | 82.6 | 83.78 |
ECC | 67.21 | 74.77 | 72.24 | 70.79 | 72.56 | 70.79 | 74.6 | 75.78 |
CNN | 70.21 | 77.77 | 75.24 | 73.79 | 75.56 | 73.79 | 77.6 | 78.78 |
NB | 39.07 | 69.34 | 50.51 | 62.79 | 63.34 | 76.34 | 82.61 | 70.34 |
BERT | 84.56 | 82.44 | 87.56 | 85.23 | 88.12 | 86.21 | 83.78 | 89.12 |
LSTM | 79.12 | 77.89 | 80.67 | 81.34 | 82.77 | 80.24 | 79.66 | 79.66 |
Transformer | 87.34 | 85.12 | 88.34 | 86.21 | 89.56 | 87.12 | 86.67 | 88.34 |
GAT | 83.56 | 81.23 | 85.12 | 82.56 | 86.23 | 84.34 | 82.34 | 85.12 |
ResNet | 91.34 | 89.12 | 92.34 | 90.45 | 91.56 | 90.67 | 89.23 | 92.34 |
SVM | 88.12 | 85.56 | 89.56 | 87.12 | 88.34 | 87.56 | 86.78 | 89.56 |
Logistic Regression | 86.78 | 83.21 | 87.12 | 84.56 | 86.21 | 85.23 | 84.78 | 87.12 |
Random Forest | 89.23 | 87.34 | 90.01 | 88.67 | 89.78 | 88.12 | 87.89 | 90.01 |
Techniques | Datasets (%) | |||||||
---|---|---|---|---|---|---|---|---|
News (%) | Emotions (%) | Medical (%) | Birds (%) | Hotel (%) | Automobiles (%) | Movies (%) | Proteins (%) | |
DenseNet-AO | 96.81 | 91.38 | 94.15 | 95.5 | 92.34 | 94.44 | 97.67 | 92.22 |
RAKEL | 82.56 | 79.67 | 78.87 | 83.94 | 83.67 | 84.45 | 86.94 | 86.34 |
ECC | 77.56 | 74.67 | 73.87 | 78.94 | 78.67 | 79.45 | 81.94 | 81.34 |
ML-RBF | 74.56 | 71.67 | 70.87 | 75.94 | 75.67 | 76.45 | 78.94 | 78.34 |
CNN | 88.81 | 74.38 | 91.15 | 88.56 | 91.34 | 89.45 | 89.67 | 87.76 |
NB | 52.56 | 71.11 | 60.64 | 75.55 | 69.67 | 75.56 | 79.54 | 69.11 |
BERT | 90.22 | 75.56 | 87.45 | 86.77 | 89.12 | 88.67 | 92.33 | 83.45 |
LSTM | 86.12 | 72.34 | 81.34 | 80.55 | 84.56 | 82.33 | 88.22 | 77.67 |
Transformer | 93.45 | 78.45 | 90.34 | 89.12 | 92.45 | 91.56 | 94.15 | 88.89 |
GAT | 84.56 | 70.78 | 79.12 | 82.34 | 81.56 | 80.89 | 85.45 | 75.67 |
ResNet | 89.12 | 75.12 | 86.45 | 84.67 | 88.67 | 86.12 | 89.78 | 82.56 |
SVM | 82.45 | 68.56 | 77.56 | 76.45 | 80.45 | 78.56 | 81.67 | 73.22 |
Logistic Regression | 78.56 | 65.45 | 71.89 | 72.56 | 76.34 | 74.78 | 79.12 | 68.91 |
Random Forest | 89.34 | 78.34 | 85.67 | 85.22 | 88.12 | 87.45 | 88.89 | 84.56 |
Ref | Limitations of Existing Methods | Challenges Addressed by EDAO | How EDAO Addresses the Limitations | Related Benefits | Proposed Advancements Addressed by EDAO |
---|---|---|---|---|---|
[1,2,3,4] | Narrow focus on accuracy | Limited ability to capture sentiment variations and nuances | Integrates accuracy and diversity through an optimization algorithm | Improved sentiment analysis accuracy | Improved ensemble learning techniques |
[5] | Difficulty in handling multi-label datasets | Handling multiple sentiments or emotions associated with a given text | Leverages DenseNet-AO to capture subtle variations in sentiment across different texts | Effective handling of multi-label datasets | Advanced deep learning architectures |
[6] | Limited generalization efficiency | Inability to effectively generalize to new or unseen data. | Enhances generalization efficiency by dynamically optimizing accuracy and variety | Enhanced generalization efficiency | Novel optimization algorithms |
[7,8,9] | Lack of robustness and stability | Sensitivity to uncertain elements and variability in decision-making | Offers sensitivity analysis to assess reliability and stability | Improved robustness and stability | Comprehensive sensitivity analysis |
[10,11,13] | Computational complexity | High computational costs in transforming label spaces | Implements an efficient and scalable approach to label space transformation using the LP method | Efficient label space transformation | Efficient label space transformation methods |
[14,15,16,55,56] | Insufficient consideration of label dependencies | Inability to capture complex relationships between labels | Incorporates classifier chains to capture interdependencies between labels | Improved modeling of label dependencies | Enhanced modeling of label dependencies |
[17,18,19,20] | Lack of interpretability | Difficulty understanding and explaining the decision-making process | Introduces a sensitivity analysis to assess the reliability and stability of the decision-making process | Improved interpretability and explainability | Comprehensive sensitivity analysis |
[21,22,23,24] | Limited scalability | Inability to handle large-scale datasets efficiently | Utilizes scalable optimization algorithms and parallel processing techniques | Enhanced scalability and efficiency | Scalable optimization algorithms and parallel processing |
Techniques | Datasets | |||||||
---|---|---|---|---|---|---|---|---|
News | Emotions | Medical | Birds | Hotel | Automobiles | Movies | Proteins | |
DenseNet-AO | 30 | 20 | 35 | 25 | 30 | 28 | 32 | 22 |
RAKEL | 75 | 120 | 90 | 85 | 70 | 150 | 80 | 75 |
ECC | 180 | 75 | 85 | 70 | 200 | 80 | 85 | 75 |
ML-RBF | 70 | 180 | 75 | 80 | 70 | 75 | 70 | 65 |
CNN | 75 | 200 | 80 | 120 | 75 | 70 | 80 | 85 |
NB | 65 | 70 | 180 | 80 | 65 | 70 | 75 | 120 |
BERT | 80 | 75 | 85 | 200 | 80 | 85 | 150 | 80 |
LSTM | 120 | 75 | 80 | 85 | 70 | 75 | 200 | 75 |
Transformer | 85 | 80 | 70 | 150 | 85 | 80 | 70 | 80 |
GAT | 75 | 120 | 80 | 75 | 70 | 150 | 80 | 75 |
ResNet | 120 | 80 | 75 | 85 | 70 | 200 | 80 | 75 |
SVM | 75 | 80 | 200 | 75 | 80 | 75 | 70 | 120 |
Logistic Regression | 70 | 150 | 80 | 200 | 70 | 75 | 80 | 75 |
Random Forest | 85 | 80 | 120 | 75 | 85 | 70 | 75 | 200 |
Techniques | Datasets | |||||||
---|---|---|---|---|---|---|---|---|
News | Emotions | Medical | Birds | Hotel | Automobiles | Movies | Proteins | |
DenseNet-AO | 0.86 | 0.92 | 0.78 | 0.81 | 0.90 | 0.87 | 0.82 | 0.91 |
RAKEL | 0.70 | 0.64 | 0.72 | 0.68 | 0.71 | 0.76 | 0.69 | 0.63 |
ECC | 0.45 | 0.51 | 0.59 | 0.47 | 0.53 | 0.54 | 0.58 | 0.49 |
ML-RBF | 0.60 | 0.56 | 0.57 | 0.62 | 0.61 | 0.58 | 0.63 | 0.59 |
CNN | 0.78 | 0.82 | 0.75 | 0.77 | 0.81 | 0.80 | 0.79 | 0.84 |
NB | −0.32 | −0.29 | −0.35 | −0.29 | −0.26 | −0.31 | −0.28 | −0.34 |
BERT | 0.73 | 0.79 | 0.70 | 0.75 | 0.78 | 0.77 | 0.74 | 0.81 |
LSTM | 0.68 | 0.71 | 0.65 | 0.67 | 0.70 | 0.72 | 0.66 | 0.73 |
Transformer | 0.80 | 0.83 | 0.77 | 0.79 | 0.82 | 0.84 | 0.76 | 0.85 |
GAT | 0.55 | 0.58 | 0.52 | 0.54 | 0.57 | 0.59 | 0.51 | 0.61 |
ResNet | 0.72 | 0.76 | 0.69 | 0.71 | 0.75 | 0.74 | 0.68 | 0.77 |
SVM | 0.50 | 0.45 | 0.49 | 0.48 | 0.46 | 0.47 | 0.50 | 0.44 |
Logistic Regression | 0.65 | 0.67 | 0.63 | 0.64 | 0.66 | 0.69 | 0.62 | 0.68 |
Random Forest | 0.74 | 0.77 | 0.71 | 0.73 | 0.76 | 0.78 | 0.70 | 0.79 |
Technique | F-Value | p-Value |
---|---|---|
DenseNet-AO | 15.24 | 0.001 |
RAKEL | 4.56 | 0.028 |
ECC | 2.89 | 0.076 |
ML-RBF | 3.15 | 0.061 |
CNN | 6.72 | 0.012 |
NB | 1.05 | 0.422 |
BERT | 5.36 | 0.020 |
LSTM | 3.98 | 0.034 |
Transformer | 7.92 | 0.006 |
GAT | 2.25 | 0.122 |
ResNet | 4.86 | 0.023 |
SVM | 1.42 | 0.285 |
Logistic Regression | 3.54 | 0.045 |
Random Forest | 5.99 | 0.009 |
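The F- and p-values above appear to come from a one-way analysis of variance across datasets. As a sketch of the underlying computation (pure Python; treating each technique’s per-dataset scores as groups is this sketch’s assumption), the F statistic is the ratio of between-group to within-group mean squares; the p-value then follows from the F distribution with (k-1, N-k) degrees of freedom:

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic computed from scratch.

    Each argument is one group (list of numeric scores); the tabulated
    p-value would come from the F distribution with (k-1, N-k) d.o.f."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of each group around its mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within
```

For instance, `one_way_anova_f([1, 2, 3], [2, 3, 4])` yields F = 1.5, which with (1, 4) degrees of freedom would not be significant at the 0.05 level, mirroring how the table’s larger F values pair with smaller p-values.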
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Ayub, N.; Tayyaba; Hussain, S.; Ullah, S.S.; Iqbal, J. An Efficient Optimized DenseNet Model for Aspect-Based Multi-Label Classification. Algorithms 2023, 16, 548. https://doi.org/10.3390/a16120548