Article

Advanced Deep Learning Models for Improved IoT Network Monitoring Using Hybrid Optimization and MCDM Techniques

by Mays Qasim Jebur Al-Zaidawi * and Mesut Çevik
Department of Electrical-Electronics Engineering, Faculty of Engineering and Architecture, Altınbaş University, Istanbul 34000, Türkiye
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(3), 388; https://doi.org/10.3390/sym17030388
Submission received: 11 January 2025 / Revised: 29 January 2025 / Accepted: 12 February 2025 / Published: 4 March 2025
(This article belongs to the Section Computer)
Figure 1. The methodology phases.
Figure 2. Illustration of synthetic and real-world IoT network data characteristics.
Figure 3. Architecture of the Feedforward Neural Network (FFNN).
Figure 4. Architecture of CNN and pooling layers.
Figure 5. Architecture of the MLP.
Figure 6. Comparative confusion matrices for deep learning models and optimization techniques (FFNNs, CNNs, MLPs, HGWOPSO, HWCOAHHO) in IoT network monitoring: (A) training progress of the deep learning model using HGWOPSO and HWCOAHHO; (B) confusion matrix for the optimized deep learning model; (C) FFNN confusion matrix; (D) MLP confusion matrix; (E) CNN confusion matrix; (F) HGWOPSO confusion matrix; (G) HWCOAHHO confusion matrix.
Figure 7. Comprehensive confusion matrix comparison of deep learning models for IoT network monitoring using HGWOPSO and HWCOAHHO: (A) comparative evaluation of the models; (B) comparative confusion matrices.
Figure 8. Benchmark functions for the deep learning models optimized with HGWOPSO and HWCOAHHO.

Abstract

This study addresses the challenge of optimizing deep learning models for IoT network monitoring, focusing on achieving a symmetrical balance between scalability and computational efficiency, which is essential for real-time anomaly detection in dynamic networks. We propose two novel hybrid optimization methods—Hybrid Grey Wolf Optimization with Particle Swarm Optimization (HGWOPSO) and Hybrid World Cup Optimization with Harris Hawks Optimization (HWCOAHHO)—designed to symmetrically balance global exploration and local exploitation, thereby enhancing model training and adaptation in IoT environments. These methods leverage complementary search behaviors, where symmetry between global and local search processes enhances convergence speed and detection accuracy. The proposed approaches are validated using real-world IoT datasets, demonstrating significant improvements in anomaly detection accuracy, scalability, and adaptability compared to state-of-the-art techniques. Specifically, HGWOPSO combines the symmetrical hierarchy-driven leadership of Grey Wolves with the velocity updates of Particle Swarm Optimization, while HWCOAHHO synergizes the dynamic exploration strategies of Harris Hawks with the competition-driven optimization of the World Cup algorithm, ensuring balanced search and decision-making processes. Performance evaluation using benchmark functions and real-world IoT network data highlights superior accuracy, precision, recall, and F1 score compared to traditional methods. To further enhance decision-making, a Multi-Criteria Decision-Making (MCDM) framework incorporating the Analytic Hierarchy Process (AHP) and TOPSIS is employed to symmetrically evaluate and rank the proposed methods. Results indicate that HWCOAHHO achieves the best balance between accuracy and precision, followed closely by HGWOPSO, while traditional methods like FFNNs and MLPs show lower effectiveness in real-time anomaly detection. The symmetry-driven approach of these hybrid algorithms ensures robust, adaptive, and scalable monitoring solutions for IoT networks characterized by dynamic traffic patterns and evolving anomalies, thus ensuring real-time network stability and data integrity. The findings have substantial implications for smart cities, industrial automation, and healthcare IoT applications, where symmetrical optimization between detection performance and computational efficiency is crucial for ensuring optimal and reliable network monitoring. This work lays the groundwork for further research on hybrid optimization techniques and deep learning, emphasizing the role of symmetry in enhancing the efficiency and resilience of IoT network monitoring systems.

1. Introduction

The Internet of Things (IoT) has fundamentally changed how industries operate and interact, with rapidly growing numbers of connected devices sharing information over the internet. IoT applications are becoming indispensable in major sectors such as healthcare, smart cities, and industrial automation, where they deliver substantial value. Widespread adoption has brought unmatched levels of efficiency, real-time data analysis, and enhanced decision-making. Alongside these advantages, however, IoT networks face challenges including congestion, latency, bandwidth limits, and security vulnerabilities. These challenges degrade the performance and reliability of IoT systems and, at the same time, create serious risks of data-integrity loss, device malfunction, and network breaches. As the scale and complexity of the IoT ecosystem increase, robust monitoring mechanisms and timely detection of anomalous behavior become essential [1,2].
Traditional machine learning methods used for IoT network monitoring and anomaly detection include decision trees (DTs), Support Vector Machines (SVMs), and Naive Bayes. While effective in certain applications, these techniques have significant limitations in IoT environments. IoT traffic evolves continuously, with thousands of devices generating large volumes of real-time data streams whose dynamic, nonlinear behavior is difficult for such models to handle. As a result, classical models often fail to capture the complexity and nonlinearity of IoT data, leading to reduced accuracy and poor performance on the unique characteristics of IoT networks. This creates a need for more advanced approaches that can adapt to the peculiarities of IoT traffic and support efficient, timely anomaly detection. The proposed hybrid optimization methods, HGWOPSO and HWCOAHHO, significantly enhance the performance of deep learning models for IoT network monitoring, achieving high accuracy, scalability, and adaptability. These findings are particularly relevant for applications such as healthcare, smart cities, and industrial automation, where real-time anomaly detection is critical. Future research will focus on refining the models for edge and fog computing environments, incorporating more diverse datasets, and exploring additional decision-making techniques to further improve their practical applicability. Feedforward Neural Networks (FFNNs), Convolutional Neural Networks (CNNs), and Multilayer Perceptrons (MLPs) have shown considerable potential in IoT network traffic monitoring. CNNs are particularly effective at representing the spatial hierarchies present in IoT traffic features. MLPs are suited to modeling nonlinear relationships, making them well placed to detect complex dependencies that traditional models may miss. FFNNs, with their simpler architecture, offer computationally efficient solutions for comparatively simple IoT data streams while still providing strong performance.
This paper investigates enhanced IoT network monitoring using the advanced deep learning models FFNNs, CNNs, and MLPs, improving anomaly detection and network performance forecasting with real-time monitoring. We further compare these models across different IoT-related tasks to demonstrate their superiority over traditional machine learning techniques. The work also concentrates on optimizing model parameters through two hybrid methods, Hybrid Grey Wolf Optimization with Particle Swarm Optimization (HGWOPSO) and Hybrid World Cup Optimization with Harris Hawks Optimization (HWCOAHHO), with a view to enhancing the efficiency and accuracy of deep learning models in IoT environments. The results confirm that these deep learning models improve the accuracy and precision of anomaly detection and thereby contribute to more robust and efficient monitoring in IoT system design. This provides increased knowledge in the field and practical solutions for building secure, high-performance networks that can meet the growing demands of diverse IoT device ecosystems [3,4]. Two questions frame the importance of our topic:
What roles do deep learning models play in enhancing the real-time monitoring of IoT networks?
Deep learning models are central to advancing real-time monitoring in IoT networks because they can manage the complex and dynamic behavior inherent in IoT environments. An IoT network generates continuous data streams of vast volume, high dimensionality, and time sensitivity, which often overwhelm traditional machine learning models that cannot capture the nonlinear relationships and time-evolving patterns characteristic of IoT traffic. Deep learning models such as FFNNs, CNNs, and MLPs offer substantial advantages in anomaly detection, performance forecasting, and network optimization because they can learn complex patterns and produce accurate, scalable predictions. The architecture of CNNs is particularly well suited to real-time monitoring because of their ability to automatically detect spatial hierarchies in data, which is crucial for recognizing localized anomalies in IoT traffic such as unexpected spikes in latency or bandwidth usage. Such anomalies usually indicate a network problem, congestion, or performance degradation. By applying convolution filters, CNNs can detect and respond to these features quickly, which matters in real-time anomaly detection where timely identification of network-level problems prevents performance bottlenecks and security breaches. MLPs, in turn, contribute mainly by modeling complex, nonlinear relationships among IoT metrics; for example, they can expose dependencies among related performance indicators such as packet loss, latency, and throughput. These relationships are hard for traditional models to establish, whereas MLPs excel at finding such correlations and can therefore predict upcoming issues with much higher accuracy. FFNNs, with their simpler architecture, provide an efficient and computationally lighter solution for monitoring less complex IoT data streams. They are particularly useful in real-time applications that require rapid, reliable decisions based on well-predictable patterns or sharp changes in network resource utilization.
Beyond core anomaly detection, deep learning models improve IoT network performance by predicting upcoming networking problems before their effects materialize. For instance, a model can predict an increase in dropped packets or reduced throughput, so that proactive actions such as increasing bandwidth or adjusting device-level performance optimizers can be taken. Importantly, this reduces false positives and false negatives, so that network administrators receive only genuine alerts. A further advantage of deep learning models for IoT network monitoring is their adaptability to dynamic change: as IoT networks grow and the generated data exhibits new patterns and behaviors, the models can continue to learn from the additional data. Techniques such as data augmentation, transfer learning, and fine-tuning further enhance the usability of the models in dynamic environments, keeping them robust even as network conditions change. The integration of multi-criteria decision-making (MCDM) methods, such as AHP and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), enhances the decision-making of deep learning models in IoT network monitoring. AHP prioritizes the important criteria by assigning each criterion a weight reflecting its relative importance. TOPSIS then ranks surveillance tactics or device configurations by their distance to the ideal solution, suggesting the options that bring the system closest to its goals. This integration refines the optimization process and ensures that deep learning models address the most impactful aspects of IoT network monitoring as a balanced, multi-objective problem. Ultimately, deep learning models improve the detection and prediction of network anomalies and thereby optimize IoT network performance in real time. Combining the strengths of CNNs, MLPs, and FFNNs with MCDM techniques such as AHP and TOPSIS makes IoT network monitoring systems more effective, agile, and responsive to the rapidly changing demands of modern IoT ecosystems [5,6].
What do MCDM methods such as AHP and TOPSIS contribute to deep learning models in enhancing the performance and decision-making of IoT network monitoring systems?
MCDM methods such as AHP and TOPSIS play an important role in IoT network monitoring by enhancing the performance and decision-making of deep learning models. These methods address the intrinsic complexities of IoT environments, where dynamic traffic patterns, nonlinear dependencies, and competing performance objectives such as latency, throughput, and resource usage have to be balanced and prioritized. MCDM techniques provide a structured approach for systematically evaluating and weighting these critical factors so that deep learning models can focus on the most relevant criteria for real-time optimization and anomaly detection.
AHP contributes through pairwise comparisons, which make it straightforward to prioritize key performance indicators. A deep learning-based model can then recognize which criterion bears the highest importance for a given setup; in critical IoT applications such as healthcare and industrial monitoring, for instance, latency is ranked above throughput in the analytic hierarchy so that instant responses can be realized. This prioritization helps the deep learning model focus on the metrics that matter most for the desired outcomes and make fast, accurate decisions in real time. TOPSIS complements AHP by providing a decision-making framework that ranks alternatives according to their closeness to an ideal solution. In the context of IoT network monitoring, TOPSIS can rank different strategies with respect to their performance across the weighted criteria. This ensures that the best solutions are selected based on an overall assessment of multiple factors, reducing the trade-offs between conflicting objectives such as minimizing latency while maximizing throughput. By applying TOPSIS, the deep learning models select the monitoring or optimization strategy that best suits the operational objectives assigned to the IoT network. Traditional decision-making methods cannot provide this precision and adaptiveness in dynamic IoT environments where conditions change suddenly. AHP and TOPSIS allow deep learning models to balance multiple conflicting objectives while adapting to fluctuating network conditions: these methods let the model dynamically adjust the weights of different criteria as the network evolves or, for example, as new devices are added, so that decision-making always reflects the current state of the network. Furthermore, embedding MCDM methods into deep learning models improves the robustness and scalability of IoT network monitoring systems. AHP and TOPSIS enable deep learning models to continuously reconsider and reorder their priorities whenever network conditions change, for example under high traffic or emerging attacks. This adaptation lets the system handle the scalability and complexity of IoT networks without sacrificing performance or reliability. In large-scale IoT environments such as smart cities or healthcare systems, these methods ensure that available resources are used efficiently to monitor different types of devices and traffic loads, so that issues are detected and resolved quickly. In short, integrating the MCDM methods AHP and TOPSIS with deep learning models strengthens decision-making, adaptability, and optimization in IoT network monitoring. By weighting and balancing multiple criteria, they allow deep learning models to deliver actionable insights and efficient resource allocation under time-varying network conditions and competing performance objectives. This makes them useful tools for improving the efficiency, security, and scalability of IoT networks across applications including smart cities, industrial automation, and healthcare systems [7,8,9,10,11].
The paper is organized into five sections. The second section gives a general overview of current challenges in IoT network monitoring and the optimization techniques needed to enhance system performance, especially in dynamic environments. The third section presents the proposed methodology, which integrates two hybrid optimization algorithms, HGWOPSO and HWCOAHHO, for training and fine-tuning deep learning models to enhance IoT network monitoring, and applies MCDM techniques to select the best optimization algorithm based on defined performance metrics. Section four presents and discusses the results, including performance evaluations of the optimization algorithms on benchmark functions and the application of MCDM techniques in selecting the best model. The last section concludes the findings of the study, outlines the limitations, and suggests directions for future research to enhance the applicability and efficiency of these optimization techniques in real-world IoT network monitoring scenarios.

2. Related Works

This section reviews key advances in deep learning models and optimization techniques applied to IoT network monitoring. The references in Table 1 are grouped according to the type of model developed (DTs, SVMs, MLPs, CNNs, and FFNNs), the nature of the optimization or machine learning used (traditional machine learning versus deep learning), and the criteria considered, including accuracy, precision, recall, and computational efficiency. The table critically reviews the objectives of each work and the results obtained, highlighting the strengths and weaknesses of different approaches to anomaly detection, data processing, and traffic prediction in IoT networks. The review synthesizes findings from several studies and offers a comparative perspective on how the models perform in real-world IoT applications, with a focus on issues such as scalability, interpretability, and adaptability to dynamic network environments. Overall, the table underlines the substantial gains in performance, reliability, and efficiency achieved in IoT network monitoring systems, especially through the integration of deep learning models with optimization algorithms.
The related works reviewed in this study give good insight into how IoT network monitoring methodologies have evolved from conventional machine learning to advanced deep learning. Mehmood et al. emphasized the simplicity and interpretability of DTs in IoT monitoring tasks; however, their findings showed considerable limitations, including overfitting and poor scalability, that restrict the model's effectiveness in large-scale, dynamic IoT environments. Similarly, the survey by D'Alconzo et al. [13] analyzed research using DTs and illustrated that, though useful and easy to implement, decision trees showed only median performance, and more scalable analysis techniques are needed given increasingly complex IoT traffic. Sheng et al. [14] and Z. M. Fadlullah et al. [15] explored the efficiency of Support Vector Machines as a more robust alternative to decision trees for IoT monitoring. Their research showed that SVMs handle nonlinear dependencies well and give higher accuracy than simpler models; at the same time, they noted that the high computational cost and slower processing time of SVMs make them less suitable for real-time IoT applications where rapid anomaly detection is crucial. Tang et al. [16] studied Naive Bayes (NB) in IoT network monitoring and highlighted its efficiency and simplicity. That model, however, assumes the features are independent, leading to poor anomaly detection performance in highly connected IoT environments. The results clearly indicated the need for more advanced models able to capture the complex relationships inherent in IoT data.
The introduction of deep learning models brought a massive improvement in performance. Tang et al. [17] studied CNNs and showed their ability to capture spatial hierarchies in IoT network traffic. Kato et al. [18] showed that such models yield high accuracy and therefore higher anomaly detection rates than traditional methods, especially on large and complex data. However, they noted that CNNs require substantial computational resources and large volumes of data to train effectively, which can be a limitation in resource-constrained IoT deployments. Tang et al. explored Multilayer Perceptrons, emphasizing their scalability and ability to model nonlinear relationships among IoT metrics. Their results showed that MLPs achieve high detection accuracy and are suitable for anomaly detection problems, while pointing out that regularization techniques are necessary to avoid overfitting. Patel and Prajapati extended this work to show how MLPs can improve resource allocation in IoT networks, further underlining their value for performance optimization. The works of Ghazavi and Liao [21] and Rish [22] discussed the utility of Feedforward Neural Networks, emphasizing that FFNNs are relatively simple and computationally efficient and are therefore appropriate for simple patterns in IoT traffic. On the other hand, FFNNs performed worse on complex data structures, which suggests their use is more relevant in specific monitoring tasks than in general anomaly detection.

2.1. Critical Analysis and Research Gaps

Despite significant progress in developing efficient detection algorithms and performance optimization techniques, including those integrated with sophisticated optimization algorithms such as HGWOPSO and HWCOAHHO, important gaps and challenges remain in IoT network monitoring frameworks based on deep learning. Deep learning models such as CNNs, MLPs, and FFNNs can improve detection accuracy and scalability, yet most existing research largely ignores the multidimensional trade-offs that govern IoT network performance. IoT environments are characterized by dynamic traffic, high-dimensional data, and competing performance criteria including latency, throughput, anomaly detection accuracy, and resource efficiency. Although recent approaches such as HGWOPSO and HWCOAHHO show promise in handling these challenges, how such models balance the interdependence of performance metrics is still underexplored. In addition, the inadequate application of MCDM methods in the optimization of deep learning models results in suboptimal performance in real-world settings: the lack of structured frameworks for weighting criteria leads to arbitrary prioritization and limits model performance in dynamic environments. Moreover, the trade-off between improved anomaly detection and manageable computational cost is seldom discussed, which makes models highly successful in ideal situations while failing to meet the demanding requirements of real IoT networks. The integration of deep learning models with decision-making frameworks such as AHP and TOPSIS, while offering a systematic approach to evaluating alternative strategies and monitoring models, has also been explored only rarely; latency- and throughput-aware methods could further improve decision-making in IoT monitoring systems. Most deep learning models are computationally intensive, which poses a major challenge for real-time monitoring in resource-constrained IoT settings, and adaptive mechanisms are needed so that models are continuously updated as traffic patterns, device behaviors, and threats evolve. Last but not least, current solutions often lack real-world validation with variable, noisy data and unpredictable conditions, which calls for robust and scalable models that can deal with such complexities across different IoT scenarios.

2.2. Contributions and Novelty

The main contributions and novelty of this study are to advance IoT network monitoring through deep learning models integrated with multi-criteria decision-making methods and optimization algorithms. These contributions are summarized below:
Proposed a deep learning framework using FFNNs, CNNs, and MLPs to enhance anomaly detection and real-time monitoring in IoT networks.
Integrated AHP for criteria weighting and TOPSIS for ranking, enabling balanced optimization across IoT performance metrics such as latency, throughput, and anomaly detection accuracy.
Designed lightweight, computationally efficient deep learning models (using HGWOPSO and HWCOAHHO optimization algorithms) to address the challenges of resource-constrained IoT environments.
Validated the framework using synthetic and real-world datasets (e.g., IoT-23), demonstrating superior adaptability, scalability, and detection accuracy in dynamic network conditions.
Explored and resolved trade-offs between performance metrics using MCDM techniques, ensuring a well-balanced and adaptive monitoring system.
Contributed to the reliability, efficiency, and security of IoT networks by improving anomaly detection precision and ensuring robust operation in heterogeneous, evolving environments.

3. Methodology

The methodology comprises three phases: data collection and pre-processing; model training, optimization, and integration; and performance evaluation, testing, and decision-making. Each phase is broken down into its constituent steps, and for each step the corresponding mathematical models, derivations, and pseudo-code for IoT network monitoring with deep learning and MCDM techniques are presented, as shown in Figure 1.

3.1. Phase One: Data Collection and Preprocessing

The first part of the methodology focuses on collecting, cleaning, and preparing the data from an IoT network to ensure deep learning models are trained on relevant quality data.
Data Collection: The data used in this study come from both real-world IoT datasets (e.g., IoT-23) and synthetic datasets. We used the IoT-23 dataset, containing 10 million records of labeled network traffic from real-world IoT devices of various types, including smart cameras, home automation systems, and industrial IoT devices, supplemented with synthetic data augmented with anomalies to simulate real-world network behavior [23,24,25]. The rapid proliferation of IoT devices has introduced significant challenges for real-time anomaly detection and resource-efficient monitoring. The features extracted from these datasets include metrics such as packet length, latency, throughput, error rates, device status, and resource usage, as shown in Figure 2.
Data Pre-processing: Data cleaning and normalization are crucial to ensure the quality of the data before training. The pre-processing steps are listed below (a short code sketch follows the list) [26,27,28,29]:
  • Noise Removal: Network data can contain noise due to various reasons such as transmission errors, device malfunctions, or incorrect data entry. Outlier detection was performed using interquartile range (IQR) filtering. All data were normalized to a [0, 1] range for model input, and Z-score normalization was applied to remove outliers as shown in Equation (1).
    Z = \frac{X - \mu}{\sigma}
    where X is the data point, μ is the mean, and finally, σ is the standard deviation.
  • Feature Selection and Extraction: Irrelevant or redundant features are removed to make learning efficient. Latency, packet loss, and throughput are extracted as primary variables for anomaly detection.
  • Normalization: Since IoT network data usually consist of features with widely different scales, the data are normalized so that all values fall within the same range, preferably between 0 and 1, as depicted in Equation (2).
    X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}
  • Data Augmentation: Synthetic anomalies are incorporated to improve the performance of the model; for instance, noise is added to latency or throughput measures to model congestion or device failure [30].
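To make these pre-processing steps concrete, the following is a minimal Python/pandas sketch of the pipeline described above (IQR-based outlier filtering, Z-score standardization per Equation (1), min-max normalization per Equation (2), and simple noise-based augmentation). The column names, the IQR factor of 1.5, and the noise level are illustrative assumptions, not values taken from the IoT-23 schema or from this study's experiments.

import numpy as np
import pandas as pd

def preprocess(df, feature_cols=("latency", "packet_loss", "throughput")):
    """Illustrative IoT-traffic preprocessing; feature column names are assumed."""
    X = df.loc[:, list(feature_cols)].astype(float)

    # 1. Outlier removal with the interquartile-range (IQR) rule.
    q1, q3 = X.quantile(0.25), X.quantile(0.75)
    iqr = q3 - q1
    mask = ((X >= q1 - 1.5 * iqr) & (X <= q3 + 1.5 * iqr)).all(axis=1)
    X = X[mask]

    # 2. Z-score standardization, Equation (1): Z = (X - mu) / sigma.
    Z = (X - X.mean()) / X.std(ddof=0)

    # 3. Min-max normalization to [0, 1], Equation (2).
    X_norm = (X - X.min()) / (X.max() - X.min())

    # 4. Simple augmentation: Gaussian noise mimicking congestion or device failure.
    X_aug = X_norm + np.random.normal(0.0, 0.05, size=X_norm.shape)
    return X_norm, Z, X_aug.clip(0.0, 1.0)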

3.2. Phase Two: Model Training, Optimization, and Integration

The deep learning models (FFNNs, CNNs, and MLPs) are selected, trained, and optimized in this phase using advanced algorithms such as HGWOPSO and HWCOAHHO. These optimization techniques fine-tune model hyperparameters such as the learning rate, the number of layers, and the number of neurons to improve performance in IoT network monitoring. The models are also tested on benchmark functions and integrated with MCDM methods for balanced optimization of performance metrics such as accuracy, precision, recall, and F1 score [31,32,33,34,35]. The optimized models are then tested against real IoT network traffic data to confirm that they perform well in anomaly detection, so that only models meeting the required level of performance for practical real-time monitoring and decision-making are selected.

3.2.1. Deep Learning Model Selection

FFNNs can be used for simple anomaly detection, where the input layer represents the features such as packet loss and throughput, hidden layers represent learned patterns, and the output layer provides a prediction about the likelihood of an anomaly. This is mathematically described by [36,37,38,39,40] as illustrated in Equation (3):
y = f(Wx + b)
where W is the weight matrix, x is the input vector, b is the bias, and f is the activation function (e.g., ReLU or Sigmoid), as depicted in Figure 3.
CNNs are particularly effective for spatial anomaly detection, such as identifying abnormal patterns in network traffic over time. In CNNs, the data is passed through several convolutional layers that detect local patterns, followed by pooling layers to reduce dimensionality. The CNN operation for a single layer is defined as in Equation (4) [41,42,43,44]:
f(x) = \sum_{i,j} w_{i,j} \cdot x_{i,j} + b
where x_{i,j} is the input feature matrix, w_{i,j} is the kernel filter, and b is the bias term, as described in Figure 4.
MLPs are used for modeling complex, nonlinear relationships between IoT metrics. The network comprises multiple fully connected layers, each performing a weighted sum and applying an activation function (e.g., ReLU), as described in Equation (5) [45,46,47,48,49].
y = \sigma(Wx + b)
where \sigma is the activation function, typically ReLU or Sigmoid, as exhibited in Figure 5.
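As a concrete illustration of the architectures behind Equations (3)–(5), the sketch below assembles a small fully connected network (FFNN/MLP) and a 1-D CNN in Keras. The layer sizes, input dimensions, and single-output anomaly score are illustrative assumptions rather than the exact architectures evaluated in this study.

import tensorflow as tf
from tensorflow.keras import layers, models

n_features, seq_len = 6, 32          # assumed input dimensions, for illustration only

# FFNN / MLP: stacked dense layers computing y = f(Wx + b), Equations (3) and (5).
mlp = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # anomaly probability
])

# 1-D CNN: convolution plus pooling over a window of traffic measurements, Equation (4).
cnn = models.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.Conv1D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

mlp.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])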

3.2.2. Optimization with HGWOPSO and HWCOAHHO

During this stage, we further enhance the performance of the deep learning models with two advanced hybrid optimization algorithms, HGWOPSO and HWCOAHHO. These techniques fine-tune the most important hyperparameters of the deep learning models, such as the learning rate, the number of layers, and the neuron counts, to achieve the best possible performance, accuracy, and adaptability in the dynamic and resource-constrained environments typical of IoT network monitoring.

HGWOPSO

HGWOPSO is a hybrid approach that combines the strengths of GWO and PSO. GWO is inspired by the social hunting strategy of grey wolves, in which an alpha, beta, and delta wolf lead the pack while tracking prey. PSO, in turn, is inspired by the social movement of particles within the solution space, whose velocities and positions are continuously updated according to each particle's own experience and that of its neighbors. By combining the global search of GWO over the solution space with the local refinement capability of PSO, HGWOPSO achieves a proper balance between exploration and exploitation during optimization. This hybrid approach enables efficient fine-tuning of hyperparameters, an essential step in optimizing the performance of deep learning models in complex IoT environments. The gravitational search (GSO) component used alongside GWO calculates the gravitational force between particles using Equation (6) [50,51]:
F = G \, \frac{m_1 m_2}{r^2}
In HGWOPSO, this gravitational force models the attraction between two particles representing the model parameters being optimized: F is the gravitational force between the particles; m_1 and m_2 are the masses of the particles, which in our case correspond to the model parameters being optimized (for example, weights or biases); r is the distance between the particles, indicating the difference between their parameter values; and G is the gravitational constant. The force guides the optimization process by simulating the attraction of the particles toward each other, moving the parameters toward the optimal solution by minimizing the distance between them and thereby improving the performance of the model. The PSO update rules for the velocity and position of the particles are given in Equations (7) and (8), respectively [52,53]:
v_i(t+1) = w \, v_i(t) + c_1 r_1 \left( p_{best} - x_i(t) \right) + c_2 r_2 \left( g_{best} - x_i(t) \right),
x_i(t+1) = x_i(t) + v_i(t+1)
In the process of finding the optimal solution, each particle i in the swarm iteratively updates its velocity and position. The velocity of particle i at time t, v_i(t), determines how far the particle will move in the next iteration. The position of particle i, x_i(t), is the current solution that the particle is exploring in the search space. Each particle has a personal best position, p_best, representing the best solution that particle has found so far. The global best position, g_best, is the best solution found by any particle in the swarm. The PSO is driven by the velocity update equation, which contains two learning factors: the cognitive learning factor c_1, driving a particle toward its own best position, and the social learning factor c_2, driving it toward the global best position. The random numbers r_1 and r_2, generated between 0 and 1, add stochastic behavior to the update, enabling the particles to escape their current region while exploring the search space. The pseudo-code of the proposed HGWOPSO is provided in Algorithm 1 (a simplified code sketch follows the algorithm). As illustrated, the algorithm begins by initializing the parameters and defining the objective function. Following this, Algorithm 2 outlines the specific steps for the optimization process, detailing how the particles update their positions and velocities to converge towards the optimal solution.
Algorithm 1: proposed HGWOPSO pseudo-code
  • Inputs: Population size N, maximum number of iterations T, inertia weight w, acceleration coefficients c_1 and c_2.
  • Outputs: Best solution (X_best) and its fitness value.
  • Initialize the positions and velocities of the particles in the population (X_i, V_i) for i = 1, 2, …, N.
  • Initialize the position of each wolf (Wolves_i) in the population.
  • Set the initial personal best positions of the particles (P_best) and of the wolves (Wolves_best).
  • Set the global best position (G_best) from the particle bests (P_best) and the wolf bests (Wolves_best).
  • While (t < T) do
  • For each particle (i = 1, 2, …, N) do
  • Calculate the fitness value of particle i: Fitness(i)
  • Update the personal best of the particle: If Fitness(i) < Fitness(P_best) then P_best = X_i
  • Update the velocity of particle i:
    V_i(t+1) = w·V_i(t) + c_1·rand·(P_best − X_i) + c_2·rand·(G_best − X_i)
  • Update the position of particle i:
    X_i(t+1) = X_i(t) + V_i(t+1)
  • For each wolf (i = 1, 2, …, N) do
  • Calculate the fitness value of wolf i: Fitness(i)
  • Update the personal best of the wolf:
    If Fitness(i) < Fitness(Wolves_best) then Wolves_best = Wolves_i
  • Update the position of the wolf:
    Wolves_i = Wolves_i + rand·(Wolves_best − Wolves_i)
  • Update the global best solution (G_best):
    If Fitness(P_best) < Fitness(G_best) then G_best = P_best
    If Fitness(Wolves_best) < Fitness(G_best) then G_best = Wolves_best
  • End While
  • Return the best solution (X_best) and its fitness value
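A compact Python sketch of the hybrid loop in Algorithm 1 is shown below. It alternates PSO velocity and position updates (Equations (7) and (8)) with a simplified wolf move toward the current leader, sharing one global best across both sub-populations; the objective function, bounds, and coefficient values are assumptions chosen only for illustration.

import numpy as np

def hgwopso(objective, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lb=-5.0, ub=5.0):
    """Simplified HGWOPSO: PSO particles plus a wolf pack sharing one global best."""
    rng = np.random.default_rng(0)
    X = rng.uniform(lb, ub, (n, dim)); V = np.zeros((n, dim))
    wolves = rng.uniform(lb, ub, (n, dim))
    pbest = X.copy(); pbest_f = np.apply_along_axis(objective, 1, X)
    wbest = wolves[np.argmin(np.apply_along_axis(objective, 1, wolves))].copy()
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        # PSO part: velocity and position updates, Equations (7) and (8).
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)
        f = np.apply_along_axis(objective, 1, X)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]

        # GWO-style part: wolves move toward the best wolf found so far.
        wolves = np.clip(wolves + rng.random((n, dim)) * (wbest - wolves), lb, ub)
        wf = np.apply_along_axis(objective, 1, wolves)
        if wf.min() < objective(wbest):
            wbest = wolves[np.argmin(wf)].copy()

        # Global best over both sub-populations.
        for cand in (pbest[np.argmin(pbest_f)], wbest):
            if objective(cand) < objective(gbest):
                gbest = cand.copy()
    return gbest, objective(gbest)

# Example: minimize the Sphere function in 10 dimensions.
best, best_f = hgwopso(lambda x: float(np.sum(x ** 2)), dim=10)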

HWCOAHHO

HWCOAHHO is a hybrid optimization approach that combines WCOA and HHO. The HHO algorithm is inspired by the cooperative hunting behavior of Harris's hawks, especially their coordinated attacks and strategic planning in capturing prey. This collaborative strategy is well suited to solving complex optimization problems in which multiple solutions (hawks) cooperate toward the best result. While HHO draws on this cooperative behavior, WCOA draws inspiration from the competitive structure of World Cup tournaments, in which the best teams are selected after a series of knockout rounds. The combination of these two algorithms therefore forms an effective hybrid approach that unites cooperative and competitive strategies, making it highly suitable for optimizing deep learning models in IoT network monitoring. HHO models the surprise-pounce strategy of Harris's hawks through an exploration phase, when the hawks search for prey, and an exploitation phase, when the hawks attack the prey; the corresponding position updates are given in Equations (9) and (10) [54,55]:
X(t+1) = X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right|,
X(t+1) = X_{rabbit}(t) - r_3 \left( LB + r_4 (UB - LB) \right),
In the HHO algorithm, the position of a hawk at time t+1, X(t+1), is updated based on its current position and the position of the prey, X_rabbit(t). The movement of the hawk is influenced both by a random position X_rand(t) and by the position of the prey. The terms r_1, r_2, r_3, and r_4 are random factors that introduce randomness into the exploration process, helping a hawk scan the solution space efficiently. The update is kept within the lower and upper bounds of the search space, LB and UB, respectively. The new position thus combines the prey position with random perturbations, so that the hawk can either further exploit the neighborhood of the current best solution or search more widely for potentially better solutions. This process balances exploration and exploitation until convergence toward the optimum is reached. The WCOA component of HWCOAHHO applies a competitive strategy modeled on the knockout rounds of World Cup tournaments: only the better-performing solutions in each round advance, forcing convergence toward an optimal solution. This competition pushes the best solutions forward while eliminating poorer ones according to their performance scores, progressively refining the solutions during optimization. The pseudo-code of the proposed HWCOAHHO is given in Algorithm 2 below, followed by a simplified code sketch.
Algorithm 2: proposed HWCOAHHO algorithm pseudo-code
  • Begin
  • Step 1: Initialize the candidate solutions X_i (i = 1, 2, …, N), where N is the population size.
    Initialize positions, velocities, and fitness values for all candidates.
    Initialize the global best solution (G_best) and the best fitness value.
  • Step 2: Set the maximum number of iterations (T) and the other optimization parameters:
    - Population size (N)
    - Inertia weight (w), acceleration coefficients (c_1, c_2)
    - Random factors (r_1, r_2, etc.)
    - Escape energy (E) for the Harris Hawks exploration-exploitation balance
  • Step 3: While (t < T) do
  • Step 4: Evaluate the fitness of all candidate solutions X_i.
  • Step 5: Apply WCOA:
    - Organize the population into “competitions” (a round-robin format or tournament).
    - Evaluate the fitness of each candidate solution in each competition.
    - Select the best candidates based on the fitness scores.
    - Retain the best solutions and discard the less-fit solutions (elimination process).
    - Update the current population with the winners of each competition.
  • Step 6: Apply HHO to balance exploration and exploitation:
    - For each candidate solution (X_i) in the population:
    - Calculate the fitness value of X_i.
    - Determine whether the hawk is in the exploration or exploitation phase based on the energy (E):
    - Exploration phase (|E| ≥ 1): hawks explore the search space using random jumps or flights; update positions using the exploration strategy.
    - Exploitation phase (|E| < 1): hawks exploit the best solutions in the current vicinity; use soft or hard besiege techniques to refine the solution by moving towards better solutions.
    - Adjust the position of X_i accordingly.
    - Update the personal best (P_best) of each candidate if a better solution is found.
  • Step 7: Apply the velocity update:
    - For each candidate X_i, update the velocity using the PSO-based velocity update formula:
      V_i(t+1) = w·V_i(t) + c_1·rand·(P_best − X_i) + c_2·rand·(G_best − X_i)
    - Update the position based on the new velocity:
      X_i(t+1) = X_i(t) + V_i(t+1)
  • Step 8: Update the global best solution (G_best) if necessary.
  • Step 9: If the stopping criteria (e.g., maximum iterations or convergence) are met, terminate the loop.
  • Step 10: Return the best solution found (G_best) and its fitness value.
End.
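The sketch below illustrates, in simplified form, the two mechanisms combined in Algorithm 2: a World-Cup-style elimination round that replaces the losers of pairwise matches and an HHO-style update that switches between exploration (in the spirit of Equation (9)) and a soft-besiege move toward the current best solution depending on the escape energy E. It is a schematic interpretation under assumed parameter settings, not the exact implementation used in the experiments.

import numpy as np

def hwcoahho(objective, dim, n=30, iters=200, lb=-5.0, ub=5.0):
    """Simplified HWCOAHHO: tournament elimination (WCOA) + Harris Hawks updates."""
    rng = np.random.default_rng(1)
    X = rng.uniform(lb, ub, (n, dim))

    for t in range(iters):
        f = np.apply_along_axis(objective, 1, X)
        rabbit = X[np.argmin(f)].copy()          # current best solution (the "prey")

        # WCOA step: winners of pairwise "matches" replace the losers.
        order = rng.permutation(n)
        for a, b in zip(order[::2], order[1::2]):
            loser = a if f[a] > f[b] else b
            winner = b if loser == a else a
            X[loser] = np.clip(X[winner] + 0.1 * rng.standard_normal(dim), lb, ub)

        # HHO step: escape energy decreases over time, switching the behaviour.
        E = 2 * (1 - t / iters) * (2 * rng.random() - 1)
        for i in range(n):
            if abs(E) >= 1:                       # exploration, cf. Equation (9)
                r1, r2 = rng.random(2)
                j = rng.integers(n)
                X[i] = X[j] - r1 * abs(X[j] - 2 * r2 * X[i])
            else:                                 # exploitation: soft besiege toward the prey
                X[i] = rabbit - abs(E) * abs(rabbit - X[i])
            X[i] = np.clip(X[i], lb, ub)

    f = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(f)], float(f.min())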

Application in Deep Learning Model Optimization

The HGWOPSO and HWCOAHHO algorithms are applied to optimize the hyperparameters of deep learning models (e.g., FFNNs, CNNs, MLPs) for IoT network monitoring. The optimization process fine-tunes parameters such as:
  • Learning Rate: The rate at which the model updates its weights during training. An appropriate learning rate is important for fast convergence and for avoiding overshooting the optimal solution.
  • Number of Layers: The depth of the neural network, which affects its ability to learn complex patterns from the data.
  • Neurons per Layer: The number of neurons in each layer, which determines the capacity of the model to capture and represent the features in the data.
By using these hybrid optimization algorithms, we ensure that the anomaly detection models are not only accurate but also computationally efficient enough to be deployed in resource-constrained IoT environments. These advanced optimization techniques allow the deep learning models to perform at their best, improving anomaly detection accuracy and real-time monitoring while minimizing the computational load in dynamic and constrained environments such as IoT networks [56]. A sketch of how a candidate solution is decoded into these hyperparameters is given below.
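Shown below is a hedged sketch of how such a hyperparameter search can be wired up: each optimizer position is a vector in [0, 1]^3 that is decoded into a learning rate, a layer count, and a neuron count, and the fitness to be minimized is the validation error of the resulting model. The decoding ranges and the build_and_validate callback are assumptions introduced purely for illustration.

import numpy as np

# Each optimizer position is a 3-element vector in [0, 1]^3 that is decoded
# into (learning rate, number of layers, neurons per layer).
def decode(position):
    lr = 10 ** (-4 + 3 * position[0])              # learning rate in [1e-4, 1e-1] (assumed range)
    n_layers = int(round(1 + 4 * position[1]))     # 1-5 hidden layers (assumed range)
    n_neurons = int(round(8 + 120 * position[2]))  # 8-128 neurons per layer (assumed range)
    return lr, n_layers, n_neurons

def fitness(position, build_and_validate):
    """Lower is better: validation error of the model built from this position."""
    lr, n_layers, n_neurons = decode(position)
    val_accuracy = build_and_validate(lr, n_layers, n_neurons)  # assumed user-supplied callback
    return 1.0 - val_accuracy

# Example wiring with the HGWOPSO sketch above:
# best, best_f = hgwopso(lambda p: fitness(p, my_validation_fn), dim=3, lb=0.0, ub=1.0)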

3.2.3. Benchmark Functions

In this work, a set of benchmark functions is used to test the performance of the two advanced optimization algorithms, HGWOPSO and HWCOAHHO. These include commonly used optimization test functions, namely the Sphere, Rosenbrock, Ackley, and Rastrigin functions, which support scalability validation and represent a wide class of problems with different levels of complexity, from simple convex problems to more challenging multimodal and nonlinear ones. Each function is selected to probe specific properties of the algorithms, including convergence to the global optimum, the balance of exploration and exploitation, and robustness against local minima. The Sphere function serves as a representative of a simple convex problem for basic convergence analysis, whereas the Rosenbrock and Ackley functions pose optimization problems with narrow valleys and deep local minima, testing the ability of the algorithms to navigate such complex landscapes and converge precisely to the global minimum. The Rastrigin function adds a highly dimensional, multimodal landscape. Each algorithm was run on these benchmark functions, and key performance metrics, namely the best value, worst value, average value, median value, and standard deviation (STD), were recorded and plotted. The results are tabulated in Table 2 and offer a detailed comparison of HGWOPSO against HWCOAHHO across different optimization problems, highlighting their relative strengths in convergence speed, accuracy, and robustness. This exercise is essential for understanding the applicability of the algorithms to real-world scenarios, especially dynamic ones in which problems often take on complex and nonlinear characteristics [57].
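For reference, minimal NumPy definitions of the benchmark functions named above (Sphere, Rosenbrock, Ackley, and Rastrigin) are given below; the dimensionality and search bounds are left to the experiment configuration.

import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))                       # unimodal, convex

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

def ackley(x):
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20.0 + np.e)

def rastrigin(x):
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x)))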

3.3. Phase Three: Performance Evaluation, Testing, and Decision-Making

In our work, HGWOPSO and HWCOAHHO are used to optimize the hyperparameters of the deep learning models applied to IoT network monitoring. In addition, a multi-criteria decision-making approach is adopted to choose among the candidate models or strategies, taking into account the relevant trade-offs between key performance metrics such as accuracy, latency, throughput, and computational efficiency. This ensures that the deep learning model selected from the set of competing criteria is the most effective and the best adapted to the dynamic, resource-constrained environment of IoT networks. The MCDM model is implemented in two stages. In Stage I, the Entropy method computes the weight of each evaluation criterion, objectively quantifying its significance based on its variability and information content across the set of alternatives. The method accounts for uncertainty or randomness in the distribution of each criterion's data, so that criteria with more variability and more information receive higher weights; for example, higher weights may be assigned to accuracy and anomaly detection precision if they are critical to the performance of the IoT monitoring system. In Stage II, the alternative models or strategies are ranked using the TOPSIS method, which calculates the relative proximity of each alternative to the ideal solution and selects the alternative that is closest to the ideal and farthest from the worst solution. This ranking retains the models that deliver better performance with greater precision while keeping the algorithms running efficiently. TOPSIS uses the Euclidean distance to the ideal and negative-ideal solutions to compute separation measures and then a relative closeness coefficient for each alternative; the alternative with the highest relative closeness to the ideal solution is chosen as the best. This two-stage MCDM approach therefore ensures an appropriate balance between the different performance metrics: the optimization algorithms HGWOPSO and HWCOAHHO are coupled with objective weighting by the Entropy method and ranking by TOPSIS, yielding an accurate, computationally feasible model for real IoT monitoring in dynamically varying, resource-limited scenarios.
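The two-stage procedure described above can be summarized in a few lines of NumPy, as in the sketch below. The decision-matrix values are placeholders, and the designation of which criteria are benefits and which are costs is an assumption for illustration only.

import numpy as np

def entropy_weights(X):
    """Stage I: objective criterion weights from the Entropy method."""
    P = X / X.sum(axis=0)                            # column-wise proportions
    P_safe = np.clip(P, 1e-12, None)                 # guard against log(0)
    k = 1.0 / np.log(X.shape[0])
    E = -k * np.sum(P * np.log(P_safe), axis=0)      # entropy of each criterion
    d = 1.0 - E                                      # degree of divergence
    return d / d.sum()

def topsis(X, w, benefit):
    """Stage II: rank alternatives by relative closeness to the ideal solution."""
    R = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
    V = R * w                                        # weighted decision matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    s_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    s_minus = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return s_minus / (s_plus + s_minus)              # higher is better

# Rows: candidate models; columns: accuracy, latency, throughput, cost (placeholder values).
X = np.array([[0.98, 12.0, 950.0, 0.30],
              [0.96,  8.0, 900.0, 0.20],
              [0.91,  5.0, 870.0, 0.10]])
benefit = np.array([True, False, True, False])       # latency and cost treated as cost criteria
scores = topsis(X, entropy_weights(X), benefit)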

3.3.1. The AHP Method

AHP is a widely used multi-criteria decision-making method for selecting the most appropriate option from a set of alternatives. In this study, AHP is adopted to evaluate and rank the deep learning models and strategies with respect to criteria such as accuracy, latency, throughput, and computational efficiency in IoT network monitoring. The AHP method considers the relative importance of the criteria and the performance of the alternatives on each criterion in a structured manner [58].
  • STEP 1: Structuring the Decision Matrix
The first step in AHP is the construction of a decision matrix (D) containing the rating of each alternative on each criterion. In this work, the alternatives are the various deep learning models and IoT monitoring strategies, and the criteria correspond to key performance metrics such as accuracy, latency, and computational efficiency. The matrix is illustrated in Equation (11):
D = \begin{array}{c|cccc}
        & C_1    & C_2    & \cdots & C_m    \\ \hline
  A_1   & r_{11} & r_{12} & \cdots & r_{1m} \\
  A_2   & r_{21} & r_{22} & \cdots & r_{2m} \\
  \vdots & \vdots & \vdots & \ddots & \vdots \\
  A_n   & r_{n1} & r_{n2} & \cdots & r_{nm}
\end{array}
In the context of our work, the decision matrix is built such that A_1, A_2, …, A_n are the alternatives, representing different deep learning models or IoT monitoring strategies, and C_1, C_2, …, C_m are the evaluation criteria, such as accuracy, latency, throughput, and computational efficiency. The ratings r_{ij} are the scores given to alternative A_i with respect to criterion C_j. These ratings capture the performance of each alternative on the chosen criteria and thus allow the alternatives to be compared systematically in terms of relative strengths and weaknesses. This decision matrix is the basis for applying AHP and TOPSIS to find an optimal solution for IoT network monitoring while trading off the different performance metrics for the best outcome.
  • STEP 2: Pairwise Comparisons and Weight Calculation
The next step in AHP is to make pairwise comparisons between the criteria to determine their relative importance. Each criterion is compared against every other criterion on a scale, usually from 1 to 9, yielding a comparison matrix, denoted A, that holds the values of these pairwise comparisons. For example, if criterion C_1 is considered three times more important than criterion C_2, the corresponding entry in the matrix is 3 and the reverse comparison is 1/3. The matrix A is reciprocal, with each element a_{ij} giving the importance of criterion C_i relative to criterion C_j and a_{ij} = 1/a_{ji}. Once the pairwise comparison matrix is constructed, the weight vector W_{C_j} for the criteria is calculated using the eigenvector method or the sum-of-rows method. These weights represent the relative importance of each criterion and are normalized so that their sum equals 1, as depicted in Equation (12).
$$
W_{C_j} = \frac{d_j}{\sum_{j=1}^{m} d_j},
$$
where $d_j$ denotes the degree of divergence (obtained by normalizing the pairwise comparison matrix), and $W_{C_j}$ is the resulting weight of criterion $C_j$.
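To illustrate Steps 1 and 2, the sketch below derives criterion weights from a hypothetical 4 × 4 pairwise comparison matrix (accuracy, precision, recall, F1 score) using the eigenvector method, with the simpler row-normalization approximation also shown. The judgment values are assumptions for illustration, not the study's actual comparisons.

```python
# Minimal sketch of AHP weight derivation from a hypothetical pairwise comparison matrix.
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 3.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/3, 1/2, 1.0, 2.0],
              [1/3, 1/2, 1/2, 1.0]])      # reciprocal matrix: a_ij = 1 / a_ji

# Eigenvector method: the principal eigenvector of A gives the criterion weights.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(w)                                  # weights for C1..C4, summing to 1

# Row-sum (approximate) alternative mentioned in the text:
w_approx = (A / A.sum(axis=0)).mean(axis=1)
print(w_approx)
```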
  • STEP 3: Ranking the Alternatives Using Weighted Scores
After the weight of each criterion has been calculated, the alternatives are evaluated. Each alternative is rated against the criteria, and its weighted score is computed from the decision matrix by multiplying its ratings by the corresponding criterion weights. The weighted score of alternative $A_i$ is given by Equation (13):
$$
\mathrm{WeightedScore}(A_i) = \sum_{j=1}^{m} r_{ij} \cdot W_{C_j}
$$
where $r_{ij}$ is the rating of alternative $A_i$ on criterion $C_j$, and $W_{C_j}$ is the weight of criterion $C_j$. Summing the products of ratings and weights over all criteria yields a single score that reflects the overall performance of each alternative, weighted by the relative importance of each criterion.
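A minimal sketch of the weighted-score computation in Equation (13) is given below; sorting the resulting scores yields the ranking used in Steps 4 and 5. The ratings and weights are illustrative only.

```python
# Weighted scores (Equation 13) and ranking of alternatives; values are illustrative.
import numpy as np

D = np.array([[0.90, 0.88, 0.85, 0.86],   # FFNN
              [0.92, 0.90, 0.89, 0.89],   # CNN
              [0.91, 0.89, 0.87, 0.88],   # MLP
              [0.95, 0.92, 0.90, 0.91],   # HGWOPSO
              [0.96, 0.93, 0.91, 0.92]])  # HWCOAHHO
w = np.array([0.35, 0.30, 0.25, 0.10])    # illustrative criterion weights summing to 1

scores = D @ w                            # weighted score of each alternative
ranking = np.argsort(-scores)             # indices of alternatives, best first
print(scores, ranking)
```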
  • STEP 4: Aggregating the Results
In AHP, aggregation integrates the pairwise comparisons, the criterion weights, and the ratings of the alternatives to produce the final ranking. The process is objective and transparent because it relies only on the data in the decision matrix and the pairwise comparisons, and it yields a ranked list of alternatives that balances all criteria according to their relative importance.
  • STEP 5: Final Decision
The final step of AHP is the decision: the alternative with the highest weighted score is selected as the optimal solution. In our work, this approach enables a systematic comparison of the different deep learning models and monitoring strategies across a set of performance metrics and an informed choice of the model best suited to real-time IoT network monitoring. Applying AHP keeps the decision process data-driven and objective, with the relative importance of each criterion grounded in actual performance data and expert judgment, and it ensures that trade-offs between competing metrics, such as accuracy versus computational efficiency, are handled appropriately.

3.3.2. TOPSIS Method

TOPSIS is the most widely used classical MCDM method [59]. Like AHP, it begins with normalization, followed by construction of the weighted decision matrix. The ideal and worst (negative-ideal) solutions are then determined, the separation measures and the relative closeness coefficient of each alternative are calculated, and the alternatives are ranked in descending order of closeness; the alternative with the highest relative closeness is the best choice. The procedure is defined by the following formulas:
  • STEP 6: Form the weighted decision matrix
The weighted decision matrix is obtained by multiplying the normalized evaluations by the criterion weights, as given in Equations (14) and (15).
$$
\omega_{ij} = w_j \, x_{ij}
$$
$$
D_w = \begin{array}{c|cccc}
 & C_1 & C_2 & \cdots & C_m \\ \hline
A_1 & \omega_{11} & \omega_{12} & \cdots & \omega_{1m} \\
A_2 & \omega_{21} & \omega_{22} & \cdots & \omega_{2m} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_n & \omega_{n1} & \omega_{n2} & \cdots & \omega_{nm}
\end{array}
$$
  • STEP 7: Determine the extreme solutions
Since Equations (2) and (3) are used in normalization, the extreme solutions are given as follows:
$$
\omega_j^+ = \max_i \omega_{ij} \quad \text{and} \quad \omega_j^- = \min_i \omega_{ij}, \qquad j = 1, \dots, m,\; i = 1, \dots, n
$$
  • STEP 8: Calculate the separation measures
The separation between alternatives and extreme solutions is calculated using Equations (16) and (17).
$$
S_i^+ = \sqrt{\sum_{j=1}^{m} \left(\omega_{ij} - \omega_j^+\right)^2}, \qquad i = 1, \dots, n
$$
$$
S_i^- = \sqrt{\sum_{j=1}^{m} \left(\omega_{ij} - \omega_j^-\right)^2}, \qquad i = 1, \dots, n.
$$
  • STEP 9: The relative closeness coefficient is determined using Equation (18).
$$
R(A_i) = \frac{S_i^-}{S_i^+ + S_i^-}
$$
  • STEP 10: Prioritizing the alternatives
The closer the relative closeness coefficient is to 1, the higher the priority of the alternative; the alternatives are therefore ranked in descending order of $R(A_i)$.
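The sketch below summarizes Steps 6–10 in code. It assumes benefit-type criteria, vector normalization, and the illustrative ratings and weights used earlier, and it follows the convention that a higher closeness coefficient indicates a better alternative; it is not the exact implementation used in the experiments.

```python
# Minimal TOPSIS sketch for Steps 6-10; ratings and weights are illustrative.
import numpy as np

def topsis(X, w):
    """Rank alternatives in X (n x m, benefit criteria) given criterion weights w."""
    R = X / np.sqrt((X ** 2).sum(axis=0))              # vector normalization
    V = R * w                                          # weighted normalized matrix (Eqs. 14-15)
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)        # ideal and negative-ideal solutions
    S_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))    # separation from ideal (Eq. 16)
    S_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))    # separation from negative-ideal (Eq. 17)
    closeness = S_neg / (S_pos + S_neg)                # relative closeness (Eq. 18)
    return closeness, np.argsort(-closeness)           # higher closeness ranks first

X = np.array([[0.90, 0.88, 0.85, 0.86],
              [0.92, 0.90, 0.89, 0.89],
              [0.91, 0.89, 0.87, 0.88],
              [0.95, 0.92, 0.90, 0.91],
              [0.96, 0.93, 0.91, 0.92]])
w = np.array([0.35, 0.30, 0.25, 0.10])
closeness, order = topsis(X, w)
print(closeness, order)
```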

4. Results and Discussion

This work presents optimized deep learning models for IoT network monitoring, their performance evaluation on benchmark functions, and MCDM techniques for identifying the most effective model. The following subsections elaborate on the HGWOPSO and HWCOAHHO optimization algorithms used to fine-tune the deep learning models (FFNNs, CNNs, and MLPs) for IoT anomaly detection. The results indicate substantial improvements in accuracy, precision, recall, and F1 score enabled by these optimization techniques, and show how the algorithms further enhance the adaptability, robustness, and real-time performance of the models in dynamic IoT environments. In addition, an MCDM analysis is employed, with the Entropy method for weighting and the TOPSIS method for ranking, to assess and compare the effectiveness of each model and optimization approach. This section also discusses the strengths and limitations of the optimized models, their practical implications for IoT network monitoring, and the potential for future improvements, particularly in computational efficiency and scalability for real-world applications.

4.1. Results of the Optimization Algorithms for IoT Network Monitoring Models

The performance of the proposed IoT network monitoring deep learning models optimized by the HGWOPSO and HWCOAHHO techniques is presented in Figure 6A–G, which provides a comprehensive analysis of training progress, confusion matrices, and model performance on the various criteria. Figure 6A shows the convergence of the training processes of the two deep learning models optimized with HGWOPSO and HWCOAHHO: as the optimization techniques tune the hyperparameters, the loss decreases and the accuracy increases over time. The figure highlights the ability of both optimization algorithms to accelerate convergence and lift the overall performance of the models for real-time anomaly detection and IoT traffic prediction. It also reflects the influence of hyperparameter tuning, where HGWOPSO focuses on optimizing the learning rate, number of layers, and number of neurons to reduce training loss, while HWCOAHHO increases model adaptability by searching along different optimization paths using the behavioral strategies of the Harris hawks. Figure 6B shows the confusion matrix of the compared deep learning models optimized with HGWOPSO and HWCOAHHO, profiling performance on the different classes, namely normal and anomalous traffic. The matrix captures how well the models distinguish traffic patterns in the IoT network in terms of true positives, false positives, true negatives, and false negatives; comparing these confusion matrices demonstrates the improvement in detection accuracy brought by both HGWOPSO and HWCOAHHO, with increased precision and recall values. Figure 6C–E show the confusion matrices of the FFNN, MLP, and CNN models individually. In the FFNN confusion matrix (Figure 6C), the model performs well but shows a slight imbalance, producing more false positives: it handles normal conditions well but is sensitive to network anomalies and needs further refinement. The MLP confusion matrix (Figure 6D) shows that MLPs handle IoT traffic better than FFNNs, with fewer false negatives and therefore better classification of anomalous traffic. Figure 6E shows an even better performance by CNNs, which balance a higher number of true positives with fewer misclassifications, evidencing the advantage of CNNs in capturing spatial hierarchies and relationships in IoT data for more accurate anomaly detection. Figure 6F,G present the performance of the HGWOPSO- and HWCOAHHO-optimized models. Figure 6F, the HGWOPSO confusion matrix, shows rapid stabilization of detection and classification, with minimized false positives and high precision, although slight oscillations remain in edge cases where the model struggles to classify highly variable IoT traffic. Figure 6G, the HWCOAHHO confusion matrix, presents the best performance of all the models, with a sharp increase in precision and recall: the model identifies anomalous traffic with little misclassification, benefiting from the superior optimization of HWCOAHHO's hybrid approach. This matrix illustrates how HWCOAHHO enhances model robustness through its adaptive capabilities, resulting in fewer false positives and false negatives.
These results further validate the efficiency of the HGWOPSO and HWCOAHHO optimization algorithms in improving the performance of deep learning models for IoT network monitoring. The hybrid optimization strategies effectively tune the models to improve accuracy, reduce misclassifications, and ensure reliable anomaly detection, even in the face of fluctuating IoT network traffic. Overall, Figure 6A–G underline the importance of optimization in enabling the deep learning models to remain performant and adaptable in real-world, dynamic IoT environments.
Table 3 and Figure 7 extend the performance comparison of the deep learning models for IoT network monitoring, namely FFNNs, CNNs, MLPs, HGWOPSO, and HWCOAHHO. The models are compared on the key metrics of accuracy, precision, recall, and F1 score, which reflect their strength in real-time anomaly detection and network traffic prediction. The FFNN model performs well, with an accuracy of 0.90 and a precision of 0.88, but its recall is relatively lower at about 0.85, meaning some anomalies go undetected. CNNs do slightly better, reaching an accuracy of 0.92 with improved precision and recall and capturing more complex network-traffic dynamics. MLPs, with an accuracy of 0.91, also perform well but lag behind CNNs in recall and may miss anomalies in some cases. The optimization algorithms raise the bar: HGWOPSO yields an accuracy of 0.95, a precision of 0.92, and a recall of 0.90, demonstrating its ability to fine-tune model parameters for real-time monitoring, while HWCOAHHO exhibits the best performance overall, with an accuracy of 0.96, a precision of 0.93, a recall of 0.91, and the highest F1 score of 0.92, reflecting its excellent optimization and robustness under dynamically changing network conditions. The optimized models, especially HWCOAHHO, ensure accuracy and reliability by minimizing false positives and false negatives, making them suitable for real-world IoT network monitoring applications. This analysis illustrates how the right advanced optimization techniques improve deep learning model performance, which is even more important in complex and fluctuating environments such as IoT for achieving high accuracy, precision, and recall and thereby supporting proper anomaly detection and network traffic management. The HWCOAHHO model achieved an accuracy of 96%, a precision of 93%, and an F1 score of 92%, significantly outperforming the baseline models.
This subsection has also presented a detailed performance evaluation of the deep learning models for IoT network monitoring optimized by HGWOPSO and HWCOAHHO. The emphasis was placed on fine-tuning model parameters to enhance accuracy, precision, recall, and F1 score, which are essential for effective anomaly detection and network traffic prediction. The results show that each optimization algorithm contributes distinct strengths to model performance in real-time IoT applications. HGWOPSO optimization significantly improved model accuracy, raising precision by 5.6% and recall by 4.4% compared to the baseline models, which demonstrates its capability to fine-tune hyperparameters such as learning rates, numbers of layers, and numbers of neurons for better anomaly detection. HWCOAHHO provided the best performance across all metrics, reaching a peak accuracy of 96% and an F1 score of 0.92, revealing a strong ability to adapt to dynamic IoT environments. This hybrid optimization approach combines the Harris Hawks and World Cup strategies so that both false positives and false negatives are reduced, making it ideal for high-traffic, real-time IoT monitoring scenarios. The comparative analysis shows that each model has different strengths: HGWOPSO provided the most precise adjustments for improving detection accuracy, while HWCOAHHO ensured superior adaptability and robustness. The radar chart analysis further indicates that the optimization techniques played an important role in achieving the higher performance levels. Overall, the hybrid optimization strategies substantially improved the stability, precision, and adaptability of the deep learning models, ensuring reliable and efficient runtime anomaly detection in IoT networks.

4.2. Results of the Benchmarking Functions

In this work, the performance of the HGWOPSO and HWCOAHHO algorithms is explored on a wide range of benchmark functions, as reported in Table 4, which lists the best, worst, average, median, and standard deviation (STD) values obtained by both algorithms on nine benchmark functions, $f_1$ to $f_9$. These metrics are used to evaluate the optimality of the solutions obtained and the algorithms' ability to explore the search space. On the first set of benchmark functions, $f_1(x)$ to $f_5(x)$, HWCOAHHO yields better values than HGWOPSO across the metrics, including best, worst, average, median, and standard deviation, indicating a stronger capability to find optimal or near-optimal solutions and greater precision and robustness on these functions. HWCOAHHO performs particularly well on functions such as $f_1(x)$ and $f_2(x)$, with best values approaching zero; this is most noticeable on $f_2(x)$, where HWCOAHHO attains a best value of 1.23 × 10−11 while HGWOPSO returns 0, showing where the clear advantage lies. On the more complex functions, $f_3(x)$ to $f_5(x)$, both algorithms maintain high performance, but HWCOAHHO continues to produce superior results, with smaller standard deviations and more consistent behavior. For example, on $f_5(x)$, the best value obtained by HGWOPSO is 0, whereas HWCOAHHO achieves a best value of 3.45 × 10−4 with a lower STD, indicating better stability and reliability in reaching the optimal solution. On the remaining functions, $f_6(x)$ to $f_9(x)$, both algorithms show notable consistency, yet HWCOAHHO again proves more competitive across the board: on these functions HGWOPSO repeatedly returns 0, suggesting difficulty in effectively exploring the more complex solution space, while HWCOAHHO consistently reports better values on the metrics. For instance, on $f_6(x)$, HWCOAHHO achieves a best value of 3.45 × 10−6 while HGWOPSO always returns 0, indicating HWCOAHHO's greater strength in finding more accurate solutions. These results suggest that HWCOAHHO is generally more efficient and reliable than HGWOPSO on this group of benchmark functions and better at balancing exploration and exploitation. The hybrid strategies combined in HWCOAHHO allow the algorithm to converge faster to optimal solutions with smaller variance and to perform well on complex, multimodal optimization landscapes, giving it strong potential in applications that require precise optimization in dynamic and complex environments. By contrast, although HGWOPSO provides reasonable results on simpler problems, it does not match HWCOAHHO in precision or consistency, as depicted in Figure 8.
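The sketch below shows how the per-function statistics reported in Table 4 (best, worst, average, median, STD) can be collected over repeated independent runs. The optimizer used here is a placeholder random search for illustration only; it stands in for HGWOPSO or HWCOAHHO, and the function names and run counts are assumptions.

```python
# Collecting benchmark statistics over repeated runs (placeholder random search, not HGWOPSO/HWCOAHHO).
import numpy as np

def sphere(p):                       # f1(x, y) = x^2 + y^2
    return np.sum(p ** 2)

def rosenbrock(p):                   # f2(x, y) = 100(y - x^2)^2 + (1 - x)^2
    x, y = p
    return 100 * (y - x ** 2) ** 2 + (1 - x) ** 2

def random_search(f, dim=2, agents=20, iters=200, lo=-5.0, hi=5.0, rng=None):
    rng = rng or np.random.default_rng()
    best = np.inf
    for _ in range(iters):
        pop = rng.uniform(lo, hi, size=(agents, dim))
        best = min(best, min(f(p) for p in pop))
    return best

for name, f in [("f1 (sphere)", sphere), ("f2 (Rosenbrock)", rosenbrock)]:
    results = np.array([random_search(f) for _ in range(30)])   # 30 independent runs
    print(name, results.min(), results.max(), results.mean(),
          np.median(results), results.std())
```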

4.3. MCDM Results for IoT Network

This section presents the results of the MCDM analysis of the deep learning models for IoT network monitoring. The performance of the models is evaluated using the integrated AHP-TOPSIS multi-criteria decision-making method. Five deep learning models are considered, namely FFNNs, CNNs, MLPs, HGWOPSO, and HWCOAHHO, assessed on four key performance metrics: accuracy, precision, recall, and F1 score. The AHP method provides the weights of these criteria according to their importance for real-time network monitoring, assigning greater weight to the most critical metrics, such as accuracy and precision. The TOPSIS method is then applied to rank the models by their closeness to the ideal solution, considering the weighted criteria and each model's performance on them. The analysis gives a full view of the strengths and weaknesses of each model; HWCOAHHO emerges as the most balanced and effective model for IoT network monitoring, while the results also expose the trade-offs between accuracy, precision, and recall. These findings can guide the selection of the most suitable model for different IoT monitoring tasks according to the specific needs of the network.

4.3.1. Results of the Criterion Weights

The normalized decision matrix in Table 5 enables a detailed evaluation of the five deep learning models, namely FFNNs, CNNs, MLPs, HGWOPSO, and HWCOAHHO, on four key performance metrics: accuracy, precision, recall, and F1 score. Each model is appraised against these criteria, and the normalized values express its performance relative to the best-performing model on each criterion; the values range from 0 to 1, with higher values indicating better performance. HWCOAHHO attains the highest normalized values, 1.000 for accuracy, precision, recall, and F1 score, reflecting the best overall performance among the compared algorithms: it detects anomalies with high precision and is well balanced in recall and F1 score, making it the strongest candidate for IoT network monitoring. HGWOPSO also yields a very high-quality model, with normalized accuracy, precision, recall, and F1 score of 0.9896, 0.987, 0.989, and 0.989, respectively, signifying a strong ability to optimize model parameters with high accuracy and stability; its performance is slightly below that of HWCOAHHO, but it remains one of the best models for anomaly detection and overall performance. CNNs and MLPs achieve almost identical results in precision and recall, with normalized values of 0.956 and 0.946 for MLPs and 0.956 on both metrics for CNNs; these models give good results but lag slightly behind HGWOPSO and HWCOAHHO in accuracy and F1 score. FFNNs show the lowest normalized values, particularly a recall of 0.934 and an F1 score of 0.934, indicating a reasonably good model that may nevertheless be less sensitive in capturing subtle patterns or anomalies than the other models.
The normalized decision matrix thus reveals trade-offs among the models with respect to the chosen performance metrics. Although HWCOAHHO is the best of the compared algorithms overall, HGWOPSO, CNNs, and MLPs may provide competitive options under particular requirements of an IoT network monitoring system, for instance when precision or recall must be emphasized. These trade-offs are important to consider when choosing a model for deployment in real-time IoT monitoring, where some applications favor accuracy and precision while recall may matter more in anomaly detection scenarios.
The correlation matrix in Table 6 provides valuable insight into how the key performance metrics (accuracy, precision, recall, and F1 score) interact in the assessment of the deep learning models tested for IoT network monitoring. Accuracy ($C_1$) and precision ($C_2$) show a very high positive correlation (0.97), suggesting that models with better accuracy also have good precision: an increase in the model's general ability to classify data correctly translates into more precise anomaly detection. Precision ($C_2$) and recall ($C_3$) present a high positive correlation of 0.95, evidence that models with high precision also retrieve true positives well, which is of major importance for detecting anomalies in real-time IoT systems. The relationship between recall and F1 score is even stronger, at 0.98, which is expected since the F1 score is the harmonic mean of precision and recall; models with high recall therefore tend to have high F1 scores, indicating a good balance between correctly identifying true positives and minimizing false negatives. The standard deviations in the table express how much the performance results vary across the models for each criterion, from a minimum of 0.025 for $C_2$ (precision), suggesting relatively stable precision scores across the alternatives, to a maximum of 0.045 for $C_3$ (recall), indicating more variation in how well the models perform in recall. This variability may reflect the models' differing abilities to detect anomalies under varying network conditions and the difficulty of maintaining high recall in real-time monitoring tasks. The final weights assigned to each criterion reflect their relative importance in the decision-making process. Accuracy ($C_1$) has the highest weight (0.35), underlining its importance in assessing the models' overall capability in the IoT network monitoring task: greater importance is attached to the model classifying most of the data correctly. Precision ($C_2$), at 0.30, is the second most important, reflecting the emphasis on reducing false positives so that the model does not flag irrelevant data as anomalies. Recall, with a weight of 0.25, is important for ensuring that true anomalies are captured but is slightly less critical than accuracy and precision in this context. The F1 score ($C_4$) has the lowest weight, 0.20, since it balances precision and recall and is secondary to the direct impact of accuracy and precision. These findings emphasize the need for a balanced approach to model selection and tuning when applying deep learning models to IoT network monitoring: models with high accuracy and precision are preferred, but recall is also essential for detecting unseen anomalies. The weights and correlations in Table 6 inform the choice by quantifying the importance of each performance metric and guide the selection toward the model that best optimizes real-time IoT network monitoring.

4.3.2. Results of the Ranks

The TOPSIS approach is adopted to rank the performance of the deep learning models (FFNNs, CNNs, MLPs, HGWOPSO, and HWCOAHHO) for IoT network monitoring using four separately weighted criteria: accuracy, precision, recall, and F1 score. In the weighted decision matrix, each entry reflects both the relative importance of a criterion and the model's ability to meet it. The FFNN model shows a relatively balanced but moderate performance across all criteria, with its highest weighted score on accuracy; its scores for precision, recall, and F1 score are somewhat lower, suggesting that FFNNs serve as a baseline model with no specific emphasis on optimizing recall or minimizing false positives. This makes FFNNs a useful comparison point when evaluating the other models, while also indicating room for improvement in precision and recall. The CNN model performs consistently, with very good scores on all criteria and particularly high scores for precision and recall; this balanced performance indicates a versatile model that handles anomaly detection effectively while also providing good accuracy, and it slightly outperforms FFNNs by achieving a better equilibrium between precision and recall. MLPs perform almost as strongly as CNNs but obtain somewhat lower values for accuracy and recall; the weighted decision matrix shows that, although competitive, MLPs do not reach scores as high as CNNs on all criteria, with recall in particular falling below CNNs and the more advanced HGWOPSO and HWCOAHHO models. The HGWOPSO model is strongly inclined toward accuracy, as evidenced by the highest weighted score for accuracy in the ensemble; it demonstrates very high accuracy and precision but is somewhat lower in recall and F1 score, suggesting a precision-oriented design that sacrifices some recall. The lower recall indicates that, while highly responsive, HGWOPSO may miss a few anomalies under fluctuating IoT network conditions; in other words, it trades slightly weaker detection of all anomaly types for a minimal number of false positives, which was the goal of its targeted optimization strategy. HWCOAHHO achieves the top scores for accuracy, precision, recall, and F1 score and therefore delivers the best and most balanced performance across all criteria; it is the most balanced model for IoT network monitoring, offering the best accuracy and recall together with high precision and F1 score, as shown in Table 7.
Table 8 presents the TOPSIS results, ranking the five deep learning models for IoT network monitoring by their closeness coefficient $R(A_i)$, which represents how closely each model's performance matches the ideal solution. HWCOAHHO ranks first, with an $R(A_i)$ of 0.3797, indicating the performance closest to the ideal solution; this is reasonable because it attained the top values on all criteria (accuracy, precision, recall, and F1 score) and is therefore the most effective model overall. MLPs rank second with an $R(A_i)$ of 0.4209, reflecting very high precision and recall but falling slightly behind HWCOAHHO. CNNs rank third with an $R(A_i)$ of 0.4244, showing high and balanced performance on the criteria but slightly below MLPs and HWCOAHHO. FFNNs rank fourth with an $R(A_i)$ of 0.4390, a moderate performance that underperforms the other models in precision and recall and thus limits their effectiveness in anomaly detection. Finally, HGWOPSO ranks fifth with an $R(A_i)$ of 0.4021, achieving strong accuracy and precision but lagging in recall, which can be interpreted as a more focused optimization strategy that costs some ability to detect all anomalies. These rankings point to trade-offs among the models: while HWCOAHHO presents the best overall performance, other models such as HGWOPSO excel in certain areas while performing poorly in others, specifically recall.

4.4. Measurement of Performance Metrics, Weighting, and Statistical Analysis for Credibility

To ensure a comprehensive evaluation of the proposed models, we have adopted a systematic approach to measure, weigh, and analyze key performance metrics. The following outlines our methodology for assessing latency, throughput, and anomaly detection accuracy, along with the statistical techniques and decision-making frameworks used for ranking and validation:
  • Latency: Measured as the average response time (in milliseconds) between input data processing and anomaly detection output. This is obtained from real-time testing on IoT datasets.
  • Throughput: Defined as the volume of network traffic processed per second, expressed in packets per second (pps). This metric is evaluated under different traffic loads to assess the scalability of the proposed models.
  • Anomaly Detection Accuracy: Evaluated using standard classification metrics, including accuracy, precision, recall, and F1 score, computed from the confusion matrices of the deep learning models (a minimal computation sketch is given after this list).
  • A multi-criteria decision-making (MCDM) approach is employed to weigh and rank these performance metrics.
  • Analytic Hierarchy Process (AHP) is used to assign relative weights based on the significance of each criterion in IoT network monitoring.
  • Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is applied to rank the models by calculating their relative closeness to the ideal solution.
  • To ensure robustness, statistical analyses such as standard deviation, confidence intervals, and ANOVA (Analysis of Variance) are conducted to compare model performances.
  • Benchmark functions (Sphere, Rosenbrock, and Ackley) are utilized to validate the optimization techniques under varying conditions.
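As a concrete reference for the anomaly detection metric above, the sketch below derives accuracy, precision, recall, and F1 score from a binary confusion matrix; the counts used in the example are hypothetical.

```python
# Deriving classification metrics from binary confusion-matrix counts (hypothetical values).
def classification_metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example counts, with "anomalous traffic" as the positive class
print(classification_metrics(tp=910, fp=70, fn=90, tn=8930))
```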
In IoT network monitoring, decision-making involves balancing multiple performance criteria, such as accuracy, latency, throughput, and computational efficiency. Traditional evaluation approaches may overlook the trade-offs between these criteria, leading to suboptimal model selection. The incorporation of MCDM methods addresses this challenge by providing a structured decision framework.
  • Analytic Hierarchy Process (AHP)
    AHP is used to assign relative weights to performance metrics based on their significance in IoT network monitoring. Through pairwise comparisons, AHP quantifies the importance of each criterion, ensuring that the evaluation reflects real-world monitoring priorities. For instance, in applications requiring real-time anomaly detection, latency might be weighted more heavily than overall accuracy.
  • Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
    Once the criteria are weighted, TOPSIS ranks the alternative models by computing their relative closeness to an ideal solution. This method ensures that the selected model exhibits the best balance across multiple performance factors, optimizing both detection accuracy and computational efficiency.
  • Impact on Decision-Making in IoT Monitoring
    The integration of AHP and TOPSIS enhances the objectivity of model selection, reducing bias and ensuring that the chosen deep learning model aligns with the specific operational requirements of the IoT environment. This framework enables dynamic adaptation to changing network conditions, as decision priorities (e.g., prioritizing precision in security applications or favoring recall in fault detection) can be adjusted based on the needs of different IoT scenarios.

4.5. Selection and Justification of HGWOPSO and HWCOAHHO for IoT Monitoring

Optimizing deep learning models for IoT network monitoring requires a balance between exploration (searching for new solutions) and exploitation (refining the best-found solutions). Due to the dynamic nature of IoT environments, characterized by fluctuating traffic loads, real-time anomaly detection, and scalability constraints, traditional optimization methods often struggle to provide optimal hyperparameters. Therefore, we employed HGWOPSO and HWCOAHHO, which integrate multiple optimization strategies to improve convergence speed, search efficiency, and robustness in handling complex, high-dimensional problems.
(a)
HGWOPSO combines Grey Wolf Optimization (GWO), which mimics the leadership hierarchy and cooperative hunting strategies of grey wolves, with Particle Swarm Optimization (PSO), which simulates swarm intelligence by adjusting individual particles’ positions based on both personal and global best solutions.
  • GWO component: Enhances exploration by leveraging alpha, beta, and delta wolves to direct search behavior while maintaining diversity.
  • PSO component: Provides efficient local exploitation by refining particle positions based on velocity updates, improving convergence speed and accuracy.
  • Strengths:
  • Balances global search (GWO) and local refinement (PSO), preventing premature convergence.
  • Adaptable to dynamic data, ensuring robustness in real-time IoT environments.
  • Reduces computational complexity compared to purely evolutionary algorithms.
  • Weaknesses:
  • Requires fine-tuning of control parameters (e.g., inertia weight, learning factors) for optimal performance.
  • Convergence slows down when dealing with highly complex multimodal problems.
HGWOPSO is particularly effective in IoT network monitoring due to its ability to rapidly adapt to fluctuating traffic loads and detect anomalies while minimizing unnecessary computational overhead. It ensures that deep learning models are fine-tuned to achieve high precision and recall without excessive processing time, making it suitable for resource-constrained IoT environments.
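The sketch below outlines one possible GWO-PSO hybrid position update of the kind described above, assuming the standard GWO encircling equations (guidance by the alpha, beta, and delta wolves) blended with a PSO velocity term. It is an illustration of the hybridization idea, not the exact HGWOPSO update used in this study, and it omits the surrounding fitness-evaluation loop, leader selection, and boundary handling.

```python
# One possible GWO-PSO hybrid position update (illustrative sketch, not the study's exact rule).
import numpy as np

rng = np.random.default_rng(0)

def hgwopso_step(pos, vel, pbest, alpha, beta, delta, t, T, w=0.7, c1=1.5, c2=1.5):
    """One iteration over the population; pos, vel, pbest are n x dim arrays, leaders are dim vectors."""
    a = 2 - 2 * t / T                                    # GWO control parameter, decays from 2 to 0
    new_pos = np.empty_like(pos)
    for i, x in enumerate(pos):
        guided = []
        for leader in (alpha, beta, delta):              # GWO encircling around the three leaders
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            guided.append(leader - A * np.abs(C * leader - x))
        x_gwo = np.mean(guided, axis=0)                  # GWO candidate position
        r3, r4 = rng.random(x.shape), rng.random(x.shape)
        vel[i] = w * vel[i] + c1 * r3 * (pbest[i] - x) + c2 * r4 * (alpha - x)   # PSO velocity
        new_pos[i] = x_gwo + vel[i]                      # apply PSO-style move to the GWO candidate
    return new_pos, vel
```

In this sketch the PSO term pulls each candidate toward its personal best and the alpha wolf (acting as the global best), while the GWO term preserves exploration; the blend is one common way such hybrids are constructed.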
(b)
HWCOAHHO integrates World Cup Optimization (WCO), inspired by competitive tournament selection, with Harris Hawks Optimization (HHO), which mimics the surprising pounce strategy of Harris hawks.
  • WCO component: Introduces competition-based selection, allowing the best solutions to advance while weaker ones are eliminated, ensuring progressive solution refinement.
  • HHO component: Simulates coordinated hunting tactics, balancing soft and hard besiege strategies for adaptive search capabilities.
  • Strengths:
  • Excels in high-dimensional, multimodal optimization problems common in IoT networks.
  • Incorporates adaptive switching between exploration and exploitation to avoid local optima.
  • Computationally efficient while maintaining high accuracy and robustness in network anomaly detection.
  • Weaknesses:
  • Requires a balance between competition (WCO) and adaptive hunting (HHO) to avoid excessive elitism.
  • Needs additional convergence control mechanisms in highly noisy datasets.
HWCOAHHO is particularly advantageous for IoT anomaly detection and network monitoring because of its dynamic adaptability. By integrating competitive selection with adaptive hunting, it efficiently fine-tunes deep learning models in highly variable IoT traffic conditions. Its ability to handle large-scale data and dynamic network environments makes it ideal for real-time IoT monitoring applications.
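The sketch below illustrates one possible way to combine the two components described above: Harris Hawks-style soft and hard besiege moves refine each candidate around the current best solution, and a tournament-style elimination inspired by the World Cup strategy regenerates the weaker half of the population around the winners. It is an illustration of the idea, not the exact HWCOAHHO formulation, and the fitness-evaluation loop and stopping criteria are omitted.

```python
# Illustrative WCO + HHO combination (sketch only, not the paper's exact HWCOAHHO algorithm).
import numpy as np

rng = np.random.default_rng(1)

def hho_move(x, rabbit, t, T):
    """Soft or hard besiege around the best solution ("rabbit"), depending on escaping energy."""
    E = 2 * (1 - t / T) * (2 * rng.random() - 1)     # escaping energy decays over iterations
    if abs(E) >= 0.5:                                # soft besiege
        J = 2 * (1 - rng.random())                   # random jump strength
        return (rabbit - x) - E * np.abs(J * rabbit - x)
    return rabbit - E * np.abs(rabbit - x)           # hard besiege

def wco_elimination(pop, fitness, lo, hi):
    """Keep the better half ("winners"); respawn the rest near randomly chosen winners."""
    order = np.argsort(fitness)                      # ascending fitness = better for minimization
    winners = pop[order[: len(pop) // 2]]
    refill = winners[rng.integers(0, len(winners), len(pop) - len(winners))]
    refill = np.clip(refill + 0.1 * rng.standard_normal(refill.shape) * (hi - lo), lo, hi)
    return np.vstack([winners, refill])
```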

5. Conclusions and Implications

This paper proposes a hybrid optimization approach that couples deep learning models with advanced optimization algorithms, namely HGWOPSO and HWCOAHHO, to enhance IoT network monitoring. Using these optimization algorithms in conjunction with FFNN, CNN, and MLP models yields significant improvements in anomaly detection and traffic prediction for IoT networks. Critical model parameters, such as learning rates, neuron counts, and layer configurations, were optimized in this study, and all models showed improved accuracy, precision, recall, and F1 score. The results show that HWCOAHHO delivers the best accuracy and robustness among the compared models, while HGWOPSO offers a balanced approach with effective improvements in precision and recall. These findings underline the value of the optimization techniques in real-time IoT applications, where high accuracy and a very low false-positive rate are essential for anomaly detection performance.
This research has important implications for IoT network monitoring. Embedding advanced hybrid optimization algorithms in deep learning models enables more accurate and adaptive real-time monitoring, which is paramount for handling the ever-increasing complexity of IoT systems. As IoT networks continue to grow, so does the need for anomaly detection and traffic prediction mechanisms that sustain the stability and security of these networks; the hybrid optimization approach proposed here addresses these challenges by further improving the adaptability and performance of deep learning models under varying network conditions. The results also suggest that these optimization techniques could be applied in other domains that require real-time data analytics and decision-making, such as smart cities, healthcare systems, and industrial automation. Some limitations were nevertheless identified, especially regarding the computational requirements of the proposed hybrid optimization algorithms: HGWOPSO and HWCOAHHO improve model performance substantially, yet their higher computational complexity challenges real-time use on low-resource devices, and the scalability testing in highly dynamic networks remains limited. Future research should therefore target simplifying these algorithms to reduce computational overhead and further enhance their real-time applicability. More scalability tests on complex IoT networks and across a variety of IoT applications are needed to establish the generalizability of the proposed methods, and integrating further machine learning techniques with the optimization algorithms would strengthen the models' adaptability to new data and ever-changing network conditions. Finally, experimental validation through hardware implementation will provide practical insight into the feasibility and effectiveness of these optimization techniques in real-world IoT systems and further evidence of their suitability for large-scale deployment. The methods can also be adapted for industrial IoT monitoring and smart cities, where robust real-time anomaly detection is critical.

Author Contributions

Conceptualization, M.Q.J.A.-Z. and M.Ç.; methodology, M.Q.J.A.-Z. and M.Ç.; software, M.Q.J.A.-Z. and M.Ç.; validation, M.Q.J.A.-Z. and M.Ç.; formal analysis, M.Q.J.A.-Z. and M.Ç.; investigation, M.Q.J.A.-Z. and M.Ç.; resources, M.Q.J.A.-Z.; data curation, M.Q.J.A.-Z. and M.Ç.; writing—original draft preparation, M.Q.J.A.-Z.; writing—review and editing, M.Q.J.A.-Z.; visualization, M.Ç.; supervision, M.Q.J.A.-Z.; project administration, M.Ç. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data will be made available on request.

Acknowledgments

Special thanks to Mesut Çevik for his invaluable help with data collection and participant recruitment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, S.; Shao, X.; Zhang, W.; Zhang, Q. Distributed Multicircular Circumnavigation Control for UAVs with Desired Angular Spacing. Def. Technol. 2024, 31, 429–446. [Google Scholar] [CrossRef]
  2. Kok, C.L.; Ho, C.K.; Lee, T.K.; Loo, Z.Y.; Koh, Y.Y.; Chai, J.P. A Novel and Low-Cost Cloud-Enabled IoT Integration for Sustainable Remote Intravenous Therapy Management. Electronics 2024, 13, 1801. [Google Scholar] [CrossRef]
  3. Kok, C.L.; Kusuma, I.M.B.P.; Koh, Y.Y.; Tang, H.; Lim, A.B. Smart Aquaponics: An Automated Water Quality Management System for Sustainable Urban Agriculture. Electronics 2024, 13, 820. [Google Scholar] [CrossRef]
  4. Uzun, M. Flight control system design of UAV with wing incidence angle simultaneously and stochastically varied. Aircr. Eng. Aerosp. Technol. 2024, 96, 715–725. [Google Scholar] [CrossRef]
  5. Long, H.; Duan, H. Cooperative mission planning based on game theory for UAVs and USVs heterogeneous system in dynamic scenario. Aircr. Eng. Aerosp. Technol. 2024, 96, 1128–1138. [Google Scholar] [CrossRef]
  6. Li, Z.; Li, H.; Liu, Y.; Jin, L.; Wang, C. Indoor fixed-point hovering control for UAVs based on visual inertial SLAM. Robot. Intell. Autom. 2024, 44, 648–657. [Google Scholar] [CrossRef]
  7. Al-Radaideh, A.; Sun, L. Self-localization of tethered drones without a cable force sensor in GPS-denied environments. Drones 2021, 5, 135. [Google Scholar] [CrossRef]
  8. Wu, Q.; Zhu, Q. Fixed-time fault-tolerant attitude tracking control for UAV based on fixed-time extended state observer. Aircr. Eng. Aerosp. Technol. 2024, 96, 838–844. [Google Scholar] [CrossRef]
  9. Pehlivanoglu, V.Y.; Pehlivanoğlu, P. An efficient path planning approach for autonomous multi-UAV system in target coverage problems. Aircr. Eng. Aerosp. Technol. 2024, 96, 690–706. [Google Scholar] [CrossRef]
  10. Benaouali, A.; Boutemedjet, A. Multidisciplinary analysis and structural optimization for the aeroelastic sizing of a UAV wing using open-source code integration. Aircr. Eng. Aerosp. Technol. 2024, 96, 585–592. [Google Scholar] [CrossRef]
  11. Guedes, J.J.; Goedtel, A.; Castoldi, M.F.; Sanches, D.S.; Serni, P.J.A.; Rezende, A.F.F.; Bazan, G.H.; de Souza, W.A. Three-phase induction motor fault identification using optimization algorithms and intelligent systems. Soft Comput. 2024, 28, 6709–6724. [Google Scholar] [CrossRef]
  12. Dawood, A.; Ismeil, M.A.; Hussein, H.S.; Hasaneen, B.M.; Abdel-Aziz, A.M. An Efficient Protection Scheme Against Single-Phasing Fault for Three-Phase Induction Motor. IEEE Access 2024, 12, 6298–6317. [Google Scholar] [CrossRef]
  13. Elbarbary, Z.; Al-Harbi, O.; Al-Gahtani, S.F.; Irshad, S.M.; Abdelaziz, A.Y.; Mossa, M.A. Review of speed estimation algorithms for three- phase induction motor. MethodsX 2024, 12, 102546. [Google Scholar] [CrossRef] [PubMed]
  14. Quan, N.V.; Long, M.T. Sliding mode control method for three-phase induction motor with magnetic saturation. Int. J. Dyn. Control 2024, 12, 1522–1532. [Google Scholar] [CrossRef]
  15. Sun, X.; Lin, X.; Guo, D.; Lei, G.; Yao, M. Improved deadbeat predictive current control with extended state observer for dual three-phase PMSMs. IEEE Trans. Power Electron. 2024, 39, 6769–6782. [Google Scholar] [CrossRef]
  16. Subbarao, M.; Dasari, K.; Duvvuri, S.S.; Prasad, K.; Narendra, B.; Krishna, V.M. Design, control and performance comparison of PI and ANFIS controllers for BLDC motor driven electric vehicles. Meas. Sensors 2024, 31, 101001. [Google Scholar] [CrossRef]
  17. Dauksha, G.; Górski, D.; Iwański, G. State-feedback control of a grid-tied cascaded brushless doubly-fed induction machine. Electr. Power Syst. Res. 2024, 228, 110043. [Google Scholar] [CrossRef]
  18. AlShorman, O.; Irfan, M.; Abdelrahman, R.B.; Masadeh, M.; Alshorman, A.; Sheikh, M.A.; Saad, N.; Rahman, S. Advancements in condition monitoring and fault diagnosis of rotating machinery: A comprehensive review of image-based intelligent techniques for induction motors. Eng. Appl. Artif. Intell. 2024, 130, 107724. [Google Scholar] [CrossRef]
  19. Jie, L.; Yuanqing, X. On stability analysis of nonlinear ADRC-based control system with application to inverted pendulum problems. J. Syst. Eng. Electron. 2024, 35, 1563–1573. [Google Scholar]
  20. Basil, N.; Marhoon, H.M.; Hayal, M.R.; Elsayed, E.E.; Nurhidayat, I.I.; Shah, M.A. Black-hole optimization algorithm with FOPID-based automation intelligence photovoltaic system for voltage and power issues. Aust. J. Electr. Electron. Eng. 2024, 21, 115–127. [Google Scholar] [CrossRef]
  21. Basil, N.; Marhoon, H.M. Selection and evaluation of FOPID criteria for the X-15 adaptive flight control system (AFCS) via Lyapunov candidates: Optimizing trade-offs and critical values using optimization algorithms. e-Prime-Adv. Electr. Eng. Electron. Energy 2023, 6, 100305, Corrected in e-Prime-Adv. Electr. Eng. Electron. Energy 2024, 8, 100589. [Google Scholar] [CrossRef]
  22. Toren, M. Optimization of transformer parameters at distribution and power levels with hybrid Gray wolf-whale optimization algorithm. Eng. Sci. Technol. an Int. J. 2023, 43, 101439. [Google Scholar] [CrossRef]
  23. Humaidi, A.J.; Najem, H.T.; Al-Dujaili, A.Q.; Pereira, D.A.; Ibraheem, I.K.; Azar, A.T. Social spider optimization algorithm for tuning parameters in PD-like Interval Type-2 Fuzzy Logic Controller applied to a parallel robot. Meas. Control. 2021, 54, 303–323. [Google Scholar] [CrossRef]
  24. Ogawara, R.; Kaczmarczyk, S.; Terumichi, Y. Numerical approach for flexible body with internal boundary movement. Sci. Rep. 2023, 13, 5302. [Google Scholar] [CrossRef]
  25. Oliva-Palomo, F.; Mercado-Ravell, D.; Castillo, P. Aerial transportation control of suspended payloads with multiple agents. J. Frankl. Inst. 2024, 361, 106787. [Google Scholar] [CrossRef]
  26. Mohamadwasel, N.B.; Ma’arif, A. NB Theory with Bargaining Problem: A New Theory. Int. J. Robot. Control Syst. 2022, 2, 606–609. [Google Scholar] [CrossRef]
  27. Liu, Q.; Bao, J.; Shao, Y.; Zheng, L.; Xu, H. Dynamic modeling and underwater configuration analysis of fiber optic cable for UUV-launched UAV. Ocean Eng. 2024, 303, 117774. [Google Scholar] [CrossRef]
  28. Mittal, N.; Ivanova, N.; Jain, V.; Vishnevsky, V. Reliability and availability analysis of high-altitude platform stations through semi-Markov modeling. Reliab. Eng. Syst. Saf. 2024, 252, 110419. [Google Scholar] [CrossRef]
  29. Marhoon, H.M.; Basil, N.; Mohammed, A.F. Medical Defense Nanorobots (MDNRs): A new evaluation and selection of controller criteria for improved disease diagnosis and patient safety using NARMA(L2)-FOP + D(ANFIS)µ—Iλ-based Archimedes Optimization Algorithm. Int. J. Inf. Technol. 2024. [Google Scholar] [CrossRef]
  30. Basil, N.; Alqaysi, M.; Deveci, M.; Albahri, A.; Albahri, O.; Alamoodi, A. Evaluation of autonomous underwater vehicle motion trajectory optimization algorithms. Knowl.-Based Syst. 2023, 276, 110722. [Google Scholar]
  31. Karasahin, A.T. Characterization of different hinge angles for swashplateless micro aerial robots. Eng. Sci. Technol. Int. J. 2024, 55, 101750. [Google Scholar] [CrossRef]
  32. Marhoon, H.M.; Alanssari, A.I.; Basil, N. Design and Implementation of an Intelligent Safety and Security System for Vehicles Based on GSM Communication and IoT Network for Real-Time Tracking. J. Robot. Control (JRC) 2023, 4, 708–718. [Google Scholar] [CrossRef]
  33. Basil, N.; Marhoon, H.M. Towards evaluation of the PID criteria based UAVs observation and tracking head within resizable selection by COA algorithm. Results Control Optim. 2023, 12, 100279. [Google Scholar] [CrossRef]
  34. Song, M.; Huang, P. Dynamics and anti-disturbance control for tethered aircraft system. Nonlinear Dyn. 2022, 110, 2383–2399. [Google Scholar] [CrossRef]
  35. Basil, N.; Marhoon, H.M.; Ibrahim, A.R. A new thrust vector-controlled rocket based on JOA using MCDA. Meas. Sensors 2023, 26, 100672. [Google Scholar] [CrossRef]
  36. Salazar, F.; Martinez-Garcia, M.S.; de Castro, A.; Logroño, N.; Cazorla-Logroño, M.F.; Guamán-Molina, J.; Gómez, C. Optimization of the solar energy storage capacity for a monitoring UAV. Sustain. Futur. 2023, 7, 100146. [Google Scholar] [CrossRef]
  37. Marhoon, H.M.; Basil, N.; Ma’arif, A. Exploring Blockchain Data Analysis and Its Communications Architecture: Achievements, Challenges, and Future Directions: A Review Article. Int. J. Robot. Control Syst. 2023, 3, 609–626. [Google Scholar] [CrossRef]
  38. Basil, N.; Marhoon, H.M.; Gokulakrishnan, S.; Buddhi, D. Jaya optimization algorithm implemented on a new novel design of 6-DOF AUV body: A case study. Multimedia Tools Appl. 2022, 1–26. [Google Scholar] [CrossRef]
  39. Raheem, F.S.; Basil, N. Automation intelligence photovoltaic system for power and voltage issues based on Black Hole Optimization algorithm with FOPID. Meas. Sensors 2023, 25, 100640. [Google Scholar] [CrossRef]
  40. Mohammed, A.F.; Basil, N.; Abdulmaged, R.B.; Marhoon, H.M.; Ridha, H.M.; Ma’Arif, A.; Suwarno, I. Selection and Evaluation of Robotic Arm based Conveyor Belts (RACBs) Motions: NARMA(L2)-FO(ANFIS)PD-I based Jaya Optimization Algorithm. Int. J. Robot. Control. Syst. 2024, 4, 262–290. [Google Scholar] [CrossRef]
  41. Marhoon, H.M.; Ibrahim, A.R.; Basil, N. Enhancement of Electro Hydraulic Position Servo Control System Utilising Ant Lion Optimiser. Int. J. Nonlinear Anal. Appl. 2021, 12, 2453–2461. [Google Scholar]
  42. Liang, D.; Ding, L.; Lu, M.; Ma, R.; Cao, J. Quantitative stability analysis of an unmanned tethered quadrotor. Int. J. Aeronaut. Space Sci. 2023, 24, 905–918. [Google Scholar] [CrossRef]
  43. Zhu, Y.; Zheng, Z.; Shao, J.; Huang, H.; Zheng, W.X. Modeling, Robust Control Design, and Experimental Verification for Quadrotor Carrying Cable-Suspended Payload. IEEE Trans. Autom. Sci. Eng. 2024, 1–15. [Google Scholar] [CrossRef]
  44. Mohammed, A.F.; Marhoon, H.M.; Basil, N.; Ma’Arif, A. A New Hybrid Intelligent Fractional Order Proportional Double Derivative + Integral (FOPDD+I) Controller with ANFIS Simulated on Automatic Voltage Regulator System. Int. J. Robot. Control Syst. 2024, 4, 463–479. [Google Scholar] [CrossRef]
  45. Ibrahim, A.R.; Basil, N.; Mahdi, M.I. Implementation enhancement of AVR control system within optimization techniques. Int. J. Nonlinear Anal. Appl. 2021, 12, 2021–2027. [Google Scholar]
  46. Mohamadwasel, N.B. Rider Optimization Algorithm implemented on the AVR Control System using MATLAB with FOPID. IOP Conf. Ser. Mater. Sci. Eng. 2020, 928, 032017. [Google Scholar] [CrossRef]
  47. Song, M.; Zhang, F.; Huang, B.; Huang, P. Enhanced anti-disturbance control for tethered aircraft system under input saturation and actuator faults. Nonlinear Dyn. 2023, 111, 21037–21050. [Google Scholar] [CrossRef]
  48. Mahmoud, T.A.; El-Hossainy, M.; Abo-Zalam, B.; Shalaby, R. Fractional-order fuzzy sliding mode control of uncertain nonlinear MIMO systems using fractional-order reinforcement learning. Complex Intell. Syst. 2024, 10, 3057–3085. [Google Scholar] [CrossRef]
  49. Kalidas Kirange, Y.; Nema, P. Advancements in Power System Stability: FOPID Control Optimization using Harris Hawk Algorithms in SMIB Systems. In Proceedings of the 2024 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2024; pp. 1–6. [Google Scholar] [CrossRef]
  50. Azid, S.I.; Ali, S.A.; Kumar, M.; Cirrincione, M.; Fagiolini, A. Precise Trajectory Tracking of Multi-Rotor UAVs Using Wind Disturbance Rejection Approach. IEEE Access 2023, 11, 91796–91806. [Google Scholar] [CrossRef]
  51. Brandao, A.S.; Smrcka, D.; Pairet, E.; Nascimento, T.; Saska, M. Side-Pull Maneuver: A Novel Control Strategy for Dragging a Cable-Tethered Load of Unknown Weight Using a UAV. IEEE Robot. Autom. Lett. 2022, 7, 9159–9166. [Google Scholar] [CrossRef]
  52. Papadimitriou, A.; Jafari, H.; Mansouri, S.S.; Nikolakopoulos, G. External force estimation and disturbance rejection for Micro Aerial Vehicles. Expert Syst. Appl. 2022, 200, 116883. [Google Scholar] [CrossRef]
  53. Suriyan, K.; Nagarajan, R. Particle swarm optimization in biomedical technologies: Innovations, challenges, and opportunities. In Emerging Technologies for Health Literacy and Medical Practice; IGI: Antwerp, Belgium, 2024; pp. 220–238. [Google Scholar]
  54. Águila-León, J.; Vargas-Salgado, C.; Díaz-Bello, D.; Montagud-Montalvá, C. Optimizing photovoltaic systems: A meta-optimization approach with GWO-Enhanced PSO algorithm for improving MPPT controllers. Renew. Energy 2024, 230, 120892. [Google Scholar] [CrossRef]
  55. Basil, N.; Sabbar, B.M.; Marhoon, H.M.; Mohammed, A.F.; Ma’arif, A. Systematic Review of Unmanned Aerial Vehicles Control: Challenges, Solutions, and Meta-Heuristic Optimization. Int. J. Robot. Control Syst. 2024, 4, 1794–1818. [Google Scholar] [CrossRef]
  56. Mohammed, Y.R.; Basil, N.; Bayat, O.; Mohammed, A.H. A New Novel Optimization Techniques Implemented on the AVR Control System using MATLAB-SIMULINK. Int. J. Adv. Sci. Technol. 2020, 29, 4515–4521. [Google Scholar]
  57. Basil, N.; Marhoon, H.M.; Mohammed, A.F. Evaluation of a 3-DOF helicopter dynamic control model using FOPID controller-based three optimization algorithms. Int. J. Inf. Technol. 2024, 1–10. [Google Scholar] [CrossRef]
  58. Alamoodi, A.H.; Albahri, O.S.; Zaidan, A.A.; AlSattar, H.A.; Ahmed, M.A.; Pamucar, D.; Zaidan, B.B.; Albahri, A.S.; Mahmoud, M.S. New extension of fuzzy-weighted zero-inconsistency and fuzzy decision by opinion score method based on cubic pythagorean fuzzy environment: A benchmarking case study of sign language recognition systems. Int. J. Fuzzy Syst. 2022, 24, 1909–1926. [Google Scholar] [CrossRef]
  59. Pandey, V.; Komal, H.; Dincer, H. A review on TOPSIS method and its extensions for different applications with recent development. Soft Comput. 2023, 27, 18011–18039. [Google Scholar] [CrossRef]
Figure 1. The methodology phases.
Figure 1. The methodology phases.
Symmetry 17 00388 g001
Figure 2. Illustration of synthetic and real-world IoT network data characteristics.
Figure 2. Illustration of synthetic and real-world IoT network data characteristics.
Symmetry 17 00388 g002
Figure 3. Architecture of the Feedforward Neural Network (FFNN).
Figure 3. Architecture of the Feedforward Neural Network (FFNN).
Symmetry 17 00388 g003
Figure 4. Architecture of CNN and pooling layers.
Figure 4. Architecture of CNN and pooling layers.
Symmetry 17 00388 g004
Figure 5. Architecture of the MLP.
Figure 5. Architecture of the MLP.
Symmetry 17 00388 g005
Figure 6. Comparative Confusion Matrices for Deep Learning Models and Optimization Techniques (FFNNs, CNNs, MLPs, HGWOPSO, HWCOAHHO) in IoT Network Monitoring. (A) Training Progress of Deep Learning Model for IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization Techniques. (B) Confusion Matrix for Deep Learning Model Performance in IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization. (C) FFNNs Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (D) MLP Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (E) CNNs Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (F) HGWOPSO Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (G) HWCOAHHO Confusion Matrix for Performance Evaluation in IoT Network Monitoring.
Figure 6. Comparative Confusion Matrices for Deep Learning Models and Optimization Techniques (FFNNs, CNNs, MLPs, HGWOPSO, HWCOAHHO) in IoT Network Monitoring. (A) Training Progress of Deep Learning Model for IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization Techniques. (B) Confusion Matrix for Deep Learning Model Performance in IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization. (C) FFNNs Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (D) MLP Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (E) CNNs Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (F) HGWOPSO Confusion Matrix for Performance Evaluation in IoT Network Monitoring. (G) HWCOAHHO Confusion Matrix for Performance Evaluation in IoT Network Monitoring.
Symmetry 17 00388 g006aSymmetry 17 00388 g006bSymmetry 17 00388 g006cSymmetry 17 00388 g006d
Figure 7. Comprehensive Confusion Matrix Comparison of Deep Learning Models for IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization Techniques. (A) Comparative Evaluation of Deep Learning Models for IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization Techniques. (B) Comparative Confusion Matrices for Deep Learning Models in IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization Techniques.
Figure 8. Benchmark Function of Deep Learning Models for IoT Network Monitoring Using HGWOPSO and HWCOAHHO Optimization Techniques.
Table 1. Comparative analysis of traditional machine learning and deep learning models for IoT network monitoring.
Reference | Model Type | Method Type | Criteria Type | Objectives | Accomplished Results
[12] Mehmood F, Ahmad S, and Kim D. | DTs | Traditional Machine Learning | Simplicity and Interpretability | Provide a simple and interpretable model for IoT monitoring tasks. | Achieved moderate performance but was prone to overfitting and scalability issues.
[13] D’Alconzo et al. | DTs | Traditional Machine Learning | Scalability and Overfitting | Surveyed data-driven techniques for IoT monitoring and anomaly detection. | Demonstrated moderate utility but highlighted scalability and overfitting issues.
[14] Sheng et al. | SVMs | Traditional Machine Learning | Accuracy and Non-linear Data | Enhance accuracy and robustness in IoT monitoring. | Delivered moderate-high performance, but computational cost was a limitation.
[15] Z. M. Fadlullah et al. | SVMs | Traditional Machine Learning | Non-linear Dependency | Extend the capabilities of IoT traffic control and monitoring systems. | Showcased robustness for non-linear data but required high computational resources.
[16] Tang et al. | Naive Bayes (NB) | Traditional Machine Learning | Feature Independence | Ensure fast and efficient processing of IoT network data. | Demonstrated low performance due to independence assumptions.
[17] Tang et al. | CNNs | Deep Learning | Spatial Hierarchies | Detect spatial patterns and anomalies in IoT traffic. | Achieved high accuracy with large datasets but computationally intensive.
[18] Kato et al. | CNNs | Deep Learning | Spatial and Complex Patterns | Leverage hierarchical architectures for robust IoT anomaly detection. | Demonstrated significant performance improvements in IoT monitoring tasks.
[19] Tang et al. | MLPs | Deep Learning | Non-linear Relationships | Identify complex dependencies between IoT metrics. | Provided high performance with scalability but needed regularization to avoid overfitting.
[20] Patel and Prajapati | MLPs | Deep Learning | Non-linear Relationships | Enhance anomaly detection and resource allocation capabilities in IoT networks. | Achieved high detection rates with effective modeling of complex relationships.
[21] Ghazavi and Liao | FFNNs | Deep Learning | Simplicity and Efficiency | Provide computationally efficient IoT monitoring solutions for simple traffic patterns. | Showed moderate-high performance but limited for complex data.
[22] Rish | FFNNs | Deep Learning | Simplicity and Efficiency | Enable streamlined IoT anomaly detection and forecasting. | Efficiently analyzed straightforward data streams but lacked capacity for intricate scenarios.
Table 2. Benchmark function performance comparison for HGWOPSO and HWCOAHHO algorithms.
Benchmark Function | Algorithm | Dimensionality | Search Agents | Search Range
$f_1(x, y) = x^2 + y^2$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_2(x, y) = 100(y - x^2)^2 + (1 - x)^2$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_3(x, y) = 20 + x^2 - 10\cos(2\pi x) + y^2 - 10\cos(2\pi y)$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_4(x, y) = 1 + \frac{x^2 + y^2}{4000} - \cos(x)\cos\!\left(\frac{y}{\sqrt{2}}\right) + 20 + e$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_5(x, y) = \sin^2(3\pi x) + (x - 1)^2\left(1 + \sin^2(3\pi y)\right) + (y - 1)^2\left(1 + \sin^2(2\pi y)\right)$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−10, 10]
$f_6(x, y) = (1.5 - x + xy)^2 + (2.25 - x + xy^2)^2 + (2.625 - x + xy^3)^2$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_7(x, y) = (x + 2y - 7)^2 + (2x + y - 5)^2$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_8(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
$f_9(x, y) = 2x^2 - 1.05x^4 + \frac{x^6}{6} + xy + y^2$ | HGWOPSO, HWCOAHHO | 2 | 20 | [−5, 5]
Table 3. Comparative assessment of deep learning models for IoT network monitoring (evaluation decision matrix).
Model | Accuracy | Precision | Recall | F1 Score
FFNNs | 0.90 | 0.88 | 0.85 | 0.86
CNNs | 0.92 | 0.89 | 0.87 | 0.88
MLPs | 0.91 | 0.89 | 0.86 | 0.87
HGWOPSO | 0.95 | 0.92 | 0.90 | 0.91
HWCOAHHO | 0.96 | 0.93 | 0.91 | 0.92
Table 4. Benchmark Function Performance Comparison for HGWOPSO and HWCOAHHO Algorithms.
Function | Algorithm | Best Value | Worst Value | Avg. Value | Median Value | STD
$f_1$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_1$ | HWCOAHHO | 2.34 × 10⁻¹⁰ | 4.56 × 10⁻⁹ | 2.12 × 10⁻⁹ | 1.34 × 10⁻⁹ | 1.56 × 10⁻⁹
$f_2$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_2$ | HWCOAHHO | 1.23 × 10⁻¹¹ | 7.45 × 10⁻⁹ | 3.67 × 10⁻⁹ | 1.45 × 10⁻⁹ | 2.34 × 10⁻⁹
$f_3$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_3$ | HWCOAHHO | 2.56 × 10⁻⁷ | 6.78 × 10⁻⁶ | 3.45 × 10⁻⁶ | 2.12 × 10⁻⁶ | 2.89 × 10⁻⁶
$f_4$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_4$ | HWCOAHHO | 1.34 × 10⁻⁶ | 8.56 × 10⁻⁵ | 4.23 × 10⁻⁵ | 2.45 × 10⁻⁵ | 3.12 × 10⁻⁵
$f_5$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_5$ | HWCOAHHO | 3.45 × 10⁻⁴ | 9.87 × 10⁻³ | 4.56 × 10⁻³ | 2.12 × 10⁻³ | 3.56 × 10⁻³
$f_6$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_6$ | HWCOAHHO | 3.45 × 10⁻⁶ | 4.56 × 10⁻⁴ | 2.45 × 10⁻⁴ | 1.56 × 10⁻⁴ | 2.34 × 10⁻⁴
$f_7$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_7$ | HWCOAHHO | 3.12 × 10⁻⁶ | 1.45 × 10⁻³ | 7.89 × 10⁻⁴ | 5.12 × 10⁻⁴ | 6.23 × 10⁻⁴
$f_8$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_8$ | HWCOAHHO | 2.56 × 10⁻⁷ | 2.23 × 10⁻³ | 1.12 × 10⁻³ | 5.67 × 10⁻⁴ | 7.89 × 10⁻⁴
$f_9$ | HGWOPSO | 0 | 0 | 0 | 0 | 0
$f_9$ | HWCOAHHO | 1.23 × 10⁻⁴ | 7.89 × 10⁻³ | 3.45 × 10⁻³ | 1.89 × 10⁻³ | 2.56 × 10⁻³
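Table 4 summarizes each algorithm on each function with the best, worst, average, median, and standard deviation of the objective value over repeated runs. The sketch below shows one way such statistics can be aggregated; `run_optimizer` is a hypothetical placeholder (plain random sampling) standing in for HGWOPSO or HWCOAHHO, which are not reproduced here.

```python
import random
import statistics

# Hypothetical stand-in for an optimizer run: random sampling within the search
# range (NOT the authors' HGWOPSO/HWCOAHHO algorithms).
def run_optimizer(objective, bounds, agents=20, iterations=100):
    lo, hi = bounds
    best = float("inf")
    for _ in range(agents * iterations):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        best = min(best, objective(x, y))
    return best

# Aggregate the per-run best values into the statistics reported in Table 4.
def summarize(objective, bounds, runs=30):
    results = [run_optimizer(objective, bounds) for _ in range(runs)]
    return {
        "best": min(results),
        "worst": max(results),
        "avg": statistics.mean(results),
        "median": statistics.median(results),
        "std": statistics.stdev(results),
    }

print(summarize(lambda x, y: x**2 + y**2, (-5, 5)))  # f1 (sphere) as an example
```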
Table 5. The normalized decision matrix.
Model | Accuracy | Precision | Recall | F1 Score
FFNNs | 0.9375 | 0.946 | 0.934 | 0.934
CNNs | 0.9583 | 0.956 | 0.956 | 0.956
MLPs | 0.9479 | 0.956 | 0.946 | 0.948
HGWOPSO | 0.9896 | 0.987 | 0.989 | 0.989
HWCOAHHO | 1.0000 | 1.000 | 1.000 | 1.000
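The normalized scores in Table 5 are broadly consistent with dividing each column of Table 3 by its maximum value (the HWCOAHHO row, which normalizes to 1.000 throughout). The Python sketch below illustrates that max-normalization step; it reproduces Table 5 up to small rounding differences, and the variable names are illustrative rather than taken from the paper.

```python
# Minimal sketch: max-normalization of the Table 3 decision matrix.
# Assumption: each column is divided by its maximum; the paper's exact
# normalization is reported in Table 5, which this matches only approximately.

raw_scores = {                    # columns: Accuracy, Precision, Recall, F1 Score
    "FFNNs":     [0.90, 0.88, 0.85, 0.86],
    "CNNs":      [0.92, 0.89, 0.87, 0.88],
    "MLPs":      [0.91, 0.89, 0.86, 0.87],
    "HGWOPSO":   [0.95, 0.92, 0.90, 0.91],
    "HWCOAHHO":  [0.96, 0.93, 0.91, 0.92],
}

column_max = [max(col) for col in zip(*raw_scores.values())]
normalized = {
    model: [round(v / m, 4) for v, m in zip(row, column_max)]
    for model, row in raw_scores.items()
}
print(normalized["FFNNs"])  # -> [0.9375, 0.9462, 0.9341, 0.9348] (Table 5: 0.9375, 0.946, 0.934, 0.934)
```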
Table 6. Correlations Between the Criteria, Standard Deviations, and Final Weights of the Criteria.
Criteria | C1 (Accuracy) | C2 (Precision) | C3 (Recall) | C4 (F1 Score) | Standard Deviation | Weight
C1 (Accuracy) | 1 | 0.97 | 0.92 | 0.94 | 0.033 | 0.35
C2 (Precision) | 0.97 | 1 | 0.95 | 0.96 | 0.025 | 0.30
C3 (Recall) | 0.92 | 0.95 | 1 | 0.98 | 0.045 | 0.25
C4 (F1 Score) | 0.94 | 0.96 | 0.98 | 1 | 0.029 | 0.20
Table 7. The weighted decision matrix.
Model | C1 (Accuracy) | C2 (Precision) | C3 (Recall) | C4 (F1 Score) | Weighted C1 (Accuracy) | Weighted C2 (Precision) | Weighted C3 (Recall) | Weighted C4 (F1 Score)
FFNNs | 0.9375 | 0.946 | 0.934 | 0.934 | 0.9375 × 0.35 = 0.3281 | 0.946 × 0.30 = 0.2838 | 0.934 × 0.25 = 0.2335 | 0.934 × 0.20 = 0.1868
CNNs | 0.9583 | 0.956 | 0.956 | 0.956 | 0.9583 × 0.35 = 0.3354 | 0.956 × 0.30 = 0.2868 | 0.956 × 0.25 = 0.2390 | 0.956 × 0.20 = 0.1912
MLPs | 0.9479 | 0.956 | 0.946 | 0.948 | 0.9479 × 0.35 = 0.3318 | 0.956 × 0.30 = 0.2868 | 0.946 × 0.25 = 0.2365 | 0.948 × 0.20 = 0.1896
HGWOPSO | 0.9896 | 0.987 | 0.989 | 0.989 | 0.9896 × 0.35 = 0.3464 | 0.987 × 0.30 = 0.2961 | 0.989 × 0.25 = 0.2473 | 0.989 × 0.20 = 0.1978
HWCOAHHO | 1.0000 | 1.000 | 1.000 | 1.000 | 1.000 × 0.35 = 0.3500 | 1.000 × 0.30 = 0.3000 | 1.000 × 0.25 = 0.2500 | 1.000 × 0.20 = 0.2000
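As the worked products in the cells show, Table 7 is obtained by multiplying each normalized score from Table 5 by the corresponding criterion weight from Table 6. A minimal, self-contained Python sketch of that step:

```python
# Weighted decision matrix (Table 7) = normalized matrix (Table 5) scaled
# column-wise by the criteria weights reported in Table 6.
normalized_matrix = {               # values as reported in Table 5
    "FFNNs":    [0.9375, 0.946, 0.934, 0.934],
    "CNNs":     [0.9583, 0.956, 0.956, 0.956],
    "MLPs":     [0.9479, 0.956, 0.946, 0.948],
    "HGWOPSO":  [0.9896, 0.987, 0.989, 0.989],
    "HWCOAHHO": [1.0000, 1.000, 1.000, 1.000],
}
weights = [0.35, 0.30, 0.25, 0.20]  # C1..C4 weights from Table 6

weighted_matrix = {
    model: [round(v * w, 4) for v, w in zip(row, weights)]
    for model, row in normalized_matrix.items()
}
print(weighted_matrix["HGWOPSO"])   # -> [0.3464, 0.2961, 0.2473, 0.1978]
```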
Table 8. Separation Measures and Ranks for IoT Network Monitoring Models.
Model | $S_i^+$ | $S_i^-$ | $RA_i$ | Rank
FFNNs | 0.3281 | 0.2565 | 0.4390 | 4
CNNs | 0.3354 | 0.2462 | 0.4244 | 3
MLPs | 0.3318 | 0.2424 | 0.4209 | 2
HGWOPSO | 0.3464 | 0.2323 | 0.4021 | 5
HWCOAHHO | 0.3500 | 0.2135 | 0.3797 | 1
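For reference, the reported $RA_i$ values are close to the standard TOPSIS relative closeness $RA_i = S_i^-/(S_i^+ + S_i^-)$. The sketch below assumes that convention, so it reproduces the $RA_i$ column only approximately; the separation measures themselves are taken as reported in Table 8.

```python
# Minimal sketch, assuming the standard TOPSIS relative-closeness convention
# RA_i = S_i^- / (S_i^+ + S_i^-). Results are close to, but not identical with,
# the RA_i column reported in Table 8.
separation = {                      # (S_i^+, S_i^-) as reported in Table 8
    "FFNNs":    (0.3281, 0.2565),
    "CNNs":     (0.3354, 0.2462),
    "MLPs":     (0.3318, 0.2424),
    "HGWOPSO":  (0.3464, 0.2323),
    "HWCOAHHO": (0.3500, 0.2135),
}

closeness = {
    model: round(s_minus / (s_plus + s_minus), 4)
    for model, (s_plus, s_minus) in separation.items()
}
print(closeness["FFNNs"])  # -> 0.4388 (Table 8 reports 0.4390)
```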
