Search Results (2,273)

Search Parameters:
Keywords = global convergence

34 pages, 4757 KiB  
Article
Electrical Storm Optimization (ESO) Algorithm: Theoretical Foundations, Analysis, and Application to Engineering Problems
by Manuel Soto Calvo and Han Soo Lee
Mach. Learn. Knowl. Extr. 2025, 7(1), 24; https://doi.org/10.3390/make7010024 - 6 Mar 2025
Abstract
The electrical storm optimization (ESO) algorithm, inspired by the dynamic nature of electrical storms, is a novel population-based metaheuristic that employs three dynamically adjusted parameters: field resistance, field intensity, and field conductivity. Field resistance assesses the spread of solutions within the search space, reflecting strategy diversity. The field intensity balances the exploration of new territories and the exploitation of promising areas. The field conductivity adjusts the adaptability of the search process, enhancing the algorithm’s ability to escape local optima and converge on global solutions. These adjustments enable the ESO to adapt in real-time to various optimization scenarios, steering the search toward potential optima. ESO’s performance was rigorously tested on 60 benchmark problems from the IEEE CEC SOBC 2022 suite and against 20 well-known metaheuristics. The results demonstrate the superior performance of ESO, particularly in tasks requiring a nuanced balance between exploration and exploitation. Its efficacy is further validated through successful applications in four engineering domains, highlighting its precision, stability, flexibility, and efficiency. Additionally, the algorithm’s computational costs were evaluated in terms of the number of function evaluations and computational overhead, reinforcing its status as a standout choice in the metaheuristic field. Full article
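For readers who want a feel for how the three adaptive parameters interact, the following Python sketch mimics the high-level loop described in the abstract. It is not the authors' reference implementation: the update rules for field resistance, intensity, and conductivity, and the helper name `eso_like_minimize`, are illustrative assumptions only.

```python
import numpy as np

def eso_like_minimize(f, bounds, n_agents=30, n_iter=200, seed=0):
    """Illustrative population-based search loosely following the ESO description.

    The update rules below are assumptions for demonstration only; the published
    ESO algorithm defines field resistance, intensity, and conductivity differently.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pop = rng.uniform(lo, hi, size=(n_agents, lo.size))
    fit = np.apply_along_axis(f, 1, pop)
    best = pop[fit.argmin()].copy()

    for t in range(n_iter):
        # "Field resistance": spread of the population, used to gauge diversity.
        resistance = pop.std(axis=0).mean()
        # "Field intensity": shifts from exploration to exploitation over the run.
        intensity = 1.0 - t / n_iter
        # "Field conductivity": adaptability factor tied to the current diversity.
        conductivity = resistance / (resistance + 1.0)

        step = intensity * rng.normal(size=pop.shape) * (hi - lo) * 0.1
        pull = (1.0 - intensity) * conductivity * (best - pop)
        cand = np.clip(pop + step + pull, lo, hi)

        cand_fit = np.apply_along_axis(f, 1, cand)
        improved = cand_fit < fit
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
        if fit.min() < f(best):
            best = pop[fit.argmin()].copy()
    return best, fit.min()

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = eso_like_minimize(lambda x: float(np.sum(x**2)),
                                   (np.full(5, -5.0), np.full(5, 5.0)))
```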
Figure 1. Flowchart of the electrical storm optimization (ESO) algorithm illustrating the initialization of agents, the iterative adjustments of environmental parameters, and the continuous selection and refinement of solutions toward identifying the optimum.
Figure 2. Conceptual behavior of field intensity curves under different scenarios, showing the dynamic modulation of the transition between the exploration and exploitation stages.
Figure 3. Convergence curves of the algorithms for unimodal problems. The ESO (red line) shows consistent convergence and MFO (pink line) shows the highest performance variability.
Figure 4. Convergence curves of the algorithms for multimodal problems. The ESO (red line) shows consistent convergence, whereas the MFO (pink line) and PSO (orange line) show greater performance variability.
Figure 5. Normalized behavior of field resistance, field conductivity, field intensity, storm power, and progression toward the global best solution over 1000 iterations for benchmark functions F6, F20, F24, F33, F46, and F50.
Figure 6. Statistical results for the three groups of functions. The critical difference diagrams (A–D) illustrate the relative rankings of the algorithms, highlighting the groups of algorithms that are not significantly different from each other. The heatmaps (A1–D1) show the Bayesian probability that one algorithm outperforms the other.
12 pages, 1030 KiB  
Article
A New Finite-Difference Method for Nonlinear Absolute Value Equations
by Peng Wang, Yujing Zhang and Detong Zhu
Mathematics 2025, 13(5), 862; https://doi.org/10.3390/math13050862 - 5 Mar 2025
Viewed by 88
Abstract
In this paper, we propose a new finite-difference method for nonconvex absolute value equations. The nonsmooth unconstrained optimization problem equivalent to the absolute value equations is considered. A finite-difference technique is used to construct the linear programming subproblems that yield the search direction, so the algorithm avoids computing gradients and Hessian matrices of the problem. A new finite-difference parameter correction technique ensures the monotonic descent of the objective function. The convergence of the algorithm is analyzed, and numerical experiments are reported, indicating its effectiveness in comparison with state-of-the-art methods for absolute value equations. Full article
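As context for the abstract, the absolute value equation Ax − |x| = b can be recast as minimizing the nonsmooth residual ½‖Ax − |x| − b‖². The sketch below minimizes that merit function with a forward-difference gradient and a backtracking line search; it illustrates the derivative-free, monotone-descent idea but not the paper's LP-subproblem construction, and all function names are hypothetical.

```python
import numpy as np

def ave_residual(x, A, b):
    """Merit function for the absolute value equation Ax - |x| = b."""
    r = A @ x - np.abs(x) - b
    return 0.5 * float(r @ r)

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient; stands in for the derivative-free idea,
    not for the paper's LP-subproblem construction."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def solve_ave_fd(A, b, x0, tol=1e-8, max_iter=500):
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = fd_gradient(lambda z: ave_residual(z, A, b), x)
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search keeps the merit function monotonically decreasing,
        # echoing the monotone-descent requirement mentioned in the abstract.
        step = 1.0
        while ave_residual(x - step * g, A, b) > ave_residual(x, A, b) - 1e-4 * step * (g @ g):
            step *= 0.5
            if step < 1e-12:
                break
        x = x - step * g
    return x

# Small example: A chosen so the AVE has a unique solution (singular values > 1).
A = np.array([[4.0, 1.0], [0.0, 3.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x_est = solve_ave_fd(A, b, x0=np.zeros(2))
```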
Figure 1. Schematic diagram of selecting parameter t_k in Algorithm 1.
Figure 2. Comparison of the iterations of Algorithm 1 with SM and IP for ϵ = 10^-3 (first row, left), ϵ = 10^-5 (first row, right), ϵ = 10^-8 (second row, left), and ϵ = 10^-10 (second row, right).
Figure 3. Comparison of the CPU time of Algorithm 1 with SM and IP for ϵ = 10^-3 (first row, left), ϵ = 10^-5 (first row, right), ϵ = 10^-8 (second row, left), and ϵ = 10^-10 (second row, right).
18 pages, 7417 KiB  
Article
An Efficient Optimization Method for Large-Solution Space Electromagnetic Automatic Design
by Lingyan He, Fengling Peng and Xing Chen
Materials 2025, 18(5), 1159; https://doi.org/10.3390/ma18051159 - 5 Mar 2025
Viewed by 109
Abstract
In the field of electromagnetic design, it is sometimes necessary to search for the optimal design solution within a large solution space to complete the optimization. However, traditional optimization methods are not only slow in searching the solution space but are also prone to becoming trapped in local optima, leading to optimization failure. This paper proposes a dual-population genetic algorithm to quickly find the optimal solution for electromagnetic optimization problems in large solution spaces. The method involves two populations: the first population uses the powerful dynamic decision-making ability of reinforcement learning to adjust the crossover probability, making the optimization process more stable and enhancing the global optimization capability of the algorithm. The second population accelerates the convergence speed of the algorithm by employing a “leader dominance” mechanism, allowing the population to quickly approach the optimal solution. The two populations are integrated through an immigration operator, improving optimization efficiency. The effectiveness of the proposed method is demonstrated through the optimization design of an electromagnetic metasurface material. Furthermore, the method designed in this paper is not limited to the electromagnetic field and has practical value in other engineering optimization areas, such as vehicle routing optimization, energy system optimization, and fluid dynamics optimization. Full article
(This article belongs to the Special Issue Metamaterials and Metasurfaces: From Materials to Applications)
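A toy sketch of the dual-population idea follows: one population adapts its crossover probability from feedback (standing in for the Q-learning controller), the other applies a "leader dominance" bias, and an immigration step exchanges elites. Everything here, including the simple bit-string fitness, is an illustrative assumption rather than the IDPGA algorithm itself.

```python
import random

def evolve_dual_population(fitness, n_bits=49, pop_size=40, generations=100, migrate_every=5):
    """Toy dual-population GA in the spirit of the description.

    The Q-learning control of the crossover probability and the exact
    "leader dominance" mechanism are simplified to a feedback rule and an
    elitist bias; both are illustrative assumptions only.
    """
    rand_ind = lambda: [random.randint(0, 1) for _ in range(n_bits)]
    pop_a = [rand_ind() for _ in range(pop_size)]   # exploratory population
    pop_b = [rand_ind() for _ in range(pop_size)]   # fast-converging population
    cp = 0.8                                        # adaptive crossover probability
    best_so_far = max(pop_a + pop_b, key=fitness)

    def offspring(parents, cp, leader=None):
        children = []
        for _ in range(len(parents)):
            p1, p2 = random.sample(parents, 2)
            if leader is not None:        # "leader dominance": bias toward the leader
                p2 = leader
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:] if random.random() < cp else p1[:]
            i = random.randrange(n_bits)  # single-bit mutation
            child[i] ^= 1
            children.append(child)
        # Keep the best pop_size individuals; the last entry is therefore the worst.
        return sorted(parents + children, key=fitness, reverse=True)[:len(parents)]

    for g in range(generations):
        prev_best = fitness(best_so_far)
        pop_a = offspring(pop_a, cp)
        pop_b = offspring(pop_b, 0.9, leader=max(pop_b, key=fitness))
        best_so_far = max([best_so_far] + pop_a + pop_b, key=fitness)
        # Feedback rule standing in for the reinforcement-learning adjustment of cp.
        cp = min(0.95, cp + 0.02) if fitness(best_so_far) > prev_best else max(0.5, cp - 0.02)
        if g % migrate_every == 0:        # immigration operator: swap elites across populations
            pop_a[-1], pop_b[-1] = max(pop_b, key=fitness)[:], max(pop_a, key=fitness)[:]
    return best_so_far

# Example: maximize the number of ones (a stand-in for a metasurface bit pattern).
best = evolve_dual_population(fitness=sum)
```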
Figure 1. Random lattice electromagnetic super-surface.
Figure 2. Process design of IDPGA.
Figure 3. Traditional immigration operator.
Figure 4. Design of the second immigration operator.
Figure 5. Process of modifying CP and updating Q-table.
Figure 6. Difference between the traditional crossover method and the improved method. (a) Legacy crossover operation; (b) improved crossover operation.
Figure 7. Structure of the random lattice electromagnetic super-surface. (a) Honeycomb structure; (b) block structure; (c) super-surface side view.
Figure 8. The flow of parallel computing.
Figure 9. Performance comparison between IDPGA and traditional GA under the 7 × 7 structure. (a) Fitness curve; (b) average fitness curve; (c) standard deviation curve.
Figure 10. Optimization results of the 7 × 7 structure by IDPGA and traditional GA. (a) IDPGA results; (b) traditional GA result 1; (c) traditional GA result 2; (d) IDPGA bandwidth; (e) traditional GA bandwidth 1; (f) traditional GA bandwidth 2.
Figure 11. Performance comparison between IDPGA and traditional GA under the 5 × 5 structure. (a) Fitness curve; (b) average fitness curve; (c) standard deviation curve.
Figure 12. Optimization results of the 5 × 5 structure by IDPGA and traditional GA. (a) IDPGA results; (b) traditional GA result 1; (c) traditional GA result 2; (d) IDPGA bandwidth; (e) traditional GA bandwidth 1; (f) traditional GA bandwidth 2.
Figure 13. Performance differences in each multi-population algorithm. (a) Fitness curve; (b) standard deviation curve.
Figure 14. Optimization results obtained by different multi-population algorithms. (a) SLFA results; (b) SA results; (c) MPDEA results; (d) SLFA bandwidth; (e) SA bandwidth; (f) MPDEA bandwidth.
19 pages, 1854 KiB  
Article
Fixed-Time Global Sliding Mode Control for Parallel Robot Mobile Platform with Prescribed Performance
by Aojie Wang, Guoqin Gao and Xue Li
Sensors 2025, 25(5), 1584; https://doi.org/10.3390/s25051584 - 5 Mar 2025
Viewed by 159
Abstract
A fixed-time global sliding mode control with prescribed performance is proposed for the varying center of mass parallel robot mobile platform with model uncertainties and external disturbances to improve the global robustness and convergence performance of the model, and reduce overshoots. Firstly, kinematic and dynamic models of the parallel robot mobile platform with a varying center of mass are established. A reference velocity controller for the mobile platform system’s outer loop is designed using the back-stepping method, which provides the expected reference velocity for the inner loop controller. Secondly, to improve the global robustness and convergence performance of the system, a fixed-time global sliding mode control algorithm in the inner loop of the system is designed to eliminate the reaching phase of sliding mode control and ensure that the system converges quickly within a fixed time. Meanwhile, by designing a performance function to constrain the system errors within the performance boundary further, the fixed-time global sliding mode control with prescribed performance is implemented to reduce overshoots of the system. Then, the Lyapunov stability of the proposed method is proved theoretically. Finally, the effectiveness and superiority of the proposed control method are verified by simulation experiments. Full article
(This article belongs to the Section Sensors and Robotics)
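For orientation, the textbook building blocks behind such controllers are a fixed-time reaching law, whose settling-time bound does not depend on the initial state, and a prescribed-performance envelope that confines the tracking error. The generic forms below are standard expressions, not necessarily the exact surfaces and envelope used in this article.

```latex
% Generic fixed-time reaching law and prescribed-performance envelope (textbook forms,
% not necessarily the exact expressions used in this article).
\begin{aligned}
\dot{s} &= -k_1\,|s|^{a}\,\mathrm{sign}(s) - k_2\,|s|^{b}\,\mathrm{sign}(s),
\qquad 0<a<1,\; b>1,\; k_1,k_2>0,\\
T_{\mathrm{settle}} &\le \frac{1}{k_1(1-a)} + \frac{1}{k_2(b-1)}
\quad \text{(independent of the initial state)},\\
\rho(t) &= (\rho_0-\rho_\infty)\,e^{-\lambda t}+\rho_\infty,
\qquad -\delta\,\rho(t) < e(t) < \rho(t).
\end{aligned}
```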
Figure 1. Vehicle-type mobile platform structure.
Figure 2. Prescribed performance fixed-time global sliding mode control system structure diagram.
Figure 3. (a) Performance boundaries; (b) error transformation function mapping diagram.
Figure 4. Results of simulation comparison experiments between FnTGSMC and FxTGSMC. (a) Trajectory in the X direction; (b) tracking error curve in the X direction; (c) tracking error curve in the Y direction; (d) angular error; (e) line velocity error; (f) angular velocity error; (g) driving wheel torque.
Figure 5. Results of simulation comparison experiments between FxTGSMC and PPFxTGSMC; panels (a)–(g) as in Figure 4.
Figure 6. Results of simulation comparison experiments between FnTGSMC and FxTGSMC; panels (a)–(g) as in Figure 4.
Figure 7. Results of simulation comparison experiments between FxTGSMC and PPFxTGSMC; panels (a)–(g) as in Figure 4.
38 pages, 5655 KiB  
Article
Advanced Deep Learning Models for Improved IoT Network Monitoring Using Hybrid Optimization and MCDM Techniques
by Mays Qasim Jebur Al-Zaidawi and Mesut Çevik
Symmetry 2025, 17(3), 388; https://doi.org/10.3390/sym17030388 - 4 Mar 2025
Viewed by 186
Abstract
This study addresses the challenge of optimizing deep learning models for IoT network monitoring, focusing on achieving a symmetrical balance between scalability and computational efficiency, which is essential for real-time anomaly detection in dynamic networks. We propose two novel hybrid optimization methods—Hybrid Grey Wolf Optimization with Particle Swarm Optimization (HGWOPSO) and Hybrid World Cup Optimization with Harris Hawks Optimization (HWCOAHHO)—designed to symmetrically balance global exploration and local exploitation, thereby enhancing model training and adaptation in IoT environments. These methods leverage complementary search behaviors, where symmetry between global and local search processes enhances convergence speed and detection accuracy. The proposed approaches are validated using real-world IoT datasets, demonstrating significant improvements in anomaly detection accuracy, scalability, and adaptability compared to state-of-the-art techniques. Specifically, HGWOPSO combines the symmetrical hierarchy-driven leadership of Grey Wolves with the velocity updates of Particle Swarm Optimization, while HWCOAHHO synergizes the dynamic exploration strategies of Harris Hawks with the competition-driven optimization of the World Cup algorithm, ensuring balanced search and decision-making processes. Performance evaluation using benchmark functions and real-world IoT network data highlights superior accuracy, precision, recall, and F1 score compared to traditional methods. To further enhance decision-making, a Multi-Criteria Decision-Making (MCDM) framework incorporating the Analytic Hierarchy Process (AHP) and TOPSIS is employed to symmetrically evaluate and rank the proposed methods. Results indicate that HWCOAHHO achieves the most optimal balance between accuracy and precision, followed closely by HGWOPSO, while traditional methods like FFNNs and MLPs show lower effectiveness in real-time anomaly detection. The symmetry-driven approach of these hybrid algorithms ensures robust, adaptive, and scalable monitoring solutions for IoT networks characterized by dynamic traffic patterns and evolving anomalies, thus ensuring real-time network stability and data integrity. The findings have substantial implications for smart cities, industrial automation, and healthcare IoT applications, where symmetrical optimization between detection performance and computational efficiency is crucial for ensuring optimal and reliable network monitoring. This work lays the groundwork for further research on hybrid optimization techniques and deep learning, emphasizing the role of symmetry in enhancing the efficiency and resilience of IoT network monitoring systems. Full article
(This article belongs to the Section Computer)
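The hybridization can be pictured as blending the leader-guided Grey Wolf update with a PSO velocity term. The Python sketch below does that with an even 50/50 blend; the blend weighting, update order, and the name `hgwopso_minimize` are assumptions of this illustration, not the paper's exact HGWOPSO (and HWCOAHHO is not sketched here).

```python
import numpy as np

def hgwopso_minimize(f, bounds, n=30, iters=200, w=0.6, c1=1.5, c2=1.5, seed=1):
    """Illustrative hybrid of the Grey Wolf Optimizer and PSO (an assumption-laden sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)

    for t in range(iters):
        fvals = np.apply_along_axis(f, 1, x)
        better = fvals < pbest_f
        pbest[better], pbest_f[better] = x[better], fvals[better]
        order = np.argsort(fvals)
        alpha, beta, delta = x[order[0]], x[order[1]], x[order[2]]
        gbest = pbest[pbest_f.argmin()]

        a = 2.0 * (1 - t / iters)                      # GWO exploration coefficient
        gwo_target = np.zeros_like(x)
        for leader in (alpha, beta, delta):            # average pull of the three leaders
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            gwo_target += (leader - A * np.abs(C * leader - x)) / 3.0

        # PSO velocity update toward personal and global bests.
        r3, r4 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r3 * (pbest - x) + c2 * r4 * (gbest - x)
        # Symmetric blend of the two mechanisms (an assumption of this sketch).
        x = np.clip(0.5 * gwo_target + 0.5 * (x + v), lo, hi)
    return pbest[pbest_f.argmin()], pbest_f.min()

best_x, best_f = hgwopso_minimize(lambda z: float(np.sum(z**2)),
                                  (np.full(10, -10.0), np.full(10, 10.0)))
```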
Figure 1. The methodology phases.
Figure 2. Illustration of synthetic and real-world IoT network data characteristics.
Figure 3. Architecture of the Feedforward Neural Network (FFNN).
Figure 4. Architecture of CNN and pooling layers.
Figure 5. Architecture of the MLP.
Figure 6. Comparative confusion matrices for deep learning models and optimization techniques (FFNNs, CNNs, MLPs, HGWOPSO, HWCOAHHO) in IoT network monitoring. (A) Training progress of the deep learning model using HGWOPSO and HWCOAHHO; (B) confusion matrix for the model using HGWOPSO and HWCOAHHO; (C) FFNN confusion matrix; (D) MLP confusion matrix; (E) CNN confusion matrix; (F) HGWOPSO confusion matrix; (G) HWCOAHHO confusion matrix.
Figure 7. Comprehensive confusion matrix comparison of deep learning models for IoT network monitoring using HGWOPSO and HWCOAHHO optimization techniques. (A) Comparative evaluation of the models; (B) comparative confusion matrices.
Figure 8. Benchmark function results of the deep learning models for IoT network monitoring using HGWOPSO and HWCOAHHO optimization techniques.
30 pages, 2514 KiB  
Article
FedCon: Scalable and Efficient Federated Learning via Contribution-Based Aggregation
by Wenyu Gao, Gaochao Xu and Xianqiu Meng
Electronics 2025, 14(5), 1024; https://doi.org/10.3390/electronics14051024 - 4 Mar 2025
Viewed by 206
Abstract
With the increasing application of federated learning to medical and image data, the challenges of class distribution imbalances and Non-IID heterogeneity across clients have become critical factors affecting the generalization ability of global models. In the medical domain, the phenomenon of data silos is particularly pronounced, leading to significant differences in data distributions across hospitals, which in turn hinder the performance of global model training. To address these challenges, this paper proposes FedCon, a federated learning method capable of dynamically adjusting aggregation weights while accurately evaluating client contributions. Specifically, FedCon initializes aggregation weights based on client data volume and class distribution and employs Monte Carlo sampling to effectively simplify the computation of Shapley values. Subsequently, it further optimizes the aggregation weights by comprehensively considering the historical contributions of clients and the similarity between clients and the global model. This approach significantly enhances the generalization ability and update stability of the global model. Experimental results demonstrate that, compared to existing methods, FedCon achieved a superior generalization performance on public datasets and significantly accelerated the convergence of the global model. Full article
(This article belongs to the Special Issue Empowering IoT with AI: AIoT for Smart and Autonomous Systems)
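Two ingredients of the abstract lend themselves to a compact illustration: a Monte Carlo estimate of per-client Shapley values and a contribution-aware aggregation weight. The sketch below shows both under a placeholder coalition-value oracle; the blending formula and the `alpha` parameter are assumptions and do not reproduce FedCon's exact weighting.

```python
import numpy as np

def mc_shapley(clients, coalition_value, n_perm=200, seed=0):
    """Monte Carlo estimate of each client's Shapley value.

    `coalition_value(S)` should return the value (e.g., validation accuracy of the
    aggregated model) of a client subset S; here it is an abstract callable.
    """
    rng = np.random.default_rng(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(n_perm):
        perm = list(rng.permutation(clients))
        prev, coalition = coalition_value(frozenset()), []
        for c in perm:
            coalition.append(c)
            cur = coalition_value(frozenset(coalition))
            phi[c] += (cur - prev) / n_perm   # marginal contribution of c in this permutation
            prev = cur
    return phi

def contribution_weights(data_sizes, shapley, similarity, alpha=0.5):
    """Blend data-volume weights with contribution scores (a sketch of the idea,
    not FedCon's exact weighting formula)."""
    size_w = np.array(data_sizes, float); size_w /= size_w.sum()
    contrib = np.maximum(np.array([shapley[c] for c in range(len(data_sizes))]), 0.0)
    contrib = contrib * np.array(similarity)          # down-weight clients far from the global model
    contrib = contrib / contrib.sum() if contrib.sum() > 0 else size_w
    w = alpha * size_w + (1 - alpha) * contrib
    return w / w.sum()

# Toy usage: 4 clients, coalition value = fraction of clients present (placeholder oracle).
clients = list(range(4))
phi = mc_shapley(clients, coalition_value=lambda S: len(S) / 4)
weights = contribution_weights(data_sizes=[100, 400, 250, 250],
                               shapley=phi, similarity=[0.9, 0.8, 1.0, 0.7])
```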
Figure 1. The FedCon framework: (A) the calculation of client data quality in the first round using discrepancies between global and local data distributions to determine initialization weights; (B) the dynamic adjustment of aggregation weights based on precise client contribution computations (using Shapley values and similarity metrics) in each round, improving the model convergence stability.
Figure 2. Heatmap of data partitioning for CIFAR10-NIID-1.
Figure 3. Heatmap of data partitioning for CIFAR10-NIID-2.
Figure 4. Convergence analysis of different methods on the CIFAR10 and CIFAR100 datasets under independent and identically distributed (Homo) settings: (a) CIFAR10; (b) CIFAR100.
Figure 5. Convergence analysis of different methods on the CIFAR10 dataset under the NIID-1 and NIID-2 Non-IID partitioning strategies, showing that FedCon converges faster and outperforms the other methods: (a) NIID-1; (b) NIID-2.
Figure 6. Convergence analysis of various methods on the CIFAR100 dataset under the NIID-1 and NIID-2 partitioning strategies: (a) NIID-1; (b) NIID-2.
Figure 7. Convergence analysis of different methods on the HAR and HAM10000 datasets under the NIID-1 partitioning strategy (these datasets have different numbers of classes, so only NIID-1 could be applied): (a) HAR; (b) HAM10000.
Figure 8. Convergence analysis of different methods on the OrganAMNIST dataset under the NIID-1 and NIID-2 partitioning strategies: (a) NIID-1; (b) NIID-2.
Figure 9. Convergence analysis of different methods on the OrganCMNIST dataset under the NIID-1 and NIID-2 partitioning strategies: (a) NIID-1; (b) NIID-2.
Figure 10. Convergence analysis of different methods on the OrganSMNIST dataset under the NIID-1 and NIID-2 partitioning strategies: (a) NIID-1; (b) NIID-2.
Figure 11. RMSE of the eight baseline methods and FedCon across the different datasets and partitioning strategies; FedCon is particularly strong on HAM10000 and also performs well on the other datasets.
Figure 12. Comparison of communication time, round time, and accuracy for the NIID-1 setting: (a) communication time and accuracy; (b) round time and accuracy.
Figure 13. Comparison of communication time, round time, and accuracy for the NIID-2 setting: (a) communication time and accuracy; (b) round time and accuracy.
Figure 14. Hyperparameter analysis under CIFAR10 with the NIID-1 data partitioning.
Figure 15. Hyperparameter analysis under CIFAR10 with the NIID-2 data partitioning.
Figure 16. Violin plot comparing FedCon without A, FedCon without B, FedCon, and other baseline methods.
25 pages, 375 KiB  
Article
On the Exact Formulation of the Optimal Phase-Balancing Problem in Three-Phase Unbalanced Networks: Two Alternative Mixed-Integer Nonlinear Programming Models
by Oscar Danilo Montoya, Brandon Cortés-Caicedo and Óscar David Florez-Cediel
Electricity 2025, 6(1), 9; https://doi.org/10.3390/electricity6010009 - 2 Mar 2025
Viewed by 195
Abstract
This article presents two novel mixed-integer nonlinear programming (MINLP) formulations in the complex variable domain to address the optimal phase-balancing problem in asymmetric three-phase distribution networks. The first employs a matrix-based load connection model (M-MINLP), while the second uses a compact vector-based representation (V-MINLP). Both integrate the power flow equations through the current injection method, capturing the nonlinearities of Delta and Wye loads. These formulations, solved via an interior-point optimizer and the branch-and-cut method in the Julia software, ensure global optima and computational efficiency. Numerical validations on 8-, 25-, and 37-node feeders showed power loss reductions of 24.34%, 4.16%, and 19.26%, outperforming metaheuristic techniques and convex approximations. The M-MINLP model was 15.6 times faster in the 25-node grid and 2.5 times faster in the 37-node system when compared to the V-MINLP approach. The results demonstrate the robustness and scalability of the proposed methods, particularly in medium and large systems, where current techniques often fail to converge. These formulations advance the state of the art by combining exact mathematical modeling with efficient computation, offering precise, scalable, and practical tools for optimizing power distribution networks. The corresponding validations were performed using Julia (v1.10.2), JuMP (v1.21.1), and AmplNLWriter (v1.2.1). Full article
(This article belongs to the Special Issue Advances in Operation, Optimization, and Control of Smart Grids)
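To see why phase re-connection reduces unbalance, a much simpler greedy heuristic is sketched below: each node picks the permutation of its three phase loads that keeps the running per-phase totals closest together. This is intuition only; the article formulates the problem exactly as complex-domain MINLP models and solves them in Julia/JuMP, not with this heuristic.

```python
from itertools import permutations

def greedy_phase_balance(node_loads):
    """Greedy phase-swapping sketch: for each node, pick the phase permutation that
    keeps the running per-phase totals as balanced as possible (illustrative only)."""
    totals = [0.0, 0.0, 0.0]                 # accumulated load per phase a, b, c
    plan = []
    for loads in node_loads:                 # loads = (P_a, P_b, P_c) at this node
        best_perm, best_spread = None, float("inf")
        for perm in permutations(range(3)):  # 6 possible re-connections of the three phases
            trial = [totals[k] + loads[perm[k]] for k in range(3)]
            spread = max(trial) - min(trial)
            if spread < best_spread:
                best_perm, best_spread = perm, spread
        totals = [totals[k] + loads[best_perm[k]] for k in range(3)]
        plan.append(best_perm)
    return plan, totals

# Toy feeder with three unbalanced nodes (kW per phase).
plan, totals = greedy_phase_balance([(30, 5, 10), (0, 25, 5), (10, 0, 40)])
```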
Figure 1. General implementation of the Julia-based methodology to solve the optimal phase-balancing problem in unbalanced three-phase distribution networks.
Figure 2. Single-line diagram of the 8-node test system.
Figure 3. Single-line diagram of the 25-node test system.
Figure 4. Single-line diagram of the 37-node test system.
Figure 5. Percentage of unbalance before and after the implementation of the best solution provided by the proposed methodology: (a) 8-node test system; (b) 25-node test system; (c) 37-node test system.
Figure 6. Voltage profiles for the 8-node test system: (a) before phase-balancing; (b) after phase-balancing.
Figure 7. Voltage profiles for the 25-node test system: (a) before phase-balancing; (b) after phase-balancing.
Figure 8. Voltage profiles for the 37-node test system: (a) before phase-balancing; (b) after phase-balancing.
19 pages, 2108 KiB  
Article
Modeling the Influence of Climate Change on the Water Quality of Doğancı Dam in Bursa, Turkey, Using Artificial Neural Networks
by Aslıhan Katip and Asifa Anwar
Water 2025, 17(5), 728; https://doi.org/10.3390/w17050728 - 2 Mar 2025
Viewed by 295
Abstract
Population growth, industrialization, excessive energy consumption, and deforestation have led to climate change and affected water resources like dams intended for public drinking water. Meteorological parameters could be used to understand these effects better to anticipate the water quality of the dam. Artificial neural networks (ANNs) are favored in hydrology due to their accuracy and robustness. This study modeled climatic effects on the water quality of Doğancı dam using a feed-forward neural network with one input, one hidden, and one output layer. Three models were tested using various combinations of meteorological data as input and Doğancı dam’s water quality data as output. Model success was determined by the mean squared error and correlation coefficient (R) between the observed and predicted data. Resilient back-propagation and Levenberg–Marquardt were tested for each model to find an appropriate training algorithm. The model with the least error (1.12–1.68) and highest R value (0.93–0.99) used three meteorological inputs (air temperature, global solar radiation, and solar intensity), six water quality parameters of Doğancı dam as output (water temperature, pH, dissolved oxygen, manganese, arsenic, and iron concentrations), and ten hidden nodes. The two training algorithms employed in this study did not differ statistically (p > 0.05). However, the Levenberg–Marquardt training approach demonstrated a slight advantage over the resilient back-propagation algorithm by achieving reduced error and higher correlation in most of the models tested in this study. Also, better convergence and faster training with a lesser gradient value were noted for the LM algorithm. It was concluded that ANNs could predict a dam’s water quality using meteorological data, making it a useful tool for climatological water quality management and contributing to sustainable water resource planning. Full article
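The best-performing model in the study is a one-hidden-layer feed-forward network with 3 meteorological inputs, 10 hidden nodes, and 6 water-quality outputs. The sketch below builds that topology in plain NumPy but trains it with ordinary gradient descent on synthetic standardized data; the Levenberg–Marquardt and resilient back-propagation training used in the article (in MATLAB) are not reproduced.

```python
import numpy as np

def train_ffnn(X, Y, hidden=10, lr=0.05, epochs=2000, seed=0):
    """One-hidden-layer feed-forward network (3 inputs -> 6 outputs, as in the best model).

    Training here is plain batch gradient descent for brevity; the article used the
    Levenberg-Marquardt and resilient back-propagation algorithms instead.
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden layer
        P = H @ W2 + b2                     # linear output layer
        E = P - Y                           # prediction error
        # Backpropagation of the mean-squared-error gradient.
        gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
        dH = (E @ W2.T) * (1 - H**2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1, W2, b2)

def predict(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Synthetic standardized data: 50 samples, 3 inputs, 6 outputs (placeholders for the
# real meteorological and dam water-quality measurements).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3)); Y = rng.normal(size=(50, 6))
params = train_ffnn(X, Y)
mse = float(np.mean((predict(params, X) - Y) ** 2))
```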
Graphical abstract.
Figure 1. Satellite view of Doğancı dam (acquired from Google Maps on 10 January 2024).
Figure 2. Feed-forward neural network (acquired from MATLAB).
Figure 3. Correlation coefficients of the models using the RProp ((a) Model 1, (c) Model 2, (e) Model 3) and the LM ((b) Model 1, (d) Model 2, (f) Model 3) training algorithms.
Figure 4. Mean squared errors of the models using the RProp ((a) Model 1, (c) Model 2, (e) Model 3) and the LM ((b) Model 1, (d) Model 2, (f) Model 3) training algorithms.
Figure 5. Gradients during training progress using different algorithms for the tested models.
20 pages, 2880 KiB  
Article
A Second Examination of Trigonometric Step Sizes and Their Impact on Warm Restart SGD for Non-Smooth and Non-Convex Functions
by Mahsa Soheil Shamaee and Sajad Fathi Hafshejani
Mathematics 2025, 13(5), 829; https://doi.org/10.3390/math13050829 - 1 Mar 2025
Viewed by 153
Abstract
This paper presents a second examination of trigonometric step sizes and their impact on Warm Restart Stochastic Gradient Descent (SGD), an essential optimization technique in deep learning. Building on prior work with cosine-based step sizes, this study introduces three novel trigonometric step sizes aimed at enhancing warm restart methods. These step sizes are formulated to address the challenges posed by non-smooth and non-convex objective functions, ensuring that the algorithm can converge effectively toward the global minimum. Through rigorous theoretical analysis, we demonstrate that the proposed approach achieves an O(1/√T) convergence rate for smooth non-convex functions and extend the analysis to non-smooth and non-convex scenarios. Experimental evaluations on FashionMNIST, CIFAR10, and CIFAR100 datasets reveal significant improvements in test accuracy, including a notable 2.14% increase on CIFAR100 compared to existing warm restart strategies. These results underscore the effectiveness of trigonometric step sizes in enhancing optimization performance for deep learning models. Full article
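The baseline that the proposed step sizes build on is the cosine warm-restart schedule, in which the step size decays from η₀ to 0 within each restart cycle. A minimal sketch, assuming a fixed cycle length and a 1-D toy objective, is shown below; the article's new trigonometric variants are not reproduced here.

```python
import math

def cosine_warm_restart_lr(eta0, t, cycle_len):
    """Classic SGDR-style cosine step size within a restart cycle.

    The article proposes new trigonometric variants of this idea; their exact
    formulas are not reproduced here, this is the baseline schedule they build on.
    """
    t_cur = t % cycle_len
    return 0.5 * eta0 * (1 + math.cos(math.pi * t_cur / cycle_len))

def sgd_with_restarts(grad, x0, eta0=0.5, cycle_len=50, n_steps=200):
    """Toy SGD loop on a 1-D objective with warm-restart step sizes."""
    x = x0
    for t in range(n_steps):
        eta = cosine_warm_restart_lr(eta0, t, cycle_len)
        x -= eta * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_final = sgd_with_restarts(grad=lambda x: 2 * (x - 3), x0=10.0)
```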
Figure 1. Comparison of new step sizes and cosine step size.
Figure 2. (a) Overall structure of the DenseNet-BC model; (b) dense block; (c) transition block.
Figure 3. Comparison of tan and cosine step sizes.
Figure 4. Comparison of the new proposed step size and four other step sizes.
Figure 5. Comparison of the new proposed step size and four other step sizes.
Figure 6. Confusion matrix of the Alpha method evaluated on the test set for (a) the FashionMNIST dataset and (b) the CIFAR-10 dataset.
26 pages, 513 KiB  
Article
The Role of Domestic Formal and Informal Institutions in Food Security: Research on the European Union Countries
by Aldona Zawojska and Tomasz Siudek
Sustainability 2025, 17(5), 2132; https://doi.org/10.3390/su17052132 - 1 Mar 2025
Viewed by 278
Abstract
Although food seems abundant in the European Union, challenges related to specific aspects of food security continue to exist and require ongoing attention. A country’s food security depends on various economic, social, environmental, and institutional factors, which are studied using several scientific research methodologies. The role of institutions in determining national success and failure has been increasingly emphasized in recent academic discourse. Our research makes a novel contribution to the literature on institutions and food security by integrating New Institutional Economics with food security metrics. It aims to examine the relationships between food security dimensions and country-specific institutional matrices in the twenty EU member states from 2012 to 2019. How strong were those relationships, and how did they differ between the new and old member states? Food security is proxied by the Global Food Security Index and its three pillars (economic accessibility, physical availability, and quality and safety). The institutional quality of a country is represented by the Worldwide Governance Indicators (regulatory quality, rule of law, and control of corruption). Using the food security indices as the dependent variables, we apply multiple regression models to identify which institutions determined national food security over time. The study revealed that between 2012 and 2019, there was no evidence of sigma convergence or reduction in the dispersion of institutional quality (except for control of corruption) and overall food security within the EU20. The domestic institutions were generally statistically significantly positively related to the GFSI and its elements. The weakest correlations for the EU20 were those linking institutional variables with food quality and safety. The rule of law, incorporating such formal institutions as the quality of contract enforcement and property rights, positively affected food security within the EU20, with the greatest impact on food quality, safety, and availability. The dependence of food security on national institutional factors was stronger in new member states from Central and Eastern Europe. The exploratory results shed some light on the role of institutions in shaping food security. However, further research is required to gain a more detailed understanding of this phenomenon. The research findings suggest that policymakers in the EU countries could enhance national institutions to promote food security and, consequently, achieve the Sustainable Development Goals more effectively. Full article
(This article belongs to the Section Sustainable Food)
Scheme 1. Factors of economic development and food security. Note: the arrows point from a cause to an effect. Source: adopted and modified from [86].
20 pages, 1748 KiB  
Article
A Chaotic Decomposition-Based Approach for Enhanced Multi-Objective Optimization
by Javad Alikhani Koupaei and Mohammad Javad Ebadi
Mathematics 2025, 13(5), 817; https://doi.org/10.3390/math13050817 - 28 Feb 2025
Viewed by 142
Abstract
Multi-objective optimization problems often face challenges in balancing solution accuracy, computational efficiency, and convergence speed. Many existing methods struggle with achieving an optimal trade-off between exploration and exploitation, leading to premature convergence or excessive computational costs. To address these issues, this paper proposes a chaotic decomposition-based approach that leverages the ergodic properties of chaotic maps to enhance optimization performance. The proposed method consists of three key stages: (1) chaotic sequence initialization, which generates a diverse population to enhance the global search while reducing computational costs; (2) chaos-based correction, which integrates a three-point operator (TPO) and a local improvement operator (LIO) to refine the Pareto front and balance the exploration–exploitation trade-offs; and (3) Tchebycheff decomposition-based updating, ensuring efficient convergence toward optimal solutions. To validate the effectiveness of the proposed method, we conducted extensive experiments on a suite of benchmark problems and compared its performance with several state-of-the-art methods. The evaluation metrics, including inverted generational distance (IGD), generational distance (GD), and spacing (SP), demonstrated that the proposed method achieves competitive optimization accuracy and efficiency. While maintaining computational feasibility, our approach provides a well-balanced trade-off between exploration and exploitation, leading to improved solution diversity and convergence stability. The results establish the proposed algorithm as a promising alternative for solving multi-objective optimization problems. Full article
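Two stages of the pipeline are easy to illustrate: chaotic initialization with the logistic map and Tchebycheff scalarization of the objectives. The sketch below shows both on a toy bi-objective problem; the TPO and LIO correction operators are not reproduced, and the map parameters and function names are assumptions.

```python
import numpy as np

def logistic_chaotic_population(n, dim, lo, hi, mu=4.0, seed_val=0.7):
    """Chaotic initialization with the logistic map x_{k+1} = mu * x_k * (1 - x_k).

    One long chaotic sequence is folded into an (n x dim) population; a common
    initializer for this family of methods, used here as an illustration.
    """
    seq = np.empty(n * dim)
    x = seed_val
    for k in range(n * dim):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return lo + seq.reshape(n, dim) * (hi - lo)

def tchebycheff(fvals, weights, z_star):
    """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z_i*|."""
    return np.max(weights * np.abs(fvals - z_star), axis=-1)

# Toy bi-objective example (a ZDT-like convex pair on [0, 1]^2).
def objectives(x):
    f1 = x[:, 0]
    g = 1 + 9 * x[:, 1]
    f2 = g * (1 - np.sqrt(f1 / g))
    return np.stack([f1, f2], axis=1)

pop = logistic_chaotic_population(n=20, dim=2, lo=0.0, hi=1.0)
F = objectives(pop)
z_star = F.min(axis=0)                       # current ideal point
w = np.array([0.5, 0.5])                     # one weight vector of the decomposition
scores = tchebycheff(F, w, z_star)
best = pop[scores.argmin()]                  # best individual for this subproblem
```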
Figure 1. Illustration of the search space and the objective space of a two-objective optimization problem.
Figure 2. The contributions of the proposed chaotic decomposition-based approach.
Figure 3. The distribution function B(x) used in the TPO operator.
Figure 4. The distribution function ρ(x) used in the LIO operator.
Figure 5. Flowchart of the overall CMOA process.
Figure 6. Comparison of CMOA's Pareto front with the true Pareto front on five ZDT problems.
19 pages, 1418 KiB  
Article
An Improvement of the Alternating Direction Method of Multipliers to Solve the Convex Optimization Problem
by Jingjing Peng, Zhijie Wang, Siting Yu and Zengao Tang
Mathematics 2025, 13(5), 811; https://doi.org/10.3390/math13050811 - 28 Feb 2025
Viewed by 184
Abstract
The alternating direction method is one of the attractive approaches for solving convex optimization problems with linear constraints and separable objective functions. Experience with applications has shown that the number of iterations depends significantly on the penalty parameter for the linear constraint. The penalty parameter in the classical alternating direction method is a constant. In this paper, an improved alternating direction method is proposed, which not only adaptively adjusts the penalty parameter per iteration based on the iteration information but also adds relaxation factors to the Lagrange multiplier update steps. Preliminary numerical experiments show that the techniques of adaptively adjusting the penalty parameter per iteration and adding relaxation factors to the Lagrange multiplier update steps are effective in practical applications. Full article
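A widely used concrete instance of this idea is ADMM with residual-balancing penalty updates and an over-relaxation factor, shown below for the lasso problem. The specific rules (multiplying or dividing ρ by 2 when one residual dominates, relaxation factor α = 1.5) follow the standard recipe of Boyd et al. and are only an illustration; the paper's adaptive rules and relaxation placement differ in detail.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, alpha=1.5, iters=200):
    """ADMM for the lasso with a residual-balancing penalty update and a relaxation
    factor alpha; a standard sketch, not the paper's exact scheme."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled multiplier
    AtA, Atb = A.T @ A, A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        x_hat = alpha * x + (1 - alpha) * z              # over-relaxation
        z_old = z
        z = soft(x_hat + u, lam / rho)
        u = u + (x_hat - z)                              # multiplier (dual) update
        r = np.linalg.norm(x - z)                        # primal residual
        s = np.linalg.norm(rho * (z - z_old))            # dual residual
        # Adaptive penalty: keep primal and dual residuals of comparable size.
        if r > 10 * s:
            rho *= 2.0; u /= 2.0
        elif s > 10 * r:
            rho /= 2.0; u *= 2.0
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 15))
b = A @ (rng.normal(size=15) * (rng.random(15) < 0.3)) + 0.01 * rng.normal(size=40)
x_hat = admm_lasso(A, b)
```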
Figure 1. Objective function curve of Algorithm 1 and the other three algorithms.
Figure 2. First column: original images; second column: corrupted images.
Figure 3. First column: blurred images; second column: images recovered by Algorithm 1; third column: images recovered by the algorithm in [18].
16 pages, 1925 KiB  
Review
Link Between Metabolic Syndrome, Inflammation, and Eye Diseases
by Kamila Pieńczykowska, Anna Bryl and Małgorzata Mrugacz
Int. J. Mol. Sci. 2025, 26(5), 2174; https://doi.org/10.3390/ijms26052174 - 28 Feb 2025
Viewed by 277
Abstract
Metabolic syndrome (MetS)—a cluster of conditions including obesity, hypertension, dyslipidemia, and insulin resistance—is increasingly recognized as a key risk factor for the development of various eye diseases. The metabolic dysfunctions associated with this syndrome contribute to vascular and neurodegenerative damage within the eye, influencing disease onset and progression. Understanding these links highlights the importance of early diagnosis and management of metabolic syndrome to prevent vision loss and improve ocular health outcomes. This review explores the intricate interplay between metabolic syndrome, chronic low-grade inflammation, and eye diseases such as diabetic retinopathy, age-related macular degeneration, glaucoma, and dry eye syndrome. It highlights how inflammatory mediators, oxidative damage, and metabolic dysregulation converge to compromise ocular structures, including the retina, optic nerve, and ocular surface. We discuss the molecular and cellular mechanisms underpinning these associations and examine evidence from clinical and experimental studies. Given the rising global prevalence of metabolic syndrome, addressing this connection is crucial for improving overall patient outcomes and quality of life. Future research should focus on delineating the precise mechanisms linking these diseases as well as exploring targeted interventions that address both metabolic and ocular health. Full article
(This article belongs to the Special Issue Latest Advances in Metabolic Syndrome)
Figure 1. Components of metabolic syndrome. Created in BioRender. Pieńczykowska, K. (accessed on 20 February 2025).
Figure 2. Pathways underlying MetS and inflammation. Created in BioRender. Pieńczykowska, K. (accessed on 20 February 2025). ↓ NO: the level of nitric oxide lowers; ↑ ICAM-1: intercellular adhesion molecule 1 (its level elevates); ↑ VCAM-1: vascular cell adhesion molecule 1 (its level elevates).
Figure 3. Connections between metabolic syndrome and ocular disorders, highlighting the key cellular and molecular mediators and the area of the eye affected. Created in BioRender. Pieńczykowska, K. (accessed on 20 February 2025).
Figure 4. Eye diseases linked with MetS. Created in BioRender. Pieńczykowska, K. (accessed on 20 February 2025).
25 pages, 3082 KiB  
Article
Double Deep Q-Network-Based Solution to a Dynamic, Energy-Efficient Hybrid Flow Shop Scheduling System with the Transport Process
by Qinglei Zhang, Huaqiang Si, Jiyun Qin, Jianguo Duan, Ying Zhou, Huaixia Shi and Liang Nie
Systems 2025, 13(3), 170; https://doi.org/10.3390/systems13030170 - 28 Feb 2025
Viewed by 239
Abstract
In this paper, a dynamic energy-efficient hybrid flow shop (TDEHFSP) scheduling model is proposed, considering random arrivals of new jobs and transport by transfer vehicles. To simultaneously optimise the maximum completion time and the total energy consumption, a co-evolutionary approach (DDQCE) using a double deep Q-network (DDQN) is introduced, where global and local search tasks are assigned to different populations to optimise the use of computational resources. In addition, a multi-objective NEW heuristic strategy is implemented to generate an initial population with enhanced convergence and diversity. The DDQCE incorporates an energy-efficient strategy based on time interval ‘left shift’ and turn-on/off mechanisms, alongside a rescheduling model to manage dynamic disturbances. In addition, 36 test instances of varying sizes, simplified from the excavator boom manufacturing process, are designed for comparative experiments with traditional algorithms. The experimental results demonstrate that DDQCE achieves 40% more Pareto-optimal solutions compared to NSGA-II and MOEA/D while requiring 10% less computational time, confirming that this algorithm efficiently solves the TDEHFSP problem. Full article
(This article belongs to the Section Supply Chain Management)
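The core double-DQN device in such schedulers is the decoupled target: the online network chooses the next action and the target network evaluates it, which reduces overestimation bias. The NumPy sketch below computes exactly that target for a toy batch; the scheduling state encoding, network architecture, and co-evolutionary populations of DDQCE are not reproduced.

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN target computation: the online network selects the next action,
    the target network evaluates it.

    Inputs are (batch, n_actions) Q-value arrays for the next states; how the Q-networks
    and the scheduling actions are built is specific to the article and not shown here.
    """
    next_actions = np.argmax(q_online_next, axis=1)                 # argmax from online net
    next_values = q_target_next[np.arange(len(rewards)), next_actions]
    return rewards + gamma * (1.0 - dones) * next_values            # bootstrapped targets

# Toy batch of 3 transitions with 4 candidate dispatching actions each.
q_online_next = np.array([[0.2, 1.0, 0.1, 0.3],
                          [0.5, 0.4, 0.9, 0.0],
                          [0.1, 0.2, 0.3, 0.4]])
q_target_next = np.array([[0.3, 0.8, 0.2, 0.1],
                          [0.6, 0.3, 0.7, 0.2],
                          [0.2, 0.1, 0.4, 0.5]])
targets = double_dqn_targets(q_online_next, q_target_next,
                             rewards=np.array([1.0, 0.0, 0.5]),
                             dones=np.array([0.0, 0.0, 1.0]))
```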
Figure 1. The dynamic rescheduling system framework.
Figure 2. Results without turn-on/off mechanisms; Gantt chart.
Figure 3. Results with turn-on/off mechanisms; Gantt chart.
Figure 4. Two-point order crossover.
Figure 5. Swap sequence mutation.
Figure 6. Local search operators: CSwap, CInsr, and CInv.
Figure 7. Main effect plot of parameter tuning.
Figure 8. The Pareto solution results of 10-5-3-3.
Figure 9. The Pareto solution results of 20-5-3-3.
Figure 10. The Pareto solution results of 50-5-3-3.
Figure 11. Sensitivity results plot for C_max and TEC (assuming the values of C_max and TEC are 1 when q = 2).
20 pages, 327 KiB  
Article
Exponential Bounds for the Density of the Law of the Solution of an SDE with Locally Lipschitz Coefficients
by Cristina Anton
Mathematics 2025, 13(5), 798; https://doi.org/10.3390/math13050798 - 27 Feb 2025
Viewed by 140
Abstract
Under the uniform Hörmander hypothesis, we study the smoothness and exponential bounds of the density of the law of the solution of a stochastic differential equation (SDE) with locally Lipschitz drift that satisfies a monotonicity condition. We extend the approach used for SDEs with globally Lipschitz coefficients and obtain estimates for the Malliavin covariance matrix and its inverse. Based on these estimates and using the Malliavin differentiability of any order of the solution of the SDE, we prove exponential bounds of the solution’s density law. These results can be used to study the convergence of implicit numerical schemes for SDEs. Full article
(This article belongs to the Special Issue Advances in Probability Theory and Stochastic Analysis)
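For readers outside stochastic analysis, the drift assumptions referred to in the abstract typically take the following generic shape (stated loosely here; the article's precise hypotheses, including the uniform Hörmander condition, are more detailed).

```latex
% Typical shape of the assumptions referred to in the abstract (generic statement;
% the article's exact constants and hypotheses may differ).
\begin{aligned}
dX_t &= b(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = x_0,\\
\langle x - y,\; b(x) - b(y) \rangle &\le K\,|x - y|^2
\quad \text{(one-sided / monotonicity condition on the drift)},\\
|b(x) - b(y)| &\le L_R\,|x - y| \quad \text{for } |x|,|y| \le R
\quad \text{(local Lipschitz continuity)}.
\end{aligned}
```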