1. Introduction
In the optimization field, solving an optimization problem usually means finding the optimal value to maximize or minimize a set of objective functions without violating constraints [1]. Optimization methods can be divided into two main categories: exact algorithms and metaheuristics [2]. While exact algorithms can provide global optima precisely, their execution times grow exponentially with the number of variables, so they are considered less suitable and practical [3]. In contrast, metaheuristic algorithms can identify the best or a near-optimal solution in a reasonable amount of time [4]. During the last two decades, metaheuristic algorithms have attracted much attention and development due to their flexibility, simplicity, and global optimization capability. Thus, they are widely used for solving optimization problems in almost every domain, such as big data text clustering [5], tuning of fuzzy control systems [6,7], path planning [8,9], feature selection [10,11,12], training neural networks [13], parameter estimation for photovoltaic cells [14,15,16], image segmentation [17,18], tomography analysis [19], and permutation flowshop scheduling [20,21].
Metaheuristic algorithms simulate natural phenomena or laws of physics and are usually classified into three categories: evolutionary algorithms, physical and chemical algorithms, and swarm-based algorithms. Evolutionary algorithms are a class of algorithms that simulate the laws of evolution in nature. The best known is the genetic algorithm (GA) [22], which was developed from Darwin’s theory of natural selection. There are other algorithms, such as differential evolution (DE) [23], which simulates the crossover and mutation mechanisms of inheritance, evolutionary programming (EP) [24], and evolution strategies (ES) [25]. Physical and chemical algorithms search for the optimum by simulating the universe’s chemical laws or physical phenomena. Algorithms in this category include simulated annealing (SA) [26], electromagnetic field optimization (EFO) [27], the equilibrium optimizer (EO) [28], and Archimedes’ optimization algorithm (ArchOA) [29]. Swarm-based algorithms simulate the behavior of social groups of animals or humans. Examples of such algorithms include the whale optimization algorithm (WOA) [30], the salp swarm algorithm (SSA) [31], the moth search algorithm (MSA) [32], the Aquila optimizer (AO) [33], the grey wolf optimizer (GWO) [34], Harris hawks optimization (HHO) [35], and particle swarm optimization (PSO) [36].
However, the no free lunch (NFL) theorem [37] proves that no single algorithm can solve all optimization problems well. If an algorithm is particularly effective for a particular class of problems, it may not be able to solve other classes of optimization problems. This motivates us to propose new algorithms or improve existing ones. The slime mold algorithm (SMA) [38] is a recent metaheuristic algorithm proposed by Li et al. in 2020. The basic idea of the SMA is based on the foraging behavior of slime mold, which produces different feedback according to food quality. Different search mechanisms have been introduced into the SMA to solve various optimization problems. For example, Zhao et al. [39] introduced a diffusion mechanism and an association strategy into the SMA and applied the proposed algorithm to the segmentation of CT images. Salah et al. [40] applied the slime mold algorithm to optimize an artificial neural network model for predicting monthly stochastic urban water demand. Wang et al. [41] developed a parallel slime mold algorithm for the distribution network reconfiguration problem with distributed generation. Tang et al. [42] introduced chaotic opposition-based learning and spiral search strategies into the SMA and proposed two adaptive parameter control strategies; their simulation results show that the proposed algorithms outperform other similar algorithms. Örnek et al. [43] proposed an enhanced SMA that combines the sine cosine algorithm with the position update of the SMA. Experimental results show that the proposed hybrid algorithm has a better ability to jump out of local optima with faster convergence.
Although the SMA, as a new algorithm, is competitive with other algorithms, it also suffers from some shortcomings. The SMA, similarly to many other swarm-based metaheuristic algorithms, suffers from slow convergence and premature convergence to a local optimum [44]. In addition, the update strategy of the SMA reduces exploration capability and population diversity. To address these problems, an improved algorithm based on the SMA, called the dynamic-quantum-rotation-gate and opposition-based learning SMA (DQOBLSMA), is proposed. In this paper, we introduce two mechanisms, the dynamic quantum rotation gate (DQRG) and opposition-based learning (OBL), into the SMA simultaneously. Both mechanisms address the shortcomings of the original algorithm in terms of slow convergence and the tendency to fall into local optima. First, DQRG rotates the search individuals toward the direction of the optimum, improving the diversity of the population and enhancing the global exploration capability of the algorithm. At the same time, OBL explores part of the solutions in the opposite direction, improving the algorithm’s ability to jump out of local optima. The performance of the DQOBLSMA was evaluated by comparing it with the original SMA and with other advanced algorithms. In addition, three different constrained engineering problems were used to verify the performance of the DQOBLSMA further: the welded beam design problem, the tension/compression spring design problem, and the pressure vessel design problem.
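As a high-level illustration of the two mechanisms, the standard OBL reflection and a basic quantum rotation gate can be sketched in Python. This is a generic sketch, not the paper's exact update rules: the adaptive angle schedule that makes the rotation gate "dynamic" is omitted, and `theta` is a placeholder parameter.

```python
import numpy as np

def opposite_solution(x, lb, ub):
    """Opposition-based learning: reflect candidate x within the
    bounds [lb, ub] to probe the opposite region of the space."""
    return lb + ub - x

def quantum_rotation(alpha, beta, theta):
    """Basic quantum rotation gate: rotate the amplitude pair
    (alpha, beta) by angle theta; alpha^2 + beta^2 is preserved."""
    c, s = np.cos(theta), np.sin(theta)
    return c * alpha - s * beta, s * alpha + c * beta

# Example: oppose a point in [-10, 10]^3
x = np.array([2.0, -5.0, 7.5])
print(opposite_solution(x, -10.0, 10.0))  # opposite point: (-2.0, 5.0, -7.5)
```

Evaluating both a candidate and its opposite doubles the chance that one of them lies near the optimum, which is why OBL helps the search escape local optima.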
The main contributions of this paper are summarized as follows:
1. DQRG and OBL strategies were introduced into the SMA to improve its exploration capabilities.
2. The DQRG strategy is proposed in order to balance the exploration and exploitation phases.
3. Comparisons with five well-known metaheuristic algorithms show that the proposed DQOBLSMA is more robust and effective.
4. Experiments on three engineering design optimization problems show that the DQOBLSMA can be effectively applied to practical engineering problems.
This paper is organized as follows. Section 2 describes the slime mold algorithm, the quantum rotation gate, and opposition-based learning. Section 3 presents the proposed improved slime mold algorithm. Section 4 shows the experimental study and discussion using benchmark functions. The DQOBLSMA is applied to solve three engineering problems in Section 5. Finally, the conclusion and future work are given in Section 6.
4. Experiments and Discussion
We conducted a series of experiments to verify the performance of the DQOBLSMA. The classical benchmark functions are introduced in Section 4.1. In the test function experiments, the impacts of the two mechanisms were analyzed; see Section 4.2. In Section 4.3, the DQOBLSMA is compared with several advanced algorithms. In Section 4.4, the convergence of the algorithms is analyzed.
The performance of the DQOBLSMA was investigated using the mean result (Mean) and standard deviation (Std). In order to draw statistically sound conclusions, the results of the benchmark test functions were ranked using the Friedman test. In addition, Wilcoxon’s rank-sum test was used to assess the average performances of the algorithms in a statistical sense. In this study, it was used to test whether there was a difference between the effect of the DQOBLSMA and those of the other algorithms in pairwise comparisons. When the p-value is less than 0.05, the result is significantly different from that of the other method. The symbols “+”, “−”, and “=” indicate that the DQOBLSMA is better than, inferior to, or equal to the other algorithm, respectively.
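As an illustration, the pairwise testing and symbol assignment described above can be sketched with SciPy. The sample arrays below are synthetic stand-ins for the 30-run result samples of two algorithms, not the paper's data; minimization is assumed, so lower is better.

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

def compare(dqoblsma_runs, other_runs, alpha=0.05):
    """Wilcoxon rank-sum comparison for a minimization problem:
    '+' if the first sample is significantly better (lower),
    '-' if significantly worse, '=' if no significant difference."""
    _, p = ranksums(dqoblsma_runs, other_runs)
    if p >= alpha:
        return "="
    return "+" if np.mean(dqoblsma_runs) < np.mean(other_runs) else "-"

rng = np.random.default_rng(0)
a = rng.normal(1e-3, 1e-4, 30)   # stand-in: DQOBLSMA results
b = rng.normal(1e-1, 1e-2, 30)   # stand-in: a competitor's results
print(compare(a, b))             # '+': the first sample is significantly better

# Friedman test across three algorithms' result samples
c = rng.normal(5e-2, 1e-2, 30)
stat, p = friedmanchisquare(a, b, c)
print(p < 0.05)                  # True: at least one algorithm differs
```

The Friedman test ranks the algorithms within each block and then tests whether the average ranks differ, which is what produces the overall rankings reported below.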
4.1. Benchmark Function Validation and Parameter Settings
In this study, the test set for the DQOBLSMA comparison experiments was the 23 classical test functions used in the literature [34]. The details are shown in Table 2. These classical test functions are divided into unimodal functions, multimodal functions, and fixed-dimension multimodal functions. The unimodal functions (F1–F7) have only one local solution, which is also the global optimum, and are usually used to evaluate the local exploitation ability of an algorithm. Multimodal functions (F8–F13) are often used to test the exploration ability of an algorithm. F14–F23 are fixed-dimension multimodal functions with many local optima and low dimensionality, which can be used to evaluate the stability of an algorithm.
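For illustration, two representative members of this classical suite, the unimodal sphere function and the multimodal Rastrigin function (commonly listed as F1 and F9, respectively, in this numbering), can be written as:

```python
import numpy as np

def f1_sphere(x):
    """Unimodal: a single optimum, f(0) = 0; tests exploitation."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """Multimodal: many local optima, global optimum f(0) = 0;
    tests exploration."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

x0 = np.zeros(30)
print(f1_sphere(x0), f9_rastrigin(x0))  # 0.0 0.0
```

Both reach their theoretical optimum of 0 at the origin, which is why results such as "the algorithm found the theoretical optimal value" can be checked exactly.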
The DQOBLSMA was compared to the original SMA and five other algorithms: the slime mold algorithm improved by opposition-based learning and Levy flight distribution (OBLSMAL) [48], the equilibrium slime mold algorithm (ESMA) [49], the equilibrium optimizer with a mutation strategy (MEO) [50], the adaptive differential evolution with an optional external archive (JADE) [51], and the grey wolf optimizer based on random walk (RWGWO) [52]. The parameter settings of each algorithm are shown in Table 3, and the experimental parameters for all optimization algorithms were chosen to be the same as those reported in the original works.
In order to maintain a fair comparison, each algorithm was run independently 30 times. The population size (N) and the maximum number of function evaluations of all experimental methods were fixed at 30 and 15,000, respectively. The comparative experiments were run under the same test conditions to keep the experimental conditions consistent. The proposed method was coded in Python 3.8 and tested on a PC with an AMD R5-4600H CPU (3.00 GHz), 16 GB of RAM, and the Windows 11 operating system.
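The protocol above (30 independent runs under a fixed evaluation budget, reporting Mean and Std) can be sketched as follows; `random_search` is a hypothetical placeholder optimizer used only to show the harness, not the DQOBLSMA itself:

```python
import numpy as np

def random_search(objective, dim, lb, ub, max_fes, rng):
    """Placeholder optimizer: spend the whole evaluation budget
    on uniformly random candidates and keep the best value."""
    samples = rng.uniform(lb, ub, size=(max_fes, dim))
    return min(objective(x) for x in samples)

def evaluate(objective, dim=30, lb=-100.0, ub=100.0,
             runs=30, max_fes=15_000, seed=0):
    """Run the optimizer `runs` times and report Mean and Std."""
    rng = np.random.default_rng(seed)
    results = [random_search(objective, dim, lb, ub, max_fes, rng)
               for _ in range(runs)]
    return np.mean(results), np.std(results)

mean, std = evaluate(lambda x: float(np.sum(x ** 2)))
print(f"Mean = {mean:.3e}, Std = {std:.3e}")
```

Fixing the evaluation budget rather than the iteration count keeps the comparison fair even when algorithms perform a different number of evaluations per iteration.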
4.2. Impacts of Components
In this section, different versions of the improvement are investigated. The proposed DQOBLSMA adds two different mechanisms to the original SMA. To verify their respective effects, each mechanism was also evaluated separately. The combinations of the SMA and the two mechanisms are listed below:
- SMA combined with DQRG and OBL (DQOBLSMA);
- SMA combined with DQRG (DQSMA);
- SMA combined with OBL (OBLSMA);
- the original SMA.
Table 4 gives the comparison results between the original SMA and the improved algorithms obtained by adding each mechanism. The ranking of the four algorithms is given at the end of the table; it can be seen that the first-ranked algorithm is the DQOBLSMA. This ranking was obtained using the Friedman ranking test [53] and reveals the overall performance of the compared algorithms on the tested functions. In these cases, the ranking from best to worst was roughly as follows: DQOBLSMA > OBLSMA > SMA > DQSMA. With the addition of both mechanisms, the performance of the DQOBLSMA is more stable, and the global search capability is much improved. Comparing DQSMA with OBLSMA, we can see that OBLSMA is much stronger than DQSMA, indicating that OBL contributes more to the performance of the SMA than DQRG does. Comparing DQSMA with the original SMA, we can see that DQSMA performs worse on the unimodal functions but better on most multimodal and fixed-dimension multimodal functions.
Wilcoxon’s rank-sum test was used to verify the significance of the DQOBLSMA against the original SMA and the SMA with a single added mechanism. The results are shown in Table 5. Based on these results and those in Table 4, the DQOBLSMA outperformed SMA on 13 benchmark functions, DQSMA on 17 benchmark functions, and OBLSMA on 8 benchmark functions. Thus, the DQOBLSMA proposed in this paper combines DQRG with OBL: although DQSMA and OBLSMA can both find good solutions, there is more to be gained by combining the two strategies. In conclusion, the DQOBLSMA offers better optimization performance and is significantly better than SMA, DQSMA, and OBLSMA.
4.3. Benchmark Function Experiments
As seen from Table 6, on the unimodal benchmark functions (F1–F7), the DQOBLSMA achieved better results than the other optimization algorithms. For F1, F3, and F6, the DQOBLSMA found the theoretical optimal value. For all unimodal functions, the DQOBLSMA obtained the smallest mean values and standard deviations compared to the other algorithms, showing the best accuracy and stability.
From the results shown in Table 7 and Table 8, the DQOBLSMA outperformed the other algorithms on most of the multimodal and fixed-dimension multimodal functions. For the multimodal functions F8–F13, the DQOBLSMA obtained almost all the best mean and standard deviation values and obtained the global optimal solution for four functions (F8–F11). As shown in Table 8, the DQOBLSMA obtained the theoretically optimal values on 8 of the 10 fixed-dimension multimodal functions (F14–F23). Although the DQOBLSMA did not outperform JADE on F14–F23, it exceeded ESMA and OBLSMAL in overall performance. These results show that the DQOBLSMA also provides powerful and robust exploitation capabilities.
In addition, Table 9 presents Wilcoxon’s rank-sum test results to verify the significant differences between the DQOBLSMA and the other five algorithms. It is worth noting that p-values less than 0.05 indicate significant differences between the respective pairs of compared algorithms. The DQOBLSMA outperformed all other algorithms to varying degrees: it outperformed OBLSMAL, ESMA, MEO, JADE, and RWGWO on 14, 15, 16, 15, and 18 benchmark functions, respectively.
Table 10 shows the statistical results of the Friedman test, where the DQOBLSMA ranked first on F1–F7 and F8–F13 and second, after JADE by a small margin, on F14–F23. The DQOBLSMA received the best ranking overall. In summary, the DQOBLSMA provided better results than the other algorithms on almost all benchmark functions.
4.4. Convergence Analysis
To demonstrate the effectiveness of the proposed DQOBLSMA, Figure 2 shows the convergence curves of the DQOBLSMA, SMA, ESMA, MEO, JADE, and RWGWO for the classical benchmark functions. The convergence curves show that the initial convergence of the DQOBLSMA was the fastest in most cases, except for a few functions on which RWGWO had faster initial convergence. On some functions, all comparison algorithms converged quickly to the global optimum, and the DQOBLSMA did not show a significant advantage. In Figure 2, a step or cliff drop in the DQOBLSMA’s convergence curve can be observed, which indicates outstanding exploration capability. In almost all test cases, the DQOBLSMA had a better convergence rate than the SMA and its variants, indicating that the SMA’s convergence can be significantly improved by the proposed search strategies. In conclusion, the DQOBLSMA is not only robust and effective at producing the best results but also converges faster than the other algorithms.
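A convergence curve of the kind plotted in Figure 2 is simply the best objective value found so far as a function of the number of evaluations; the characteristic step or cliff drop appears when the search suddenly discovers a much better region. A minimal sketch of how such a curve is computed from a run's evaluation history (the `history` values are hypothetical):

```python
import numpy as np

def convergence_curve(objective_values):
    """Best-so-far curve: entry i is the minimum objective value
    observed within the first i + 1 evaluations, so the curve is
    non-increasing; a sharp drop marks the escape from a local optimum."""
    return np.minimum.accumulate(np.asarray(objective_values, dtype=float))

history = [9.0, 7.5, 8.1, 2.0, 3.3, 1.9]  # hypothetical evaluation history
curve = convergence_curve(history)
print(curve.tolist())  # [9.0, 7.5, 7.5, 2.0, 2.0, 1.9]
```

Because the curve is monotone non-increasing, comparing algorithms at the same evaluation budget is a fair way to judge convergence speed.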
6. Conclusions
In this paper, an enhanced SMA (DQOBLSMA) was proposed by introducing two mechanisms, DQRG and OBL, into the original SMA. In the DQOBLSMA, these two strategies further enhance the global search capability of the original SMA: DQRG enhances the exploration capability of the original SMA, and OBL increases the population diversity. The DQOBLSMA overcomes the weaknesses of the original search method and avoids premature convergence. The performance of the proposed DQOBLSMA was analyzed by using 23 classical mathematical benchmark functions.
First, the DQOBLSMA and the individual combinations of the two strategies were analyzed and discussed. The results showed that the proposed strategies are effective and that the SMA achieved the best performance with the combination of the two mechanisms. Secondly, the results of the DQOBLSMA were compared with those of five state-of-the-art algorithms: ESMA, MEO, JADE, OBLSMAL, and RWGWO. The results show that the DQOBLSMA is competitive with other advanced metaheuristic algorithms. To further validate the superiority of the DQOBLSMA, it was applied to three industrial engineering design problems. The experimental results show that the DQOBLSMA also achieves better results when solving engineering problems and significantly improves on the original SMA.
As a future perspective, a multi-objective version of the DQOBLSMA will be considered. The proposed algorithm has promising applications in scheduling problems, image segmentation, parameter estimation, multi-objective engineering problems, text clustering, feature selection, text classification, and web applications.