Abstract
The Gaussian bare-bone imperialist competitive algorithm (GBB-ICA) is an effective variant of the imperialist competitive algorithm (ICA) that updates the positions of colonies by sampling a Gaussian distribution. However, the mean and standard deviation adopted by GBB-ICA are calculated using only the positions of the imperialist and the colony itself, which makes the search prone to becoming trapped in local optima. To overcome this drawback, a new double Gaussian sampling strategy is proposed in this paper. An extra Gaussian sampling point, whose mean and standard deviation are calculated using the positions of the second best colony and the current colony itself, is introduced into GBB-ICA. To further speed up convergence and explore informative regions, the quasi-oppositional learning technique is incorporated into GBB-ICA to produce more potential candidates in the assimilation step as well as to generate a higher quality initial population. The proposed algorithm, called the quasi-oppositional learning-based double Gaussian sampling bare-bone imperialist competitive algorithm (QOLBDGSBB-ICA), is tested on 20 benchmark functions and four engineering design problems. Experimental results show that the proposed algorithm outperforms the other referenced ICA variants on 19 benchmark functions, which validates its effectiveness.
1 Introduction
As powerful alternatives to classical optimization methods, meta-heuristic algorithms have attracted considerable attention from researchers in the past decades. Compared with classical optimization methods, meta-heuristic algorithms are relatively simple, less computationally expensive and capable of global searching. Most importantly, a meta-heuristic algorithm does not impose any restrictions on the objective function, which may be continuous or discontinuous, differentiable or non-differentiable, convex or non-convex, etc. Nowadays, many kinds of meta-heuristic algorithms have been proposed, for instance, differential evolution (DE) [34], biogeography-based optimization (BBO) [40], particle swarm optimization (PSO) [23], artificial bee colony [43], bacteria foraging optimization [17], Remora optimization algorithm (ROA) [20], coronavirus optimization algorithm (COA) [25, 26], hunter–prey optimization [31], marine predators algorithm (MPA) [14] and so on.
As a competitive meta-heuristic algorithm, the imperialist competitive algorithm (ICA) was suggested by Atashpaz-Gargari and Lucas. ICA simulates the imperialistic competition among countries in the world [5]. Unlike many other meta-heuristic algorithms, ICA intrinsically divides the individuals into several sub-swarms. In the evolutionary process, the sub-swarms exchange information via imperialistic competition. Thus, ICA can be viewed as a multi-swarm evolutionary algorithm. Due to this characteristic, ICA exhibits excellent optimization performance on different engineering problems such as the job shop scheduling problem [27], multi-layer perceptron training [29], the node placement problem in wireless sensor networks [7], fuzzy controller coefficient optimization [38], data clustering [4], and so on.
Though ICA has achieved success in many real applications, it possesses several shortcomings common to most other meta-heuristic algorithms. Specifically, ICA tends to become stuck in local optima, resulting in premature convergence. On the other hand, ICA sometimes finds it difficult to maintain the diversity of the population, so it cannot search more potential regions of the search space. To overcome these drawbacks, researchers have made great efforts and proposed different schemes to improve the performance of ICA.
The first class of improving schemes attempts to change the way of assimilation, including the assimilation coefficients and the assimilation formula. In [2], a dynamic adjusting scheme for the assimilation coefficients was designed on the basis of the attraction and repulsion principle. The authors in [9] proposed a scheme to dynamically change the assimilation coefficient via an interval type-2 fuzzy system (T2FS). In [24], an adaptive deviation angle adjusting scheme was designed using a fuzzy inference system (FIS), and in [1], a dynamic adjusting approach was designed for the deviation angle using a Gaussian probability model. The colonies' moving formula has also been re-designed by several researchers. For example, in [19, 35], the velocity-position model of PSO was adopted as the colonies' moving formula in the assimilation phase. In the Gaussian bare-bone ICA (GBB-ICA) [10], the Gaussian sampling strategy of bare-bone particle swarm optimization (BBPSO) was borrowed as the assimilation scheme. In [39], the moving formula of the firefly algorithm (FA) was utilized as the assimilation formula of ICA. In [12, 33], the authors proposed to use the crossover and mutation of the DE algorithm as the assimilation strategy.
The second class of improving schemes adds mutation operators to ICA. For example, in [42], a mutation-based imperialist competitive algorithm was proposed, in which the imperialist is mutated via three operators: Gaussian, Cauchy and Lévy. In [3], two new candidate positions are generated for each colony using the unimodal normal distribution crossover (UNDX) operator, and the better of the two is taken as the new position of the colony. In [30], a chaotic mutation-based ICA was proposed, in which a chaotic map is adopted as the mutation strategy to generate new individuals that replace the old ones in the revolution stage, helping the algorithm escape from local optima. The third class of improving schemes hybridizes ICA with other evolutionary algorithms, for example, with GA [6, 18], with simulated annealing (SA) [32], and with PSO [28].
Among the various improved ICAs, GBB-ICA is an interesting variant. GBB-ICA has fewer parameters to be adjusted than the initial ICA and the other ICA variants, which enables it to be easily applied by engineers. More importantly, GBB-ICA shows superior performance to many other ICA variants. However, in the searching process, GBB-ICA generates new candidate positions for a colony around the imperialist. If the imperialist is located in an unproductive region, GBB-ICA cannot perform an effective search and cannot locate high-quality candidate positions for the colony. In addition, the searching range of GBB-ICA is relatively small; as a result, it cannot explore more promising regions where the global optimum may be located.
To overcome the drawbacks of GBB-ICA, we propose a new double Gaussian sampling strategy to enhance its searching ability. Specifically, an extra Gaussian sampling point is added to GBB-ICA, whose mean and standard deviation are calculated using the positions of the second best colony in the empire and the current colony itself. To further speed up convergence and explore informative search regions, the quasi-oppositional learning technique is also incorporated into GBB-ICA for two purposes: to generate a higher quality initial population and to produce more potential candidates in the assimilation step. The proposed algorithm is called the quasi-oppositional learning-based double Gaussian sampling bare-bone imperialist competitive algorithm (QOLBDGSBB-ICA). The main contributions of our paper include the following aspects.
- A new double Gaussian sampling strategy is proposed to enhance the searching ability of GBB-ICA such that it can effectively avoid premature convergence.
- The quasi-oppositional learning strategy is incorporated into GBB-ICA to further speed up its convergence and help it explore more informative search regions to produce high-quality solutions.
- The proposed QOLBDGSBB-ICA exhibits superior performance in terms of global searching ability and convergence speed.
The remainder of this paper is arranged as follows. Section 2 briefly reviews GBB-ICA. The proposed QOLBDGSBB-ICA is explained in Sect. 3. The experimental results and comparative analysis are reported in Sect. 4. Lastly, Sect. 5 presents the concluding remarks.
2 Review of GBB-ICA
The main steps of GBB-ICA are similar to those of ICA and mainly consist of three evolutionary processes: initialization, assimilation and imperialistic competition. In the assimilation stage of GBB-ICA, Gaussian sampling is used to generate new positions for the colonies, instead of the random moving of ICA. In the following, the evolutionary processes of GBB-ICA are introduced in detail.
2.1 Initialization
Let us consider a minimization problem with d variables,

$$\min_{\varvec{x}} f(\varvec{x}), \quad \varvec{x} = [x_1, x_2, \ldots, x_d], \quad l_j \le x_j \le u_j, \; j = 1, 2, \ldots, d. \qquad (1)$$
In ICA, each candidate solution of optimization problem (1) is called a country. Mathematically, a d-dimensional vector \(\varvec{x} = [x_1, x_2, \ldots , x_d]\) denotes the country's position. In the initialization stage, N initial countries are produced in the search range in the following random manner,

$$x_{ij} = l_j + r \times (u_j - l_j), \qquad (2)$$
where \(x_{ij}\) stands for the jth component of the ith country, r is a uniformly distributed random number in [0, 1], and \(l_j\) and \(u_j\) are the minimum and maximum values allowed for variable \(x_j\). Then, the cost or power of each country can be computed as

$$c_i = f(\varvec{x}_i). \qquad (3)$$
In the next step, the N countries are sorted in ascending order according to their costs. The first \(N_{\text {imp}}\) countries with the best costs are taken as imperialists, while the remaining \(N-N_{\text {imp}}\) are colonies. Each imperialist possesses several colonies. To determine how many colonies the nth imperialist can possess, one first calculates the normalized cost and power of the nth imperialist as

$$C_n = c_n - \max_{i}\{c_i\}, \quad p_n = \left| \frac{C_n}{\sum_{i=1}^{N_{\text{imp}}} C_i} \right|. \qquad (4)$$
The nth imperialist can possess \(NC_n\) colonies, which is computed as

$$NC_n = \text{round}\left( p_n \cdot N_{\text{col}} \right), \qquad (5)$$
where \(\text{round}(\cdot)\) denotes rounding to the nearest integer and \(N_{\text{col}} = N - N_{\text{imp}}\) is the total number of colonies.
After assigning the colonies to each imperialist, \(N_{\text {imp}}\) empires are initially formed and the initialization process is complete.
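To make the initialization stage concrete, the following Python sketch implements the steps above using the formulas of this subsection. The function name, the handling of leftover colonies after rounding and the sphere test function are our own illustrative choices, not part of the original algorithm description.

```python
import numpy as np

def initialize_empires(f, N, N_imp, lower, upper, rng):
    """Sketch of ICA/GBB-ICA initialization: random countries, empire formation."""
    d = len(lower)
    pop = lower + rng.random((N, d)) * (upper - lower)   # Eq. (2)
    cost = np.array([f(x) for x in pop])                 # Eq. (3)
    order = np.argsort(cost)                             # ascending: best first
    pop, cost = pop[order], cost[order]
    imp, col = pop[:N_imp], pop[N_imp:]                  # imperialists / colonies
    C = cost[:N_imp] - cost[:N_imp].max()                # normalized cost, Eq. (4)
    p = np.abs(C / C.sum()) if C.sum() != 0 else np.full(N_imp, 1.0 / N_imp)
    NC = np.round(p * (N - N_imp)).astype(int)           # colonies per empire, Eq. (5)
    empires, idx = [], 0
    for n in range(N_imp):
        # rounding may not add up exactly; give the remainder to the last empire
        take = min(NC[n], len(col) - idx) if n < N_imp - 1 else len(col) - idx
        empires.append({"imp": imp[n], "colonies": col[idx:idx + take]})
        idx += take
    return empires

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
empires = initialize_empires(sphere, N=50, N_imp=8,
                             lower=np.full(30, -100.0),
                             upper=np.full(30, 100.0), rng=rng)
```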
2.2 Assimilation
In this stage, each colony moves toward its imperialist to improve its power and thus strengthen the whole empire. This process is illustrated in Fig. 1.
In Fig. 1, a colony approaches the imperialist by moving a distance \(\Delta \varvec{x}\) from its current position with a deviation angle \(\theta \). The length of \(\Delta \varvec{x}\) is component-wise random, that is to say,

$$\Delta x_j \sim U(0, \beta \times L),$$
where \(\Delta {x}_j\) is the jth component of \(\Delta \varvec{x}\), \(U(\cdot, \cdot)\) denotes the uniform distribution, L is the distance between the ith colony and its imperialist, and \(\beta \) is a real number greater than 1 called the assimilation coefficient. Similarly, \(\theta \) is a number randomly selected from \([-\pi /4, \pi /4]\), that is [5],

$$\theta \sim U(-\pi/4, \pi/4).$$
In GBB-ICA, a Gaussian sampling process is utilized to produce a new candidate position for each colony. For convenience of notation, let \(\varvec{x}^{\text {imp}}=[x_{1}^{\text {imp}}, x_{2}^{\text {imp}}, \ldots , x_{d}^{\text {imp}}]\) be the imperialist's position and \(\varvec{x}_i^{\text {col}}= [x_{i1}^{\text {col}}, x_{i2}^{\text {col}}, \ldots , x_{id}^{\text {col}}] \) be the ith colony's position. The new position of the ith colony is denoted as \(\varvec{z}_i^{\text {col}}=[z_{i1}^{\text {col}}, z_{i2}^{\text {col}}, \ldots , z_{id}^{\text {col}}]\). GBB-ICA generates the new position using the Gaussian sampling technique as

$$z_{ij}^{\text{col}} = \mathcal{N}(\mu_j, \sigma_j), \quad j = 1, 2, \ldots, d,$$
where \(\mathcal {N}(\mu _j, \sigma _j)\) represents a Gaussian distribution with mean \(\mu _j\) and standard deviation \(\sigma _j\). They are calculated as

$$\mu_j = \frac{x_{j}^{\text{imp}} + x_{ij}^{\text{col}}}{2}, \quad \sigma_j = \left| x_{j}^{\text{imp}} - x_{ij}^{\text{col}} \right|.$$
GBB-ICA generates new positions for a colony only around the imperialist, which carries the risk of getting stuck in local optima and missing the chance to explore more productive regions.
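As a minimal illustration, the sketch below performs one GBB-ICA assimilation step for a single colony, assuming the bare-bone choice of mean and standard deviation given above.

```python
import numpy as np

def gbb_assimilation(imp, col, rng):
    """One GBB-ICA Gaussian sampling step for a single colony (sketch)."""
    mu = 0.5 * (imp + col)            # component-wise mean
    sigma = np.abs(imp - col)         # component-wise standard deviation
    return rng.normal(mu, sigma)      # new candidate position z

rng = np.random.default_rng(1)
imp = np.zeros(5)                     # imperialist position
col = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
z = gbb_assimilation(imp, col, rng)
```

Note that when the colony coincides with the imperialist in some dimension, \(\sigma_j = 0\) and the sample collapses onto the mean, which illustrates why the search range of GBB-ICA is relatively small.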
2.3 Imperialistic Competition
After assimilation, the imperialistic competition between empires begins. In this stage, the stronger empires seize the colonies of the weaker empires, so the stronger empires become stronger and stronger while the weaker empires become weaker and weaker. To this end, the weakest colony of the weakest empire is seized by one of the stronger empires. The probability that a stronger empire occupies the lost colony is calculated as follows. First, the total cost of the nth empire is calculated as

$$TC_n = c_n + \xi \cdot \underset{i}{\text{mean}}\{c_i\},$$
where \(\xi \) is a real positive number smaller than 1 called the cost ratio coefficient, \(c_n\) is the cost of the nth imperialist, and \(c_i\) is the cost of the ith colony in the empire. The normalized total cost \(NTC_{n}\) of the nth empire is defined as

$$NTC_n = TC_n - \max_{i}\{TC_i\}.$$
All the empires have a chance to possess the lost colony with a probability \(p_n\), which is computed as

$$p_n = \left| \frac{NTC_n}{\sum_{i=1}^{N_{\text{imp}}} NTC_i} \right|.$$
Denote

$$\varvec{p} = [p_{1}, p_{2}, \ldots, p_{N_{\text{imp}}}]$$

as the probability vector. Generate a random vector \(\varvec{r} =[r_{1},r_{2},\ldots,r_{N_{\text {imp}}}]\) with \(r_i \in [0,1]\). The difference between \(\varvec{p}\) and \(\varvec{r}\) is computed as

$$\varvec{m} = \varvec{p} - \varvec{r} = [p_{1}-r_{1}, p_{2}-r_{2}, \ldots, p_{N_{\text{imp}}}-r_{N_{\text{imp}}}].$$
In \(\varvec{m}\), the index corresponding to the maximum value among \(p_i-r_i, i=1,2,\ldots , N_{\text{imp}}\) is denoted as \(\kappa \). Then, the \(\kappa \)th empire possesses the lost colony. Figure 2 gives an illustration of the imperialistic competition.
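A compact sketch of this competition step, based on the cost formulas above, is given below; the fallback for the degenerate case where all empires have equal total cost is our own addition.

```python
import numpy as np

def competition_winner(imp_costs, colony_costs_per_empire, xi, rng):
    """Return index kappa of the empire that seizes the lost colony (sketch)."""
    TC = np.array([c + xi * (np.mean(cc) if len(cc) else 0.0)
                   for c, cc in zip(imp_costs, colony_costs_per_empire)])
    NTC = TC - TC.max()                              # normalized total cost
    s = NTC.sum()
    p = np.abs(NTC / s) if s != 0 else np.full(len(TC), 1.0 / len(TC))
    r = rng.random(len(p))                           # random vector r
    return int(np.argmax(p - r))                     # kappa = argmax(m)

rng = np.random.default_rng(2)
kappa = competition_winner([1.0, 2.5, 4.0],
                           [[2.0, 3.0], [3.5], [5.0, 6.0, 7.0]],
                           xi=0.1, rng=rng)
```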
3 The Proposed QOLBDGSBB-ICA
In this section, the proposed QOLBDGSBB-ICA is explained in detail. First, the proposed double Gaussian sampling strategy is introduced; then, the quasi-oppositional learning technique is described.
3.1 Double Gaussian Sampling Strategy
As stated before, GBB-ICA generates new positions for a colony only around the imperialist, which carries the risk of trapping into local minima and missing the chance to explore more productive regions. Observing this, we propose a double Gaussian sampling strategy, in which an extra Gaussian sampling is added to generate the new position of the current colony. Let \(\varvec{x}_k^{\text {col}}= [x_{k1}^{\text {col}}, x_{k2}^{\text {col}}, \ldots , x_{kd}^{\text {col}}]\) denote the best colony in an empire (i.e., the second best individual after the imperialist); the double Gaussian sampling strategy generates the new position of a colony by combining the two Gaussian samples,
where \(r_1\) and \(r_2\) are two random numbers uniformly distributed in [0, 1], and

$$\mu_j^{(1)} = \frac{x_{j}^{\text{imp}} + x_{ij}^{\text{col}}}{2}, \quad \sigma_j^{(1)} = \left| x_{j}^{\text{imp}} - x_{ij}^{\text{col}} \right|, \quad \mu_j^{(2)} = \frac{x_{kj}^{\text{col}} + x_{ij}^{\text{col}}}{2}, \quad \sigma_j^{(2)} = \left| x_{kj}^{\text{col}} - x_{ij}^{\text{col}} \right|.$$
Compared with GBB-ICA, the double Gaussian sampling strategy utilizes additional information from the empire to generate new positions for the colonies, which enlarges the searching range and can effectively avoid premature convergence.
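The sketch below illustrates the double Gaussian sampling step. The exact way \(r_1\) and \(r_2\) combine the two samples is given by the paper's equation, which is not reproduced here; the random convex combination used below is only one plausible reading and should be treated as an assumption.

```python
import numpy as np

def double_gaussian_sample(imp, best_col, col, rng):
    """Double Gaussian sampling for one colony (sketch; combination rule assumed)."""
    y1 = rng.normal(0.5 * (imp + col), np.abs(imp - col))            # sample 1
    y2 = rng.normal(0.5 * (best_col + col), np.abs(best_col - col))  # sample 2
    r1, r2 = rng.random(), rng.random()
    # assumed combination: random convex mix of the two samples
    return (r1 * y1 + r2 * y2) / (r1 + r2 + 1e-12)

rng = np.random.default_rng(3)
z = double_gaussian_sample(np.zeros(4), np.ones(4),
                           np.array([2.0, -1.0, 0.5, 3.0]), rng)
```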
3.2 Quasi-Oppositional Learning Strategy
Here, the quasi-oppositional learning technique is used to further strengthen the performance of DGSBB-ICA. Several research results have shown that, in most evolutionary algorithms, generating new candidate solutions from oppositional guesses is more effective than generating purely random ones [41]. Building on the opposition-based learning proposed by Tizhoosh [41], Rahnamayan et al. proposed the concept of quasi-oppositional learning [36]. Quasi-oppositional learning has been widely adopted to enhance the performance of different evolutionary algorithms. In the following, several concepts related to quasi-oppositional learning are first introduced.
Opposite number The opposite number of a real number x in the interval [a, b] is defined as

$$\breve{x} = a + b - x.$$
Opposite point The opposite point of the point \(\varvec{x} = [x_1, x_2, \ldots , x_d]\) in d-dimensional space is denoted as \(\breve{{\varvec{x}}}=[\breve{x}_1, \breve{x}_2, \ldots , \breve{x}_d]\), in which \(\breve{x}_i\) is defined as

$$\breve{x}_i = a_i + b_i - x_i, \quad x_i \in [a_i, b_i].$$
Quasi-opposite number The quasi-opposite number of a real number x in the interval [a, b] is a uniformly distributed random number between the interval centre and the opposite number, i.e.,

$$\bar{x} = \text{rand}(c, \breve{x}), \quad c = \frac{a+b}{2},$$

where \(\text{rand}(c, \breve{x})\) denotes a uniform random number between c and \(\breve{x}\).
Quasi-oppositional point The quasi-oppositional point of \(\varvec{x}=[x_1, x_2, \ldots , x_d]\) is denoted as \(\bar{\varvec{x}}=[\bar{x}_1, \bar{x}_2, \ldots , \bar{x}_d]\); the components of \(\bar{\varvec{x}}\) are defined as

$$\bar{x}_i = \text{rand}(c_i, \breve{x}_i), \quad c_i = \frac{a_i + b_i}{2}.$$
Quasi-opposition-based optimization Let \({\varvec{x}}=(x_1, x_2, \ldots , x_d)\) be a point in d-dimensional space and \(f({\varvec{x}})\) the fitness function. The quasi-oppositional point of \(\varvec{x}\) is denoted as \(\bar{{\varvec{x}}}\). If \(f(\bar{{\varvec{x}}})\) is better than \(f({\varvec{x}})\), then point \({\varvec{x}}\) is replaced with \(\bar{{\varvec{x}}}\); otherwise, point \(\varvec{x}\) is reserved.
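These definitions translate directly into code. The sketch below computes the quasi-oppositional point of a point \(\varvec{x}\) in a box \([\varvec{a}, \varvec{b}]\), following the standard quasi-opposition formulas above.

```python
import numpy as np

def quasi_opposite(x, a, b, rng):
    """Quasi-oppositional point of x in the box [a, b]."""
    c = 0.5 * (a + b)                         # interval centres
    opp = a + b - x                           # opposite point
    lo, hi = np.minimum(c, opp), np.maximum(c, opp)
    return rng.uniform(lo, hi)                # uniform between centre and opposite

rng = np.random.default_rng(4)
x = np.array([1.0, -3.0, 4.0])
a, b = np.full(3, -5.0), np.full(3, 5.0)
x_qo = quasi_opposite(x, a, b, rng)
```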
3.3 Details of QOLBDGSBB-ICA
In this section, the double Gaussian sampling strategy and the quasi-oppositional learning technique are incorporated into GBB-ICA to enhance its optimization performance. The resulting algorithm is called QOLBDGSBB-ICA. The quasi-oppositional learning technique is used in both the initialization and assimilation stages.
3.3.1 Quasi-Oppositional Learning-Based Initialization
In the initialization stage, quasi-oppositional learning is used to generate higher quality initial solutions. The detailed process is explained as follows (a code sketch is given after the list).
1. Generate an initial population \(\varvec{P}_0\) with N countries.
2. Generate the quasi-oppositional population \(\varvec{QOP}_0\) of \(\varvec{P}_0\) according to the method shown in (20).
3. Calculate the costs of the populations \(\varvec{P}_0\) and \(\varvec{QOP}_0\), denoted as \(\varvec{F}_0\) and \(\varvec{QOF}_0\).
4. Sort the costs \(\varvec{F}_0 \bigcup \varvec{QOF}_0\) in ascending order and sort the 2N countries in the same order.
5. From the sorted 2N countries, select the first N countries and their corresponding costs as the initial population.
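A Python sketch of this five-step procedure is given below, reusing the quasi_opposite helper from the previous sketch; the function name and the test objective are illustrative choices of ours.

```python
import numpy as np

def qol_initialization(f, N, a, b, rng):
    """Quasi-oppositional initialization (steps 1-5 above)."""
    d = len(a)
    P0 = a + rng.random((N, d)) * (b - a)                          # step 1
    QOP0 = np.array([quasi_opposite(x, a, b, rng) for x in P0])    # step 2
    merged = np.vstack([P0, QOP0])
    costs = np.array([f(x) for x in merged])                       # step 3
    best = np.argsort(costs)[:N]                                   # steps 4-5
    return merged[best], costs[best]

pop, costs = qol_initialization(lambda x: float(np.sum(x ** 2)), N=50,
                                a=np.full(30, -100.0), b=np.full(30, 100.0),
                                rng=np.random.default_rng(5))
```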
3.3.2 Quasi-Oppositional Learning-Based Assimilation
After generating new positions for the colonies using double Gaussian sampling, the quasi-oppositional learning technique is adopted to help the algorithm find better solutions. Let \(\varvec{z}_i^{\text {col}}\) be the newly generated position of the ith colony; one generates the quasi-oppositional point \(\varvec{qoz}_i^{\text {col}}\) of \(\varvec{z}_i^{\text {col}}\). If \(f(\varvec{qoz}_i^{\text {col}}) < f(\varvec{z}_i^{\text {col}}) \), then \(\varvec{z}_i^{\text {col}}\) is replaced with \(\varvec{qoz}_i^{\text {col}}\); otherwise, \(\varvec{z}_i^{\text {col}}\) is retained as the new position of the ith colony.
The proposed QOLBDGSBB-ICA is summarized in Algorithm 1.
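Algorithm 1 is not reproduced here; the outline below shows, under our reading of the steps described above, how the pieces fit together within one generation. It reuses double_gaussian_sample and quasi_opposite from the earlier sketches; revolution and empire elimination are omitted for brevity.

```python
import numpy as np

def qolbdgsbb_ica_generation(empires, f, a, b, rng):
    """One generation of QOLBDGSBB-ICA (high-level outline, our reading)."""
    for emp in empires:
        if len(emp["colonies"]) == 0:
            continue
        order = np.argsort([f(c) for c in emp["colonies"]])
        best_col = emp["colonies"][order[0]]          # best colony of the empire
        for i, col in enumerate(emp["colonies"]):
            z = np.clip(double_gaussian_sample(emp["imp"], best_col, col, rng), a, b)
            qoz = quasi_opposite(z, a, b, rng)        # QOL on the candidate
            emp["colonies"][i] = qoz if f(qoz) < f(z) else z
            # promote the colony if it became stronger than the imperialist
            if f(emp["colonies"][i]) < f(emp["imp"]):
                new_imp = emp["colonies"][i].copy()
                emp["colonies"][i] = emp["imp"]
                emp["imp"] = new_imp
    # imperialistic competition then reassigns the weakest empire's weakest
    # colony to a winner chosen as in the competition sketch above
```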
4 Experimental Results
To demonstrate the performance of the proposed QOLBDGSBB-ICA, optimization experiments on 20 benchmark functions and several engineering design problems are conducted. Several well-performing improved ICA algorithms, i.e., AR-ICA, ICA-PSO and GBB-ICA, are selected as comparison algorithms. The common parameters of the referenced algorithms are set to the same values: the population size is 50, the initial number of empires is 8, and the assimilation coefficient is 2. All the referenced algorithms are independently run 30 times. The best, the worst, the mean and the std (standard deviation) of the objective function values are used as evaluation metrics. Furthermore, comparisons between the referenced algorithms via a nonparametric test are provided.
4.1 Experiments on Benchmark Function
First, optimization experiments are conducted on the benchmark functions to verify the performance of the proposed QOLBDGSBB-ICA.
4.1.1 Description of Benchmark Function
Here, 20 well-known benchmark functions are adopted to examine the performance of the proposed QOLBDGSBB-ICA; their details are listed in Tables 11 and 12 in the Appendix. The adopted benchmark functions can be divided into two classes, i.e., unimodal and multimodal functions. A unimodal function has only one global optimum and is relatively easy to optimize. On the contrary, a multimodal function has many local optima, making it more difficult to optimize than a unimodal one. In Tables 11 and 12, functions \(f_1\) and \(f_2\) are unimodal, while the other 18 functions, i.e., \(f_3\)–\(f_{20}\), are multimodal.
4.1.2 Experimental Results of 30-Dimensional Case
The optimization results of the 30-dimensional benchmark functions in the 30 independent runs are statistically reported in Tables 1 and 2 in terms of the best, the worst, the mean and std (standard deviation). In general, one can see from Tables 1 and 2 that the performance of the proposed QOLBDGSBB-ICA is superior to other referenced ICAs.
For the unimodal functions \(f_1\) and \(f_2\), the best, the worst, the mean and std of the proposed QOLBDGSBB-ICA all equal 0, which indicates that QOLBDGSBB-ICA achieves the global minimum 0 in each of the 30 runs and illustrates its superior performance in optimizing unimodal functions. For \(f_1\), AR-ICA, ICA-PSO and GBB-ICA achieve acceptable optimization results, whereas ICA fails to give a satisfactory result because its mean is 1.0052, which is far from the global minimum. For \(f_2\), none of ICA, AR-ICA, ICA-PSO and GBB-ICA gives a satisfactory result; they do not converge to the global minimum within the maximum number of iterations.
Finding the global minimum of a multimodal function is very difficult because many local optima exist in the search space. The proposed QOLBDGSBB-ICA achieves perfect optimization results on functions \(f_4\), \(f_5\), \(f_7\), \(f_9\), \(f_{10}\), \(f_{11}\), \(f_{12}\), \(f_{13}\), \(f_{15}\), \(f_{16}\) and \(f_{17}\): the best, the worst, the mean and the std obtained by QOLBDGSBB-ICA all approach 0. This shows that QOLBDGSBB-ICA finds the global minimum in every one of the 30 runs, indicating a powerful capacity to escape from local optima during the search. For \(f_3\), the result obtained by QOLBDGSBB-ICA is not the best among ICA, AR-ICA, ICA-PSO, GBB-ICA and DGSBB-ICA; AR-ICA achieves the best result. For function \(f_6\), the best value obtained by QOLBDGSBB-ICA approaches the global minimum \(-\)1.2279E+04, which is superior to those obtained by the other referenced ICAs; however, the mean is \(-\)1.1646E+04, which does not equal the global minimum, and the std is not 0, indicating that QOLBDGSBB-ICA does not reach the global minimum in every one of the 30 runs. For function \(f_8\), QOLBDGSBB-ICA and DGSBB-ICA find the same result: the best, the worst and the mean all equal 4.4409E\(-\)16 and the std is 0, which illustrates that both algorithms find 4.4409E\(-\)16 in each of the 30 runs. Though 4.4409E\(-\)16 is not the global minimum, it is a satisfactory result. For function \(f_9\), the global minimum is \(-\)1 and all the referenced ICAs obtain satisfactory results, i.e., the best values they find approach \(-\)1; however, DGSBB-ICA and QOLBDGSBB-ICA perform best. As far as function \(f_{14}\) is concerned, its global minimum is \(-\)1174.9797. The best value obtained by QOLBDGSBB-ICA is \(-\)1.1749E+03, the closest to the global minimum, and its mean is also the closest to the global minimum. For functions \(f_{19}\) and \(f_{20}\), none of the referenced ICAs reaches the global minimum; however, the results of QOLBDGSBB-ICA are the best. For \(f_{19}\), QOLBDGSBB-ICA gives the best value 1.30667E\(-\)16, better than the 7.6723E\(-\)09 and 9.5570E\(-\)09 obtained by GBB-ICA and AR-ICA, respectively. For \(f_{20}\), QOLBDGSBB-ICA gives the best value 3.02014E\(-\)13, better than the 3.1536E\(-\)09 and 3.5010E\(-\)09 given by ICA-PSO and GBB-ICA.
4.1.3 Experimental Results on 50-Dimensional Functions
The optimization results on the 50-dimensional benchmark functions are reported here. Usually, as the dimension increases, the optimization problem becomes more difficult. The results are statistically listed in Tables 3 and 4. Obviously, with increasing dimension, the optimization performance of the referenced ICA, AR-ICA, ICA-PSO and GBB-ICA decreases dramatically for both unimodal and multimodal functions. On the contrary, our proposed QOLBDGSBB-ICA still maintains good optimization performance. A detailed explanation follows.
First, for the unimodal functions \(f_1\) and \(f_2\), the best, the worst, the mean and std obtained by QOLBDGSBB-ICA all equal 0, which shows that QOLBDGSBB-ICA finds the global minimum in each of the 30 runs. DGSBB-ICA gives satisfactory results, whereas ICA, AR-ICA, ICA-PSO and GBB-ICA give unacceptable results because their means are far away from the global minimum.
For the multimodal functions \(f_4\), \(f_5\), \(f_7\), \(f_9\), \(f_{10}\sim f_{13}\), \(f_{15}\) and \(f_{17}\), the best, the worst, the mean and std obtained by QOLBDGSBB-ICA all equal 0, which illustrates that QOLBDGSBB-ICA finds the global minimum of these functions in each of the 30 runs and demonstrates its superior optimization performance. For function \(f_3\), all the referenced ICAs fail to give satisfactory results. For functions \(f_4\), \(f_7\), \(f_9\), \(f_{13}\) and \(f_{17}\), DGSBB-ICA also finds the global minimum every time, as QOLBDGSBB-ICA does. For function \(f_5\), ICA, AR-ICA, ICA-PSO and GBB-ICA all give poor results. For function \(f_6\), whose global minimum is \(-\)2.0949E+04, none of the referenced algorithms finds the global minimum; however, the result found by QOLBDGSBB-ICA is the closest to it. For function \(f_8\), ICA, AR-ICA, ICA-PSO and GBB-ICA all give poor results, while DGSBB-ICA and QOLBDGSBB-ICA achieve the best value 4.4409E\(-\)16, which is satisfactory. For functions \(f_{11}\) and \(f_{15}\), ICA, AR-ICA, ICA-PSO and GBB-ICA all give poor results, while DGSBB-ICA presents a satisfactory result; the proposed QOLBDGSBB-ICA finds the global minimum of \(f_{11}\) every time. For function \(f_{14}\), whose global minimum is \(-\)1.9582E+03, the result obtained by ICA-PSO is the closest to the global minimum and that obtained by QOLBDGSBB-ICA is the second best. For function \(f_{16}\), QOLBDGSBB-ICA can find the global minimum within the 30 runs; however, the worst, the mean and std do not equal 0, which indicates that it does not find the global minimum every time. Nevertheless, ICA, AR-ICA, ICA-PSO, GBB-ICA and DGSBB-ICA cannot find an acceptable result at all. A similar phenomenon occurs for function \(f_{18}\). For functions \(f_{19}\) and \(f_{20}\), none of the referenced algorithms finds the global minimum; however, the results obtained by QOLBDGSBB-ICA are the best among the compared algorithms.
From the optimization results listed in Tables 1, 2, 3 and 4 and the analysis above, one can conclude that the performance of QOLBDGSBB-ICA is the best among the compared algorithms. It can effectively overcome premature convergence and explore more informative regions in the searching process.
4.1.4 Convergence Analysis
In this section, the convergence of the proposed QOLBDGSBB-ICA is investigated. To this end, the evolutionary curves of all the algorithms are plotted.
Figures 3 and 4 show the evolutionary curves of the 30-dimensional benchmark functions. For the unimodal functions \(f_1\) and \(f_2\), one can see that ICA, AR-ICA, ICA-PSO and GBB-ICA exhibit stagnation behavior in the searching process, whereas DGSBB-ICA and QOLBDGSBB-ICA search effectively. This shows that the introduced double sampling strategy can indeed improve the searching performance of GBB-ICA. Compared with DGSBB-ICA, QOLBDGSBB-ICA converges to the global minimum within 400 generations, while DGSBB-ICA does not converge to the global minimum within the maximum number of generations, i.e., 1000.
For the multimodal functions, DGSBB-ICA and QOLBDGSBB-ICA also demonstrate superior convergence performance. For functions \(f_4\), \(f_7\sim f_9\), \(f_{13}\) and \(f_{17}\), DGSBB-ICA and QOLBDGSBB-ICA converge to the global minimum extremely fast, while the other referenced ICA variants exhibit slow convergence. For functions \(f_5\), \(f_{10}\), \(f_{11}\), \(f_{12}\) and \(f_{15}\), the convergence speed of DGSBB-ICA and QOLBDGSBB-ICA is considerably faster than that of ICA, AR-ICA, ICA-PSO and GBB-ICA, which appear to fall into local minima and cannot jump out of them. For functions \(f_6\), \(f_{14}\), \(f_{18}\), \(f_{19}\) and \(f_{20}\), the situation is more complex: QOLBDGSBB-ICA does not converge to the global minimum within 1000 generations, DGSBB-ICA exhibits stagnation behavior, and several other referenced ICA variants exhibit good convergence performance. Overall, however, the convergence speed of QOLBDGSBB-ICA is the fastest among the compared ICA variants. From the above analysis, one can conclude that the proposed double sampling strategy can indeed improve the exploration ability of GBB-ICA, and that quasi-oppositional learning can effectively help the algorithm explore valuable regions during the search, thereby improving the convergence speed.
Figures 5 and 6 show the evolutionary curves of the 50-dimensional benchmark functions. It is well known that the difficulty of finding the global minimum increases with the dimension. From Figs. 5 and 6, one can see convergence behavior similar to the 30-dimensional case. Specifically, for the unimodal functions \(f_1\) and \(f_2\), ICA, AR-ICA, ICA-PSO and GBB-ICA all exhibit stagnation behavior in the searching process, while the proposed QOLBDGSBB-ICA converges to the global minimum within 400 generations. For the multimodal functions \(f_4\), \(f_7\sim f_9\), \(f_{13}\) and \(f_{17}\), the proposed QOLBDGSBB-ICA and DGSBB-ICA converge to the global minimum extremely fast, and for \(f_5\), \(f_{10}\sim f_{12}\) and \(f_{15}\), their convergence speed is considerably faster than that of the other ICA variants. For \(f_6\), \(f_{14}\), \(f_{19}\) and \(f_{20}\), DGSBB-ICA demonstrates stagnation behavior while QOLBDGSBB-ICA still exhibits good convergence performance.
4.1.5 Wilcoxon Test
To further verify the superior performance of QOLBDGSBB-ICA, the Wilcoxon signed-rank test, a nonparametric statistical test, is conducted. The test examines two hypotheses: the null hypothesis \(H_0\), that QOLBDGSBB-ICA performs similarly to the compared algorithm (e.g., GBB-ICA), and the alternative hypothesis \(H_1\), that QOLBDGSBB-ICA performs better. The data used in the test are the best function values of each algorithm in the 30 runs.
The test results are listed in Tables 5 and 6, where \(h=1\) denotes that QOLBDGSBB-ICA is statistically superior to the algorithm involved in the comparison. From Table 5, one can see that QOLBDGSBB-ICA is superior to GBB-ICA on 19 functions and superior to DGSBB-ICA on 12 functions. This indicates that introducing quasi-oppositional learning and the double sampling strategy can indeed improve the optimization performance of GBB-ICA. For the 50-dimensional case, the number of functions on which QOLBDGSBB-ICA is superior to DGSBB-ICA increases to 15, which shows that QOLBDGSBB-ICA is more powerful than DGSBB-ICA when handling more complex optimization problems.
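For reference, such a one-sided Wilcoxon signed-rank test can be run with SciPy as sketched below; the arrays here are synthetic stand-ins for the 30-run best values read from the result tables.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(6)
qol_best = rng.normal(0.0, 1e-3, 30) ** 2      # stand-in: QOLBDGSBB-ICA best values
gbb_best = rng.normal(0.5, 1e-1, 30) ** 2      # stand-in: GBB-ICA best values
# H1: QOLBDGSBB-ICA attains smaller (better) minima than the compared algorithm
stat, p = wilcoxon(qol_best, gbb_best, alternative='less')
h = int(p < 0.05)                              # h = 1 means H0 is rejected
```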
4.2 Application to Engineering Design Problems
In this subsection, the proposed QOLBDGSBB-ICA is further applied to solve four engineering design problems for the purpose of testing its performance.
4.2.1 Pressure Vessel Design Problem
A pressure vessel (PV) has capped ends and hemispherical heads. The pressure vessel design problem aims to minimize the total cost of the cylindrical PV, which is closely related to material, forming and welding [22]. The design variables are the thickness of the shell (\(T_s\)) and of the head (\(T_h\)), the internal radius (R), and the length of the cylindrical section, excluding the head (L). The mathematical expression of this problem is formulated as follows.
This problem has been solved by several approaches, for example, PSO, GA, ES and the harmony search algorithm; see [11] for details. The optimization results of the proposed and referenced algorithms are reported in Table 7. It can be seen from Table 7 that OLBDGSBB-ICA obtains the minimum value 4543.5956 and QOLBDGSBB-ICA 4624.8766, both smaller than those obtained by IHS, PSO, GA, ES, WOA and BWOA.
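For concreteness, the sketch below evaluates the widely used pressure vessel formulation [22] with a simple quadratic penalty for constraint violations; the constants follow the standard statement of the problem and are assumed, not copied from the paper's equations.

```python
import numpy as np

def pressure_vessel(x):
    """Penalized pressure vessel objective; x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = [-Ts + 0.0193 * R,                                   # shell thickness limit
         -Th + 0.00954 * R,                                  # head thickness limit
         -np.pi * R ** 2 * L - (4.0 / 3.0) * np.pi * R ** 3 + 1296000.0,
         L - 240.0]                                          # length limit
    penalty = 1e6 * sum(max(0.0, gi) ** 2 for gi in g)       # quadratic penalty
    return cost + penalty

value = pressure_vessel([0.8125, 0.4375, 42.0984, 176.6366])
```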
4.2.2 Gear Train Design Problem
The gear train (GT) design problem is an unconstrained discrete design problem in mechanical engineering. The design goal is to minimize the gear ratio, defined as the ratio of the angular velocity of the output shaft to that of the input shaft. The design variables are the numbers of teeth of the gears, \(N_a\), \(N_b\), \(N_d\) and \(N_f\). This problem can be expressed as the following optimization problem.
This problem has been solved by several different algorithms such as the Lagrange multiplier method [22], the cuckoo search algorithm [16] and genetic adaptive search (GeneAS) [21]. The optimization results of the gear train design problem are shown in Table 8. The results obtained by the proposed OLBDGSBB-ICA and QOLBDGSBB-ICA are slightly better than those of the methods in the literature.
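The gear train objective fits in a few lines; the sketch below uses the standard formulation (assumed to match the paper's equation), under which the well-known solution (49, 16, 19, 43) yields an error of roughly 2.7E\(-\)12.

```python
def gear_train(n):
    """Squared error between the required ratio 1/6.931 and the gear ratio
    (Nb * Nd) / (Na * Nf); teeth counts are integers in [12, 60]."""
    Na, Nb, Nd, Nf = n
    return (1.0 / 6.931 - (Nb * Nd) / (Na * Nf)) ** 2

err = gear_train([49, 16, 19, 43])
```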
4.2.3 Multiple Disk Clutch Brake Design Problem
The multiple disk clutch brake design problem is a constrained optimization problem with discrete variables. Its primary goal is to minimize the mass of a multiple disk clutch brake [8]. The design variables are the inner radius \((x_1)\), outer radius \((x_2)\), thickness of the disks \((x_3)\), actuating force \((x_4)\) and number of friction surfaces \((x_5)\). The mathematical formulation of the problem can be expressed as follows.
In the literature, constrained iterative topographical global optimization (C-ITGO) [15], teaching–learning-based optimization (TLBO) [37], the water cycle algorithm (WCA) [13], improved accelerated PSO (IAPSO) [8] and APSO [8] have been applied to this problem. The optimization results are shown in Table 9. It can be seen that the proposed QOLBDGSBB-ICA and OLBDGSBB-ICA obtain satisfactory results, although they are slightly inferior to those of C-ITGO, WCA and IAPSO.
4.2.4 Car Side Impact Design Problem
This problem aims to minimize the total weight of the door. The involved design variables include thickness of B-Pillar inner (\(x_1\)), thickness of B-Pillar reinforcement \((x_2)\), thickness of floor side inner \((x_3)\), thickness of cross members \((x_4)\), thickness of door beam \((x_5)\), thickness of door beltline reinforcement \((x_6)\), thickness of roof rail \((x_7)\), materials of B-Pillar inner \((x_8)\), materials of floor side inner \((x_9)\), barrier height \((x_{10})\) and hitting position \((x_{11})\). This problem is mathematically expressed as follows.
This problem has been solved by several different algorithms, for example, PSO, DE, FA, CS and SNS. The optimization results are shown in Table 10. It can be seen from Table 10 that the proposed algorithms obtain smaller cost values compared with the referenced algorithms.
5 Conclusion
In this paper, an improved GBB-ICA is proposed to enhance the searching ability and speed up the convergence of GBB-ICA. First, a double Gaussian sampling strategy is designed to enhance the searching ability of GBB-ICA. The strategy generates a new position according to the positions of the imperialist and the best colony in an empire simultaneously, which maintains a good search direction while avoiding getting stuck in local optima. Second, the quasi-oppositional learning strategy is incorporated into the initialization and assimilation steps, which leads to faster convergence and a powerful ability to explore valuable regions of the search space. Extensive experiments have been conducted on 20 benchmark functions and 4 engineering design problems. The experimental results validate that the proposed algorithm exhibits superior optimization performance in terms of both convergence speed and global search ability.
In the future, we will strive to design more powerful Gaussian sampling strategies to increase the global search capacity of GBB-ICA for more complex optimization problems.
Data Availability
All data generated or analysed during this study are included in this article.
References
Abdechiri, M., Faez, K., Bahrami, H.: Adaptive imperialist competitive algorithm (aica). In: 9th IEEE International Conference on Cognitive Informatics (ICCI’10). IEEE, pp. 940–945 (2010). https://doi.org/10.1109/COGINF.2010.5599776
Afonso, L.D., Mariani, V.C., Dos Santos, C.L.: Modified imperialist competitive algorithm based on attraction and repulsion concepts for reliability-redundancy optimization. Expert Syst. Appl. 40(9), 3794–3802 (2013). https://doi.org/10.1016/j.eswa.2012.12.093
Aliniya, Z., Keyvanpour, M.R.: CB-ICA: a crossover-based imperialist competitive algorithm for large-scale problems and engineering design optimization. Neural Comput. Appl. 31(11), 7549–7570 (2019). https://doi.org/10.1007/s00521-018-3587-x
Aliniya, Z., Mirroshandel, S.A.: A novel combinatorial merge-split approach for automatic clustering using imperialist competitive algorithm. Expert Syst. Appl. 117, 243–266 (2019). https://doi.org/10.1016/j.eswa.2018.09.050
Atashpaz-Gargari, E., Lucas, C.: Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE Congress on Evolutionary Computation. IEEE, pp. 4661–4667 (2007). https://doi.org/10.1109/CEC.2007.4425083
Banisadr, A.H., Zandieh, M., Mahdavi, I.: A hybrid imperialist competitive algorithm for single-machine scheduling problem with linear earliness and quadratic tardiness penalties. Int. J. Adv. Manuf. Technol. 65(5–8), 981–989 (2013). https://doi.org/10.1007/s00170-012-4233-x
Barkhoda, W., Sheikhi, H.: Immigrant imperialist competitive algorithm to solve the multi-constraint node placement problem in target-based wireless sensor networks. Ad Hoc Netw. 106, 102183 (2020). https://doi.org/10.1016/j.adhoc.2020.102183
Ben Guedria, N.: Improved accelerated pso algorithm for mechanical engineering optimization problems. Appl. Soft Comput. 40, 455–467 (2016). https://doi.org/10.1016/j.asoc.2015.10.048
Bernal, E., Castillo, O., Soria, J., et al.: Interval type-2 fuzzy logic for dynamic parameter adjustment in the imperialist competitive algorithm. In: 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–5 (2019). https://doi.org/10.1109/FUZZ-IEEE.2019.8858935
Chen, C.H., Chen, W.H.: Bare-bones imperialist competitive algorithm for a compensatory neural fuzzy controller. Neurocomputing 173, 1519–1528 (2016). https://doi.org/10.1016/j.neucom.2015.09.025
Chen, H., Xu, Y., Wang, M., et al.: A balanced whale optimization algorithm for constrained engineering design problems. Appl. Math. Model. 71, 45–59 (2019). https://doi.org/10.1016/j.apm.2019.02.004
Ebrahimzadeh, A., Addeh, J., Rahmani, Z.: Control chart pattern recognition using k-mica clustering and neural networks. ISA Trans. 51(1), 111–119 (2012). https://doi.org/10.1016/j.isatra.2011.08.005
Eskandar, H., Sadollah, A., Bahreininejad, A., et al.: Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110–111, 151–166 (2012). https://doi.org/10.1016/j.compstruc.2012.07.010
Faramarzi, A., Heidarinejad, M., Mirjalili, S., et al.: Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377 (2020). https://doi.org/10.1016/j.eswa.2020.113377
Ferreira, M.P., Rocha, M.L., Silva Neto, A.J., et al.: A constrained itgo heuristic applied to engineering optimization. Expert Syst. Appl. 110, 106–124 (2018). https://doi.org/10.1016/j.eswa.2018.05.027
Gandomi, A.H., Yang, X.S., Talatahari, S., et al.: Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 18(1), 89–98 (2013). https://doi.org/10.1016/j.cnsns.2012.06.009
Gazi, V., Passino, K.M., Gazi, V., et al.: Bacteria foraging optimization. In: Swarm Stability and Optimization, pp. 233–249. Springer, Berlin Heidelberg, Berlin, Heidelberg (2011)
Goldansaz, S.M., Jolai, F., Zahedi Anaraki, A.H.: A hybrid imperialist competitive algorithm for minimizing Makespan in a multi-processor open shop. Appl. Math. Model. 37(23), 9603–9616 (2013). https://doi.org/10.1016/j.apm.2013.05.002
Idoumghar, L., Chérin, N., Siarry, P., et al.: Hybrid ICA-PSO algorithm for continuous optimization. Appl. Math. Comput. 219(24), 11149–11170 (2013). https://doi.org/10.1016/j.amc.2013.05.027
Jia, H., Peng, X., Lang, C.: Remora optimization algorithm. Expert Syst. Appl. 185, 115665 (2021). https://doi.org/10.1016/j.eswa.2021.115665
Kalyanmoy, D., Goyal, M.: A combined genetic adaptive search (geneas) for engineering design. Comput. Sci. Informatics 26, 30–45 (1996)
Kannan, B.K., Kramer, S.N.: An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Design 116(2), 405–411 (1994). https://doi.org/10.1115/1.2919393
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN’95 - International Conference on Neural Networks, pp. 1942–1948 vol.4, (1995). https://doi.org/10.1109/ICNN.1995.488968
Khaled, A.A., Hosseini, S.: Fuzzy adaptive imperialist competitive algorithm for global optimization. Neural Comput. Appl. 26(4), 813–825 (2015). https://doi.org/10.1007/s00521-014-1752-4
Khalid, A.M., Hamza, H.M., Mirjalili, S., et al.: Bcovidoa: a novel binary coronavirus disease optimization algorithm for feature selection. Knowl.-Based Syst. 248, 108789 (2022). https://doi.org/10.1016/j.knosys.2022.108789
Khalid, A.M., Hosny, K.M., Mirjalili, S.: Covidoa: a novel evolutionary optimization algorithm based on coronavirus disease replication lifecycle. Neural Comput. Appl. (2022). https://doi.org/10.1007/s00521-022-07639-x
Li, Y., Yang, Z., Wang, L., et al.: A hybrid imperialist competitive algorithm for energy-efficient flexible job shop scheduling problem with variable-size sublots. Comput. Ind. Eng. 172, 108641 (2022). https://doi.org/10.1016/j.cie.2022.108641
Mehdinejad, M., Mohammadi-Ivatloo, B., Dadashzadeh-Bonab, R., et al.: Solution of optimal reactive power dispatch of power systems using hybrid particle swarm optimization and imperialist competitive algorithms. Int. J. Electr. Power 83, 104–116 (2016). https://doi.org/10.1016/j.ijepes.2016.03.039
Moayedi, H., Gör, M., Kok Foong, L., et al.: Imperialist competitive algorithm hybridized with multilayer perceptron to predict the load-settlement of square footing on layered soils. Measurement 172, 108837 (2021). https://doi.org/10.1016/j.measurement.2020.108837
Mortazavi, A., Khamseh, A.A., Naderi, B.: A novel chaotic imperialist competitive algorithm for production and air transportation scheduling problems. Neural Comput. Appl. 26(7), 1709–1723 (2015). https://doi.org/10.1007/s00521-015-1828-9
Naruei, I., Keynia, F., Sabbagh Molahosseini, A.: Hunter-prey optimization: algorithm and applications. Soft. Comput. 26(3), 1279–1314 (2022). https://doi.org/10.1007/s00500-021-06401-0
Niknam, T., Fard, E.T., Ehrampoosh, S., et al.: A new hybrid imperialist competitive algorithm on data clustering. Sadhana - Acad. Proc. Eng. Sci. 36(3), 293–315 (2011). https://doi.org/10.1007/s12046-011-0026-4
Niknama, T., Fard, E.T., Pourjafarian, N., et al.: An efficient hybrid algorithm based on modified imperialist competitive algorithm and k-means for data clustering. Eng. Appl. Artif. Intel. 24(2), 306–317 (2011). https://doi.org/10.1016/j.engappai.2010.10.001
Price, K.V.: Differential evolution: a fast and simple numerical optimizer. In: Proceedings of North American Fuzzy Information Processing, pp. 524–527 (1996). https://doi.org/10.1109/NAFIPS.1996.534790
Rabiee, A., Sadeghi, M., Aghaei, J.: Modified imperialist competitive algorithm for environmental constrained energy management of microgrids. J. Clean. Prod. 202, 273–292 (2018). https://doi.org/10.1016/j.jclepro.2018.08.129
Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.: Quasi-oppositional differential evolution. In: 2007 IEEE Congress on Evolutionary Computation, pp. 2229–2236 (2007). https://doi.org/10.1109/CEC.2007.4424748
Rao, R., Savsani, V., Vakharia, D.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 43(3), 303–315 (2011). https://doi.org/10.1016/j.cad.2010.12.015
Reisi, N., Hadipour Lakmesari, S., Mahmoodabadi, M., et al.: Optimum fuzzy control of human immunodeficiency virus type1 using an imperialist competitive algorithm. Informatics Med. Unlocked 16, 100241 (2019). https://doi.org/10.1016/j.imu.2019.100241
Sadhu, A.K., Rakshit, P., Konar, A.: A modified imperialist competitive algorithm for multi-robot stick-carrying application. Robot. Auton. Syst. 76, 15–35 (2016). https://doi.org/10.1016/j.robot.2015.11.010
Simon, D.: Biogeography-based optimization. IEEE Trans. Evol. Comput. 12(6), 702–713 (2008). https://doi.org/10.1109/TEVC.2008.919004
Tizhoosh, H.: Opposition-based learning: a new scheme for machine intelligence. In: International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), pp. 695–701 (2005). https://doi.org/10.1109/CIMCA.2005.1631345
Xu, S., Wang, Y., Lu, P.: Improved imperialist competitive algorithm with mutation operator for continuous optimization problems. Neural Comput. Appl. 28(7), 1667–1682 (2017). https://doi.org/10.1007/s00521-015-2138-y
Zhu, G., Kwong, S.: Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 217(7), 3166–3173 (2010)
Funding
This work was supported by public welfare research program of Zhejiang Province (LGN22C140007, LGG19F050003).
Author information
Contributions
Dongge Lei: conceptualization, writing—original draft. Lulu Cai: writing—editing and revising. Fei Wu: data curation and software.
Ethics declarations
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A Description of the Benchmark Functions
In this appendix, the benchmark functions adopted in the experiments are described. Tables 11 and 12 list the name, mathematical expression, search range and global minimum of each function.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.