Open Access. Published by De Gruyter, December 4, 2021, under a CC BY 4.0 license.

Evaluation of several initialization methods on arithmetic optimization algorithm performance

  • Jeffrey O. Agushaka and Absalom E. Ezugwu

Abstract

Arithmetic optimization algorithm (AOA) is one of the recently proposed population-based metaheuristic algorithms. The algorithmic design concept of the AOA is based on the distributive behavior of arithmetic operators, namely, multiplication (M), division (D), subtraction (S), and addition (A). Because AOA is a new metaheuristic algorithm, a performance evaluation is significant to the global optimization research community and specifically to nature-inspired metaheuristic enthusiasts. This article evaluates the influence of the algorithm control parameters, namely, population size and the number of iterations, on the performance of the newly proposed AOA. In addition, we investigated and validated the influence of different initialization schemes available in the literature on the performance of the AOA. Experiments were conducted under three scenarios: in the first, the population size is large and the number of iterations is low; in the second, the number of iterations is high and the population size is small; in the third, the population size and the number of iterations are similar. The numerical results showed that AOA is sensitive to the population size and requires a large population for optimal performance. Afterward, we initialized AOA with six initialization schemes and tested their performance on the classical functions and the functions defined in the CEC 2020 suite. The results were presented, and their implications were discussed. Our results showed that the performance of AOA can be influenced when the solutions are initialized with schemes other than the default random numbers. The beta distribution outperformed the random number distribution in all cases for both the classical and CEC 2020 functions.
The performance of the uniform distribution, Rayleigh distribution, Latin hypercube sampling, and Sobol low discrepancy sequence is competitive with that of random numbers. On the basis of our experimental results, we recommend a solution size of 6,000, 100 iterations, and initializing the solutions with the beta distribution to make AOA perform optimally for the scenarios considered in our experiments.

1 Introduction

Optimization techniques have been applied successfully in many real-world problems. These real-world problems are usually complex, with multiple nonlinear constraints and multimodal nature. Solving these complex, nonlinear, and multimodal problems usually requires reliable optimization techniques. Metaheuristic algorithms are among the reliable optimization techniques that have been used to solve real-world problems in medicine, imaging processing, and many more [1,2].

Nature has been the main inspiration behind most metaheuristic algorithms, and mimicking some natural phenomenon is central to them. Several classifications or taxonomies of metaheuristic algorithms exist in the literature. A common taxonomy distinguishes bioinspired algorithms from physical-based ones (modeled on phenomena from physics and chemistry) [3]. Another taxonomy groups algorithms as swarm intelligence-based, evolutionary, physics-based, and human-based [4]. Metaheuristic algorithms are modeled after nature's best features, and many attribute their popularity to this fact and to their ability to find near-optimal solutions [5].

The population-based metaheuristic algorithms use stochastic methods to generate the population's location vectors. The location vectors are updated after every iteration to help find the location of the global optimum. Finding the global optimum usually involves exploring and exploiting the search space, and different metaheuristic algorithms use various mechanisms to achieve this. A good balance of exploration and exploitation leads to excellent algorithm performance. Most metaheuristic algorithms have the advantage of being gradient-free and usually avoid becoming stuck in local optima [6].

The nature and diversity of population-based algorithms play a significant role in their performance. It has also been shown by empirical observation and experiments that finding the global optimum depends heavily on the initial starting points of the population of a metaheuristic algorithm [7]. The population size and the number of iterations also contribute to a metaheuristic algorithm's performance. Some algorithms require a large population size and a small number of iterations to achieve optimality; others require the reverse setting; still others require the population size and the number of iterations to be almost the same [8]. Accordingly, the relative sizes of the population and the number of iterations need to be chosen carefully to ensure good performance of the algorithm.

New metaheuristic algorithms are proposed daily; each is wholly novel, improves an existing algorithm, or hybridizes two (or more) existing algorithms. Arithmetic optimization algorithm (AOA) is a recently proposed population-based metaheuristic algorithm [9]. AOA is based on the distributive behavior of arithmetic operators: multiplication (M), division (D), subtraction (S), and addition (A). A detailed description of AOA is presented in Section 2. The authors checked the algorithm's performance using 23 benchmark functions, six hybrid composite functions, and several real-world engineering design problems. The experimental results were compared against results from 11 other well-known optimization algorithms. The outcome showed that the AOA provided promising results in most cases and was very competitive in others.

Because AOA is a new metaheuristic algorithm, a performance evaluation is worthwhile. The motivation of this research is to evaluate the performance of AOA when the population size (here, the solution size) and the number of iterations are varied. Also, the initial solutions (population) in AOA are initialized using a random number generator. Previous studies [10,11,12] have shown that the random number generator may not be the optimal scheme for initializing the population. We therefore also checked the performance of AOA when initialized using various initialization schemes available in the literature. The goal is to propose the balance of population size, number of iterations, and initialization method that leads to the optimal performance of AOA for the optimization problems considered in this study. The significant contribution of this article is a performance analysis study of the newly developed AOA metaheuristic optimizer. The specific contributions of this article are as follows:

  • We evaluate the influence of different initialization schemes on the performance of the newly proposed AOA metaheuristic optimizer by varying initialization conditions, namely, population size and the number of iterations, coupled with the sensitivity analysis test.

  • We also evaluate the performance of AOA when the solutions are initialized using six different probability distribution initialization schemes available in the literature.

  • Finally, we recommend a balance of population size, number of iterations, and probability distribution method that yields high performance for the AOA optimizer under the scenarios considered in this article.

The rest of this article is organized as follows. In Section 2, the AOA is presented and discussed. Section 3 provides the methodology, including the experimental setup. Section 4 covers the experiments conducted and discusses the results for the modified AOA optimizer. Finally, Section 5 presents the concluding remarks and future directions.

2 The arithmetic optimization algorithm

The main inspiration of this algorithm is the use of arithmetic operators (multiplication, division, subtraction, and addition) in solving arithmetic problems. The AOA is modeled after the rules of the arithmetic operators in mathematics. The algorithm randomly initializes the starting solutions, and the best solution for each iteration is considered the near-optimal solution. The adaptation of pseudocode for AOA from ref. [9] is presented in Algorithm 1.

Algorithm 1

Pseudocode of the AOA.

1: Initialize the AOA parameters α, μ
2: Initialize the solutions' positions using rand( ). (Solutions: i = 1, ..., N.)
3: while (C_Iter < M_Iter) do
4:  Calculate the fitness function (FF) for the given solutions
5:  Find the best solution (determined best so far).
6:  Update the MOA value using equation (1).
7:  Update the MOP value using equation (3).
8:  for (i = 1 to solutions) do
9:   for (j = 1 to positions) do
10:    Generate random values r1, r2, r3 in [0, 1]
11:    if r1 > MOA then
12:     Exploration phase
13:     if r2 > 0.5 then
14:      Apply the division math operator (D "÷").
15:      Update the ith solution's position using the first rule in equation (2).
16:     else
17:      Apply the multiplication math operator (M "×").
18:      Update the ith solution's position using the second rule in equation (2).
19:     end if
20:    else
21:     Exploitation phase
22:     if r3 > 0.5 then
23:      Apply the subtraction math operator (S "−").
24:      Update the ith solution's position using the first rule in equation (4).
25:     else
26:      Apply the addition math operator (A "+").
27:      Update the ith solution's position using the second rule in equation (4).
28:     end if
29:    end if
30:   end for
31:  end for
32:  C_Iter = C_Iter + 1
33: end while
34: Return the best solution (x)

2.1 Exploration

The next phase after initialization is the exploration or exploitation phase. The math optimizer accelerated (MOA) function determines which of the two will run; it is computed at every iteration using equation (1). Depending on the comparison of MOA with a random value r1, AOA goes into the exploration or exploitation phase, as shown in Figure 1. If r1 > MOA, the exploration phase is activated. The high dispersion of the numbers generated by division and multiplication is used for the exploration phase.

(1) MOA(C_Iter) = Min + C_Iter × ((Max − Min) / M_Iter),

where C_Iter is the current iteration, Max and Min are the maximum and minimum values of the accelerated function, respectively, and M_Iter is the maximum number of iterations.
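Equation (1) is thus a linear ramp from Min to Max over the run. A minimal Python sketch of the schedule (the paper's implementation is MATLAB; the defaults Min = 0.2 and Max = 1 follow ref. [9] and are stated here as assumptions):

```python
def moa(c_iter, m_iter, mn=0.2, mx=1.0):
    """Math optimizer accelerated (MOA) value of equation (1): a linear ramp
    from mn (Min) to mx (Max) as the iteration counter c_iter advances.
    The default mn/mx values are assumptions taken from ref. [9]."""
    return mn + c_iter * (mx - mn) / m_iter
```

Because MOA grows with C_Iter, the condition r1 > MOA holds more often early in the run, so exploration dominates early and exploitation dominates late.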

Figure 1: Exploratory and exploitative phases of AOA [9].

The division operator is activated if the random number r2 < 0.5, and multiplication is activated otherwise. The position vector is updated using equation (2). As shown in Figure 1, if r2 < 0.5, the division operator continues to be executed until the condition fails, and then the multiplication operator is activated. The high dispersion of the numbers generated at this phase ensures that the search space is searched exhaustively for the optimal solution. The stochastic scaling factor (μ) ensures the randomness of the generated numbers, which correspond to the position vectors, thereby ensuring that the algorithm does not return to a previously occupied position. The value μ = 0.5 was experimentally selected by the authors.

(2) x_{i,j}(C_Iter + 1) = best(x_j) ÷ (MOP + ε) × ((UB_j − LB_j) × μ + LB_j), if r2 < 0.5; best(x_j) × MOP × ((UB_j − LB_j) × μ + LB_j), otherwise,

where x_{i,j}(C_Iter + 1) denotes the solution's position vector at the next iteration, best(x_j) is the jth position of the current best solution, ε is a small positive number (avoiding division by zero), and UB_j and LB_j are the jth upper and lower bounds, respectively. The math optimization probability (MOP) is defined by equation (3).

(3) MOP(C_Iter) = 1 − C_Iter^(1/α) / M_Iter^(1/α),

where M_Iter is the maximum number of iterations and α denotes the exploitation accuracy over the iterations. The authors set α = 5 after a series of experiments.
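In contrast to MOA, MOP decays from 1 toward 0 as the iterations progress, shrinking the step size of the updates. Equation (3) in Python (α = 5 as selected by the authors):

```python
def mop(c_iter, m_iter, alpha=5.0):
    """Math optimizer probability (MOP) of equation (3): decays from 1 at
    the start of the run toward 0 at the final iteration."""
    return 1.0 - c_iter ** (1.0 / alpha) / m_iter ** (1.0 / alpha)
```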

2.2 Exploitation

Now, if r1 ≤ MOA, AOA enters the exploitation phase. The highly dense, low-dispersion numbers generated by the addition and subtraction operators aid this phase of the algorithm. The addition and subtraction operators, modeled in equation (4), exploit the search space deeply to find a (near) optimal solution. AOA enters the addition or subtraction phase depending on the value of the random number r3. As shown in Figure 1, if r3 < 0.5, the subtraction operator is activated; otherwise, the addition operator is activated. The subtraction operator continues to execute until the condition fails. The numbers generated in this phase are highly dense and thus help the search converge to the (near) optimal solution. Also, the operators in this phase, along with the carefully selected value of μ, help AOA avoid being trapped in a local optimum.

(4) x_{i,j}(C_Iter + 1) = best(x_j) − MOP × ((UB_j − LB_j) × μ + LB_j), if r3 < 0.5; best(x_j) + MOP × ((UB_j − LB_j) × μ + LB_j), otherwise.
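Putting equations (1)-(4) together, the per-coordinate update can be sketched as follows. This is an illustrative Python rendering of the MATLAB original; passing r1, r2, r3 in as arguments (rather than drawing them inside) is our refactoring for testability, not the original interface:

```python
def update_position(best_j, mop_val, moa_val, ub_j, lb_j, r1, r2, r3,
                    mu=0.5, eps=1e-12):
    """One coordinate update of AOA: equation (2) in the exploration branch,
    equation (4) in the exploitation branch. best_j is the jth coordinate
    of the best solution found so far."""
    scale = (ub_j - lb_j) * mu + lb_j
    if r1 > moa_val:            # exploration: division / multiplication
        if r2 > 0.5:
            return best_j / (mop_val + eps) * scale   # first rule of eq. (2)
        return best_j * mop_val * scale               # second rule of eq. (2)
    if r3 > 0.5:                # exploitation: subtraction / addition
        return best_j - mop_val * scale               # first rule of eq. (4)
    return best_j + mop_val * scale                   # second rule of eq. (4)
```

With r1 above MOA, the dispersive division/multiplication rules fire; otherwise the dense subtraction/addition rules refine the position near the current best.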

2.3 Discussion

AOA is an interesting population-based metaheuristic algorithm that harnesses the power of the basic arithmetic operators in solving arithmetic problems. AOA uses division (D) and multiplication (M) for exploration and addition (A) and subtraction (S) for exploitation. Figure 2 shows how the four operators behave. We can see from the direction of the arrowheads that the high dispersion of the numbers generated by the D and M operators prevents convergence toward the target solution but gives the algorithm its exploratory ability to scan the search space effectively. Similarly, the arrowheads of the A and S operators show that the densely generated numbers make convergence at the target location possible. The operators work complementarily to enable convergence, as shown in Figure 2. The starting solutions of the original AOA are initialized using the random number generator. The solution size and number of iterations are fixed, and results are obtained for both test functions and some engineering problems. Research abounds in the literature showing that the performance of population-based metaheuristic algorithms is greatly influenced when they are initialized with, say, low discrepancy sequences such as Sobol, Faure, Halton, and Van der Corput [12,13,14]. Low discrepancy sequences are, however, known to falter as the dimension grows.

Figure 2: The McKay behavior of the arithmetic operators.

The population of metaheuristic algorithms has been initialized using Lévy flights [15,16]. Other authors have used chaos theory to improve the diversity of the population of metaheuristic algorithms [17,18]. However, chaos-based algorithms suffer from high computational complexity. Hybrids of metaheuristic algorithms have also been used to improve population diversity [19,20]. The success of this approach depends heavily on the authors' experience, and time complexity may also be an issue.

Probability distributions, varying population sizes, and the number of iterations have also been shown to affect the performance of metaheuristic algorithms [8]. The relevant literature elaborates on the performance of the AOA when applied to test functions and some engineering problems. However, the effects of solution size, the number of iterations, and other initialization methods have not been discussed. Therefore, in our work, a performance study of AOA is undertaken in terms of its reaction to solution size, the number of iterations, and other initialization methods. Results of the analysis are presented and discussed in Section 4.

3 Methodology

In this section, we describe in detail all the steps we took to achieve our objectives. The experimental setup is fully described, the range of values for the solution size and number of iterations are given, and the initialization schemes are also discussed.

3.1 Experimental setup

The AOA was implemented using MATLAB R2020b and run on Windows 10 with an Intel Core i7-7700 @ 3.60 GHz CPU and 16 GB of RAM. The number of function evaluations was set at 15,000, and the number of independent runs was set at 30. We tested our work using 23 classical test functions (Table 1), covering a wide variety of unimodal, nonseparable multimodal, many-local-optima, and multidimensional problems. We also tested our work using the benchmark functions defined in CEC 2020 (Table 2). The suite consists of ten functions specially designed to make finding the global optimum difficult. The AOA parameters μ and α are set at 0.5 and 5, respectively.

Table 1

Classical test functions

ID Type Function Dimension Bounds Global
F1 Unimodal f(x) = Σ_{i=1}^n x_i^2 30 [−100, 100] 0
F2 Unimodal f(x) = Σ_{i=1}^n |x_i| + Π_{i=1}^n |x_i| 30 [−10, 10] 0
F3 Unimodal f(x) = Σ_{i=1}^n (Σ_{j=1}^i x_j)^2 30 [−100, 100] 0
F4 Unimodal f(x) = max_i {|x_i|, 1 ≤ i ≤ n} 30 [−100, 100] 0
F5 Unimodal f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] 30 [−30, 30] 0
F6 Unimodal f(x) = Σ_{i=1}^n (⌊x_i + 0.5⌋)^2 30 [−100, 100] 0
F7 Unimodal f(x) = Σ_{i=1}^n i·x_i^4 + rand[0, 1) 30 [−1.28, 1.28] 0
F8 Multimodal f(x) = Σ_{i=1}^n −x_i sin(√|x_i|) 30 [−500, 500] −418.9829 × n
F9 Multimodal f(x) = 10n + Σ_{i=1}^n (x_i^2 − 10 cos(2πx_i)) 30 [−5.12, 5.12] 0
F10 Multimodal f(x) = −a·exp(−0.02 √(n^{−1} Σ_{i=1}^n x_i^2)) − exp(n^{−1} Σ_{i=1}^n cos(2πx_i)) + a + e, a = 20 30 [−32, 32] 0
F11 Multimodal f(x) = 1 + (1/4,000) Σ_{i=1}^n x_i^2 − Π_{i=1}^n cos(x_i/√i) 30 [−600, 600] 0
F12 Multimodal f(x) = (π/n){10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^n u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a 30 [−50, 50] 0
F13 Multimodal f(x) = 0.1{sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^n u(x_i, 5, 100, 4) 30 [−50, 50] 0
F14 Fixed-dimension multimodal f(x) = (1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^2 (x_i − a_{ij})^6))^{−1} 2 [−65, 65] 1
F15 Fixed-dimension multimodal f(x) = Σ_{i=1}^{11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2 4 [−5, 5] 0.00030
F16 Fixed-dimension multimodal f(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1 x_2 − 4x_2^2 + 4x_2^4 2 [−5, 5] −1.0316
F17 Fixed-dimension multimodal f(x) = (x_2 − (5.1/(4π^2))x_1^2 + (5/π)x_1 − 6)^2 + 10(1 − 1/(8π)) cos x_1 + 10 2 [−5, 5] 0.398
F18 Fixed-dimension multimodal f(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)] 2 [−2, 2] 3
F19 Fixed-dimension multimodal f(x) = −Σ_{i=1}^4 c_i exp(−Σ_{j=1}^3 a_{ij}(x_j − p_{ij})^2) 3 [−1, 2] −3.86
F20 Fixed-dimension multimodal f(x) = −Σ_{i=1}^4 c_i exp(−Σ_{j=1}^6 a_{ij}(x_j − p_{ij})^2) 6 [0, 1] −3.32
F21 Fixed-dimension multimodal f(x) = −Σ_{i=1}^5 [(X − a_i)(X − a_i)^T + c_i]^{−1} 4 [0, 10] −10.1532
F22 Fixed-dimension multimodal f(x) = −Σ_{i=1}^7 [(X − a_i)(X − a_i)^T + c_i]^{−1} 4 [0, 10] −10.4028
F23 Fixed-dimension multimodal f(x) = −Σ_{i=1}^{10} [(X − a_i)(X − a_i)^T + c_i]^{−1} 4 [0, 10] −10.5363
Table 2

Summary of the CEC2020 test suite

Type Number Function F_i*
Unimodal function 1 Shifted and rotated bent cigar function (CEC 2017[4] F1) 100
Basic functions 2 Shifted and rotated Schwefel’s function (CEC 2014[3] F11) 1,100
3 Shifted and rotated Lunacek bi-Rastrigin function (CEC 2017[4] F7) 700
4 Expanded Rosenbrock’s plus Griewangk’s function (CEC2017[4] F19) 1,900
Hybrid functions 5 Hybrid function 1 (N = 3) (CEC 2014[3] F17) 1,700
6 Hybrid function 2 (N = 4) (CEC 2017[4] F16) 1,600
7 Hybrid function 3 (N = 5) (CEC 2014[3] F21) 2,100
Composition functions 8 Composition function 1 (N = 3) (CEC 2017[4] F22) 2,200
9 Composition function 2 (N = 4) (CEC 2017[4] F24) 2,400
10 Composition function 3 (N = 5) (CEC 2017[4] F25) 2,500

*Search range: [−100, 100]^D.

[3] and [4] are under the basic function category.

To test the effect of solution size and the number of iterations on AOA, we carefully selected a range of numbers that reflect three scenarios. The first scenario is a large solution size and a small number of iterations. In the second scenario, we considered a small solution size and a large number of iterations. The last scenario has an almost equal solution size and number of iterations. Fourteen cases were considered, each consisting of a pair of solution size and number of iterations. Table 3 presents the range of numbers selected for the solution size and number of iterations. The experiment was carried out for each case using the CEC2020 test suite, and results are presented and discussed in Section 4. The best-performing population size and iteration number were then used for the subsequent experiments discussed in Section 3.2.

Table 3

Solution size and number of iterations

Solution size 10 50 100 100 200 300 300 500 600 600 1,000 2,000 3,000 6,000
Number of iterations 1,000 600 500 6,000 3,000 300 2,000 100 50 1,000 10 300 200 100

3.2 Initialization methods

We also selected six different initialization schemes from the literature to test how they affect the performance of AOA. The initialization methods consist of two variants of the beta distribution, a uniform distribution, the Rayleigh distribution, Latin hypercube sampling, and the Sobol low discrepancy sequence. Detailed descriptions are given in Sections 3.2.1-3.2.5.

3.2.1 Beta distribution

The beta distribution is defined over the interval (0,1).

(5) P(x; a, b) = (Γ(a + b) / (Γ(a)Γ(b))) x^(a−1) (1 − x)^(b−1).

It can be written as X ∼ Be(a, b). For our work, we varied the values of a and b to obtain two variants of the beta distribution, which we then used to generate position vectors for the AOA initial solutions. The variants used in our experiments are betarnd(3,2) and betarnd(2.5,2.5).
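To illustrate how such a scheme replaces rand( ) in step 2 of Algorithm 1, here is a NumPy analogue of betarnd; mapping the (0, 1) samples onto the bounds [lb, ub] is our assumption about how the initializer is applied, and the same mapping pattern carries over to the other bounded distributions:

```python
import numpy as np

def beta_init(n_solutions, dim, lb, ub, a=3.0, b=2.0, seed=0):
    """Initialize a population with Beta(a, b) samples (NumPy analogue of
    MATLAB's betarnd(3,2)), mapped from (0, 1) onto the search bounds.
    The bound mapping is our assumption, not a detail from the article."""
    rng = np.random.default_rng(seed)
    u = rng.beta(a, b, size=(n_solutions, dim))  # samples in (0, 1)
    return lb + u * (ub - lb)
```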

3.2.2 Uniform distribution

A uniform distribution defined over the interval [ a , b ] can be defined as follows:

(6) p(x) = 1/(b − a) for a < x < b, and 0 otherwise.

It is usually written as X ∼ U(a, b). We used unifrnd(0,1) in this research to generate position vectors for the AOA initial solutions.

3.2.3 Rayleigh distribution

The Rayleigh distribution [21] is defined as follows:

(7) p(x) = (x/σ^2) e^(−x^2/(2σ^2)), x > 0.

It can be written as X ∼ Rayleigh(σ). We used raylrnd(0.4) for our experiments.
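Unlike the beta and uniform distributions, Rayleigh samples are not confined to (0, 1), so an initializer has to keep positions inside the search bounds. A NumPy sketch of a raylrnd(0.4)-style initializer; the clipping step is our assumption, not a detail taken from the article:

```python
import numpy as np

def rayleigh_init(n_solutions, dim, lb, ub, sigma=0.4, seed=0):
    """Rayleigh(sigma)-based population initializer. Because the Rayleigh
    distribution is unbounded above, samples are clipped to [0, 1] before
    being mapped to [lb, ub] (clipping is our assumption)."""
    rng = np.random.default_rng(seed)
    u = np.clip(rng.rayleigh(scale=sigma, size=(n_solutions, dim)), 0.0, 1.0)
    return lb + u * (ub - lb)
```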

3.2.4 Latin hypercube sampling

Latin hypercube sampling (LHS) can fill the search space spatially and produce samples that reflect the underlying distribution. A grid is created in the search space by dividing each dimension into equal interval segments and generating random points within the interval [22]. We used a MATLAB function that generates LHS sequences for our experiments.
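The stratify-and-shuffle construction just described can be written in a few lines of NumPy; this is a minimal stand-in for MATLAB's lhsdesign, without its criterion options:

```python
import numpy as np

def lhs(n_samples, dim, seed=0):
    """Basic Latin hypercube sample in [0, 1)^dim: each dimension is cut
    into n_samples equal strata, one random point is drawn per stratum,
    and the strata are shuffled independently in every dimension."""
    rng = np.random.default_rng(seed)
    # one point per stratum: random offset within each interval [k/n, (k+1)/n)
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):        # decouple the dimensions
        rng.shuffle(u[:, j])
    return u
```

Every one-dimensional projection of the result contains exactly one point per stratum, which is the defining property of an LHS design.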

3.2.5 Sobol low discrepancy sequence

Given F_2 = {0, 1} and a linear recurrence relation defined over it, any integer n > 0 can be expanded in binary as follows:

n = n_1·2^0 + n_2·2^1 + ⋯ + n_w·2^(w−1).

A Sobol sequence ( X n ( j ) ) can then be defined as follows [23]:

(8) X_n^(j) = n_1 v_1^(j) ⊕ n_2 v_2^(j) ⊕ ⋯ ⊕ n_w v_w^(j),

where

v_i^(j) = a_1 v_{i−1}^(j) ⊕ a_2 v_{i−2}^(j) ⊕ ⋯ ⊕ a_{q−1} v_{i−q+1}^(j) ⊕ v_{i−q}^(j) ⊕ (v_{i−q}^(j) / 2^q), for i > q,

where ⊕ denotes bitwise exclusive-or and a_i is a coefficient of the qth-degree primitive polynomial over F_2. We generated the Sobol sequence for our experiments using a MATLAB function based on equation (8).
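To make the XOR construction concrete: for the first Sobol dimension the direction numbers are v_i = 1/2^i, and equation (8) then reproduces the base-2 Van der Corput sequence. A pure-Python sketch of this single dimension (higher dimensions need primitive polynomials and tabulated direction numbers, which MATLAB's generator supplies):

```python
def sobol_1d(n, bits=30):
    """First n points of the first Sobol dimension via equation (8):
    X_k = n_1 v_1 XOR n_2 v_2 XOR ..., where n_i are the binary digits
    of k and the direction numbers are v_i = 2^(bits - i), i.e. 1/2^i."""
    v = [1 << (bits - i) for i in range(1, bits + 1)]
    points = []
    for k in range(n):
        x, i, m = 0, 0, k
        while m:                 # XOR in v_i for every set bit of k
            if m & 1:
                x ^= v[i]
            m >>= 1
            i += 1
        points.append(x / float(1 << bits))
    return points
```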

To incorporate these initialization schemes into the AOA, we modified step 2 in Algorithm 1. Instead of initializing the solutions using the rand function, we used the following functions, as shown in Figure 3: betarnd(3,2), betarnd(2.5,2.5), unifrnd(0,1), raylrnd(0.4), lhsdesign( ), and sobol( ). Algorithm 2 shows the modifications, which gave rise to six variants of AOA, each initialized with one of the functions. The box with the star in Figure 3 shows the part of the flowchart that we modified; only the solutions' initial positions were affected. The rest of the algorithm runs as described by the original authors. Each of the six resultant variants is compared with the original AOA, and the results are presented.

Figure 3: The modified flowchart for AOA variants.

Algorithm 2

Pseudocode of the modified AOA.

1: Initialize the AOA parameters α, μ.
2: Initialize the solutions' positions using one of:
 a. rand( ). (Solutions: i = 1, ..., N.)
 b. betarnd(3,2)
 c. betarnd(2.5,2.5)
 d. raylrnd(0.4)
 e. lhsdesign( )
 f. sobol( )
 g. unifrnd(0,1)
3: while (C_Iter < M_Iter) do
4:  Calculate the fitness function (FF) for the given solutions
5:  Find the best solution (determined best so far).
6:  Update the MOA value using equation (1).
7:  Update the MOP value using equation (3).
8:  for (i = 1 to solutions) do
9:   for (j = 1 to positions) do
10:    Generate random values r1, r2, r3 in [0, 1]
11:    if r1 > MOA then
12:     Exploration phase
13:     if r2 > 0.5 then
14:      Apply the division math operator (D "÷").
15:      Update the ith solution's position using the first rule in equation (2).
16:     else
17:      Apply the multiplication math operator (M "×").
18:      Update the ith solution's position using the second rule in equation (2).
19:     end if
20:    else
21:     Exploitation phase
22:     if r3 > 0.5 then
23:      Apply the subtraction math operator (S "−").
24:      Update the ith solution's position using the first rule in equation (4).
25:     else
26:      Apply the addition math operator (A "+").
27:      Update the ith solution's position using the second rule in equation (4).
28:     end if
29:    end if
30:   end for
31:  end for
32:  C_Iter = C_Iter + 1
33: end while
34: Return the best solution (x)

4 Results and discussion

In this section, we present and discuss the results of our experiments. The results are reported using the following performance indicators: best, worst, mean, standard deviation, and the algorithm mean runtime. The statistical analyses of the results were carried out using Friedman's test.
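Friedman's test ranks the competing configurations within each benchmark function and then compares their mean ranks. The ranking step can be sketched in pure Python (ignoring ties for brevity, which is a simplifying assumption; the full test also turns the ranks into a chi-square statistic and p value):

```python
def mean_ranks(*treatments):
    """Mean rank of each treatment across problems, as used by Friedman's
    test. Each argument is one treatment's score list, aligned by problem;
    lower scores rank better (rank 1)."""
    k, n = len(treatments), len(treatments[0])
    totals = [0.0] * k
    for row in zip(*treatments):            # one benchmark function per row
        order = sorted(range(k), key=lambda t: row[t])
        for rank, t in enumerate(order, start=1):
            totals[t] += rank
    return [total / n for total in totals]
```

Applied to the per-function means of several (solution size, iterations) settings, the configuration with the lowest mean rank is the best performer, which is how the mean rankings discussed below are read.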

4.1 Influence of solution size and number of iterations

This experiment sets out to determine whether the number of solutions and the maximum number of iterations the algorithm uses have any significant effect. Studies in the existing literature have shown that different population sizes and numbers of iterations can significantly influence the performance of metaheuristic algorithms [24]. The results of the experiment are presented in Table 4. The results of Friedman's test presented in Table 5 show a significant influence (with a p value of 0.000, below the significance level of 0.05) when the solution size and the number of iterations vary. We also see that the lowest mean ranking occurred when the solution size was 6,000 and the number of iterations was 100, followed closely by a solution size of 3,000 with 200 iterations and then a solution size of 2,000 with 300 iterations. The best results are returned when the solution size is largest. This means that AOA depends heavily on the solution size; given a large enough population, it finds the optimal solution within a small number of iterations. So, for the remaining experiments, we used a solution size of 6,000 and set the number of iterations to 100 to obtain the results presented in Section 4.2.

Table 4

Influence of the population size and maximum number of iterations using CEC2020 suite

Function Population size 10 50 100 100 200 300 300 500 600 600 1,000 2,000 3,000 6,000
Number of iterations 1,000 600 500 6,000 3,000 300 2,000 100 50 1,000 10 300 200 100
F1 Mean 1.29 × 10^10 7.56 × 10^9 5.07 × 10^9 8.52 × 10^3 9.32 × 10^3 2.04 × 10^9 9.46 × 10^3 3.23 × 10^9 4.05 × 10^9 9.37 × 10^3 4.45 × 10^9 8.57 × 10^3 1.02 × 10^4 1.08 × 10^4
Stand. dev. 1.38 × 10^10 8.52 × 10^9 5.50 × 10^9 9.01 × 10^3 9.77 × 10^3 2.57 × 10^9 1.01 × 10^4 3.63 × 10^9 4.47 × 10^9 9.90 × 10^3 4.67 × 10^9 8.86 × 10^3 1.09 × 10^4 1.15 × 10^4
Best 3.34 × 10^9 9.21 × 10^8 9.46 × 10^8 3.43 × 10^3 4.33 × 10^3 1.72 × 10^8 4.15 × 10^3 3.63 × 10^8 1.30 × 10^9 5.08 × 10^3 2.02 × 10^9 4.00 × 10^3 4.50 × 10^3 5.27 × 10^3
Worst 2.21 × 10^10 1.74 × 10^10 8.67 × 10^9 1.69 × 10^4 1.62 × 10^4 6.66 × 10^9 1.62 × 10^4 7.31 × 10^9 7.99 × 10^9 1.71 × 10^4 7.44 × 10^9 1.37 × 10^4 2.15 × 10^4 2.10 × 10^4
Mean runtime 3.13 × 10^−1 6.88 × 10^−1 1.14 × 10^0 1.36 × 10^1 1.32 × 10^1 4.93 × 10^0 1.38 × 10^1 1.16 × 10^0 3.05 × 10^−1 6.47 × 10^0 1.59 × 10^−1 6.26 × 10^0 1.39 × 10^1 1.36 × 10^1
F2 Mean 2.31 × 10^3 1.92 × 10^3 1.99 × 10^3 1.82 × 10^3 1.83 × 10^3 1.92 × 10^3 1.86 × 10^3 1.86 × 10^3 1.89 × 10^3 1.81 × 10^3 2.31 × 10^3 1.71 × 10^3 1.74 × 10^3 1.64 × 10^3
Stand. dev. 2.32 × 10^3 1.96 × 10^3 2.01 × 10^3 1.84 × 10^3 1.85 × 10^3 1.94 × 10^3 1.89 × 10^3 1.88 × 10^3 1.90 × 10^3 1.82 × 10^3 2.32 × 10^3 1.74 × 10^3 1.76 × 10^3 1.66 × 10^3
Best 1.92 × 10^3 1.11 × 10^3 1.45 × 10^3 1.36 × 10^3 1.12 × 10^3 1.38 × 10^3 1.14 × 10^3 1.23 × 10^3 1.29 × 10^3 1.36 × 10^3 1.90 × 10^3 1.15 × 10^3 1.33 × 10^3 1.18 × 10^3
Worst 2.77 × 10^3 2.53 × 10^3 2.46 × 10^3 2.42 × 10^3 2.31 × 10^3 2.51 × 10^3 2.58 × 10^3 2.49 × 10^3 2.42 × 10^3 2.22 × 10^3 2.66 × 10^3 2.17 × 10^3 2.11 × 10^3 2.15 × 10^3
Mean runtime 4.45 × 10^−1 7.54 × 10^−1 1.31 × 10^0 1.50 × 10^1 1.48 × 10^1 5.22 × 10^0 1.51 × 10^1 1.34 × 10^0 3.96 × 10^−1 7.11 × 10^0 1.67 × 10^−1 7.38 × 10^0 1.56 × 10^1 1.55 × 10^1
F3 Mean 8.01 × 10^2 7.98 × 10^2 7.97 × 10^2 8.00 × 10^2 7.97 × 10^2 7.94 × 10^2 7.92 × 10^2 7.91 × 10^2 7.91 × 10^2 7.87 × 10^2 8.04 × 10^2 7.91 × 10^2 7.84 × 10^2 7.76 × 10^2
Stand. dev. 8.01 × 10^2 7.98 × 10^2 7.97 × 10^2 8.00 × 10^2 7.97 × 10^2 7.95 × 10^2 7.92 × 10^2 7.91 × 10^2 7.91 × 10^2 7.87 × 10^2 8.04 × 10^2 7.91 × 10^2 7.84 × 10^2 7.76 × 10^2
Best 7.68 × 10^2 7.50 × 10^2 7.65 × 10^2 7.71 × 10^2 7.67 × 10^2 7.59 × 10^2 7.68 × 10^2 7.56 × 10^2 7.49 × 10^2 7.43 × 10^2 7.76 × 10^2 7.45 × 10^2 7.47 × 10^2 7.44 × 10^2
Worst 8.23 × 10^2 8.29 × 10^2 8.19 × 10^2 8.33 × 10^2 8.48 × 10^2 8.21 × 10^2 8.17 × 10^2 8.37 × 10^2 8.12 × 10^2 8.28 × 10^2 8.32 × 10^2 8.70 × 10^2 8.06 × 10^2 8.17 × 10^2
Mean runtime 4.22 × 10^−1 7.24 × 10^−1 1.25 × 10^0 1.46 × 10^1 1.43 × 10^1 3.16 × 10^0 2.04 × 10^1 1.30 × 10^0 4.52 × 10^−1 7.01 × 10^0 1.96 × 10^−1 7.29 × 10^0 1.53 × 10^1 1.73 × 10^1
F4 Mean 3.00 × 10^5 9.12 × 10^4 4.19 × 10^4 1.99 × 10^3 1.98 × 10^3 8.90 × 10^3 1.99 × 10^3 1.15 × 10^4 1.49 × 10^4 1.97 × 10^3 3.27 × 10^4 1.97 × 10^3 1.97 × 10^3 1.96 × 10^3
Stand. dev. 3.34 × 10^5 1.19 × 10^5 6.49 × 10^4 1.99 × 10^3 1.98 × 10^3 1.68 × 10^4 1.99 × 10^3 1.89 × 10^4 2.24 × 10^4 1.97 × 10^3 4.80 × 10^4 1.97 × 10^3 1.97 × 10^3 1.96 × 10^3
Best 6.66 × 10^4 6.38 × 10^3 1.97 × 10^3 1.94 × 10^3 1.93 × 10^3 2.05 × 10^3 1.95 × 10^3 2.14 × 10^3 2.01 × 10^3 1.92 × 10^3 2.66 × 10^3 1.93 × 10^3 1.93 × 10^3 1.92 × 10^3
Worst 5.39 × 10^5 3.12 × 10^5 1.97 × 10^5 2.07 × 10^3 2.02 × 10^3 7.88 × 10^4 2.04 × 10^3 7.07 × 10^4 6.58 × 10^4 2.05 × 10^3 1.38 × 10^5 2.02 × 10^3 2.02 × 10^3 2.01 × 10^3
Mean runtime 3.68 × 10^−1 7.09 × 10^−1 1.21 × 10^0 1.41 × 10^1 1.39 × 10^1 2.53 × 10^0 1.62 × 10^1 1.23 × 10^0 4.27 × 10^−1 6.59 × 10^0 1.84 × 10^−1 6.86 × 10^0 1.42 × 10^1 1.98 × 10^1
F5 Mean 4.64 × 10^5 2.35 × 10^5 1.36 × 10^5 1.03 × 10^4 1.26 × 10^4 6.25 × 10^4 1.40 × 10^4 1.36 × 10^5 1.67 × 10^5 1.57 × 10^4 3.25 × 10^5 1.40 × 10^4 9.55 × 10^3 1.16 × 10^4
Stand. dev. 5.01 × 10^5 2.72 × 10^5 1.97 × 10^5 1.09 × 10^4 1.63 × 10^4 8.90 × 10^4 2.02 × 10^4 1.91 × 10^5 2.16 × 10^5 2.07 × 10^4 5.68 × 10^5 1.82 × 10^4 1.03 × 10^4 1.30 × 10^4
Best 2.54 × 10^4 2.51 × 10^4 8.34 × 10^3 5.46 × 10^3 4.50 × 10^3 1.63 × 10^4 3.46 × 10^3 1.20 × 10^4 1.63 × 10^4 5.25 × 10^3 4.43 × 10^3 2.62 × 10^3 2.42 × 10^3 3.39 × 10^3
Worst 9.28 × 10^5 6.39 × 10^5 5.96 × 10^5 2.30 × 10^4 5.15 × 10^4 3.34 × 10^5 6.37 × 10^4 5.10 × 10^5 5.25 × 10^5 6.77 × 10^4 2.14 × 10^6 5.92 × 10^4 2.14 × 10^4 2.85 × 10^4
Mean runtime 3.47 × 10^−1 7.53 × 10^−1 1.25 × 10^0 1.46 × 10^1 1.44 × 10^1 2.57 × 10^0 1.45 × 10^1 1.31 × 10^0 4.36 × 10^−1 7.44 × 10^0 1.90 × 10^−1 7.32 × 10^0 1.53 × 10^1 2.08 × 10^1
F6 Mean 1.62 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.60 × 10^3 1.61 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3
Stand. dev. 1.62 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.61 × 10^3 1.60 × 10^3 1.61 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3
Best 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3 1.60 × 10^3
Worst 1.64 × 10^3 1.64 × 10^3 1.62 × 10^3 1.62 × 10^3 1.62 × 10^3 1.62 × 10^3 1.62 × 10^3 1.62 × 10^3 1.64 × 10^3 1.62 × 10^3 1.62 × 10^3 1.62 × 10^3 1.62 × 10^3 1.60 × 10^3
Mean runtime 3.56 × 10^−1 7.32 × 10^−1 1.22 × 10^0 1.43 × 10^1 1.39 × 10^1 2.56 × 10^0 1.40 × 10^1 1.15 × 10^0 4.30 × 10^−1 7.18 × 10^0 1.43 × 10^−1 6.93 × 10^0 1.49 × 10^1 1.99 × 10^1
F7 Mean 2.26 × 10^6 5.60 × 10^4 6.36 × 10^3 6.27 × 10^3 6.61 × 10^3 7.14 × 10^3 5.17 × 10^3 7.11 × 10^3 7.24 × 10^3 5.82 × 10^3 2.74 × 10^4 6.03 × 10^3 6.22 × 10^3 5.85 × 10^3
Stand. dev. 3.68 × 10^6 2.36 × 10^5 7.16 × 10^3 6.83 × 10^3 7.27 × 10^3 7.56 × 10^3 5.64 × 10^3 8.00 × 10^3 8.02 × 10^3 6.41 × 10^3 5.43 × 10^4 6.65 × 10^3 6.92 × 10^3 6.74 × 10^3
Best 3.37 × 10^3 3.35 × 10^3 2.39 × 10^3 2.35 × 10^3 2.49 × 10^3 2.58 × 10^3 2.41 × 10^3 2.53 × 10^3 2.92 × 10^3 2.43 × 10^3 4.37 × 10^3 2.35 × 10^3 2.37 × 10^3 2.38 × 10^3
Worst 1.23 × 10^7 1.27 × 10^6 1.53 × 10^4 1.11 × 10^4 1.31 × 10^4 1.16 × 10^4 9.75 × 10^3 1.71 × 10^4 1.65 × 10^4 1.24 × 10^4 2.70 × 10^5 1.24 × 10^4 1.36 × 10^4 1.36 × 10^4
Mean runtime 3.62 × 10^−1 7.13 × 10^−1 1.24 × 10^0 1.46 × 10^1 1.93 × 10^1 2.55 × 10^0 1.49 × 10^1 5.63 × 10^−1 4.34 × 10^−1 7.50 × 10^0 2.13 × 10^−1 6.98 × 10^0 1.51 × 10^1 2.03 × 10^1
F8 Mean 3.32 × 10^3 2.89 × 10^3 2.71 × 10^3 2.46 × 10^3 2.44 × 10^3 2.56 × 10^3 2.42 × 10^3 2.62 × 10^3 2.68 × 10^3 2.43 × 10^3 2.76 × 10^3 2.38 × 10^3 2.38 × 10^3 2.35 × 10^3
Stand. dev. 3.34 × 10^3 2.89 × 10^3 2.71 × 10^3 2.46 × 10^3 2.44 × 10^3 2.57 × 10^3 2.42 × 10^3 2.62 × 10^3 2.68 × 10^3 2.43 × 10^3 2.76 × 10^3 2.38 × 10^3 2.38 × 10^3 2.35 × 10^3
Best 2.68 × 103 2.57 × 103 2.44 × 103 2.25 × 103 2.30 × 103 2.27 × 103 2.27 × 103 2.33 × 103 2.33 × 103 2.31 × 103 2.35 × 103 2.23 × 103 2.24 × 103 2.27 × 103
Worst 4.19 × 103 3.53 × 103 3.03 × 103 3.03 × 103 2.54 × 103 2.96 × 103 2.54 × 103 2.91 × 103 2.89 × 103 2.63 × 103 3.09 × 103 2.59 × 103 2.58 × 103 2.53 × 103
Mean runtime 3.53 × 10−1 8.66 × 10−1 1.48 × 10+00 1.72 × 101 1.69 × 101 3.00 × 10+00 1.80 × 101 7.45 × 10−1 5.11 × 10−1 8.33 × 10+00 2.35 × 10−1 8.33 × 10+00 1.78 × 101 2.39 × 101
F9 Mean 2.91 × 103 2.82 × 103 2.78 × 103 2.74 × 103 2.73 × 103 2.77 × 103 2.71 × 103 2.79 × 103 2.77 × 103 2.74 × 103 2.79 × 103 2.64 × 103 2.61 × 103 2.58 × 103
Stand. dev. 2.91 × 103 2.82 × 103 2.78 × 103 2.74 × 103 2.74 × 103 2.77 × 103 2.72 × 103 2.79 × 103 2.77 × 103 2.74 × 103 2.79 × 103 2.65 × 103 2.61 × 103 2.58 × 103
Best 2.72 × 103 2.64 × 103 2.59 × 103 2.50 × 103 2.50 × 103 2.52 × 103 2.50 × 103 2.60 × 103 2.61 × 103 2.50 × 103 2.66 × 103 2.50 × 103 2.50 × 103 2.50 × 103
Worst 3.08 × 103 2.91 × 103 2.87 × 103 2.92 × 103 2.87 × 103 2.88 × 103 2.86 × 103 2.91 × 103 2.84 × 103 2.84 × 103 2.87 × 103 2.81 × 103 2.81 × 103 2.78 × 103
Mean runtime 3.19 × 10−1 9.19 × 10−1 1.49 × 10+00 1.79 × 101 1.74 × 101 2.88 × 10+00 1.75 × 101 7.93 × 10−1 5.18 × 10−1 8.55 × 10+00 1.70 × 10−1 8.44 × 10+00 1.81 × 101 2.45 × 101
F10 Mean 3.50 × 103 3.30 × 103 3.13 × 103 2.94 × 103 2.95 × 103 3.01 × 103 2.96 × 103 3.09 × 103 3.16 × 103 2.95 × 103 3.16 × 103 2.94 × 103 2.93 × 103 2.94 × 103
Stand. dev. 3.51 × 103 3.30 × 103 3.13 × 103 2.94 × 103 2.95 × 103 3.01 × 103 2.96 × 103 3.09 × 103 3.16 × 103 2.95 × 103 3.16 × 103 2.94 × 103 2.93 × 103 2.94 × 103
Best 3.13 × 103 3.00 × 103 2.94 × 103 2.60 × 103 2.90 × 103 2.90 × 103 2.90 × 103 2.93 × 103 2.96 × 103 2.90 × 103 2.97 × 103 2.90 × 103 2.60 × 103 2.90 × 103
Worst 4.32 × 103 3.85 × 103 3.39 × 103 3.03 × 103 3.02 × 103 3.15 × 103 3.04 × 103 3.42 × 103 3.32 × 103 3.01 × 103 3.39 × 103 3.04 × 103 3.02 × 103 3.00 × 103
Mean runtime 2.98 × 10−1 8.36 × 10−1 1.44 × 10+00 2.42 × 101 1.63 × 101 3.84 × 10+00 1.58 × 101 7.66 × 10−1 4.85 × 10−1 8.08 × 10+00 2.19 × 10−1 8.32 × 10+00 1.71 × 101 2.37 × 101
Table 5

Friedman’s test mean rank for solution size and number of iterations

Population size 10 50 100 100 200 300 300 500 600 600 1,000 2,000 3,000 6,000
Number iterations 1,000 600 500 6,000 3,000 300 2,000 100 50 1,000 10 300 200 100
Friedman’s mean ranks 13.80 12.40 10.40 5.80 6.00 8.80 5.30 8.70 9.60 4.40 11.60 3.30 2.70 2.20
General mean rank 13 12 10 6 7 9 5 8 10 4 11 3 2 1

4.2 Initialization methods

We used two parameterizations of the beta distribution, the uniform distribution, the Rayleigh distribution, Latin hypercube sampling, and the Sobol low-discrepancy sequence to initialize the AOA solutions. The goal was to find out whether any of these initialization procedures led to a significant improvement in the performance of AOA. We carried out the experiments using both the classical test functions and the CEC 2020 test suite. The results obtained are presented in Tables 6 and 8, respectively.
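The generator names used in the tables below follow MATLAB conventions (betarnd, unifrnd, raylrnd, lhsdesign). As an illustration, a minimal NumPy sketch of how such initial populations can be drawn is given here; the linear scaling of [0, 1] samples to the search bounds, the Rayleigh normalization by the sample maximum, and the scheme names are our assumptions, not the paper's code:

```python
import numpy as np

def init_population(scheme, n_pop, dim, lb, ub, seed=None):
    """Draw an n_pop x dim initial population in [lb, ub] with a given scheme.

    Samples are first drawn in [0, 1] and then linearly scaled to the
    search bounds. Rayleigh draws are unbounded, so they are mapped into
    [0, 1] by dividing by the sample maximum (one simple choice).
    """
    rng = np.random.default_rng(seed)
    if scheme == "rand":                          # default random numbers
        u = rng.random((n_pop, dim))
    elif scheme == "beta32":                      # betarnd(3,2)
        u = rng.beta(3.0, 2.0, size=(n_pop, dim))
    elif scheme == "beta2525":                    # betarnd(2.5,2.5)
        u = rng.beta(2.5, 2.5, size=(n_pop, dim))
    elif scheme == "uniform":                     # unifrnd(0,1)
        u = rng.uniform(0.0, 1.0, size=(n_pop, dim))
    elif scheme == "rayleigh":                    # raylrnd-based, scale assumed 0.4
        r = rng.rayleigh(scale=0.4, size=(n_pop, dim))
        u = r / r.max()
    elif scheme == "lhs":                         # lhsdesign( )
        # one stratified sample per row in each dimension, then shuffled
        u = np.empty((n_pop, dim))
        for d in range(dim):
            strata = (np.arange(n_pop) + rng.random(n_pop)) / n_pop
            u[:, d] = rng.permutation(strata)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return lb + u * (ub - lb)

pop = init_population("lhs", n_pop=8, dim=3, lb=-100.0, ub=100.0, seed=0)
```

The Latin hypercube branch guarantees exactly one sample per 1/n stratum in every dimension, which is what distinguishes it from plain uniform sampling.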

Table 6

Results for classical test function

Function Global Value rand betarnd(3,2) betarnd(2.5,2.5) unifrnd(0,1) raylrnd(0.4) lhsdesign( ) Sobol( )
F1 0 Mean 0 0 0 0 0 0 0
Stand. dev. 0 0 0 0 0 0 0
Best 0 0 0 0 0 0 0
Worst 0 0 0 0 0 0 0
Mean runtime 1.42 × 10+00 1.40 × 10+00 1.49 × 10+00 1.38 × 10+00 1.50 × 10+00 1.71 × 10+00 1.35 × 10+00
F2 0 Mean 0 0 0 0 0 0 0
Stand. dev. 0 0 0 0 0 0 0
Best 0 0 0 0 0 0 0
Worst 0 0 0 0 0 0 0
Mean runtime 1.44 × 10+00 1.42 × 10+00 1.54 × 10+00 1.41 × 10+00 1.59 × 10+00 1.82 × 10+00 1.37 × 10+00
F3 0 Mean 0 0 0 0 0 0 0
Stand. dev. 0 0 0 0 0 0 0
Best 0 0 0 0 0 0 0
Worst 0 0 0 0 0 0 0
Mean runtime 2.98 × 10+00 2.86 × 10+00 2.75 × 10+00 2.88 × 10+00 2.80 × 10+00 3.29 × 10+00 2.61 × 10+00
F4 0 Mean 0 0 0 0 0 0 0
Stand. dev. 0 0 0 0 0 0 0
Best 0 0 0 0 0 0 0
Worst 0 0 0 0 0 0 0
Mean runtime 1.43 × 10+00 1.42 × 10+00 1.44 × 10+00 1.37 × 10+00 1.44 × 10+00 1.84 × 10+00 1.37 × 10+00
F5 0 Mean 2.93 × 10+00 2.79 × 10+00 2.74 × 10+00 2.89 × 10+00 2.91 × 10+00 2.82 × 10+00 2.81 × 10+00
Stand. dev. 2.97 × 10+00 2.81 × 10+00 2.77 × 10+00 2.93 × 10+00 2.93 × 10+00 2.85 × 10+00 2.85 × 10+00
Best 2.01 × 10+00 2.20 × 10+00 2.09 × 10+00 1.83 × 10+00 2.09 × 10+00 2.06 × 10+00 2.00 × 10+00
Worst 4.24 × 10+00 3.71 × 10+00 3.53 × 10+00 3.62 × 10+00 3.55 × 10+00 3.62 × 10+00 3.93 × 10+00
Mean runtime 1.87 × 10+00 1.79 × 10+00 1.85 × 10+00 1.79 × 10+00 1.88 × 10+00 2.24 × 10+00 1.77 × 10+00
F6 0 Mean 5.73 × 10−3 5.20 × 10−3 6.38 × 10−3 6.03 × 10−3 6.21 × 10−3 6.05 × 10−3 5.96 × 10−3
Stand. dev. 5.96 × 10−3 5.40 × 10−3 6.59 × 10−3 6.24 × 10−3 6.47 × 10−3 6.25 × 10−3 6.14 × 10−3
Best 1.94 × 10−3 1.48 × 10−3 2.17 × 10−3 2.87 × 10−3 2.38 × 10−3 3.63 × 10−3 4.01 × 10−3
Worst 8.35 × 10−3 8.35 × 10−3 9.06 × 10−3 8.68 × 10−3 1.08 × 10−2 9.85 × 10−3 9.82 × 10−3
Mean runtime 1.38 × 10+00 1.35 × 10+00 1.36 × 10+00 1.35 × 10+00 1.46 × 10+00 1.82 × 10+00 1.32 × 10+00
F7 0 Mean 1.58 × 10−6 1.87 × 10−6 1.51 × 10−6 1.82 × 10−6 1.58 × 10−6 2.36 × 10−6 2.33 × 10−6
Stand. dev. 2.23 × 10−6 2.79 × 10−6 1.83 × 10−6 2.60 × 10−6 1.84 × 10−6 3.56 × 10−6 3.43 × 10−6
Best 1.47 × 10−8 1.94 × 10−7 8.99 × 10−9 1.47 × 10−7 5.43 × 10−8 2.12 × 10−8 3.36 × 10−8
Worst 8.39 × 10−6 1.02 × 10−5 4.03 × 10−6 8.01 × 10−6 3.51 × 10−6 1.19 × 10−5 1.20 × 10−5
Mean runtime 2.26 × 10+00 2.27 × 10+00 2.31 × 10+00 2.25 × 10+00 2.31 × 10+00 2.66 × 10+00 2.22 × 10+00
F8 −418.9829 × n Mean −3.26 × 103 −3.13 × 103 −2.95 × 103 −3.37 × 103 −2.65 × 103 −3.25 × 103 −3.24 × 103
Stand. dev. 3.27 × 103 3.14 × 103 2.96 × 103 3.38 × 103 2.68 × 103 3.27 × 103 3.25 × 103
Best −3.72 × 103 −3.74 × 103 −3.24 × 103 −3.76 × 103 −3.50 × 103 −3.83 × 103 −3.62 × 103
Worst −2.77 × 103 −2.55 × 103 −2.61 × 103 −2.97 × 103 −2.01 × 103 −2.65 × 103 −2.75 × 103
Mean runtime 1.80 × 10+00 1.73 × 10+00 1.78 × 10+00 1.73 × 10+00 1.79 × 10+00 2.11 × 10+00 1.72 × 10+00
F9 0 Mean 0 0 0 0 0 0 0
Stand. dev. 0 0 0 0 0 0 0
Best 0 0 0 0 0 0 0
Worst 0 0 0 0 0 0 0
Mean runtime 1.43 × 10+00 1.39 × 10+00 1.43 × 10+00 1.41 × 10+00 1.52 × 10+00 1.80 × 10+00 1.40 × 10+00
F10 0 Mean 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16
Stand. dev. 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16
Best 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16
Worst 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16 8.88 × 10−16
Mean runtime 1.58 × 10+00 1.50 × 10+00 1.54 × 10+00 1.53 × 10+00 1.59 × 10+00 1.96 × 10+00 1.48 × 10+00
F11 0 Mean 0 0 0 0 0 0 0
Stand. dev. 0 0 0 0 0 0 0
Best 0 0 0 0 0 0 0
Worst 0 0 0 0 0 0 0
Mean runtime 1.93 × 10+00 1.84 × 10+00 1.92 × 10+00 1.83 × 10+00 2.15 × 10+00 2.38 × 10+00 1.85 × 10+00
F12 0 Mean 3.45 × 10−1 3.50 × 10−1 3.52 × 10−1 3.46 × 10−1 3.54 × 10−1 3.39 × 10−1 3.48 × 10−1
Stand. dev. 3.47 × 10−1 3.52 × 10−1 3.55 × 10−1 3.48 × 10−1 3.56 × 10−1 3.41 × 10−1 3.49 × 10−1
Best 2.79 × 10−1 2.87 × 10−1 2.74 × 10−1 2.83 × 10−1 2.81 × 10−1 2.50 × 10−1 3.01 × 10−1
Worst 4.49 × 10−1 4.09 × 10−1 4.25 × 10−1 4.25 × 10−1 4.37 × 10−1 4.77 × 10−1 3.99 × 10−1
Mean runtime 1.33 × 101 1.27 × 101 1.28 × 101 1.30 × 101 1.31 × 101 1.50 × 101 1.30 × 101
F13 0 Mean 6.42 × 10−1 7.10 × 10−1 7.39 × 10−1 8.40 × 10−1 6.95 × 10−1 6.86 × 10−1 8.21 × 10−1
Stand. dev. 7.09 × 10−1 7.56 × 10−1 7.87 × 10−1 8.60 × 10−1 7.47 × 10−1 7.53 × 10−1 8.46 × 10−1
Best 1.11 × 10−1 1.09 × 10−1 1.70 × 10−1 2.35 × 10−1 9.81 × 10−2 1.09 × 10−1 2.11 × 10−1
Worst 9.94 × 10−1 9.94 × 10−1 9.94 × 10−1 9.94 × 10−1 9.93 × 10−1 9.94 × 10−1 9.94 × 10−1
Mean runtime 4.84 × 10+00 4.72 × 10+00 4.72 × 10+00 4.73 × 10+00 4.78 × 10+00 5.13 × 10+00 4.71 × 10+00
F14 1 Mean 1.06 × 10+00 1.89 × 10+00 1.10 × 10+00 1.16 × 10+00 1.10 × 10+00 1.30 × 10+00 1.06 × 10+00
Stand. dev. 1.09 × 10+00 2.05 × 10+00 1.14 × 10+00 1.27 × 10+00 1.14 × 10+00 1.37 × 10+00 1.09 × 10+00
Best 9.98 × 10−1 9.98 × 10−1 9.98 × 10−1 9.98 × 10−1 9.98 × 10−1 9.98 × 10−1 9.98 × 10−1
Worst 1.99 × 10+00 2.98 × 10+00 1.99 × 10+00 2.98 × 10+00 1.99 × 10+00 1.99 × 10+00 1.99 × 10+00
Mean runtime 1.42 × 101 1.42 × 101 1.39 × 101 1.39 × 101 1.35 × 101 1.54 × 101 1.38 × 101
F15 3.00 × 10−4 Mean 1.39 × 10−3 1.18 × 10−3 1.03 × 10−3 9.31 × 10−4 1.18 × 10−3 1.28 × 10−3 1.12 × 10−3
Stand. dev. 1.61 × 10−3 1.31 × 10−3 1.13 × 10−3 1.10 × 10−3 1.33 × 10−3 1.71 × 10−3 1.38 × 10−3
Best 4.34 × 10−4 4.42 × 10−4 4.43 × 10−4 4.13 × 10−4 3.42 × 10−4 4.34 × 10−4 3.65 × 10−4
Worst 3.13 × 10−3 2.95 × 10−3 2.49 × 10−3 2.62 × 10−3 2.62 × 10−3 6.17 × 10−3 3.27 × 10−3
Mean runtime 1.29 × 10+00 1.22 × 10+00 1.21 × 10+00 1.20 × 10+00 1.23 × 10+00 1.41 × 10+00 1.23 × 10+00
F16 −1.03 × 10+00 Mean −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00
Stand. dev. 1.03 × 10+00 1.03 × 10+00 1.03 × 10+00 1.03 × 10+00 1.03 × 10+00 1.03 × 10+00 1.03 × 10+00
Best −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00
Worst −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00
Mean runtime 1.15 × 10+00 1.16 × 10+00 1.15 × 10+00 1.11 × 10+00 1.28 × 10+00 1.28 × 10+00 1.11 × 10+00
F17 3.98 × 10−1 Mean 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1
Stand. dev. 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1 3.99 × 10−1
Best 3.98 × 10−1 3.98 × 10−1 3.98 × 10−1 3.98 × 10−1 3.98 × 10−1 3.98 × 10−1 3.98 × 10−1
Worst 4.04 × 10−1 4.01 × 10−1 4.01 × 10−1 4.02 × 10−1 4.03 × 10−1 4.01 × 10−1 4.02 × 10−1
Mean runtime 9.84 × 10−1 1.11 × 10+00 9.72 × 10−1 9.61 × 10−1 1.14 × 10+00 1.10 × 10+00 9.78 × 10−1
F18 3.00 × 10+00 Mean 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00
Stand. dev. 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00
Best 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00
Worst 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00 3.00 × 10+00
Mean runtime 9.34 × 10−1 9.20 × 10−1 9.28 × 10−1 9.10 × 10−1 9.05 × 10−1 1.08 × 10+00 9.22 × 10−1
F19 −3.86 × 10+00 Mean −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00
Stand. dev. 3.86 × 10+00 3.86 × 10+00 3.86 × 10+00 3.86 × 10+00 3.86 × 10+00 3.86 × 10+00 3.86 × 10+00
Best −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00
Worst −3.85 × 10+00 −3.85 × 10+00 −3.85 × 10+00 −3.85 × 10+00 −3.85 × 10+00 −3.85 × 10+00 −3.85 × 10+00
Mean runtime 1.42 × 10+00 1.38 × 10+00 1.34 × 10+00 1.32 × 10+00 1.36 × 10+00 1.48 × 10+00 1.35 × 10+00
F20 −3.20 × 10+00 Mean −3.19 × 10+00 −3.21 × 10+00 −3.21 × 10+00 −3.18 × 10+00 −3.24 × 10+00 −3.19 × 10+00 −3.19 × 10+00
Stand. dev. 3.19 × 10+00 3.21 × 10+00 3.21 × 10+00 3.18 × 10+00 3.24 × 10+00 3.19 × 10+00 3.19 × 10+00
Best −3.31 × 10+00 −3.31 × 10+00 −3.28 × 10+00 −3.28 × 10+00 −3.29 × 10+00 −3.27 × 10+00 −3.27 × 10+00
Worst −3.15 × 10+00 −3.16 × 10+00 −3.10 × 10+00 −3.14 × 10+00 −3.18 × 10+00 −3.12 × 10+00 −3.13 × 10+00
Mean runtime 1.53 × 10+00 1.60 × 10+00 1.52 × 10+00 1.48 × 10+00 1.49 × 10+00 1.69 × 10+00 1.48 × 10+00
F21 −1.02 × 101 Mean −6.11 × 10+00 −6.96 × 10+00 −6.39 × 10+00 −6.15 × 10+00 −6.18 × 10+00 −6.33 × 10+00 −6.20 × 10+00
Stand. dev. 6.29 × 10+00 7.07 × 10+00 6.57 × 10+00 6.40 × 10+00 6.38 × 10+00 6.56 × 10+00 6.38 × 10+00
Best −9.28 × 10+00 −8.93 × 10+00 −9.30 × 10+00 −9.64 × 10+00 −8.81 × 10+00 −9.57 × 10+00 −8.84 × 10+00
Worst −3.83 × 10+00 −3.49 × 10+00 −4.37 × 10+00 −2.40 × 10+00 −2.54 × 10+00 −3.98 × 10+00 −4.09 × 10+00
Mean runtime 1.82 × 10+00 1.79 × 10+00 1.79 × 10+00 1.77 × 10+00 1.75 × 10+00 2.15 × 10+00 1.80 × 10+00
F22 −1.04 × 101 Mean −6.91 × 10+00 −7.12 × 10+00 −7.17 × 10+00 −6.98 × 10+00 −6.87 × 10+00 −6.22 × 10+00 −6.81 × 10+00
Stand. dev. 7.05 × 10+00 7.25 × 10+00 7.33 × 10+00 7.16 × 10+00 7.09 × 10+00 6.43 × 10+00 6.96 × 10+00
Best −9.40 × 10+00 −9.59 × 10+00 −9.19 × 10+00 −9.92 × 10+00 −1.00 × 101 −9.83 × 10+00 −8.65 × 10+00
Worst −4.62 × 10+00 −4.48 × 10+00 −4.12 × 10+00 −2.58 × 10+00 −2.40 × 10+00 −2.53 × 10+00 −3.94 × 10+00
Mean runtime 2.12 × 10+00 2.05 × 10+00 2.05 × 10+00 2.02 × 10+00 2.04 × 10+00 2.26 × 10+00 2.04 × 10+00
F23 −1.05 × 101 Mean −7.28 × 10+00 −7.18 × 10+00 −7.95 × 10+00 −6.80 × 10+00 −7.67 × 10+00 −7.27 × 10+00 −6.80 × 10+00
Stand. dev. 7.41 × 10+00 7.29 × 10+00 8.03 × 10+00 7.09 × 10+00 7.79 × 10+00 7.39 × 10+00 6.97 × 10+00
Best −9.15 × 10+00 −1.01 × 101 −9.86 × 10+00 −1.03 × 101 −1.01 × 101 −9.62 × 10+00 −9.45 × 10+00
Worst −2.63 × 10+00 −4.93 × 10+00 −5.93 × 10+00 −2.64 × 10+00 −4.79 × 10+00 −4.65 × 10+00 −4.26 × 10+00
Mean runtime 2.71 × 10+00 2.40 × 10+00 2.44 × 10+00 2.38 × 10+00 2.45 × 10+00 2.76 × 10+00 2.48 × 10+00
Table 7

Friedman’s test mean rank for classical functions

Initialization methods rand betarnd(3,2) betarnd(2.5,2.5) unifrnd(0,1) raylrnd(0.4) lhsdesign( ) Sobol( )
Friedman’s mean rank 10.43 10.43 10.13 9.33 11.13 11.63 11.85
General mean rank 3 3 2 1 4 5 6
Table 8

Results for CEC2020 test suite

Function Global Value rand betarnd(3,2) betarnd(2.5,2.5) unifrnd(0,1) raylrnd(0.4) lhsdesign( ) Sobol( )
F1 1.00 × 102 Mean 9.54 × 103 7.97 × 103 8.57 × 103 9.25 × 103 8.86 × 103 9.56 × 103 1.02 × 104
Stand. dev. 9.95 × 103 8.42 × 103 8.86 × 103 9.60 × 103 9.29 × 103 1.00 × 104 1.07 × 104
Best 4.39 × 103 4.29 × 103 2.95 × 103 5.84 × 103 3.94 × 103 4.95 × 103 4.18 × 103
Worst 1.68 × 104 1.76 × 104 1.25 × 104 1.50 × 104 1.59 × 104 1.70 × 104 1.76 × 104
Mean runtime 6.66 × 10+00 6.82 × 10+00 6.78 × 10+00 6.56 × 10+00 6.96 × 10+00 7.08 × 10+00 6.72 × 10+00
F2 1.10 × 103 Mean 1.61 × 103 1.73 × 103 1.72 × 103 1.64 × 103 1.77 × 103 1.64 × 103 1.70 × 103
Stand. dev. 1.63 × 103 1.74 × 103 1.74 × 103 1.65 × 103 1.78 × 103 1.66 × 103 1.72 × 103
Best 1.10 × 103 1.10 × 103 1.23 × 103 1.24 × 103 1.44 × 103 1.13 × 103 1.12 × 103
Worst 2.11 × 103 2.13 × 103 2.33 × 103 2.20 × 103 2.19 × 103 2.16 × 103 2.14 × 103
Mean runtime 7.19 × 10+00 7.25 × 10+00 7.21 × 10+00 7.14 × 10+00 7.60 × 10+00 7.57 × 10+00 7.42 × 10+00
F3 7.00 × 102 Mean 7.85 × 102 7.64 × 102 7.64 × 102 7.82 × 102 7.72 × 102 7.89 × 102 7.82 × 102
Stand. dev. 7.85 × 102 7.65 × 102 7.64 × 102 7.82 × 102 7.72 × 102 7.89 × 102 7.82 × 102
Best 7.57 × 102 7.33 × 102 7.34 × 102 7.30 × 102 7.23 × 102 7.47 × 102 7.55 × 102
Worst 8.30 × 102 7.95 × 102 7.92 × 102 8.23 × 102 8.17 × 102 8.47 × 102 8.34 × 102
Mean runtime 7.01 × 10+00 7.14 × 10+00 7.07 × 10+00 7.05 × 10+00 7.41 × 10+00 7.54 × 10+00 7.11 × 10+00
F4 1.90 × 103 Mean 1.96 × 103 1.96 × 103 1.97 × 103 1.96 × 103 1.98 × 103 1.97 × 103 1.97 × 103
Stand. dev. 1.96 × 103 1.96 × 103 1.97 × 103 1.96 × 103 1.98 × 103 1.97 × 103 1.97 × 103
Best 1.93 × 103 1.92 × 103 1.94 × 103 1.93 × 103 1.93 × 103 1.91 × 103 1.94 × 103
Worst 2.00 × 103 1.99 × 103 2.02 × 103 2.00 × 103 2.02 × 103 2.01 × 103 2.01 × 103
Mean runtime 6.99 × 10+00 6.99 × 10+00 6.93 × 10+00 6.77 × 10+00 7.34 × 10+00 7.34 × 10+00 7.01 × 10+00
F5 1.70 × 103 Mean 1.25 × 104 9.67 × 103 8.95 × 103 1.63 × 104 5.03 × 104 1.05 × 104 1.17 × 104
Stand. dev. 1.57 × 104 1.00 × 104 9.33 × 103 2.12 × 104 1.46 × 105 1.14 × 104 1.42 × 104
Best 5.39 × 103 3.06 × 103 2.46 × 103 2.95 × 103 2.49 × 103 2.64 × 103 6.01 × 103
Worst 5.96 × 104 1.49 × 104 1.43 × 104 6.56 × 104 6.58 × 105 2.25 × 104 5.02 × 104
Mean runtime 7.17 × 10+00 7.20 × 10+00 7.20 × 10+00 7.21 × 10+00 7.39 × 10+00 7.54 × 10+00 7.42 × 10+00
F6 1.60 × 103 Mean 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103
Stand. dev. 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103
Best 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103
Worst 1.60 × 103 1.62 × 103 1.62 × 103 1.60 × 103 1.62 × 103 1.60 × 103 1.60 × 103
Mean runtime 6.99 × 10+00 7.02 × 10+00 6.99 × 10+00 7.33 × 10+00 7.49 × 10+00 7.43 × 10+00 7.21 × 10+00
F7 2.10 × 103 Mean 6.71 × 103 6.01 × 103 6.66 × 103 6.28 × 103 7.20 × 103 5.58 × 103 6.23 × 103
Stand. dev. 7.51 × 103 6.50 × 103 7.30 × 103 7.12 × 103 7.81 × 103 6.16 × 103 7.24 × 103
Best 2.45 × 103 2.31 × 103 2.70 × 103 2.60 × 103 2.44 × 103 2.52 × 103 2.37 × 103
Worst 1.28 × 104 1.30 × 104 1.30 × 104 1.52 × 104 1.41 × 104 1.28 × 104 1.51 × 104
Mean runtime 7.15 × 10+00 7.13 × 10+00 7.08 × 10+00 7.47 × 10+00 7.71 × 10+00 7.48 × 10+00 7.22 × 10+00
F8 2.20 × 103 Mean 2.38 × 103 2.28 × 103 2.40 × 103 2.38 × 103 2.44 × 103 2.37 × 103 2.39 × 103
Stand. dev. 2.38 × 103 2.29 × 103 2.40 × 103 2.38 × 103 2.45 × 103 2.38 × 103 2.39 × 103
Best 2.24 × 103 2.23 × 103 2.27 × 103 2.25 × 103 2.25 × 103 2.25 × 103 2.25 × 103
Worst 2.52 × 103 2.44 × 103 2.58 × 103 2.61 × 103 2.62 × 103 2.68 × 103 2.60 × 103
Mean runtime 8.24 × 10+00 8.19 × 10+00 8.12 × 10+00 7.94 × 10+00 8.17 × 10+00 8.40 × 10+00 8.17 × 10+00
F9 2.40 × 103 Mean 2.61 × 103 2.50 × 103 2.50 × 103 2.60 × 103 2.59 × 103 2.62 × 103 2.61 × 103
Stand. dev. 2.61 × 103 2.50 × 103 2.50 × 103 2.60 × 103 2.59 × 103 2.62 × 103 2.61 × 103
Best 2.50 × 103 2.50 × 103 2.50 × 103 2.50 × 103 2.50 × 103 2.50 × 103 2.50 × 103
Worst 2.80 × 103 2.50 × 103 2.50 × 103 2.80 × 103 2.79 × 103 2.80 × 103 2.80 × 103
Mean runtime 8.49 × 10+00 8.45 × 10+00 8.43 × 10+00 8.59 × 10+00 8.49 × 10+00 8.71 × 10+00 8.46 × 10+00
F10 2.50 × 103 Mean 2.96 × 103 2.94 × 103 2.94 × 103 2.94 × 103 2.94 × 103 2.96 × 103 2.94 × 103
Stand. dev. 2.96 × 103 2.94 × 103 2.94 × 103 2.94 × 103 2.94 × 103 2.96 × 103 2.95 × 103
Best 2.90 × 103 2.90 × 103 2.90 × 103 2.90 × 103 2.90 × 103 2.90 × 103 2.90 × 103
Worst 3.03 × 103 2.98 × 103 3.01 × 103 3.01 × 103 2.97 × 103 3.04 × 103 3.03 × 103
Mean runtime 8.14 × 10+00 8.07 × 10+00 8.04 × 10+00 8.17 × 10+00 8.01 × 10+00 8.38 × 10+00 8.04 × 10+00

4.2.1 Classical test functions

A quick look at Table 6 shows that all the AOA variants found the global optima for F1–F4, F9, and F11. For F5, all the variants failed to find the global optimum; however, the variant initialized with unifrnd(0,1) returned the best single-run value. For most functions, one or more modified AOAs outperformed the original AOA (initialized with rand( )): their “best” values are closer to the global optimum, and their lower “Stand. dev.” values indicate that they are more stable and cluster more tightly around the “best” value. The variants initialized with Sobol( ) and unifrnd(0,1) generally recorded the lowest mean runtimes. The performance of all the variants is essentially identical for F10 and F16–F19, as indicated by the values returned for all the metrics considered.
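Each cell in Tables 6 and 8 summarizes repeated independent runs per function. A generic sketch of how such per-function statistics (mean, standard deviation, best, worst, and mean runtime) can be collected; the run count and the random-search stand-in objective are assumptions for illustration, not the paper's setup:

```python
import time
import numpy as np

def summarize_runs(optimizer, n_runs=30, seed0=0):
    """Run `optimizer(seed)` n_runs times; it must return the best objective
    value found in that run. Returns the statistics reported per function."""
    values, times = [], []
    for r in range(n_runs):
        t0 = time.perf_counter()
        values.append(optimizer(seed0 + r))
        times.append(time.perf_counter() - t0)
    values = np.array(values)
    return {
        "Mean": float(values.mean()),
        "Stand. dev.": float(values.std(ddof=1)),
        "Best": float(values.min()),
        "Worst": float(values.max()),
        "Mean runtime": float(np.mean(times)),
    }

# toy optimizer: random search on the sphere function (illustration only)
def random_search(seed, dim=5, budget=200):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-100, 100, size=(budget, dim))
    return float((pts ** 2).sum(axis=1).min())

stats = summarize_runs(random_search)
```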

A Friedman’s ranking test was carried out to better understand the results obtained, and its outcome is presented in Table 7. The p-value of 0.039 is below the 0.05 significance level, which means that there is a significant difference among the compared means. AOA initialized with unifrnd(0,1) is ranked first because it has the lowest mean rank, while the original AOA initialized with rand is tied for third with betarnd(3,2). This clearly shows that, for the chosen solution size and number of iterations, the performance of AOA is influenced by the initialization method.
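The mean ranks in Table 7 come from ranking the variants on each function (row-wise, ties sharing the average rank) and averaging down the columns. A small sketch of that computation; the score matrix here is hypothetical, not the paper's raw data:

```python
import numpy as np

def friedman_mean_ranks(scores):
    """scores: (n_problems, k_methods) array of results, lower is better.
    Returns each method's mean rank across problems, with tied values
    sharing the average of the ranks they span."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    all_ranks = np.empty_like(scores)
    for p in range(n):
        row = scores[p]
        order = np.argsort(row, kind="stable")
        ranks = np.empty(k)
        i = 0
        while i < k:
            j = i
            # extend j over any run of tied values
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0  # average rank for ties
            i = j + 1
        all_ranks[p] = ranks
    return all_ranks.mean(axis=0)

# hypothetical errors of 3 variants on 4 functions (illustration only)
mean_ranks = friedman_mean_ranks([[1.0, 2.0, 3.0],
                                  [1.5, 1.5, 2.0],
                                  [3.0, 1.0, 2.0],
                                  [2.0, 1.0, 3.0]])
```

The lowest mean rank identifies the best-performing method, which is how the "General mean rank" rows in Tables 5, 7, and 9 are derived.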

The convergence curves for the classical functions (F1–F10) are shown in Figure 4. All the AOA variants exhibited similar behavior on these functions, although the proposed variants produced smoother curves than the original AOA in most cases. For F1–F4, the shape of the curves indicates that the algorithms did not aggregate the search positions around the best result of a particular iteration, which implies a thorough scouring of the search space. Similar behavior is seen for F9 and F10, although some aggregation around the best solution occurs after the 400th iteration. For F5–F8, early convergence around the best result can be observed; this can be attributed to the algorithms' inability to find the global optimum, although the best run values remain close to it.
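Convergence curves like those in Figures 4 and 5 plot the best objective value found so far against the iteration count. A minimal sketch of recording such a trace; the greedy random-perturbation "optimizer" is a stand-in for AOA, used only to show what is logged:

```python
import numpy as np

def best_so_far_trace(f, x0, n_iter=500, step=0.5, seed=0):
    """Greedy random perturbation around the incumbent, recording the
    best-so-far objective value each iteration: exactly the quantity
    plotted on a convergence curve. The trace is non-increasing
    by construction."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx = f(x)
    trace = []
    for _ in range(n_iter):
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = f(cand)
        if fc < fx:                 # keep only improvements
            x, fx = cand, fc
        trace.append(fx)
    return trace

trace = best_so_far_trace(lambda v: float(np.sum(v * v)), x0=[5.0, -3.0])
```

A flat early segment of such a curve corresponds to the "premature convergence" discussed below: the best-so-far value stops improving because a near-optimal point was found early.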

Figure 4: The convergence behavior of AOA variants on classical functions.

4.2.2 CEC 2020 test suite

The benchmark test functions defined in the CEC 2020 suite are designed to make finding the global optima a challenging task. The seven variants of AOA were tested on this suite, and the results are presented in Table 8. All the variants found the global optimum for F6 but failed to find it for F1, F3–F5, and F7–F10; even where they failed, however, the “best” values remain close to the global optimum. For F2, two variants (betarnd(3,2) and rand) found the global optimum.

Friedman’s test results, given in Table 9, show that AOA initialized with betarnd(3,2) has the lowest mean rank and is therefore ranked first. The original AOA initialized with rand is tied for fourth, which again indicates that initializing AOA with these schemes influences the algorithm’s performance.

Table 9

Friedman’s test mean rank for CEC 2020 functions

Initialization methods rand betarnd(3,2) betarnd(2.5,2.5) unifrnd(0,1) raylrnd(0.4) lhsdesign( ) Sobol( )
Friedman’s mean rank 7.50 3.65 4.65 5.40 7.50 7.60 6.80
General mean rank 4 1 2 3 4 5 6

We also examined the convergence behavior of the AOA variants under study; the results are shown in Figure 5. The algorithms displayed unsteady convergence for most of the functions, which can be attributed to the nature and design of the functions; for F1, F4, and F7, however, a steady convergence curve is observed. Despite this unsteadiness, the algorithms converged quickly to the optimum (F2 and F6) and to near-optimum values for the remaining functions. The premature convergence visible for F2, F3, and F8 can be attributed to the algorithms finding optimal or near-optimal solutions early, owing to the initial positions of the solutions in the search space. It is also clear from Figure 5 that, in most cases, the other variants converged more steadily and searched the solution space more effectively than the original AOA.

Figure 5: Convergence curves for CEC 2020 test functions.

5 Conclusion

This article tested the influence of solution size, the number of iterations, and other initialization schemes on AOA. AOA is a new metaheuristic algorithm based on the behavior of arithmetic operators in mathematical calculations. We started by testing how AOA was affected when the solution size and the number of iterations were varied. Experiments were conducted on three scenarios: where the solution size was large and the number of iterations small, the number of iterations large and the solution size small, and where the solution size and the number of iterations were similar. The results showed that AOA is sensitive to solution size, which must be large for optimal performance. We then initialized AOA with six initialization schemes, and their performances were tested on the classical functions and the functions defined in the CEC 2020 suite. The results were presented, and their implications were discussed.

Our results showed that the performance of the AOA is influenced when the solutions are initialized with schemes other than the default random numbers. The beta distribution outperformed the random number distribution in all cases for both the classical and CEC 2020 functions, while the uniform distribution, Rayleigh distribution, Latin hypercube sampling, and Sobol low-discrepancy sequence performed on a par with the random number generator. On the basis of our experimental results, we recommend a solution size of 6,000, 100 iterations, and beta-distributed initial solutions for optimal AOA performance. We acknowledge that the distribution of the initial population will play a less significant role in the algorithm's performance for high-dimensional problems. However, given the difficulty of finding global optima for most real-world problems, anything that increases an algorithm's ability to converge to the global optimum is worthwhile in the field of metaheuristic optimization.
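A minimal sketch of the recommended configuration follows: an AOA loop whose population is drawn from a beta distribution rather than plain random numbers. The MOA/MOP schedules and the division/multiplication (exploration) and subtraction/addition (exploitation) update rules follow the published AOA description, with α = 5, μ = 0.499, and MOA range [0.2, 1] as commonly reported defaults; the population and iteration counts are scaled down here for illustration, and the sphere objective is only a placeholder. This is a sketch under those assumptions, not the authors' implementation:

```python
import numpy as np

def aoa_minimize(f, dim, lb, ub, n_pop=60, n_iter=100,
                 alpha=5.0, mu=0.499, moa_min=0.2, moa_max=1.0, seed=0):
    """Minimal arithmetic optimization algorithm with a beta(3,2)-initialized
    population, per the recommendation above."""
    rng = np.random.default_rng(seed)
    eps = np.finfo(float).eps
    X = lb + rng.beta(3.0, 2.0, size=(n_pop, dim)) * (ub - lb)  # beta init
    fit = np.array([f(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), float(fit.min())
    for t in range(1, n_iter + 1):
        moa = moa_min + t * (moa_max - moa_min) / n_iter        # accelerates over time
        mop = 1.0 - t ** (1.0 / alpha) / n_iter ** (1.0 / alpha)
        scale = (ub - lb) * mu + lb
        for i in range(n_pop):
            x = np.empty(dim)
            for j in range(dim):
                r1, r2, r3 = rng.random(3)
                if r1 > moa:                                    # exploration: D or M
                    x[j] = (best[j] / (mop + eps) * scale if r2 > 0.5
                            else best[j] * mop * scale)
                else:                                           # exploitation: S or A
                    x[j] = (best[j] - mop * scale if r3 > 0.5
                            else best[j] + mop * scale)
            x = np.clip(x, lb, ub)
            fx = f(x)
            if fx < fit[i]:
                X[i], fit[i] = x, fx
            if fx < best_f:
                best, best_f = x.copy(), fx
    return best, best_f

sphere = lambda v: float(np.sum(v * v))
xbest, fbest = aoa_minimize(sphere, dim=5, lb=-10.0, ub=10.0)
```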

  1. Funding information: This work was supported by the Tertiary Education Trust Fund under Grant TETF/ES/UNIV/NASARAWA/TSAS/2019.

  2. Conflict of interest: The authors declare no conflict of interest.


Received: 2021-07-22
Revised: 2021-10-01
Accepted: 2021-10-28
Published Online: 2021-12-04

© 2022 Jeffrey O. Agushaka and Absalom E. Ezugwu, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
