An improved volleyball premier league
algorithm based on sine cosine algorithm
for global optimization problem
Reza Moghdani, Mohamed Abd Elaziz,
Davood Mohammadi & Nabil Neggaz
Engineering with Computers
An International Journal for Simulation-Based Engineering
ISSN 0177-0667
DOI 10.1007/s00366-020-00962-8
Author's personal copy
ORIGINAL ARTICLE
An improved volleyball premier league algorithm based on sine cosine
algorithm for global optimization problem
Reza Moghdani1 · Mohamed Abd Elaziz2 · Davood Mohammadi3 · Nabil Neggaz4,5
Received: 19 June 2019 / Accepted: 22 January 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
The volleyball premier league (VPL) algorithm, which simulates phenomena of the volleyball game, has been presented recently. This powerful algorithm models the racing and interplay between teams within a season, and it also imitates the coaching procedure within a game. Volleyball metaphors, including substitution, coaching, and learning, are therefore used by the VPL algorithm to find better solutions. However, the learning phase has the largest effect on the performance of the VPL algorithm, and this phase can cause the VPL to become stuck in locally optimal solutions. Therefore, this paper proposes a modified VPL based on the sine cosine algorithm (SCA), in which the SCA operators are applied in the learning phase to obtain more accurate solutions. The SCA operators are used within VPL to exploit their advantages, resulting in a more efficient approach for finding the optimal solution of an optimization problem while avoiding the limitations of the traditional VPL algorithm. The proposed VPLSCA algorithm is tested on 25 benchmark functions. The results obtained by VPLSCA are compared with those of other metaheuristic algorithms, including cuckoo search, the social-spider optimization algorithm, the ant lion optimizer, the grey wolf optimizer, the salp swarm algorithm, the whale optimization algorithm, moth flame optimization, the artificial bee colony, SCA, and VPL. Furthermore, three typical optimization problems in the field of engineering design have been solved using VPLSCA. According to the obtained results, the proposed algorithm shows very reasonable and promising results compared to the others.
Keywords Metaheuristic · Global optimization · Volleyball premier league · Sine cosine algorithm
1 Introduction
* Mohamed Abd Elaziz
abd_el_aziz_m@yahoo.com

Reza Moghdani
reza.moghdani@gmail.com

Davood Mohammadi
mohammady1366@yahoo.com

Nabil Neggaz
nabil.neggaz@univ-usto.dz

1 Industrial Management Department, Persian Gulf University, Boushehr, Iran
2 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig, Egypt
3 Industrial Engineering Department, Payam Noor University, Asalouyeh, Iran
4 Université des Sciences et de la Technologie d'Oran Mohamed Boudiaf, USTO-MB, BP 1505, EL M'naouer, 31000 Oran, Algeria
5 Faculté des Mathématiques et Informatique, Département d'Informatique, Laboratoire SIMPA, Oran, Algeria
In recent years, human progress in the physical and social sciences, and especially in industrialization, has given rise to complex problems. This situation has persuaded scientists to use and develop new algorithms for solving these problems. Owing to the nature and complexity of these problems, new algorithms have been developed in recent years to solve them. In this regard, artificial intelligence and stochastic optimization approaches have been the center of attention for tackling these obstacles. Optimization is the process of finding, in reasonable time, the best solutions to a problem, so as to minimize or maximize objective functions that are restricted by constraints. Optimization is broadly applicable in every field, such as economics [1], engineering design [2], pattern recognition [3], chemistry, and information technology [4, 5]. As mentioned above, the complexity of such problems requires a new paradigm of optimization algorithms rather than traditional approaches. The new paradigm is inspired
by natural phenomena, both physical and biological. These algorithms are called meta-heuristics (MH). They imitate natural processes, such as natural selection or collective behavior, in seeking the best solution. Generally speaking, these algorithms can be categorized into four groups. The first group, stochastic algorithms, uses randomness to explore the search space and includes Local Search (LS) [6], Adaptive Random Search (ARS) [7], Stochastic Hill Climbing (SHC), Iterated Local Search (ILS) [8], Variable Neighborhood Search (VNS) [9], the Greedy Randomized Adaptive Search Procedure (GRASP) [9], and Tabu Search (TS) [10]. The second group is named population-based algorithms. The most well-known MH, the Genetic Algorithm (GA) [11], falls into this category. The most famous algorithms in this group include Evolution Strategies (ES) [12], Evolutionary Programming (EP) [13], Grammatical Evolution (GE) [14], Adaptive Differential Evolution (ADE) [15], the Interior Search Algorithm (ISA) [16], and Stochastic Fractal Search (SFS) [17]. The third group of MH is called physical algorithms; these are inspired by a range of physical systems and combine neighborhood-based and global search techniques. The most well-known algorithms in this group are the Ions Motion Algorithm (IMA) [18], the Forest Optimization Algorithm (FOA), Water Wave Optimization (WWO) [19], the Mine Blast Algorithm (MBA) [20], and the Grenade Explosion Method (GEM) [21]. The swarm-based MH form the last group, which mimics the social and individual behavior of swarms, animals, and so on. The most prominent algorithms in this group include Particle Swarm Optimization (PSO) [22], Ant Colony Optimization (ACO) [23–25], Migrating Birds Optimization (MBO) [26], the Grey Wolf Optimizer (GWO) [27], the Bees Algorithm (BA) [28], Social Spider Optimization (SSO) [29], and the Artificial Bee Colony (ABC) [30].
SCA is a new meta-heuristic algorithm inspired by mathematical concepts, employing the sine and cosine functions to enhance the exploration and exploitation of the search space [31]. The SCA procedure consists of two parts, and the new position of each part can be inside or outside the neighborhood of the other part [31]. To estimate the new position, the sine and cosine functions are used.
In the same context, the VPL belongs to the meta-heuristic category [32]. This algorithm is inspired by the interaction and competition between teams within each season, and it imitates the decision-making procedure of coaches. The VPL algorithm tries to solve global optimization problems by utilizing volleyball metaphors, namely substitution, coaching, and learning. Like other meta-heuristic algorithms, VPL starts by generating random teams as an initial solution for a given problem. Each team consists of two configurations, named formation and substitutes. To schedule matches, VPL uses the single round-robin (SRR) method to specify rivals. Afterwards, to determine the winner of each game, the algorithm uses a power factor that is applied in a formulation to calculate the winning probability of each team. In the VPL algorithm, coaching is presented as a knowledge-sharing strategy to extract information from the game and to train players and substitutes during the match. Similar to any evolutionary algorithm, VPL uses a repositioning strategy and a substitution strategy to change current players' positions and to exchange current players with substitutes, respectively, during the match, based on their roles and the match conditions, to reach supremacy in the match (generating a new population). In the VPL, each team is located in the search space of the problem as a solution. Each solution is then evaluated with respect to the objective function(s) at its current position.
In this research, the advantages of SCA are used to enhance the exploitation and exploration of the search space by the VPL algorithm. Generally speaking, the SCA is used to improve the updating stage of the learning phase in the VPL algorithm so as to provide good diversity, since the SCA has a high ability to improve the performance of other metaheuristic methods through its operators.
The main contributions of this paper can be summarized as follows:

1. An alternative global optimization method is proposed, based on a modified version of the recent MH method called VPL.
2. The operators of SCA are used to improve the performance of VPL, so the proposed method is called VPLSCA.
3. The performance of the proposed VPLSCA is evaluated using twenty-five benchmark functions and three engineering problems.
4. The results of VPLSCA are compared with those of other similar MH algorithms.
The rest of the paper is structured as follows. Section 2 reviews the state of the art of related works on meta-heuristics, with a focus on SCA, its variants, and its applications. Section 3 takes a quick glance at the mechanisms of the VPL algorithm and the SCA. Section 4 is devoted to introducing the mechanism of the proposed algorithm. Section 5 presents the experimental analysis and the applications of the proposed algorithm to engineering test problems. In the last section, we present conclusions and future directions.
2 Related works
In the past years, we have witnessed an increasing need to develop many different meta-heuristic algorithms. Due to the no-free-lunch theorem, it is not feasible to apply all of these algorithms to the same problem. Rather, the most suitable algorithm should be chosen
to solve the related problem. These algorithms are very versatile but are commonly divided into two main fields, named swarm algorithms and evolutionary algorithms. Swarm algorithms are inspired by the collective behavior of animal flocks. Evolutionary algorithms refer to methods that are inspired by Darwin's evolutionary theory and use mutation and crossover operators. The major advantages of swarm algorithms are robustness, fewer required parameters, and good efficiency in the exploitation of the search space. As mentioned above, owing to the power of SCA, the performance of the PSO algorithm was enhanced using SCA to gain more exploration and exploitation of the search space in [33]. In that paper, the proposed algorithm includes two levels, named the bottom and the top. At the bottom level, the exploration rate is increased using SCA, while the top level provides more exploitation, using PSO agents to find the best solutions. This approach provides good diversity and also the best information about each position at each iteration. To overcome premature convergence, a hybrid of PSO with SCA was designed in [34].
The SCA and the Differential Evolution algorithm were combined to avoid trapping in locally optimal solutions and to gain faster convergence than the original versions. The new algorithm, named the hybrid sine cosine differential evolution algorithm, was presented in [35]. The main purpose of developing this algorithm was to design a better framework for optimization problem-solving techniques. An acceptable balance between exploration and exploitation of the search space is guaranteed by the SCA, but unfortunately, like other meta-heuristics, it has a habit of getting stuck in suboptimal areas. To overcome this weakness, Abd Elaziz et al. [36] used the Opposition-Based Learning method to generate better solutions by increasing the performance of the search. To eliminate drawbacks of SCA such as the imbalance of exploitation and trapping in locally optimal areas, a combination of SCA and the Multi-Orthogonal Search Strategy (MOSS) that overcomes these difficulties was presented in [37].
A binary version of SCA was presented in [38], in which a sigmoidal transformation function is used to map the continuous real-valued search space to its binary counterpart. This algorithm was used to solve electricity market problems. The SCA was also used to find the best solution to the re-entry trajectory problem for space shuttle vehicles [39]. It is notable that a multi-objective version of SCA (MOSCA) was introduced in [40]. MOSCA uses elitist non-dominated sorting and the crowding distance approach to obtain non-dominated solutions and provide diversity. Moreover, to test the abilities of SCA, it was used to design an airfoil [31]. Furthermore, the SCA has also been applied in different contexts. In [41], handwritten Arabic text was binarized using SCA. The SCA has been applied to finding the best solution for unit commitment in energy generation [38]. A very impressive application of SCA was made in [42] for the discovery of galaxies by applying image recovery. In the same context, a modified version of SCA was applied to finding optimal solutions for multi-objective problems [40].
The non-dominated sorting method maintains the non-dominated solutions achieved so far by SCA, and the crowding distance component enhances the diversity of the non-dominated solutions. A binary version of SCA benefiting from the rounding method for solving discrete and binary optimization problems was introduced by Hafez et al. [43]. It is worth noting that the feature selection problem was utilized to test this version. In the following, we depict some advances in SCA that have been introduced by researchers recently.

Opposition-Based Learning (OBL) is a means of evaluating the opposite position of each solution to boost the performance of the SCA algorithm in [36, 44]. Furthermore, a variety of operators have been used to enhance the exploration and exploitation behavior, such as Levy flights, chaotic maps, and weighted position updating in SCA in [45–48], respectively. Moreover, in terms of applications, the SCA algorithm has been successfully combined with machine learning techniques to solve a wide variety of problems, such as clustering, classification, regression, and prediction.

From what has been discussed above, we may classify studies on the SCA algorithm into hybrid algorithms and applications, as shown in Table 1.
3 A brief review on volleyball premier
league and sine cosine algorithm
In this section, the general concepts of the VPL and SCA
are discussed.
3.1 Volleyball premier league algorithm
The VPL algorithm mimics the interactive behavior among the teams of a volleyball league [32]. This algorithm has a certain peculiarity in the representation of the solution compared with evolutionary algorithms. The solution includes two different parts, called the active and passive parts. The former illustrates the typical team, which contains six players, i.e., the main formation; the fitness function is evaluated according to this part. The passive part represents the substitute players. The structure of the solution representation is shown in Fig. 1.
Figure 2 shows the different steps of the VPL, and it is important to define certain vocabulary dedicated to this algorithm. First, the term league means population. Second, the term team represents a solution, and finally, a season
Table 1 Review of SCA algorithm

Variants:
- SCA with particle swarm optimization [33, 34]
- SCA with differential evolution [35, 49–51]
- SCA with ant lion optimizer [52]
- SCA with whale optimization algorithm (WOA) [53]
- SCA with grey wolf optimizer (GWO) [54]
- SCA with water wave optimization algorithm [55]
- SCA with multi-orthogonal search strategy [37]
- SCA with crow search algorithm [53]
- SCA with teaching learning based optimization [56]

Applications:
- Re-entry trajectory optimization for a space shuttle [39]
- Breast cancer classification [57]
- Power distribution network reconfiguration [58]
- Temperature-dependent optimal power flow [59]
- Pairwise global sequence alignment [60]
- Tuning controller parameters for AGC of multi-source power systems [61]
- Load frequency control of an autonomous power system [62]
- Coordination of heat pumps, electric vehicles and AGC for efficient LFC in a smart hybrid power system [63]
- Economic and emission dispatch problems [64, 65]
- Optimization of CMOS analog circuits [66]
- Loss reduction in distribution system with unified power quality conditioner [67]
- Capacitive energy storage with optimized controller for frequency regulation in realistic multisource deregulated power system [68]
- Reduction of higher-order continuous systems [69]
- Designing FO cascade controller in automatic generation control of multi-area thermal system incorporating dish-Stirling solar and geothermal power plants [70]
- SSSC damping controller design in power system [71]
- Selective harmonic elimination in five level inverter [72]
- Short-term hydrothermal scheduling [73]
- Optimal selection of conductors in Egyptian radial distribution systems [74]
- Data clustering [75]
- Loading margin stability improvement under a contingency [76]
- Feature selection [77]
- Designing vehicle engine connecting rods [78]
- Designing a single sensor-based MPPT of partially shaded PV system for battery charging [79]
- Handwritten Arabic manuscript image binarization [80]
- Thermal and economical optimization of a shell and tube evaporator [81]
- Forecasting wind speed [82]
- Object tracking [35]
represents an iteration, and week represents the scheduling of the league, which will be explained in the following.

Fig. 1 Solution representation [32] [two rows of positions 1, 2, 3, …, i: formation and substitutes]

3.1.1 Initialization

In this step, NT teams are generated as mentioned above; each solution contains two parts: formation and substitutes. For each part and for each variable j, random numbers are generated within the specified interval using Eqs. (1) and (2).
Fig. 2 The framework of the VPL algorithm [flowchart: set parameters and initialize; generate the league schedule; each week, apply the competition between teams A and B, calculate their power indices, determine the winner and loser, and apply the corresponding strategies; apply the learning phase and update the best team; at the end of each season, apply the transfer process, remove the top k worst teams and add new teams to the league; repeat until the maximum number of seasons is reached, then determine the best solution]
X_j^f = lb_j + Rand() × (ub_j − lb_j)    (1)

X_j^s = lb_j + Rand() × (ub_j − lb_j)    (2)

where lb_j indicates the lower bound of variable j and ub_j represents its upper bound. Rand() denotes a function that generates a uniformly distributed random number between 0 and 1.
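As a concrete illustration, the initialization of Eqs. (1) and (2) can be sketched in Python (a minimal sketch assuming a single scalar bound pair shared by all variables; the function names are ours, not the paper's):

```python
import random

def init_team(dim, lb, ub):
    """Create one VPL team: an active part (formation) and a passive
    part (substitutes), each drawn per Eqs. (1)-(2):
    X_j = lb_j + Rand() * (ub_j - lb_j)."""
    return {
        "formation": [lb + random.random() * (ub - lb) for _ in range(dim)],
        "substitutes": [lb + random.random() * (ub - lb) for _ in range(dim)],
    }

def init_league(n_teams, dim, lb, ub):
    """Generate the NT initial teams of the league."""
    return [init_team(dim, lb, ub) for _ in range(n_teams)]

league = init_league(n_teams=8, dim=5, lb=-10.0, ub=10.0)
```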
To illustrate the formation and substitutes of the teams, we utilize the following matrices, respectively:

F = ⎡ X^f_{1,1}  X^f_{1,2}  ⋯  X^f_{1,j} ⎤
    ⎢ X^f_{2,1}  X^f_{2,2}  ⋯  X^f_{2,j} ⎥
    ⎢    ⋮          ⋮       ⋱     ⋮     ⎥
    ⎣ X^f_{i,1}  X^f_{i,2}  ⋯  X^f_{i,j} ⎦    (3)

S = ⎡ X^s_{1,1}  X^s_{1,2}  ⋯  X^s_{1,j} ⎤
    ⎢ X^s_{2,1}  X^s_{2,2}  ⋯  X^s_{2,j} ⎥
    ⎢    ⋮          ⋮       ⋱     ⋮     ⎥
    ⎣ X^s_{i,1}  X^s_{i,2}  ⋯  X^s_{i,j} ⎦    (4)

3.1.2 Match schedule

In this section, we first explain the process of the SRR, which provides the scheduling of the league. The SRR relies on the polygon method. The whole number of teams participating in a tournament is denoted by N, and N − 1 is the number of games for each team, which means that N(N − 1)/2 matches will be played within a tournament. To better understand the SRR process, we explain the concept based on Fig. 3. Each line determines the opposing teams that will play in the first round. For example, A plays with H, B with G, C with F, and D with E.

Fig. 3 The first round of the SRR method [polygon diagram with pairings A–H, B–G, C–F, D–E]

The polygon is rotated clockwise to assign teams in the league schedule for the next round (Fig. 4), and Table 2 shows the generic scheduling of a league for eight teams.
Fig. 4 Production of the schedule of a league for rounds 2 to 7 [polygon diagrams for rounds 2–7]
Table 2 The generic scheduling of a league for eight teams

First week:   A–H, B–G, C–F, D–E
Second week:  A–G, B–E, C–D, F–H
Third week:   A–F, B–C, E–G, D–H
Fourth week:  A–E, B–H, C–G, D–F
Fifth week:   A–D, B–F, C–E, G–H
Sixth week:   A–C, B–D, E–H, F–G
Seventh week: A–B, C–H, D–G, E–F
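The polygon scheduling described above can be sketched as follows (our own minimal implementation of the circle method, not the authors' code):

```python
def single_round_robin(teams):
    """Circle (polygon) method: fix the first team and rotate the
    others one step per round, as illustrated in Figs. 3 and 4."""
    teams = list(teams)
    n = len(teams)
    assert n % 2 == 0, "this sketch assumes an even number of teams"
    fixed, circle = teams[0], teams[1:]
    schedule = []
    for _ in range(n - 1):
        week = [(fixed, circle[-1])]
        for k in range((n - 2) // 2):        # pair the rest outside-in
            week.append((circle[k], circle[n - 3 - k]))
        schedule.append(week)
        circle = [circle[-1]] + circle[:-1]  # rotate the polygon
    return schedule

weeks = single_round_robin("ABCDEFGH")       # 7 weeks, 4 matches each
```

For eight teams this reproduces Table 2: week 1 pairs A–H, B–G, C–F, and D–E, and every pair of teams meets exactly once over the seven weeks.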
3.1.3 Competition

In this section, we explain mathematically the chance to win and the winning probability in a competition. The power index φ(i) of the formation X^f_i of team i in a week is computed using Eqs. (5) and (6):

φ(i) = f(X^f_i) / Z    (5)

Z = Σ_{i=1}^{n} f(X^f_i)    (6)

where f(X^f_i) represents the fitness value of team i related to its configuration. The sum of the fitness values within a week is denoted by Z. A higher fitness value indicates a stronger team.

Suppose team j has the formation X^f_j and team k has the formation X^f_k. The power index of each team is then calculated as follows:

φ(j) = f(X^f_j) / Z    (7)

φ(k) = f(X^f_k) / Z    (8)

The probability that team j wins the current game is defined by:

p(j, k) = φ(j) / (φ(j) + φ(k))    (9)

According to the laws of probability, we find the following relation:

p(j, k) + p(k, j) = 1    (10)

After the determination of the winning team, strategies are applied to the winner and loser teams. For the loser team, three strategies are considered, namely knowledge sharing, repositioning, and substitution, while the winning team utilizes the leading-role strategy. Table 3 shows the function of racing between teams i and j.

Table 3 Function of racing among team i and team j

Function Competition (i, j)
  Calculate φ(i) and φ(j) using Eqs. (5) and (6)
  Calculate p(i, j) using Eq. (9)
  Generate a random number r distributed uniformly in [0, 1]
  If r ≤ p(i, j)
    team i is the winner and team j is the loser
  Else
    team j is the winner and team i is the loser
  End if
  Apply the winning strategy to the winning team
  Apply the losing strategies to the losing team
End
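The competition step (Eqs. (5)–(10) and Table 3) can be sketched as below; this is a schematic reading that assumes maximization, so that a higher fitness means a stronger team, as stated in the text:

```python
import random

def power_index(fitness_values):
    """Eqs. (5)-(8): phi(i) = f(X_i^f) / Z, with Z the weekly fitness sum."""
    z = sum(fitness_values)
    return [f / z for f in fitness_values]

def play_match(phi_j, phi_k):
    """Eqs. (9)-(10): team j beats team k with probability
    p(j, k) = phi(j) / (phi(j) + phi(k))."""
    p_jk = phi_j / (phi_j + phi_k)
    return "j" if random.random() <= p_jk else "k"

phi = power_index([4.0, 1.0, 3.0, 2.0])   # phi = [0.4, 0.1, 0.3, 0.2]
winner = play_match(phi[0], phi[1])       # "j" with probability 0.8
```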
3.1.4 Knowledge sharing strategy

The knowledge sharing strategy is modeled by the following formulas:

X^f_j(t + 1) = X^f_j(t) + r1 λ^f (ub_j − lb_j)    (11)

X^s_j(t + 1) = X^s_j(t) + r2 λ^s (ub_j − lb_j)    (12)

where λ^f is the formation coefficient and λ^s is the substitutes coefficient. r1 and r2 are random numbers distributed uniformly in [0, 1]. The rate of knowledge sharing is represented by δ_ks, and the number of knowledge sharing positions is given as follows:

N_ks = [J δ_ks]    (13)

where N_ks represents the number of knowledge sharing positions and each team has J positions. Table 4 shows the steps of the knowledge sharing strategy.

Table 4 Steps of the knowledge sharing strategy

For k = 1 to N_ks
  Select randomly a position
  For j = 1 to J
    Update position j of the formation property using Eq. (11)
    Update position j of the substitutes property using Eq. (12)
  End For
End For

3.1.5 Repositioning strategy

This step is led by the coach, who determines the best position for each player. This procedure is called the repositioning strategy. In the volleyball game, the role of each player is represented by its position. The amount of repositioning within a team is denoted by δ_rs:

N_rs = [J δ_rs]    (14)

where N_rs is the number of repositioning operations within a match. We select randomly two positions i and j. A and B represent active and passive players. Assigning the attributes of position i to A and of position j to B is expressed as follows:

A^f = X^f_i    (15)

A^s = X^s_i    (16)

B^f = X^f_j    (17)

B^s = X^s_j    (18)

Afterwards, the inverse of the formulations in Eqs. (15)–(18) is given as follows:

X^f_i = B^f    (19)

X^s_i = B^s    (20)

X^f_j = A^f    (21)

X^s_j = A^s    (22)

Table 5 depicts the steps of the repositioning method.

Table 5 Steps of the repositioning method

For k = 1 to N_rs
  Select randomly two locations i and j
  Define A and B using Eqs. (15)–(18)
  Reverse the two positions i and j using Eqs. (19)–(22)
End For

3.1.6 Substitution strategy

Within a match, the number of substitutions is defined by the following formula:

N_s = [rJ]    (23)

where N_s denotes the substitution number within a team, r is a random number distributed uniformly between 0 and 1, and J indicates the number of positions. Within a competition, we determine the loser team, and we randomly select a position index h. The members of the formation set F and the substitution set S at the selected positions are exchanged randomly. The pseudo-code is shown in Table 6.

Table 6 Pseudo-code for the substitution process

Compute the number of substitutions N_s using Eq. (23)
Define the sets h, F, and S
For k = 1 to N_s
  Exchange the members of F and S at the selected position
End For

3.1.7 Winner strategy

We take the position of the winning team, and we combine it with a random position to generate a new position using the following formulas:

X^f(t + 1) = X^f(t) + r1 ψ^f (X^f(t)* − X^f(t))    (24)

X^s(t + 1) = X^s(t) + r2 ψ^s (X^s(t)* − X^s(t))    (25)

In Eq. (24), ψ^f represents the inertia weight of the formation, while in Eq. (25), ψ^s is the inertia weight of the substitutes. Moreover, r1 and r2 are random numbers distributed uniformly in [0, 1].
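The knowledge sharing update for a losing team (Eqs. (11)–(13)) can be sketched as follows; the coefficient values used here are illustrative placeholders, not the paper's tuned settings:

```python
import random

def knowledge_sharing(team, lb, ub, lam_f=0.5, lam_s=0.5, delta_ks=0.3):
    """Perturb N_ks randomly chosen positions of the loser team's
    formation (Eq. (11)) and substitutes (Eq. (12))."""
    J = len(team["formation"])
    n_ks = int(J * delta_ks)                              # Eq. (13)
    for _ in range(n_ks):
        j = random.randrange(J)
        r1, r2 = random.random(), random.random()
        team["formation"][j] += r1 * lam_f * (ub - lb)    # Eq. (11)
        team["substitutes"][j] += r2 * lam_s * (ub - lb)  # Eq. (12)
    return team

team = knowledge_sharing({"formation": [0.0] * 10,
                          "substitutes": [0.0] * 10}, lb=-10.0, ub=10.0)
```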
3.1.8 Learning phase

The coaches seek to understand the gameplay of the teams on the podium in order to find the best combination of active players (formation) and passive players (substitutes). The main formula describing the learning phase is:

X^g_j(t + 1)_Φ = (X^g_j(t))_Φ − θ |ϑ (X^g_j(t))_Φ − X^g_j(t)|    (26)

where g represents a set that contains substitutes and formation (g = {s, f}), and the index Φ takes a value from 1 to 3, meaning the best team (1), the second team (2), and the third team (3), respectively. X^g_j(t + 1)_Φ represents the value of location j of attribute g with respect to the top solution Φ. X^g_j(t) is the value of position j at the current iteration t. θ and ϑ are coefficient values:

θ = 2b r1 − b    (27)

ϑ = 2 r2    (28)

where r1 and r2 are random numbers distributed uniformly in [0, 1]. b is linearly decreased from β to 0 using the following equation:

b = β − t(β/T)    (29)

The following equations are given to capture the learning phase for the formation and substitutes properties:

X^f_j(t + 1)_1 = (X^f_j(t))_1 − θ |ϑ (X^f_j(t))_1 − X^f_j(t)|    (30)

X^f_j(t + 1)_2 = (X^f_j(t))_2 − θ |ϑ (X^f_j(t))_2 − X^f_j(t)|    (31)

X^f_j(t + 1)_3 = (X^f_j(t))_3 − θ |ϑ (X^f_j(t))_3 − X^f_j(t)|    (32)

X^f_j(t + 1) = (X^f_j(t + 1)_1 + X^f_j(t + 1)_2 + X^f_j(t + 1)_3) / 3    (33)

X^s_j(t + 1)_1 = (X^s_j(t))_1 − θ |ϑ (X^s_j(t))_1 − X^s_j(t)|    (34)

X^s_j(t + 1)_2 = (X^s_j(t))_2 − θ |ϑ (X^s_j(t))_2 − X^s_j(t)|    (35)

X^s_j(t + 1)_3 = (X^s_j(t))_3 − θ |ϑ (X^s_j(t))_3 − X^s_j(t)|    (36)

X^s_j(t + 1) = (X^s_j(t + 1)_1 + X^s_j(t + 1)_2 + X^s_j(t + 1)_3) / 3    (37)

Generally speaking, these equations are used to improve the exploitation process of the proposed algorithm.

3.1.9 Season transfers

We select H teams randomly for the transfer process if r is greater than 0.5, where r is a random number distributed uniformly in [0, 1]. Hence, the following formulation gives the number of teams participating in the transfer season:

N_st = [N δ_st]    (38)

where δ_st represents the percentage of teams that participate in the transfer process. The process of the season transfer is shown in Table 7.

Table 7 Steps of the transfer season method

For k = 1 to N_st
  …
End For
For k = 1 to N_st
  For j = 1 to J
    r = rand()
    If r > 0.5
      w = select randomly from the currently available teams
    End If
  End For
End For

3.1.10 Promotion and relegation process

In the VPL algorithm, we consider only one league; therefore, we remove the N_pr worst teams and replace them with new teams that are generated randomly. The number of teams transferred to other leagues is denoted by N_pr, and the whole number of teams is indicated by N.
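For a single coordinate, the learning phase of Sect. 3.1.8 (Eqs. (26)–(33)) can be sketched as below. This is our reading of the update; the factor 2 in θ and ϑ follows the usual GWO-style encircling coefficients, since the extraction of Eqs. (27)–(28) leaves the constant ambiguous:

```python
import random

def learn_position(x, x_best, b):
    """Eq. (26) for one coordinate: move around the Phi-th best team."""
    theta = 2.0 * b * random.random() - b   # Eq. (27)
    vartheta = 2.0 * random.random()        # Eq. (28)
    return x_best - theta * abs(vartheta * x_best - x)

def learning_phase(x, top3, t, T, beta=2.0):
    """Eqs. (29)-(33): average the moves toward the three best teams."""
    b = beta - t * (beta / T)               # Eq. (29), b decays to 0
    return sum(learn_position(x, xb, b) for xb in top3) / 3.0

new_x = learning_phase(x=0.5, top3=[1.0, 0.9, 1.1], t=10, T=100)
```

At t = T, b = 0, so the update collapses onto the average of the three best positions, which is what gives the phase its exploitative character.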
N_pr = [N δ_pr]    (39)

where δ_pr represents the percentage of teams that are relegated and promoted. Table 8 depicts the pseudo-code of the promotion and relegation process.
3.2 Sine cosine algorithm

The SCA algorithm is a new method that belongs to the class of population-based optimization techniques. This algorithm was introduced in [31]. The particularity of this algorithm lies in the movement of the search agents, which uses two mathematical operators based on the sine and cosine functions, as in Eqs. (40) and (41), respectively:

X_i^{t+1} = X_i^t + r1 × sin(r2) × |r3 Bestpos_i^t − X_i^t|,  if r4 < 0.5    (40)

X_i^{t+1} = X_i^t + r1 × cos(r2) × |r3 Bestpos_i^t − X_i^t|,  if r4 ≥ 0.5    (41)

where Bestpos_i^t is the target solution in the ith dimension at the tth iteration, X_i^t is the current solution in the ith dimension at the tth iteration, and | · | denotes the absolute value. r1, r2, r3, and r4 are random numbers.

The parameter r1 controls the balance between exploration and exploitation. This parameter is modified during the iterations using the following formula:

r1 = a − t (a/T)    (42)

where t is the current iteration, T is the maximum number of iterations, and a is a constant equal to 2.
Table 8 Pseudo-code for the promotion and relegation method

Remove the N_pr worst teams from the league
Define N_pr empty teams with formation and substitutes
For k = 1 to N_pr
  For j = 1 to J
    s = select randomly from the currently available teams
  End For
End For
Add the N_pr new teams to the league
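The promotion and relegation step of Table 8 amounts to replacing the worst teams with freshly generated random ones; a minimal sketch (maximization assumed, team structure as in Sect. 3.1, function names ours):

```python
import random

def promote_and_relegate(league, fitness, dim, lb, ub, delta_pr=0.25):
    """Eq. (39): replace the N_pr = [N * delta_pr] worst teams with
    randomly generated new teams."""
    n_pr = int(len(league) * delta_pr)
    league.sort(key=fitness, reverse=True)   # best teams first
    for k in range(n_pr):                    # overwrite the tail (worst)
        league[-(k + 1)] = {
            "formation": [lb + random.random() * (ub - lb) for _ in range(dim)],
            "substitutes": [lb + random.random() * (ub - lb) for _ in range(dim)],
        }
    return league
```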
Table 9 Pseudo-code of SCA

r2 determines the direction of the movement of the next solution, towards or away from the target. r3 indicates the weight of the best solution, which stochastically emphasizes (r3 > 1) or de-emphasizes (r3 < 1) the effect of the destination in defining the distance [36]. The parameter r4 allows switching between the sine and cosine operators (Eqs. (40) and (41)). The general framework of the SCA is depicted in Table 9.
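One SCA update step (Eqs. (40)–(42)) can be sketched as follows; the sampling ranges for r2, r3, and r4 (r2 ∈ [0, 2π], r3 ∈ [0, 2], r4 ∈ [0, 1]) follow the common implementation of the original SCA, which the text above leaves unspecified:

```python
import math
import random

def sca_step(position, best, t, T, a=2.0):
    """Move one solution per Eqs. (40)-(41); r1 decays per Eq. (42)."""
    r1 = a - t * (a / T)                       # Eq. (42)
    new_pos = []
    for x, bx in zip(position, best):
        r2 = 2.0 * math.pi * random.random()   # direction of movement
        r3 = 2.0 * random.random()             # weight of the destination
        r4 = random.random()                   # sine/cosine switch
        trig = math.sin(r2) if r4 < 0.5 else math.cos(r2)
        new_pos.append(x + r1 * trig * abs(r3 * bx - x))
    return new_pos

x_next = sca_step([0.5, -0.2], best=[1.0, 0.3], t=10, T=100)
```

As r1 shrinks toward 0 over the iterations, the step size contracts, shifting the search from exploration to exploitation.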
4 Proposed algorithm

This paper aims to propose an improvement of the VPL algorithm that employs strong exploitation mechanisms to enhance its learning phase. This enhancement is performed using the SCA, so the proposed algorithm is called VPLSCA.

In general, the proposed VPLSCA algorithm begins by constructing a population of teams that represent the solutions of the given problem; this process is performed using Eqs. (1) and (2). The next step is to generate the league schedule, apply the competition between the teams, and find the winner and loser teams using the fitness function, as in Eqs. (5)–(10). Thereafter, the knowledge sharing strategy is applied to the loser teams, followed by the repositioning strategy and then the substitution strategy, while the winning team applies the leading-role operators to update its behavior. The next step is to improve the behavior of the teams using the learning phase; however, this phase differs from the original VPL algorithm in that the operators of SCA and the traditional learning strategy are used together to train the teams, through the probability computed from the fitness function as follows:

Prob_i = f_i / Σ_{i=1}^{n} f_i    (43)

Based on the value of Prob_i, the current team updates its behavior using either the SCA or the traditional process in VPL. If the value of Prob_i ≥ rpr (a parameter determined based on our experiments, with value 0.7), then the traditional learning phase (Sect. 3.1.8) is used; otherwise, the SCA operators are used. Then apply
1. Initialize N solutions.
2. Repeat
3.   Evaluate each solution and determine the best solution.
4.   Update the random parameters r1, r2, r3, and r4.
5.   Update the position of each solution using Eqs. (40) and (41).
6. Until t ≥ T
7. Return the best solution obtained as the global optimum solution.
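The switching rule of the hybrid learning phase can be sketched as follows. This is an illustrative fragment rather than the authors' code: `vpl_learning` and `sca_learning` stand in for the updates of Eqs. (30)–(37) and Eqs. (40)–(41), and the fitness values are assumed non-negative.

```python
def learning_phase(teams, fitness, vpl_learning, sca_learning, rpr=0.7):
    """Update each team with the traditional VPL learning step when its
    normalized fitness Prob_i (Eq. 43) is at least the threshold rpr = 0.7,
    and with the SCA operators otherwise."""
    total = sum(fitness)
    updated = []
    for team, f in zip(teams, fitness):
        prob = f / total  # Eq. (43)
        if prob >= rpr:
            updated.append(vpl_learning(team))
        else:
            updated.append(sca_learning(team))
    return updated
```

In this way a few highly fit teams keep the original VPL learning behavior, while the remaining teams are driven by the SCA exploitation operators.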
Author's personal copy
Engineering with Computers
the promotion and relegation process, followed by the season transfer process. The previous steps are repeated until the termination conditions are met. For more clarification, Table 10 shows the general framework of the proposed approach.
4.1 The complexity of VPLSCA

Computational complexity is a crucial factor in measuring an algorithm's performance, and it can be expressed based on the structure of the proposed algorithm. Theoretically speaking, the computational complexity of the proposed algorithm depends on several factors, such as the population size, the dimension, the maximum number of iterations, and the sorting mechanism applied in every iteration. According to [83, 84], the quicksort algorithm, with a complexity of O(n log n) in the best case and O(n²) in the worst case, is used in both algorithms. Since the proposed method comprises two different algorithms, VPL and SCA, we have:
O(VPLSCA) = O(VPL)·O(f) + O(SCA)·O(f)  (44)

where O(f) is the complexity of the objective function. The complexity of VPL is defined as follows:

O(VPL) = (O(T·O(qs)) + O(pu))·O(f)  (45)

where T denotes the number of iterations, qs is the quicksort algorithm, O(pu) denotes the position-updating process, and O(f) states the complexity of the objective function. Let n and d be the population size (number of teams) and the dimension of the search space; then:

O(VPL) = (O(T(2n)²) + O(nd))·O(f) = (O(T(4n²)) + O(2nd))·O(f)  (46)

It is worth mentioning here that VPL uses a specific position representation, including a passive part and an active part, which changes n to 2n. Another significant part of the complexity of the proposed algorithm is related to SCA, which is defined as:

O(SCA) = (O(T(n²)) + O(nd))·O(f)  (47)

Finally, we have the following formula for the computational complexity of the proposed algorithm:

O(VPLSCA) = (O(T(5n²)) + O(3nd))·O(f)  (48)

5 Experimental analysis

To show the validity and capability of the proposed algorithm, a set of experiments is used to explore the quality of the proposed approach. First, the performance of the proposed method is compared against other state-of-the-art methods. In the second experimental series, the performance of the VPLSCA method is compared with the traditional VPL and SCA on different optimization problems under different conditions, such as varying population size and dimension. Finally, the proposed VPLSCA is applied to different engineering problems in the third experimental series.

5.1 The definition of the tested functions

In this section, the definition of the test functions is given. These functions fall into three categories: (1) unimodal, (2) multimodal, and (3) fixed-dimension. Their description is illustrated in Table 11. The unimodal functions have only a single extremum (maximum or minimum) in the specified domain; they are applied to evaluate the exploitation quality of an optimization method (examples are F1–F10). The multimodal functions have many local minima and are applied to evaluate the ability of a method to avoid stagnation at these local optima (examples are F11–F25). In Table 11, Dim and fmin represent the dimension of the test function and its corresponding optimal fitness value, respectively.

5.2 Parameter setting

In this study, the results of the proposed VPLSCA are compared with other approaches, including cuckoo search (CS) [85], social-spider optimization (SSO) [86], ant lion optimizer (ALO) [87], grey wolf optimizer (GWO) [27], salp swarm algorithm (SSA) [88], whale optimization algorithm (WOA) [89], moth flame optimization (MFO) [83], artificial bee colony (ABC) [30], SCA [31], and VPL [32]. The parameter values of each method are set as given in its original reference. In addition, for a fair comparison between these methods and the proposed method, the common parameters are given the same values for all methods: the population size is 30, the maximum number of iterations is set to 150, and, to provide a suitable statistical analysis, each method was run 30 times.

All the methods are implemented using Matlab R2017b installed on Windows 10 64-bit, on a system with a 3.40 GHz processor and 4 GB of RAM.
5.2.1 Measures of performance

To evaluate the ability of each method as a global optimization method, a set of performance metrics is used: the average and standard deviation of the fitness function and the success rate, defined as follows [45]:

• Mean of fitness values:

Mean = (1/Nr) Σ_{i=1}^{Nr} F_i  (49)

• Standard deviation (STD):

STD = sqrt( (1/(Nr − 1)) Σ_{i=1}^{Nr} (F_i − Mean)² )  (50)

• Success rate (SR):

SR = NVTR / Nr  (51)

where Nr represents the total number of runs and NVTR is the number of runs in which the algorithm reached the value-to-reach (VTR).
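The three measures in Eqs. (49)–(51) can be computed directly. The sketch below assumes a list of final fitness values, one per independent run, and a value-to-reach threshold (the paper's 1E−5 is used as the default):

```python
import math

def performance_metrics(final_fitness, vtr=1e-5):
    """Mean (Eq. 49), standard deviation (Eq. 50) and success rate (Eq. 51)
    over Nr independent runs of an optimizer on one test function."""
    nr = len(final_fitness)
    mean = sum(final_fitness) / nr                                           # Eq. (49)
    std = math.sqrt(sum((f - mean) ** 2 for f in final_fitness) / (nr - 1))  # Eq. (50)
    nvtr = sum(1 for f in final_fitness if f <= vtr)  # runs that reached the VTR
    return mean, std, nvtr / nr                                              # Eq. (51)
```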
Table 10 Steps of the proposed approach
5.3 Experimental series 1: comparison with state-of-the-art approaches

The aim of this experimental series is to assess the performance of the proposed VPLSCA against state-of-the-art algorithms, namely SSO, SSA, CS, ALO, GWO, WOA, MFO, ABC, VPL, and SCA. Since all these algorithms have been proposed recently, and most of them are considered among the most prominent algorithms in the evolutionary computation context, it is fair to compare the proposed algorithm with them. It is worth mentioning that all the algorithms used in this study are continuous metaheuristic optimization methods. For convenience, ten search agents are applied to find the best solutions over 150 iterations for all the mentioned algorithms. The comparison results are given in Table 12, where one can observe that the results of the proposed VPLSCA method are, in general, better than those of the other methods. The proposed method achieves the best performance in eleven functions (i.e., F2, F4, F6, F7, F15, F16, F17, F18, F20, F24, F25), while VPL has a better average fitness in two functions, namely F22 and F23. As well, WOA reaches the best value at functions F12, F13, and F19. Meanwhile, for function F21, the SSO has a better value overall than the
Input: t (generation) = 0, parameters, cost function
Output: the best solution
Initialization stage
While t < T
  Generate a league schedule
  For i = 1 to (number of teams − 1) × 2
    Best team = select the best team according to the cost function
    Apply the competition procedure between teams A and B
    Determine the winner and loser teams
    Apply the different strategies for the winner and loser teams
    For j = 1 to number of teams
      Compute Prob_j using Eq. (43)
      If Prob_j ≥ rpr
        Update the position of team(j) by Eqs. (30)–(37)
      Else
        Generate a random number r4 from [0, 1]
        If r4 < rand (rand represents a random number in [0, 1])
          Update the position of team(j) by Eq. (40)
        Else
          Update the position of team(j) by Eq. (41)
        End if
      End if
    End for
  End for
  Apply the promotion and relegation process
  Apply the season transfer process
  t = t + 1
End While
Table 11 The definition of the functions (Dim = 30 for all functions; fmin = 0 unless stated otherwise)

F1(x) = Σ_{i=1}^{n} x_i²; range [−100, 100]
F2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|; range [−10, 10]
F3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²; range [−100, 100]
F4(x) = max_i {|x_i|, 1 ≤ i ≤ n}; range [−100, 100]
F5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]; range [−30, 30]
F6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²; range [−100, 100]
F7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand[0, 1); range [−1.28, 1.28]
F8(x) = Σ_{i=1}^{n} i·x_i²; range [−10, 10]
F9(x) = Σ_{i=1}^{n} i·x_i⁴; range [−1.28, 1.28]
F10(x) = Σ_{i=1}^{n} |x_i|^{i+1}; range [−1, 1]
F11(x) = −Σ_{i=1}^{n} x_i sin(√|x_i|); range [−500, 500]; fmin = −418.9829 × n
F12(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10]; range [−5.12, 5.12]
F13(x) = −20 exp(−0.2 √((1/n) Σ x_i²)) − exp((1/n) Σ cos(2πx_i)) + 20 + e; range [−32, 32]
F14(x) = (1/4000) Σ x_i² − Π cos(x_i/√i) + 1; range [−600, 600]
F15(x) = (π/n){10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²[1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + Σ u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a; range [−50, 50]
F16(x) = 0.1{sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)²[1 + sin²(3πx_{i+1})] + (x_n − 1)²[1 + sin²(2πx_n)]} + Σ u(x_i, 5, 100, 4); range [−50, 50]
F17(x) = Σ (x_i − 1)²[1 + sin²(3πx_i + 1)] + sin²(3πx_1) + |x_n − 1|[1 + sin²(3πx_n)]; range [−10, 10]
F18(x) = Σ |x_i sin(x_i) + 0.1 x_i|; range [−10, 10]
F19(x) = 0.1n − (0.1 Σ cos(5πx_i) − Σ x_i²); range [−1, 1]
F20(x) = Σ x_i² + (Σ 0.5 i x_i)² + (Σ 0.5 i x_i)⁴; range [−5, 10]
F21(x) = Σ [0.5 + (sin²(√(100 x_{i−1}² + x_i²)) − 0.5) / (1 + 0.001(x_{i−1}² − 2x_{i−1}x_i + x_i²))²]; range [−100, 100]
F22(x) = 0.1 sin²(3πx_1) + Σ (x_i − 1)²[1 + sin²(3πx_{i+1})] + (x_n − 1)²[1 + sin²(2πx_n)]; range [−5, 5]
F23(x) = Σ (10⁶)^{(i−1)/(n−1)} x_i²; range [−100, 100]
F24(x) = (−1)^{n+1} Π cos(x_i) · exp(−Σ (x_i − π)²); range [−100, 100]
F25(x) = 0.5 + (sin²(√(Σ x_i²)) − 0.5) / (1 + 0.001 Σ x_i²)²; range [−100, 100]
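As an illustration, a few of the benchmarks in Table 11 can be written directly from their definitions. This is a sketch (the variable x is a plain Python list), not code from the paper:

```python
import math

def f1_sphere(x):
    """F1: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def f12_rastrigin(x):
    """F12: sum of x_i^2 - 10 cos(2*pi*x_i) + 10, global minimum 0."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f13_ackley(x):
    """F13: Ackley's function, global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```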
other algorithms. In addition, it can be noticed from this table that the proposed VPLSCA and VPL have the same average at seven functions, namely F1, F3, F5, F8–F10, and F14. However, VPLSCA and VPL cannot reach the optimal value at function F11, where WOA and ALO have the better value. As shown in Table 12, in which boldface numbers specify the best average fitness found for each test function, VPLSCA obtains better results in comparison with the other methods.

Moreover, to investigate the stability of these algorithms, the standard deviation of the fitness values is computed over all runs (i.e., 25 in this study), as illustrated in Table 13. It can be concluded from this table that MFO is the method that attains the first rank overall, followed by SSO, while the proposed VPLSCA and VPL achieve the third and fourth ranks, respectively. However, considering the averages together with the results in Table 13, the superiority of MFO and SSO in stability is of limited value, since neither reaches the best solution at any function, except that SSO achieves the best value only at F21.
The results of the success rate (SR) for each algorithm are given in Table 14. It can be concluded from this table that the SSO and SCA algorithms cannot reach the specified value (1E−5) on any of the tested functions. Meanwhile, the proposed VPLSCA takes the first rank in SR, followed by VPL and WOA in the second and third ranks, respectively. In addition, GWO, ALO, MFO, and CS occupy the following ranks, in that order.
The convergence curve of each algorithm on each function is given in Figs. 5, 6, 7 and 8, where we can observe that the WOA algorithm converges faster than the other algorithms at functions F12, F13, F19, and F21. However, the convergence of the traditional VPL and the proposed VPLSCA to the best solution is faster than that of the other methods on the rest of the test functions. Furthermore, by analyzing the convergence of VPL and VPLSCA at functions such as F1 and F2, we can see that at F1 the proposed VPLSCA converges to the best solution after around 80 iterations, whereas VPL needs all the iterations to reach the best value. At function F2, the convergence of VPL is better than that of VPLSCA during the first 95 iterations; after the 95th iteration, however, the proposed method reaches the optimal solution of F2 because the behaviors of SCA and VPL are combined. This behavior of VPLSCA is common across most functions.
In addition, Wilcoxon's rank-sum test (WRST) is applied to provide a further statistical evaluation of the performance of the VPLSCA method. This non-parametric test provides a statistical value indicating whether or not there is a significant difference between VPLSCA and the other approaches. The test involves two hypotheses: the null hypothesis, meaning there is no significant difference between the proposed method and the others, and the alternative hypothesis, meaning there is a significant difference between VPLSCA and the others.
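A minimal rank-sum computation (normal approximation, no tie correction) illustrates how such a p value is obtained; in practice a statistics library routine would be used. This sketch is for illustration only:

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation
    (adequate for moderate sample sizes; ties are not corrected)."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(a + b))
    # rank sum of sample `a` (ranks start at 1)
    r1 = sum(rank + 1 for rank, (v, g) in enumerate(pooled) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    # two-sided p value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Two clearly separated samples yield a small p value (significant difference), while interleaved samples yield a large one (no significant difference).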
The results of the WRST are given in Table 15, from which it can be noticed that there is a significant difference between VPLSCA and the other methods on all tested functions. At some functions this significant difference is positive, indicating that VPLSCA is better, while at others it is negative, indicating that VPLSCA is worse, for example, at F11, F12, F13, F19, and F22. At those functions, VPLSCA fails to find the best solution and provides worse results than at the other functions (these worse results are represented by the negative sign).

Moreover, by analyzing the WRST comparison between the proposed VPLSCA and the traditional VPL, it can be concluded that there is a significant difference between them at functions F2, F4, F6, F7, F13, F15, F16, F17, F18, F22, F24, and F25.
5.4 Experimental series 2: comparison with traditional SCA and VPL

In this experimental series, the proposed method is compared with SCA and VPL at varying dimensions (i.e., 60, 100, and 1000) using fifteen optimization problems. The comparison results are given in Table 16, from which it can be observed that the VPLSCA approach is better
Table 12 The average of the fitness function for each algorithm

| Fn | CS | SSO | ALO | GWO | SSA | WOA | MFO | ABC | SCA | VPL | VPLSCA |
| F1 | 2.27E+03 | 3.44E+00 | 2.75E+03 | 6.57E−06 | 3.07E+02 | 1.67E−18 | 2.51E+03 | 1.41E+01 | 2.13E+03 | 0.00E+00 | 0.00E+00 |
| F2 | 4.35E+01 | 3.42E+00 | 7.93E+01 | 4.38E−04 | 1.51E+01 | 5.84E−15 | 5.57E+01 | 4.34E−01 | 3.21E+00 | 7.10E−292 | 0.00E+00 |
| F3 | 1.45E+04 | 1.33E+03 | 1.41E+04 | 4.00E+01 | 7.16E+03 | 8.69E+04 | 4.05E+04 | 2.63E+04 | 3.01E+04 | 0.00E+00 | 0.00E+00 |
| F4 | 3.23E+01 | 1.23E+01 | 3.00E+01 | 1.88E−01 | 1.82E+01 | 3.35E+01 | 7.12E+01 | 6.68E+01 | 6.75E+01 | 3.57E−57 | 1.18E−112 |
| F5 | 4.35E+05 | 2.26E+02 | 6.42E+05 | 2.88E+01 | 2.38E+04 | 2.88E+01 | 2.01E+06 | 2.05E+02 | 9.44E+06 | 0.00E+00 | 0.00E+00 |
| F6 | 2.33E+03 | 5.62E+00 | 3.72E+03 | 2.18E+00 | 2.90E+02 | 1.58E+00 | 2.99E+03 | 1.50E−01 | 1.77E+03 | 1.51E−07 | 0.00E+00 |
| F7 | 9.57E−02 | 1.20E−01 | 7.22E−02 | 8.75E−03 | 1.34E−02 | 1.82E−02 | 1.57E−01 | 2.09E−01 | 5.43E−02 | 3.83E−03 | 0.00E+00 |
| F8 | 2.46E+02 | 2.45E+01 | 4.03E+02 | 1.85E−06 | 6.74E+01 | 1.58E−22 | 8.77E+02 | 1.45E−02 | 1.04E+02 | 0.00E+00 | 0.00E+00 |
| F9 | 2.77E−01 | 4.95E−01 | 8.45E−02 | 3.14E−15 | 1.66E−02 | 6.31E−35 | 6.54E−01 | 3.53E−06 | 2.93E+00 | 0.00E+00 | 0.00E+00 |
| F10 | 8.90E−04 | 3.15E−03 | 1.48E−03 | 5.18E−23 | 3.68E−03 | 2.52E−24 | 3.68E−04 | 1.51E−03 | 6.82E−02 | 0.00E+00 | 0.00E+00 |
| F11 | −1.16E+03 | −6.26E+02 | −1.63E+03 | −8.01E+02 | −1.08E+03 | −1.63E+03 | −1.41E+03 | −1.23E+03 | −7.21E+02 | 6.55E+04 | −7.28E+02 |
| F12 | 1.89E+02 | 9.82E+01 | 9.00E+01 | 1.79E+01 | 5.92E+01 | 3.41E−14 | 2.24E+02 | 5.08E+01 | 9.50E+01 | −1.62E+02 | −1.60E+02 |
| F13 | 1.49E+01 | 3.02E+00 | 1.23E+01 | 6.26E−04 | 6.83E+00 | 4.25E−11 | 1.89E+01 | 9.10E+00 | 1.75E+01 | −3.30E+11 | −1.38E+11 |
| F14 | 1.86E+01 | 1.10E+00 | 1.96E+01 | 2.15E−02 | 5.39E+00 | 1.01E−01 | 3.40E+01 | 4.40E−01 | 1.34E+01 | 0.00E+00 | 0.00E+00 |
| F15 | 1.99E+03 | 2.71E+00 | 5.40E+01 | 8.20E−02 | 2.40E+01 | 8.17E−02 | 6.74E+06 | 9.26E−04 | 1.53E+07 | 7.95E−09 | 0.00E+00 |
| F16 | 3.37E+05 | 4.12E+00 | 3.42E+05 | 1.68E+00 | 8.62E+01 | 1.16E+00 | 9.57E+05 | 1.33E−02 | 4.99E+07 | 1.38E−07 | 0.00E+00 |
| F17 | 6.99E+01 | 5.28E+00 | 1.34E+02 | 5.56E+00 | 4.89E+01 | 7.91E+00 | 1.27E+02 | 3.52E−01 | 6.29E+01 | 2.03E−05 | 0.00E+00 |
| F18 | 2.04E+01 | 2.41E+00 | 9.69E+00 | 9.27E−03 | 6.10E+00 | 1.01E−15 | 1.31E+01 | 2.95E+00 | 6.25E+00 | 2.52E−294 | 0.00E+00 |
| F19 | 2.50E+00 | 2.08E+00 | 2.21E+00 | 2.51E−08 | 1.19E+00 | 8.88E−17 | 2.63E+00 | 6.48E−01 | 7.56E−01 | −1.35E+00 | −1.29E+00 |
| F20 | 3.39E+02 | 8.46E+01 | 4.09E+02 | 4.75E+00 | 2.93E+02 | 5.16E+02 | 4.72E+02 | 3.11E+02 | 1.21E+02 | 1.05E−33 | 0.00E+00 |
| F21 | −4.71E+00 | 1.59E+00 | −2.70E+01 | 3.94E+00 | −1.05E+00 | −2.87E+01 | −1.04E+01 | −1.06E+01 | 4.83E+00 | −2.61E+01 | −2.87E+01 |
| F22 | 2.22E+01 | 3.11E+00 | 3.14E+01 | 4.10E+00 | 1.82E+01 | 2.40E+00 | 3.19E+01 | 1.09E−01 | 3.12E+01 | 1.52E−06 | −2.87E+01 |
| F23 | 8.54E+06 | 6.64E+05 | 3.70E+07 | 2.37E−02 | 1.30E+07 | 4.35E−15 | 4.19E+07 | 1.24E+04 | 6.31E+05 | 0.00E+00 | 2.09E−270 |
| F24 | 7.63E+00 | 2.30E+00 | 8.90E+00 | 4.00E−01 | 5.48E+00 | 2.00E−01 | 8.53E+00 | 1.12E+01 | 5.56E+00 | 2.71E−02 | 4.79E−153 |
| F25 | 4.99E−01 | 3.73E−01 | 4.99E−01 | 7.82E−02 | 4.87E−01 | 4.81E−02 | 4.99E−01 | 5.00E−01 | 4.81E−01 | 7.17E−03 | 0.00E+00 |

Bold values indicate the best value
Table 13 The standard deviation of each algorithm

| Fn | CS | SSO | ALO | GWO | SSA | WOA | MFO | ABC | SCA | VPL | VPLSCA |
| F1 | 470.5189 | 0 | 1335.196 | 6.43E−06 | 174.8933 | 3.72E−18 | 0 | 31.20456 | 2225.367 | 0 | 0 |
| F2 | 7.321595 | 0 | 44.83635 | 0.000219 | 4.965003 | 1.05E−14 | 0 | 0.090987 | 2.989914 | 0 | 0 |
| F3 | 2193.915 | 0 | 3816.533 | 27.5037 | 4273.52 | 23926.06 | 0 | 8773.53 | 9745.905 | 0 | 0 |
| F4 | 2.621772 | 0 | 4.992455 | 0.110942 | 4.488419 | 24.03471 | 0 | 5.823474 | 7.441104 | 7.24E−57 | 2.6E−112 |
| F5 | 187,162.1 | 3.18E−14 | 492,583.1 | 0.111073 | 11,279.46 | 0.04695 | 0 | 119.866 | 14,148,718 | 0 | 0 |
| F6 | 662.1452 | 0 | 2259.513 | 0.651926 | 67.09181 | 0.514979 | 0 | 0.07342 | 1685.431 | 7.68E−08 | 0 |
| F7 | 0.029892 | 0 | 0.015361 | 0.004321 | 0.006918 | 0.013054 | 0 | 0.048019 | 0.017026 | 0.002389 | 0 |
| F8 | 55.59725 | 0 | 128.7383 | 1.42E−06 | 46.69334 | 1.89E−22 | 0 | 0.005351 | 70.49272 | 0 | 0 |
| F9 | 0.151267 | 6.21E−17 | 0.050645 | 3.42E−15 | 0.014584 | 8.8E−35 | 0 | 6.38E−06 | 1.689006 | 0 | 0 |
| F10 | 0.000751 | 0 | 0.001133 | 1.03E−22 | 0.001537 | 5.5E−24 | 0 | 0.000801 | 0.119225 | 0 | 0 |
| F11 | 78.37489 | 0 | 0 | 127.09 | 51.80483 | 0 | 0 | 30.71497 | 36.51013 | 0.005864 | 35.410 |
| F12 | 14.70913 | 0 | 24.46471 | 7.066993 | 12.80315 | 5.08E−14 | 0 | 12.29266 | 33.81461 | 24.37028 | 26.87455 |
| F13 | 1.743181 | 0 | 2.264061 | 0.000274 | 1.985675 | 2.99E−11 | 0 | 0.93617 | 6.017207 | 1.77E+11 | 5.41E+10 |
| F14 | 3.111753 | 0 | 13.74987 | 0.029628 | 2.021409 | 0.22559 | 0 | 0.260479 | 12.28756 | 0 | 0 |
| F15 | 3667.794 | 0 | 27.61479 | 0.05027 | 5.911447 | 0.018293 | 0 | 0.001066 | 20,818,361 | 2.98E−09 | 0 |
| F16 | 197,646 | 0 | 721,653 | 0.386406 | 41.99036 | 0.227589 | 0 | 0.017531 | 55,721,913 | 4.73E−08 | 0 |
| F17 | 14.92508 | 0 | 31.43881 | 1.805604 | 34.95171 | 2.656536 | 0 | 0.365191 | 37.95476 | 1.01E−05 | 0 |
| F18 | 1.613329 | 0 | 3.340244 | 0.002601 | 1.060625 | 1.05E−15 | 0 | 0.531883 | 4.205247 | 0 | 0 |
| F19 | 0.203527 | 0 | 0.357598 | 2.05E−08 | 0.197541 | 1.99E−16 | 0 | 0.317798 | 0.620385 | 0.204641 | 0.249273 |
| F20 | 51.30038 | 0 | 167.3644 | 2.746084 | 74.31802 | 136.7639 | 0 | 40.8132 | 55.65405 | 2.35E−33 | 0 |
| F21 | 5.841165 | 0 | 3.265889 | 0.493572 | 2.991278 | 0.422347 | 0 | 1.939147 | 0.884634 | 3.882627 | 0.289588 |
| F22 | 5.200115 | 0 | 10.62915 | 2.581385 | 6.258481 | 1.30332 | 0 | 0.108599 | 4.301878 | 6.63E−07 | 0.289588 |
| F23 | 3,076,595 | 0 | 21,298,086 | 0.02092 | 6,786,806 | 9.17E−15 | 0 | 8334.345 | 469,185.2 | 0 | 0 |
| F24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8.29E−06 | 0.000172 |
| F25 | 0.000511 | 0 | 0.000997 | 1.37E−08 | 0.004627 | 0.029667 | 0 | 0.000113 | 0.016262 | 0.00372 | 0 |
than the other two methods on most of the test functions. For example, at dimension 60, VPL is better than the SCA algorithm, while the performance of VPL and VPLSCA is nearly the same in functions F1–F3 and F14. However, at functions F4–F7, F9, and F15–F16, the proposed method has a smaller average and standard deviation than the traditional VPL. Moreover, the three algorithms show the same characteristics at the other dimensions (100 and 1000), although VPLSCA provides better results than VPL at function F2 at dimension 1000. In addition, from this table it can be observed that the quality of the proposed method does not change with the dimension in all functions used in this experiment, except for two functions, namely F4 and F16.
Figure 9 depicts the diversity of solutions for each of the three methods (i.e., VPL, SCA, and VPLSCA). From this figure, it can be noticed that the proposed VPLSCA has a higher diversity value than the other two methods. Further analysis shows that SCA improves the diversity of the traditional VPL, as can be observed from the diversity curve of VPLSCA. In addition, by comparing the diversity of SCA and the proposed VPLSCA, it can be seen that the diversity of SCA decreases as the number of iterations increases, whereas the solutions of the proposed VPLSCA maintain their diversity during the iterations. The diversity of SCA is better than that of VPL and VPLSCA at dimension 1000, but by analyzing the behavior of SCA we observed that its diversity keeps decreasing; this means that if the number of iterations increases, the diversity of SCA becomes very small, which will affect the quality of the final solution.
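Diversity curves such as those in Fig. 9 are typically produced by measuring, at every iteration, the average distance of the population from its centroid. The paper does not restate its formula, so the common definition below is an assumption used for illustration:

```python
import math

def population_diversity(pop):
    """Mean Euclidean distance of the solutions to the population centroid."""
    n, d = len(pop), len(pop[0])
    centroid = [sum(sol[j] for sol in pop) / n for j in range(d)]
    return sum(
        math.sqrt(sum((sol[j] - centroid[j]) ** 2 for j in range(d)))
        for sol in pop
    ) / n
```

A population collapsed onto a single point has diversity 0, which is the behavior described above for SCA at large iteration counts.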
5.5 Experimental series 3: engineering applications

In this experimental series, the quality of the results of the VPLSCA algorithm is evaluated on different real engineering problems, namely tension/compression spring design, welded beam design, and pressure vessel design. To handle the various inequality constraints, the simplest approach, called the death penalty function, is used: the objective function is assigned a large constant value if any constraint is violated.
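The death-penalty handling described above amounts to the following wrapper. This is a sketch; the penalty constant is an assumption, and any sufficiently large value works:

```python
def death_penalty(objective, constraints, penalty=1e20):
    """Return a penalized objective: if any inequality constraint g(x) <= 0
    is violated, the candidate receives a large constant value instead of
    its true cost, so it loses every comparison with feasible solutions."""
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return penalty
        return objective(x)
    return penalized
```

For example, wrapping f(x) = x² with the constraint −x ≤ 0 (i.e., x ≥ 0) returns the true cost for feasible points and the penalty constant otherwise.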
Fig. 5 The convergence curve for functions F1–F6
Fig. 6 The convergence curve for functions F7–F12
Fig. 7 The convergence curve for functions F13–F18
Fig. 8 Convergence curve for functions F19–F25
Table 14 The average of SR for each algorithm

| Fn | CS | SSO | ALO | GWO | SSA | WOA | MFO | ABC | SCA | VPL | VPLSCA |
| F1 | 0 | 0 | 0 | 0 | 0 | 0.3963 | 0 | 0 | 0 | 0.7436 | 1 |
| F2 | 0 | 0 | 0 | 0 | 0 | 0.5096 | 0 | 0 | 0 | 0.7430 | 1 |
| F3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.7369 | 1 |
| F4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.7089 | 1 |
| F5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.7373 | 1 |
| F6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1453 | 1 |
| F7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| F8 | 0 | 0 | 0 | 0 | 0 | 0.3861 | 0 | 0 | 0 | 0.7383 | 1 |
| F9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| F10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| F11 | 0 | 0 | 0 | 0 | 0 | 0.548936 | 0 | 0 | 0 | 0.71773 | 1 |
| F12 | 0 | 0 | 0 | 0.6993 | 0 | 0.702041 | 0 | 0 | 0 | 0.737415 | 1 |
| F13 | 0 | 0 | 0 | 0.8365 | 0 | 0.66205 | 0 | 0 | 0 | 0.750693 | 1 |
| F14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.8885 | 1 |
| F15 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0.6282 | 0.8418 |
| F16 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0.7272 | 0.8264 |
| F17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| F18 | 0 | 0 | 0 | 0 | 0 | 0.4331 | 0 | 0 | 0 | 0.7426 | 1 |
| F19 | 0 | 0 | 0 | 0.5302 | 0 | 1 | 0 | 0 | 0 | 0.4624 | 0.7941 |
| F20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3712 | 1 |
| F21 | 0.4453 | 0 | 0.9866 | 0 | 0.1666 | 0.9882 | 0.7333 | 0.1333 | 0 | 0.7373 | 1 |
| F22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| F23 | 0 | 0 | 0 | 0 | 0 | 0.3642 | 0 | 0 | 0 | 0.7368 | 1 |
| F24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1444 | 1 |
| F25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Average | 0.0178 | 0 | 0.0394 | 0.0826 | 0.0066 | 0.3196 | 0.0293 | 0.0053 | 0 | 0.4879 | 0.9785 |
5.5.1 Welded beam design
The objective function of the welded beam design problem is to minimize the total fabrication cost subject to constraints on the bending stress in the beam (σ), the buckling load on the bar (Pc), the shear stress (τ), and the end deflection of the beam (δ). Four variables are computed: the weld thickness h (x1), the length of the welded area l (x2), the depth t (x3), and the thickness b (x4) of the main beam. The diagram of this problem is shown in Fig. 10. The formulation of this problem can be expressed as follows:

Consider x = [x1 x2 x3 x4] = [h l t b]  (52)

Minimize f(x) = 1.10471 x1² x2 + 0.04811 x3 x4 (14.0 + x2)  (53)

g1(x) = τ(x) − τmax ≤ 0  (54)

g2(x) = σ(x) − σmax ≤ 0  (55)

g3(x) = δ(x) − δmax ≤ 0  (56)

g4(x) = x1 − x4 ≤ 0  (57)

g5(x) = P − Pc(x) ≤ 0  (58)

g6(x) = 0.125 − x1 ≤ 0  (59)

g7(x) = 1.10471 x1² + 0.04811 x3 x4 (14.0 + x2) − 5.0 ≤ 0  (60)

0.10 ≤ x1 ≤ 2.00,  (61)

0.10 ≤ x2 ≤ 10.00,  (62)
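Eqs. (52)–(60) translate directly into code. The sketch below is an illustration, not the authors' implementation; it follows the auxiliary stress, deflection, and buckling expressions and the constants given later in the text (with G taken as the standard 12 × 10⁶ psi of this benchmark), and a constraint value g_i(x) ≤ 0 indicates feasibility:

```python
import math

# Constants of the welded beam problem (values as given in the text).
P, L = 6000.0, 14.0
E, G = 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Fabrication cost, Eq. (53)."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_constraints(x):
    """Left-hand sides of g1..g7 (Eqs. 54-60); feasible when all <= 0."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2) * x1 * x2)
    M = P * (L + x2 / 2)
    R = math.sqrt(x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (x4 * x3 ** 2)
    delta = 6 * P * L ** 3 / (E * x4 * x3 ** 2)   # deflection as given in the text
    pc = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36) / L ** 2) \
         * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G)))
    return [tau - TAU_MAX, sigma - SIGMA_MAX, delta - DELTA_MAX,
            x1 - x4, P - pc, 0.125 - x1,
            1.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0]
```

As a check, the well-known near-optimal design x ≈ (0.2057, 3.4705, 9.0366, 0.2057) from the literature yields a cost of about 1.725.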
Table 15 The results of the Wilcoxon's rank-sum test for comparison between VPLSCA and other algorithms. Each cell shows the p value, with (+)/(−) indicating that the significant difference favors/disfavors VPLSCA; cells without a sign show no significant difference.

| Fn | CS | SSO | ALO | GWO | SSA | WOA | MFO | ABC | SCA | VPL |
| F1 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F2 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.007937 (+) |
| F3 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F4 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.007937 (+) |
| F5 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F6 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.007937 (+) |
| F7 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.007937 (+) |
| F8 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F9 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F10 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F11 | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 1 |
| F12 | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.1507 |
| F13 | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.007 (+) |
| F14 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F15 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) |
| F16 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) |
| F17 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) |
| F18 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) |
| F19 | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.8412 |
| F20 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.2222 |
| F21 | 0.0079 (+) | 0.8413 | 0.0079 (+) | 0.0079 (+) | 0.6905 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F22 | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) | 0.0079 (−) |
| F23 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 1 |
| F24 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0158 (+) |
| F25 | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) | 0.0079 (+) |
Fig. 9 Diversity curve for functions F3, F5, and F13 at dimensions 60, 100, and 1000
0.10 ≤ x3 ≤ 10.00,  (63)

0.10 ≤ x4 ≤ 2.00,  (64)

where

τ(x) = sqrt[(τ′)² + 2 τ′ τ″ x2/(2R) + (τ″)²],  τ′ = P / (√2 x1 x2),  τ″ = M R / J,

M = P (L + x2/2),  R = sqrt[x2²/4 + ((x1 + x3)/2)²],

J = 2 { √2 x1 x2 [x2²/4 + ((x1 + x3)/2)²] },

σ(x) = 6 P L / (x4 x3²),  δ(x) = 6 P L³ / (E x4 x3²),

Pc(x) = (4.013 E sqrt(x3² x4⁶ / 36) / L²) (1 − (x3 / 2L) sqrt(E / 4G)),

with P = 6000 lb, L = 14 in, δmax = 0.25 in, E = 30 × 10⁶ psi, G = 12 × 10⁶ psi, τmax = 13,600 psi, and σmax = 30,000 psi.

As stated above, the objective function is given in Eq. (53), the associated seven constraints are reflected in Eqs. (54)–(60), and the variable ranges are shown in Eqs. (61)–(64).

VPLSCA is applied to this problem and compared with some of the best-known studies in the literature. In this regard, many methods have been applied, such as the simplex, random, and Davidon–Fletcher–Powell (DFP) methods [90], co-evolutionary differential evolution (CEDE) [91], genetic algorithm (GA) [92], co-evolutionary particle swarm optimization (CPSO) [93], evolution strategy (ES) [94], ant colony optimization (ACO) [95], gravitational search algorithm (GSA) [96], CSS [95], multi-verse optimization (MVO) [84], harmony search (HS) [97], improved harmony search (IHS) [98], reinforced cuckoo search algorithm (RCSA) [99], grouping particle swarm optimizer (GPSO) [100], whale optimization algorithm (WOA) [89], ray optimization (RO) [101], and magnetic charged system search (MCSS) [102]. The results of applying the different methods can be seen in Table 17. According to this table, some methods (e.g., SCA, improved HS, GSA, WOA, CBO, MCSS, and ACO) report better cost values than our proposed algorithm. However, analyzing the results scrupulously, we identified that the methods with a better cost function than our proposed algorithm violate constraints of the model. To show the efficacy of the presented approach, a new column has been added to Table 17 indicating the feasibility of the solutions of the different methods, from which we can simply conclude that VPLSCA has better performance in comparison with its rivals.

Fig. 10 The structure of welded beam design

5.5.2 Tension/compression spring design

The schematic of the problem is shown in Fig. 11. This problem aims to minimize the weight of a tension/compression spring (TCS); many studies in the literature consider this problem. As shown in the figure, there are three variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The problem comprises three constraints, including surge frequency, minimum deflection, and shear stress. The problem formulation is as follows:

Consider x = [x1 x2 x3] = [d D N]  (65)

Minimize f(x) = (x3 + 2) x2 x1²  (66)

g1(x) = 1 − (x2³ x3) / (71785 x1⁴) ≤ 0  (67)

g2(x) = (4x2² − x1 x2) / (12566 (x2 x1³ − x1⁴)) + 1 / (5108 x1²) − 1 ≤ 0  (68)

g3(x) = 1 − (140.45 x1) / (x2² x3) ≤ 0  (69)

g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0  (70)

0.05 ≤ x1 ≤ 2.00,  (71)

0.25 ≤ x2 ≤ 1.30,  (72)

2.00 ≤ x3 ≤ 15.00,  (73)

Equation (66) gives the objective function, the weight of the tension/compression spring to be minimized; Eqs. (67)–(70) express all the constraints; and Eqs. (71)–(73) describe the variable ranges.
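The TCS formulation of Eqs. (65)–(70) is small enough to write out directly. This sketch is for illustration only; the reference design values used below are the well-known near-optimal solution from the literature:

```python
def tcs_weight(x):
    """Objective of Eq. (66): (x3 + 2) * x2 * x1^2, with x = [d, D, N]."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1 ** 2

def tcs_constraints(x):
    """Left-hand sides of g1..g4 (Eqs. 67-70); feasible when all <= 0."""
    x1, x2, x3 = x
    g1 = 1 - x2 ** 3 * x3 / (71785 * x1 ** 4)
    g2 = (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4)) \
         + 1 / (5108 * x1 ** 2) - 1
    g3 = 1 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1
    return [g1, g2, g3, g4]
```

For the near-optimal design x ≈ (0.0517, 0.3567, 11.289), the weight is about 0.01267, in line with the values reported for this benchmark.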
Table 16 The average and standard deviation of fitness value for VPL, SCA, and VPLSCA algorithms

| Fn | Measure | SCA (60) | VPL (60) | VPLSCA (60) | SCA (100) | VPL (100) | VPLSCA (100) | SCA (1000) | VPL (1000) | VPLSCA (1000) |
| F1 | Avg | 16,441.98 | 0 | 0 | 33,188.84 | 0 | 0 | 911,983.1 | 0 | 0 |
| F1 | STD | 9269.677 | 0 | 0 | 12,736.53 | 0 | 0 | 174,962.1 | 0 | 0 |
| F2 | Avg | 18.11929 | 0 | 0 | 32.05697 | 0 | 0 | 65,535 | 8.42E−278 | 0 |
| F2 | STD | 8.052529 | 0 | 0 | 19.20775 | 0 | 0 | 65,535 | 0 | 0 |
| F3 | Avg | 131,144.2 | 0 | 0 | 403,038.5 | 0 | 0 | 38,446,881 | 0 | 0 |
| F3 | STD | 28,975.32 | 0 | 0 | 105,043.2 | 0 | 0 | 8,007,180 | 0 | 0 |
| F4 | Avg | 85.61415 | 1.12E−64 | 9E−280 | 93.46759 | 2.44E−60 | 1.3E−160 | 99.6176 | 7.35E−64 | 3.15E−124 |
| F4 | STD | 4.964163 | 2.5E−64 | 0 | 1.904676 | 5.45E−60 | 2.6E−160 | 0.111 | 1.63E−63 | 7E−124 |
| F5 | Avg | 85,168,409 | 0 | 0 | 3.43E+08 | 0 | 0 | 7.99E+09 | 0 | 0 |
| F5 | STD | 55,356,994 | 0 | 0 | 1.15E+08 | 0 | 0 | 6.5E+08 | 0 | 0 |
| F6 | Avg | 14,075.26 | 9.36E−07 | 0 | 32,947.32 | 0.000224 | 0 | 878,432.3 | 2.183286 | 0 |
| F6 | STD | 8263.1 | 7.3E−07 | 0 | 16,299.74 | 0.000367 | 0 | 232,821.2 | 0.531683 | 0 |
| F7 | Avg | 0.335055 | 0.000173 | 0 | 1.38401 | 0.000203 | 0 | 296.2154 | 7.59E−05 | 0 |
| F7 | STD | 0.145676 | 4.16E−05 | 0 | 0.468815 | 0.00015 | 0 | 32.61321 | 5.39E−05 | 0 |
| F8 | Avg | 2944.409 | 1.21E−14 | 0 | 17,304.54 | 1.46E−12 | 0 | 4,170,860 | 2.43E−09 | 0 |
| F8 | STD | 1737.702 | 0 | 0 | 8409.813 | 3.26E−12 | 0 | 1,156,112 | 6.19E−08 | 0 |
| F9 | Avg | 75.23315 | 1.25E−43 | 0 | 470.7525 | 4.53E−28 | 0 | 135,779.7 | 9.32E−20 | 0 |
| F9 | STD | 60.48956 | 0 | 0 | 142.1045 | 1.01E−27 | 0 | 17,014.49 | 2.79E−24 | 0 |
| F14 | Avg | 114.7424 | 0 | 0 | 361.3009 | 0 | 0 | 8303.409 | 0 | 0 |
| F14 | STD | 91.1922 | 0 | 0 | 142.9304 | 0 | 0 | 1656.362 | 0 | 0 |
| F15 | Avg | 2.62E+08 | 2.9E−14 | 0 | 6.65E+08 | 8.79E−12 | 0 | 2.35E+10 | 1.25E−12 | 0 |
| F15 | STD | 1.53E+08 | 4.21E−12 | 0 | 2.34E+08 | 1.16E−11 | 0 | 2.46E+09 | 1.25E−12 | 0 |
| F16 | Avg | 4.43E+08 | 3.37E−12 | 0 | 1.55E+09 | 3.08E−12 | 0 | 3.99E+10 | 8.17E−13 | 0.894479 |
| F16 | STD | 1.8E+08 | 4.6E−12 | 0 | 5.08E+08 | 3.56E−12 | 0 | 4.81E+09 | 8.23E−13 | 0.169847 |

Bold values indicate the best value
Fig. 11 Structure of the TCS design (design variables x1, x2, x3)
Many scholars have tackled this problem with a variety of techniques: ray optimization (RO) [101], genetic algorithm (GA) [11], a novel particle swarm optimizer [105], reinforced cuckoo search algorithm (RCSA) [99], grouping particle swarm optimizer (GPSO) [100], GA [92], evolution strategies (ES) [12], WOA [89], SES [107], CEDE [91], a co-evolutionary particle swarm optimization approach (CPSO) [104], a swarm algorithm with an information-sharing strategy [108], improved harmony search (IHS) [98], and others. Their results are shown in Table 18. As shown in this table, VPLSCA exhibits superior performance in comparison with the others.
5.5.3 Pressure vessel design
The final instance is the minimization of the total cost of designing a pressure vessel, where the cost of materials, forming, and welding is the objective function. Figure 12 describes this problem and its features. To solve it, four variables are used: the shell thickness (x1), the head thickness (x2), the inner radius (x3), and the length of the cylindrical section of the vessel (x4) are
Table 17 Results of VPLSCA and other methods to solve the welded beam design problem

| Algorithm | l | h | b | t | Optimum cost | Feasibility (yes/no) |
|---|---|---|---|---|---|---|
| VPLSCA | 6.898941 | 0.215231 | 0.215253 | 8.811012 | 2.25998 | Yes |
| VPL [32] | 6.898945 | 0.215235 | 0.216253 | 8.815033 | 2.26973 | Yes |
| SCA [31] | 3.47111 | 0.20581 | 0.2157 | 9.037125 | 1.800885 | No |
| WOA [89] | 3.484293 | 0.205396 | 0.206276 | 9.037426 | 1.730499 | No |
| RCSA [99] | 0.20572 | 3.47041 | 9.03727 | 0.20573 | 1.7246 | No |
| GPSO [100] | 0.206 | 7.092 | 9.037 | 0.206 | 2.218 | Yes |
| GSA [96] | 3.856989 | 0.182129 | 0.202376 | 10.000000 | 1.879952 | No |
| Improved HS [98] | 3.47049 | 0.20573 | 0.2057 | 9.03662 | 1.7248 | No |
| Simplex [90] | 5.6256 | 0.2792 | 0.2796 | 7.7512 | 2.5307 | No |
| CBO [103] | 3.47041 | 0.205722 | 0.205735 | 9.037276 | 1.724663 | No |
| CEDE [91] | 3.542998 | 0.203137 | 0.206179 | 9.033498 | 1.733461 | No |
| ACO [25] | 3.471131 | 0.205700 | 0.205731 | 9.036683 | 1.724918 | No |
| MCSS [102] | 3.470493 | 0.205729 | 0.205729 | 9.036623 | 1.724853 | No |
| DFP [90] | 6.2552 | 0.2434 | 0.2444 | 8.2915 | 2.3841 | Yes |
| CPSO [104] | 3.544214 | 0.202369 | 0.205723 | 9.04821 | 1.72802 | No |
| RO [101] | 3.528467 | 0.203687 | 0.207241 | 9.004233 | 1.735344 | No |
| Random [90] | 4.7313 | 0.4575 | 0.6600 | 5.0853 | 4.1185 | Yes |
| CSS [95] | 3.468109 | 0.205820 | 0.205723 | 9.038024 | 1.724866 | No |
| GSA [96] | 3.856979 | 0.182129 | 0.202376 | 10 | 1.87995 | No |
| APPROX [90] | 6.2189 | 0.2444 | 0.2444 | 8.2915 | 2.3815 | Yes |
| GA [92] | 3.420500 | 0.2489 | 0.21000 | 8.997500 | 1.748309 | No |
| MVO [84] | 3.473193 | 0.205463 | 0.205695 | 9.044502 | 1.72802 | No |
| GA [11] | 3.544214 | 0.202369 | 0.205723 | 9.048210 | 1.728024 | No |
| ES [12] | 3.612060 | 0.199742 | 0.206082 | 9.037500 | 1.737300 | No |
| GA [92] | 6.1730 | 0.2489 | 0.2533 | 8.1789 | 2.4331 | Yes |
| HS [97] | 6.2231 | 0.2442 | 0.2443 | 8.2915 | 2.3807 | No |
| GA [92] | 3.471328 | 0.205986 | 0.206480 | 9.020224 | 1.728226 | No |
given, which are written in the mathematical formulation as follows:

Consider \vec{x} = [x_1\; x_2\; x_3\; x_4] = [T_s\; T_h\; R\; L]    (74)

Minimize f(\vec{x}) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3    (75)

g_1(\vec{x}) = -x_1 + 0.0193 x_3 \le 0    (76)

g_2(\vec{x}) = -x_2 + 0.00954 x_3 \le 0    (77)

g_3(\vec{x}) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0    (78)

g_4(\vec{x}) = x_4 - 240 \le 0    (79)

0 \le x_1 \le 99,    (80)

0 \le x_2 \le 99,    (81)

10 \le x_3 \le 200,    (82)

10 \le x_4 \le 200    (83)
Equation (75) defines the objective function, the total cost to be minimized; Eqs. (76)–(79) give the constraints, and the variable ranges are shown in Eqs. (80)–(83). As for the problems above, several approaches have been applied to this problem. We can mention the branch and bound method [109], WOA [89], improved ACO [95], the augmented Lagrangian multiplier approach [110], CEDE [91], wind-driven water wave optimization (WDWWO) [111], RCSA [99], grouping particle swarm optimizer (GPSO) [100], improved HS [98], different GAs [112, 113, 92], CPSO [104], MVO [84], ES
Table 18 Comparison of the proposed approach with other methods for TCS design

| Algorithm | D | N | d |
|---|---|---|---|
| VPLSCA | 0.331580 | 12.744269 | 0.0501550 |
| VPL [32] | 0.331680 | 12.834269 | 0.0501910 |
| SCA [31] | 0.343215 | 11.994032 | 0.050905 |
| IHS [98] | 0.349871 | 12.076432 | 0.051154 |
| CEDE [91] | 0.354714 | 11.410831 | 0.051609 |
| RO [101] | 0.349096 | 11.76279 | 0.051370 |
| ES [12] | 0.363965 | 10.890522 | 0.051989 |
| [108] | 0.050417 | 3.979915 | 0.321532 |
| GA [11] | 0.351661 | 11.632201 | 0.051480 |
| PSO [105] | 0.357644 | 11.244543 | 0.051728 |
| ACO [25] | 0.361500 | 11.00000 | 0.051865 |
| GPSO [100] | 0.0517 | 0.3573 | 11.2540 |
| SES [107] | N/A | N/A | N/A |
| GA [92] | 0.355360 | 11.397926 | 0.051643 |
| WOA [89] | 0.345215 | 12.004032 | 0.051207 |
| DE [91] | 0.354714 | 11.410831 | 0.051609 |
| MCSS [102] | 0.356496 | 11.271529 | 0.051645 |
| RCSA [99] | 0.051688 | 0.356710 | 11.289398 |

Fig. 12 Pressure vessel design and its features (design variables x1, x2, x3, x4)
[94], CSS [95], PSO-DE [114], and DE [105] as the main approaches in the literature for coping with this problem. Table 19 compares the results of the different approaches and shows that the proposed approach is competitive with the others.
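The cost function and constraints of the pressure vessel problem translate directly into code. The following is a minimal sketch (not the authors' implementation), assuming the conventional 0.6224 leading coefficient of Eq. (75); the function names and feasibility tolerance are illustrative:

```python
import math

def pv_cost(x):
    # Eq. (75): material, forming, and welding cost; x = (Ts, Th, R, L)
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pv_constraints(x):
    # Eqs. (76)-(79); each g_i(x) <= 0 when the design is feasible
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                    # Eq. (76)
        -x2 + 0.00954 * x3,                                   # Eq. (77)
        -math.pi * x3 ** 2 * x4
        - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,        # Eq. (78)
        x4 - 240.0,                                           # Eq. (79)
    ]

def pv_feasible(x, tol=1e-6):
    # A design is accepted when no constraint is violated beyond `tol`.
    return all(g <= tol for g in pv_constraints(x))
```

For example, the deliberately conservative design (Ts, Th, R, L) = (1.0, 1.0, 50.0, 150.0) satisfies all four constraints and costs about 10,580.17, while stretching L beyond the 240 limit of Eq. (79) makes the design infeasible.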
The experimental comparisons between the proposed VPLSCA method and the other methods show that VPLSCA converges well on both global optimization and engineering problems. VPLSCA outperforms the other optimizers (CS, SSO, ALO, GWO, WOA, MFO, ABC, SCA, VPL) for 44% of the total number of functions and provides the same performance for 28%. The main reason for this high performance is the use of the SCA as a local search operator in the traditional VPL. This leads to providing the
Table 18 (continued) Optimum cost and feasibility

| Algorithm | Optimum cost | Feasibility (yes/no) |
|---|---|---|
| VPLSCA | 0.012298157 | Yes |
| VPL [32] | 0.0123947 | Yes |
| SCA [31] | 0.012446006 | Yes |
| IHS [98] | 0.0126706 | Yes |
| CEDE [91] | 0.0126702 | Yes |
| RO [101] | 0.0126788 | No |
| ES [12] | 0.0126810 | Yes |
| [108] | 0.013060 | Yes |
| GA [11] | 0.0127048 | Yes |
| PSO [105] | 0.0126747 | Yes |
| ACO [25] | 0.0126432 | Yes |
| GPSO [100] | 0.0127 | No |
| SES [107] | 0.012732 | N/A |
| GA [92] | 0.0126698 | Yes |
| WOA [89] | 0.0126763 | Yes |
| DE [91] | 0.0126702 | Yes |
| MCSS [102] | 0.0126192 | No |
| RCSA [99] | 0.0126652 | No |
VPL with operators that avoid stagnation at local optima and maintain the diversity of the solutions during the optimization process. However, VPLSCA still needs improvement, especially in CPU time, since the original VPL is time-consuming; this can be addressed by a parallel implementation. Another limitation is that VPLSCA contains several randomly generated parameters, which deeply influence the convergence of the algorithm. Therefore, chaotic maps can be used to overcome this limitation.
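The SCA move that VPLSCA applies during the learning phase follows the position update of [31]: a solution steps towards (or away from) the best solution found so far along a sine or cosine wave whose amplitude decays over the iterations. The sketch below illustrates that update and a greedy acceptance rule; the function names, the driver, and the test objective are illustrative assumptions, not the authors' code:

```python
import math
import random

def sca_step(x, best, t, t_max, a=2.0):
    """One sine cosine algorithm position update (Mirjalili [31]).

    x     -- current solution (list of floats)
    best  -- destination point P (best solution found so far)
    t     -- current iteration; t_max -- iteration budget
    a     -- controls how fast the step amplitude r1 decays
    """
    r1 = a - t * (a / t_max)  # linearly decreasing amplitude: exploration -> exploitation
    out = []
    for xi, pi in zip(x, best):
        r2 = random.uniform(0.0, 2.0 * math.pi)  # phase of the sine/cosine wave
        r3 = random.uniform(0.0, 2.0)            # random weight on the destination
        wave = math.sin(r2) if random.random() < 0.5 else math.cos(r2)
        out.append(xi + r1 * wave * abs(r3 * pi - xi))
    return out

def sphere(x):
    # Simple test objective: global minimum 0 at the origin
    return sum(v * v for v in x)
```

Used greedily inside a host metaheuristic (accept the move only if it improves the fitness), this single update refines a solution without any of the host's other operators, which is the role SCA plays in VPLSCA's learning phase.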
6 Conclusion and future works
This paper provides a modified version of the VPL algorithm to enhance its ability to find the best solution. In general, the VPL algorithm emulates the rules of the volleyball game, such as competition and interaction between teams during a season, and also simulates the coaching process. The VPL has been applied to a number of optimization and engineering problems, and the results established that it has a high ability to reach better solutions than other methods. However, the accuracy of the VPL algorithm still requires improvement, especially in the learning phase, which can push the VPL towards local optima. Therefore, this paper used the SCA, which has a high ability to explore the search space. In the proposed method, called VPLSCA, the SCA is used during the learning phase, which gives the VPL a high ability to search for solutions at this phase. To
Table 19 Comparison of the proposed approach results with literature for a pressure vessel design problem

| Algorithm | Ts | Th | R | L |
|---|---|---|---|---|
| VPLSCA | 0.8152 | 0.4265 | 42.0851245 | 176.73154 |
| VPL [32] | 0.815200 | 0.426500 | 42.0912541 | 176.742314 |
| GA [92] | 0.937500 | 0.437500 | 42.097398 | 176.654050 |
| WOA [89] | 0.812500 | 0.437500 | 42.0982699 | 176.638998 |
| ACO [25] | 0.812500 | 0.437500 | 42.098353 | 176.637751 |
| Improved HS [98] | 1.125000 | 0.625000 | 58.29015 | 43.69268 |
| MVO [84] | 0.8125 | 0.4375 | 42.090738 | 176.73869 |
| GA [11] | 0.812500 | 0.437500 | 40.323900 | 200.000000 |
| SCA [31] | 0.8125 | 0.4378 | 42.0883699 | 176.648998 |
| ES [12] | 0.812500 | 0.437500 | 42.098087 | 176.640518 |
| PSO-DE [114] | 0.812500 | 0.437500 | 42.098446 | 176.636600 |
| DE [91] | 0.812500 | 0.437500 | 42.098411 | 176.637690 |
| ALM [115] | 1.125000 | 0.625000 | 58.2910 | 43.69 |
| CSS [95] | 0.812500 | 0.437500 | 42.103624 | 176.572656 |
| GA [113] | 0.812500 | 0.437500 | 48.329000 | 112.679000 |
| B & B [109] | 1.125000 | 0.625000 | 47.7000 | 117.7010 |
| CEDE [91] | 0.812500 | 0.437500 | 42.098411 | 176.637690 |
| WDWWO [111] | 0.9803 | 0.4854 | 50.7236 | 92.7062 |
| RCSA [99] | 0.9803 | 0.4854 | 50.7236 | 92.7062 |
| GAS [112] | 0.937500 | 0.50000 | 48.329 | 112.679 |
| CPSO [104] | 0.812500 | 0.437500 | 42.091266 | 176.746500 |
| GPSO [100] | 0.778 | 0.385 | 40.321 | 200.000 |
investigate the performance of the proposed VPLSCA algorithm, a series of experiments was performed on twenty-five CEC2005 benchmark functions and three engineering problems. The results of these experiments show that the proposed VPLSCA algorithm has higher performance than other algorithms such as CS, SSA, ALO, MFO, and WOA, as well as the classic SCA and VPL. Based on this superiority, VPLSCA can be used in future works in different fields, (1) as a feature selection method by converting it to a binary version, (2) for multi-level thresholding image segmentation by finding the optimal threshold values, and (3) for reducing the energy consumption of virtual machine placement in cloud computing.
Table 19 (continued) Optimum cost and feasibility

| Algorithm | Optimum cost | Feasibility (yes/no) |
|---|---|---|
| VPLSCA | 6042.711935 | Yes |
| VPL [32] | 6043.986 | Yes |
| GA [92] | 6059.9463 | Yes |
| WOA [89] | 6059.7410 | Yes |
| ACO [25] | 6059.7258 | Yes |
| Improved HS [98] | 7197.730 | No |
| MVO [84] | 6060.8066 | Yes |
| GA [11] | 6288.7445 | Yes |
| SCA [31] | 6058.2907 | Yes |
| ES [12] | 6059.7456 | No |
| PSO-DE [114] | 6059.71433 | Yes |
| DE [91] | 6059.7340 | Yes |
| ALM [115] | 7198.0428 | No |
| CSS [95] | 6059.0888 | No |
| GA [113] | 6410.3811 | No |
| B & B [109] | 8129.1036 | No |
| CEDE [91] | 6059.7340 | Yes |
| WDWWO [111] | 6335.4270 | No |
| RCSA [99] | 6335.4270 | No |
| GAS [112] | 6410.3811 | Yes |
| CPSO [104] | 6061.0777 | Yes |
| GPSO [100] | 5885.703 | Yes |

Compliance with ethical standards

Conflict of interest The authors declare no conflict of interest.

References

1. Mousavi-Avval SH et al (2017) Application of multi-objective genetic algorithms for optimization of energy, economics and
environmental life cycle assessment in oilseed production. J
Clean Product 140:804–815
2. Chou J-S, Pham A-D (2017) Nature-inspired metaheuristic optimization in least squares support vector regression for obtaining bridge scour information. Inf Sci 399:64–80
3. Shamir J et al (1992) Optimization methods for pattern recognition. In: Critical reviews. SPIE, Bellingham
4. Ghaedi AM et al (2016) Adsorption of Triamterene on multi-walled and single-walled carbon nanotubes: artificial neural network modeling and genetic algorithm optimization. J Mol Liq 216:654–665
5. Wang Z et al (2016) A modified ant colony optimization algorithm for network coding resource minimization. IEEE Trans Evol Comput 20(3):325–342
6. Voudouris C, Tsang EP, Alsheddy A (2010) Guided local search. In: Handbook of metaheuristics. Springer, New York, pp 321–361
7. Baba N, Shoman T, Sawaragi Y (1977) A modified convergence theorem for a random optimization method. Inf Sci 13(2):159–166
8. Lourenço HR, Martin O, Stützle T (2001) A beginner's introduction to iterated local search. In: Proceedings of MIC
9. Mladenović N, Hansen P (1997) Variable neighborhood search. Comput Oper Res 24(11):1097–1100
10. Burke EK, Kendall G, Soubeiga E (2003) A tabu-search hyperheuristic for timetabling and rostering. J Heuristics 9(6):451–470
11. Goldberg D (1989) Genetic algorithms in search, optimization, and machine learning. In: Ohno K, Esfarjani K, Kawazoe Y (eds) Computational materials and science. Addison-Wesley, Reading
12. Beyer H-G, Schwefel H-P (2002) Evolution strategies—a comprehensive introduction. Nat Comput 1(1):3–52
13. Yao X, Liu Y, Lin G (1999) Evolutionary programming made
faster. IEEE Trans Evol Comput 3(2):82–102
14. O’Neill M, Ryan C (2001) Grammatical evolution. IEEE Trans
Evol Comput 5(4):349–358
15. Cui L et al (2016) Adaptive differential evolution algorithm with
novel mutation strategies in multiple sub-populations. Comput
Oper Res 67:155–173
16. Gandomi AH (2014) Interior search algorithm (ISA): a novel
approach for global optimization. ISA Trans 53(4):1168–1183
17. Salimi H (2015) Stochastic fractal search: a powerful metaheuristic algorithm. Knowl-Based Syst 75:1–18
18. Javidy B, Hatamlou A, Mirjalili S (2015) Ions motion algorithm
for solving optimization problems. Appl Soft Comput 32:72–79
19. Zheng Y-J (2015) Water wave optimization: a new natureinspired metaheuristic. Comput Oper Res 55:1–11
20. Sadollah A et al (2013) Mine blast algorithm: a new population
based algorithm for solving constrained engineering optimization
problems. Appl Soft Comput 13(5):2592–2612
21. Ahrari A, Atai AA (2010) Grenade explosion method—a novel
tool for optimization of multimodal functions. Appl Soft Comput
10(4):1132–1140
22. Eberhart RC, Kennedy J (1995) A new optimizer using particle
swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, New York
23. Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man
Cybern Part B Cybern 26(1):29–41
24. Dorigo M et al (2008) Ant colony optimization and swarm intelligence. In: Proceedings of the 6th international conference, ANTS
2008, vol 5217, Springer, Brussels, 22–24 Sep 2008
25. Dorigo M, Stützle T (2010) Ant colony optimization: overview
and recent advances. In: Handbook of metaheuristics
26. Dorigo M, Gambardella LM (1997) Ant colony system: a cooperative learning approach to the traveling salesman problem.
IEEE Trans Evol Comput 1(1):53–66
27. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer.
Adv Eng Softw 69:46–61
28. Pham D et al (2011) The Bees algorithm–a novel tool for complex optimisation. In: Intelligent production machines and systems—2nd I* PROMS virtual international conference, 3–14 Jul
2006, Elsevier
29. Cuevas E et al (2013) A swarm optimization algorithm
inspired in the behavior of the social-spider. Expert Syst Appl
40(16):6374–6384
30. Karaboga D, Ozturk C (2011) A novel clustering approach:
artificial bee colony (ABC) algorithm. Appl Soft Comput
11(1):652–657
31. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
32. Moghdani R, Salimifard K (2018) Volleyball premier league
algorithm. Appl Soft Comput 64:161–185
33. Issa M et al (2018) ASCA-PSO: adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local
sequence alignment. Expert Syst Appl 99:56–70
34. Chen K et al (2018) A hybrid particle swarm optimizer with sine
cosine acceleration coefficients. Inf Sci 422:218–241
35. Nenavath H, Jatoth RK (2018) Hybridizing sine cosine algorithm
with differential evolution for global optimization and object
tracking. Appl Soft Comput 62:1019–1043
36. Abd Elaziz M, Oliva D, Xiong S (2017) An improved oppositionbased sine cosine algorithm for global optimization. Expert Syst
Appl 90:484–500
37. Rizk-Allah RM (2018) Hybridizing sine cosine algorithm with
multi-orthogonal search strategy for engineering design problems. J Comput Des Eng 5(2):249–273
38. Reddy KS et al (2018) A new binary variant of sine–cosine algorithm: development and application to solve profit-based unit
commitment problem. Arab J Sci Eng 43(8):4041–4056
39. Banerjee A, Nabi M (2017) Re-entry trajectory optimization for
space shuttle using sine–cosine algorithm. In: 2017 8th international conference on recent advances in space technologies
(RAST)
40. Tawhid MA, Savsani V (2017) Multi-objective sine–cosine algorithm (MO-SCA) for multi-objective engineering design problems. Neural Comput Appl
41. Mohammed Mudhsh SX, El Aziz MA, Hassanien AE, Duan P
(2017) Hybrid swarm optimization for document image binarization based on Otsu function. CASA
42. Abd El Aziz M, Selim IM, Xiong S (2017) Automatic detection
of galaxy type from datasets of galaxies image based on image
retrieval approach. Sci Rep 7(1):4463
43. Hafez AI et al (2016) Sine cosine optimization algorithm for
feature selection. In: 2016 international symposium on innovations in intelligent systems and applications (INISTA). IEEE,
New York
44. Bairathi D, Gopalani D (2017) Opposition-based sine cosine
algorithm (OSCA) for training feed-forward neural networks.
In: 2017 13th international conference on signal-image technology & internet-based systems (SITIS). IEEE, New York
45. Li N, Li G, Deng Z (2017) An improved sine cosine algorithm
based on levy flight. In: Ninth international conference on digital
image processing (ICDIP 2017). International Society for Optics
and Photonics
46. Qu C et al (2018) A modified sine–cosine algorithm based on
neighborhood search and greedy levy mutation. Comput Intell
Neurosci
47. Zou Q et al (2018) Optimal operation of cascade hydropower stations based on chaos cultural sine cosine algorithm. In: IOP conference series: materials science and engineering. IOP Publishing
48. Meshkat M, Parhizgar M (2017) A novel weighted update position mechanism to improve the performance of sine cosine algorithm. In: 2017 5th Iranian joint congress on fuzzy and intelligent
systems (CFIS). IEEE, New York
49. Bureerat S, Pholdee N (2017) Adaptive sine cosine algorithm
integrated with differential evolution for structural damage detection. In: International conference on computational science and
its applications. Springer, New York
50. Elaziz MEA et al (2017) A hybrid method of sine cosine algorithm and differential evolution for feature selection. In: International conference on neural information processing. Springer,
New York
51. Zhou C et al (2017) A sine cosine mutation based differential
evolution algorithm for solving node location problem. Int J
Wirel Mobile Comput 13(3):253–259
52. Oliva D et al (2018) Context based image segmentation using
antlion optimization and sine cosine algorithm. Multimed Tools
Appl 77(19):25761–25797
53. Pasandideh SHR, Khalilpourazari S (2018) Sine cosine crow
search algorithm: a powerful hybrid meta heuristic for global
optimization. arXiv preprint: arXiv:1801.08485
54. Singh N, Singh S (2017) A novel hybrid GWO-SCA approach for
optimization problems. Eng Sci Technol Int J 20(6):1586–1601
55. Zhang J, Zhou Y, Luo Q (2018) An improved sine cosine water
wave optimization algorithm for global optimization. J Intell
Fuzzy Syst 34(4):2129–2141
56. Nenavath H, Jatoth RK (2019) Hybrid SCA–TLBO: a novel optimization algorithm for global optimization and visual tracking.
Neural Comput Appl 31(9):5497–5526
57. Majhi SK (2018) An efficient feed foreword network model with
sine cosine algorithm for breast cancer classification. Int J Syst
Dyn Appl (IJSDA) 7(2):1–14
58. Raut U, Mishra S (2019) Power distribution network reconfiguration using an improved sine–cosine algorithm-based meta-heuristic search. In: Soft computing for problem solving. Springer,
New York, pp 1–13
59. Ghosh A, Mukherjee V (2017) Temperature dependent optimal power
flow. In: 2017 international conference on technological advancements in power and energy (TAP energy). IEEE, New York
60. Issa M et al (2018) Pairwise global sequence alignment using
sine–cosine optimization algorithm. In: International conference
on advanced machine learning technologies and applications.
Springer, New York
61. SeyedShenava S, Asefi S (2018) Tuning controller parameters for
AGC of multi-source power system using SCA algorithm. Delta
2(B2):B2
62. Rajesh K, Dash S (2019) Load frequency control of autonomous
power system using adaptive fuzzy based PID controller optimized on improved sine cosine algorithm. J Ambient Intell Hum
Comput 10(6):2361–2373
63. Khezri R et al (2018) Coordination of heat pumps, electric vehicles and AGC for efficient LFC in a smart hybrid power system
via SCA-based optimized FOPID controllers. Energies 11(2):420
64. Mostafa E, Abdel-Nasser M, Mahmoud K (2017) Performance
evaluation of metaheuristic optimization methods with mutation operators for combined economic and emission dispatch.
In: 2017 nineteenth international middle east power systems
conference (MEPCON). IEEE, New York
65. Singh PP et al (2017) Comparative analysis on economic load
dispatch problem optimization using moth flame optimization
and sine cosine algorithms 2:65–75
66. Majeed MAM, Rao PS (2017) Optimization of CMOS analog
circuits using sine cosine algorithm. In: 2017 8th international
conference on computing, communication and networking technologies (ICCCNT)
67. Ramanaiah ML, Reddy MD (2017) Sine cosine algorithm for
loss reduction in distribution system with unified power quality
conditioner. i-Manag J Power Syst Eng 5(3):10
68. Dhundhara S, Verma YP (2018) Capacitive energy storage with
optimized controller for frequency regulation in realistic multisource deregulated power system. Energy 147:1108–1128
69. Singh V (2017) Sine cosine algorithm based reduction of higher
order continuous systems. In: 2017 international conference on
intelligent sustainable systems (ICISS). IEEE, New York
70. Tasnin W, Saikia LC (2017) Maiden application of a sine–
cosine algorithm optimised FO cascade controller in automatic
generation control of multi-area thermal system incorporating
dish-Stirling solar and geothermal power plants. IET Renew
Power Gener 12(5):585–597
71. Rout B, Pati BB, Panda S (2018) Modified SCA algorithm for
SSSC damping controller design in power systems. ECTI Trans
Electric Eng Electron Commun 16(1):46–63
72. Sahu N, Londhe ND (2017) Selective harmonic elimination in
five level inverter using sine cosine algorithm. In: 2017 IEEE
international conference on power, control, signals and instrumentation engineering (ICPCSI). IEEE, New York
73. Das S, Bhattacharya A, Chakraborty AK (2018) Solution of
short-term hydrothermal scheduling using sine cosine algorithm.
Soft Comput 22(19):6409–6427
74. Ismael SM, Aleem SHA, Abdelaziz AY (2017) Optimal selection
of conductors in Egyptian radial distribution systems using sine–
cosine optimization algorithm. In: 2017 nineteenth international
middle east power systems conference (MEPCON). IEEE, New
York
75. Kumar V, Kumar D (2017) Data clustering using sine cosine
algorithm: data clustering using SCA. In: Handbook of research
on machine learning innovations and trends. IGI Global, pp
715–726
76. Mahdad B, Srairi K (2018) A new interactive sine cosine algorithm for loading margin stability improvement under contingency. Electr Eng 100(2):913–933
77. Sindhu R et al (2017) Sine–cosine algorithm for feature selection with elitism strategy and new updating mechanism. Neural
Comput Appl 28(10):2947–2958
78. Yıldız BS, Yıldız AR (2018) Comparison of grey wolf, whale,
water cycle, ant lion and sine–cosine algorithms for the optimization of a vehicle engine connecting rod. Mater Test
60(3):311–315
79. Kumar N et al (2017) Single sensor-based MPPT of partially
shaded PV system for battery charging by using cauchy and
gaussian sine cosine optimization. IEEE Trans Energy Convers
32(3):983–992
80. Abd Elfattah M et al (2017) Handwritten Arabic manuscript
image binarization using sine cosine optimization algorithm. In:
Genetic and evolutionary computing. Springer, Cham
81. Turgut OE (2017) Thermal and economical optimization of a
shell and tube evaporator using hybrid backtracking search—
sine–cosine algorithm. Arab J Sci Eng 42(5):2105–2123
82. Wang J et al (2018) A novel hybrid forecasting system of wind
speed based on a newly developed multi-objective sine cosine
algorithm. Energy Convers Manag 163:134–150
83. Mirjalili S (2015) Moth-flame optimization algorithm: a
novel nature-inspired heuristic paradigm. Knowl-Based Syst
89:228–249
84. Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27(2):495–513
85. Yang XS, Deb S (2009) Cuckoo search via Levy flights. In: Proceedings of world congress on nature & biologically inspired
computing, pp 210–225
86. Yu JJQ, Li VOK (2015) A social spider algorithm for global
optimization. Appl Soft Comput 30:614–627
87. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw
83:80–98
88. Mirjalili S et al (2017) Salp swarm algorithm: a bio-inspired
optimizer for engineering design problems. Adv Eng Softw
114:163–191
89. Mirjalili S, Lewis A (2016) The whale optimization algorithm.
Adv Eng Softw 95:51–67
90. Ragsdell K, Phillips D (1976) Optimal design of a class of
welded structures using geometric programming. J Eng Ind
98(3):1021–1025
91. Huang F-Z, Wang L, He Q (2007) An effective co-evolutionary
differential evolution for constrained optimization. Appl Math
Comput 186(1):340–356
92. Coello CAC, Montes EM (2002) Constraint-handling in genetic
algorithms through the use of dominance-based tournament
selection. Adv Eng Inform 16(3):193–203
93. Krohling RA, Hoffmann F, Coelho LS (2004) Co-evolutionary
particle swarm optimization for min-max problems using Gaussian distribution. In: Proceedings of the 2004 congress on evolutionary computation (IEEE cat. no. 04TH8753)
94. Mezura-Montes E, Coello CAC (2008) An empirical study about
the usefulness of evolution strategies to solve constrained optimization problems. Int J Gen Syst 37(4):443–473
95. Kaveh A, Talatahari S (2010) Optimal design of skeletal structures via the charged system search algorithm. Struct Multidiscip
Optim 41(6):893–911
96. Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
97. Lee KS, Geem ZW (2005) A new meta-heuristic algorithm for
continuous engineering optimization: harmony search theory and
practice. Comput Methods Appl Mech Eng 194(36):3902–3933
98. Mahdavi M, Fesanghary M, Damangir E (2007) An improved
harmony search algorithm for solving optimization problems.
Appl Math Comput 188(2):1567–1579
99. Thirugnanasambandam K et al (2019) Reinforced cuckoo search
algorithm-based multimodal optimization. Appl Intell
100. Zhao X, Zhou Y, Xiang Y (2019) A grouping particle swarm
optimizer. Appl Intell
101. Kaveh A, Khayatazad M (2012) A new meta-heuristic method:
ray optimization. Comput Struct 112–113:283–294
102. Kaveh A, Motie Share M, Moslehi M (2013) A new meta-heuristic algorithm for optimization: magnetic charged system search.
Acta Mech 224(1):85–107
103. Kaveh A, Mahdavi VR (2014) Colliding bodies optimization: a
novel meta-heuristic method. Comput Struct 139:18–27
104. He Q, Wang L (2007) An effective co-evolutionary particle
swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99
105. Li L et al (2007) A heuristic particle swarm optimizer for optimization of pin connected structures. Comput Struct 85(7):340–349
106. Belegundu AD (1983) Study of mathematical programming
methods for structural optimization. Diss Abstr Int Part B Sci
Eng 43(12):1983
107. Mezura-Montes E, Coello CAC, Landa-Becerra R (2003) Engineering optimization using simple evolutionary algorithm. In:
Proceedings of the 15th IEEE international conference on tools
with artificial intelligence
108. Ray T, Saini P (2001) Engineering design optimization using a
swarm with an intelligent information sharing among individuals. Eng Optim 33(6):735–748
109. Sandgren E (1990) Nonlinear integer and discrete programming
in mechanical design optimization. J Mech Des 112(2):223–229
110. Kannan B, Kramer SN (1994) An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des
116(2):405–411
111. Zhang J, Zhou Y, Luo Q (2019) Nature-inspired approach: a
wind-driven water wave optimization algorithm. Appl Intell
49(1):233–252
112. Deb K (1997) GeneAS: A robust optimal design technique for
mechanical component design. In: Evolutionary algorithms in
engineering applications. Springer, New York, pp 497–514
113. Coello CAC (2000) Use of a self-adaptive penalty approach for
engineering optimization problems. Comput Ind 41(2):113–127
114. Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for constrained numerical
and engineering optimization. Appl Soft Comput 10(2):629–640
115. Souza E, Nikolaidis I, Gburzynski P (2010) A new aggregate local
mobility (ALM) clustering algorithm for VANETs. In: 2010 IEEE
international conference on communications. IEEE, New York
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.