1. Introduction
Optimization searches for the minimum or the maximum of a function W(X) of NDV variables subject to a set of NCON inequality/equality constraint functions Gk(X) ≤ 0 or Hk(X) = 0. Optimization problems may be iteratively solved with gradient-based algorithms (GBAs) or metaheuristic algorithms (MHAs). GBAs formulate and solve a series of approximate sub-problems until the search process converges to the optimal solution; sub-problems are built by evaluating first-order and second-order derivatives of cost functions and constraints at the solution point found in the previous iteration. MHAs do not require gradients: the local information provided by gradients in GBAs is replaced in MHAs by global information gathered from a population of candidate designs. Trial solutions of MHAs are randomly generated using a mathematical model inspired by evolutionary theory/processes, biology, physics, chemistry, mathematics, astronomy, astrophysics, herd behavior of animals, human behavior and activities, social sciences, etc. MHAs evaluate the randomly generated trial solutions and then attempt to improve the design with respect to the previous iterations.
Metaheuristic algorithms can inherently deal with highly nonlinear and non-convex optimization problems. The random search allows MHAs to explore larger portions of the search space than GBAs without being trapped in local optima. This ability, together with their straightforward software implementation, has made MHAs the standard approach to the solution of complex engineering optimization problems. Exploration and exploitation are the two typical phases carried out in metaheuristic optimization. Exploration is the leading search mechanism in the early optimization stages, where large perturbations are given to optimization variables to quickly find the best regions of the search space. Exploitation governs the search process as the optimizer converges toward the global optimum: local search is carried out in properly selected neighborhoods of the most promising solutions. Exploration and exploitation must be appropriately combined in metaheuristic optimization to fully search the design space and find the globally optimal solution with low computational effort.
MHAs can be categorized in four groups: (i) evolutionary algorithms; (ii) science-based algorithms; (iii) human-based algorithms; (iv) swarm intelligence-based algorithms. Evolutionary algorithms such as genetic algorithms (GAs) [1,2], differential evolution (DE) [3,4], evolutionary programming (EP) [5], evolution strategies (ESs) [6], and biogeography-based optimization (BBO) [7] reproduce evolution theory and evolutionary processes.
Science-based MHAs mimic laws of physics, chemistry, astronomy and astrophysics, and mathematics. Simulated annealing (SA) [8,9], charged system search (CSS) [10], magnetic charged system search (MCSS) [11], colliding bodies optimization (CBO) [12], water evaporation optimization (WEO) [13], thermal exchange optimization (TEO) [14], equilibrium optimizer (EO) [15], gases Brownian motion optimization (GBMO) [16], and Henry gas solubility optimization (HGSO) [17] are physics/chemistry-based MHAs that reproduce equilibrium conditions of mechanical, electro-magnetic, physical/chemical, or thermal systems subject to external perturbations. Ray optimization (RO) [18] and the light spectrum optimizer (LSO) [19] reproduce optics laws to define search directions containing high-quality trial solutions. Big bang–big crunch optimization (BB-BC) [20] and the gravitational search algorithm (GSA) [21] are inspired by astronomy/astrophysics phenomena such as the expansion (big bang)–contraction (big crunch) cycles leading to the formation of new star–planetary systems and the gravitational interactions between masses. Mathematics inspired the sine cosine algorithm (SCA) [22], the Runge–Kutta optimizer (RUN) [23], and the arithmetic optimization algorithm (AOA) [24].
Tabu search (TS) [25], harmony search optimization (HS) [26], teaching–learning-based optimization (TLBO) [27], JAYA [28], the group teaching optimization algorithm (GTOA) [29], the mother optimization algorithm (MOA) [30], the preschool education optimization algorithm (PEOA) [31], the learning cooking algorithm (LCA) [32], the imperialist competitive algorithm (ICA) [33], and the political optimizer (PO) [34] are representative algorithms reproducing human activities, behaviors, learning/education processes, social sciences, international strategies, and politics.
Swarm intelligence-based algorithms represent the largest category in metaheuristic optimization. These algorithms mimic the social/individual behavior of animals (insects, terrestrial animals, birds and other flying animals, and aquatic animals) in reproduction, food search, hunting, migration, etc. Particle swarm optimization (PSO) [35], reproducing interactions between individuals of bird/fish swarms, is the most cited metaheuristic algorithm according to the Scopus database; in PSO, a population of candidate designs (the particles) is generated, and their positions and velocities are updated referring to the position of the leader(s) and the best positions of individual particles in each iteration. Insect behavior-inspired algorithms include, for example, ant colony optimization (ACO) [36], artificial bee colony (ABC) [37], the firefly algorithm (FFA) [38], and the ant lion optimizer (ALO) [39].
The grey wolf optimizer (GWO) [40], coyote optimization algorithm (COA) [41], snake optimizer (SO) [42], and snow leopard optimization algorithm (SLOA) [43] simulate the behavior of terrestrial animals. The GWO, reproducing the hunting behavior of grey wolves, is the 2nd most cited algorithm in terms of citations/year after PSO. However, the GWO is often preferred to PSO in view of its simpler formulation, which does not include internal parameters except population size and a limit to the number of iterations.
The bat algorithm (BA) [44], cuckoo search (CS) [45], crow search algorithm (CSA) [46], Harris hawks optimization (HHO) [47], and starling murmuration optimizer (SMO) [48] are inspired by flying animals like bats and birds. CS (reproducing the parasitic behavior of some cuckoo species that mix their eggs with those of other birds to guarantee the survival of their chicks) and HHO (reproducing the cooperative behavior of Harris hawks in nature, specifically their surprise attacks during the chase) are the most cited MHAs of this sub-category.
Last, the dolphin echolocation algorithm (DEA) [49], whale optimization algorithm (WOA) [50], salp swarm algorithm (SSA) [51], marine predators algorithm (MPA) [52], and giant trevally optimizer (GTO) [53] reproduce social behavior, hunting strategy, swarming, and foraging of aquatic animals; WOA, SSA, and MPA are the most cited MHAs in this sub-category.
Hybrid/improved/enhanced methods were also developed in the optimization literature by adding new equations to the original formulations or by merging two or more MHAs. The goal has always been to find the best balance between exploration and exploitation so as to minimize computational cost (i.e., the number of structural analyses or function evaluations required by the optimizer), improve robustness, and limit the number of internal parameters of the metaheuristic formulation. Hybrid metaheuristic algorithms may have a parallel or a serial architecture [54]. In the former case, the component algorithms are independently run on parallel computers, while in the latter case, they are sequentially executed on the same machine.
Metaheuristic optimization algorithms are widely employed in various engineering fields such as, for example, the mechanical characterization of materials and structural identification [55], static and dynamic structural optimization [56,57], damage identification [58], vehicle routing optimization [59], 3D printing process optimization [60], cancer classification [61], and image processing [62]. However, in spite of the large diffusion of MHAs in engineering practice, some issues remain open in metaheuristic optimization: (i) no metaheuristic algorithm can always outperform all other MHAs in all optimization problems; (ii) MHAs may require a very large number of function evaluations (analyses) to complete the optimization process; (iii) sophisticated algorithmic variants and hybrid MHAs combining multiple methods often increase the computational complexity of the optimizer, also because of the presence of additional internal parameters that are difficult to tune. As a matter of fact, newly developed MHAs often added very little to the optimization field and stopped attracting potential users even after a rather short time from their release.
The main objective of this study was to develop an efficient and robust hybrid metaheuristic algorithm for engineering optimization. Generally speaking, a hybrid optimizer should combine high-performance MHAs that complement each other in terms of exploration and exploitation. Furthermore, the component algorithms should be versatile so as to successfully deal with as many different optimization problems as possible. Lastly, the component algorithms should be simple enough to keep the formulation of the hybrid optimizer simple and to limit the number of additional internal parameters governing the switch from one component algorithm to another. In view of this, the grey wolf optimizer (GWO) and JAYA algorithms were selected in the present study and combined into the novel FHGWJA hybrid algorithm (the acronym stands for Fast Hybrid Grey Wolf JAYA) because they certainly satisfy the above-mentioned requisites.
The GWO mimics the leadership hierarchy and hunting mechanism of grey wolves [40]. The leadership hierarchy is simulated by defining four types of grey wolves (α, β, δ, and ω). The hunting mechanism is simulated with different algorithmic operators that represent searching for prey, encircling prey, and attacking prey. The α wolf is the group leader. The β wolf helps the α in decision-making or other pack activities. The lowest ranking is represented by the ω wolves. The δ wolves rank between the α-β wolves and the ω wolves. Alpha, beta, and delta wolves estimate the position of the prey, and the other wolves randomly update their positions around the prey.
As previously mentioned, the GWO is the 2nd most cited metaheuristic algorithm after PSO according to the Scopus database. It has been successfully applied in many fields because of its simple formulation and computational efficiency (see, for example, the surveys on GWO applications presented in [63,64,65]). However, the GWO may suffer from lack of exploitation, risk of stagnation especially in complex problems, and limited adaptability to variations in the problem landscape during the optimization process. Enhanced GWO formulations were proposed in order to overcome the above-mentioned limitations, including (i) new operators capturing other characteristic behaviors of wolves (i.e., gaze cues learning [64] and dimension learning-based hunting [66]); (ii) Cauchy–Gaussian mutation (increasing the search range of leader wolves when they tend to the local optimal solution) along with greedy selection (to maintain population diversity) and an improved search strategy considering the average position of all individuals [67]; (iii) chaotic grouping and dynamic regrouping of individuals to increase population diversity [68]; (iv) optimization of the initial population along with a nonlinear control parameter convergence strategy and a nonlinear parameter tuning strategy [69]; (v) update of wolves' positions with spiral movements [70]; and (vi) hybrid algorithms combining GWO with other powerful MHAs (i.e., particle swarm optimization, biogeography-based optimization, differential evolution, Harris hawks optimization, and the whale optimization algorithm [71,72,73,74,75]).
JAYA [28] utilizes the most straightforward search scheme amongst all MHAs, consisting of only one equation that perturbs the optimization variables to approach the population's best solution and move away from the worst solution, thus achieving a significant convergence capability. JAYA is a very versatile algorithm with a large number of applications documented in the literature [76,77]. JAYA has an inherently hybrid nature combining basic features of evolutionary algorithms (the survival of the fittest individual) and swarm-based algorithms, where the swarm normally follows the leader in the search for the optimal solution. These characteristics make JAYA a very good candidate component for new hybrid metaheuristic algorithms.
Similar to the GWO, JAYA may present a rather weak exploitation phase. Furthermore, JAYA uses only one equation to perturb optimization variables, involving only the best and the worst individuals of the population: this may produce stagnation and also limit the exploration phase. To overcome these issues, improved JAYA formulations using sub-populations [78], or also involving the current average solution and the historical solutions (a population of candidate solutions initially generated besides the standard population and permuted in the optimization process according to a probabilistic criterion) [79], were developed. Fuzzy clustering competitive learning, experience learning, and Cauchy mutation mechanisms were also implemented [80] to effectively utilize population information, speed up the convergence rate, improve the balance between exploiting the previously visited regions and exploring new regions of the search space, and reduce the risk of being trapped in a local optimum by fine-tuning the quality of the so-far best solution. The high computational cost is another issue in JAYA optimization. In [81,82], it was attempted to reduce the number of analyses and increase convergence speed by directly rejecting designs heavier than the population individuals, but this strategy missed the global optimum in many structural optimization problems. Similarly, in [83], the population was updated not only if the new trial solutions improved the individuals currently stored in the population but also if they had the same values of the cost function or penalized cost function. In [84], the generation of trial designs also relied on two randomly selected individuals Xm and Xn, and the perturbation equation varied if m > n or m < n as well as if Wp(Xm) < Wp(Xn) or Wp(Xm) > Wp(Xn). JAYA was also hybridized with other efficient MHAs such as, for example, harmony search optimization [56], genetic algorithms [85], crow search [86], the Rao-1 optimizer [87], teaching–learning-based optimization [88], and differential evolution [89].
The main features of the novel FHGWJA algorithm developed in this study and the advancements with respect to the state-of-the-art can be summarized as follows:
- (1) GWO and JAYA were simply merged into FHGWJA without complicating the algorithmic structure resulting from the hybridization process: this is a significant step beyond the above-mentioned studies, where the performance improvements achieved with GWO and JAYA variants often were problem-sensitive and entailed complicated optimization formulations that were not easy to reproduce;
- (2) FHGWJA utilizes elitist and repair strategies to generate high-quality candidate solutions that always have a significantly high probability of improving the current best record. The movements assigned to optimization variables in the perturbation process may be adjusted according to a rank-proportional variation strategy. This approach is more effective than those adopted in the GWO/JAYA variants proposed in the literature, which do not guarantee that each new trial solution is better than the current best record. Since the elitist/repair strategies are utilized regardless of whether exploration or exploitation is being performed, FHGWJA actually optimizes the balance between these two phases;
- (3) The positions of leading wolves are dynamically updated to avoid stagnation and convergence to local optima. For that purpose, FHGWJA utilizes a mirroring strategy based on the concept of descent direction, which is much simpler than the strategies documented in the literature;
- (4) FHGWJA evaluates constraints only after having verified that the new trial design can effectively improve the current best record in the current or subsequent iterations. Should this not occur, repair strategies are activated. This greatly reduces the computational cost of the optimization;
- (5) FHGWJA does not require any internal parameters aside from population size and the limit number of iterations. The elitist strategies and the convergence criterion implemented in FHGWJA actually make it unnecessary to specify the limit number of iterations.
The proposed FHGWJA algorithm was successfully tested in seven “real world” engineering problems. The selected test cases, including up to 29 optimization variables and 1200 nonlinear constraints, regarded, in particular, the following: (i–ii) 2D path planning (minimization of trajectory lengths, respectively, with 7 or 10 obstacles); (iii) shape optimization of a concrete gravity dam with additional earthquake load (volume minimization); (iv–v) calibration of nonlinear hydrologic models (the Muskingum problem solved with 3 or 25 unknown model parameters to be identified); (vi) optimal crashworthiness design of a vehicle subject to side impact; (vii) weight minimization of a planar 200-bar truss structure subject to three independent loading conditions. The present algorithm was compared with the best performing algorithms indicated in the literature for each test problem and advanced variants of state-of-the-art MHAs.
The rest of this manuscript is structured as follows. Section 2 describes the new hybrid optimization algorithm FHGWJA developed in this study. Test problems and optimization results are presented and discussed in Section 3. Section 4 summarizes the main findings of this study and outlines directions of future research.
2. The FHGWJA Algorithm
The new hybrid metaheuristic algorithm FHGWJA developed in this study is now described in detail. The algorithm is composed of seven steps: (i) initialization; (ii) preliminary definition of trial solutions with the classical GWO; (iii) rank-based refinement of preliminary trial solutions defined with the classical GWO; (iv) evaluation/repair of trial solutions with elitist strategies and JAYA-based schemes; (v) population reordering to define the new best and worst individuals and update of the α-β-δ wolves trying to avoid stagnation; (vi) convergence check; (vii) end of optimization search and output.
FHGWJA randomly generates a population of NPOP candidate designs (i.e., wolves) as:

xj(i) = xjL + randj(i) · (xjU − xjL),  j = 1, …, NDV;  i = 1, …, NPOP  (1)

where NDV is the number of optimization variables; xjL and xjU, respectively, are the lower and upper bounds of the j-th optimization variable; randj(i) is a random number in the (0,1) interval.
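For illustration, a minimal sketch of this initialization step is given below in Python/NumPy (the authors' implementation is in MATLAB; all function and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

def initialize_population(npop, lower, upper, rng=None):
    """Random initial population of NPOP candidate designs (wolves),
    following Equation (1): x_j = x_j^L + r * (x_j^U - x_j^L)."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    r = rng.random((npop, lower.size))   # uniform random numbers in (0, 1)
    return lower + r * (upper - lower)
```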
In the minimization problem of the W(X) function (depending on NDV variables stored in the solution vector X) subjected to NCON inequality constraint functions Gk(X) ≤ 0, the penalized cost function Wp(X) is defined as follows:

Wp(X) = W(X) + p · Φ(X)  (2)

where p is a penalty coefficient. The penalty function Φ(X) is defined as follows:

Φ(X) = Σk=1…NCON [max(0, Gk(X))]²  (3)

The Wp(X) penalized cost function coincides with the W(X) cost function if the trial solution satisfies the optimization constraints. No penalty function is defined in the case of unconstrained optimization problems. Any equality constraint Hk(X) = 0 included in the optimization problem is transformed into the two inequality constraints Hk(X) ≤ 0 and −Hk(X) ≤ 0. Candidate solutions are sorted in terms of penalized cost function: the current best solution Xbest and the worst solution Xworst, respectively, correspond to the lowest and highest values of Wp(X).
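A minimal sketch of the penalized cost evaluation follows, assuming the quadratic exterior penalty form reconstructed in Equations (2) and (3); the function names and the default value of p are illustrative:

```python
import numpy as np

def penalized_cost(W, G, x, p=1.0e6):
    """Penalized cost Wp(X) of Equations (2)-(3), assumed here as
    Wp(X) = W(X) + p * sum_k max(0, G_k(X))**2.
    Wp coincides with W for feasible designs (all G_k <= 0)."""
    g = np.asarray(G(x), dtype=float)        # NCON constraint values
    phi = np.sum(np.maximum(0.0, g) ** 2)    # quadratic penalty term
    return W(x) + p * phi
```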
FHGWJA utilizes the classical GWO scheme to preliminarily generate the new trial designs. The best three individuals stored in the population are ranked as wolves α, β, and δ. The other individuals of the population are ranked as ω wolves. Wolves encircle the prey during the hunt. Such a behavior is mathematically described as follows:

D = |C · Xp(it) − X(it)|  (4)

X(it+1) = Xp(it) − A · D  (5)

In Equations (4) and (5), it denotes the current iteration, A and C are coefficient vectors, Xp is the position vector of the prey, and X is the generic grey wolf (i.e., search agent) of the population updated in the current iteration to the new solution X(it+1). The "·" notation denotes the term-by-term multiplication between vectors that leads to defining another vector.
Vectors A and C are defined as follows:

A = 2a · r1 − a  (6)

C = 2 · r2  (7)

In Equations (6) and (7), the components of the vector a decrease linearly from 2 to 0 as the optimization progresses; r1 and r2 are random vectors in the [0,1] interval.
While grey wolves actually recognize the prey's location in the hunting phase, the optimum position (prey) usually is not known a priori in the optimization search. In order to reproduce wolves' behavior in the GWO algorithm, it is assumed that the three best solutions of the population (i.e., the α, β, and δ wolves) have a better knowledge of the potential prey's location (i.e., the target optimal solution). The rest of the population must be updated using the Xα, Xβ, and Xδ positions of the best three search agents. The generic solution X of the population is updated to X(it+1) as follows:

Dα = |C1 · Xα − X|,  Dβ = |C2 · Xβ − X|,  Dδ = |C3 · Xδ − X|  (8)

X1 = Xα − A1 · Dα,  X2 = Xβ − A2 · Dβ,  X3 = Xδ − A3 · Dδ  (9)

X(it+1) = (X1 + X2 + X3)/3  (10)

In Equation (9), the A1, A2, and A3 vectors (like the C1, C2, and C3 vectors of Equation (8)) are defined anew for each candidate design of the population. Equations (8)–(10) update the positions of all search agents.
In the optimization process, the wolves α, β, and δ estimate the prey's position (i.e., the position of the optimal solution) and each candidate solution in the population updates its distance from the prey. The parameter a is reduced from 2 to 0 to emphasize exploration and exploitation, respectively. In the exploration phase, candidate solutions tend to search for another prey when |A| > 1. Conversely, in the exploitation phase, candidate solutions converge towards the prey when |A| < 1.
In the classical GWO, the new population formed by the trial solutions obtained by perturbing the individuals stored in the population in the previous iteration is sorted in terms of penalized cost function values to update the positions Xα, Xβ, and Xδ. The process ends upon reaching the limit to the number of iterations or function evaluations defined by the user. However, the classical GWO search scheme has two inherent drawbacks: (i) since the position of the prey (optimal solution) is not known a priori, the search is directly biased toward the α wolf, which has a null distance from the current best record Xbest as it coincides with the current best record itself in view of population sorting; (ii) there is no guarantee that each new trial solution will improve the current best record or at least the corresponding individual stored in the previous population.
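The classical GWO update of Equations (4)–(10) may be sketched as follows (illustrative Python/NumPy; fresh random A and C vectors are drawn for every leader and every wolf, as stated above for Equations (8) and (9)):

```python
import numpy as np

def gwo_step(population, x_alpha, x_beta, x_delta, a, rng):
    """Classical GWO position update, Equations (4)-(10).
    'a' decreases linearly from 2 to 0 over the iterations."""
    new_population = np.empty_like(population)
    for i, x in enumerate(population):
        estimates = []
        for leader in (x_alpha, x_beta, x_delta):
            A = 2.0 * a * rng.random(x.size) - a        # Equation (6)
            C = 2.0 * rng.random(x.size)                # Equation (7)
            D = np.abs(C * leader - x)                  # Equation (8)
            estimates.append(leader - A * D)            # Equation (9)
        new_population[i] = np.mean(estimates, axis=0)  # Equation (10)
    return new_population
```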
In order to solve these issues, FHGWJA refines the preliminary trial solutions Xi^tr obtained in Step 1 by means of a rank-based criterion. The population defined in the previous iteration is sorted with respect to the penalized cost function values Wp. The best solution is assigned the rank 1 while the worst solution is assigned the rank 1/NPOP: let rank(i) = 1/i be the rank of the generic individual Xi. The average rate of variation of the penalized cost function with respect to the current best record, γ(i) = ΔWp(i)/ΔS(i), is evaluated for each solution Xi. The numerator ΔWp(i) = [Wp(Xi) − Wp(Xbest)] represents the increase in the penalized cost function that occurs when the design is perturbed by ΔS(i) = ‖Xi − Xbest‖ from the current best record to the generic candidate solution Xi. The average rate of variation γavg over the whole population is hence defined. The refinement step size Δη(i) is defined for each individual Xi as follows:

Δη(i) = rank(i) · γavg/γ(i)  (11)
The preliminary trial solution Xi^tr is adjusted to the trial solution (Xi^tr)′ using the following equation:

(Xi^tr)′ = Xi^tr + Δη(i) · (Xbest − Xi)  (12)

The rationale behind Equations (11) and (12) is the following. All trial solutions generated via the GWO must be evaluated to see if they improve the design. Individuals ranked as high-quality solutions in the previous iteration are very likely to preserve their rank in the current iteration. However, the rank may decrease if the cost function changes sharply in the neighborhood of the currently perturbed solution: for example, this may occur if the perturbed solution turns infeasible and the penalty term increases. For this reason, Equation (11) tends to preserve the overall ranks of population individuals by setting small refinement step sizes Δη(i). Since the design improves by moving from Xi to Xbest, the direction (Xbest − Xi) is a descent direction with respect to the Xi individual. Hence, FHGWJA attempts to perturb the design along the (Xbest − Xi) direction to further reduce the cost function at least with respect to Xi^tr.
The trial designs generated via FHGWJA in Steps 1 and 2 are then evaluated. In the classical GWO, if the new design improves the old design, it replaces it in the new population. However, this task entails a new constraint evaluation to compute the penalized cost function value for each new trial design.
In order to reduce the number of constraint evaluations required in the optimization search, FHGWJA utilizes an elitist strategy retaining only the trial solutions that are very likely to improve the design. Hence, FHGWJA initially compares only the cost function W(Xi^tr) computed for the new trial design with the penalized cost function Wp(Xbest) computed for the current best record. If W(Xi^tr) > Wp(Xbest), the new trial design is directly rejected because it certainly cannot improve the current best record, as its cost function value is by itself larger than the penalized cost of Xbest, which also accounted for any constraint violation.
If W(Xi^tr) < Wp(Xbest), the cost function value W(Xi^tr) computed for the new trial solution Xi^tr is compared with 1.1 times the cost function value W(Xbest) computed for the current best record. If it holds that W(Xi^tr) ≤ 1.1·W(Xbest), the new trial solution Xi^tr is provisionally included in the new population. The 1.1·W(Xbest) threshold was selected upon the following rationale. Exploration is characterized by a high probability of improving the design: hence, the W(Xi^tr) > 1.1·W(Xbest) condition is not likely to occur. In exploitation, local minima should be bypassed by the optimizer to converge to the global optimum. Similar to SA, FHGWJA is allowed to provisionally accept candidate solutions slightly worse than Xbest. The threshold level of acceptance probability in state-of-the-art SA formulations such as [81,90] is 0.9 if the ratio between the cost function increment [W(Xi^tr) − W(Xbest)] and the annealing temperature T is 0.1. This means that there is a 90% probability of provisionally accepting a design worse than Xbest and making it improve in the next iterations. Since T is initially set equal to the expected optimum cost (or roughly corresponds to the cost function value W(Xbest)), it is reasonable to assume that solutions up to 10% worse than Xbest may improve Xbest itself in the next few iterations.
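The two-level elitist screening described above may be sketched as follows; the return labels and function names are illustrative, and only the cost function (no constraints) is evaluated at this stage:

```python
def screen_trial(W, x_trial, w_best, wp_best):
    """Elitist screening of a trial design before any constraint call:
    reject outright if the bare cost already exceeds the penalized cost
    of the best record; provisionally accept if within 110% of the best
    cost; otherwise trigger the JAYA-based repair strategies."""
    w_trial = W(x_trial)             # only the cost function is called
    if w_trial > wp_best:
        return "reject", w_trial     # cannot improve the best record
    if w_trial <= 1.1 * w_best:
        return "accept", w_trial     # provisionally enters the population
    return "repair", w_trial         # JAYA-based repair is triggered
```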
Since each new trial solution Xi^tr provisionally included in the new population was generated by FHGWJA using the classical GWO scheme based on the positions of wolves α, β, and δ, FHGWJA adopts a JAYA-based scheme to avoid stagnation. For that purpose, when W(Xi^tr) ≤ 1.1·W(Xbest), a new trial solution (Xi^tr)′ is defined as follows:

(Xi^tr)′ = Xi^tr + r1 · (Xα − |Xi^tr|) − r2 · (Xδ − |Xi^tr|)  (13)

where r1 and r2 are two vectors of NDV random numbers generated in the [0,1] interval. Equation (13) is relative to exploitation as it perturbs the good design Xi^tr that was provisionally included in the new population, being close to the current best record or likely better than it. The δ wolf is temporarily selected as the worst individual of the population, and FHGWJA is forced to locally search in a region of design space containing only high-quality solutions.
The new trial solution (Xi^tr)′ is compared with Xi^tr to select the solution that finally updates the population. If W((Xi^tr)′) < 1.1·W(Xbest) and W((Xi^tr)′) < W(Xi^tr), (Xi^tr)′ is finally retained in the new population, and Xi^tr is rejected. If W((Xi^tr)′) < 1.1·W(Xbest) but it occurs that W((Xi^tr)′) > W(Xi^tr), (Xi^tr)′ is rejected, and Xi^tr is finally retained in the new population. If W((Xi^tr)′) > 1.1·W(Xbest), Xi^tr is finally retained in the new population, and (Xi^tr)′ is directly rejected.
FHGWJA adopts a repair strategy if the new trial solution Xi^tr is such that W(Xi^tr) > 1.1·W(Xbest). In this case, the trial solution Xi^tr was sufficiently good to avoid immediate rejection (i.e., it holds that W(Xi^tr) < Wp(Xbest)) but it is not good enough to be provisionally included in the new population. Another JAYA-based scheme is adopted in FHGWJA to define the new trial solution (Xi^tr)′ as follows:

(Xi^tr)′ = Xi + r1 · (Xbest − |Xi|) − r2 · (Xworst − |Xi|)  (14)

where r1 and r2 are two vectors of NDV random numbers similar to those of Equation (13). Absolute values of optimization variables are taken for each component of vectors Xi^tr or Xi, respectively, in Equations (13) and (14).
Equation (14) is related to exploration and resembles the classical JAYA solution updating equation, where each individual is perturbed trying to approach the current best solution Xbest and moving away from the worst solution of the population Xworst. FHGWJA involves the whole population of NPOP search agents in the generation of the new trial design (Xi^tr)′ because the wolves α, β, and δ that drive the search in the GWO phase failed to move other individuals (i.e., Xi) to good positions of the search space located near the prey (i.e., the current best record). Since population renewal aims to improve each individual Xi, FHGWJA searches on the descent direction (Xbest − Xi) that leads to a better design than Xi and escapes from the worst design of the population Xworst, which certainly cannot improve Xi.
Similar to the process implemented for Equation (13), FHGWJA compares the cost function value W((Xi^tr)′) corresponding to the new trial design defined with Equation (14) with 1.1 times the cost function value of the current best record W(Xbest). The modified trial design (Xi^tr)′ is included in the new population if it holds that W((Xi^tr)′) ≤ 1.1·W(Xbest).
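A sketch of the two JAYA-based schemes follows, assuming the forms reconstructed in Equations (13) and (14); names are illustrative:

```python
import numpy as np

def jaya_exploit(x_tr, x_alpha, x_delta, rng):
    """Exploitation scheme of Equation (13): approach the alpha wolf
    and move away from the delta wolf (temporary worst individual)."""
    r1, r2 = rng.random(x_tr.size), rng.random(x_tr.size)
    return x_tr + r1 * (x_alpha - np.abs(x_tr)) - r2 * (x_delta - np.abs(x_tr))

def jaya_repair(x_i, x_best, x_worst, rng):
    """Repair/exploration scheme of Equation (14): classical JAYA
    update towards Xbest and away from Xworst."""
    r1, r2 = rng.random(x_i.size), rng.random(x_i.size)
    return x_i + r1 * (x_best - np.abs(x_i)) - r2 * (x_worst - np.abs(x_i))
```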
However, if the JAYA-based repair strategy fails and it still holds that W((Xi^tr)′) > 1.1·W(Xbest), FHGWJA has to deal with three solutions, Xi, Xi^tr, and (Xi^tr)′, that do not improve the current best record Xbest. Figure 1 shows the mutual positions of the cost function gradient ∇W evaluated at the current best record Xbest and the non-descent directions (Xi − Xbest), (Xi^tr − Xbest), and ((Xi^tr)′ − Xbest); Pbest is the point of the search space corresponding to the current best record Xbest, while the Pi, Pi,tr, and Pi,tr′ points, respectively, correspond to the trial solutions Xi, Xi^tr, and (Xi^tr)′. It can be seen from Figure 1 that the non-descent directions make positive scalar products with the gradient ∇W. The design may be improved with respect to Xbest by moving on the descent directions (Xbest − Xi), (Xbest − Xi^tr), and (Xbest − (Xi^tr)′) that are, respectively, opposite to the non-descent directions (Xi − Xbest), (Xi^tr − Xbest), and ((Xi^tr)′ − Xbest) and hence make negative scalar products with respect to ∇W (see Figure 1). Hence, FHGWJA uses a "mirroring" strategy to define the new trial solution (Xi^tr)″ as follows:

(Xi^tr)″ = Xbest + ηmirr,i · (Xbest − Xi) + ηmirr,i-tr · (Xbest − Xi^tr) + ηmirr,(i-tr)′ · (Xbest − (Xi^tr)′)  (15)

where ηmirr,i, ηmirr,i-tr, and ηmirr,(i-tr)′ are three random numbers in the [0,1] interval. Basically, FHGWJA takes the descent direction obtained by combining three other descent directions that may improve the design. Hence, the (Xi^tr)″ trial solution (corresponding to the point Pi,tr″(mirr) of the search space shown in Figure 1) is very likely to improve the design. The new solution (Xi^tr)″ is evaluated as outlined above for the other trial solutions generated by FHGWJA.
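A sketch of this mirroring step, assuming Equation (15) combines the three descent directions as reconstructed above (names illustrative):

```python
import numpy as np

def mirror_trial(x_best, rejected, rng):
    """'Mirroring' strategy of Equation (15): combine the descent
    directions (Xbest - X) of the three rejected solutions
    Xi, Xi^tr, and (Xi^tr)', each scaled by a random number in [0, 1]."""
    x_mirr = np.array(x_best, dtype=float)
    for x in rejected:                        # Xi, Xi^tr, (Xi^tr)'
        eta = rng.random()                    # eta_mirr factor in [0, 1]
        x_mirr = x_mirr + eta * (x_best - x)  # step on a descent direction
    return x_mirr
```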
Step 5. Resort population, update α, β, and δ wolves, Xbest, and Xworst
Once FHGWJA has updated all candidate solutions Xi, the population is resorted, and the NPOP individuals are ranked with respect to their penalty function values. Should Wp(Xi^tr) or Wp((Xi^tr)′) or Wp((Xi^tr)″) be greater than Wp(Xi), all new trial solutions generated/refined for Xi are rejected and the old design Xi is retained in the population. However, the elitist strategies based on the 1.1·W(Xbest) threshold, the JAYA-based schemes of Equations (13) and (14), and the "mirroring" strategy of Equation (15) greatly increase the probability of updating the whole population with better designs than those stored in the previous population.
The best three individuals of the new population are reset as wolves α, β, and δ with the corresponding design vectors Xα, Xβ, and Xδ. The best and worst solutions are set as Xbest and Xworst, respectively.
In order to avoid stagnation, the ranking of wolves α, β, and δ is checked with another elitist criterion if FHGWJA did not update their positions in the current iteration. The criterion is again based on the concept of descent direction. The (Xα − Xβ) and (Xα − Xδ) directions obviously are descent directions with respect to the positions Xβ and Xδ of wolves β and δ, and the design improves by moving towards the α wolf. Hence, FHGWJA perturbs Xβ and Xδ along the descent directions (Xα − Xβ) and (Xα − Xδ). The positions of wolves β and δ are "mirrored" with respect to Xα as follows:

Xβ,mirr = Xα + ηmirr,β · (Xα − Xβ);  Xδ,mirr = Xα + ηmirr,δ · (Xα − Xδ)  (16)

where ηmirr,β and ηmirr,δ are two random numbers in the interval (0,1) that limit step sizes to reduce the probability of generating infeasible positions. The best three positions amongst Xα, Xβ, Xδ, Xβ,mirr, and Xδ,mirr are taken as wolves α, β, and δ for the next iteration, while the two worst positions are compared with the rest of the population and may replace Xworst and the 2nd worst design of the old population. The latter operation also covers the case that Xβ,mirr and Xδ,mirr did not improve any of the wolves α, β, and δ.
The mirror strategy used in FHGWJA is illustrated in Figure 2. In particular, the figure shows the following: (i) the original positions of wolves α, β, and δ, respectively, correspond to points Popt ≡ Pα, Pβ, and Pδ of the search space; (ii) the positions of the "mirror" wolves βmirr and δmirr are defined with Equation (16) and, respectively, correspond to points Pβ,mirr and Pδ,mirr of the search space; (iii) the cost function gradient vector ∇W is evaluated at Xα. It can be seen that the "mirror" wolves βmirr and δmirr defined by Equation (16) lie on descent directions and may even improve Xα, the position of wolf α. In fact, it holds that ∇W × (Xβ,mirr − Xα) < 0 and ∇W × (Xδ,mirr − Xα) < 0, where "×" denotes the scalar product between two vectors. Interestingly, since (Xα − Xδ) is in principle a steeper descent direction than (Xα − Xβ) as Wp(Xδ) > Wp(Xβ) > Wp(Xα), wolf δ may have a higher probability than wolf β to replace wolf α even though it originally occupied a worse position than wolf β in the search space. This elitist approach forces FHGWJA to carry out a new exploration of the search space rather than trying to exploit solutions that have not improved the design in the last iteration. Furthermore, by eliminating the two worst designs of the population, FHGWJA increases the average quality of candidate solutions and achieves a higher probability of generating high-quality trial solutions in the next iteration.
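A sketch of the leader-mirroring step of Equation (16) (illustrative names):

```python
import numpy as np

def mirror_leaders(x_alpha, x_beta, x_delta, rng):
    """Mirror wolves beta and delta about alpha, Equation (16); the
    random factors limit step sizes to reduce the probability of
    generating infeasible positions."""
    eta_beta, eta_delta = rng.random(), rng.random()
    x_beta_mirr = x_alpha + eta_beta * (x_alpha - x_beta)
    x_delta_mirr = x_alpha + eta_delta * (x_alpha - x_delta)
    return x_beta_mirr, x_delta_mirr
```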
Standard deviations on optimization variables and penalty function values of the candidate solutions included in the updated population should decrease as the optimization search approaches the global optimum. Hence, FHGWJA normalizes the standard deviations with respect to the average design Xavg and the average value of the penalty function Wp,avg. The termination criterion is as follows:

‖StDev(X)‖/‖Xavg‖ ≤ εconv  and  StDev(Wp)/Wp,avg ≤ εconv  (17)

where the convergence limit εconv is equal to 10⁻⁷. Since convergence occurs in the last stage of the optimization process when exploitation dominates, Equation (17) requires all search agents to be actually located in the best region of the search space hosting the global optimum and contributing to exploitation. Population diversity must decrease to aggregate search agents in the neighborhood of Xbest. Hence, Equation (17) normalizes the standard deviation of the search agents' positions with respect to the average solution to quantify the level of population diversity. The same rationale is adopted for the penalty function values by normalizing their standard deviation with respect to the average penalty function value: hence, competitive solutions correspond to the optimum solution only when they are effectively close to it, that is, when the search process is near its end. Penalty function values are considered in Equation (17) to account for the effect of some solutions that may turn locally infeasible if the set of active constraints changes as the design is perturbed in the neighborhood of the global optimum. Penalty function values are equal to cost function values for feasible solutions or in the case of unconstrained optimization problems.
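A sketch of the termination test, assuming Equation (17) checks both normalized standard deviations against the 10⁻⁷ limit as reconstructed above:

```python
import numpy as np

def has_converged(population, wp_values, eps=1.0e-7):
    """Termination test of Equation (17): normalized standard
    deviations of both the agents' positions and their penalty
    function values must drop below the convergence limit."""
    x_avg = population.mean(axis=0)
    pos_spread = np.linalg.norm(population.std(axis=0)) / np.linalg.norm(x_avg)
    wp = np.asarray(wp_values, dtype=float)
    cost_spread = wp.std() / abs(wp.mean())
    return max(pos_spread, cost_spread) <= eps
```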
Steps 2 through 6 are repeated until FHGWJA converges to the global optimum.
FHGWJA terminates the optimization process and writes the output data (i.e., optimum design and optimized cost function value) in the results file.
Figure 3 shows the flow chart of the novel FHGWJA algorithm developed in this study. The present algorithm actually is an advanced grey wolf optimizer, which updates the population taking care that the α, β, and δ wolves effectively lead to improving the current best record in each iteration. The JAYA schemes and the elitist strategies implemented in FHGWJA enhance exploration and exploitation, forcing the present algorithm to increase diversity and select high-quality trial designs without performing too many function evaluations. The definition of all descent directions utilized by FHGWJA in the optimization search does not entail any new constraint evaluation, and all decisions are made in FHGWJA by comparing only cost function values. Interestingly, FHGWJA does not need any new internal parameters with respect to the classical GWO and JAYA formulations, which require users to specify the population size NPOP and the limit number of iterations Nitermax. This feature of FHGWJA is not very common, as simple hybrid metaheuristic algorithms may adopt a set of new heuristic parameters to switch from one component optimizer to the other. However, FHGWJA implements a master–slave algorithmic architecture where the slave algorithm, JAYA, is utilized only to refine or correct the trial designs generated by the master algorithm, the GWO.
The computational cost of most GWO/JAYA implementations documented in the literature was NPOP × Nitermax function evaluations (also indicated as function calls or analyses for general problems, or structural analyses for structural optimization problems) including constraint evaluations. FHGWJA's ability to generate high-quality trial solutions throughout the optimization process significantly reduces the computational cost of the optimization. As far as population size is concerned, it is well known that increasing NPOP in metaheuristic optimization may enhance exploration capabilities but also results in a very large number of function evaluations. Indeed, it is not necessary to work with a large population if the search engine can always generate high-quality designs that improve the current best record or the currently perturbed search agents, as is accomplished with FHGWJA. It should be noted that grey wolves hunt in nature in groups comprising, at most, 10–20 individuals (the typical family pack is formed by 5–11 animals, but in some cases the pack can be formed by 2–3 families). In view of this, the FHGWJA optimizations carried out in this study utilized a population of 10 individuals. Sensitivity analysis confirmed the validity of this setting.
3. Test Problems and Optimization Results
The FHGWJA algorithm proposed in this study was tested in seven engineering problems including up to 29 optimization variables and 1200 nonlinear constraints. Test cases regarded the following: (i–ii) 2D path planning (minimization of trajectory lengths with 7 or 10 obstacles); (iii) shape optimization of a concrete gravity dam with additional earthquake load (volume minimization); (iv–v) calibration of nonlinear hydrologic models (the Muskingum problem solved with 3 or 25 unknown model parameters to be identified); (vi) optimal crashworthiness design of a vehicle subject to side impact; (vii) weight minimization of a planar 200-bar truss structure under three independent loading conditions.
FHGWJA was implemented in MATLAB as a standalone optimization code. Optimization runs were carried out on a standard PC equipped with a 3.1 GHz AMD processor and 16 GB of RAM. The Mersenne–Twister (MT19937) MATLAB default algorithm was utilized to generate random numbers uniformly distributed between 0 and 1; this algorithm was selected in view of its ability to generate high-quality numbers with a (2^19937 − 1) long period. In order to draw statistically significant conclusions on the computational efficiency of FHGWJA, 20 independent optimization runs were executed for each test problem starting from different initial populations. The population size and limit number of iterations of FHGWJA were, respectively, set equal to 10 and 5000. Initial populations included some designs with up to 1000% constraint violation to check whether FHGWJA can quickly approach the best region of the search space containing the optimum solution. Remarkably, FHGWJA always completed all optimization runs within a much lower number of analyses than the theoretical computational budget of 10 × 5000 = 50,000 analyses of the classical GWO and JAYA. The actual computational cost of FHGWJA was almost entirely due (about 90%) to the number of analyses performed in the optimization search. For example, the optimization runs performed with FHGWJA for the largest test problem solved in this study (weight minimization of a planar 200-bar truss structure including 29 sizing variables) were completed on average within about 25 min of wall clock time.
3.1. 2D Path Planning
Test cases (i–ii) regarded a classical robotics problem: to determine the optimal 2D path connecting two points A and B with a piecewise cubic spline trajectory, defined by a certain number of control points (NCP), that passes through points A, B, and a series of base points (NBP) while avoiding a set of obstacles (NOB). The problem objective is to minimize the total length of the trajectory. In an obstacle-free environment, the shortest path between A and B obviously is a straight line. Cubic splines allow for building a smooth and continuous curved path avoiding obstacles. Path planning complexity increases with the number of obstacles and the number of base points; coordinates of base points are selected as optimization variables.
The 2D path planning problem searching for the minimum length of a 2D trajectory, passing through NBP base points so as to avoid NOB obstacles, and defined by NCP control points connected by (NCP − 1) spline segments, can be formulated as follows:

min L = Σr=1…NCP−1 √[(xc,r+1 − xc,r)² + (yc,r+1 − yc,r)²] + PENALTY  (18)

subject to  G(r,p) = radob(p) − √[(xc,r − xo,p)² + (yc,r − yo,p)²] ≤ 0,  r = 1,…,NCP;  p = 1,…,NOB  (19)

In Equations (18) and (19), (xb,i, yb,i) are the coordinates of the generic i-th base point of the trajectory, (xc,r, yc,r) are the coordinates of the generic r-th control point of the trajectory, (xo,p, yo,p) are the coordinates of the center of the p-th obstacle, and radob(p) is the radius of the circular area limited by the p-th obstacle; the same distance constraint is imposed at the NBP base points. The coordinates of base points are taken as optimization variables.
In general, the optimization problem stated above includes 2·NBP design variables and (NCP + NBP) × NOB nonlinear constraints. Coordinates of base points must also satisfy the relationships xb,i ≤ xb,i+1 and yb,i ≤ yb,i+1 (i = 1,…,NBP−1) to preserve the local convexity of the trajectory. The "PENALTY" term in Equation (18) aggregates the squared violations of the constraint inequalities occurring for G(X) > 0, multiplied by 1000 to amplify the effect of collision with obstacles; it is equal to 0 if the optimizer finds a feasible solution where the designed trajectory does not intersect the NOB obstacles. More details on the formulation of path planning problems in robotics can be found in Refs. [91,92,93].
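A sketch of the cost evaluation of Equations (18) and (19) is given below; a parametric cubic spline through A, the base points, and B is assumed (the paper's exact spline construction may differ), and the start/end points A(1,1) and B(29,29) are those of the test case described next:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def path_cost(base_points, obstacles, radii, ncp=100):
    """Trajectory length of Equation (18) plus 1000x the aggregated
    squared violations of the distance constraints of Equation (19)."""
    pts = np.vstack([[1.0, 1.0], base_points, [29.0, 29.0]])
    t = np.linspace(0.0, 1.0, len(pts))
    spline = CubicSpline(t, pts)                  # parametric 2D cubic spline
    ctrl = spline(np.linspace(0.0, 1.0, ncp))     # NCP control points
    length = np.sum(np.linalg.norm(np.diff(ctrl, axis=0), axis=1))
    checked = np.vstack([ctrl, base_points])      # (NCP + NBP) points checked
    penalty = 0.0
    for center, rad in zip(obstacles, radii):
        dist = np.linalg.norm(checked - np.asarray(center, float), axis=1)
        g = rad - dist                            # G <= 0 when point lies outside
        penalty += np.sum(np.maximum(0.0, g) ** 2)
    return length + 1000.0 * penalty
```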
Figure 4 shows the design space of the two path planning problems (all coordinates are expressed in mm) solved in this study considering the presence of 7 or 10 obstacles and the same number of base points. In the former case, there are the start/end points A(1,1) and B(29,29) mm to be connected and the seven obstacles, O1(6,6), O2(7,24), O3(9,15), O4(16,23), O5(20,11), O6(23,6), and O7(24,23) mm, limiting circular areas of radii 3, 1, 3, 2, 2, 1, and 2 mm, respectively. The optimization problem hence includes 14 variables corresponding to the coordinates of base points (i.e., 2·NBP = 2 × 7) that can vary between 1 and 29 mm, NOB = 7 obstacles, NCP = 100 control points, and 749 nonlinear constraints (i.e., (NCP + NBP) × NOB = (100 + 7) × 7 = 749).
In the latter case, there are three additional obstacles, O8(12,10), O9(14,16), and O10(19.5,17.5) mm, all limiting circular areas of radius 2 mm. The optimization problem hence includes 20 variables (i.e., 2·NBP = 2 × 10), NOB = 10 obstacles, NCP = 100 control points, and 1100 nonlinear constraints (i.e., (NCP + NBP) × NOB = (100 + 10) × 10 = 1100).
Besides the 20 independent optimization runs, 1 additional optimization run was carried out by setting the initial positions of the base points equal to the coordinates of the obstacle centers shifted by 0.001 mm in order to avoid constraint singularity. This is the worst-case scenario in which the trajectory intersects all obstacles. The corresponding path lengths were 231.2 mm for 7 base points and 238.5 mm for 10 base points, while the initial constraint violation rose to 159,816% and 204,741%, respectively.
For both problem variants, the FHGWJA hybrid optimizer developed here was compared with the best optimizers quoted in the literature: namely, hybrid harmony search (hybrid HS), hybrid big bang–big crunch (hybrid BBBC), and hybrid fast simulated annealing (HFSA) [81]. Such a comparison should be considered highly indicative because the hybrid optimizers developed in [81] combined efficient metaheuristic search engines with approximate line search strategies and gradient information, while FHGWJA repeatedly utilizes elitist strategies based on the concept of descent direction. The classical grey wolf optimizer (GWO) and the classical and improved JAYA formulations [81] also were compared with FHGWJA to prove the advantages of the hybrid optimizer. Finally, modified harmony search optimization with adaptive parameter updating (mAHS) (derived from the AHS algorithm of Ref. [94]), modified big bang–big crunch with upper bound strategy (mBBBC–UBS) (derived from the BBBC–UBS algorithm of Ref. [95]), and modified sinusoidal differential evolution (MsinDE) (derived from the sinDE algorithm of Ref. [96]) were enhanced with the W(X) ≤ 1.1·W(Xbest) elitist strategy of FHGWJA to gather specific information on the efficiency of this strategy.
The population size selected in Ref. [81] for the hybrid HS, hybrid BBBC, and improved JAYA algorithms was 20, while it was set equal to 10 in this study for the new optimization runs carried out for mAHS, mBBBC–UBS, and MsinDE to have a homogeneous basis of comparison with FHGWJA and its component algorithms, the standard GWO and standard JAYA. All other internal parameters required by FHGWJA's competitors were left the same as those indicated in the original references [81,94,95,96]. The same approach was adopted for the two problem variants with 14 and 20 optimization variables.
Table 1 presents the optimization results for the 2D path planning problem variant with 14 design variables (seven base points and seven obstacles). All data listed in the table refer to feasible solutions; the number of analyses refers to the number of times that nonlinear constraints were evaluated. FHGWJA was the best optimizer overall and designed the shortest path of 40.811 mm, outperforming its component algorithms, the standard GWO and standard JAYA, which designed longer trajectories of 41.104 mm and 41.050 mm, respectively. The optimal positions of base points found with FHGWJA were (3.6617;1.3944), (6.5927;2.7934), (16.562;11.705), (21.231;17.081), (24.186;19.826), (27.855;26.254), and (28.626;28.196) mm. The other algorithms ranked as follows: hybrid HS [81], MsinDE (derived from [96]), hybrid BBBC [81], HFSA [81], mAHS (derived from [94]), mBBBC–UBS (derived from [95]), and improved JAYA [81].
The rate of success of FHGWJA was 100% because all independent optimization runs designed shorter trajectories than the best solutions obtained with its competitors. In particular, the worst solution of FHGWJA achieved a path length of 40.890 mm, while the shortest paths designed via all other optimizers ranged between 40.983 and 41.104 mm. Such a behavior occurred because FHGWJA inherently balances the exploration and exploitation phases, enhancing the exploration capability of the α, β, and δ wolves without risk of stagnation and forcing the optimizer to search for very high-quality solutions regardless of whether it performs exploration or exploitation. Interestingly, the W(X) ≤ 1.1·W(Xbest) elitist strategy implemented in FHGWJA was effective also for mAHS, mBBBC–UBS, and MsinDE, which improved their optimized solutions with respect to the original AHS [94], BBBC–UBS [95], and sinDE [96] algorithms utilized in [81]: in particular, MsinDE enhanced its overall rank up to the 3rd position.
FHGWJA was the fastest optimizer overall in the 2D path planning problem with 14 variables, requiring on average only 4581 analyses vs. 50,000 and 17,204 analyses, respectively, required for standard GWO and JAYA to converge to their optimized solutions. Furthermore, the present algorithm was about 42.3% faster than the improved JAYA. The computational speed for the fastest optimization run of FHGWJA was practically the same as those of the HS/BBBC/SA hybrid algorithms: only 3837 analyses vs. 4150, 3611, and 4127 analyses, respectively. However, the present algorithm generated a feasible intermediate design with a path length of 40.961 mm (hence, shorter than optimized path lengths of hybrid HS/BBBC/SA ranging between 40.983 and 40.994 mm) within only 2875 analyses.
FHGWJA ranked 6th in terms of standard deviation on optimized path length after mBBBC–UBS, standard GWO, improved JAYA, mAHS, and MsinDE, which, however, completed their optimizations within more analyses than the present algorithm and converged to worse designs. The present algorithm was also robust enough with respect to computational cost showing only 24.6% dispersion on the number of analyses required in the 20 independent optimization runs.
Figure 5 confirms the superiority of FHGWJA over its competitors in terms of convergence behavior. The figure compares the optimization histories of the best runs for the algorithms listed in Table 1; the average FHGWJA convergence curve is also shown in the figure. The plot is limited to the first 4500 analyses of the optimization history and to the 40.5–48.5 mm trajectory length interval for the sake of clarity. It can be seen from the figure that the best and average optimization runs' convergence curves of FHGWJA always lie below those of the other algorithms. FHGWJA started its best optimization run from the very large cost of 225.35 mm (i.e., about 5.5 times the globally optimum length of 40.811 mm found with FHGWJA), while the initial cost for all other optimizers ranged between 59.62 and 184.214 mm. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors. The hybrid simulated annealing algorithm (HFSA) of Ref. [81] and the modified big bang–big crunch algorithm (mBBBC–UBS) implementing the W(X) ≤ 1.1·W(Xbest) elitist strategy of FHGWJA were the only optimizers able to compete in convergence speed with FHGWJA, respectively, for the first 150 and 190 analyses.
Table 2 presents the optimization results for the 2D path planning problem variant solved with 20 design variables (10 base points and 10 obstacles). FHGWJA again was the best optimizer overall. In fact, it designed the very short trajectory of length 40.898 mm while the best solutions found with the other algorithms correspond to longer trajectories than 40.898 mm, ranging between 40.997 mm (for hybrid HS) and 41.141 mm (for improved JAYA). Furthermore, the worst solution of FHGWJA corresponds to the path length of 40.924 mm, again shorter than the best designs obtained with all its competitors. The optimal positions of base points found with FHGWJA were (3.9165;1.7729), (7.9192;3.6484), (20.853;16.019), (25.143;21.071), (27.178;24.426), (27.185;24.444), (27.188;24.449), (27.967;26.268), (27.998;26.345), and (28.981;28.578) mm.
FHGWJA was also significantly faster than the other algorithms, completing optimization runs on average within only 5153 analyses while computational cost of its competitors ranged between 7408 (for hybrid HS) and 50,000 (for standard GWO) analyses. Standard JAYA was slightly more efficient than improved JAYA while standard GWO was the worst optimizer overall. The present algorithm was the most robust optimizer, achieving the lowest standard deviation on path length and just 4.8% dispersion on the number of analyses.
The W(X) ≤ 1.1·W(Xbest) elitist strategy implemented in FHGWJA again significantly improved the performance of the mAHS, mBBBC–UBS, and MsinDE algorithms with respect to the original AHS, BBBC–UBS, and sinDE formulations documented in [94,95,96]: in particular, computational cost decreased on average by 36%. Interestingly, the high level of nonlinearity of the 2D path planning problem variants including 14 and 20 optimization variables was better handled with FHGWJA than with the hybrid HS/BBBC/SA algorithms of Ref. [81]. This occurred in spite of the fact that hybrid HS/BBBC/SA utilized gradient information and multiple line searches to minimize the number of analyses. However, this limited the approximation quality and hence the ability of the optimizers to generate trial solutions effectively located in regions of the search space where the current best record is likely to improve.
Figure 6 compares the optimization histories of the best runs for FHGWJA and its competitors in the 2D path planning problem solved with 20 design variables; the average FHGWJA convergence curve is also shown in the figure. The plot is limited to the first 9000 analyses of the optimization history for the sake of clarity. Similar to the 14-variable problem variant, the best and average optimization runs' convergence curves of FHGWJA practically always lie below those of the other algorithms. All algorithms started their best optimization runs from the very large cost of 238.54 mm (i.e., about 5.83 times the optimum trajectory length of 40.898 mm found with FHGWJA). However, the present algorithm recovered the gap in cost function with respect to the target optimum in a much faster way than the other algorithms. The hybrid simulated annealing algorithm (HFSA) of Ref. [81] was the only optimizer to compete in convergence speed with FHGWJA, but only after 2000 analyses, that is, when the distance from the target optimum was less than 5%.
Figure 7a compares the optimized trajectories obtained with FHGWJA and its competitors in the 2D path planning problem variant including 14 design variables. For the sake of clarity, the figure shows only the best six solutions quoted in Table 1. It can be seen that optimized paths involve only obstacles 1 and 7. Furthermore, the trajectory branch limited by the start point A and obstacle 1 is practically the same for all optimizers. Optimal trajectories diverge between obstacles 1 and 7: the slope is larger for hybrid HS, HFSA, and standard JAYA than for the present algorithm, hybrid BBBC, and MsinDE. Hybrid HS, hybrid BBBC, HFSA, and MsinDE found very similar path lengths between 40.983 and 40.994 mm; these trajectories are practically symmetric about the line connecting the point near obstacle 1, where trajectories diverge, and the end point B. Since the trajectory designed with standard JAYA between obstacles 1 and 7 is more curved than those designed with hybrid HS and HFSA, the length of standard JAYA's optimal path increased to 41.050 mm.
The detailed analysis of the optimized trajectories revealed that, in order to minimize length, the trajectory should be rectilinear between obstacles 1 and 7 as well as in the final branch from obstacle 7 to end point B. Interestingly, FHGWJA was the only optimizer to satisfy this requirement. In fact, the optimal path found with FHGWJA between obstacles 1 and 7 may be fitted in the XY plane via a linear regression with R² = 0.999 vs. only R² = 0.992 and R² = 0.994, respectively, for hybrid HS (HFSA's trajectory is very close to the one designed with hybrid HS) and hybrid BBBC (MsinDE's trajectory is very close to the one designed with hybrid BBBC). Furthermore, the final branch of the optimal path of FHGWJA may be linearly fitted with R² = 0.998 vs. only R² = 0.991 and R² = 0.994 for hybrid HS and hybrid BBBC, respectively. Besides the higher curvature of the trajectories designed with the other algorithms between obstacles 1 and 7, the final branches of the optimal trajectories designed via hybrid HS, HFSA, hybrid BBBC, and MsinDE show changes in concavity that make the trajectory deviate from a straight line.
Figure 7b compares the optimized trajectories obtained with FHGWJA and its competitors in the 2D path planning problem variant including 20 optimization variables. The figure shows the trajectories of the best and worst optimization runs of FHGWJA as well as the other five best solutions quoted in Table 2. The new obstacles 8 and 10 introduced in the problem became active for all algorithms, while the optimized trajectory approached the new obstacle 9 only for mAHS. This explains why the optimized path lengths of hybrid HS, HFSA, MsinDE, and hybrid BBBC varied at most by 0.037 mm (i.e., between 40.997 mm and 41.034 mm) while path length increased by 0.041 mm just passing from hybrid BBBC to mAHS (i.e., from 41.034 mm to 41.075 mm). Again, FHGWJA designed the straightest trajectory in the path branches between obstacles 1 and 7 and between obstacle 7 and end point B. Such a behavior occurred for both the best and worst FHGWJA optimization runs. Conversely, trajectories designed with the hybrid HS/BBBC/SA and MsinDE algorithms exhibited significant changes in slope and curvature, especially in the final branch of the path or between obstacles 10 and 7. For example, the final branch of FHGWJA's optimal paths may be linearly fitted with R² = 0.997 vs. only R² = 0.965, R² = 0.940, and R² = 0.978 for hybrid HS, hybrid BBBC, and HFSA, respectively.
In summary, the FHGWJA algorithm developed in this study is an efficient and robust tool for solving highly nonlinear 2D path planning problems in robotics. The proposed algorithm always designed shorter paths and required fewer analyses to complete the optimization runs than its competitors.
3.2. Shape Optimization of a Concrete Gravity Dam Under Multiple Earthquake Loads
The test case (iii) solved in this study was the shape optimization of a concrete gravity dam under multiple earthquake loads. The real structure is located in the Shafaroud region of Northern Iran: the dam height (Hdam) is 150 m, the normal-state water elevation upstream of the dam (hwater) is 145 m, and the sediment height (hse) is 7 m. The 3D schematic of the dam is shown in Figure 8a. Here, the concrete volume per unit width of the dam was minimized by optimizing the dam cross-section shown in Figure 8b with the nine shape variables (x1, x2, x3, x4, x5, x6, x7, x8, x9) indicated in the figure.
The optimization problem was stated in Equation (21): the concrete volume per unit width is minimized subject to seven inequality constraints G1–G7 and one equality constraint H8, where SFS is the safety factor against dam sliding; SFO is the safety factor against dam overturning; and σU, σD, and σmax are the vertical fatigue specific loads upstream and downstream and the corresponding fatigue limit, respectively. The side constraints on the shape variables are as follows: 1 ≤ x1, x3 ≤ 20 m; 1 ≤ x2 ≤ 50 m; 5 ≤ x4, x5, x6, x7 ≤ 100 m; 50 ≤ x8, x9 ≤ 150 m.
In Equation (21), the SFS factor (involved in the inequality constraint G1) is defined as (f·ΣFv + σ·b)/ΣFH, where f = 0.7 is the static friction coefficient, σ = 3.5 MPa is the allowable shear stress on the cutting surface, and b = (x1 + x3 + x5 + x7 + x9) is the dam base length. The dam is subject to the vertical forces Fv: the concrete weight W, the vertical component of the water pressure Fhv, and the sediment pressure PSV, directed downward; and the uplift force Fuplift generated at the dam basement and the earthquake force FE, directed upward. The dam is also subject to horizontal forces FH in the positive X-direction: an additional earthquake force of magnitude 2FE, the horizontal component of the water pressure Fhh, and the sediment pressure PSO.
The SFO factor (involved in the inequality constraint G2) is defined as ΣMR/ΣMo, where MR and Mo, respectively, are the torques of the resisting forces (W, Fhv, PSV) and of the driving forces (Fuplift, 2FE, Fhh, PSO) acting on the dam.
The vertical fatigue loads σU and σD (involved in inequality constraints G5 and G6) are defined as (ΣFv/b − 6ΣMo/b²) and (ΣFv/b + 6ΣMo/b²), respectively.
The last inequality constraint G7 in Equation (21) is relative to geometric proportions in the vertical direction, while the equality constraint H8 requires the dam height to be 150 m.
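To make the constraint evaluation concrete, the sketch below computes SFS, SFO, σU, and σD from the force and moment resultants defined above. It is a minimal illustration: the function name and the input resultants (sum_Fv, sum_FH, sum_MR, sum_Mo) are hypothetical placeholders for quantities obtained from the dam load analysis, and consistent units are assumed.

    def dam_safety_quantities(sum_Fv, sum_FH, sum_MR, sum_Mo, b, f=0.7, sigma=3500.0):
        """Safety factors and vertical fatigue loads of the dam section.
        sum_Fv, sum_FH: resultants of vertical/horizontal forces (kN per unit width);
        sum_MR, sum_Mo: resisting/driving torques (kN*m); b: base length (m);
        f: static friction coefficient; sigma: allowable shear stress (kPa, i.e., 3.5 MPa)."""
        SFS = (f * sum_Fv + sigma * b) / sum_FH       # safety factor against sliding (G1)
        SFO = sum_MR / sum_Mo                         # safety factor against overturning (G2)
        sigma_U = sum_Fv / b - 6.0 * sum_Mo / b**2    # upstream vertical fatigue load (G5)
        sigma_D = sum_Fv / b + 6.0 * sum_Mo / b**2    # downstream vertical fatigue load (G6)
        return SFS, SFO, sigma_U, sigma_D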
The concrete volume per unit width of 10,502.1 m³ of the real dam was significantly reduced to 7448.774 m³ via the multi-level cross entropy optimizer (MCEO) [97] and the flying squirrel optimizer (FSO) [98]. However, Refs. [97,98] neither considered the equality constraint H8 nor the presence of the horizontal earthquake force 2FE. These issues were instead included in [81], which presented the results of hybrid HS, hybrid BBBC, hybrid fast simulated annealing (HFSA), adaptive harmony search (AHS) [94], big bang–big crunch with upper bound strategy (BBBC–UBS) [95], and sinusoidal differential evolution [96]. For this reason, the FHGWJA algorithm developed in this study was compared with hybrid HS/BBBC/SA as well as with modified adaptive harmony search (mAHS), modified big bang–big crunch with upper bound strategy (mBBBC–UBS), and modified sinusoidal differential evolution (MsinDE). Similar to the 2D path planning problem, mAHS, mBBBC–UBS, and MsinDE included the W(x_new) ≤ 1.1·W(x_old) elitist strategy implemented in FHGWJA to enhance the performance of the original algorithms.
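The elitist strategy itself reduces to a one-line acceptance test. The following is a minimal sketch, assuming W denotes the cost function and using hypothetical variable names; the strict rule of the improved JAYA of Ref. [81] is recovered with tol = 1.0.

    def accept_trial(w_trial, w_current, tol=1.1):
        """Relaxed elitist acceptance rule: the trial design enters the new
        population if its cost W is at most 10% higher than the cost of the
        design it competes with (tol = 1.1)."""
        return w_trial <= tol * w_current

    # Example: a trial with W = 43.0 is kept against a current design with W = 40.0
    print(accept_trial(43.0, 40.0))   # True, since 43.0 <= 1.1 * 40.0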
Similar to the 2D path planning problem, the population size was set equal to 20 in Ref. [81] for hybrid HS, hybrid BBBC, and improved JAYA, while mAHS, mBBBC–UBS, and MsinDE were run with NPOP = 10 in this study.
Table 3 compares the optimization results of FHGWJA and its competitors for the dam problem. The number of structural analyses corresponds to the number of evaluations of constraints G1–G7 and H8. The average number of structural analyses and the corresponding standard deviation, along with the number of structural analyses required by the fastest optimization run, are also reported in the table when available.
FHGWJA was the best optimizer overall also in this test problem. In fact, it converged to the lowest concrete volume per unit width of 7079.558 m³, which is 0.411% smaller than the 7108.772 m³ of the best design quoted in the literature so far, obtained with hybrid BBBC [81]. All algorithms performed significantly better than MCEO [97] and FSO [98]: in fact, optimized values of unit volume ranged from 7079.558 m³ to 7121.134 m³ vs. the 7448.774 m³ of Refs. [97,98]. The solutions quoted in Table 3 fully satisfied design constraints G1 through G7 and practically satisfied the geometric equality constraint H8 on dam height (violation never exceeded 6.067 × 10⁻³%). The algorithms ranked as follows in terms of optimized concrete volume: FHGWJA, hybrid BBBC, mBBBC–UBS, mAHS, hybrid HS, MsinDE, HFSA, improved JAYA, standard GWO, and standard JAYA.
FHGWJA was also the fastest and most robust optimizer. In fact, it converged to the global optimum of 7079.558 m³ in all independent optimization runs with zero standard deviation. Conversely, none of its competitors achieved zero standard deviation on the optimized concrete volume. The present algorithm was on average between 4.7 and 6.5 times faster than its competitors; the fastest optimization run converging to the global optimum was completed by FHGWJA within only 2207 structural analyses vs. the 16,924 structural analyses required by mBBBC–UBS, which, however, obtained a higher value of the dam's concrete volume.
The hybrid formulation implemented in FHGWJA was significantly more efficient than its component algorithms GWO and JAYA, which in their standard formulations converged to best solutions of 7117.441 m³ and 7121.134 m³, respectively. The improved JAYA algorithm of Ref. [81], implementing a simplified elitist strategy where each new trial solution is included in the new population only if it improves its counterpart individual (i.e., if it holds W(x_new) ≤ W(x_old)), also could not reduce the unit volume of the dam below 7117.44 m³. The classical GWO was on average about 13 times more computationally expensive than the present algorithm, while classical JAYA and improved JAYA were on average about 6.5 and 5.9 times slower than FHGWJA, respectively.
The W(x_new) ≤ 1.1·W(x_old) elitist strategy implemented in FHGWJA again significantly improved the performance of the mAHS, mBBBC–UBS, and MsinDE algorithms with respect to the original AHS, BBBC–UBS, and sinDE formulations documented in [94,95,96]: in particular, the best solutions of mAHS, mBBBC–UBS, and MsinDE became very close to those of hybrid HS/BBBC/SA, while computational cost decreased on average by 16.5%, 6.1%, and 35.2%, respectively.
Figure 9 clearly shows the superior convergence behavior of FHGWJA with respect to its competitors. The figure compares the optimization histories of the best runs for the algorithms listed in Table 3; the average FHGWJA convergence curve is also shown in the figure. For the sake of clarity, the plot is limited to the first 21,000 structural analyses of the optimization history and to the 7000–14,000 m³ concrete volume interval. It can be seen from the figure that the best and average convergence curves of FHGWJA always lie below those of the other algorithms. FHGWJA started its best optimization run from the very large cost of 18,372.761 m³ (i.e., about 2.6 times the global optimum of 7079.558 m³ found with FHGWJA), while the initial cost for all other optimizers ranged between 13,265.001 and 14,209.219 m³. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors. The hybrid fast simulated annealing algorithm (HFSA) of Ref. [81] was the only optimizer to compete with FHGWJA in convergence speed over the first 100 structural analyses.
Table 4 lists the optimized designs obtained with FHGWJA and its main competitors; coordinate values are expressed in meters. The data relative to the standard GWO and JAYA and the improved JAYA are omitted for the sake of brevity, as these algorithms converged to the worst solutions overall. The 7079.558 m³ dam concrete volume design obtained with FHGWJA is very similar to the optimal solutions obtained with the other algorithms: all design variables changed by at most 5.6%, except x3, which was set to its lower bound of 1 m by FHGWJA and ranged between 4.0001 m and 4.1358 m for the other algorithms.
Figure 10 compares the optimized shapes of the dam (coordinates are expressed in meters) obtained with FHGWJA and its competitors with the real structure and the optimum design of Refs. [97,98]. For the sake of clarity, the average of the hybrid HS, hybrid BBBC, and HFSA configurations and the average of the mAHS, mBBBC–UBS, and MsinDE configurations are shown in the figure. It can be seen that all designs resemble the aspect ratio of the real dam: in particular, the ratio between optimized base length and height is very close to 1. The optimized profiles are slightly shifted in the positive X-direction with respect to the real dam and the dam configurations optimized in [97,98] to counteract the effect of the additional horizontal earthquake force 2FE included in the design problem. The smaller inclination of the bottom segment of the dam's upstream side increases the dam's storage capacity with respect to the real structure.
The safety factors against sliding and overturning evaluated for the optimal solution of FHGWJA became 5.738 and 3.470, respectively, practically the same as those quoted in [81]. Hence, the reduction in concrete volume did not affect the structural safety level of the dam at all. This proves the suitability of the proposed FHGWJA algorithm for civil engineering design problems.
3.3. Calibration of Nonlinear Hydrologic Models
Test problems (iv–v) regarded the calibration of nonlinear hydrologic models including 3 or 25 unknown parameters. This is a typical inverse problem in the field of hydrology [99,100,101]. The Muskingum model was selected in this study. The difference between the observed river outflows and their counterparts computed with the Muskingum model over a period of 126 h, split into 21 intervals of Δt = 6 h, was minimized. The inverse problem was formulated as the following unconstrained optimization problem:

SSQ = Σt=1,…,NCT (Ooro,t − Ocro,t)²  (22)

where Ooro,t and Ocro,t, respectively, are the observed and computed river outflows at time t, and NCT = 22 is the number of control points defined by the 21 time intervals. The computed river outflow Ocro,t can be determined using the classical three-parameter nonlinear Muskingum model:

Ocro,t = [1/(1 − h)]·[(St/K)^(1/r) − h·It]  (23)

In Equation (23), K, h, and r are the parameters of the Muskingum model to be calibrated, respectively taken as optimization variables x1, x2, and x3; St and It, respectively, are the channel storage and the measured river inflow at time t. The channel storage St at time t is updated as follows:

St+1 = St + Δt·(ΔS/Δt)t  (24)

The storage rate (ΔS/Δt)t at time t is as follows:

(ΔS/Δt)t = It − Ocro,t  (25)
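Equations (22)–(25) translate directly into a short routing loop. The following is a minimal sketch, assuming the initial storage is obtained from Equation (23) with the observed initial outflow; the function name and the small inflow/outflow arrays are hypothetical, shown only to illustrate the call signature.

    import numpy as np

    def muskingum_ssq(params, inflow, outflow_obs, dt=6.0):
        """SSQ between observed and computed outflows for the three-parameter
        nonlinear Muskingum model of Equations (22)-(25); params = (K, h, r)."""
        K, h, r = params
        n = len(inflow)
        S = np.empty(n)       # channel storage S_t
        O = np.empty(n)       # computed outflow O_cro,t
        # Assumed initial storage, inverted from Eq. (23) with the observed outflow:
        S[0] = K * (h * inflow[0] + (1 - h) * outflow_obs[0]) ** r
        for t in range(n):
            O[t] = ((S[t] / K) ** (1.0 / r) - h * inflow[t]) / (1.0 - h)   # Eq. (23)
            if t < n - 1:
                S[t + 1] = S[t] + dt * (inflow[t] - O[t])                  # Eqs. (24)-(25)
        return np.sum((outflow_obs - O) ** 2)                              # Eq. (22)

    # Hypothetical 5-point hydrograph used only to show the call signature:
    inflow = np.array([22.0, 23.0, 35.0, 71.0, 103.0])
    outflow = np.array([22.0, 21.0, 21.0, 26.0, 34.0])
    print(muskingum_ssq((0.09, 0.25, 2.0), inflow, outflow))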
Two variants of the hydrologic model calibration problem were solved in this study: (i) the classical problem including only 3 optimization variables, the unknown parameters K, h, and r [102,103,104,105,106]; (ii) the extended problem solved in [81], which also includes the 22 storage values St as additional design variables, for a total of 25 optimization variables. The side constraints on the Muskingum model parameters and storage values are as follows: 0.01 ≤ K ≤ 0.2; 0.2 ≤ h ≤ 0.3; 1.5 ≤ r ≤ 2.5; and 1 ≤ St ≤ 1000 m³.
The FHGWJA algorithm developed in this study was compared with the best solutions in the literature obtained with the following: (i) metaheuristic methods such as the hybrid HS/BBBC/SA algorithms of Ref. [81], other HS variants [102], improved immune clonal selection (IICSA) [103], backtracking search with orthogonal initialization and chaotic map (COBSA) [104], genetic algorithms (GA) [104], differential evolution (DE) [104], and particle swarm optimization (PSO) [104]; (ii) modified adaptive harmony search (mAHS), modified big bang–big crunch with upper bound strategy (mBBBC–UBS), and modified sinusoidal differential evolution (MsinDE), including FHGWJA's W(x_new) ≤ 1.1·W(x_old) elitist strategy to improve the original AHS/BBBC–UBS/sinDE algorithms of Refs. [94,95,96]; (iii) gradient-based methods such as the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) [105]; and (iv) zero-order heuristic methods such as the Nelder-Mead simplex algorithm (NMS) [106]. FHGWJA was also compared with the standard GWO as well as with the standard and improved JAYA variants [81].
Similar to the previous test cases, the population size was set equal to 20 in Ref. [81] for hybrid HS, hybrid BBBC, and improved JAYA, while mAHS, mBBBC–UBS, and MsinDE were run with NPOP = 10 in the present study.
Table 5 presents the optimization results of FHGWJA and its competitors for the three-variable problem variant. The table reports the average number of evaluations of the SSQ cost function (NFE) and the corresponding standard deviation, along with the number of function evaluations required by the fastest optimization run when available.
It can be seen from Table 5 that the present algorithm converged to the lowest value of SSQ = 36.760 in all independent optimization runs, thus reaching zero standard deviation on optimized cost. Hence, FHGWJA was the most robust optimizer overall. Standard and improved JAYA, modified AHS/BBBC–UBS/sinDE, COBSA, and NMS also found the globally optimum solution. However, none of these algorithms converged to the same design in all optimization runs: standard deviations on optimized SSQ values ranged between 1.442 × 10⁻⁴ (improved JAYA) and 0.01 (COBSA). All other methods found optimized solutions ranging between SSQ = 36.761 (standard GWO and hybrid HS/BBBC/SA) and 36.77 (HS, IICSA, GA, DE, and PSO).
The hybrid FHGWJA algorithm was significantly more efficient than its component algorithms GWO and JAYA. In particular, the standard GWO converged to SSQ = 36.761 after 50,000 function evaluations (i.e., 25 times slower than FHGWJA), while standard JAYA converged to SSQ = 36.760 but was on average 2.7 times slower than the present algorithm. Improved JAYA was on average 20.3% slower than FHGWJA, while their fastest optimization runs required practically the same number of function evaluations (1637 for FHGWJA vs. 1766 for improved JAYA). However, the fastest optimization run of improved JAYA converged to SSQ = 36.762 vs. the global optimum of 36.760 reached by the present algorithm in all runs.
The computational efficiency of the FHGWJA operators generating new trial solutions in the search process is confirmed by the fact that the W(x_new) ≤ 1.1·W(x_old) elitist strategy allowed the mAHS, mBBBC–UBS, and MsinDE algorithms to converge to the global optimum value SSQ = 36.760 and to reduce computational cost by 9.1%, 11%, and 35.6% with respect to the original AHS, BBBC–UBS, and sinDE algorithms of Refs. [94,95,96]. Interestingly, mAHS and mBBBC–UBS became very competitive with FHGWJA in terms of average computational cost (2368 and 2344 function evaluations, respectively, vs. only 2005 for FHGWJA) and peak computational speed of the fastest optimization run (1670 and 1711, respectively, vs. only 1637 for FHGWJA).
The hybrid HS/BBBC/SA algorithms of Ref. [81], which utilized gradient information and approximate line searches, could even complete their fastest optimization runs within fewer function evaluations than FHGWJA (i.e., between 1281 and 1331 vs. 1637 for FHGWJA) but converged to higher SSQ values than the global optimum of 36.760 found with FHGWJA. Furthermore, hybrid HS/BBBC/SA required on average more function evaluations than FHGWJA (i.e., between 2468 and 2755 vs. only 2005 for FHGWJA). Again, in a highly nonlinear problem like the hydrologic model calibration, the elitist strategies implemented in FHGWJA were extremely efficient in generating high-quality trial solutions over the whole search process. This happened because the generation of trial solutions in FHGWJA is not biased to any extent by the quality of gradient information, which may instead drive hybrid HS/BBBC/SA towards poor trial solutions should the cost function gradient not be well approximated. However, looking at Table 5, it appears that computational cost was not an issue for this test problem: HS [102] and COBSA [104], respectively, required 20,000 and 100,000 function evaluations vs. only about 2000 for FHGWJA, while no information on the effective number of function evaluations was given in Refs. [103,104,105,106] for the metaheuristic (i.e., IICSA, GA, DE, and PSO), gradient-based (BFGS), and zero-order heuristic (NMS) methods.
Figure 11 confirms the superiority of FHGWJA over its competitors in terms of convergence behavior also in this test problem. The figure compares the optimization histories of the best runs for the algorithms listed in Table 5; the average FHGWJA convergence curve is also shown in the figure. Since FHGWJA always converged to the global optimum of 36.760 in all optimization runs, its best run corresponds to the fastest one. For the sake of clarity, the plot is limited to the first 4500 function evaluations of the optimization history and to the 35–70 cost function interval. It can be seen from the figure that the best and average convergence curves of FHGWJA lie below those of the other algorithms over practically the whole search history for this test problem. FHGWJA started its best optimization run from the very large cost of 1520.145 (i.e., about 41.3 times the target optimum of 36.760 quoted in Table 5), while the initial cost for all other optimizers ranged between 181.683 and 1122.263. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors, reducing cost to 37.5 (just 2% more than the target optimum) within only 280 function evaluations, while all other algorithms required at least 520 function evaluations to reach the same intermediate cost function value. The hybrid harmony search algorithm of Ref. [81] was competitive with FHGWJA only over the first 150 function evaluations, while the hybrid simulated annealing algorithm (HFSA) of Ref. [81] came close to the average convergence curve of FHGWJA at about 325 function evaluations. The W(x_new) ≤ 1.1·W(x_old) elitist strategy of FHGWJA allowed mAHS, mBBBC–UBS, and MsinDE to approach the cost function reduction rate of FHGWJA from 400 to 700 function evaluations, while the hybrid big bang–big crunch algorithm of Ref. [81] showed a convergence speed similar to FHGWJA after about 750 function evaluations.
Table 6 presents the results obtained with FHGWJA and its competitors for the problem variant with 25 variables; the same nomenclature as Table 5 is utilized. For the sake of clarity, the table reports only the optimal values of channel storage St (included as optimization variables 4 through 25) found with FHGWJA. The present algorithm was the best optimizer also in this test problem variant. In fact, it converged to the lowest value overall of SSQ = 36.761, followed by the HFSA and hybrid HS/BBBC algorithms of Ref. [81], which obtained SSQ = 36.762 and 36.763, respectively.
Standard JAYA ranked 5th overall in terms of SSQ but required on average 5.1 times more function evaluations than FHGWJA to complete the optimization runs. Improved JAYA and standard GWO found the worst solutions amongst all optimizers (SSQ = 36.806 and 36.818, respectively), requiring on average about 1.64 and 11.3 times more function evaluations than the present algorithm, respectively. These results confirm the validity of the hybridization scheme implemented in FHGWJA.
The W(x_new) ≤ 1.1·W(x_old) elitist strategy of FHGWJA significantly improved the performance of the AHS/BBBC–UBS/sinDE algorithms of Refs. [94,95,96] also in the 25-variable problem variant. In particular, in their best optimization runs, mAHS reduced the optimized cost from the 37.078 of AHS [94] to 36.774 (i.e., the 6th best result amongst those quoted in Table 6), mBBBC–UBS from the 36.826 of BBBC–UBS [95] to 36.786, and MsinDE from the 37.101 of sinDE [96] to 36.786. Furthermore, mAHS, mBBBC–UBS, and MsinDE reduced average computational cost by 32.7%, 34.4%, and 28.4% with respect to the original algorithms of Refs. [94,95,96], thus becoming competitive enough with FHGWJA in terms of average computational speed.
The superiority of FHGWJA over its competitors is confirmed by the fact that the present algorithm achieved the lowest standard deviation on optimized cost (only 2.919 × 10⁻¹² vs. 3.333 × 10⁻¹¹ to 1.395 × 10⁻² for the other algorithms). Remarkably, the worst solution of FHGWJA practically coincided with its best solution and was always better than the best solutions found with its competitors. Furthermore, FHGWJA was faster than the other optimizers, completing the 20 independent optimization runs within only 4438 function evaluations on average vs. 4744 to 5848 evaluations for hybrid HS/BBBC/SA and modified AHS/BBBC–UBS/sinDE. Interestingly, the hybrid HS/BBBC/SA algorithms of Ref. [81] completed their fastest optimization runs within fewer function evaluations than FHGWJA (only 3277, 3656, and 3887, respectively, vs. 4122). However, FHGWJA always found intermediate solutions corresponding to SSQ = 36.763 (i.e., the best solutions of hybrid HS and hybrid BBBC) within only 3200 function evaluations, and SSQ = 36.762 (i.e., the best solution of HFSA) within only 3700 function evaluations.
Figure 12 compares the optimization histories of the best runs for the algorithms listed in Table 6; the average FHGWJA convergence curve is also shown in the figure. For the sake of clarity, the plot is limited to the first 4500 function evaluations of the optimization history and to the 35–80 cost function interval. The convergence behavior depicted in the figure resembles that observed for the three-variable problem variant. The best and average convergence curves of FHGWJA again lie below those of the other algorithms over practically the whole search history. FHGWJA started its best optimization run from the very large cost of 3547.602 (i.e., about 96.5 times the target optimum of 36.761 quoted in Table 6), while the initial cost for all other optimizers ranged between 461.502 and 2063.123, except for the improved JAYA variant of Ref. [81], which started from 75.965. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors, reducing cost to 37.2 (just 1.2% more than the target optimum) within only 320 function evaluations, while all other algorithms required at least 995 function evaluations to reach the same intermediate cost function value. The hybrid big bang–big crunch algorithm of Ref. [81] was competitive with FHGWJA only over the first 190 function evaluations. The W(x_new) ≤ 1.1·W(x_old) elitist strategy of FHGWJA allowed mAHS and MsinDE to approach the cost function reduction rate of FHGWJA from 220 to 310 function evaluations, while mBBBC–UBS generated the intermediate designs closest to those of FHGWJA between 650 and 1000 function evaluations.
The excellent performance of FHGWJA in the two variants of the hydrologic model calibration problem confirms the suitability of the proposed algorithm for highly nonlinear optimization problems. Interestingly, the higher level of design freedom introduced in the optimization search by the 22 additional variables did not at all affect the robustness of FHGWJA, which converged to practically the same values (with at most 0.105% difference) of the Muskingum model parameters (i.e., K, h, and r) and channel storage values St as found in the three-variable problem variant.
3.4. Optimal Crashworthiness Design of a Vehicle Subject to Side Impact
The test case (vi) solved in this study regarded the optimal crashworthiness design of a vehicle. The goal was to minimize the weight of a vehicle subject to side impact. Crashworthiness optimization must ensure that the vehicle effectively absorbs and dissipates impact energy, minimizing the forces transferred to the occupants. The vehicle structure must be optimized to provide maximum protection without significantly increasing weight or compromising other performance aspects [107,108,109].
Figure 13a shows the typical finite element schematization of a vehicle subject to side impact by the barrier [108], while Figure 13b illustrates the vehicle frame parts to be optimized. Since the B-pillar zone is often the most affected in side impact tests, the design variables are correlated with this zone.
The optimal crashworthiness design problem solved in this study included 11 optimization variables, of which 7 are related to the vehicle's structural geometry: the B-pillar inner thickness (x1), B-pillar reinforcement thickness (x2), floor side inner thickness (x3), cross-member thickness (x4), door beam thickness (x5), door beltline reinforcement thickness (x6), and roof rail thickness (x7). Two discrete variables refer to material selection and quantify the efficiency of the structure in absorbing energy during impact: the shape factor for the B-pillar inner (x8) and the shape factor for the floor side inner (x9). The last two variables are related to the presence of the obstacle: the barrier height (x10) and the hitting position of the barrier with respect to the center of mass of the vehicle (x11). In summary, the optimization problem includes nine continuous and two discrete variables.
Here, the optimal crashworthiness design problem was stated by Equations (26) and (27). The expressions of the objective function (i.e., the structural weight of the vehicle parts to be optimized) and of the constraints were fitted in [108] using response surface models. The 10 inequality constraint functions stated by Equation (27) define limitations on the force transferred to the dummy's abdomen (G1), the velocities of the dummy's upper/middle/lower chest (G2, G3, and G4, respectively), the deflections of the dummy's upper/middle/lower ribs (G5, G6, and G7, respectively), the force transferred to the dummy's pubic symphysis (G8), the velocity at the B-pillar middle point (G9), and the velocity at the B-pillar front door (G10). Forces are expressed in kN, velocities in mm/s, and deflections in mm.
The side constraints on the optimization variables are as follows: 0.5 ≤ x1, x2, x3, x4, x5, x6, x7 ≤ 1.5 mm (continuous variables); x8, x9 ∈ {0.192; 0.345} (discrete variables); −30 ≤ x10, x11 ≤ 30 mm (continuous variables).
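Handling the two discrete shape factors alongside the nine continuous variables requires a simple repair step before each function evaluation. A minimal sketch follows, assuming the design vector stores x1–x11 in order; the function name is a hypothetical placeholder.

    import numpy as np

    SHAPE_LEVELS = np.array([0.192, 0.345])   # admissible discrete values of x8, x9

    def repair_design(x):
        """Snap the two discrete material variables to their nearest admissible
        level and clip the continuous variables to their side constraints."""
        x = np.asarray(x, dtype=float).copy()
        x[:7] = np.clip(x[:7], 0.5, 1.5)      # thicknesses x1..x7 (mm)
        for i in (7, 8):                      # x8, x9: nearest discrete level
            x[i] = SHAPE_LEVELS[np.argmin(np.abs(SHAPE_LEVELS - x[i]))]
        x[9:] = np.clip(x[9:], -30.0, 30.0)   # barrier variables x10, x11 (mm)
        return x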
The best solution available in the literature for the optimal crashworthiness problem is that obtained with the multi-strategy fusion improved grey wolf optimization (IGWO) algorithm, with a structural weight of 21.39473 kg [69]. In particular, IGWO outperformed the chaotic GWO, CSO, and the tunicate swarm algorithm (TSA), which converged to optimized structural weights between 21.46164 and 22.70296 kg. The adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO) [70] obtained a slightly higher optimized weight than TSA, 22.87188 kg. The starling murmuration optimizer (SMO) of Ref. [47] converged to the optimized weight of 22.84298 kg, practically the same as the improved continuous ant colony optimization algorithm (LIACOR), which obtained 22.84299 kg. The other metaheuristic algorithms tested in [47] (i.e., the krill herd algorithm (KH), the best variant of artificial bee colony (ABC), the Harris hawks optimizer (HHO), the comprehensive learning particle swarm optimizer (CLPSO), and the whale optimization algorithm (WOA)) found optimized weights ranging between 22.88596 and 23.12717 kg.
Table 7 compares the optimal solution obtained with the FHGWJA algorithm developed in this study with those of the aforementioned optimizers. The results are grouped as follows: (i) FHGWJA and its component algorithms, the standard GWO and JAYA; (ii) IGWO and its three best competitors compared in [69]; (iii) ASGWO [70]; and (iv) SMO and its six best competitors compared in [47]. All designs reported in the table are practically feasible, as the maximum constraint violation was 0.001%. Details on computational cost and statistical performance over the independent optimization runs are reported below the table when available.
It can be seen that FHGWJA again was the best optimizer also for this test case. In fact, the present algorithm converged to the very low structural weight of 21.38340 kg, while the optimized weights obtained with the other algorithms ranged between 21.39473 kg (IGWO, used in [69]) and 23.12717 kg (WOA, used in [47]). The chaotic GWO (used in [69]) ranked 3rd overall, finding a structural weight very close to those of FHGWJA and IGWO: 21.46164 kg vs. 21.38340 and 21.39473 kg, respectively. However, the optimized values of variables x8 and x9 listed in Table 7 for the IGWO and chaotic GWO do not coincide with the admissible discrete values {0.192; 0.345} originally set for this test problem: the optimized designs of the IGWO and chaotic GWO would become infeasible as soon as x8 is set equal to 0.192 or 0.345, thus violating constraint functions G6, G7, G8, and G9 by up to 13.9% and 23.7%, respectively.
Interestingly, the average optimized weight achieved with FHGWJA over the 20 independent runs was lower than the best solutions of IGWO and chaotic GWO: only 21.3905 kg vs. 21.39473 and 21.46164 kg, respectively. Furthermore, the present algorithm required on average only 1309 structural analyses vs. the 30,000 analyses indicated in [69] for the IGWO and chaotic GWO. The hybrid optimizer proposed here clearly outperformed the standard GWO and standard JAYA, which obtained 22.84755 and 23.19155 kg, respectively. Furthermore, FHGWJA required on average only 1309 structural analyses to complete the search process vs. 50,000 for the standard GWO and 7560 for standard JAYA. The adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO) [70] ranked below the standard GWO in terms of optimized weight (22.87188 kg vs. 22.84755 kg) but required only 15,000 structural analyses vs. the 50,000 analyses of the standard GWO. In summary, using rather simple elitist strategies and properly refining/repairing trial solutions (so that they always lie on descent directions) allowed FHGWJA to outperform complex GWO schemes combining special perturbations of design variables or several strategies taken from various metaheuristic algorithms. This happened because only the present algorithm can always generate high-quality trial solutions throughout the optimization process.
The starling murmuration optimizer (SMO) and the other algorithms compared in Ref. [47] were also less efficient than FHGWJA. In fact, they required 30,000 structural analyses vs. only 1309 for the present algorithm to find optimized weights heavier than that of FHGWJA (i.e., from 22.84298 to 23.12717 kg vs. only 21.38340 kg for FHGWJA).
The present algorithm was very robust: the standard deviation on optimized weight was only 0.0333% of the average optimized weight, while the ratio between the standard deviation on the number of structural analyses and the average number of structural analyses was 26.2%. Statistical data relative to the other algorithms compared with FHGWJA usually were not available in the literature. This can be explained in view of the formulation of the optimal crashworthiness design problem. The cost function stated by Equation (26) depends only on variables x1, x2, x3, x4, x5, and x7, while x10 is the only variable entering the expressions of all constraint functions (see Equation (27)). Hence, the search space of this problem hosts many competitive solutions corresponding to significantly different optimized values of design variables, with obvious implications in terms of statistical dispersion of solutions. For example, the optimized values of the first seven variables for the solutions listed in Table 7 varied by at most 27.8%, while the optimized values of variables x8 through x11 varied from 36% to 259.5%, with the largest variation occurring for variable x10, which must be adjusted to satisfy all constraints. Remarkably, FHGWJA was always able to precisely approach and efficiently explore and exploit the best region of the search space in all optimization runs in view of its inherent ability to generate high-quality trial solutions over the whole search process. This greatly reduced the standard deviation of the optimized solution.
The excellent convergence behavior and robustness of FHGWJA are confirmed in Figure 14, which compares the optimization histories of the proposed algorithm and its component algorithms GWO and JAYA. The convergence curves relative to the best and average optimization runs of FHGWJA are shown in the figure, while those of the other algorithms listed in Table 7 were not reported in [47,69,70]. For the sake of clarity, the plot is limited to the first 1800 structural analyses. FHGWJA started its best optimization run from a structural weight of 40.032 kg, which is about 87.2% larger than the best optimized weight of 21.383 kg found by the present algorithm. In spite of this, in its best run, FHGWJA generated a feasible intermediate design just 1.48% worse than the global optimum after only 350 structural analyses, that is, at approximately 26% of the optimization process. Furthermore, the best and average convergence curves of FHGWJA practically coincided after only 830 structural analyses, that is, at just 60.7% of the best run's optimization history.
The results presented in this section confirm once again the suitability of FHGWJA for solving complex engineering optimization problems. This conclusion is supported by the fact that the present algorithm was compared with 14 other (actually 28, considering all methods evaluated in Refs. [47,69,70]) state-of-the-art metaheuristic algorithms, also including three advanced GWO variants developed just a few months before the present study.
3.5. Weight Minimization of a Planar 200-Bar Truss Structure
The last test case solved in this study regarded the weight minimization of the planar 200-bar truss structure shown in Figure 15. The structure, made of steel (Young's modulus of 206.91 GPa; mass density of 7833.413 kg/m³), is composed of 200 elements connected by 77 nodes. Because of structural symmetry, the 200 elements are categorized into 29 groups as indicated in Table 8; the elements of each group have the same cross-sectional area, which is taken as a sizing optimization variable. Hence, this test problem has 29 design variables.
The structure is subjected to three independent loading conditions:
- (a) Concentrated forces of 4.45 kN (i.e., 1000 lbf) acting in the positive X-direction at nodes 1, 6, 15, 20, 29, 34, 43, 48, 57, 62, and 71 (denoted by the green horizontal arrows in Figure 15);
- (b) Concentrated forces of 44.497 kN (i.e., 10,000 lbf) acting in the negative Y-direction at nodes 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 36, 38, 40, 42, 43, 44, 45, 46, 47, 48, 50, 52, 54, 56, 57, 58, 59, 60, 61, 62, 64, 66, 68, 70, 71, 72, 73, 74, and 75 (denoted by the vertical arrows in Figure 15);
- (c) Loading conditions (a) and (b) acting together.
The truss weight must be minimized under 1200 nonlinear constraints on element stresses, which must not exceed ±68.97 MPa (i.e., ±10,000 psi, the same limit stress in tension and compression). The cross-sectional areas taken as sizing variables can vary between 0.64516 cm² (i.e., 0.1 in²) and 645.16 cm² (i.e., 100 in²).
The optimization problem was stated as follows:

W(x) = γ·Σj=1,…,NEL Aj·Lj, with γ = ρ·g  (28)

σc,lim ≤ σj,ilc ≤ σt,lim,  j = 1, …, NEL;  ilc = 1, 2, 3  (29)

In Equation (28), ρ and γ, respectively, are the mass density and the specific weight of the material; NEL is the number of elements of the truss; Aj and Lj, respectively, denote the cross-sectional area and the length of the j-th element of the structure. The design vector x includes the values of the 29 sizing variables corresponding to the cross-sectional areas of the 200 truss elements.
In Equation (29), σj,ilc denotes the stress developed in the j-th element of the structure under the ilc-th loading condition, while σt,lim = 10,000 psi and σc,lim = −10,000 psi, respectively, denote the limit stresses in tension and compression. A total of 1200 nonlinear constraints on element stresses (200 elements × 3 loading conditions × 2 stress limits) are included in the optimization problem.
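Evaluating Equations (28) and (29) for a candidate design amounts to a weighted sum and a bound check. The sketch below is a minimal illustration (the function names are hypothetical, and the element stresses are assumed to come from an external finite element analysis):

    import numpy as np

    RHO = 7833.413                            # steel mass density (kg/m^3), as given above

    def truss_weight(areas_m2, lengths_m):
        """Structural weight of Equation (28), expressed in kg as in Table 9."""
        return RHO * np.sum(np.asarray(areas_m2) * np.asarray(lengths_m))

    def max_stress_violation(stresses_pa, s_lim=68.97e6):
        """Largest relative violation of the +/-68.97 MPa limits of Equation (29)
        over all (element, loading condition) pairs; 0 means a feasible design."""
        ratio = np.abs(np.asarray(stresses_pa)) / s_lim
        return max(ratio.max() - 1.0, 0.0)

    # Hypothetical 3-element example (areas in m^2, lengths in m, stresses in Pa):
    print(truss_weight([1e-3, 2e-3, 1.5e-3], [3.0, 4.0, 5.0]))
    print(max_stress_violation([[40e6, -55e6, 10e6], [68e6, -70e6, 30e6]]))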
This design example is an average-scale structural optimization problem that has been extensively investigated in the literature. The target optimum weight for this problem is 11,542.4 kg, but there is also a local optimum corresponding to a structural weight of 11,544 kg. The non-convexity of the search space makes this test problem hard to solve with metaheuristic optimizers. The hybrid harmony search, big bang–big crunch, and simulated annealing algorithms developed in [81] converged to a structural weight of 11,542.409 kg, practically the same as the target optimum. Other very competitive solutions were obtained with the following: (i) the eigenvectors of covariance matrix (ECM) algorithm derived from the covariance matrix adaptation evolution strategy high-performance optimizer (11,543.521 kg, [110]); (ii) the variant of the success-history-based adaptive differential evolution with linear population size reduction algorithm using an ensemble sinusoidal approach to automatically adapt the DE scaling factor (LSHADE-Epsin) (11,543.980 kg, [111]); (iii) the modified coyote optimization algorithm (MCOA), which uses chaotic sequences instead of random sequences for perturbing solutions (11,544.007 kg, [112]); (iv) the cyclic neighborhood network topology particle swarm optimizer (CNNT-PSO), which handles population diversity based on the interactions between each particle and its neighboring individuals (11,545.330 kg, [113]); and (v) the hybrid geometric mean optimizer (hGMO), which integrates GMO with variable neighborhood to enhance exploitation (11,545.568 kg, [114]).
Other advanced MHAs converged to designs lighter than the 11,542.4 kg target weight, but their optimized solutions were infeasible. For example, the corrected multi-level and multi-point simulated annealing (CMLPSA) algorithm of Ref. [90], which generates each new trial design using a population of candidate designs lying on descent directions rather than developing a single point as in classical SA, found an optimized weight of 11,542.137 kg (practically the same as the target optimum weight) while violating stress constraints by only 0.0709%. The hybrid HPSACO algorithm [115], combining harmony search, particle swarm, and ant colony optimization, found an optimum weight of 11,410.737 kg, but stress constraints were violated by 9.97%. An improved Harris hawks optimization algorithm was used in [116]: the optimized design weighed 11,533.983 kg (i.e., 0.073% less than the target optimum weight) with a 9.55% violation of stress constraints. PSOHHO [117], combining the particle swarm optimization and Harris hawks optimization methods, obtained a 11,374.477 kg structural weight (i.e., a 1.46% reduction), but the corresponding solution violated stress constraints by about 27.6%. Finally, biogeography-based optimization [118] reduced structural weight to 11,320.758 kg (i.e., a 1.92% reduction), but the corresponding design violated stress constraints by 11.33%.
In view of the above arguments, the optimization results obtained with the FHGWJA algorithm developed in this study were compared with those of the hybrid HS/BBBC/SA [81], ECM [110], LSHADE-Epsin [111], MCOA [112], CNNT-PSO [113], and hGMO [114] algorithms, which provide the best feasible solutions available in the literature. Furthermore, FHGWJA was compared with the classical GWO and improved GWO [119] as well as with classical and improved JAYA [81]. The political optimizer (PO) algorithm implemented in [120] was also included in the comparison, as its best optimized weight was close to those of the classical GWO and the standard and improved JAYA. The comparisons presented in this section should be considered highly significant because each competitor selected from Refs. [81,110,111,112,113,114,119,120] was reported to outperform from 5 to 76 other state-of-the-art metaheuristic algorithms.
According to the literature, population size values for this test problem were as follows: 50 for hybrid HS [81], MCOA [112], and hGMO [114]; 100 for hybrid BBBC [81]; 20 for improved JAYA [81]; 14 for ECM [110]; 1000 for CNNT-PSO [113]; and 121 for PO [120]. Such a large variation in the number of search agents confirms the difficulties encountered by metaheuristic optimizers in solving the 200-bar truss problem. Interestingly, FHGWJA could use the smallest population size overall. This was the logical consequence of the inherent ability of FHGWJA to select high-quality trial solutions throughout the search process.
Table 9 presents the optimization results obtained with FHGWJA and its competitors for the 200-bar truss problem. The table reports the optimized values of the cross-sections (expressed in in²) and the structural weight for the best run, along with the corresponding number of structural analyses; the best, average, and worst values of optimized weight along with the corresponding standard deviation (ST Dev) over the independent runs; and the number of structural analyses (NSA) required in the search process. Statistical data on computational cost (i.e., average number of structural analyses and corresponding standard deviation) are reported when available. The detailed data reported for this test problem in Refs. [115,116,117,118], which present solutions violating the stress limits, were not replicated in Table 9 for the sake of brevity.
It can be seen from Table 9 that FHGWJA was the best optimizer also for this test case. In fact, it converged to the overall lowest structural weight of 11,541.380 kg. Furthermore, all optimization runs of FHGWJA found practically feasible designs (the maximum element stress evaluated for the optimized solutions of FHGWJA was only 10,000.412 psi vs. the 10,000 psi stress limit set for this problem) that were lighter than the target optimum weight of 11,542.4 kg. The hybrid HS/BBBC/SA algorithms of Ref. [81] were very competitive with the present algorithm because they always converged to the target optimum with at most 0.00622 kg standard deviation vs. 0.4157 kg for FHGWJA. The average computational cost of FHGWJA (i.e., 3356 structural analyses) practically coincides with the mean of the average computational costs recorded for hybrid HS (2736 analyses), hybrid BBBC (1676 analyses), and hybrid fast SA (5806 analyses).
The other best competitors of FHGWJA, namely ECM [110], LSHADE-Epsin [111], MCOA [112], CNNT-PSO [113], and hGMO [114], obtained optimized weights ranging between 11,543.521 kg (ECM) and 11,545.568 kg (hGMO) and required between 20,000 (hGMO) and 290,000 (LSHADE-Epsin) structural analyses vs. only 3356 structural analyses required on average by the present algorithm. FHGWJA clearly outperformed its component algorithms: the best solutions of improved JAYA [81], standard JAYA, and the standard GWO achieved optimized weights of 11,550.054 kg, 11,553.355 kg, and 11,558.650 kg, respectively, vs. only 11,541.380 kg for FHGWJA; furthermore, they required 31,580, 40,941, and 50,000 structural analyses, respectively, vs. only 3356 analyses for FHGWJA. The improved GWO variant (IGWO) of Ref. [119], which uses an exponential variation scheme for the coefficient vectors of Equations (6) and (9), prematurely converged to an optimized weight of 11,689.883 kg within 23,760 structural analyses. The political optimizer (PO) of Ref. [120] obtained a slightly better solution than the standard GWO (the 2nd worst solution overall), weighing 11,557.384 kg vs. 11,558.650 kg, within about 28,000 structural analyses vs. the 50,000 analyses required by the standard GWO.
The present algorithm was very robust. In fact, it ranked 4th overall in terms of standard deviation on optimized weight, but it achieved a 100% success rate in finding designs slightly lighter than the target global optimum of 11,542.4 kg, and it never got trapped in the 11,544 kg local minimum of structural weight. Furthermore, the ratio between the standard deviation of computational cost and the average computational cost achieved with FHGWJA was only 12.8%.
The W(x_new) ≤ 1.1·W(x_old) elitist strategy of FHGWJA was again effective in improving the performance of the AHS/BBBC–UBS/sinDE algorithms of Refs. [94,95,96], which were reported in [81] to obtain optimized structural weights of 11,542.410 kg (yet with a 0.9998% violation of stress constraints), 11,542.410 kg (yet with a 1.076% constraint violation), and 11,555.006 kg (yet with a 0.806% constraint violation), respectively. The modified mAHS, mBBBC–UBS, and MsinDE variants now converged to the fully feasible design listed in Table 9 for hybrid HS/BBBC/SA, weighing 11,542.409 kg. The computational cost of mAHS, mBBBC–UBS, and MsinDE decreased on average by about 30% with respect to the original algorithms, yet remained about 3.5 to 8.5 times higher than that of FHGWJA. These results are not reported in Table 9 for the sake of brevity.
Figure 16 compares the optimization histories of the best runs for the algorithms listed in Table 9; the average FHGWJA convergence curve is also shown in the figure. For the sake of clarity, the plot is limited to the first 8000 structural analyses of the optimization history and to the 11,000–31,000 kg structural weight interval. FHGWJA started its best optimization run from the very large cost of 208,700.225 kg (i.e., about 18.1 times the target optimum of 11,541.380 kg quoted in Table 9), while the initial cost for all other optimizers ranged between 157,234.459 and 185,304.986 kg, except for the ECM (eigenvectors of covariance matrix) [110] and CNNT-PSO (cyclic neighborhood network topology particle swarm optimizer) [113] algorithms, which started from 20,437.901 and 11,754.998 kg, respectively. However, the present algorithm immediately recovered the initial gap in cost function with respect to its competitors: for example, within only 210 and 1330 structural analyses with respect to ECM and CNNT-PSO, respectively. The best and average convergence curves of FHGWJA again lie below those of all other algorithms over practically the whole search history. The hybrid harmony search and big bang–big crunch algorithms of Ref. [81] were competitive enough with FHGWJA, approaching the best optimization run of the present algorithm after about 2000 and 1400 structural analyses, respectively, that is, towards the end of their optimization histories. However, while hybrid HS/BBBC stopped improving the design after reaching their best structural weight of 11,542.409 kg, the present algorithm continued to reduce the structural weight down to 11,541.380 kg. The best and average convergence curves of FHGWJA practically coincided after about 1750 structural analyses, that is, at only 50.5% of the best run's optimization history.
The data presented in this section confirmed the tendency of metaheuristic optimizers to prematurely converge to sub-optimal designs in the 200-bar truss weight minimization problem. However, FHGWJA emerged once again as the best algorithm in view of its inherent ability to generate high-quality trial solutions throughout the optimization process. Remarkably, the present algorithm was superior to high-performance optimizers such as ECM and LSHADE-Epsin, which are competitive even with CEC (IEEE Congress on Evolutionary Computation) competition winners.