Article

Light Spectrum Optimizer: A Novel Physics-Inspired Metaheuristic Optimization Algorithm

by
Mohamed Abdel-Basset
1,*,
Reda Mohamed
1,
Karam M. Sallam
1 and
Ripon K. Chakrabortty
2
1
Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Sharqiyah, Egypt
2
School of Engineering and IT, UNSW Canberra at ADFA 2600, Campbell, ACT 2610, Australia
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(19), 3466; https://doi.org/10.3390/math10193466
Submission received: 2 August 2022 / Revised: 12 September 2022 / Accepted: 17 September 2022 / Published: 23 September 2022
(This article belongs to the Special Issue Metaheuristic Algorithms)
Figure 1. Categories of metaheuristic algorithms.
Figure 2. Light dispersion and colors of the rainbow.
Figure 3. Light dispersion and colors of the rainbow, and the vector form of refraction and reflection in the rainbow.
Figure 4. The behavior of the inverse incomplete gamma function with respect to a values.
Figure 5. Tracing F′ values versus R1 for an individual over 9 independent runs: red points indicate R1 < F′; blue points indicate R1 > F′.
Figure 6. Flowchart of LSO.
Figure 7. Depiction of the exploration and exploitation stages of the proposed algorithm (LSO). (a) Exploitation phase. (b) Exploration phase.
Figure 8. Sensitivity analysis of LSO. (a) Tuning the parameter Ph over F58. (b) Tuning the parameter Ph over F57. (c) Tuning the parameter β over F58. (d) Tuning the parameter β over F57. (e) Tuning the parameter β over F58 in terms of convergence speed. (f) Tuning the parameter Pe over F58. (g) Tuning the parameter Pe over F57. (h) Adjusting the parameter Ps over F58. (i) Adjusting the parameter Ps over F57.
Figure 9. Average rank of each optimizer on all CEC2014 test functions.
Figure 10. Average SD of each optimizer on all CEC2014 test functions.
Figure 11. Average rank on all CEC2017 test functions.
Figure 12. Average SD of each optimizer on all CEC2017 test functions.
Figure 13. Average rank of each optimizer on all CEC2020 test functions.
Figure 14. Average SD of each optimizer on all CEC2020 test functions.
Figure 15. Average rank of each optimizer on all CEC2022 test functions.
Figure 16. Average SD of each optimizer on all CEC2022 test functions.
Figure 17. Depiction of averaged convergence speed among algorithms on some test functions. (a) F51 (Unimodal), (b) F52 (Unimodal), (c) F53 (Multimodal), (d) F54 (Multimodal), (e) F55 (Multimodal), (f) F56 (Multimodal), (g) F57 (Multimodal), (h) F59 (Multimodal), (i) F60 (Multimodal), (j) F61 (Multimodal), (k) F62 (Hybrid), (l) F63 (Hybrid), (m) F65 (Hybrid), (n) F64 (Hybrid), (o) F73 (Composition), (p) F75 (Composition), (q) F77 (Composition), and (r) F78 (Composition).
Figure 18. Diversity, convergence curve, average fitness history, trajectory, and search history.
Figure 19. Comparison among algorithms in terms of CPU time.
Figure 20. Tuning the parameter Ps over the tension spring design.
Figure 21. Tension/compression spring design problem. (a) Structure. (b) Convergence curve.
Figure 22. Weld beam design problem. (a) Structure. (b) Convergence curve.
Figure 23. Pressure vessel design problem. (a) Structure. (b) Convergence curve.

Abstract
This paper introduces a novel physics-inspired metaheuristic algorithm called the "Light Spectrum Optimizer (LSO)" for continuous optimization problems. The algorithm is inspired by the dispersion of light at different angles as it passes through rain droplets, the meteorological phenomenon that produces the colorful rainbow spectrum. To validate the proposed algorithm, three different experiments are conducted. First, LSO is tested on the CEC2005 benchmark, and the obtained results are compared with those of a wide range of well-regarded metaheuristics. In the second experiment, LSO is applied to four CEC single-objective optimization benchmark suites (CEC2014, CEC2017, CEC2020, and CEC2022), and its results are compared with eleven well-established and recently-published optimizers: the grey wolf optimizer (GWO), whale optimization algorithm (WOA), salp swarm algorithm (SSA), differential evolution (DE), gradient-based optimizer (GBO), artificial gorilla troops optimizer (GTO), Runge–Kutta optimizer (RUN), African vultures optimization algorithm (AVOA), equilibrium optimizer (EO), Reptile Search Algorithm (RSA), and slime mold algorithm (SMA). In addition, several engineering design problems are solved, and the results are compared with those of many algorithms from the literature. The experimental results, together with the statistical analysis, demonstrate the merits and highly superior performance of the proposed LSO algorithm.
MSC:
68-04; 68Q25; 68T20; 68W25; 68W40; 68W50

1. Introduction

The practical applications of metaheuristic algorithms have spread widely, especially in the last few years. The reason is the speed, high-quality solutions, and problem-independent character of metaheuristics [1,2,3,4,5]. Unfortunately, no metaheuristic can efficiently solve all types of optimization problems. Consequently, a significant number of metaheuristics have been proposed over time, aiming to find efficient metaheuristics suitable for various types of optimization problems. In particular, metaheuristics are based on the progress or movement behavior of a specified phenomenon or creature. By simulating such a progress or movement style, a metaheuristic can explore the search space of a problem as if it were the environment of the simulated phenomenon or creature.
Metaheuristics rely on two search mechanisms while trying to find the best solution to a given problem. The first mechanism is exploration, which probes unvisited regions of the search space. The second mechanism is exploitation, which searches around the best solution found so far [6]. The main factor in any metaheuristic's success is balancing these two mechanisms: overemphasizing exploration may prevent the metaheuristic from converging on the global best solution, while overemphasizing exploitation may lead to trapping in local optima. In general, metaheuristics' search mechanisms have stemmed from natural phenomena or the behavior of creatures.
Metaheuristics can be categorized based on their metaphors into seven main categories [7,8,9,10,11]: evolution-based, swarm-based, physics-based, human-based, chemistry-based, math-based, and others (see Figure 1). Evolution-based metaheuristics mimic the natural evolution process, which consists of selection, crossover, and mutation; examples include the genetic algorithm (GA) [12], genetic programming (GP) [13], evolution strategy (ES) [14], probability-based incremental learning (PBIL) [15], and differential evolution (DE) [16].
The second category, referred to as swarm-based algorithms, imitates the social behavior of swarms, birds, insects, and animal groups [17]. Some of the well-established and recently-published algorithms in this category are particle swarm optimization (PSO) [18], cuckoo search (CS) algorithm [19], flower pollination algorithm (FPA) [20], marine predators algorithm (MPA) [21], Harris hawks optimization (HHO) [22], salp swarm algorithm (SSA) [23], red fox optimizer (RFO) [24], duck swarm algorithm [25], chameleon swarm algorithm [26], artificial gorilla troops optimizer [27], cat optimization algorithm [28], donkey and smuggler optimization algorithm [29], krill herd algorithm [30], elephant herding optimization [31], wolf pack search algorithm [32], hunting search [33], monkey search [34], chicken swarm optimization [35], horse herd optimization algorithm (HOA) [36], moth search (MS) algorithm [37], earthworm optimization algorithm (EWA) [38], monarch butterfly optimization (MBO) [39], slime mold algorithm (SMA) [40], and whale optimization algorithm (WOA) [41]. In general, swarm-based metaheuristics have some advantages over evolution-based ones. In particular, swarm-based metaheuristics search cumulatively, preserving information across subsequent search iterations, whereas evolution-based metaheuristics discard previous search information once a new population is generated. Additionally, evolution-based metaheuristics usually need more parameters than swarm-based metaheuristics. This makes swarm-based metaheuristics more applicable than evolution-based metaheuristics in most cases.
The third category of metaheuristics is human-based algorithms, which mimic human behaviors and human interactions in societies. The most popular algorithms belonging to this category are teaching–learning-based Optimization (TLBO) [42], harmony search (HS) [43], past present future (PPF) [44], political optimizer (PO) [45], brain storm optimization (BSO) [46], exchange market algorithm (EMA) [47], league championship algorithm (LCA) [48], poor and rich optimization algorithm [49], driving training-based optimization [50], gaining–sharing knowledge-based algorithm (GSK) [51], imperialist competitive algorithm (ICA) [52], and soccer league competition (SLC) [53].
The fourth category is physics-based algorithms, which are inspired by physical laws, such as inertia, electromagnetic force, and gravitational force. In this category, the algorithms build on physical principles to enable the search agents to interact and navigate the optimization problem's search space toward the near-optimal solution. This category includes several algorithms such as simulated annealing (SA) [54], gravitational search algorithm (GSA) [55], charged system search (CSS) [56], big-bang big-crunch (BBBC) [57], artificial physics algorithm (APA) [58], galaxy-based search algorithm (GbSA) [59], black hole (BH) algorithm [60], river formation dynamics (RFD) algorithm [61], Henry gas solubility optimization (HGSO) algorithm [62], curved space optimization (CSO) [63], central force optimization (CFO) [64], water cycle algorithm (WCA) [65], water waves optimization (WWO) [66], ray optimization (RO) algorithm [67], gravitational local search algorithm (GLSA) [68], small-world optimization algorithm (SWOA) [69], multi-verse optimizer (MVO) [70], intelligent water drops (IWD) algorithm [71], integrated radiation algorithm (IRA) [72], space gravitational algorithm (SGA) [73], ion motion algorithm (IMA) [74], electromagnetism-like algorithm (EMA) [75], equilibrium optimizer (EO) [76], light ray optimization (LRO) [77], and Archimedes optimization algorithm (AOA) [78]. Light ray optimization (LRO) [77] and ray optimization (RO) [67] simulate the reflection and refraction of light rays, respectively, when they pass from one medium to a denser one, which is completely different from the proposed algorithm, referred to as the Light Spectrum Optimizer (LSO), as illustrated later.
The chemistry-based metaheuristic algorithms in the fifth category are inspired by certain chemical laws; examples include gases Brownian motion optimization (GBMO) [79], artificial chemical reaction optimization algorithm (ACROA) [80], and several others [81]. The sixth category, called math-based metaheuristics, comprises algorithms inspired by mathematical functions, such as the golden sine algorithm (GSA) [82], base optimization algorithm (BOA) [83], and sine–cosine algorithm [84]. Table 1 presents the category and inspiration of some of the recently-published metaheuristic algorithms, specifically those published over the last three years.
The last category (others) includes all the metaheuristic algorithms that have not been inspired by the behaviors of creatures or natural phenomena, such as the adaptive large neighborhood search technique (ALNS) [85], large neighborhood search (LNS) [86,87], and the greedy randomized adaptive search procedure (GRASP) [88,89]. For example, the large neighborhood search technique is a metaheuristic based on improving an initial solution using destroy and repair operators.
Table 1. Classification and inspiration of some recently-published metaheuristic algorithms.
Algorithm | Inspiration | Category | Year
Starling murmuration optimizer (SMO) [90] | Starlings' behaviors | Swarm-based | 2022
Snake optimizer (SO) [91] | Mating behavior of snakes | Swarm-based | 2022
Reptile Search Algorithm (RSA) [92] | Hunting behavior of crocodiles | Swarm-based | 2022
Archerfish hunting optimizer (AHO) [93] | Jumping behaviors of the archerfish | Swarm-based | 2022
Water optimization algorithm (WAO) [94] | Chemical and physical properties of water molecules | Physics-based / Chemistry-based | 2022
Ebola optimization search algorithm (EOSA) [95] | Propagation mechanism of the Ebola virus disease | Others | 2022
Beluga whale optimization (BWO) [96] | Behaviors of beluga whales | Swarm-based | 2022
White Shark Optimizer (WSO) | Behaviors of great white sharks | Swarm-based | 2022
Aphid–Ant Mutualism (AAM) [97] | The mutualistic relationship between aphids and ant species | Swarm-based | 2022
Circle Search Algorithm (CSA) [98] | Geometrical features of circles | Math-based | 2022
Pelican optimization algorithm (POA) [99] | The behavior of pelicans during hunting | Swarm-based | 2022
Sheep flock optimization algorithm (SFOA) [100] | Shepherd and sheep behaviors in the pasture | Swarm-based | 2022
Gannet optimization algorithm (GOA) [101] | Behaviors of gannets during foraging | Swarm-based | 2022
Prairie dog optimization (PDO) [102] | The behavior of prairie dogs | Swarm-based | 2022
Driving training-based optimization (DTBO) [50] | The human activity of driving training | Human-based | 2022
Stock exchange trading optimization (SETO) [103] | The behavior of traders and stock price changes | Human-based | 2022
Archimedes optimization algorithm (AOA) [78] | Archimedes' law | Physics-based | 2021
Golden eagle optimizer (GEO) [104] | Golden eagles' hunting process | Swarm-based | 2021
Heap-based optimizer (HBO) [105] | Corporate rank hierarchy | Human-based | 2021
African vultures optimization algorithm (AVOA) [106] | African vultures' lifestyle | Swarm-based | 2021
Artificial gorilla troops optimizer (GTO) [27] | Gorilla troops' social intelligence | Swarm-based | 2021
Quantum-based avian navigation optimizer algorithm (QANA) [107] | Migratory birds' navigation behaviors | Evolution-based (DE-based) | 2021
Colony predation algorithm (CPA) [108] | Corporate predation of animals | Swarm-based | 2021
Lévy flight distribution (LFD) [42] | Lévy flight random walk | Physics-based | 2020
Political Optimizer (PO) [45] | Multi-phased process of politics | Human-based | 2020
Marine predators algorithm (MPA) [21] | Foraging strategy between predators and prey in the ocean | Swarm-based | 2020
Equilibrium optimizer (EO) [76] | Mass balance models | Physics-based | 2020
Over the last few decades, many metaheuristic algorithms have been proposed, but unfortunately, most of them cannot adapt themselves when tackling optimization problems with various characteristics. Therefore, this paper proposes a novel physics-based metaheuristic algorithm called the Light Spectrum Optimizer (LSO) for global optimization over a continuous search space. This metaheuristic is inspired by the dispersion of sunlight rays passing through rain droplets, which causes the sparkling rainbow phenomenon. In particular, the mathematical formulation of the sunlight rays' reflection, refraction, and dispersion can be efficiently utilized to introduce variety into the updating process, preserving population diversity and accelerating convergence when applied to different optimization problems. Experimentally, LSO is extensively assessed on several mathematical benchmarks (CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022) to reveal its performance compared with several well-established metaheuristic algorithms. In addition, LSO is employed to solve some engineering design problems to further affirm its efficiency. The main advantages of the proposed metaheuristic are:
Simple representation.
Robustness.
Balancing between exploration and exploitation.
High-quality solutions.
The power of swarm intelligence.
Low computational complexity.
High scalability.
These advantages are demonstrated in three different validation experiments that include several optimization problems with various characteristics. In addition, LSO is compared with many other optimization algorithms, and the results are analyzed with the appropriate statistical tests. The experimental findings affirm the superiority of LSO over all the rival algorithms. Finally, the main contributions of this study are listed as follows:
  • Proposing a novel physics-based metaheuristic algorithm called the Light Spectrum Optimizer (LSO), inspired by the sparkling rainbow phenomenon caused by sunlight rays passing through rain droplets.
  • Validating LSO on four challenging mathematical benchmark suites (CEC2014, CEC2017, CEC2020, and CEC2022), as well as several engineering design problems.
  • Analyzing the experimental findings with the Wilcoxon rank-sum test, illustrating the merits and highly superior performance of the proposed LSO algorithm.
The remainder of this work is organized as follows. Section 2 gives the background of the inspiration and the mathematical modelling of the rainbow phenomenon. Section 3 explains the mathematical formulation and the search procedure of LSO. In Section 4, various experiments are conducted on the CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022 benchmarks, and their results are analyzed with the proper statistical methods; the sensitivity of LSO is also presented. In Section 5, popular engineering design problems are solved with LSO.

2. Background

The rainbow is one of the most fabulous meteorological wonders. From the physical perspective, it is a half-circle of spectral colors created by the dispersion and internal reflection of sunlight rays that hit spherical rain droplets [109]. When a white ray hits a water droplet, it changes direction by refracting and reflecting inside and outside the droplet (sometimes more than once) [110]. In other words, the rainbow is formed by the refraction, reflection, and dispersion of light rays through water droplets.
According to Descartes's laws [111,112], refraction occurs when light rays travel from one material to another with a different refractive index. When light rays hit the outer surface of a droplet, some reflect away from the droplet while the others are refracted. The refracted light rays hit the inner surface of the droplet, causing another reflection, and then refract away from the droplet at different angles, which disperses the white sunlight into its seven spectral colors: red, orange, yellow, green, blue, indigo, and violet, as depicted in Figure 2. These spectral colors differ according to their angles of deviation, which range from 40° (violet) to 42° (red) [113,114] (see Figure 2).
Mathematically, the refraction and reflection of the rainbow spectrum are described by Snell's law, which states that the ratio between the sines of the incident and refracted angles equals the ratio between the refractive indices of the two media [115]:
$\frac{\sin(\theta_1)}{\sin(\theta_2)} = \frac{k_2}{k_1}$ (1)
where $\theta_1$ is the incident angle, $\theta_2$ is the refracted angle, $k_2$ is the refractive index of water, and $k_1$ is the refractive index of air.
In this work, Snell's law is used in its vector form. As shown in Figure 3, the normal, incident, refracted, and reflected rays are all treated as vectors. The refracted ray can be expressed as [116]:
$L_1 = \frac{1}{k}\left[L_0 - n_A(n_A \cdot L_0)\right] - n_A\left[1 - \frac{1}{k^2} + \frac{1}{k^2}(n_A \cdot L_0)^2\right]^{1/2}$ (2)
where $L_1$ is the refracted light ray, $k$ is the refractive index of the droplet, $L_0$ is the incident light ray, and $n_A$ is the normal at the point of incidence. Meanwhile, the inner reflected ray can be formulated as:
$L_2 = L_1 - 2n_B(n_B \cdot L_1)$ (3)
where $L_2$ is the inner reflected light ray and $n_B$ is the normal at the point of inner reflection. Finally, the outer refracted ray is expressed as:
$L_3 = k\left[L_2 - n_C(n_C \cdot L_2)\right] + n_C\left[1 - k^2 + k^2(n_C \cdot L_2)^2\right]^{1/2}$ (4)
where $L_3$ is the outer refracted light ray and $n_C$ is the normal at the point of outer refraction.
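The three vector equations above can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation; the function names are ours, and the sign conventions follow Eqs. (2)–(4) as reconstructed here.

```python
# Illustrative sketch of the vector forms of refraction and reflection,
# Eqs. (2)-(4). L0 is the incident ray, nA/nB/nC are unit normals, and k is
# the droplet's refractive index.
import numpy as np

def refract_in(L0, nA, k):
    """Inner refraction, Eq. (2): ray entering the droplet."""
    cos_i = np.dot(nA, L0)
    return (1.0 / k) * (L0 - nA * cos_i) - nA * np.sqrt(
        1.0 - 1.0 / k**2 + (cos_i**2) / k**2)

def reflect(L1, nB):
    """Inner reflection, Eq. (3)."""
    return L1 - 2.0 * nB * np.dot(nB, L1)

def refract_out(L2, nC, k):
    """Outer refraction, Eq. (4): ray leaving the droplet."""
    cos_i = np.dot(nC, L2)
    return k * (L2 - nC * cos_i) + nC * np.sqrt(
        1.0 - k**2 + (k**2) * cos_i**2)
```

A useful sanity check on these formulas is that each operation maps a unit ray vector to another unit ray vector.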

3. Light Spectrum Optimizer (LSO)

As discussed before, the rainbow spectrum rays are caused by colorful light dispersion, and the proposed algorithm takes its inspiration from this meteorological phenomenon. In particular, LSO is based on the following assumptions:
(1)
Each colorful ray represents a candidate solution.
(2)
The dispersion of light rays ranges from 40° to 42°; equivalently, the refractive index varies between $k_{red} = 1.331$ and $k_{violet} = 1.344$.
(3)
The population of light rays has a global best solution, which is the best dispersion reached so far.
(4)
The refraction and reflection (inner or outer) are randomly controlled.
(5)
The current solution's fitness value, compared with that of the best-so-far solution, controls the choice between the first and second scattering phases of the colorful rainbow curve. If the two fitness values are very close, the algorithm applies the first scattering phase to exploit the regions around the current solution, since it may be close to the near-optimal solution. Otherwise, the second phase is applied to help the proposed algorithm avoid getting stuck in the regions of the best-so-far solution, since that region might be a local minimum.
Next, the detailed mathematical formulation of LSO will be discussed.
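Assumption (5) can be sketched as a simple selection rule. This is a hypothetical sketch: the closeness threshold `delta` is an illustrative parameter of ours, not a value taken from the paper, whose exact criterion appears in its later equations.

```python
# Hedged sketch of assumption (5): choosing between the two scattering phases.
# `delta` is an illustrative relative-closeness threshold, not the paper's.
def choose_phase(f_current, f_best, delta=1e-3):
    """Return 1 (exploit near the current solution) when the current fitness is
    close to the best-so-far fitness, else 2 (escape a possible local minimum)."""
    if abs(f_current - f_best) <= delta * max(1.0, abs(f_best)):
        return 1
    return 2
```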

3.1. Initialization Step

The search process of LSO begins with the random initialization of the initial population of white lights as:
$x_0 = lb + RV_1 \times (ub - lb)$ (5)
where $x_0$ is the initial solution, $RV_1$ is a vector of uniform random numbers generated in $[0, 1]$ with length equal to the problem dimension $d$, and $lb$ and $ub$ are the lower and upper bounds of the search space, respectively. The generated initial solutions are then evaluated to determine the global and personal best solutions.
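The initialization step of Eq. (5) can be sketched as follows. This is an illustrative NumPy sketch; the seed, population size, and bounds are arbitrary choices of ours.

```python
# Minimal sketch of the initialization step, Eq. (5): a population of N
# candidate "white light" solutions sampled uniformly in [lb, ub]^d.
import numpy as np

rng = np.random.default_rng(seed=42)

def init_population(N, d, lb, ub):
    RV1 = rng.random((N, d))        # uniform random numbers in [0, 1]
    return lb + RV1 * (ub - lb)     # Eq. (5), applied row-wise

pop = init_population(N=30, d=10, lb=-100.0, ub=100.0)
```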

3.2. Colorful Dispersion of Light Rays

In this subsection, we discuss the mathematical formulation of rainbow spectrum directions, colorful rays scattering, and the exploration and exploitation mechanisms of LSO.

3.2.1. The Direction of Rainbow Spectrums

After initialization, the normal vectors of inner refraction $x_{n_A}$, inner reflection $x_{n_B}$, and outer refraction $x_{n_C}$ are calculated as:
$x_{n_A} = \frac{x_t^r}{norm(x_t^r)}$ (6)
$x_{n_B} = \frac{x_t^p}{norm(x_t^p)}$ (7)
$x_{n_C} = \frac{x^*}{norm(x^*)}$ (8)
where $x_t^r$ is a randomly selected solution from the current population at iteration $t$, $x_t^p$ is the current solution at iteration $t$, $x^*$ is the global best solution found so far, and $norm(\cdot)$ denotes the Euclidean norm of a vector, computed as:
$norm(x) = \sqrt{\sum_{j=1}^{d} x_j^2}$ (9)
where $d$ is the number of dimensions of the optimization problem, $x$ is the input vector, and $x_j$ is the jth dimension of $x$. The incident light ray is calculated as:
$X_{mean} = \frac{\sum_{i=1}^{N} x_i}{N}$ (10)
$x_{L_0} = \frac{X_{mean}}{norm(X_{mean})}$ (11)
where $x_{L_0}$ is the incident light ray, $X_{mean}$ is the mean of the current population of solutions $x_i \; (i = 1, \dots, N)$, and $N$ is the population size.
Then, the inner refracted, inner reflected, and outer refracted light rays are calculated as:
$x_{L_1} = \frac{1}{k_r}\left[x_{L_0} - x_{n_A}(x_{n_A} \cdot x_{L_0})\right] - x_{n_A}\left|1 - \frac{1}{k_r^2} + \frac{1}{k_r^2}(x_{n_A} \cdot x_{L_0})^2\right|^{1/2}$ (12)
$x_{L_2} = x_{L_1} - 2x_{n_B}(x_{L_1} \cdot x_{n_B})$ (13)
$x_{L_3} = k_r\left[x_{L_2} - x_{n_C}(x_{n_C} \cdot x_{L_2})\right] + x_{n_C}\left|1 - k_r^2 + k_r^2(x_{n_C} \cdot x_{L_2})^2\right|^{1/2}$ (14)
where $x_{L_1}$, $x_{L_2}$, and $x_{L_3}$ are the inner refracted, inner reflected, and outer refracted light rays, respectively. $k_r$ is the refractive index, which is updated randomly between $k_{red}$ and $k_{violet}$ to select a random spectrum color:
$k_r = k_{red} + RV_1(k_{violet} - k_{red})$ (15)
where $RV_1$ is a uniform random number generated in $[0, 1]$.
Table 2 presents a numerical example illustrating six-dimensional vectors generated by the preceding equations; the vectors $x_t^r$, $x_t^p$, and $x^*$ shown in the same table are randomly generated between −100 and 100. The values of the inner refracted and inner reflected rays are evident from this table. The outer refracted vectors cannot be employed alone to update the individuals, which must remain between −100 and 100, because the rate of change in the updated solutions would be too low; many function evaluations would then be consumed before reaching better solutions. Therefore, the equations described in the next section are adapted to deal with this problem by strongly encouraging the exploration operator of the newly-proposed algorithm.
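The direction calculations of this subsection can be sketched as follows. This is an illustrative sketch, not the authors' code: the names `pop`, `best`, and the current-solution index `p` are our assumptions, and the seed is arbitrary.

```python
# Sketch of Section 3.2.1, Eqs. (6)-(15): spectrum directions for one solution.
import numpy as np

K_RED, K_VIOLET = 1.331, 1.344
rng = np.random.default_rng(0)

def spectrum_directions(pop, best, p):
    r = rng.integers(len(pop))                    # random individual index
    x_nA = pop[r] / np.linalg.norm(pop[r])        # Eq. (6)
    x_nB = pop[p] / np.linalg.norm(pop[p])        # Eq. (7)
    x_nC = best / np.linalg.norm(best)            # Eq. (8)
    mean = pop.mean(axis=0)                       # Eq. (10)
    x_L0 = mean / np.linalg.norm(mean)            # Eq. (11)
    k_r = K_RED + rng.random() * (K_VIOLET - K_RED)          # Eq. (15)
    x_L1 = (x_L0 - x_nA * (x_nA @ x_L0)) / k_r - x_nA * np.sqrt(
        abs(1 - 1 / k_r**2 + (x_nA @ x_L0)**2 / k_r**2))     # Eq. (12)
    x_L2 = x_L1 - 2 * x_nB * (x_nB @ x_L1)                   # Eq. (13)
    x_L3 = k_r * (x_L2 - x_nC * (x_nC @ x_L2)) + x_nC * np.sqrt(
        abs(1 - k_r**2 + k_r**2 * (x_nC @ x_L2)**2))         # Eq. (14)
    return x_L1, x_L2, x_L3
```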

3.2.2. Generating New Colorful Ray: Exploration Mechanism

After calculating the rays' directions, the candidate solutions are generated according to a probability $p$ drawn uniformly from $[0, 1]$. In particular, if $p$ is lower than another number generated randomly between 0 and 1, the new candidate solution is calculated as:
$x_{t+1} = x_t + \epsilon \times RV_1^n \times GI \times (x_{L_1} - x_{L_3}) \times (x^{r_1} - x^{r_2})$ (16)
Otherwise, the new candidate solution is calculated as:
$x_{t+1} = x_t + \epsilon \times RV_2^n \times GI \times (x_{L_2} - x_{L_3}) \times (x^{r_3} - x^{r_4})$ (17)
where $x_{t+1}$ is the newly generated candidate solution and $x_t$ is the current candidate solution at iteration $t$. $r_1$, $r_2$, $r_3$, and $r_4$ are the indices of four solutions selected randomly from the current population. $RV_1^n$ and $RV_2^n$ are vectors of uniform random numbers generated in $[0, 1]$. $\epsilon$ is a scaling factor calculated using (18), and $GI$ is an adaptive control factor based on the inverse incomplete gamma function, computed according to (19).
$\epsilon = a \times RV_3^n$ (18)
where $RV_3^n$ is a vector of normally distributed random numbers with mean zero and standard deviation one, and $a$ is an adaptive parameter calculated using (20).
$GI = a \times r^{-1} \times P^{-1}(a, 1)$ (19)
where $GI$ is the adaptive control factor and $r$ is a uniform random number in $[0, 1]$ whose inverse promotes the exploration operator throughout the optimization process, since inverting a random number in $(0, 1)$ yields a value greater than 1, which may move the current solution to distant regions of the search space in search of a better solution. $P^{-1}$ is the inverse incomplete gamma function evaluated for the corresponding value of $a$.
$a = RV_2 \times \left(1 - \frac{t}{T_{max}}\right)$ (20)
where $t$ is the current iteration number, $RV_2$ is a scalar uniform random number generated in $[0, 1]$, and $T_{max}$ is the maximum number of function evaluations.
When the input numbers are greater than 0.5, the inverse incomplete gamma function generates high values, starting from almost 0.8 and ending at nearly 5.5, as described in Figure 4; otherwise, it generates decimal values down to 0. High inputs to this function therefore encourage the exploration operator. However, the high generated values might take the updated solutions outside the search boundary, in which case the algorithm would degenerate into a randomization process because the boundary-checking method would repeatedly move those infeasible solutions back into the search space. Therefore, the factor a, described in (20), is combined with the inverse incomplete gamma values to reduce their magnitude and avoid this randomization when the input values are high. Both the inverse function and the factor a decrease gradually as the current iteration increases; hence, the optimization process gradually shifts from exploration to exploitation, which might lead to falling into local minima. To support the exploration operator throughout the optimization process, the inverse of a number generated randomly between 0 and 1 is therefore combined with both the inverse function and the factor a, as defined in (19).
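To make the exploration update concrete, the following Python sketch combines Eqs. (16) and (18)–(20) for one individual. The ray-direction vectors x_{L1} and x_{L3} are taken as given inputs (they are computed in the geometric phase not shown here), the "×" between difference vectors is read as element-wise multiplication, and P^{-1}(a, 1) is evaluated through its closed form for shape 1, −ln(1 − a). These readings are our assumptions, not the authors' reference code.

```python
import numpy as np

def exploration_step(x, xL1, xL3, pop, t, T_max, rng):
    """Sketch of one exploration update (Eq. (16)); Eq. (17) is identical in
    structure with xL2 and two other random indices."""
    d = x.size
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    a = rng.random() * (1.0 - t / T_max)          # Eq. (20): decays toward 0 over the run
    eps = a * rng.standard_normal(d)              # Eq. (18): epsilon = a * RV3^n
    # Eq. (19): GI = a * r^(-1) * P^(-1)(a, 1); for shape 1, P(x) = 1 - e^(-x),
    # so the inverse regularized lower incomplete gamma is -ln(1 - a)
    GI = a * (1.0 / rng.random()) * (-np.log1p(-a))
    RV1 = rng.random(d)                           # uniform random vector in [0, 1]
    return x + eps * RV1 * GI * (xL1 - xL3) * (pop[r1] - pop[r2])
```

Because a shrinks as t approaches T_max, both ϵ and GI shrink with it, which is how the step size decays from exploration toward exploitation.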

3.2.3. Colorful Rays Scattering: Exploitation Mechanism

This phase scatters the rays in the direction of the best-so-far solution, the current solution, and a solution selected randomly from the current population to improve the algorithm's exploitation operator. At the start, the algorithm scatters the rays around the current solution to exploit the surrounding region in the hope of reaching better outcomes. However, this might reduce the convergence speed of LSO, so an additional step, applied with a predefined probability β, is integrated to move the current solution in the direction of the best-so-far solution. The mathematical model of scattering around the current solution is as follows:
$$x_{t+1} = x_t + RV_3 \times (x_{r_1} - x_{r_2}) + RV_4^n \times (R < \beta) \times (x^* - x_t) \tag{21}$$
where x* is the best-so-far solution, and x_{r_1} and x_{r_2} are two solutions selected randomly from the current population. RV_3 is a number selected randomly in the interval [0, 1], and RV_4^n is a vector of numbers generated randomly in [0, 1]. The second scattering phase generates rays in a new position based on the best-so-far solution and the current solution according to the following formula:
$$x_{t+1} = 2\cos(\pi \times r_1)\,(x^*) - (x_t) \tag{22}$$
where r_1 is a random value in the interval [0, 1], and π is the ratio of a circle's perimeter to its diameter. Switching between the first and second scattering phases is governed by a predefined probability Pe, as shown in the following formula:
$$x_{t+1} = \begin{cases} \text{Eq. (21)} & \text{if } R < Pe \\ \text{Eq. (22)} & \text{otherwise} \end{cases} \tag{23}$$
where R is a number generated randomly between 0 and 1. The last scattering phase generates a new solution from a randomly selected solution and the current solution according to the following formula:
$$x_{t+1} = \left(x_{r_1}^p + |RV_5| \times (x_{r_2} - x_{r_3})\right) \times U + (1 - U) \times x_t \tag{24}$$
where RV_5 is a scalar drawn from a normal distribution with mean zero and standard deviation one, U is a vector of random 0/1 values, and x_{r_1}^p denotes the personal-best position of the randomly selected solution r_1. |·| is the absolute-value operator, which converts negative values into positive ones and leaves positive values unchanged. Switching between Equations (23) and (24) is based on computing the difference between each solution's fitness value and that of the best-so-far solution and normalizing this difference between 0 and 1 according to (25). If this difference is less than a threshold value R_1 generated randomly between 0 and 1, (23) is applied; otherwise, (24) is applied. Our hypothesis herein is that the probabilistic fitness value computed by (25) determines how close the current light ray is to the best-so-far light ray. If the probabilistic fitness value of the ith light ray is smaller than R_1, it is preferable to scatter this light ray in the direction of the best-so-far solution. The proposed algorithm adopts this hypothesis to maximize its performance on optimization problems that need a strong exploitation operator to accelerate convergence and save computational cost.
$$F' = \left|\frac{F - F_b}{F_b - F_w}\right| \tag{25}$$
where F, F_b, and F_w indicate the fitness values of the current solution, the best-so-far solution, and the worst solution, respectively. However, the probability of applying (23) when the value of F′ is high is small. For example, Figure 5 tracks the values of F′ for an agent and the random number R_1 during the optimization of a test function; it shows that the F′ values exceed R_1 for most of the optimization process, as the red points substantially outnumber the blue points in the nine subgraphs of Figure 5, so the chance of firing the first and second scattering stages is very low when relying solely on the factor F′. Therefore, switching between (23) and (24) is also applied with a predefined probability Ps to further promote the first and second scattering stages and accelerate convergence toward the best-so-far solution. Finally, switching between these two equations is formulated as follows:
$$x_{t+1} = \begin{cases} \text{Eq. (23)} & \text{if } R < Ps \ \text{or} \ F' < R_1 \\ \text{Eq. (24)} & \text{otherwise} \end{cases} \tag{26}$$
where R and R 1 are numbers generated randomly between 0 and 1.
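The three scattering phases and their switching logic can be sketched in Python as below. The personal-best array, the element-wise reading of the products, and the reading of Eq. (22) as 2cos(πr_1)·x* − x_t are our assumptions; the default values of Pe, Ps, and β follow the sensitivity analysis reported later in Section 4.2.

```python
import numpy as np

def exploitation_step(x, x_best, f, f_best, f_worst, pop, pers, rng,
                      beta=0.05, Pe=0.9, Ps=0.05):
    """Sketch of the three scattering phases (Eqs. (21)-(26)). `pers` holds
    personal-best positions; not the authors' reference implementation."""
    d = x.size
    i1, i2, i3 = rng.choice(len(pop), size=3, replace=False)
    # Eq. (25): probabilistic fitness F' of this ray
    F_p = abs((f - f_best) / (f_best - f_worst)) if f_best != f_worst else 0.0
    if rng.random() < Ps or F_p < rng.random():        # Eq. (26)
        if rng.random() < Pe:                          # Eq. (23): pick phase 1 or 2
            # Eq. (21): scatter around x, with a beta-gated drift toward the best
            return (x + rng.random() * (pop[i1] - pop[i2])
                      + rng.random(d) * float(rng.random() < beta) * (x_best - x))
        # Eq. (22), read here as a reflection through the best-so-far solution
        return 2.0 * np.cos(np.pi * rng.random()) * x_best - x
    # Eq. (24): mix a personal best and the current solution with a 0/1 mask U
    U = rng.integers(0, 2, size=d)
    cand = pers[i1] + abs(rng.standard_normal()) * (pop[i2] - pop[i3])
    return cand * U + (1 - U) * x
```

Note how the `or` in Eq. (26) ensures the best-directed phases still fire with probability Ps even when F′ is large, which is exactly the issue Figure 5 motivates.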

3.3. LSO Pseudocode

The pseudocode of the proposed algorithm is stated in Algorithm 1, and the same steps are depicted in Figure 6. Some solutions might move outside the problem's search space; they must be returned into the search space to remain feasible. There are two common ways to convert such infeasible solutions into feasible ones: the first sets dimensions below the lower bound to the lower bound and dimensions above the upper bound to the upper bound, while the second generates new random values within the search boundaries for the dimensions that left the search space. In the proposed algorithm, we hybridize these two methods, using the first to improve the convergence rate and the second to improve exploration. This hybridization is governed by a predefined probability Ph, which is estimated in the experiments section.
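A plausible implementation of this hybrid boundary handling is sketched below; the pairing (Ph selects clamping, otherwise re-sampling) is our reading of the text, not a statement of the authors' exact code.

```python
import numpy as np

def repair_bounds(x, lb, ub, Ph=0.4, rng=None):
    """Return a feasible copy of x. With probability Ph, violating dimensions are
    clamped to the nearest bound (aiding convergence); otherwise they are
    re-sampled uniformly inside the bounds (aiding exploration)."""
    rng = rng or np.random.default_rng()
    bad = (x < lb) | (x > ub)
    if not bad.any():
        return x
    y = x.copy()
    if rng.random() < Ph:
        y[bad] = np.clip(x, lb, ub)[bad]                                 # method 1: clamp
    else:
        y[bad] = lb[bad] + rng.random(bad.sum()) * (ub[bad] - lb[bad])   # method 2: re-sample
    return y
```

Only the violating dimensions are repaired, so the feasible part of a promising solution is preserved either way.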
Algorithm 1: LSO Pseudo-Code
Input: Population size of light rays N , the problem, and the maximum number of function evaluations T max
Output: The best light dispersion x * and its fitness
Generate initial random population of light rays x i     ( i = 1 , 2 , 3 ,   ,   N )
t = 0
1  While ( t < T m a x )
2          for each light ray
3            evaluate the fitness value
4            t = t + 1
5            keep the current global best x *
6            Update the current solution if the updated solution is better.
7            determine normal lines x n A , x n B , & x n C
8            determine direction vectors x L 0 , x L 1 , x L 2 , & x L 3
9            update the refractive index k r
10            update a , ϵ , and G I
11            Generate two random numbers: p , q between 0 and 1
%%%% Generating new colorful ray: Exploration phase
12            if p ≤ q
13             update the next light dispersion using Equation (16)
14            Else
15             update the next light dispersion using Equation (17)
16            end if
17            evaluate the fitness value
18            t = t + 1
19            keep the current global best x *
20            Update the current solution if the updated solution is better.
%%%%Scattering phase: exploitation phase
21            Update the next light dispersion using Equation (26)
22          end for
23  end while
24  Return  x *
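Algorithm 1 can be compressed into the runnable skeleton below on a toy problem. The ray-geometry quantities (lines 7–9) and the detailed update equations are replaced by simplified stand-ins of our own, so this illustrates only the control flow: a budget counted in function evaluations, greedy replacement, and the alternation between the exploration and scattering phases.

```python
import numpy as np

def lso_sketch(obj, lb, ub, N=20, T_max=5000, seed=0):
    """Minimal control-flow skeleton of Algorithm 1 (not the full LSO)."""
    rng = np.random.default_rng(seed)
    d = lb.size
    X = lb + rng.random((N, d)) * (ub - lb)        # initial population of light rays
    F = np.array([obj(x) for x in X])
    t = N                                          # t counts function evaluations
    best, f_best = X[F.argmin()].copy(), F.min()
    while t < T_max:
        for i in range(N):
            if t >= T_max:
                break
            j, k = rng.choice(N, 2, replace=False)
            if rng.random() <= rng.random():       # explore: stand-in for Eqs. (16)/(17)
                a = rng.random() * (1 - t / T_max)
                cand = X[i] + a * rng.standard_normal(d) * (X[j] - X[k])
            else:                                  # exploit: Eq. (21)-style drift to best
                cand = X[i] + rng.random() * (X[j] - X[k]) + rng.random(d) * (best - X[i])
            cand = np.clip(cand, lb, ub)           # simple boundary repair
            fc = obj(cand)
            t += 1
            if fc < F[i]:                          # greedy replacement (line 6/20)
                X[i], F[i] = cand, fc
            if fc < f_best:                        # keep the global best (line 5/19)
                best, f_best = cand.copy(), fc
    return best, f_best
```

On a 5-dimensional sphere function this skeleton steadily drives the best fitness down within the evaluation budget, though it makes no claim to the full algorithm's performance.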

3.4. Searching Behavior and Complexity of LSO

In this section, we will discuss the searching schema of LSO and its computational complexity.
A.
Searching behavior of LSO
As discussed before, LSO alternates among methods of finding the next solution through the use of x*, x_t^r, and x_t^p. In other words, x_{nA} is calculated according to a randomly selected solution, which ensures exploration of the search space, while the calculations of x_{nB} and x_{nC} depend on the global-best and personal-best solutions, respectively, which preserves exploitation of the search space. A further exploitation consolidation is provided by the inverse incomplete gamma function [117], which can be expressed as follows:
$$j = P^{-1}(y, w)$$
$$y = P(j, w) = \frac{1}{\Gamma(w)} \int_0^{j} t^{\,w-1} e^{-t}\, dt$$
where w is a scaling factor that is greater than 0.
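For w = 1 the pair of definitions above has a closed form, which makes the behavior described for Figure 4 easy to verify numerically. The snippet below is a self-contained check, not library code.

```python
import math

def P(j, w=1.0):
    """Regularized lower incomplete gamma; for shape w = 1 it reduces to 1 - e^(-j)."""
    assert w == 1.0, "closed form implemented only for w = 1"
    return 1.0 - math.exp(-j)

def P_inv(y, w=1.0):
    """Inverse of P for w = 1: solve 1 - e^(-j) = y, giving j = -ln(1 - y)."""
    assert w == 1.0, "closed form implemented only for w = 1"
    return -math.log(1.0 - y)

# round trip: P and P_inv invert each other
for y in (0.1, 0.5, 0.9, 0.99):
    assert abs(P(P_inv(y)) - y) < 1e-12
print(round(P_inv(0.55), 2), round(P_inv(0.996), 2))
```

Inputs just above 0.5 map to values near 0.8 and inputs near 1 map to values above 5.5, matching the endpoints quoted for Figure 4.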
Figure 7 depicts LSO's exploration and exploitation operators to illustrate the behavior of LSO experimentally. The figure is plotted by displaying the search trace of an individual during the exploration and exploitation phases for the first two dimensions (X_1 and X_2). From Figure 7a, which depicts the exploitation operator, it is obvious that this operator focuses the search on a specific region, often the best-so-far region, exploring solutions around and inside it in the hope of reaching better solutions in a lower number of function evaluations. On the other side, Figure 7b pictures the exploration behavior of LSO, showing how far LSO can reach: the individuals visit different regions of the search space, far from the current one, to locate the most promising region, which is then attacked by the exploitation operator discussed above.
B.
Space and Time Complexity
(1)
LSO Space Complexity
The space complexity of any metaheuristic can be defined as the maximum space required during the search process. The big O notation of LSO space complexity can be stated as O ( N × d ) , where N is the number of search agents, and d is the dimension of the given optimization problem.
(2)
LSO Time Complexity
The time complexity of LSO is analyzed in this study using asymptotic analysis, which characterizes the performance of an algorithm as a function of the input size; aside from the input size, all other operations, such as the exploration and exploitation operators, are considered constant. There are three common asymptotic notations for analyzing running time: big-O, omega, and theta. The big-O notation is used in this study because it expresses an upper bound on the running time required by LSO to reach its outcomes.
The time complexity of any metaheuristic depends on the required time for each step of the algorithm, like generating the initial population, updating candidate solutions, etc. Thus, the total time complexity is the sum of all such time measures. The time complexity of LSO results from three main algorithm steps:
(1)
Generation of the initial population.
(2)
Calculation of candidate solutions.
(3)
Evaluation of candidate solutions.
The first, initialization, step has a time complexity of O ( N × d ) . The candidate-solution calculation has a time complexity of O ( T m a x × N × d ) , which includes the evaluation of the generated solutions and the update of the current best solution, where T m a x is the maximum number of search iterations. So, the total time complexity of LSO in big-O notation is O ( N × d × T m a x ) , which is detailed in Table 3.

3.5. Difference between LSO, RO, and LRO

This section compares the proposed algorithm to two other metaheuristic algorithms inspired by light reflection and refraction to demonstrate that LSO is completely different from those algorithms in terms of inspiration, formulation of candidate solutions, and the variation of the updating process, as illustrated in Table 4.

4. Experimental Results and Discussions

In this section, we investigate the efficiency of LSO on different benchmarks, including CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022. In addition, sensitivity and scalability analyses of the proposed algorithm are presented.

4.1. Benchmarks and Compared Optimizers

We first validate the efficiency of LSO by solving 20 classical CEC2005 benchmarks selected from [118,119,120]. The selected benchmarks consist of three classes: unimodal, multimodal, and fixed-dimension multimodal. Both the unimodal and multimodal functions of CEC2005 are solved in 100 dimensions. Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 show the characteristics of these three classes: the mathematical formulations of the benchmarks, the dimension (D), the boundaries of the search space (B), and the global optimal solution (OS). Furthermore, the proposed algorithm is tested on challenging benchmarks like CEC2014, CEC2017, CEC2020, and CEC2022, which are also described in Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7. The dimensions of these challenging benchmarks are set to 10. In addition, the Wilcoxon test [121] is performed to analyze the algorithms' performance over the 30 runs with a 5% significance level.
The experimental results of LSO are compared with highly-cited state-of-the-art optimization algorithms like the grey wolf optimizer (GWO) [122], whale optimization algorithm (WOA) [123], and salp swarm algorithm (SSA) [23]; evolutionary algorithms like differential evolution (DE); and recently-published optimizers including the gradient-based optimizer (GBO) [124], artificial gorilla troops optimizer (GTO) [27], Runge–Kutta method (RUN) beyond the metaphor [125], African vultures optimization algorithm (AVOA) [106], equilibrium optimizer (EO) [76], reptile search algorithm (RSA) [92], and slime mould algorithm (SMA) [40]. All comparisons are based on the standard deviation (SD), the average of the fitness values (Avr), and the rank. All the algorithms are coded in MATLAB© 2019, and all experiments are performed on a 64-bit operating system with a 2.60 GHz CPU and 32 GB RAM. For a fair comparison, each algorithm is run 25 independent times, with a maximum number of function evaluations of 50,000 and a population size of 20 (these parameters are kept constant within our experiments for all validated benchmarks). The other algorithms' parameters are kept at their standard values; the used parameters are given in Table 5.
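The per-function comparison protocol can be sketched as follows: collect the 25 fitness values for each pair of algorithms and apply a two-sided rank-sum test at the 5% level. The p-value below uses the normal approximation and assumes no tied values; the paper's tables were presumably produced with a statistics toolbox rather than this hand-rolled version.

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation (no ties)."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    W = sum(rank + 1 for rank, (_, src) in enumerate(pooled) if src == 0)  # rank sum of a
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0                       # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # std of W under H0
    z = (W - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# two sets of 25 outcomes each: clearly separated vs. fully interleaved
sep = ranksum_p(list(range(25)), [v + 100 for v in range(25)])
mix = ranksum_p(list(range(1, 50, 2)), list(range(2, 51, 2)))
print(sep < 0.05, mix > 0.05)  # prints: True True
```

A p-value below 0.05 rejects the null hypothesis of equal distributions, which is how the "difference/no difference" entries in the later tables are decided.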

4.2. Sensitivity Analysis of LSO

Extensive experiments have been conducted to perform a sensitivity analysis of the four controlling parameters of LSO: Pe, Ps, Ph, and β. For each of these parameters, experiments were run with different values on two test functions, F57 and F58, and the obtained outcomes are depicted in Figure 8. This figure shows that the most effective values of the four parameters Pe, Ps, Ph, and β for the two observed test functions are 0.9, 0.05, 0.4, and 0.05, respectively.
The first investigated parameter is Ph (responsible for the tradeoff between the two boundary-checking methods to improve LSO's searchability), which is analyzed in Figure 8a,b using various randomly-picked values between 0 and 1.0. These figures show that LSO could reach the optimal value for the test function F58 when Ph = 0.4; based on that, this value is assigned to Ph in the experiments conducted in this study.
For the parameter β (responsible for improving the convergence speed of LSO), Figure 8c,d depict the performance of LSO under various randomly-picked values between 0 and 0.6 over the two test problems F57 and F58. According to this figure, on F57 the performance of LSO improves substantially as the value of this parameter increases up to 0.3, after which it deteriorates again, while on F58 LSO performs poorly as the value of this parameter increases. Therefore, we found that the value of β most suitable for most test functions is 0.05, since LSO with this value could reach the optimal value for the test problem F58. It is worth mentioning that this parameter is responsible for accelerating the convergence of LSO to reach the near-optimal solution in as few function evaluations as possible. Therefore, an additional experiment was conducted to depict the convergence speed of LSO under various values of β over F58 (see Figure 8e); Figure 8e further affirms that the best value for this parameter is 0.05.
The third parameter is Pe, employed in LSO to switch between the first and second scattering phases. Figure 8f,g compare the influence of various values for this parameter on the test functions F57 and F58. According to these figures, the best value for this parameter is 0.9, since LSO with this value could reach 900 and 805.9 for F58 and F57, respectively. Regarding the parameter Ps, which is employed to further promote the first and second scattering stages and improve the exploitation operator of LSO, Figure 8h,i report the influence of various values ranging between 0 and 0.6. Inspecting these figures shows that LSO reaches its top performance when Ps has a value of 0.05 on the two investigated test functions F57 and F58.

4.3. Evaluation of Exploitation and Exploration Operators

The class of unimodal benchmarks has only one global optimal solution; this feature allows testing and validating a metaheuristic's exploitation capabilities. Table 6 and Table 7 show that LSO is competitive with some other comparators for F1, F2, and F3, while for F3 and F5 LSO performs worse than some of the recently-published rival algorithms. In general, LSO proves that it has a competitive exploitation operator. Multimodal classes, having many optimal solutions, can effectively probe the exploration ability of metaheuristics. As observed in Table 6, LSO is able to reach the optimal solution for 13 benchmarks, especially the fixed-dimension ones, including F11–F20; in addition, LSO is competitive with the other comparators for F6–F8. To affirm the difference between the outcomes produced by LSO and those of the rival algorithms, the Wilcoxon rank-sum test is employed to compute the p-values; a difference is declared when the p-value is less than 5%, and no difference otherwise. Table 7 introduces the p-values on the unimodal and multimodal test functions between LSO and each rival algorithm. These values clarify that there are differences between the outcomes of LSO and most rival algorithms on most test functions. NaN in this table indicates that the independent outcomes of LSO and the corresponding optimizer are identical. As a result, the results and discussion given herein confirm the strength of LSO's exploration and exploitation capabilities.

4.4. LSO for the Challenging CEC2014

Additional validation is carried out on the CEC2014 test suite to ensure that the proposed and other methods perform in accordance with expectations. With the help of this collection of test functions, one can determine whether an algorithm has the ability to explore, escape from local minima, and exploit. The test functions are divided into four categories: unimodal, multimodal, hybrid, and composition; the test suite's characteristics are described in greater detail in Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7, and the dimensions considered in our experiments are all ten. Table 8 shows the average and standard deviation values and the rank metric obtained by LSO and the rival optimizers on CEC2014. Inspecting this table demonstrates that LSO ranks first for 23 out of 30 test functions, while its performance on the other test functions is competitive with some of the other optimizers. The average of the rank values presented in Table 8 for each test function, showing the best-performance order of the algorithms, is computed and displayed in Figure 9. This figure shows the superior performance of LSO, which comes in first rank with a value of 1.7, followed by DE with a value of 4.4, while RSA performs worst.
In terms of the standard deviation, Figure 10 displays the average SD values of the 25 independent runs for each CEC2014 test function; this figure discloses that the outcomes obtained by LSO within these independent runs are substantially similar, since it reaches the lowest average SD of 23, while RSA has the worst average SD. Finally, the Wilcoxon rank-sum statistical test is employed to show the difference between the outcomes of LSO and each rival algorithm; this test relies on two hypotheses: the null hypothesis, indicating that there is no difference, and the alternative one, indicating that there is a difference between the outcomes of each pair of algorithms. The test determines the accepted hypothesis based on the confidence level and the p-value returned after comparing the outcomes of each pair of algorithms; within our experiments, the confidence level is 0.05. The obtained p-values between LSO and each rival algorithm are presented in Table 9. The majority of the p-values in this table are less than 5%, indicating that the alternative hypothesis is accepted; hence, the outcomes of LSO differ from those of the other compared algorithms.

4.5. LSO for the Challenging CEC2017

This section compares the performance of LSO and other optimizers using the CEC2017 test suite to further validate the performance of LSO against the comparators for more challenging mathematical test functions [126]. CEC2017 is composed of four mathematical function families: unimodal (F51–F52), multimodal (F53–F59), composition (F60–F69), and hybrid (F70–F79). As previously described, unimodal test functions are preferable for evaluating the exploitation operator of optimization algorithms because they involve only one global best solution, and multimodal test functions contain multiple local optimal solutions, which makes them particularly well-suited for evaluating the exploration operator of newly proposed optimizers; while composition and hybrid test functions have been designed to evaluate the optimization algorithms’ ability to escape out of local optima. The dimension of this benchmark is set to 10 within the conducted experiments in this section. Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 contains the characteristics of the CEC2017 benchmark.
Table 10 shows the Avr, SD, and Rank values of 25 independent runs obtained by the proposed and rival optimizers on this suite. According to this table, LSO comes first among all optimizers for the unimodal, multimodal, composition, and hybrid test functions, since it reaches better Avr and SD for all test functions. Figure 11 and Figure 12 display the averages of the rank and standard deviation values presented in Table 10 over all test functions for each algorithm. According to these figures, LSO is the best, occupying the first rank with a value of 1 and having the lowest standard deviation of 32, while RSA is the worst.
The Wilcoxon rank-sum test is used to determine the difference between the outcomes of LSO and those of each rival optimizer on the CEC2017 test functions. The test demonstrates a significant difference between the outcomes of LSO and the rival algorithms, as the p-values in Table 11 support the alternative hypothesis. Ultimately, LSO is a strong optimizer, as demonstrated by its ability to defeat GBO, RUN, GTO, AVOA, SMA, RSA, and EO, which are among the most recently published optimizers, as well as four highly-cited metaheuristic algorithms: WOA, GWO, SSA, and DE.

4.6. LSO for the Challenging CEC2020

In this section, additional testing is carried out on the CEC2020 test suite to determine whether the proposed algorithm has stable performance on more challenging test functions. An algorithm's ability to explore, exploit, and stay away from local minima can be evaluated using this suite, which consists of ten test functions divided into four categories: unimodal, multimodal, hybrid, and compositional; its characteristics are shown in Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7. LSO has superior performance on all test functions of the CEC2020 test suite except F83, as evidenced by the Avr, rank, and SD values presented in Table 12 and obtained from 25 independent runs. Figure 13 and Figure 14 show the average of the rank values and the SD over all CEC2020 test functions; according to these figures, LSO is the best, ranked first with a value of 1.7 and with the lowest standard deviation of 38, whereas RSA is the worst, ranked last with a value of 12. Finally, the Wilcoxon rank-sum test is used to determine the difference between the results of LSO and those of each rival optimizer on this suite. The results demonstrate a statistically significant difference between the outcomes of LSO and the rival algorithms for most test functions, as evidenced by the p-values in Table 13, which support the alternative hypothesis. Additionally, this section presents further experimental evidence that LSO belongs among the strong optimizers.

4.7. LSO for the Challenging CEC2022

The proposed and other methods are tested again on the CEC2022 test suite, which contains 12 test functions divided into unimodal, multimodal, hybrid, and compositional categories. The properties of this test suite are also listed in Appendix A (Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7), and their dimension in the conducted experiments is 10. Table 14 shows the Avr, rank, and SD for 25 independent runs, demonstrating LSO's superior performance on 9 out of 12 test functions of the CEC2022 test suite. The average of the rank values and the standard deviation over all CEC2022 test functions are depicted in Figure 15 and Figure 16. These figures show that LSO is the best, ranked first with a value of 1.6 and with the lowest standard deviation of 12, whereas RSA is the worst, ranked last with a value of 12 and with the highest standard deviation. Finally, the Wilcoxon rank-sum test is used to determine whether there is a significant difference between the results of LSO and those of each rival optimizer on this suite. For most test functions, the test results demonstrate a statistically significant difference between the outcomes of LSO and the rival algorithms, as shown by the p-values in Table 15. This section further affirms that LSO belongs to the category of high-performing optimizers.

4.8. The Overall Effectiveness of the Proposed Algorithm

In the previous sections, LSO has been separately assessed using five mathematical benchmarks: CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022, and compared with twenty-two well-established metaheuristic algorithms, but the overall performance of LSO across all benchmarks has yet to be elaborated. Therefore, this section compares the overall performance of LSO and the other algorithms over all the test functions of each benchmark and over all benchmarks. The average rank values and SD values for each benchmark are computed and reported in Table 16. This table also indicates the overall effectiveness of the proposed algorithm and the rival algorithms using an additional metric known as overall effectiveness (OE), computed according to the following formula [127]:
$$OE\,(\%) = \frac{N - L_i}{N} \times 100$$
where N denotes the total number of test functions, and L_i denotes the number of test functions on which the i-th algorithm is a loser. Inspecting this table reveals that the proposed algorithm is superior in terms of SD, rank, and OE for four challenging benchmarks, and competitive in rank and superior in SD and OE for the remaining benchmark. The averages of the rank values, OE values, and SD values of each algorithm across all benchmarks are computed and reported in the last rows of Table 16 to measure the overall effectiveness of each algorithm across all benchmarks. According to those rows, LSO ranks first on all indicators, with a significant margin over the nearest well-performing algorithm. LSO's strong performance is due to the variation of its search process, which gives the algorithm strong exploration and exploitation operators during optimization, helping to preserve population diversity, avoid getting stuck in local optima, and accelerate convergence towards the best-so-far solution. It is worth noting that both the inverse incomplete gamma function and the inverse random number help preserve population diversity and avoid stagnation in local minima, since they can generate large numbers that jump the current solution to far-away regions of the search space throughout the optimization process. On the other hand, the three different scattering stages provide variation in the exploitation operator, allowing LSO to rapidly reach near-optimal solutions for optimization problems of varying difficulty.
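As a small worked example of the OE metric with made-up loss counts (the real L_i values come from Table 16):

```python
def overall_effectiveness(losses, n_functions):
    """OE(%) = (N - L_i) / N * 100 for each algorithm i, where L_i counts the
    test functions on which algorithm i loses."""
    return {alg: 100.0 * (n_functions - L) / n_functions
            for alg, L in losses.items()}

# hypothetical loss counts over a 30-function suite, for illustration only
oe = overall_effectiveness({"LSO": 2, "DE": 11, "RSA": 27}, 30)
print(oe)  # an algorithm losing on 2 of 30 functions scores about 93.3%
```

Higher OE therefore directly reflects the fraction of test functions on which an algorithm is not outperformed.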

4.9. Convergence Curve

Figure 17 compares the convergence rates of LSO and the competing algorithms to show how they differ in reaching the near-optimal solution in fewer function evaluations. This figure illustrates that all LSO convergence curves exhibit an accelerated decreasing pattern within the various stages of the optimization process for the four families of test functions (unimodal, multimodal, composition, and hybrid). The LSO optimizer converges significantly faster than any of the other competing algorithms, as shown by the convergence curves in Figure 17. The exploration and exploitation operators of LSO work together in harmony, which prevents stagnation in local minima and speeds up convergence toward the most promising regions.

4.10. Qualitative Analysis

The following metrics are used to evaluate the LSO performance during optimization: diversity, convergence curve, average fitness value, trajectory in the first dimension, and search history. The diversity metric shows how far apart an individual is on average from other individuals in the population; the convergence curve depicts the best-fitting value that was obtained within each iteration; the average fitness value represents the average of all individuals’ fitness values throughout each iteration; the trajectory curve shows how a solution’s first dimension changes over time as it progresses through the optimization process; and search history shows how a solution’s position changed during the optimization process.
The diversity metric shown in the second column of Figure 18 is computed by summing the difference mean between the positions of each two solutions in the population according to the following formula:
$$Div = \sum_{i=1}^{N} \sum_{j=i+1}^{N} \frac{\sum_{k=1}^{d} \left| x_{i,k} - x_{j,k} \right|}{d}$$
where d indicates the number of dimensions, N stands for the population size, and x_{i,k} and x_{j,k} indicate the kth dimension of the ith and jth solutions, respectively, such that j > i. Observing Figure 18 shows that the diversity metric of LSO decreases over time, indicating that LSO's optimization process gradually shifts from exploration to exploitation. LSO initially explores most regions of the search space to avoid stagnation in local minima and then shifts gradually to the exploitation operator, quickly reducing diversity during the second half of the optimization process to accelerate convergence toward the most promising region discovered thus far.
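The diversity metric can be computed directly from the population matrix. The sketch below sums the mean absolute per-dimension gap over all distinct pairs of solutions, which is how we read the formula above:

```python
import numpy as np

def population_diversity(X):
    """Diversity of a population X of shape (N, d): for every pair of distinct
    solutions, add the mean absolute difference across the d dimensions."""
    N, d = X.shape
    total = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            total += np.abs(X[i] - X[j]).sum() / d
    return total

# two solutions separated by 2 in each of 2 dimensions -> one pair, mean gap 2
print(population_diversity(np.array([[0.0, 0.0], [2.0, 2.0]])))  # → 2.0
```

Plotting this value per iteration yields curves like those in the second column of Figure 18: large while the rays are spread out, shrinking as the population contracts around the best-so-far region.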
The LSO convergence curves show an accelerated reducing pattern on a variety of test functions during the latter half of the optimization process when population diversity is reduced. The exploratory phase is largely transformed into the exploitative phase, as illustrated in the third column of Figure 18. At the beginning of the optimization process, LSO convergence is slow to avoid becoming stuck in local minima. Then, it is highly improved in the second half of the optimization process.
The average fitness history depicted in Figure 18 shows that the population's average fitness decreases over time as all solutions concentrate on exploiting the regions around the best-so-far solution; consequently, the fitness values of all solutions converge toward the region containing the best-so-far solution. Figure 18 also depicts the trajectory of LSO's search in the first dimension as it gradually explores the search space. Driven by the need to find a better solution in a limited time, the exploratory approach is gradually replaced by an exploitative one that narrows the scope of the search. As the trajectory curve shows, the optimization process begins with an exploratory trend and then moves to an exploitative trend in search of better outcomes before terminating.
In the final column of Figure 18, the history of LSO positions is depicted. The search history is investigated by plotting the positions visited by LSO's solutions throughout the whole optimization process for the first two dimensions, $X_1$ and $X_2$, of an optimization problem; the same pattern holds for the other dimensions. As this column shows, LSO does not follow a single pattern for all test functions. Consider F21 as an example: to find an optimal solution for this problem, LSO first explores the entire search space before narrowing its focus to the range 0–50. The search history graph shows that LSO's sampling is more dispersed for the multimodal and composition test functions, while for the unimodal test functions it is more concentrated around the optimum points.

4.11. Computational Cost

The average computational cost consumed by each algorithm on the investigated test functions is shown in Figure 19. The graph shows that the CPU time for all algorithms is nearly the same, except RSA and SMA, which take a long time, and WOA, which takes less than half the time required by the rest. LSO is thus far superior in terms of the convergence speed and the quality of the obtained outcomes, with a negligible difference in CPU time.

5. LSO for Engineering Design Problems

In this section, LSO is applied to solve three constrained engineering benchmarks: the Tension/Compression Spring Design Optimization Problem, the Welded Beam Design Problem, and the Pressure Vessel Design Problem. The best values found by LSO over 25 runs are compared with those of many optimization algorithms. For the compared algorithms, all parameters are left at the defaults suggested by their authors. The parameters of LSO are kept as mentioned in Table 5, except for Ps, which substantially affects the performance of LSO depending on the nature of the solved problem. Therefore, an extensive experiment was conducted under various values of this parameter, and the obtained outcomes are depicted in Figure 20. This figure shows that the performance of LSO is maximized when Ps = 0.6. In addition to all rival algorithms used in the previous comparisons, five recently published metaheuristic algorithms, namely the political optimizer (PO) [45], the continuous-state cellular automata algorithm (CCAA) [128], the snake optimizer (SO) [91], beluga whale optimization (BWO) [96], and driving training-based optimization (DTBO) [50], are added to the next experiments to further show the superiority of LSO when tackling real-world optimization problems such as engineering design problems. Additionally, LSO is compared with some of the state-of-the-art optimizers proposed for each constrained engineering benchmark according to the cited results.
Engineering design problems are characterized by many different constraints. To handle them, we employ a penalty-based constraint-handling technique with LSO. Several methods based on the penalty function exist for handling the constraints of optimization problems. In this work, we implement the death penalty method (the rejection of infeasible solutions) [129], in which infeasible solutions are rejected and regenerated, so an infeasible solution is automatically omitted from the candidate solutions. The main advantages of the death penalty method are its simple implementation and low computational complexity.
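A minimal sketch of this rejection scheme follows; the box bounds and the single constraint below are illustrative examples, not taken from the paper:

```python
import numpy as np

def feasible(x, constraints):
    """A candidate is feasible when every inequality g_i(x) <= 0 holds."""
    return all(g(x) <= 0 for g in constraints)

def random_feasible_solution(lb, ub, constraints, rng, max_tries=10000):
    """Death-penalty style sampling: infeasible candidates are simply
    rejected and a new random candidate is generated in their place."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    for _ in range(max_tries):
        x = lb + rng.random(lb.size) * (ub - lb)
        if feasible(x, constraints):
            return x
    raise RuntimeError("no feasible candidate found")

# Illustrative constraint: x0 + x1 >= 1 on the box [0, 2]^2,
# written in the standard g(x) <= 0 form.
rng = np.random.default_rng(0)
g = [lambda x: 1.0 - x[0] - x[1]]
x = random_feasible_solution([0, 0], [2, 2], g, rng)
print(x, feasible(x, g))
```

In the actual algorithm, the same rejection test is applied to solutions produced by LSO's update rules rather than to purely random samples.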

5.1. Tension/Compression Spring Design Optimization Problem

The main objective of the tension/compression spring design optimization problem is to find the minimum volume f(X) of a coil spring under compression, subject to constraints on minimum deflection, shear stress, surge frequency, the outside diameter, and the design variables (see Figure 21a). Mathematically, the problem can be formulated as [130]:
$$\min f(X) = (x_3 + 2)\, x_2 x_1^2$$
$$\text{s.t.}\quad g_1(X) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \le 0$$
$$g_2(X) = \frac{x_2 (4x_2 - x_1)}{12566\, x_1^3 (x_2 - x_1)} + \frac{1}{5108\, x_1^2} - 1 \le 0$$
$$g_3(X) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \le 0$$
$$g_4(X) = \frac{2(x_1 + x_2)}{3} - 1 \le 0$$
$$0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15,$$
where $x_1$ is the wire diameter, $x_2$ is the mean coil diameter, and $x_3$ is the number of active coils.
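The objective and constraints above translate directly into code. The candidate design used below is a near-optimal solution commonly reported in the literature for this problem; it is quoted here only as an illustrative feasibility check, not as the exact value found by LSO:

```python
def spring_volume(x):
    """Objective f(X) = (x3 + 2) * x2 * x1^2."""
    x1, x2, x3 = x  # wire diameter, mean coil diameter, number of coils
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x):
    """Inequality constraints g1..g4, each required to be <= 0."""
    x1, x2, x3 = x
    g1 = 1.0 - (x2**3 * x3) / (71785.0 * x1**4)
    g2 = (x2 * (4.0 * x2 - x1)) / (12566.0 * x1**3 * (x2 - x1)) \
         + 1.0 / (5108.0 * x1**2) - 1.0
    g3 = 1.0 - (140.45 * x1) / (x2**2 * x3)
    g4 = 2.0 * (x1 + x2) / 3.0 - 1.0
    return [g1, g2, g3, g4]

# Near-optimal design reported in the literature (illustrative only).
x = (0.051690, 0.356750, 11.287126)
print(spring_volume(x))  # ~0.012665
print(all(g <= 1e-3 for g in spring_constraints(x)))
```

Constraints g1 and g2 are active (close to zero) at this design, which is why a small tolerance is used in the feasibility check.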
LSO is compared with 15 additional algorithms (mathematical techniques or metaheuristics) selected from the literature, including evolution strategies (ES) [131], gravitational search algorithm (GSA) [55], Tabu search (TS) [132], swarm strategy [133], unified particle swarm optimization (UPSO) [134], cultural algorithms (CA) [132], two-point adaptive nonlinear approximations-3 (TANA-3) [135], particle swarm optimization (PSO) [136], ant colony optimization (ACO) [137], genetic algorithm (GA) [138], quantum evolutionary algorithm (QEA) [137], ray optimization (RO) [67], probability collectives (PC) [139], social interaction genetic algorithm (SIGA) [140], and parallel genetic algorithm with social interaction (PSIGA) [138]. Table 17 shows the results obtained by LSO for the tension/compression spring design optimization problem. As shown, LSO reaches lower minimum values than the other algorithms. Figure 21b shows the convergence speed of LSO.

5.2. Welded Beam Design Problem

The problem of designing welded beams can be defined as finding the feasible dimensions of a welded beam x 1 , x 2 , x 3 , and x 4 (which are the thickness of weld, length of the clamped bar, the height of the bar, and thickness of the bar, respectively) that minimize the total manufacturing cost f ( X ) subject to a set of constraints. Figure 22a shows a representation of the weld beam design problem. Mathematically, the problem can be formulated as the following [130]:
$$\min f(X) = 1.10471\, x_1^2 x_2 + 0.04811\, x_3 x_4 (L + x_2)$$
$$\text{s.t.}\quad g_1(X) = \tau(X) - \tau_{max} \le 0$$
$$g_2(X) = \sigma(X) - \sigma_{max} \le 0$$
$$g_3(X) = x_1 - x_4 \le 0$$
$$g_4(X) = 0.10471\, x_1^2 + 0.04811\, x_3 x_4 (L + x_2) - 5 \le 0$$
$$g_5(X) = 0.125 - x_1 \le 0$$
$$g_6(X) = \delta(X) - \delta_{max} \le 0$$
$$g_7(X) = P - P_c(X) \le 0$$
$$\tau(X) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \frac{M R}{J}$$
$$M = P \left( L + \frac{x_2}{2} \right), \quad R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2},$$
$$J = 2 \left( \sqrt{2}\, x_1 x_2 \left( \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right) \right)$$
$$\sigma(X) = \frac{6 P L}{x_3^2 x_4}, \quad \delta(X) = \frac{4 P L^3}{E x_3^3 x_4},$$
$$P_c(X) = \frac{4.013\, E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right)$$
$$P = 6000\ \text{lb}, \quad L = 14\ \text{in}, \quad E = 30 \times 10^6\ \text{psi}, \quad G = 12 \times 10^6\ \text{psi}$$
$$\tau_{max} = 13600\ \text{psi}, \quad \sigma_{max} = 30000\ \text{psi}, \quad \delta_{max} = 0.25\ \text{in}$$
$$0.1 \le x_1 \le 2, \quad 0.1 \le x_2 \le 10, \quad 0.1 \le x_3 \le 10, \quad 0.1 \le x_4 \le 2,$$
where τ is shear stress, σ is the bending stress, P C is the buckling load, and δ is the end deflection.
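The nested stress and buckling expressions above are easy to mis-transcribe, so a direct implementation is a useful check. The design vector below is a near-optimal solution widely reported in the literature for this problem, used here only for illustration:

```python
import math

# Problem constants (from the formulation above).
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Manufacturing cost f(X)."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (L + x2)

def welded_beam_state(x):
    """Shear stress tau, bending stress sigma, deflection delta, buckling load Pc."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2.0) * x1 * x2)            # primary shear stress
    M = P * (L + x2 / 2.0)                            # bending moment
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0) ** 2))
    tau_pp = M * R / J                                # torsional shear stress
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (x3**2 * x4)
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)
    Pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2) \
         * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G)))
    return tau, sigma, delta, Pc

# Near-optimal design reported in the literature (illustrative only).
x = (0.20573, 3.47049, 9.03662, 0.20573)
tau, sigma, delta, Pc = welded_beam_state(x)
print(welded_beam_cost(x))  # ~1.7249
print(tau, sigma, delta, Pc)
```

At this design, the shear-stress and bending-stress constraints are active (tau and sigma sit essentially at their limits), which is typical of constrained optima for this benchmark.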
The proposed algorithm is compared with nine additional algorithms from the literature, including hybrid and improved ones: RO [67], WOA [41], HS [141], hybrid charged system search and particle swarm optimization algorithms I (CSS&PSO I) [136], hybrid charged system search and particle swarm optimization algorithms II (CSS&PSO II) [136], particle swarm optimization with struggle selection (PSOStr) [140], firefly algorithm (FA) [142], differential evolution with dynamic stochastic selection (DE) [143], and hybrid artificial immune system and genetic algorithm (AIS-GA). As shown in Table 18, on one side, LSO achieves results competitive with GBO, GTO, EO, and DE and is superior in terms of convergence speed, as shown in Figure 22b. On the other side, LSO is superior to all the other optimizers.

5.3. Pressure Vessel Design Problem

The problem of pressure vessel design can be described as minimizing the total fabrication cost of a cylindrical pressure vessel with the consideration of optimization constraints. The mathematical formulation of the problem can be expressed as [41]:
$$\min f(X) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3$$
$$\text{s.t.}\quad g_1(X) = -x_1 + 0.0193\, x_3 \le 0$$
$$g_2(X) = -x_2 + 0.00954\, x_3 \le 0$$
$$g_3(X) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \le 0$$
$$g_4(X) = x_4 - 240 \le 0$$
$$0 \le x_1, x_2 \le 99, \quad 10 \le x_3, x_4 \le 200,$$
where x 1 is the shell thickness, x 2 is the head thickness, x 3 is the inner radius, and x 4 is the cylindrical section length (without the head), as shown in Figure 23a.
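This formulation is likewise straightforward to encode. The candidate design below is the best solution widely reported in the literature for this benchmark, quoted here only as an illustrative check of the objective and constraints:

```python
import math

def vessel_cost(x):
    """Total fabrication cost f(X)."""
    x1, x2, x3, x4 = x  # shell thickness, head thickness, inner radius, length
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x):
    """Inequality constraints g1..g4, each required to be <= 0."""
    x1, x2, x3, x4 = x
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0
    g4 = x4 - 240.0
    return [g1, g2, g3, g4]

# Widely reported best design (illustrative only).
x = (0.8125, 0.4375, 42.098446, 176.636596)
print(vessel_cost(x))  # ~6059.71
print([round(g, 4) for g in vessel_constraints(x)])
```

Constraints g1 and g3 are active at this design (their values are essentially zero, up to the rounding of the quoted digits), while g2 and g4 have slack.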
For the problem of pressure vessel design, the proposed algorithm is compared with 16 metaheuristics and mathematical methods, including Harris Hawks optimization (HHO) [22], grey wolf optimizer (GWO) [122], hybrid particle swarm optimization with a feasibility-based rule (HPSO) [144], Gaussian quantum-behaved particle swarm optimization (G-QPSO) [145], water evaporation optimization (WEO) [146], bat algorithm (BA) [147], MFO [148], charged system search (CSS) [56], multimembered evolution strategy (ESs) [149], genetic algorithm for design and optimization of composite laminates (BIANCA) [150], modified differential evolution (MDDE) [151], differential evolution with level comparison (DELC) [152], WOA [41], niched-Pareto genetic algorithm (NPGA) [153], the Lagrangian multiplier method [41], and branch-and-bound [41]. As observed from Table 19, LSO significantly reaches the best results for this problem compared with the other algorithms, except for DE and GTO, which achieve competitive best results; however, LSO is superior in average convergence speed, as shown in Figure 23b.

6. Conclusions

In this work, a novel LSO metaheuristic algorithm is introduced that is inspired by sunlight dispersion through a water droplet, which causes the rainbow phenomenon. The proposed algorithm is tested on several selected benchmarks. For the CEC2005 benchmarks, LSO performs significantly well, especially for the fixed-dimension multimodal functions, which indicates that LSO has high exploration capabilities. In addition, the sensitivity analysis of LSO parameters shows that the selected values of the parameters are the best. Finally, for CEC2020, CEC2017, CEC2022, and CEC2014, LSO has superior performance compared with several well-established and recently published metaheuristic algorithms like DE, WOA, SMA, EO, GWO, GTO, GBO, RSA, SSA, RUN, and AVOA, which were selected for our comparison due to their stability and recent publication compared with some of the other optimization algorithms like MBO, EWA, EHO, MS, HGS, CPA, and HHO proposed for tackling the same benchmarks. This indicates that LSO has a good balance between exploration and exploitation. LSO achieves competitive results for engineering design problems compared with other algorithms, even improved and hybrid metaheuristics. For future work, we suggest developing binary and multi-objective versions of LSO. In addition, several enhancements can be proposed for LSO, such as using fuzzy controllers or chaotic maps to define LSO's controlling probabilities and hybridizing it with other algorithms. Finally, we suggest using LSO for solving recent real-life optimization problems such as sensor allocation, smart management of the power grid, and smart routing of vehicles.

Author Contributions

Conceptualization, M.A.-B. and R.M.; methodology, M.A.-B., R.M. and K.M.S.; software, M.A.-B. and R.M.; validation, M.A.-B., R.M., R.K.C. and K.M.S.; formal analysis, M.A.-B., R.M., R.K.C. and K.M.S.; investigation, M.A.-B., R.M., R.K.C. and K.M.S.; data curation, M.A.-B., R.M., R.K.C. and K.M.S.; writing—original draft preparation, M.A.-B. and R.M.; writing—review and editing M.A.-B., R.M., R.K.C. and K.M.S.; visualization, M.A.-B., R.M., R.K.C. and K.M.S.; supervision, M.A.-B. and R.K.C.; funding acquisition, R.K.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The code used in this paper can be obtained from this publicly accessible platform: https://drive.matlab.com/sharing/633c724c-9b52-4145-beba-c9e49f9df42e (accessed on 9 January 2022).

Acknowledgments

We would like to thank the Editor and anonymous reviewers for their thorough comments, which have helped us improve the manuscript—also, special thanks to Seyedali Mirjalili for his excellent suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Nomenclature of symbols used in this study
θ: Angle of reflection or refraction
k: Refractive index of a medium
L_i (i = 0, …, 3): The ith refracted or reflected light ray
n_s: Normal line at a point s
p ∈ [0, 1]: Controlling probability of inner and outer reflection and refraction
q ∈ [0, 1]: Controlling probability of the first scattering phase
z ∈ [0, 1]: Controlling probability of the second scattering phase
t: Iteration number
x_0: Initial candidate solution
N: Population size
d: Problem dimension
lb: Lower bound of the search space
ub: Upper bound of the search space
RV ∈ [0, 1]: Vector of uniform random numbers
x_t: Candidate solution at iteration t
w ∈ [0, ∞): Scaling factor
GI: Scaling factor
ϵ: Scaling factor
g_inv: Inverse incomplete gamma function

Appendix A

Table A1. Description of uni-modal benchmark functions.

ID | Benchmark | D | Domain | Global Opt.
F1 | $f_1(x) = \sum_{i=1}^{n} x_i^2$ | 100 | [−100, 100] | 0
F2 | $f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 100 | [−10, 10] | 0
F3 | $f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 100 | [−100, 100] | 0
F4 | $f_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 100 | [−100, 100] | 0
F5 | $f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 100 | [−30, 30] | 0
Table A2. Description of multi-modal benchmark functions.

ID | Benchmark | D | Domain | Global Opt.
F6 | $f_6(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | 100 | [−5.12, 5.12] | 0
F7 | $f_7(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$ | 100 | [−32, 32] | 0
F8 | $f_8(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 100 | [−600, 600] | 0
F9 | $f_9(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, s, l, v) = l (x_i - s)^v$ if $x_i > s$; $0$ if $-s \le x_i \le s$; $l (-x_i - s)^v$ if $x_i < -s$ | 100 | [−50, 50] | 0
F10 | $f_{10}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 100 | [−1.28, 1.28] | 0
Table A3. Fixed-dimension multi-modal benchmark.

ID | Benchmark | D | Domain | Global Opt.
F11 | $f_{11}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
F12 | $f_{12}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.00030
F13 | $f_{13}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316
F14 | $f_{14}(x) = \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos x_1 + 10$ | 2 | [−5, 5] | 0.39789
F15 | $f_{15}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−2, 2] | 3
F16 | $f_{16}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1, 3] | −3.86
F17 | $f_{17}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32
F18 | $f_{18}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
F19 | $f_{19}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
F20 | $f_{20}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
Table A4. Description of CEC-2014 benchmark.

Type | ID | Functions | Global Opt. | Domain
Unimodal functions | F21 (CF1) | Rotated High Conditioned Elliptic Function | 100 | [−100, 100]
 | F22 (CF2) | Rotated Bent Cigar Function | 200 | [−100, 100]
 | F23 (CF3) | Rotated Discus Function | 300 | [−100, 100]
Simple multimodal test functions | F24 (CF4) | Shifted and Rotated Rosenbrock's Function | 400 | [−100, 100]
 | F25 (CF5) | Shifted and Rotated Ackley's Function | 500 | [−100, 100]
 | F26 (CF6) | Shifted and Rotated Weierstrass Function | 600 | [−100, 100]
 | F27 (CF7) | Shifted and Rotated Griewank's Function | 700 | [−100, 100]
 | F28 (CF8) | Shifted Rastrigin's Function | 800 | [−100, 100]
 | F29 (CF9) | Shifted and Rotated Rastrigin's Function | 900 | [−100, 100]
 | F30 (CF10) | Shifted Schwefel's Function | 1000 | [−100, 100]
 | F31 (CF11) | Shifted and Rotated Schwefel's Function | 1100 | [−100, 100]
 | F32 (CF12) | Shifted and Rotated Katsuura Function | 1200 | [−100, 100]
 | F33 (CF13) | Shifted and Rotated HappyCat Function | 1300 | [−100, 100]
 | F34 (CF14) | Shifted and Rotated HGBat Function | 1400 | [−100, 100]
 | F35 (CF15) | Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function | 1500 | [−100, 100]
 | F36 (CF16) | Shifted and Rotated Expanded Scaffer's F6 Function | 1600 | [−100, 100]
Hybrid test functions | F37 (CF17) | Hybrid Function 1 | 1700 | [−100, 100]
 | F38 (CF18) | Hybrid Function 2 | 1800 | [−100, 100]
 | F39 (CF19) | Hybrid Function 3 | 1900 | [−100, 100]
 | F40 (CF20) | Hybrid Function 4 | 2000 | [−100, 100]
 | F41 (CF21) | Hybrid Function 5 | 2100 | [−100, 100]
 | F42 (CF22) | Hybrid Function 6 | 2200 | [−100, 100]
Composition test functions | F43 (CF23) | Composition Function 1 | 2300 | [−100, 100]
 | F44 (CF24) | Composition Function 2 | 2400 | [−100, 100]
 | F45 (CF25) | Composition Function 3 | 2500 | [−100, 100]
 | F46 (CF26) | Composition Function 4 | 2600 | [−100, 100]
 | F47 (CF27) | Composition Function 5 | 2700 | [−100, 100]
 | F48 (CF28) | Composition Function 6 | 2800 | [−100, 100]
 | F49 (CF29) | Composition Function 7 | 2900 | [−100, 100]
 | F50 (CF30) | Composition Function 8 | 3000 | [−100, 100]
Table A5. Description of CEC-2017 benchmark.

Type | ID | Functions | Global Opt. | Domain
Unimodal functions | F51 (CF1) | Shifted and Rotated Bent Cigar Function | 100 | [−100, 100]
 | F52 (CF3) | Shifted and Rotated Zakharov Function | 300 | [−100, 100]
Simple multimodal test functions | F53 (CF4) | Shifted and Rotated Rosenbrock's Function | 400 | [−100, 100]
 | F54 (CF5) | Shifted and Rotated Rastrigin's Function | 500 | [−100, 100]
 | F55 (CF6) | Shifted and Rotated Expanded Scaffer's Function | 600 | [−100, 100]
 | F56 (CF7) | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700 | [−100, 100]
 | F57 (CF8) | Shifted and Rotated Non-Continuous Rastrigin's Function | 800 | [−100, 100]
 | F58 (CF9) | Shifted and Rotated Levy Function | 900 | [−100, 100]
 | F59 (CF10) | Shifted and Rotated Schwefel's Function | 1000 | [−100, 100]
Hybrid test functions | F60 (CF11) | Hybrid Function 1 | 1100 | [−100, 100]
 | F61 (CF12) | Hybrid Function 2 | 1200 | [−100, 100]
 | F62 (CF13) | Hybrid Function 3 | 1300 | [−100, 100]
 | F63 (CF14) | Hybrid Function 4 | 1400 | [−100, 100]
 | F64 (CF15) | Hybrid Function 5 | 1500 | [−100, 100]
 | F65 (CF16) | Hybrid Function 6 | 1600 | [−100, 100]
 | F66 (CF17) | Hybrid Function 7 | 1700 | [−100, 100]
 | F67 (CF18) | Hybrid Function 8 | 1800 | [−100, 100]
 | F68 (CF19) | Hybrid Function 9 | 1900 | [−100, 100]
 | F69 (CF20) | Hybrid Function 10 | 2000 | [−100, 100]
Composition test functions | F70 (CF21) | Composition Function 1 | 2100 | [−100, 100]
 | F71 (CF22) | Composition Function 2 | 2200 | [−100, 100]
 | F72 (CF23) | Composition Function 3 | 2300 | [−100, 100]
 | F73 (CF24) | Composition Function 4 | 2400 | [−100, 100]
 | F74 (CF25) | Composition Function 5 | 2500 | [−100, 100]
 | F75 (CF26) | Composition Function 6 | 2600 | [−100, 100]
 | F76 (CF27) | Composition Function 7 | 2700 | [−100, 100]
 | F77 (CF28) | Composition Function 8 | 2800 | [−100, 100]
 | F78 (CF29) | Composition Function 9 | 2900 | [−100, 100]
 | F79 (CF30) | Composition Function 10 | 3000 | [−100, 100]
Table A6. Description of CEC2020 benchmarks.

Type | ID | Functions | Global Opt. | Domain
Unimodal | F80 (CF1) | Shifted and Rotated Bent Cigar Function | 100 | [−100, 100]
Multimodal | F81 (CF2) | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700 | [−100, 100]
 | F82 (CF3) | Hybrid Function 1 | 1100 | [−100, 100]
Hybrid | F83 (CF4) | Hybrid Function 2 | 1700 | [−100, 100]
 | F84 (CF5) | Hybrid Function 3 | 1900 | [−100, 100]
 | F85 (CF6) | Hybrid Function 4 | 2100 | [−100, 100]
 | F86 (CF7) | Hybrid Function 5 | 1600 | [−100, 100]
Composition | F87 (CF8) | Composition Function 1 | 2200 | [−100, 100]
 | F88 (CF9) | Composition Function 2 | 2400 | [−100, 100]
 | F89 (CF10) | Composition Function 3 | 2500 | [−100, 100]
Table A7. Description of CEC2022 benchmark.

Type | No. | Functions | Global Opt. | Domain
Unimodal function | F90 | Shifted and full Rotated Zakharov Function | 300 | [−100, 100]
Basic functions | F91 | Shifted and full Rotated Rosenbrock's Function | 400 | [−100, 100]
 | F92 | Shifted and full Rotated Expanded Schaffer's f6 Function | 600 | [−100, 100]
 | F93 | Shifted and full Rotated Non-continuous Rastrigin's Function | 800 | [−100, 100]
 | F94 | Shifted and Rotated Levy Function | 900 | [−100, 100]
Hybrid functions | F95 | Hybrid function 1 (N = 3) | 1800 | [−100, 100]
 | F96 | Hybrid function 2 (N = 6) | 2000 | [−100, 100]
 | F97 | Hybrid function 3 (N = 5) | 2200 | [−100, 100]
Composite functions | F98 | Composite function 1 (N = 5) | 2300 | [−100, 100]
 | F99 | Composite function 2 (N = 4) | 2400 | [−100, 100]
 | F100 | Composite function 3 (N = 5) | 2600 | [−100, 100]
 | F101 | Composite function 4 (N = 6) | 2700 | [−100, 100]

References

1. Fausto, F.; Reyna-Orta, A.; Cuevas, E.; Andrade, Á.G.; Perez-Cisneros, M. From ants to whales: Metaheuristics for all tastes. Artif. Intell. Rev. 2020, 53, 753–810.
2. Ross, O.H.M. A review of quantum-inspired metaheuristics: Going from classical computers to real quantum computers. IEEE Access 2019, 8, 814–838.
3. Guo, Y.-N.; Zhang, X.; Gong, D.-W.; Zhang, Z.; Yang, J.-J. Novel interactive preference-based multiobjective evolutionary optimization for bolt supporting networks. IEEE Trans. Evol. Comput. 2019, 24, 750–764.
4. Guo, Y.; Yang, H.; Chen, M.; Cheng, J.; Gong, D. Ensemble prediction-based dynamic robust multi-objective optimization methods. Swarm Evol. Comput. 2019, 48, 156–171.
5. Ji, J.-J.; Guo, Y.-N.; Gao, X.-Z.; Gong, D.-W.; Wang, Y.-P. Q-Learning-Based Hyperheuristic Evolutionary Algorithm for Dynamic Task Allocation of Crowdsensing. IEEE Trans. Cybern. 2021, 34606469.
6. Loubière, P.; Jourdan, A.; Siarry, P.; Chelouah, R. A sensitivity analysis indicator to adapt the shift length in a metaheuristic. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–6.
7. Gao, K.-Z.; He, Z.M.; Huang, Y.; Duan, P.-Y.; Suganthan, P.N. A survey on meta-heuristics for solving disassembly line balancing, planning and scheduling problems in remanufacturing. Swarm Evol. Comput. 2020, 57, 100719.
8. Li, J.; Lei, H.; Alavi, A.H.; Wang, G.-G. Elephant herding optimization: Variants, hybrids, and applications. Mathematics 2020, 8, 1415.
9. Feng, Y.; Deb, S.; Wang, G.-G.; Alavi, A.H. Monarch butterfly optimization: A comprehensive review. Expert Syst. Appl. 2021, 168, 114418.
10. Li, M.; Wang, G.-G. A Review of Green Shop Scheduling Problem. Inf. Sci. 2022, 589, 478–496.
11. Li, W.; Wang, G.-G.; Gandomi, A.H. A survey of learning-based intelligent optimization algorithms. Arch. Comput. Methods Eng. 2021, 28, 3781–3799.
12. Han, J.-H.; Choi, D.-J.; Park, S.-U.; Hong, S.-K. Hyperparameter optimization for multi-layer data input using genetic algorithm. In Proceedings of the 2020 IEEE 7th International Conference on Industrial Engineering and Applications (ICIEA), Bangkok, Thailand, 16–21 April 2020; pp. 701–704.
13. Tang, Y.; Jia, H.; Verma, N. Reducing energy of approximate feature extraction in heterogeneous architectures for sensor inference via energy-aware genetic programming. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 67, 1576–1587.
14. Rechenberg, I. Evolutionsstrategien. In Simulationsmethoden in der Medizin und Biologie; Springer: New York, NY, USA, 1978; pp. 83–114.
15. Dasgupta, D.; Michalewicz, Z. Evolutionary Algorithms in Engineering Applications; Springer Science & Business Media: Berlin, Germany, 2013.
16. Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558.
17. Sharif, M.; Amin, J.; Raza, M.; Yasmin, M.; Satapathy, S.C. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit. Lett. 2020, 129, 150–157.
18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
19. Tsipianitis, A.; Tsompanakis, Y. Improved Cuckoo Search algorithmic variants for constrained nonlinear optimization. Adv. Eng. Softw. 2020, 149, 102865.
20. Adithiyaa, T.; Chandramohan, D.; Sathish, T. Flower Pollination Algorithm for the Optimization of Stair casting parameter for the preparation of AMC. Mater. Today Proc. 2020, 21, 882–886.
21. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
22. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872.
23. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
24. Połap, D.; Woźniak, M. Red fox optimization algorithm. Expert Syst. Appl. 2021, 166, 114107.
25. Zhang, M.; Wen, G.; Yang, J. Duck swarm algorithm: A novel swarm intelligence algorithm. arXiv 2021, arXiv:2112.13508.
26. Braik, M.S. Chameleon Swarm Algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 2021, 174, 114685.
27. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958.
28. Chu, S.-C.; Tsai, P.-W.; Pan, J.-S. Cat swarm optimization. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; pp. 854–858.
29. Shamsaldin, A.S.; Rashid, T.A.; Al-Rashid Agha, R.A.; Al-Salihi, N.K.; Mohammadi, M. Donkey and smuggler optimization algorithm: A collaborative working approach to path finding. J. Comput. Des. Eng. 2019, 6, 562–583.
30. Bolaji, A.L.; Al-Betar, M.A.; Awadallah, M.A.; Khader, A.T.; Abualigah, L. A comprehensive review: Krill Herd algorithm (KH) and its applications. Appl. Soft Comput. 2016, 49, 437–446.
31. Wang, G.-G.; Deb, S.; Coelho, L.d.S. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 1–5.
32. Yang, C.; Tu, X.; Chen, J. Algorithm of marriage in honey bees optimization based on the wolf pack search. In Proceedings of the 2007 International Conference on Intelligent Pervasive Computing (IPC 2007), Jeju, Korea, 11–13 October 2007; pp. 462–467.
33. Oftadeh, R.; Mahjoob, M.J.; Shariatpanahi, M. A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Comput. Math. Appl. 2010, 60, 2087–2098.
34. Mucherino, A.; Seref, O. Monkey search: A novel metaheuristic search for global optimization. In Proceedings of the Conference on Data Mining, System Analysis and Optimization in Biomedicine, Gainesvile, FL, USA, 28–30 March 2007; pp. 162–173.
35. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A new bio-inspired algorithm: Chicken swarm optimization. In Proceedings of the International Conference in Swarm Intelligence, Hefei, China, 17–20 October 2014; pp. 86–94.
36. MiarNaeimi, F.; Azizyan, G.; Rashki, M. Horse herd optimization algorithm: A nature-inspired algorithm for high-dimensional optimization problems. Knowl.-Based Syst. 2021, 213, 106711.
37. Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2018, 10, 151–164.
38. Wang, G.G.; Deb, S.; Coelho, L.D.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22.
39. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014.
40. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
41. Zhang, J.; Wang, J.S. Improved whale optimization algorithm based on nonlinear adaptive weight and golden sine operator. IEEE Access 2020, 8, 77013–77048.
42. Houssein, E.H.; Saad, M.R.; Hashim, F.A.; Shaban, H.; Hassaballah, M. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 94, 103731.
43. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
44. Naik, A.; Satapathy, S.C. Past present future: A new human-based algorithm for stochastic optimization. Soft Comput. 2021, 25, 12915–12976.
45. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709.
46. Shi, Y. Brain storm optimization algorithm. In Proceedings of the International Conference in Swarm Intelligence, Chongqing, China, 12–15 June 2011; pp. 303–309.
47. Ghorbani, N.; Babaei, E. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187.
48. Kashan, A.H. League Championship Algorithm (LCA): An algorithm for global optimization inspired by sport championships. Appl. Soft Comput. 2014, 16, 171–200.
49. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181.
50. Dehghani, M.; Trojovská, E.; Trojovský, P. Driving Training-Based Optimization: A New Human-Based Metaheuristic Algorithm for Solving Optimization Problems. Res. Square 2022.
51. Mohamed, A.W.; Hadi, A.A.; Mohamed, A. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529.
52. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667.
53. Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24.
54. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M. Optimization by simulated annealing. Science 1983, 220, 671–680.
55. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
56. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289.
57. Erol, O.K.; Eksin, I. A new optimization method: Big bang–big crunch. Adv. Eng. Softw. 2006, 37, 106–111.
58. Xie, L.; Zeng, J.; Cui, Z. General framework of artificial physics optimization algorithm. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 1321–1326.
59. Shah-Hosseini, H. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimisation. Int. J. Comput. Sci. Eng. 2011, 6, 132–140.
60. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184.
  61. Rabanal, P.; Rodríguez, I.; Rubio, F. Using river formation dynamics to design heuristic algorithms. In Proceedings of the International Conference on Unconventional Computation, Kingston, ON, Canada, 13–17 August 2007; pp. 163–177. [Google Scholar]
  62. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  63. Moghaddam, F.F.; Moghaddam, R.F.; Cheriet, M. Curved space optimization: A random search based on general relativity theory. arXiv 2012, arXiv:1208.2214. [Google Scholar]
  64. Formato, R. Central force optimization. Prog. Electromagn. Res. 2007, 77, 425–491. [Google Scholar] [CrossRef]
  65. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  66. Zheng, Y.-J. Water wave optimization: A new nature-inspired metaheuristic. Comput. Oper. Res. 2015, 55, 1–11. [Google Scholar] [CrossRef]
  67. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  68. Webster, B.; Bernhard, P.J. A Local Search Optimization Algorithm Based on Natural Principles of Gravitation; Florida Institute of Technology: Melbourne, FL, USA, 2003. [Google Scholar]
  69. Du, H.; Wu, X.; Zhuang, J. Small-world optimization algorithm for function optimization. In Proceedings of the International Conference on Natural Computation, Xi’an, China, 24–28 September 2006; pp. 264–273. [Google Scholar]
  70. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  71. Shah-Hosseini, H. The intelligent water drops algorithm: A nature-inspired swarm-based optimization algorithm. Int. J. Bio-Inspired Comput. 2009, 1, 71–79. [Google Scholar] [CrossRef]
  72. Chuang, C.-L.; Jiang, J.-A. Integrated radiation optimization: Inspired by the gravitational radiation in the curvature of space-time. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 3157–3164. [Google Scholar]
  73. Hsiao, Y.-T.; Chuang, C.-L.; Jiang, J.-A.; Chien, C.-C. A novel optimization algorithm: Space gravitational optimization. In Proceedings of the 2005 IEEE international Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 10–12 October 2005; pp. 2323–2328. [Google Scholar]
  74. Javidy, B.; Hatamlou, A.; Mirjalili, S. Ions motion algorithm for solving optimization problems. Appl. Soft Comput. 2015, 32, 72–79. [Google Scholar] [CrossRef]
  75. Birbil, Ş.İ.; Fang, S.-C. An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 2003, 25, 263–282. [Google Scholar] [CrossRef]
  76. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  77. Shen, J.; Li, J. The principle analysis of light ray optimization algorithm. In Proceedings of the 2010 Second International Conference on Computational Intelligence and Natural Computing, Washington, DC, USA, 23–24 October 2010; pp. 154–157. [Google Scholar]
  78. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  79. Abdechiri, M.; Meybodi, M.R.; Bahrami, H. Gases Brownian motion optimization: An algorithm for optimization (GBMO). Appl. Soft Comput. 2013, 13, 2932–2946. [Google Scholar] [CrossRef]
  80. Alatas, B. ACROA: Artificial chemical reaction optimization algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180. [Google Scholar] [CrossRef]
  81. Siddique, N.; Adeli, H. Nature-Inspired Computing: Physics-and Chemistry-Based Algorithms; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]
  82. Tanyildizi, E.; Demir, G. Golden sine algorithm: A novel math-inspired algorithm. Adv. Electr. Comput. Eng. 2017, 17, 71–78. [Google Scholar] [CrossRef]
  83. Salem, S.A. BOA: A novel optimization algorithm. In Proceedings of the 2012 International Conference on Engineering and Technology (ICET), Cairo, Egypt, 10–11 October 2012; pp. 1–5. [Google Scholar]
  84. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  85. Mara, S.T.W.; Norcahyo, R.; Jodiawan, P.; Lusiantoro, L.; Rifai, A.P. A survey of adaptive large neighborhood search algorithms and applications. Comput. Oper. Res. 2022, 146, 105903. [Google Scholar] [CrossRef]
  86. Pisinger, D.; Ropke, S. Large neighborhood search. In Handbook of Metaheuristics; Springer: New York, NY, USA, 2010; pp. 399–419. [Google Scholar]
  87. Ahuja, R.K.; Ergun, Ö.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102. [Google Scholar] [CrossRef]
  88. Feo, T.A.; Resende, M. Greedy randomized adaptive search procedures. J. Glob. Optim. 1995, 6, 109–133. [Google Scholar] [CrossRef]
  89. Lee, D.-H.; Chen, J.H.; Cao, J.X. The continuous berth allocation problem: A greedy randomized adaptive search solution. Transp. Res. Part E Logist. Transp. Rev. 2010, 46, 1017–1029. [Google Scholar] [CrossRef]
  90. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  91. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  92. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  93. Zitouni, F.; Harous, S.; Belkeram, A.; Hammou, L.E.B. The archerfish hunting optimizer: A novel metaheuristic algorithm for global optimization. Arab. J. Sci. Eng. 2022, 47, 2513–2553. [Google Scholar] [CrossRef]
  94. Daliri, A.; Asghari, A.; Azgomi, H.; Alimoradi, M. The water optimization algorithm: A novel metaheuristic for solving optimization problems. Appl. Intell. 2022, 1–40. [Google Scholar] [CrossRef]
  95. Oyelade, O.N.; Ezugwu, A.E.-S.; Mohamed, T.I.A.; Abualigah, L. Ebola optimization search algorithm: A new nature-inspired metaheuristic optimization algorithm. IEEE Access 2022, 10, 16150–16177. [Google Scholar] [CrossRef]
  96. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  97. Eslami, N.; Yazdani, S.; Mirzaei, M.; Hadavandi, E. Aphid-Ant Mutualism: A novel nature-inspired metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 201, 362–395. [Google Scholar] [CrossRef]
  98. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 1626. [Google Scholar] [CrossRef]
  99. Trojovský, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
  100. Kivi, M.E.; Majidnezhad, V. A novel swarm intelligence algorithm inspired by the grazing of sheep. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 1201–1213. [Google Scholar] [CrossRef]
  101. Pan, J.-S.; Zhang, L.-G.; Wang, R.-B.; Snášel, V.; Chu, S.-C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373. [Google Scholar] [CrossRef]
  102. Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie dog optimization algorithm. Neural Comput. Appl. 2022, 1–49. [Google Scholar] [CrossRef]
  103. Emami, H. Stock exchange trading optimization algorithm: A human-inspired method for global optimization. J. Supercomput. 2022, 78, 2125–2174. [Google Scholar] [CrossRef] [PubMed]
  104. Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050. [Google Scholar] [CrossRef]
  105. Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702. [Google Scholar] [CrossRef]
  106. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  107. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314. [Google Scholar] [CrossRef]
  108. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The colony predation algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  109. Davies, O.; Wannell, J.; Inglesfield, J. The Rainbow; Cambridge University Press: Cambridge, UK, 2006; Volume 37, pp. 17–21. [Google Scholar]
  110. Adam, J.A. The mathematical physics of rainbows and glories. Phys. Rep. 2002, 356, 229–365. [Google Scholar] [CrossRef]
  111. Suzuki, M.; Suzuki, I.S. Physics of rainbow. Phys. Teach. 2010, 12, 283–286. [Google Scholar]
  112. Buchwald, J.Z. Descartes’s experimental journey past the prism and through the invisible world to the rainbow. Ann. Sci. 2008, 65, 1–46. [Google Scholar] [CrossRef]
  113. Zhou, J.; Fang, Y.; Wang, J.; Zhu, L.; Wriedt, T. Rainbow pattern analysis of a multilayered sphere for optical diagnostic of a heating droplet. Opt. Commun. 2019, 441, 113–120. [Google Scholar] [CrossRef]
  114. Wu, Y.; Li, H.; Wu, X.; Gréhan, G.; Mädler, L.; Crua, C. Change of evaporation rate of single monocomponent droplet with temperature using time-resolved phase rainbow refractometry. Proc. Combust. Inst. 2019, 37, 3211–3218. [Google Scholar] [CrossRef]
  115. Wu, Y.; Li, C.; Crua, C.; Wu, X.; Saengkaew, S.; Chen, L.; Gréhan, G.; Cen, K. Primary rainbow of high refractive index particle (1.547 < n < 2) has refraction ripples. Opt. Commun. 2018, 426, 237–241. [Google Scholar]
  116. Yu, H.; Xu, F.; Tropea, C. Optical caustics associated with the primary rainbow of oblate droplets: Simulation and application in non-sphericity measurement. Opt. Express 2013, 21, 25761–25771. [Google Scholar] [CrossRef] [PubMed]
  117. Greengard, P.; Rokhlin, V. An algorithm for the evaluation of the incomplete gamma function. Adv. Comput. Math. 2019, 45, 23–49. [Google Scholar] [CrossRef]
  118. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  119. Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506. [Google Scholar] [CrossRef]
  120. Molga, M.; Smutnicki, C. Test functions for optimization needs. Test Funct. Optim. Needs 2005, 101, 48. [Google Scholar]
  121. De Barros, R.S.M.; Hidalgo, J.I.G.; de Lima Cabral, D.R. Wilcoxon rank sum test drift detector. Neurocomputing 2018, 275, 1954–1963. [Google Scholar] [CrossRef]
  122. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  123. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  124. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  125. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  126. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastián, Spain, 5–8 June 2017; pp. 372–379. [Google Scholar]
  127. Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 198, 116895. [Google Scholar] [CrossRef]
  128. Seck-Tuoh-Mora, J.C.; Hernandez-Romero, N.; Lagos-Eulogio, P.; Medina-Marin, J.; Zuñiga-Peña, N.S. A continuous-state cellular automata algorithm for global optimization. Expert Syst. Appl. 2021, 177, 114930. [Google Scholar] [CrossRef]
  129. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
  130. Rao, R.V.; Waghmare, G.G. A new optimization algorithm for solving complex constrained design optimization problems. Eng. Optim. 2017, 49, 60–83. [Google Scholar] [CrossRef]
  131. Mezura-Montes, E.; Coello, C.A.C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473. [Google Scholar] [CrossRef]
  132. Ali, M.Z.; Reynolds, R.G. Cultural algorithms: A Tabu search approach for the optimization of engineering design problems. Soft Comput. 2014, 18, 1631–1644. [Google Scholar] [CrossRef]
  133. Ray, T.; Saini, P. Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng. Optim. 2001, 33, 735–748. [Google Scholar] [CrossRef]
  134. Parsopoulos, K.E.; Vrahatis, M.N. Unified particle swarm optimization for solving constrained engineering optimization problems. In International Conference on Natural Computation; Springer: Berlin/Heidelberg, Germany, 2005; pp. 582–591. [Google Scholar]
  135. Kim, M.-S.; Kim, J.-R.; Jeon, J.-Y.; Choi, D.-H. Efficient mechanical system optimization using two-point diagonal quadratic approximation in the nonlinear intervening variable space. KSME Int. J. 2001, 15, 1257–1267. [Google Scholar] [CrossRef]
  136. Kaveh, A.; Talatahari, S. Hybrid charged system search and particle swarm optimization for engineering design problems. Eng. Comput. 2011, 28, 423–440. [Google Scholar] [CrossRef]
  137. Mani, A.; Patvardhan, C. An adaptive quantum evolutionary algorithm for engineering optimization problems. Int. J. Comput. Appl. 2010, 1, 43–48. [Google Scholar] [CrossRef]
  138. Lisbôa, R.; Yasojima, E.K.; de Oliveira, R.M.S.; Mollinetti, M.A.F.; Teixeira, O.N.; de Oliveira, R.C.L. Parallel genetic algorithm with social interaction for solving constrained global optimization problems. In Proceedings of the 2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), Fukuoka, Japan, 13–15 November 2015; pp. 351–356. [Google Scholar]
  139. Kulkarni, A.J.; Tai, K. Solving constrained optimization problems using probability collectives and a penalty function approach. Int. J. Comput. Intell. Appl. 2011, 10, 445–470. [Google Scholar] [CrossRef]
  140. Dimopoulos, G.G. Mixed-variable engineering optimization based on evolutionary and social metaphors. Comput. Methods Appl. Mech. Eng. 2007, 196, 803–817. [Google Scholar] [CrossRef]
  141. Sedhom, B.E.; El-Saadawi, M.M.; Hatata, A.Y.; Alsayyari, A.S. Hierarchical control technique-based harmony search optimization algorithm versus model predictive control for autonomous smart microgrids. Int. J. Electr. Power Energy Syst. 2020, 115, 105511. [Google Scholar] [CrossRef]
  142. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  143. Zhang, M.; Luo, W.; Wang, X. Differential evolution with dynamic stochastic selection for constrained optimization. Inf. Sci. 2008, 178, 3043–3074. [Google Scholar] [CrossRef]
  144. He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422. [Google Scholar] [CrossRef]
  145. Dos Santos Coelho, L. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683. [Google Scholar] [CrossRef]
  146. Kaveh, A.; Bakhshpoori, T. Water evaporation optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  147. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2013, 22, 1239–1255. [Google Scholar] [CrossRef]
  148. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  149. Mezura-Montes, E.; Coello, C.A. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Trans. Evol. Comput. 2005, 9, 1–17. [Google Scholar] [CrossRef]
  150. Montemurro, M.; Vincenti, A.; Vannucci, P. The automatic dynamic penalisation method (ADP) for handling constraints with genetic algorithms. Comput. Methods Appl. Mech. Eng. 2013, 256, 70–87. [Google Scholar] [CrossRef]
  151. Mezura-Montes, E.; Coello Coello, C.A.; Velázquez-Reyes, J.; Munoz-Dávila, L. Multiple trial vectors in differential evolution for engineering design. Eng. Optim. 2007, 39, 567–589. [Google Scholar] [CrossRef]
  152. Wang, L.; Li, L.-P. An effective differential evolution with level comparison for constrained engineering design. Struct. Multidiscip. Optim. 2010, 41, 947–963. [Google Scholar] [CrossRef]
  153. Coello, C.A.C.; Montes, E.M. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 2002, 16, 193–203. [Google Scholar] [CrossRef]
Figure 1. Categories of metaheuristic algorithms.
Figure 2. Light dispersion and colors of rainbow.
Figure 3. Light dispersion and colors of rainbow and vector form of refraction and reflection in rainbow.
Figure 4. The behavior of the inverse incomplete gamma function with respect to the values of a.
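The curve in Figure 4 can be reproduced numerically. The sketch below is an illustrative stand-in, not the evaluation scheme of Greengard and Rokhlin [117] cited in the reference list: it computes the regularized lower incomplete gamma function from its power series and inverts it by bisection; the function names and the search bracket are our own choices.

```python
import math

def reg_lower_gamma(a, x, terms=200):
    """Regularized lower incomplete gamma P(a, x), computed from its power series."""
    if x <= 0:
        return 0.0
    total = term = 1.0 / a
    for n in range(1, terms):
        term *= x / (a + n)       # x^n / (a (a+1) ... (a+n)), built incrementally
        total += term
    # Prefactor x^a e^{-x} / Gamma(a), evaluated in log space for stability.
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def inv_reg_lower_gamma(a, q, lo=0.0, hi=100.0, iters=200):
    """Solve P(a, x) = q for x by bisection; P is increasing in x.

    The bracket [0, 100] is adequate for the modest a and q used here.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(a, mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For a = 1, P(1, x) = 1 - exp(-x), so the inverse of q = 0.5 is ln 2.
print(round(inv_reg_lower_gamma(1.0, 0.5), 4))  # 0.6931
# For a fixed quantile q, the inverse grows with a, mirroring Figure 4.
for a in (0.5, 1.0, 2.0, 4.0):
    print(a, round(inv_reg_lower_gamma(a, 0.5), 4))
```

Plotting `inv_reg_lower_gamma(a, q)` over a range of a values reproduces the qualitative behavior depicted in Figure 4.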
Figure 5. Tracing F′ values versus R1 for an individual over 9 independent runs: the red points indicate that R1 &lt; F′, and the blue points indicate that R1 &gt; F′.
Figure 6. Flowchart of LSO.
Figure 7. Depiction of exploration and exploitation stages of the proposed algorithm (LSO). (a) Exploitation phase. (b) Exploration phase.
Figure 8. Sensitivity analysis of LSO. (a) Tuning the parameter Ph over F58. (b) Tuning the parameter Ph over F57. (c) Tuning the parameter β over F58. (d) Tuning the parameter β over F57. (e) Tuning the parameter β over F58 in terms of convergence speed. (f) Tuning the parameter Pe over F58. (g) Tuning the parameter Pe over F57. (h) Adjusting the parameter Ps over F58. (i) Adjusting the parameter Ps over F57.
Figure 9. Average rank of each optimizer on all CEC2014.
Figure 10. Average SD of each optimizer on all CEC2014.
Figure 11. Average rank on all CEC2017 test functions.
Figure 12. Average SD of each optimizer on all CEC2017.
Figure 13. Average rank of each optimizer on all CEC2020.
Figure 14. Average SD of each optimizer on all CEC2020.
Figure 15. Average rank of each optimizer on all CEC2022.
Figure 16. Average SD of each optimizer on all CEC2022.
Figure 17. Depiction of averaged convergence speed among algorithms on some test functions. (a) F51 (Unimodal), (b) F52 (Unimodal), (c) F53 (Multimodal), (d) F54 (Multimodal), (e) F55 (Multimodal), (f) F56 (Multimodal), (g) F57 (Multimodal), (h) F59 (Multimodal), (i) F60 (Multimodal), (j) F61 (Multimodal), (k) F62 (Hybrid), (l) F63 (Hybrid), (m) F65 (Hybrid), (n) F64 (Hybrid), (o) F73 (Composition), (p) F75 (Composition), (q) F77 (Composition), and (r) F78 (Composition).
Figure 18. Diversity, convergence curve, average fitness history, trajectory, and search history.
Figure 19. Comparison among algorithms in terms of CPU time.
Figure 20. Tuning the parameter Ps over tension spring design.
Figure 21. Tension/compression spring design problem. (a) Structure. (b) Convergence curve.
Figure 22. Weld beam design problem. (a) Structure. (b) Convergence curve.
Figure 23. Pressure vessel design problem. (a) Structure. (b) Convergence curve.
Table 2. An illustrative numerical example of the results generated according to Equations (6)–(14).

| | x1 | x2 | x3 | x4 | x5 | x6 |
| --- | --- | --- | --- | --- | --- | --- |
| x_tr | 61.4064 | −77.7085 | −91.5639 | −70.6667 | −41.4573 | −58.8625 |
| x_tp | −33.8424 | 85.3758 | −13.0301 | −37.5489 | 62.1643 | 39.1392 |
| x* | −2.2651 | 25.3057 | 76.0485 | 45.3216 | −75.5683 | −80.4718 |
| x_nA | 0.1006 | −0.1273 | −0.1500 | −0.1157 | −0.0679 | −0.0964 |
| x_nB | −0.0622 | 0.1569 | −0.0239 | −0.0690 | 0.1143 | 0.0719 |
| x_nC | −0.0047 | 0.0529 | 0.1590 | 0.0947 | −0.1580 | −0.1682 |
| x_L0 | −0.0090 | 0.0972 | 0.0282 | −0.0217 | −0.0122 | −0.0749 |
| x_L1 | −0.0989 | 0.1893 | 0.1586 | 0.0900 | 0.0532 | 0.0325 |
| x_L2 | −0.0765 | 0.1327 | 0.1672 | 0.1149 | 0.0119 | 0.0066 |
| x_L3 | −0.1054 | 0.2106 | 0.3229 | 0.2128 | −0.0823 | −0.0959 |
Table 3. Execution time of each line in the proposed algorithm according to asymptotic analysis.

| Algorithm 1: LSO pseudo-code | Execution time for each line |
| --- | --- |
| Input: population size of light rays N, the problem, and the number of iterations T_max | One execution per input; this line contains 3 inputs, so its total execution time equals 3 |
| Output: the best light dispersion x* and its fitness | The output is returned once, so the execution time of this line is 1 |
| Generate an initial random population of light rays x_i (i = 1, 2, 3, …, N) | The initialization stage initializes d dimensions for each of N solutions, so its time complexity is O(N × d) |
| 1: while (t < T_max) | This line is executed T_max times |
| 2:  for each light ray x_i (i = 1, 2, 3, …, N) | Executed N times multiplied by T_max, i.e., O(N × T_max) |
| 3:   evaluate the fitness value | O(N × d × T_max), since the objective function observes each dimension of the updated solution |
| 4:   t = t + 1 | O(N × T_max) |
| 5:   keep the current global best x* | O(N × d × T_max) |
| 6:   update the current solution if the updated solution is better | O(N × d × T_max) |
| 7:   determine normal lines x_nA, x_nB, and x_nC | 3 × N × d × T_max = O(N × d × T_max); 3 indicates the number of generated vectors |
| 8:   determine direction vectors x_L0, x_L1, x_L2, and x_L3 | 3 × N × d × T_max = O(N × d × T_max); 3 indicates the number of generated direction vectors |
| 9:   update the refractive index k_r | O(N × T_max) |
| 10:  update a, ϵ, and GI | O(N × T_max) |
| 11:  generate two random numbers p, q between 0 and 1 | O(N × T_max) |
| %%%% Generating new colorful ray: exploration phase | Comment; not executed |
| 12:  if p q | O(N × T_max) |
| 13:   update the next light dispersion using Equation (16) | O(N × d × T_max) |
| 14:  else | O(N × T_max) |
| 15:   update the next light dispersion using Equation (17) | O(N × d × T_max) |
| 16:  end if | O(N × T_max) |
| 17:  evaluate the fitness value | O(N × d × T_max) |
| 18:  t = t + 1 | O(N × T_max) |
| 19:  keep the current global best x* | O(N × d × T_max) |
| 20:  update the current solution if the updated solution is better | O(N × d × T_max) |
| %%%% Scattering phase: exploitation phase | Comment; not executed |
| 21:  update the next light dispersion using Equation (25) | O(N × d × T_max) |
| 22:  end for | O(N × T_max) |
| 23: end while | O(N × T_max) |
| 24: return x* | 1 time |

Summing the execution times of all lines, the term with the highest growth rate is O(N × d × T_max); hence, the time complexity of LSO is O(N × d × T_max).
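The O(N × d × T_max) bound above can be sanity-checked by counting dimension-level operations in a skeletal double loop. The sketch below mirrors only the loop structure of Algorithm 1; the actual update rules (Equations (16), (17), and (25)) are replaced by a placeholder, so this is a structural illustration rather than the algorithm itself.

```python
import random

def lso_skeleton_op_count(N, d, T_max):
    """Count dimension-level operations in a skeletal LSO-style main loop.

    Each of the T_max iterations touches every one of the N rays, and each
    per-ray step (evaluation, vector construction, update) visits all d
    dimensions, giving the O(N * d * T_max) bound derived in Table 3.
    """
    ops = 0
    population = [[random.uniform(-100, 100) for _ in range(d)] for _ in range(N)]
    for _ in range(T_max):          # the while loop runs T_max times
        for ray in population:      # N rays per iteration
            for j in range(d):      # d dimensions per ray
                ray[j] += 0.0       # placeholder for the real update rules
                ops += 1
    return ops

# For this skeleton, the count matches the bound exactly.
print(lso_skeleton_op_count(N=5, d=3, T_max=10))  # 150 = 5 * 3 * 10
```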
Table 4. Comparison between LSO, RO, and LRO.
| Characteristics | LSO | RO | LRO |
| --- | --- | --- | --- |
| Inspiration | Simulates the movement and orientation of light in the rainbow meteorological phenomenon. | Simulates Snell's law of refraction when light passes from a lighter medium into a darker one. | Simulates the reflection and refraction of light. |
| Formulation | LSO mainly depends on the vector representation of the rainbow and its dispersion in the sky. | The formulation of RO applies Snell's law of light transmission from a medium into a darker one to ray tracing in 2-dimensional and 3-dimensional spaces. | Candidate solutions are updated by dividing the search space into grid cells and then treating these cells as reflection and refraction points. |
| Variation | (1) Variations in the updating process enable it to overcome several optimization problems of varying difficulty. (2) A strong exploration operator, obtained by effectively employing both the inverse incomplete gamma function and the inverse random number. (3) A strong exploitation operator that exploits the regions around the current solution, the best-so-far solution, and solutions selected randomly from the population. | (1) A weak exploration operator: RO cannot find the optimal solution for most fixed-dimensional multimodal problems with many local minima, which demand strong exploration; by contrast, the proposed algorithm solves all fixed-dimensional multimodal problems. (2) Subpar performance on some unimodal problems that require a strong exploitation operator. | (1) A limited updating process, since it relies solely on refraction and reflection. (2) Slower convergence than LSO due to the weakness of its exploitation operator. |
Table 5. Parameters of the compared algorithms.
Table 5. Parameters of the compared algorithms.
| Algorithm | Parameter | Value |
| --- | --- | --- |
| GWO (2014) | Convergence constant a | Decreases linearly from 2 to 0 |
| | N | 20 |
| SMA (2020) | z | 0.03 |
| | N | 20 |
| WOA (2017) | Convergence constant a | Decreases linearly from 2 to 0 |
| | Spiral factor b | 1 |
| | N | 20 |
| GTO (2021) | p | 0.03 |
| | Beta | 3 |
| | w | 8 |
| | N | 20 |
| EO (2020) | a1 | 2 |
| | a2 | 1 |
| | V | 1 |
| | GP | 0.5 |
| | N | 20 |
| AVOA (2021) | Alpha (L1) | 0.8 |
| | Beta (L2) | 0.2 |
| | Gamma (w) | 2.5 |
| | P1 | 0.6 |
| | P2 | 0.4 |
| | P3 | 0.6 |
| | N | 20 |
| RUN (2021) | a | 20 |
| | b | 12 |
| | N | 20 |
| RSA (2022) | Alpha | 0.1 |
| | Beta | 0.1 |
| | N | 20 |
| GBO (2020) | pr | 0.5 |
| | βmin | 0.2 |
| | βmax | 1.2 |
| | N | 20 |
| DE | Crossover rate | 0.5 |
| | Scaling factor | 0.5 |
| | N | 20 |
| SSA (2017) | c1 | Decreases from 2 to 0 |
| | N | 20 |
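Several of the settings in Table 5 are time-varying rather than fixed; for example, the convergence constant a of GWO and WOA decreases linearly from 2 to 0 over the run. A minimal sketch of that schedule (the function name is illustrative):

```python
def convergence_a(t: int, t_max: int) -> float:
    # Linearly decreasing convergence constant used by GWO and WOA:
    # a = 2 at the first iteration and a = 0 at the last one.
    return 2.0 * (1.0 - t / t_max)

assert convergence_a(0, 100) == 2.0    # start of the run
assert convergence_a(50, 100) == 1.0   # halfway
assert convergence_a(100, 100) == 0.0  # end of the run
```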
Table 6. Avr and SD of CEC2005 for 25 independent runs (The bolded value is the best overall).
F | Index | LSO | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE
Unimodal
F1Avr0.00 × 1004.27 × 10−70.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1002.5 × 10−1453.90 × 10−700.00 × 1000.00 × 1003.20 × 10−3
SD0.00 × 1007.68 × 10−80.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1007.4 × 10−1457.34 × 10−700.00 × 1000.00 × 1001.75 × 10−2
Rank1.00 × 1001.10 × 1011.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1009.00 × 1001.00 × 1011.00 × 1001.00 × 1001.20 × 101
F2Avr0.00 × 1001.94 × 1011.6 × 10−2831.5 × 10−2380.00 × 1000.00 × 1000.00 × 1003.58 × 10−841.33 × 10−410.00 × 1000.00 × 1003.70 × 10−5
SD0.00 × 1001.20 × 1010.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1001.02 × 10−831.04 × 10−410.00 × 1000.00 × 1001.57 × 10−5
Rank1.00 × 1001.30 × 1017.00 × 1008.00 × 1001.00 × 1001.00 × 1001.00 × 1009.00 × 1001.00 × 1011.00 × 1001.00 × 1001.10 × 101
F3Avr0.00 × 1002.92 × 1040.00 × 1000.00 × 1007.25 × 1050.00 × 1000.00 × 1002.63 × 10−108.42 × 10−40.00 × 1000.00 × 1003.44 × 105
SD0.00 × 1001.24 × 1040.00 × 1000.00 × 1001.42 × 1050.00 × 1000.00 × 1001.01 × 10−93.97 × 10−30.00 × 1000.00 × 1003.81 × 104
Rank1.00 × 1001.00 × 1011.00 × 1001.00 × 1001.30 × 1011.00 × 1001.00 × 1007.00 × 1008.00 × 1001.00 × 1001.00 × 1001.20 × 101
F4Avr0.00 × 1003.22 × 1013.9 × 10−2531.3 × 10−1967.78 × 1010.00 × 1000.00 × 1002.87 × 10−162.72 × 10−80.00 × 1000.00 × 1009.59 × 101
SD0.00 × 1002.81 × 1000.00 × 1000.00 × 1002.30 × 1010.00 × 1000.00 × 1001.57 × 10−151.26 × 10−70.00 × 1000.00 × 1001.17 × 100
Rank1.00 × 1001.00 × 1016.00 × 1007.00 × 1001.10 × 1011.00 × 1001.00 × 1008.00 × 1009.00 × 1001.00 × 1001.00 × 1001.30 × 101
F5Avr9.70 × 1013.85 × 1029.31 × 1019.65 × 1019.74 × 1018.30 × 10−31.03 × 10−59.43 × 1019.76 × 1019.90 × 1011.88 × 1001.66 × 103
SD6.18 × 10−14.25 × 1022.49 × 1001.12 × 1006.96 × 10−11.14 × 10−21.35 × 10−59.49 × 10−16.38 × 10−10.00 × 1001.87 × 1005.76 × 103
Rank7.00 × 1001.10 × 1014.00 × 1006.00 × 1008.00 × 1002.00 × 1001.00 × 1005.00 × 1009.00 × 1001.00 × 1013.00 × 1001.20 × 101
High-dimensional multimodal
F6Avr0.00 × 1002.33 × 1020.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1007.57 × 102
SD0.00 × 1005.60 × 1010.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1003.26 × 101
Rank1.00 × 1001.20 × 1011.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1011.30 × 101
F7Avr8.88 × 10−165.71 × 1008.88 × 10−168.88 × 10−163.14 × 10−158.88 × 10−168.88 × 10−165.63 × 10−152.61 × 10−148.88 × 10−168.88 × 10−162.23 × 10−4
SD0.00 × 1001.30 × 1000.00 × 1000.00 × 1002.72 × 10−150.00 × 1000.00 × 1001.70 × 10−153.41 × 10−150.00 × 1000.00 × 1003.05 × 10−4
Rank1.00 × 1006.00 × 1001.00 × 1001.00 × 1002.00 × 1001.00 × 1001.00 × 1003.00 × 1005.00 × 1001.00 × 1001.00 × 1004.00 × 100
F8Avr0.00 × 1009.68 × 10−30.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1006.37 × 10−40.00 × 1000.00 × 1003.31 × 10−4
SD0.00 × 1005.56 × 10−30.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1003.49 × 10−30.00 × 1000.00 × 1001.80 × 10−3
Rank1.00 × 1004.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1003.00 × 1001.00 × 1001.00 × 1002.00 × 100
F9Avr4.67 × 10−31.52 × 1011.25 × 10−61.95 × 10−69.70 × 10−38.72 × 10−75.07 × 10−91.85 × 10−32.91 × 10−11.33 × 1006.83 × 10−41.59 × 103
SD1.91 × 10−32.95 × 1001.11 × 10−66.52 × 10−63.24 × 10−31.14 × 10−62.63 × 10−91.79 × 10−37.27 × 10−20.00 × 1009.08 × 10−46.68 × 103
Rank7.00 × 1001.10 × 1013.00 × 1004.00 × 1008.00 × 1002.00 × 1001.00 × 1006.00 × 1009.00 × 1001.00 × 1015.00 × 1001.20 × 101
F10Avr8.42 × 1001.66 × 1021.87 × 1001.11 × 10−11.50 × 1003.95 × 10−43.05 × 10−93.70 × 1006.55 × 1009.72 × 1005.29 × 10−32.54 × 104
SD1.47 × 1001.71 × 1012.63 × 1001.15 × 10−15.79 × 10−12.03 × 10−34.10 × 10−91.05 × 1004.13 × 10−18.77 × 10−15.86 × 10−36.44 × 104
Rank9.00 × 1001.10 × 1016.00 × 1004.00 × 1005.00 × 1002.00 × 1001.00 × 1007.00 × 1008.00 × 1001.00 × 1013.00 × 1001.20 × 101
Fixed-dimensional multimodal
F11Avr9.98 × 10−19.98 × 10−19.98 × 10−14.63 × 1002.24 × 1009.98 × 10−11.06 × 1001.06 × 1005.76 × 1004.47 × 1009.98 × 10−11.23 × 100
SD0.00 × 1002.24 × 10−164.12 × 10−173.84 × 1002.49 × 1000.00 × 1003.62 × 10−13.62 × 10−14.64 × 1003.30 × 1008.55 × 10−159.59 × 10−1
Rank1.00 × 1001.00 × 1001.00 × 1001.20 × 1015.00 × 1001.00 × 1002.00 × 1003.00 × 1007.00 × 1006.00 × 1001.00 × 1004.00 × 100
F12Avr3.07 × 10−48.50 × 10−44.30 × 10−47.04 × 10−46.30 × 10−45.52 × 10−43.08 × 10−44.38 × 10−36.32 × 10−31.58 × 10−34.82 × 10−42.68 × 10−3
SD2.01 × 10−192.64 × 10−43.17 × 10−44.62 × 10−43.47 × 10−44.12 × 10−44.17 × 10−88.13 × 10−39.35 × 10−31.03 × 10−32.95 × 10−46.01 × 10−3
Rank1.00 × 1009.00 × 1003.00 × 1007.00 × 1006.00 × 1005.00 × 1001.00 × 1001.20 × 1011.30 × 1012.00 × 1014.00 × 1001.10 × 101
F13Avr−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100−1.03 × 100
SD6.78 × 10−163.72 × 10−156.78 × 10−162.05 × 10−135.36 × 10−126.52 × 10−165.13 × 10−166.39 × 10−168.86 × 10−101.59 × 10−31.72 × 10−116.78 × 10−16
Rank1.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 100
F14Avr3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−14.18 × 10−13.98 × 10−13.98 × 10−1
SD0.00 × 1002.70 × 10−150.00 × 1001.17 × 10−113.58 × 10−70.00 × 1000.00 × 1000.00 × 1005.93 × 10−84.66 × 10−23.82 × 10−90.00 × 100
Rank1.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.30 × 1011.00 × 1001.00 × 100
F15Avr3.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 100
SD1.31 × 10−152.94 × 10−141.40 × 10−151.43 × 10−131.63 × 10−51.33 × 10−155.67 × 10−81.39 × 10−153.76 × 10−65.55 × 10−52.27 × 10−131.83 × 10−15
Rank1.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 100
F16Avr−3.86 × 100−3.86 × 100−3.86 × 100−3.86 × 100−3.86 × 100−3.86 × 100−3.86 × 100−3.86 × 100−3.86 × 100−3.80 × 100−3.86 × 100−3.86 × 100
SD2.71 × 10−155.11 × 10−152.71 × 10−154.64 × 10−62.98 × 10−32.59 × 10−152.20 × 10−152.00 × 10−32.79 × 10−35.08 × 10−23.64 × 10−92.71 × 10−15
Rank1.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.00 × 1001.30 × 1011.00 × 1001.00 × 100
F17Avr−3.32 × 100−3.07 × 100−3.27 × 100−3.24 × 100−2.71 × 100−3.07 × 100−3.07 × 100−3.24 × 100−3.19 × 100−1.74 × 100−3.03 × 100−3.12 × 100
SD1.32 × 10−151.57 × 10−16.62 × 10−27.38 × 10−24.14 × 10−12.49 × 10−13.31 × 10−18.05 × 10−21.19 × 10−15.78 × 10−13.05 × 10−11.01 × 10−1
Rank1.00 × 1009.00 × 1002.00 × 1004.00 × 1001.10 × 1017.00 × 1008.00 × 1003.00 × 1005.00 × 1001.30 × 1011.00 × 1016.00 × 100
F18Avr−1.02 × 101−5.12 × 100−7.36 × 100−6.31 × 100−4.51 × 100−7.44 × 100−7.25 × 100−7.18 × 100−7.36 × 100−4.79 × 100−6.37 × 100−3.64 × 100
SD7.23 × 10−153.16 × 1002.44 × 1002.47 × 1002.33 × 1002.48 × 1002.84 × 1003.20 × 1003.50 × 1009.60 × 10−12.53 × 1002.28 × 100
Rank1.00 × 1009.00 × 1004.00 × 1008.00 × 1001.10 × 1012.00 × 1005.00 × 1006.00 × 1003.00 × 1001.00 × 1017.00 × 1001.20 × 101
F19Avr−1.04 × 101−9.27 × 100−8.63 × 100−1.04 × 101−9.38 × 100−1.04 × 101−1.04 × 101−1.00 × 101−1.00 × 101−5.09 × 100−1.04 × 101−9.35 × 100
SD1.48 × 10−152.35 × 1002.55 × 1006.54 × 10−102.36 × 1007.38 × 10−161.14 × 10−151.34 × 1001.35 × 1008.85 × 10−71.91 × 10−52.42 × 100
Rank1.00 × 1006.00 × 1007.00 × 1001.00 × 1004.00 × 1001.00 × 1001.00 × 1002.00 × 1003.00 × 1008.00 × 1001.00 × 1005.00 × 100
F20Avr−1.05 × 101−9.74 × 100−8.91 × 100−1.04 × 101−9.23 × 100−1.05 × 101−1.05 × 101−1.00 × 101−1.04 × 101−5.13 × 100−1.05 × 101−9.16 × 100
SD1.81 × 10−152.10 × 1002.52 × 1009.87 × 10−12.42 × 1002.06 × 10−153.32 × 10−151.65 × 1009.87 × 10−11.85 × 10−61.91 × 10−52.80 × 100
Rank1.00 × 1005.00 × 1008.00 × 1002.00 × 1006.00 × 1001.00 × 1001.00 × 1004.00 × 1003.00 × 1009.00 × 1001.00 × 1007.00 × 100
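The Rank rows in Table 6 follow standard competition ranking on the average fitness: tied algorithms share the best rank, and each remaining value gets 1 plus the number of strictly better averages. A short sketch reproducing the F1 ranks:

```python
def competition_ranks(averages):
    # Rank of each value = 1 + number of strictly smaller values, so ties
    # (e.g., several exact zeros) all share the top rank.
    return [1 + sum(other < a for other in averages) for a in averages]

# F1 averages for LSO, SSA, GBO, RUN, WOA, GTO, AVOA, EO, GWO, RSA, SMA, DE.
f1_avr = [0.0, 4.27e-7, 0.0, 0.0, 0.0, 0.0, 0.0,
          2.5e-145, 3.90e-70, 0.0, 0.0, 3.20e-3]
assert competition_ranks(f1_avr) == [1, 11, 1, 1, 1, 1, 1, 9, 10, 1, 1, 12]
```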
Table 7. p-values of LSO with each rival optimizer on CEC2005 test suite (F1–F20).
Fun | SCA | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RFO | SMA
Unimodal
F11.21 × 10−12NaNNaNNaNNaNNaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12
F21.21 × 10−121.21 × 10−121.21 × 10−12NaNNaNNaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12
F33.75 × 10−115.37 × 10−65.37 × 10−62.26 × 10−115.37 × 10−65.37 × 10−66.61 × 10−16.61 × 10−15.37 × 10−65.37 × 10−62.26 × 10−11
F41.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−121.21 × 10−12NaNNaN1.21 × 10−12
F51.56 × 10−85.53 × 10−84.51 × 10−26.97 × 10−33.02 × 10−113.02 × 10−112.44 × 10−91.17 × 10−41.21 × 10−123.02 × 10−113.02 × 10−11
High-dimensional multimodal
F63.02 × 10−113.02 × 10−113.02 × 10−115.69 × 10−13.02 × 10−113.02 × 10−113.47 × 10−103.02 × 10−111.21 × 10−123.02 × 10−115.57 × 10−10
F73.02 × 10−111.11 × 10−31.56 × 10−23.52 × 10−71.38 × 10−26.79 × 10−22.87 × 10−104.50 × 10−115.32 × 10−38.88 × 10−13.02 × 10−11
F84.64 × 10−36.74 × 10−61.89 × 10−48.99 × 10−113.02 × 10−113.69 × 10−116.05 × 10−73.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F91.21 × 10−12NaNNaNNaNNaNNaNNaNNaNNaNNaN1.21 × 10−12
F101.21 × 10−12NaNNaN2.75 × 10−5NaNNaN3.37 × 10−137.73 × 10−13NaNNaN1.21 × 10−12
Fixed-dimensional multimodal
F111.21 × 10−12NaNNaNNaNNaNNaNNaN3.34 × 10−1NaNNaN1.21 × 10−12
F123.02 × 10−113.02 × 10−113.02 × 10−115.00 × 10−93.02 × 10−113.02 × 10−118.20 × 10−73.02 × 10−111.21 × 10−121.46 × 10−101.21 × 10−10
F133.02 × 10−119.92 × 10−113.02 × 10−111.33 × 10−103.02 × 10−113.02 × 10−115.57 × 10−108.10 × 10−101.21 × 10−83.02 × 10−113.15 × 10−2
F143.00 × 10−133.34 × 10−11.20 × 10−121.21 × 10−12NaN2.52 × 10−74.19 × 10−21.21 × 10−121.21 × 10−121.20 × 10−121.61 × 10−1
F151.58 × 10−95.33 × 10−67.55 × 10−103.28 × 10−93.43 × 10−87.97 × 10−91.89 × 10−91.58 × 10−93.55 × 10−104.69 × 10−98.23 × 10−5
F161.19 × 10−12NaN4.57 × 10−121.21 × 10−124.18 × 10−21.47 × 10−91.09 × 10−21.21 × 10−121.21 × 10−121.21 × 10−12NaN
F174.15 × 10−8NaN1.66 × 10−114.57 × 10−12NaNNaNNaN1.21 × 10−121.21 × 10−121.21 × 10−12NaN
F186.43 × 10−121.70 × 10−16.46 × 10−126.46 × 10−123.11 × 10−16.46 × 10−128.24 × 10−26.46 × 10−126.46 × 10−126.46 × 10−124.30 × 10−1
F191.20 × 10−12NaN1.21 × 10−121.21 × 10−125.54 × 10−33.67 × 10−108.15 × 10−21.21 × 10−121.21 × 10−121.21 × 10−12NaN
F208.44 × 10−121.62 × 10−41.99 × 10−111.99 × 10−111.15 × 10−64.90 × 10−116.46 × 10−62.21 × 10−113.16 × 10−128.44 × 10−121.88 × 10−5
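The p-values in these tables come from a two-sided Wilcoxon rank-sum test between LSO's per-run results and each rival's; the NaN entries presumably correspond to cases where the two samples are identical, leaving the statistic undefined. A minimal normal-approximation sketch, assuming no tied values (a library routine such as `scipy.stats.ranksums` handles ties and small-sample corrections):

```python
import math

def ranksum_p(x, y):
    # Two-sided Wilcoxon rank-sum test via the normal approximation.
    # Assumes no tied values within or across the two samples.
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    r1 = sum(rank[v] for v in x)               # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value

# Clearly separated samples give a small p; interleaved samples do not.
assert ranksum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14]) < 0.01
assert ranksum_p([1, 3, 5], [2, 4, 6]) > 0.3
```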
Table 8. Avr and SD of CEC2014 for 25 independent runs (The bolded value is the best overall).
F | Index | LSO | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE
Unimodal
F21Avr1.00 × 1025.29 × 1051.73 × 1031.29 × 1058.21 × 1062.81 × 1031.87 × 1055.06 × 1045.03 × 1061.32 × 1081.53 × 1053.05 × 103
SD7.09 × 10−115.03 × 1052.04 × 1031.11 × 1056.51 × 1063.88 × 1031.12 × 1056.63 × 1043.43 × 1068.55 × 1074.14 × 1047.78 × 103
Rank1.009.002.006.0012.003.008.005.0010.0013.007.004.00
F22Avr200.003802.99272.454382.91940,880.88557.783404.481.30 × 1035.69 × 1077.09 × 1095.94 × 1037.20 × 102
SD4.22 × 10−143.65 × 1031.30 × 1025.25 × 1037.23 × 1058.18 × 1023.67 × 1031.65 × 1032.88 × 1081.99 × 1093.99 × 1032.07 × 103
Rank1.007.002.008.0010.003.006.005.0011.0013.009.004.00
F23Avr300.003450.21314.701401.8243,769.75315.071571.60426.517388.058368.142407.10301.79
SD0.00 × 1002845.5521.18566.3728,043.5150.451128.64203.064648.722959.581895.909.80
Rank1.009.003.006.0013.004.007.005.0011.0012.008.002.00
Multimodal
F24Avr405.36425.87424.51416.08438.61419.47419.49423.67434.971735.39424.29425.97
SD11.8515.1215.9815.7322.1716.7117.9316.0218.96992.1915.1314.33
Rank1.008.007.002.0011.003.004.005.0010.0013.006.009.00
F25Avr520.03520.04520.07520.09520.14520.11520.06520.11520.42520.43519.43520.21
SD1.18 × 10−28.81 × 10−29.18 × 10−21.06 × 10−19.39 × 10−27.89 × 10−27.39 × 10−27.25 × 10−20.080.083.660.06
Rank2.003.005.006.009.007.004.008.0012.0013.001.0010.00
F26Avr600.32603.66604.32605.49607.99605.26605.99601.516.02 × 102609.69604.19600.84
SD0.611.591.651.141.761.531.481.211.16 × 1000.801.240.83
Rank1.005.007.009.0012.008.0010.003.004.00 × 10013.006.002.00
F27Avr700.02700.26700.17700.35701.09700.29700.41700.05701.20795.27700.24700.08
SD0.010.100.080.230.460.190.310.040.9527.140.120.10
Rank1.006.004.008.0010.007.009.002.0011.0013.005.003.00
F28Avr800.50821.33821.40821.03844.05823.18813.43807.33812.15876.28801.79800.66
SD6.27 × 10−11.11 × 1019.72 × 1007.53 × 1001.58 × 1011.02 × 1015.44 × 1004.71 × 1005.708.271.151.05
Rank1.008.009.007.0012.0010.006.004.005.0013.003.002.00
F29Avr907.18920.93925.22937.34950.57931.37931.48914.54914.91960.42915.39914.65
SD2.589.258.725.7519.5410.369.985.916.215.635.693.48
Rank1.006.007.0010.0012.008.009.002.004.0013.005.003.00
F30Avr1010.541571.441457.971186.071610.081580.211104.671169.311370.392107.301145.391068.08
SD30.00257.58238.56120.79352.70273.0076.89146.30141.26167.86110.9353.37
Rank1.009.008.006.0011.0010.003.005.007.0013.004.002.00
F31Avr1888.082.37 × 1031.98 × 1031.69 × 1032.99 × 1031.99 × 1031.87 × 1031.71 × 1031823.922501.551950.172447.27
SD202.293.49 × 1023.28 × 1022.35 × 1025.38 × 1023.30 × 1023.50 × 1023.17 × 102405.02203.72258.79193.51
Rank5.009.007.001.0013.008.004.002.003.0011.006.0010.00
F32Avr1200.121200.241200.291200.261200.771200.391200.351200.321200.751201.281200.141200.64
SD0.040.190.180.160.290.300.200.170.650.300.080.11
Rank1.003.005.004.0011.008.007.006.0010.0013.002.009.00
F33Avr1300.161300.251300.281300.461300.441300.361300.461300.081300.191303.571300.291300.16
SD0.040.110.140.180.170.180.210.040.050.740.090.04
Rank2.005.006.0010.009.008.0011.001.004.0013.007.003.00
F34Avr1400.191400.231400.291400.421400.321400.291400.401400.201400.291414.111400.211400.19
SD0.060.120.110.220.230.170.230.090.196.760.100.06
Rank1.005.008.0011.009.007.0010.003.006.0013.004.001.00
F35Avr1500.821501.591501.761503.141506.371503.111503.731501.131501.906731.901501.091501.76
SD0.220.640.882.122.842.232.130.400.895786.780.410.29
Rank1.004.005.009.0011.008.0010.003.007.0013.002.006.00
F36Avr1602.351602.701603.031602.851603.361603.061602.981602.351602.641603.751602.761602.61
SD0.380.590.420.410.350.300.420.520.500.130.320.30
Rank1.005.009.007.0012.0010.008.001.004.0013.006.003.00
Hybrid
F37Avr1717.645.92 × 1032.27 × 1037.35 × 1031.62 × 1052.58 × 1031.07 × 1045.48 × 10363,042.13477,784.197797.801873.12
SD22.393.38 × 1033.96 × 1023.66 × 1032.13 × 1051.25 × 1038.26 × 1033.86 × 103129,932.00109,260.795659.29434.71
Rank1.006.003.007.0012.004.009.005.0011.0013.008.002.00
F38Avr1800.8514,446.412319.869682.729696.151885.2310,609.809356.6810,610.0240,067.1211,422.571803.87
SD0.5910,403.741076.153555.4510,255.4946.858071.845017.195783.7546,220.3110,865.4010.84
Rank1.0011.004.006.007.003.008.005.0091310.002.00
F39Avr1900.461903.041902.591903.041906.121902.601903.541901.841902.941921.261901.821900.90
SD0.270.981.410.871.401.261.280.960.9216.060.650.69
Rank1.009.005.008.0012.006.0010.004.007.0013.003.002.00
F40Avr2000.583418.702108.105264.048152.172056.558010.602121.665439.3612,224.037640.342000.40
SD0.471781.6182.852176.014028.6641.844783.8174.553862.976303.346685.140.44
Rank2.006.004.008.0012.003.0011.005.009.0013.0010.001.00
F41Avr2100.535295.602406.774838.3190,227.962505.539494.632388.9614,867.981.27 × 1064290.402117.89
SD2.91 × 10−14294.39239.423095.47202,328.35261.306275.18209.2835,581.102.94 × 1063157.2344.31
Rank1.008.004.007.0012.005.009.003.0011.0013.006.002.00
F42Avr2200.812252.822272.192307.982309.892238.242271.602250.982285.782409.372228.422213.01
SD3.0247.4264.6557.3071.5533.9760.9753.8760.3181.8634.6025.09
Rank1.006.009.0011.0012.004.008.005.0010.0013.003.002.00
Composition
F43Avr2581.992629.462500.002500.002616.132500.002500.002629.462634.662500.002500.002629.46
SD63.450.000.000.0046.430.000.000.004.520.000.000.00
Rank7.0010.001.002.008.003.004.009.0012.005.006.0011.00
F44Avr2515.122552.082532.182547.452574.632582.522582.072580.262543.122538.922599.432536.95
SD3.968.589.3825.7230.0623.1328.4727.8236.1229.871.7829.67
Rank1.008.003.007.009.0012.0011.0010.006.005.0013.004.00
F45Avr2633.502696.492678.162682.902696.472694.012696.552696.562690.042693.412699.882694.51
SD10.381.17 × 1013.10 × 1012.32 × 1011.05 × 1011.29 × 1011.36 × 1018.76 × 10021.1618.890.6517.36
Rank1.001.00 × 1012.00 × 1004.00 × 1009.00 × 1007.00 × 1001.10 × 1011.20 × 1015.006.0013.008.00
F46Avr2700.142700.652700.182700.282700.172700.342700.242700.302703.402703.482709.862700.22
SD0.040.120.080.170.080.170.120.1518.2518.2319.720.08
Rank1.0010.003.007.002.009.006.008.0011.0012.0013.005.00
F47Avr2823.873002.573010.432842.062874.383084.232854.692893.573027.043032.322900.002893.39
SD154.31171.71158.0290.0366.44135.5883.5335.20116.7098.700.0036.19
Rank1.009.0010.002.004.0013.003.006.0011.0012.007.005.00
F48Avr3155.873278.033209.813000.003000.003349.793000.003000.003255.213248.913000.003000.00
SD67.8053.4458.100.000.00159.110.000.0079.9278.450.000.00
Rank7.0012.009.001.002.0013.003.004.0011.0010.005.006.00
F49Avr3076.809000.0261,591.92183,714.473824.55244,952.3174,649.013643.30413,389.22545,401.123100.0036,135.91
SD48.996617.79314,631.79551,709.97588.88625,545.37390,869.63622.98836,534.221.02 × 1060.00179,113.96
Rank1.005.007.0010.004.0011.008.003.0012.001.30 × 1012.006.00
F50Avr3523.724608.664241.323990.044470.615031.793991.634425.033810.934322.673200.003608.58
SD52.44608.68508.03545.97451.97982.39332.96523.84304.68692.910.00307.80
Rank3.0012.008.006.0011.0013.007.0010.005.009.001.004.00
Table 9. p-values of LSO with each rival optimizer on CEC-2014 test suite (F21–F50).
Fun | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE
Unimodal
F213.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F225.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−125.20 × 10−127.85 × 10−1
F231.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−122.16 × 10−2
Multimodal
F244.34 × 10−114.49 × 10−89.41 × 10−81.41 × 10−91.52 × 10−71.20 × 10−72.61 × 10−83.42 × 10−101.93 × 10−113.86 × 10−92.36 × 10−8
F255.57 × 10−103.79 × 10−18.29 × 10−61.85 × 10−82.13 × 10−48.77 × 10−18.48 × 10−93.02 × 10−113.02 × 10−111.43 × 10−53.02 × 10−11
F263.02 × 10−116.07 × 10−113.02 × 10−113.02 × 10−113.69 × 10−113.34 × 10−111.19 × 10−61.41 × 10−93.02 × 10−114.50 × 10−111.90 × 10−1
F273.02 × 10−111.21 × 10−103.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−111.78 × 10−43.02 × 10−113.02 × 10−114.08 × 10−112.84 × 10−1
F281.55 × 10−111.54 × 10−111.55 × 10−111.55 × 10−111.54 × 10−111.54 × 10−115.78 × 10−111.55 × 10−111.55 × 10−114.68 × 10−97.08 × 10−1
F293.02 × 10−116.06 × 10−113.02 × 10−113.02 × 10−111.96 × 10−104.50 × 10−111.73 × 10−61.25 × 10−73.02 × 10−111.10 × 10−81.29 × 10−9
F302.99 × 10−112.35 × 10−104.45 × 10−115.43 × 10−114.45 × 10−112.42 × 10−93.78 × 10−103.65 × 10−112.99 × 10−112.35 × 10−102.42 × 10−9
F313.02 × 10−112.52 × 10−18.12 × 10−44.08 × 10−112.52 × 10−16.52 × 10−12.15 × 10−28.50 × 10−29.92 × 10−113.71 × 10−19.92 × 10−11
F323.02 × 10−114.44 × 10−75.61 × 10−53.02 × 10−116.01 × 10−88.48 × 10−91.07 × 10−71.25 × 10−43.02 × 10−118.65 × 10−13.02 × 10−11
F333.02 × 10−112.77 × 10−55.07 × 10−101.55 × 10−91.36 × 10−71.20 × 10−86.01 × 10−89.47 × 10−33.02 × 10−112.02 × 10−83.87 × 10−1
F343.02 × 10−112.60 × 10−52.83 × 10−87.29 × 10−33.03 × 10−39.83 × 10−87.17 × 10−11.30 × 10−13.02 × 10−114.38 × 10−18.53 × 10−1
F353.02 × 10−117.04 × 10−78.99 × 10−113.02 × 10−111.61 × 10−104.98 × 10−111.89 × 10−44.44 × 10−73.02 × 10−116.10 × 10−36.07 × 10−11
F363.34 × 10−114.69 × 10−81.09 × 10−53.16 × 10−109.26 × 10−92.78 × 10−79.71 × 10−15.19 × 10−23.02 × 10−111.32 × 10−42.24 × 10−2
Hybrid
F373.02 × 10−114.50 × 10−113.02 × 10−113.02 × 10−113.34 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.37 × 10−5
F383.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−117.20 × 10−5
F393.02 × 10−111.07 × 10−93.02 × 10−113.02 × 10−114.50 × 10−113.02 × 10−115.00 × 10−93.02 × 10−113.02 × 10−118.99 × 10−112.92 × 10−2
F403.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−114.68 × 10−2
F413.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−112.32 × 10−2
F423.02 × 10−115.49 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−119.92 × 10−113.02 × 10−113.02 × 10−112.15 × 10−103.51 × 10−2
Composition
F431.20 × 10−111.45 × 10−71.45 × 10−71.51 × 10−81.45 × 10−71.45 × 10−72.43 × 10−111.20 × 10−111.45 × 10−71.45 × 10−76.39 × 10−4
F441.86 × 10−92.09 × 10−103.34 × 10−113.02 × 10−111.79 × 10−113.16 × 10−122.38 × 10−73.16 × 10−103.16 × 10−121.78 × 10−102.23 × 10−9
F457.77 × 10−95.93 × 10−96.93 × 10−121.44 × 10−104.57 × 10−121.60 × 10−111.34 × 10−101.20 × 10−81.02 × 10−111.41 × 10−112.53 × 10−8
F465.97 × 10−55.60 × 10−73.67 × 10−35.07 × 10−101.29 × 10−65.53 × 10−81.49 × 10−65.01 × 10−13.02 × 10−118.29 × 10−62.13 × 10−5
F472.25 × 10−42.68 × 10−52.76 × 10−72.15 × 10−101.50 × 10−51.91 × 10−74.80 × 10−72.60 × 10−82.32 × 10−72.77 × 10−61.35 × 10−4
F483.56 × 10−44.57 × 10−124.57 × 10−127.76 × 10−94.57 × 10−124.57 × 10−123.82 × 10−97.69 × 10−84.57 × 10−124.57 × 10−126.51 × 10−9
F493.02 × 10−113.02 × 10−111.17 × 10−93.02 × 10−116.52 × 10−91.58 × 10−73.02 × 10−113.02 × 10−115.89 × 10−22.83 × 10−63.02 × 10−11
F503.02 × 10−112.20 × 10−71.47 × 10−73.02 × 10−113.82 × 10−105.97 × 10−95.60 × 10−72.61 × 10−101.21 × 10−121.56 × 10−28.50 × 10−2
Table 10. Avr and SD of CEC2017 for 25 independent runs (The bolded value is the best overall).
F | Index | LSO | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE
Unimodal
F51Avr1.00 × 1023.02 × 1031.75 × 1033.41 × 1034.66 × 1063.13 × 1034.12 × 1031.79 × 1034.87 × 1071.14 × 10107.46 × 1034.60 × 102
SD3.56 × 10−143.59 × 1031.71 × 1031.77 × 1036.92 × 1063.24 × 1034.04 × 1031.93 × 1031.26 × 1083.64 × 1094.48 × 1031.19 × 103
Rank1.005.003.007.0010.006.008.004.0011.0013.009.002.00
F52Avr300.00300.00300.00300.001617.08300.00300.00300.002542.8810,392.16300.00300.00
SD0.00 × 1007.30 × 10−104.88 × 10−131.17 × 10−31.73 × 1034.70 × 10−102.93 × 10−94.92 × 10−102.02 × 1033.07 × 1031.62 × 10−31.06 × 10−14
Rank1.006.003.009.0011.004.007.005.0012.0013.008.002.00
Multimodal
F53Avr400.00405.28401.36403.35428.61402.75408.82403.49418.771226.25408.54406.25
SD6.83 × 10−710.060.492.1836.661.3817.511.2820.57579.1713.531.24
Rank1.006.002.004.0011.003.009.005.0010.0013.008.007.00
F54Avr506.00519.80526.09533.13555.93526.93537.46516.16519.42584.87514.30514.54
SD2.229.3812.388.5126.0612.7514.365.668.7815.786.183.31
Rank1.006.007.009.0012.008.0010.004.005.0013.002.003.00
F55Avr600.00609.86601.84618.91638.19609.14610.07600.06601.57645.54600.15600.00
SD0.007.52 × 1003.06 × 1008.59 × 1001.37 × 1017.16 × 1006.92 × 1003.17 × 10−12.14 × 1006.58 × 1004.35 × 10−12.08 × 10−6
Rank1.008.006.0010.0012.007.009.003.005.0013.004.002.00
F56Avr717.07732.33737.11762.79775.78756.31765.61723.21732.39806.57726.05726.99
SD2.589.8711.6412.1423.4317.1220.505.848.5312.897.543.86
Rank1.005.007.009.0012.008.0010.002.006.0013.003.004.00
F57Avr806.62821.03822.97827.43840.82824.91828.43813.33815.53854.98816.32814.53
SD3.4210.618.066.0016.839.1410.254.855.627.677.523.69
Rank1.006.007.009.0012.008.0010.002.004.0013.005.003.00
F58Avr900.00903.92924.991056.951498.95993.421090.74900.20927.361542.77900.00900.00
SD0.007.74 × 1004.43 × 1011.01 × 1024.48 × 1028.62 × 1011.80 × 1025.29 × 10−15.44 × 1011.67 × 1021.64 × 10−20.00
Rank1.005.006.0010.0012.008.0011.004.007.0013.003.001.00
F59Avr1350.601772.601835.551582.062158.152000.111852.321541.391639.922623.401601.131449.25
SD143.76259.50342.45241.56392.70311.11312.11222.43314.70171.28224.02262.92
Rank1.007.008.004.0011.0010.009.003.006.0013.005.002.00
Hybrid
F60Avr1101.172158.631120.621126.241378.551126.781138.461108.961141.0319,833.641116.241102.18
SD1.16464.0827.187.71443.6317.5039.918.2836.0448,865.277.732.31
Rank1.0012.005.006.0010.007.008.003.009.0013.004.002.00
F61Avr1233.501.20 × 1061.06 × 1042.18 × 1054.62 × 1061.40 × 1043.87 × 1051.01 × 1046.84 × 1052.75 × 1084.51 × 1043.83 × 103
SD52.569.97 × 1051.25 × 1041.87 × 1055.01 × 1061.22 × 1044.73 × 1056.61 × 1038.47 × 1051.59 × 1082.89 × 1046.45 × 103
Rank1.0010.004.007.0011.005.008.003.009.0013.006.002.00
F62Avr1303.7516,604.031815.0210,568.9516,564.101466.2910,902.047359.1211,669.4749,487,747.8815,922.681309.37
SD2.5511,742.29304.945329.6213,327.54149.898235.906429.886780.5277,696,384.8613,216.907.75
Rank1.0011.004.006.0010.003.007.005.008.0013.009.002.00
F63Avr1400.601498.921480.732086.661803.911457.811790.541455.402955.5810,856.691581.311402.50
SD0.6240.2138.32589.06728.2124.43554.1927.241781.8318,963.95642.535.99
Rank1.006.005.0011.0010.004.009.003.0012.0013.007.002.00
F64Avr1500.292251.041593.262267.566197.311559.163328.431600.404020.6611,243.412344.581500.66
SD0.34883.9474.16783.574167.9052.611537.3476.543824.805685.891528.880.61
Rank1.007.004.008.0012.003.0010.005.0011.0013.009.002.00
F65Avr1600.461709.981740.231783.441899.101701.901797.661664.511725.362127.091691.781611.95
SD0.26117.63107.09108.96116.0385.88123.8466.64107.02117.7078.5725.13
Rank1.006.009.0010.0012.005.0011.003.008.0013.004.002.00
F66Avr1702.471770.911761.371758.721793.831750.441780.811742.651749.741849.721756.581706.13
SD4.5241.6243.1821.7944.2021.8246.9617.1820.4550.8338.028.25
Rank1.009.008.007.0012.005.0011.003.004.0013.006.002.00
F67Avr1800.181.63 × 1044.67 × 1031.68 × 1041.76 × 1043.13 × 1031.40 × 1041.42 × 1042.64 × 1041.55 × 1082.80 × 1041.81 × 103
SD0.271.01 × 1044.95 × 1039.94 × 1031.31 × 1042.79 × 1039.74 × 1031.08 × 1041.63 × 1043.78 × 1081.42 × 1048.76 × 100
Rank1.007.004.008.009.003.005.006.0010.0013.0011.002.00
F68Avr1900.112551.751987.398849.7852,403.861955.147572.261954.9415,942.552,103,955.736030.921900.30
SD0.27886.5377.786841.5985,717.2449.956392.5734.0947,387.193,083,821.046113.480.55
Rank1.006.005.0010.0012.004.009.003.0011.0013.007.002.00
F69Avr2000.232088.912093.272125.332205.962082.822098.952055.372081.952300.602025.852002.40
SD0.3857.3663.2154.0967.3650.9675.8361.4760.4073.009.526.32
Rank1.007.008.0011.0012.006.0010.004.005.0013.003.002.00
Composition
F70Avr2215.362276.542274.362233.442306.012222.912252.572312.712317.952337.092302.122308.46
SD34.4558.2860.7054.6864.6048.2967.1322.595.7659.0745.9236.96
Rank1.007.006.003.009.002.005.0011.0012.0013.008.0010.00
F71Avr2291.092350.022300.862305.552579.662302.982337.242316.372348.583123.292387.922300.47
SD24.29177.4113.4611.34536.2310.16192.1785.77152.10345.67286.270.80
Rank1.009.003.005.0012.004.007.006.008.0013.0011.002.00
F72Avr2597.612620.272624.732620.772646.732631.162640.092615.402618.592701.832620.812612.69
SD56.228.8812.936.9521.6317.7717.085.377.6722.148.014.21
Rank1.005.008.006.0011.009.0010.003.004.0013.007.002.00
F73Avr2650.082749.502698.602740.682765.212723.202735.382744.792748.322872.142755.562751.45
SD112.949.32112.2346.3162.3389.8294.837.5110.9344.6210.185.29
Rank1.008.002.005.0011.003.004.006.007.0013.0010.009.00
F74Avr2902.822921.262934.682935.622938.572919.172927.292930.312938.183349.412945.502929.57
SD13.7924.6129.0920.3863.3966.9224.7122.3428.39117.1033.0822.53
Rank1.003.007.008.0010.002.004.006.009.0013.0011.005.00
F75Avr2845.982975.912980.812993.653345.532989.793250.792973.063263.924146.593176.373001.41
SD103.36249.95114.69213.33446.62129.74529.36253.79442.08290.19427.89231.12
Rank1.003.004.006.0012.005.0010.002.0011.0013.009.007.00
F76Avr3090.283092.863097.643094.373141.373099.693101.523095.373099.263189.413091.133092.04
SD1.783.084.372.2737.1918.888.878.4112.7284.081.513.12
Rank1.004.007.005.0012.009.0010.006.008.0013.002.003.00
F77Avr3111.623312.503315.613340.493433.653317.493324.963342.763335.873780.913367.883334.58
SD150.21185.24138.98108.50181.80188.84128.45122.45112.63145.33164.48108.44
Rank1.003.004.009.0012.005.006.0010.008.0013.0011.007.00
F78Avr3157.423185.143246.873206.203327.953220.803278.283192.923213.863463.173198.953170.47
SD9.5743.9573.7636.6674.4848.1379.1448.3348.58140.1267.3516.12
Rank1.003.0010.006.0012.008.0011.004.007.0013.005.002.00
F79Avr3.49 × 1033.12 × 1058.06 × 1051.67 × 1059.11 × 1051.90 × 1061.64 × 1053.38 × 1059.62 × 1059.67 × 1062.33 × 1051.48 × 105
SD2.20 × 1024.94 × 1051.16 × 1062.06 × 1058.87 × 1052.98 × 1062.50 × 1054.89 × 1051.74 × 1061.32 × 1074.27 × 1053.07 × 105
Rank1.006.009.004.0010.0012.003.007.0011.0013.005.002.00
Table 11. p-values of LSO with each rival optimizer on CEC-2017 test suite (F51–F79).
Fun | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE
Unimodal
F512.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.40 × 10−112.92 × 10−1
F521.21 × 10−121.53 × 10−111.21 × 10−121.21 × 10−121.20 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−123.34 × 10−1
Multimodal
F533.33 × 10−113.33 × 10−114.96 × 10−113.01 × 10−113.68 × 10−114.49 × 10−113.01 × 10−113.01 × 10−113.01 × 10−113.01 × 10−113.01 × 10−11
F542.60 × 10−83.47 × 10−103.69 × 10−113.34 × 10−114.20 × 10−104.50 × 10−113.65 × 10−87.12 × 10−93.02 × 10−112.00 × 10−68.48 × 10−9
| F | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F55 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 4.48 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 4.02 × 10^−1 |
| F56 | 2.61 × 10^−10 | 9.92 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.28 × 10^−6 | 1.69 × 10^−9 | 3.02 × 10^−11 | 5.19 × 10^−7 | 2.61 × 10^−10 |
| F57 | 1.25 × 10^−7 | 1.96 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.62 × 10^−10 | 4.97 × 10^−11 | 5.17 × 10^−7 | 1.20 × 10^−8 | 3.02 × 10^−11 | 2.19 × 10^−8 | 4.57 × 10^−9 |
| F58 | 5.26 × 10^−12 | 8.63 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 4.09 × 10^−11 | 2.15 × 10^−12 | 1.72 × 10^−12 | 4.10 × 10^−11 | 3.34 × 10^−1 |
| F59 | 2.19 × 10^−8 | 2.19 × 10^−8 | 8.66 × 10^−5 | 1.96 × 10^−10 | 2.15 × 10^−10 | 6.52 × 10^−9 | 9.03 × 10^−4 | 1.78 × 10^−4 | 3.02 × 10^−11 | 9.51 × 10^−6 | 2.40 × 10^−1 |
| Hybrid | | | | | | | | | | | |
| F60 | 3.02 × 10^−11 | 9.92 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 1.31 × 10^−8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 1.08 × 10^−2 |
| F61 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F62 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.60 × 10^−7 |
| F63 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.55 × 10^−1 |
| F64 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.35 × 10^−2 |
| F65 | 3.02 × 10^−11 | 6.70 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.08 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.09 × 10^−8 |
| F66 | 3.02 × 10^−11 | 4.08 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.49 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 2.43 × 10^−1 |
| F67 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.61 × 10^−2 |
| F68 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.76 × 10^−2 |
| F69 | 1.41 × 10^−11 | 1.41 × 10^−11 | 1.41 × 10^−11 | 1.41 × 10^−11 | 1.41 × 10^−11 | 1.41 × 10^−11 | 2.61 × 10^−11 | 1.41 × 10^−11 | 1.41 × 10^−11 | 1.41 × 10^−11 | 5.22 × 10^−2 |
| Composition | | | | | | | | | | | |
| F70 | 1.32 × 10^−4 | 1.32 × 10^−4 | 4.46 × 10^−1 | 4.44 × 10^−7 | 9.94 × 10^−1 | 4.86 × 10^−3 | 1.70 × 10^−8 | 1.96 × 10^−10 | 4.44 × 10^−7 | 9.26 × 10^−9 | 3.08 × 10^−8 |
| F71 | 3.34 × 10^−11 | 1.55 × 10^−9 | 3.82 × 10^−10 | 3.82 × 10^−10 | 1.17 × 10^−9 | 4.31 × 10^−8 | 6.77 × 10^−5 | 4.98 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 8.07 × 10^−1 |
| F72 | 2.67 × 10^−9 | 5.49 × 10^−11 | 1.21 × 10^−10 | 3.02 × 10^−11 | 1.07 × 10^−9 | 3.34 × 10^−11 | 8.48 × 10^−9 | 3.96 × 10^−8 | 3.02 × 10^−11 | 4.50 × 10^−11 | 4.74 × 10^−6 |
| F73 | 2.60 × 10^−8 | 1.76 × 10^−3 | 2.38 × 10^−7 | 2.23 × 10^−9 | 3.32 × 10^−6 | 8.20 × 10^−7 | 2.32 × 10^−6 | 7.04 × 10^−7 | 3.02 × 10^−11 | 1.61 × 10^−10 | 2.87 × 10^−10 |
| F74 | 4.57 × 10^−5 | 2.53 × 10^−8 | 8.61 × 10^−10 | 3.86 × 10^−8 | 4.66 × 10^−6 | 1.44 × 10^−7 | 4.44 × 10^−9 | 5.30 × 10^−9 | 2.91 × 10^−11 | 7.51 × 10^−8 | 5.00 × 10^−6 |
| F75 | 1.77 × 10^−9 | 8.48 × 10^−8 | 2.88 × 10^−2 | 1.80 × 10^−10 | 6.34 × 10^−7 | 4.13 × 10^−6 | 2.12 × 10^−6 | 3.55 × 10^−10 | 1.44 × 10^−11 | 3.64 × 10^−11 | 7.62 × 10^−7 |
| F76 | 6.32 × 10^−5 | 6.63 × 10^−10 | 2.80 × 10^−8 | 2.98 × 10^−11 | 3.32 × 10^−8 | 1.39 × 10^−9 | 3.80 × 10^−6 | 1.99 × 10^−8 | 2.98 × 10^−11 | 3.17 × 10^−3 | 4.20 × 10^−4 |
| F77 | 3.78 × 10^−8 | 7.35 × 10^−6 | 2.09 × 10^−9 | 1.73 × 10^−9 | 1.24 × 10^−6 | 2.28 × 10^−8 | 2.49 × 10^−7 | 2.09 × 10^−8 | 2.50 × 10^−11 | 3.60 × 10^−9 | 2.40 × 10^−8 |
| F78 | 5.87 × 10^−4 | 7.38 × 10^−10 | 9.26 × 10^−9 | 3.02 × 10^−11 | 4.18 × 10^−9 | 6.07 × 10^−11 | 1.86 × 10^−6 | 5.46 × 10^−9 | 3.02 × 10^−11 | 3.51 × 10^−2 | 9.03 × 10^−4 |
| F79 | 3.02 × 10^−11 | 5.07 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.46 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.01 × 10^−11 | 3.02 × 10^−11 | 2.87 × 10^−10 |
Table 12. Avr and SD of CEC2020 for 25 independent runs (The bolded value is the best overall).
| F | Index | LSO | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unimodal | | | | | | | | | | | | | |
| F80 | Avr | 1.00 × 10^2 | 3.04 × 10^3 | 1.82 × 10^3 | 3.94 × 10^3 | 1.69 × 10^6 | 2.85 × 10^3 | 3.76 × 10^3 | 3.89 × 10^3 | 7.74 × 10^7 | 1.27 × 10^10 | 7257.82 | 704.92 |
| | SD | 2.64 × 10^−13 | 3.47 × 10^3 | 2.05 × 10^3 | 2.04 × 10^3 | 2.04 × 10^6 | 2.19 × 10^3 | 2.52 × 10^3 | 3.64 × 10^3 | 1.60 × 10^8 | 4.15 × 10^9 | 4390.97 | 2059.48 |
| | Rank | 1.00 | 5.00 | 3.00 | 8.00 | 10.00 | 4.00 | 6.00 | 7.00 | 11.00 | 13.00 | 9.00 | 2.00 |
| Multimodal | | | | | | | | | | | | | |
| F81 | Avr | 1347.02 | 1766.98 | 1839.45 | 1636.20 | 2059.68 | 1954.49 | 1902.30 | 1476.48 | 1682.49 | 2648.60 | 1665.28 | 1604.18 |
| | SD | 1.52 × 10^2 | 2.87 × 10^2 | 3.42 × 10^2 | 1.98 × 10^2 | 3.46 × 10^2 | 2.39 × 10^2 | 3.00 × 10^2 | 1.92 × 10^2 | 2.48 × 10^2 | 2.37 × 10^2 | 158.68 | 251.92 |
| | Rank | 1.00 | 7.00 | 8.00 | 4.00 | 11.00 | 10.00 | 9.00 | 2.00 | 6.00 | 13.00 | 5.00 | 3.00 |
| Hybrid | | | | | | | | | | | | | |
| F82 | Avr | 718.49 | 732.59 | 744.51 | 761.38 | 784.64 | 758.14 | 769.60 | 725.86 | 732.62 | 806.17 | 727.48 | 726.59 |
| | SD | 3.37 | 9.83 | 15.30 | 17.00 | 21.87 | 18.01 | 19.84 | 7.84 | 10.45 | 10.00 | 9.09 | 3.31 |
| | Rank | 1.00 | 5.00 | 7.00 | 9.00 | 12.00 | 8.00 | 10.00 | 2.00 | 6.00 | 13.00 | 4.00 | 3.00 |
| F83 | Avr | 1900.02 | 1901.35 | 1900.00 | 1900.00 | 1900.11 | 1900.00 | 1900.00 | 1900.00 | 1900.11 | 1900.00 | 1900.00 | 1901.16 |
| | SD | 0.09 | 0.56 | 0.00 | 0.00 | 0.46 | 0.00 | 0.00 | 0.00 | 0.26 | 0.00 | 0.00 | 0.17 |
| | Rank | 8.00 | 13.00 | 1.00 | 2.00 | 9.00 | 3.00 | 4.00 | 5.00 | 10.00 | 6.00 | 7.00 | 12.00 |
| F84 | Avr | 1712.49 | 6902.12 | 2245.58 | 6702.53 | 294,472.06 | 2278.87 | 7280.40 | 3432.45 | 39,529.67 | 414,990.21 | 9364.45 | 1834.94 |
| | SD | 7.64 × 10^0 | 4.34 × 10^3 | 2.81 × 10^2 | 3.78 × 10^3 | 5.64 × 10^5 | 3.21 × 10^2 | 5.18 × 10^3 | 9.92 × 10^2 | 1.03 × 10^5 | 1.37 × 10^5 | 5902.78 | 239.86 |
| | Rank | 1.00 | 7.00 | 3.00 | 6.00 | 12.00 | 4.00 | 8.00 | 5.00 | 11.00 | 13.00 | 9.00 | 2.00 |
| F85 | Avr | 1604.58 | 1713.70 | 1746.69 | 1756.90 | 1824.55 | 1776.96 | 1788.64 | 1671.19 | 1747.52 | 2137.60 | 1722.52 | 1618.83 |
| | SD | 21.66 | 87.85 | 97.05 | 77.33 | 115.43 | 101.48 | 106.80 | 94.61 | 87.98 | 213.01 | 52.60 | 47.72 |
| | Rank | 1.00 | 4.00 | 6.00 | 8.00 | 12.00 | 9.00 | 11.00 | 3.00 | 7.00 | 13.00 | 5.00 | 2.00 |
| F86 | Avr | 2100.92 | 5038.57 | 2561.38 | 5704.18 | 45,897.21 | 2546.39 | 8936.41 | 2417.44 | 9055.12 | 1,045,287.18 | 5349.38 | 2115.06 |
| | SD | 3.01 | 3364.65 | 263.40 | 3932.69 | 38,048.22 | 354.82 | 6470.47 | 208.65 | 4287.62 | 1,690,081.52 | 4114.90 | 51.19 |
| | Rank | 1.00 | 6.00 | 5.00 | 8.00 | 12.00 | 4.00 | 9.00 | 3.00 | 10.00 | 13.00 | 7.00 | 2.00 |
| Composition | | | | | | | | | | | | | |
| F87 | Avr | 2291.63 | 2298.76 | 2299.33 | 2305.76 | 2401.00 | 2305.77 | 2336.93 | 2294.71 | 2308.68 | 3188.89 | 2413.02 | 2300.39 |
| | SD | 2.33 × 10^1 | 1.93 × 10^1 | 1.74 × 10^1 | 1.09 × 10^1 | 2.60 × 10^2 | 4.59 × 10^0 | 1.82 × 10^2 | 2.32 × 10^1 | 2.43 × 10^1 | 2.97 × 10^2 | 308.22 | 0.56 |
| | Rank | 1.00 | 3.00 | 4.00 | 6.00 | 11.00 | 7.00 | 9.00 | 2.00 | 8.00 | 13.00 | 12.00 | 5.00 |
| F88 | Avr | 2654.71 | 2748.83 | 2717.24 | 2740.74 | 2774.76 | 2707.03 | 2777.68 | 2742.18 | 2745.64 | 2860.81 | 2746.39 | 2749.71 |
| | SD | 113.82 | 8.28 | 99.81 | 46.31 | 57.90 | 105.77 | 22.49 | 6.95 | 48.68 | 34.68 | 47.52 | 4.42 |
| | Rank | 1.00 | 8.00 | 3.00 | 4.00 | 11.00 | 2.00 | 12.00 | 5.00 | 6.00 | 13.00 | 7.00 | 9.00 |
| F89 | Avr | 2894.33 | 2922.20 | 2937.04 | 2929.85 | 2952.00 | 2926.60 | 2931.74 | 2932.25 | 2936.16 | 3372.91 | 2927.64 | 2936.00 |
| | SD | 57.51 | 24.00 | 22.48 | 23.57 | 23.42 | 24.33 | 23.04 | 22.98 | 18.18 | 150.11 | 31.06 | 19.13 |
| | Rank | 1.00 | 2.00 | 10.00 | 5.00 | 11.00 | 3.00 | 6.00 | 7.00 | 9.00 | 13.00 | 4.00 | 8.00 |
Table 13. p-values of LSO with each rival optimizer on CEC-2020 test suite (F80–F89).
| F | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Unimodal | | | | | | | | | | | |
| F80 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 2.61 × 10^−11 | 3.49 × 10^−1 |
| Multimodal | | | | | | | | | | | |
| F81 | 9.06 × 10^−8 | 9.83 × 10^−8 | 7.60 × 10^−7 | 2.61 × 10^−10 | 2.87 × 10^−10 | 2.67 × 10^−9 | 1.22 × 10^−2 | 5.60 × 10^−7 | 3.02 × 10^−11 | 3.65 × 10^−8 | 7.66 × 10^−5 |
| Hybrid | | | | | | | | | | | |
| F82 | 2.67 × 10^−9 | 8.15 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 8.15 × 10^−5 | 6.53 × 10^−8 | 3.02 × 10^−11 | 1.34 × 10^−5 | 1.41 × 10^−9 |
| F83 | 1.72 × 10^−12 | 3.34 × 10^−1 | 3.34 × 10^−1 | 5.44 × 10^−1 | 3.34 × 10^−1 | 3.34 × 10^−1 | 3.34 × 10^−1 | 2.12 × 10^−4 | 3.34 × 10^−1 | 3.34 × 10^−1 | 1.72 × 10^−12 |
| F84 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.46 × 10^−6 |
| F85 | 1.09 × 10^−10 | 6.70 × 10^−11 | 4.50 × 10^−11 | 4.08 × 10^−11 | 4.98 × 10^−11 | 5.49 × 10^−11 | 6.72 × 10^−10 | 6.07 × 10^−11 | 3.02 × 10^−11 | 4.98 × 10^−11 | 1.70 × 10^−2 |
| F86 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 8.31 × 10^−3 |
| Composition | | | | | | | | | | | |
| F87 | 6.52 × 10^−9 | 3.35 × 10^−8 | 3.82 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.18 × 10^−9 | 2.89 × 10^−3 | 5.07 × 10^−10 | 3.02 × 10^−11 | 6.52 × 10^−9 | 4.19 × 10^−1 |
| F88 | 8.48 × 10^−9 | 2.28 × 10^−5 | 5.53 × 10^−8 | 2.61 × 10^−10 | 1.11 × 10^−4 | 3.02 × 10^−11 | 4.98 × 10^−4 | 2.60 × 10^−5 | 3.02 × 10^−11 | 2.92 × 10^−9 | 2.37 × 10^−10 |
| F89 | 1.39 × 10^−5 | 2.49 × 10^−8 | 3.64 × 10^−9 | 4.37 × 10^−10 | 2.03 × 10^−7 | 8.84 × 10^−9 | 1.66 × 10^−7 | 1.77 × 10^−8 | 2.84 × 10^−11 | 2.24 × 10^−6 | 4.79 × 10^−8 |
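The pairwise p-values in the tables above are characteristic of a nonparametric two-sample comparison over the independent runs of LSO and each rival. As a minimal sketch, assuming a Wilcoxon rank-sum test with the normal approximation and no tied fitness values (the helper name is illustrative, not from the paper's code):

```python
import math

def ranksum_p(sample_a, sample_b):
    """Two-sided p-value of the Wilcoxon rank-sum test.

    Normal approximation; assumes no tied values across the two samples.
    """
    n1, n2 = len(sample_a), len(sample_b)
    combined = sorted([(v, 0) for v in sample_a] + [(v, 1) for v in sample_b])
    # Sum of the 1-based ranks held by sample_a in the pooled ordering.
    rank_sum = sum(i + 1 for i, (_, group) in enumerate(combined) if group == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    std = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (rank_sum - mean) / std
    return math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))

# With 30 runs per algorithm and complete separation of the two samples,
# the statistic saturates, which is why a floor near 3 x 10^-11 recurs
# throughout the tables.
runs_a = [100.0 + 0.01 * i for i in range(30)]
runs_b = [200.0 + 0.01 * i for i in range(30)]
p = ranksum_p(runs_a, runs_b)
```

A p-value below the usual 0.05 threshold indicates a statistically significant difference between LSO and the rival on that function.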
Table 14. Avr and SD of CEC2022 for 25 independent runs (The bolded value is the best overall).
| F | Index | LSO | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Unimodal | | | | | | | | | | | | | |
| F90 | Avr | 300.00 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 1.55 × 10^4 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 2.44 × 10^3 | 10,731.57 | 300.00 | 300.00 |
| | SD | 0.00 | 5.61 × 10^−10 | 1.37 × 10^−10 | 2.20 × 10^−1 | 6.57 × 10^3 | 6.31 × 10^−9 | 2.00 × 10^−8 | 6.18 × 10^−10 | 2.20 × 10^3 | 3788.59 | 0.00 | 0.00 |
| | Rank | 1.00 | 5.00 | 3.00 | 9.00 | 13.00 | 6.00 | 7.00 | 4.00 | 11.00 | 12.00 | 1.00 | 1.00 |
| Multimodal | | | | | | | | | | | | | |
| F91 | Avr | 400.96 | 410.29 | 405.42 | 405.90 | 428.03 | 410.25 | 413.99 | 409.75 | 434.34 | 1066.53 | 409.80 | 407.52 |
| | SD | 2.13 × 10^0 | 1.96 × 10^1 | 3.66 × 10^0 | 1.30 × 10^1 | 4.56 × 10^1 | 1.74 × 10^1 | 2.22 × 10^1 | 1.71 × 10^1 | 2.63 × 10^1 | 814.53 | 12.35 | 2.32 |
| | Rank | 1.00 | 8.00 | 2.00 | 3.00 | 10.00 | 7.00 | 9.00 | 5.00 | 11.00 | 13.00 | 6.00 | 4.00 |
| F92 | Avr | 600.00 | 606.09 | 601.30 | 617.36 | 632.26 | 609.38 | 611.84 | 600.15 | 601.21 | 645.65 | 600.08 | 600.00 |
| | SD | 0.00 | 4.62 | 1.85 | 7.40 | 13.11 | 7.56 | 6.30 | 0.62 | 1.86 | 6.13 | 0.05 | 2.07 × 10^−6 |
| | Rank | 1.00 | 7.00 | 6.00 | 10.00 | 12.00 | 8.00 | 9.00 | 4.00 | 5.00 | 13.00 | 3.00 | 2.00 |
| F93 | Avr | 810.72 | 821.06 | 821.13 | 823.32 | 836.29 | 824.64 | 831.03 | 812.11 | 814.64 | 848.55 | 825.60 | 824.02 |
| | SD | 2.96 | 10.98 | 9.21 | 6.31 | 16.19 | 6.72 | 8.71 | 5.90 | 8.58 | 6.58 | 8.37 | 3.22 |
| | Rank | 1.00 | 4.00 | 5.00 | 6.00 | 11.00 | 8.00 | 10.00 | 2.00 | 3.00 | 13.00 | 9.00 | 7.00 |
| F94 | Avr | 900.02 | 908.19 | 921.08 | 1021.89 | 1447.74 | 1019.31 | 1210.33 | 900.38 | 917.15 | 1432.77 | 902.70 | 900.00 |
| | SD | 8.29 × 10^−2 | 2.65 × 10^1 | 2.66 × 10^1 | 7.76 × 10^1 | 4.10 × 10^2 | 1.10 × 10^2 | 1.75 × 10^2 | 4.17 × 10^−1 | 2.84 × 10^1 | 104.11 | 8.25 | 1.60 × 10^−6 |
| | Rank | 2.00 | 5.00 | 7.00 | 10.00 | 13.00 | 9.00 | 11.00 | 3.00 | 6.00 | 12.00 | 4.00 | 1.00 |
| Hybrid | | | | | | | | | | | | | |
| F95 | Avr | 1800.36 | 3566.29 | 2258.98 | 3181.78 | 3989.73 | 1874.01 | 3629.15 | 3691.90 | 5032.02 | 2.66 × 10^8 | 5411.36 | 1826.60 |
| | SD | 0.34 | 1735.60 | 1245.49 | 1115.74 | 1954.69 | 50.56 | 1615.83 | 1945.92 | 2475.45 | 2.41 × 10^8 | 2210.90 | 117.17 |
| | Rank | 1.00 | 6.00 | 4.00 | 5.00 | 9.00 | 3.00 | 7.00 | 8.00 | 10.00 | 13.00 | 11.00 | 2.00 |
| F96 | Avr | 2000.35 | 2033.78 | 2027.67 | 2042.25 | 2066.69 | 2032.20 | 2033.24 | 2032.42 | 2033.04 | 2139.27 | 2020.43 | 2002.21 |
| | SD | 0.54 | 13.08 | 9.40 | 11.42 | 28.19 | 16.41 | 11.51 | 38.20 | 15.80 | 41.11 | 3.76 | 6.18 |
| | Rank | 1.00 | 9.00 | 4.00 | 10.00 | 12.00 | 5.00 | 8.00 | 6.00 | 7.00 | 13.00 | 3.00 | 2.00 |
| F97 | Avr | 2201.57 | 2223.04 | 2221.27 | 2224.21 | 2232.31 | 2221.58 | 2224.61 | 2222.82 | 2224.73 | 2263.95 | 2220.57 | 2204.54 |
| | SD | 3.83 × 10^0 | 7.12 × 10^0 | 1.01 × 10^0 | 1.34 × 10^0 | 5.42 × 10^0 | 2.81 × 10^0 | 2.93 × 10^0 | 2.33 × 10^1 | 3.53 × 10^0 | 29.43 | 3.76 | 8.49 |
| | Rank | 1.00 | 7.00 | 4.00 | 8.00 | 12.00 | 5.00 | 9.00 | 6.00 | 10.00 | 13.00 | 3.00 | 2.00 |
| Composition | | | | | | | | | | | | | |
| F98 | Avr | 2529.28 | 2529.43 | 2534.18 | 2529.29 | 2564.67 | 2529.55 | 2534.18 | 2529.28 | 2580.54 | 2771.04 | 2529.28 | 2529.28 |
| | SD | 0.00 | 0.19 | 26.83 | 0.00 | 39.90 | 0.87 | 26.83 | 0.00 | 39.65 | 61.34 | 0.00 | 6.79 × 10^−10 |
| | Rank | 1.00 | 6.00 | 8.00 | 5.00 | 11.00 | 7.00 | 9.00 | 2.00 | 12.00 | 13.00 | 4.00 | 3.00 |
| F99 | Avr | 2500.36 | 2508.72 | 2516.28 | 2553.99 | 2616.35 | 2532.77 | 2558.84 | 2553.16 | 2573.33 | 2690.10 | 2523.74 | 2527.05 |
| | SD | 0.06 | 32.01 | 40.44 | 57.97 | 214.36 | 54.42 | 63.67 | 60.40 | 63.44 | 195.95 | 53.37 | 49.31 |
| | Rank | 1.00 | 3.00 | 4.00 | 9.00 | 12.00 | 7.00 | 10.00 | 8.00 | 11.00 | 13.00 | 5.00 | 6.00 |
| F100 | Avr | 2825.01 | 2.82 × 10^3 | 2.78 × 10^3 | 2.70 × 10^3 | 2.92 × 10^3 | 2.68 × 10^3 | 2.78 × 10^3 | 2.85 × 10^3 | 3.00 × 10^3 | 4092.28 | 2750.51 | 2895.74 |
| | SD | 129.15 | 2.24 × 10^2 | 1.76 × 10^2 | 1.38 × 10^2 | 5.55 × 10^1 | 1.27 × 10^2 | 1.48 × 10^2 | 1.15 × 10^2 | 1.74 × 10^2 | 331.37 | 175.84 | 27.73 |
| | Rank | 7.00 | 6.00 | 5.00 | 2.00 | 11.00 | 1.00 | 4.00 | 8.00 | 12.00 | 13.00 | 3.00 | 9.00 |
| F101 | Avr | 2862.69 | 2863.95 | 2865.40 | 2864.29 | 2894.24 | 2864.79 | 2866.37 | 2863.94 | 2866.00 | 2971.50 | 2862.46 | 2863.56 |
| | SD | 1.72 | 1.49 | 1.80 | 1.78 | 35.17 | 1.76 | 3.73 | 1.78 | 5.67 | 114.51 | 1.67 | 1.45 |
| | Rank | 2.00 | 5.00 | 8.00 | 6.00 | 12.00 | 7.00 | 10.00 | 4.00 | 9.00 | 13.00 | 1.00 | 3.00 |
Table 15. p-values of LSO with each rival optimizer on CEC-2022 test suite (F90–F101).
| F | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Unimodal | | | | | | | | | | | |
| F90 | 1.21 × 10^−12 | 1.20 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN |
| Multimodal | | | | | | | | | | | |
| F91 | 3.36 × 10^−8 | 4.03 × 10^−9 | 2.42 × 10^−7 | 2.07 × 10^−10 | 8.53 × 10^−10 | 2.44 × 10^−9 | 8.08 × 10^−9 | 5.76 × 10^−11 | 2.10 × 10^−11 | 1.40 × 10^−10 | 2.68 × 10^−10 |
| F92 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 3.37 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.00 × 10^0 |
| F93 | 3.32 × 10^−6 | 7.59 × 10^−7 | 3.16 × 10^−10 | 3.82 × 10^−10 | 8.86 × 10^−10 | 5.49 × 10^−11 | 6.63 × 10^−1 | 9.05 × 10^−2 | 3.02 × 10^−11 | 1.41 × 10^−9 | 3.69 × 10^−11 |
| F94 | 3.44 × 10^−10 | 7.88 × 10^−12 | 7.88 × 10^−12 | 7.88 × 10^−12 | 7.88 × 10^−12 | 7.88 × 10^−12 | 1.20 × 10^−9 | 1.08 × 10^−11 | 7.88 × 10^−12 | 2.50 × 10^−11 | 4.22 × 10^−5 |
| Hybrid | | | | | | | | | | | |
| F95 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 8.99 × 10^−11 |
| F96 | 2.99 × 10^−11 | 2.99 × 10^−11 | 2.99 × 10^−11 | 2.99 × 10^−11 | 2.99 × 10^−11 | 2.99 × 10^−11 | 4.03 × 10^−11 | 2.99 × 10^−11 | 2.99 × 10^−11 | 3.65 × 10^−11 | 1.15 × 10^−2 |
| F97 | 4.50 × 10^−11 | 1.96 × 10^−10 | 3.34 × 10^−11 | 3.34 × 10^−11 | 8.15 × 10^−11 | 3.34 × 10^−11 | 2.87 × 10^−10 | 4.50 × 10^−11 | 3.02 × 10^−11 | 2.61 × 10^−10 | 1.30 × 10^−1 |
| Composition | | | | | | | | | | | |
| F98 | 1.21 × 10^−12 | 1.61 × 10^−1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 2.16 × 10^−2 | 4.65 × 10^−8 | 2.15 × 10^−2 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.34 × 10^−1 |
| F99 | 1.08 × 10^−2 | 8.15 × 10^−11 | 4.62 × 10^−10 | 3.02 × 10^−11 | 5.97 × 10^−9 | 4.62 × 10^−10 | 5.69 × 10^−1 | 5.46 × 10^−6 | 3.02 × 10^−11 | 2.64 × 10^−1 | 6.95 × 10^−1 |
| F100 | 1.85 × 10^−1 | 2.36 × 10^−1 | 6.44 × 10^−1 | 9.41 × 10^−10 | 1.37 × 10^−2 | 5.00 × 10^−1 | 2.51 × 10^−6 | 7.91 × 10^−9 | 9.31 × 10^−12 | 8.02 × 10^−1 | 6.47 × 10^−3 |
| F101 | 1.59 × 10^−3 | 3.62 × 10^−6 | 4.19 × 10^−4 | 5.38 × 10^−11 | 4.59 × 10^−5 | 6.66 × 10^−7 | 2.86 × 10^−3 | 3.80 × 10^−5 | 2.95 × 10^−11 | 4.29 × 10^−1 | 4.41 × 10^−2 |
Table 16. Overall effectiveness over five investigated benchmarks (The bolded and red value is the best overall).
| Benchmark | Index | LSO | SSA | GBO | RUN | WOA | GTO | AVOA | EO | GWO | RSA | SMA | DE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CEC2005 | Rank | 2.00 | 7.10 | 3.00 | 3.60 | 4.90 | 1.70 | 1.60 | 4.50 | 5.50 | 6.55 | 2.75 | 7.60 |
| | SD | 0.10 | 646.25 | 0.63 | 0.43 | 7101.71 | 0.14 | 0.18 | 0.43 | 0.59 | 0.29 | 0.24 | 5749.12 |
| | OE (%) | 85.00 | 25.00 | 50.00 | 50.00 | 40.00 | 70.00 | 85.00 | 30.00 | 25.00 | 45.00 | 65.00 | 20.00 |
| CEC2014 | Rank | 1.66 | 7.28 | 5.45 | 6.55 | 9.72 | 7.07 | 7.48 | 4.79 | 8.41 | 11.93 | 6.21 | 4.45 |
| | SD | 22.57 | 17,917.37 | 10,668.42 | 22,738.20 | 256,429.26 | 21,129.59 | 17,868.18 | 2636.00 | 9.75 × 10^6 | 6.93 × 10^7 | 2470.27 | 6338.28 |
| | OE (%) | 79.31 | 0.00 | 3.45 | 6.90 | 0.00 | 0.00 | 0.00 | 6.90 | 0.00 | 0.00 | 6.90 | 6.90 |
| CEC2017 | Rank | 1.11 | 6.43 | 5.57 | 7.39 | 11.21 | 5.46 | 8.50 | 4.43 | 8.11 | 13.00 | 6.68 | 3.32 |
| | SD | 32.47 | 52,413.76 | 40,719.95 | 14,459.54 | 446,168.54 | 103,439.53 | 26,051.38 | 17,788.92 | 4.44 × 10^6 | 1.47 × 10^8 | 17,155.85 | 10,876.18 |
| | OE (%) | 100.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 3.45 |
| CEC2020 | Rank | 1.78 | 6.44 | 4.44 | 6.11 | 11.11 | 5.67 | 8.67 | 3.78 | 8.33 | 12.22 | 7.22 | 4.44 |
| | SD | 38.24 | 1161.15 | 318.84 | 1012.58 | 264,287.33 | 335.90 | 1482.46 | 518.82 | 1.60 × 10^7 | 4.15 × 10^8 | 1501.58 | 267.78 |
| | OE (%) | 90.00 | 0.00 | 10.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| CEC2022 | Rank | 1.64 | 6.00 | 4.73 | 7.00 | 11.45 | 6.00 | 8.45 | 5.09 | 8.91 | 12.82 | 5.36 | 3.64 |
| | SD | 11.73 | 172.93 | 128.52 | 119.23 | 782.34 | 32.96 | 173.73 | 184.05 | 420.22 | 2.01 × 10^7 | 206.53 | 17.99 |
| | OE (%) | 75.00 | 0.00 | 0.00 | 0.00 | 0.00 | 8.33 | 0.00 | 0.00 | 0.00 | 0.00 | 16.67 | 16.67 |
| Average | Rank | 1.64 | 6.65 | 4.64 | 6.13 | 9.68 | 5.18 | 6.94 | 4.52 | 7.85 | 11.30 | 5.64 | 4.69 |
| | SD | 21.02 | 14,462.29 | 10,367.27 | 7666.00 | 194,953.84 | 24,987.62 | 9115.19 | 4225.64 | 6.04 × 10^6 | 1.30 × 10^8 | 4266.89 | 4649.87 |
| | OE (%) | 85.86 | 5.00 | 12.69 | 11.38 | 8.00 | 15.67 | 17.00 | 7.38 | 5.00 | 9.00 | 17.71 | 9.40 |
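The per-benchmark Rank rows in Table 16 aggregate the function-wise rankings reported in the preceding tables. As a minimal sketch of this aggregation, assuming each algorithm is ranked on every function by its mean result and those ranks are then averaged (the helper and the toy data are illustrative, not the paper's actual results pipeline):

```python
def average_ranks(means_per_function):
    """Average each algorithm's rank across a list of benchmark functions.

    means_per_function: one {algorithm: mean result} dict per function;
    a lower mean result earns a better (smaller) rank.
    """
    totals = {}
    for means in means_per_function:
        # Rank algorithms 1..k on this function by ascending mean result.
        for rank, alg in enumerate(sorted(means, key=means.get), start=1):
            totals[alg] = totals.get(alg, 0) + rank
    n = len(means_per_function)
    return {alg: total / n for alg, total in totals.items()}

# Two toy functions with three algorithms each.
avg = average_ranks([
    {"LSO": 100.0, "DE": 704.9, "SSA": 3040.0},
    {"LSO": 1347.0, "DE": 1604.2, "SSA": 1767.0},
])
```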
Table 17. Comparison of the results for tension/compression spring design optimization problem.
| No. | Algorithm | x1 | x2 | x3 | f_min(x) |
|---|---|---|---|---|---|
| 1 | LSO | 5.1689 × 10^−2 | 3.5671 × 10^−1 | 1.1290 × 10^1 | 1.2665233 × 10^−2 |
| 2 | SSA | 5.0000 × 10^−2 | 3.1737 × 10^−1 | 1.4051 × 10^1 | 1.2735631 × 10^−2 |
| 3 | GBO | 5.3274 × 10^−2 | 3.9605 × 10^−1 | 9.3074 × 10^0 | 1.2709931 × 10^−2 |
| 4 | RUN | 5.1960 × 10^−2 | 3.6328 × 10^−1 | 1.0914 × 10^1 | 1.2666568 × 10^−2 |
| 5 | WOA | 5.2797 × 10^−2 | 3.8396 × 10^−1 | 9.8542 × 10^0 | 1.2687243 × 10^−2 |
| 6 | GTO | 5.2437 × 10^−2 | 3.7499 × 10^−1 | 1.0293 × 10^1 | 1.2675330 × 10^−2 |
| 7 | AVOA | 5.1702 × 10^−2 | 3.5704 × 10^−1 | 1.1270 × 10^1 | 1.2665236 × 10^−2 |
| 8 | EO | 5.3212 × 10^−2 | 3.9449 × 10^−1 | 9.3755 × 10^0 | 1.2706550 × 10^−2 |
| 9 | GWO | 5.2770 × 10^−2 | 3.8328 × 10^−1 | 9.8863 × 10^0 | 1.2686160 × 10^−2 |
| 10 | RSA | 5.1387 × 10^−2 | 3.4946 × 10^−1 | 1.1734 × 10^1 | 1.2673204 × 10^−2 |
| 11 | SMA | 5.0000 × 10^−2 | 3.1050 × 10^−1 | 1.5000 × 10^1 | 1.3196460 × 10^−2 |
| 12 | DE | 5.2354 × 10^−2 | 3.7292 × 10^−1 | 1.0399 × 10^1 | 1.2673207 × 10^−2 |
| 13 | PO | 5.1778 × 10^−2 | 3.5884 × 10^−1 | 1.1166 × 10^1 | 1.26665924 × 10^−2 |
| 14 | SO | 5.1403 × 10^−2 | 3.4988 × 10^−1 | 1.1701 × 10^1 | 1.26667301 × 10^−2 |
| 15 | BWO | 5.4611 × 10^−2 | 4.2818 × 10^−1 | 9.0696 × 10^0 | 1.41360583 × 10^−2 |
| 16 | DTBO | 5.3059 × 10^−2 | 3.8966 × 10^−1 | 9.6233 × 10^0 | 1.27507490 × 10^−2 |
| 17 | CCAA | 5.2012 × 10^−2 | 3.6453 × 10^−1 | 1.0846 × 10^1 | 1.26674032 × 10^−2 |
| 18 | RO | 5.13700 × 10^−2 | 3.49096 × 10^−1 | 1.17628 × 10^1 | 1.2678800 × 10^−2 |
| 19 | ES | 5.1989 × 10^−2 | 3.6397 × 10^−1 | 1.08905 × 10^1 | 1.2681 × 10^−2 |
| 20 | GSA | 5.02760 × 10^−2 | 3.2368 × 10^−1 | 1.352541 × 10^1 | 1.27022 × 10^−2 |
| 21 | TS | N/A | N/A | N/A | 1.2935 × 10^−2 |
| 22 | Swarm strategy | 5.0417 × 10^−2 | 3.2153 × 10^−1 | 1.3980 × 10^−1 | 1.3060 × 10^−2 |
| 23 | UPSO | N/A | N/A | N/A | 1.31 × 10^−2 |
| 24 | CA | N/A | N/A | N/A | 1.2867 × 10^−2 |
| 25 | TANA-3 | 5.8400 × 10^−2 | 5.4170 × 10^−1 | 5.2745 × 10^0 | 1.3400 × 10^−2 |
| 26 | PSO | N/A | N/A | N/A | 1.2857 × 10^−2 |
| 27 | ACO | N/A | N/A | N/A | 1.3223 × 10^−2 |
| 28 | GA | 5.8231 × 10^−2 | 5.2106 × 10^−1 | 5.8845 × 10 | 1.3931 × 10^−2 |
| 29 | QEA | N/A | N/A | N/A | 1.2928 × 10^−2 |
| 30 | PC | 5.06 × 10^−2 | 3.28 × 10^−1 | 1.41 × 10^−1 | 1.35 × 10^−2 |
| 31 | SIGA | N/A | N/A | N/A | 1.3076 × 10^−2 |
| 32 | PSIGA | N/A | N/A | N/A | 1.2864 × 10^−2 |
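The reported minima in Table 17 can be cross-checked against the decision variables. In the standard formulation of the tension/compression spring design problem, the objective is the spring weight f(x) = (x3 + 2) x2 x1^2, with x1 the wire diameter, x2 the mean coil diameter, and x3 the number of active coils. A quick sketch of this check (the constraint functions are omitted; the helper name is illustrative):

```python
def spring_weight(x1, x2, x3):
    """Weight objective of the tension/compression spring design problem."""
    return (x3 + 2.0) * x2 * x1 ** 2

# Reported LSO solution from Table 17; the result should reproduce
# the tabulated minimum of about 1.2665 x 10^-2 up to rounding of x.
f = spring_weight(5.1689e-2, 3.5671e-1, 1.1290e1)
```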
Table 18. Comparison of the results for weld beam design problem.
| No. | Algorithm | x1 | x2 | x3 | x4 | f_min(x) |
|---|---|---|---|---|---|---|
| 1 | LSO | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248658 |
| 2 | SSA | 2.0548 × 10^−1 | 3.4757 × 10^0 | 9.0369 × 10^0 | 2.0573 × 10^−1 | 1.7252086 |
| 3 | GBO | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248658 |
| 4 | RUN | 2.0572 × 10^−1 | 3.4708 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248746 |
| 5 | WOA | 1.9721 × 10^−1 | 3.6965 × 10^0 | 9.0151 × 10^0 | 2.0671 × 10^−1 | 1.7454019 |
| 6 | GTO | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248658 |
| 7 | AVOA | 2.0567 × 10^−1 | 3.4718 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7249388 |
| 8 | EO | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248658 |
| 9 | GWO | 2.0557 × 10^−1 | 3.4741 × 10^0 | 9.0376 × 10^0 | 2.0573 × 10^−1 | 1.7253039 |
| 10 | RSA | 2.0615 × 10^−1 | 3.4694 × 10^0 | 9.0775 × 10^0 | 2.2698 × 10^−1 | 1.8945096 |
| 11 | SMA | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248689 |
| 12 | DE | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248658 |
| 13 | PO | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.72486608 |
| 14 | SO | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7248658 |
| 15 | BWO | 1.8639 × 10^−1 | 3.9876 × 10^0 | 8.9766 × 10^0 | 2.1300 × 10^−1 | 1.8076732 |
| 16 | DTBO | 2.0571 × 10^−1 | 3.4709 × 10^0 | 9.0368 × 10^0 | 2.0574 × 10^−1 | 1.7249588 |
| 17 | CCAA | 2.0572 × 10^−1 | 3.4707 × 10^0 | 9.0367 × 10^0 | 2.0573 × 10^−1 | 1.72487987 |
| 18 | RO | 2.03687 × 10^−1 | 3.52847 × 10^0 | 9.00423 × 10^0 | 2.07241 × 10^−1 | 1.735344 |
| 19 | RO | 2.03687 × 10^−1 | 3.52847 × 10^0 | 9.00423 × 10^0 | 2.072410 × 10^−1 | 1.735344 × 10^0 |
| 20 | WOA | 2.05396 × 10^−1 | 3.48429 × 10^0 | 9.03743 × 10^0 | 2.06276 × 10^−1 | 1.73050 × 10^0 |
| 21 | HS | 2.4420 × 10^−1 | 6.2231 × 10^0 | 8.29150 × 10^0 | 2.4430 × 10^−1 | 2.38070 × 10^0 |
| 22 | CSS&PSO I | 2.0639 × 10^−1 | 3.4236 × 10^0 | 9.1241 × 10^0 | 2.0531 × 10^−1 | 1.7314 × 10^0 |
| 23 | CSS&PSO II | 2.0546 × 10^−1 | 3.4800 × 10^0 | 9.05401 × 10^0 | 2.0578 × 10^−1 | 1.72910 × 10^0 |
| 24 | PSOStr | 2.0150 × 10^−1 | 3.5620 × 10^0 | 9.0414 × 10^0 | 2.0571 × 10^−1 | 1.73118 × 10^0 |
| 25 | FA | 2.015 × 10^−1 | 3.5620 × 10^0 | 9.0414 × 10^0 | 2.0570 × 10^−1 | 1.73121 × 10^0 |
| 26 | DE | 2.444 × 10^−1 | 6.2175 × 10^0 | 8.2915 × 10^0 | 2.4440 × 10^−1 | 2.3810 × 10^0 |
| 27 | AIS-GA | 2.444 × 10^−1 | 6.2183 × 10^0 | 8.2912 × 10^0 | 2.444 × 10^−1 | 2.3812 × 10^0 |
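The minima in Table 18 can likewise be reproduced from the decision variables. In the standard formulation of the welded beam design problem, the objective is the fabrication cost f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2), where x1 is the weld thickness, x2 the weld length, x3 the bar height, and x4 the bar thickness. A quick sketch of this check (constraints omitted; the helper name is illustrative):

```python
def beam_cost(x1, x2, x3, x4):
    """Fabrication-cost objective of the welded beam design problem."""
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

# Reported LSO solution from Table 18; the result should reproduce
# the tabulated minimum of about 1.7248658 up to rounding of x.
f = beam_cost(2.0572e-1, 3.4707e0, 9.0366e0, 2.0573e-1)
```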
Table 19. Comparison of the results for pressure vessel design optimization problem.
| No. | Algorithm | x1 | x2 | x3 | x4 | f_min(x) |
|---|---|---|---|---|---|---|
| 1 | LSO | 7.7818 × 10^−1 | 3.8466 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.43417456 |
| 2 | SSA | 7.8272 × 10^−1 | 3.8690 × 10^−1 | 4.0555 × 10^1 | 1.9675 × 10^2 | 5893.25120195 |
| 3 | GBO | 7.7818 × 10^−1 | 3.8466 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.44199882 |
| 4 | RUN | 7.7824 × 10^−1 | 3.8471 × 10^−1 | 4.0323 × 10^1 | 1.9995 × 10^2 | 5885.60517789 |
| 5 | WOA | 1.0863 × 10^0 | 5.5859 × 10^−1 | 5.6156 × 10^1 | 5.5940 × 10^1 | 6779.76692510 |
| 6 | GTO | 7.7818 × 10^−1 | 3.8466 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.43417456 |
| 7 | AVOA | 7.7820 × 10^−1 | 3.8467 × 10^−1 | 4.0321 × 10^1 | 1.9998 × 10^2 | 5885.47556956 |
| 8 | EO | 8.2691 × 10^−1 | 4.0875 × 10^−1 | 4.2845 × 10^1 | 1.6760 × 10^2 | 5974.05971856 |
| 9 | GWO | 7.7854 × 10^−1 | 3.8509 × 10^−1 | 4.0332 × 10^1 | 1.9986 × 10^2 | 5888.33742107 |
| 10 | RSA | 1.2411 × 10^0 | 9.5886 × 10^−1 | 4.5644 × 10^1 | 1.3730 × 10^2 | 10,457.57367139 |
| 11 | SMA | 7.7818 × 10^−1 | 3.8466 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.43624519 |
| 12 | DE | 7.7818 × 10^−1 | 3.8466 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.43417456 |
| 13 | PO | 7.7818 × 10^−1 | 3.8466 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.43636261 |
| 14 | SO | 7.7818 × 10^−1 | 3.8470 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5885.54986448 |
| 15 | BWO | 7.9249 × 10^−1 | 3.9708 × 10^−1 | 4.0982 × 10^1 | 1.9550 × 10^2 | 6036.98073005 |
| 16 | DTBO | 8.7638 × 10^−1 | 4.5781 × 10^−1 | 4.5407 × 10^1 | 1.3954 × 10^2 | 6165.77458390 |
| 17 | CCAA | 7.7837 × 10^−1 | 3.8488 × 10^−1 | 4.0327 × 10^1 | 1.9989 × 10^2 | 5886.45565028 |
| 18 | HHO | 0.817583 | 0.407294 | 42.0917 | 176.7196 | 6000.46259 |
| 19 | GWO | 0.8125 | 0.4345 | 42.089181 | 176.758731 | 6051.5639 |
| 20 | HPSO | 0.8125 | 0.437500 | 42.0984 | 176.6366 | 6059.7143 |
| 21 | G-QPSO | 0.8125 | 0.437500 | 42.0984 | 176.6372 | 6059.7208 |
| 22 | WEO | 0.8125 | 0.437500 | 42.0984 | 176.6366 | 6059.71 |
| 23 | BA | 0.8125 | 0.437500 | 42.098445 | 176.63659 | 6059.7143 |
| 24 | MFO | 0.8125 | 0.4375 | 42.098445 | 176.636596 | 6059.7143 |
| 25 | CSS | 0.8125 | 0.4375 | 42.103624 | 176.572656 | 6059.0888 |
| 26 | ESs | 0.8125 | 0.4375 | 42.098087 | 176.640518 | 6059.7456 |
| 27 | BIANCA | 0.8125 | 0.4375 | 42.0968 | 176.658 | 6059.9384 |
| 28 | MDDE | 0.8125 | 0.4375 | 42.098446 | 176.636047 | 6059.70166 |
| 29 | DELC | 0.8125 | 0.4375 | 42.0984456 | 176.6365958 | 6059.7143 |
| 30 | WOA | 0.8125 | 0.4375 | 42.0982699 | 176.638998 | 6059.7410 |
| 31 | NPGA | 0.8125 | 0.4375 | 42.0974 | 176.654 | 6059.9463 |
| 32 | Lagrangian multiplier | 1.125 | 0.625 | 58.291 | 43.69 | 7198.0428 |
| 33 | Branch-bound | 1.125 | 0.625 | 47.7 | 117.701 | 8129.1036 |
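The costs in Table 19 follow from the decision variables via the standard pressure vessel objective f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3, where x1 and x2 are the shell and head thicknesses and x3 and x4 are the inner radius and cylindrical length. A quick sketch of this check (constraints omitted; the helper name is illustrative):

```python
def vessel_cost(x1, x2, x3, x4):
    """Material, forming, and welding cost of the pressure vessel design problem."""
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4
            + 19.84 * x1 ** 2 * x3)

# Reported LSO solution from Table 19 (x values rounded to five significant
# digits, so the recomputed cost matches the tabulated 5885.434 only
# approximately).
f = vessel_cost(7.7818e-1, 3.8466e-1, 4.0320e1, 2.0000e2)
```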
Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light Spectrum Optimizer: A Novel Physics-Inspired Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 3466. https://doi.org/10.3390/math10193466
