1. Introduction
1.1. Motivation
The goal of optimization is to find the best acceptable answer, given the limitations and needs of the problem [
1]. For a problem, there may be different solutions, and to compare them and select the optimal solution, a function called the objective function is defined. The choice of this function depends on the nature of the problem. However, choosing the suitable objective function is one of the most important optimization steps [
2]. An optimization problem can be defined from a mathematical point of view using the three main parts of variables, objective functions, and constraints [
3]. Once the optimization problem is mathematically modelled, it must be optimized using the appropriate method.
Optimization algorithms are among the methods for solving optimization problems; they are able to provide suitable solutions based on a random scan of the search space, without the need for gradient information. In recent years, many optimization algorithms have been designed by scientists to solve optimization problems. These algorithms are based on simulations of various natural phenomena, the laws of physics, the biological sciences, and the behavior of animals, insects, and other living things in nature. The main question that arises in the study of optimization algorithms is whether, given the algorithms already introduced in recent years, there is still a need to develop and design new ones. The answer to this question lies in the No Free Lunch (NFL) theorem [4]. According to the NFL theorem, strong performance of an optimization algorithm on several optimization problems does not mean that it will be able to solve all optimization problems. In fact, an optimization algorithm may be the best optimizer for one optimization problem yet fail on another, because each optimization problem has its own complexity and nature. The NFL theorem prompts researchers to design new optimization algorithms that can solve optimization problems in different domains and applications. The NFL theorem also motivated the authors of this paper to design and introduce a new optimization algorithm for solving optimization problems.
1.2. Research Gap
Although many optimization algorithms have been introduced to solve optimization problems, achieving suitable quasi-optimal solutions closer to the global optimal is still a major challenge in solving various optimization problems. The key issue in the study of optimization algorithms is that, because optimization algorithms are stochastic methods, they cannot guarantee that the solutions they provide are the global optimal. Therefore, it is always possible to develop new optimization algorithms that are able to provide quasi-optimal solutions closer to the global optimal. In order to solve optimization problems in different sciences more accurately and effectively, it is necessary to design newer algorithms with a higher ability to achieve quasi-optimal solutions closer to the global optimal solutions. Therefore, in this study, an attempt has been made to design a new optimization algorithm that can provide more suitable solutions for optimization problems.
One of the disadvantages of most optimization algorithms is that the process of updating their population depends too much on the best member of the population. This can lead to premature convergence of the algorithm or entrapment in local optimal areas. The reason for the lack of optimal convergence in these methods is that over-reliance on the best member does not allow members of the population to properly scan the search space in different directions. The advantage of the proposed SLOA is that it does not rely on the best member of the population, which increases the search power of the algorithm. In SLOA, the algorithm population is updated in four different phases, none of which relies on the best member of the population. In the first phase, simulating the zig-zag motion of snow leopards is very effective in accurately scanning the search space and escaping local optimal solutions. In the second phase, the behavior of snow leopards during hunting, approaching the prey at two different speeds, increases the exploitation power and the convergence of the algorithm towards the solution. In the third phase, the reproduction process creates new solutions in different areas of the search space, which increases the exploration power of the algorithm. In the fourth phase, mortality gives the algorithm the advantage of helping it evolve by eliminating weak population members and preventing the search of non-optimal areas.
1.3. Contribution
The innovation and contribution of this paper lie in designing a new optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) in order to solve optimization problems in different sciences more effectively. The novelty of the proposed method is to simulate the behavior and social life of snow leopards, with a focus on travel routes, hunting, reproduction, and mortality. In the algorithms introduced so far, researchers have not used the social life of snow leopards as the basis of an optimization process. The contributions of this study are as follows:
- (i)
The main idea in designing SLOA is simulation of the different behaviors of snow leopards that are inspired by nature. In SLOA, the natural behaviors of snow leopards are modeled in four phases including travel routes, hunting, reproduction, and mortality.
- (ii)
The proposed SLOA is mathematically modeled and simulated for use in solving and optimizing various problems.
- (iii)
Twenty-three sets of standard objective functions of different types are employed to evaluate the power of the proposed SLOA in providing quasi-optimal and effective solutions.
- (iv)
In order to further analyze the SLOA and evaluate the quality of the obtained optimization results, the performance of the SLOA is compared with eight well-known optimization algorithms.
Optimization algorithms are applied in all disciplines of science and to real-world problems wherever an optimization problem is defined and designed. The proposed SLOA can be applied to minimize or maximize various objective functions. SLOA can be applied in optimal design and in the engineering sciences, where decision variables must be well selected to optimize device performance. In data mining, medical science, clustering, and in general any application that involves optimization, the proposed SLOA can be applied.
1.4. Paper Organization
The rest of the paper is organized as follows: the literature review is presented in
Section 2. The proposed SLOA is introduced in
Section 3. Simulation studies on the performance of the SLOA are presented in
Section 4. The performance of the SLOA from the perspective of exploration and exploitation is analyzed and discussed in
Section 5. Finally, conclusions and several suggestions for future studies are presented in
Section 6.
2. Literature Review
Optimization problem solving methods include two categories of deterministic methods and stochastic methods [
5].
The use of basic optimization methods, including linear programming, integer programming, dynamic programming, and nonlinear programming, is associated with disadvantages. The most important disadvantage of these methods is that they are time-consuming when solving big problems; even with today's advanced computing technologies, solving a large-scale problem with the mentioned techniques can take several years [6]. These difficulties have forced researchers to adjust their expectations of finding the best possible answer and to be satisfied with good-enough answers, so that even for large-scale problems, appropriate solutions can be reached in a reasonable time [
7].
The weakness of methods such as gradients and numerical calculations in solving optimization problems has led to the creation of a special type of intelligent search algorithms called population-based optimization algorithms. Population-based optimization algorithms are a kind of stochastic methods that are inspired by nature and its mechanisms [
8]. These methods try to move their initial population toward the global optimal and provide appropriate solutions close to the global optimal in a reasonable time [
9].
The basis of the performance of optimization algorithms is that they first produce a number of solutions, which form the population of the algorithm; then, in an iterative process and without the use of derivative information, these initial solutions are improved based only on simulating the collective intelligence of the algorithm members [
10].
Every optimization problem has a precise and basic solution called global optimal [
11]. Given that optimization algorithms are stochastic methods and improve the initial proposed solutions during successive iterations, it is possible that the optimization algorithms will not be able to provide exactly the global optimal. For this reason, the solutions obtained using optimization algorithms for an optimization problem are called quasi-optimal [
12]. In evaluating several quasi-optimal solutions to an optimization problem, the most appropriate solution is the one closest to the global optimal. The main reason for the design of numerous optimization algorithms by researchers is to provide quasi-optimal solutions that are more appropriate and closer to the global optimal solution. In this regard, optimization algorithms have been applied to solve optimization problems in different branches of science, such as microwave engineering for the design of rectangular microstrip antennas [
13], electromagnetic problems [
14], Proportional-Integral-Derivative (PID) control [
15], the Flexible Job-Shop Scheduling Problem (FJSP) [
16], force analysis and optimization of kinematic parameters [
17], simultaneous optimization of distributed generation [
18], optimization of complex system reliability [
19], design of model-based fuzzy controllers for networked control systems [
20], design of the stable robot controller [
21], and the Maki-Thompson rumor model [
22].
Optimization algorithms have developed over the years based on various natural, physical, game, genetic, and any theory or process that has an evolving nature.
Genetic Algorithm (GA) is one of the most famous, oldest, and most widely used optimization methods. It is inspired by the science of genetics and Darwin's theory of evolution and is based on survival of the fittest, or natural selection. GA begins with the production of a population of chromosomes (the initial population of chromosomes in GA is randomly generated within the upper and lower bounds of the problem variables). In the next step, the generated data structures (chromosomes) are evaluated. Chromosomes that provide better values of the objective function of the problem have a higher chance of being selected as parents for reproduction than weaker solutions [23]. Although GA has simple concepts and can be easily implemented, its several control parameters and its time-consuming nature are its most important disadvantages.
Particle Swarm Optimization (PSO) is another popular and widely used algorithm that is inspired by the collective behavior of birds or fish in nature. In PSO, a group of birds or fish are looking for food in an environment where there is only one piece of food. None of the birds know the location of the food and only know the distance to the food. One of the best strategies is to follow a bird that is closer to the food. In other words, every bird or fish, in addition to its own experience, also trusts the bird or fish that is closest to the food [
24]. The main disadvantages of PSO are that it easily falls into local optima in high-dimensional problems and has a low convergence rate in the iterative process.
Gravitational Search Algorithm (GSA) is a physics-based algorithm that is introduced based on simulation of gravitational force and Newton’s laws of motion. According to the theory of gravity, objects that are at different distances from each other exert a gravitational force on each other. In GSA, the mass of objects is determined based on the values of the objective function. Objects that are in a better position in the search space pull other objects towards themselves based on simulations of gravity force and the laws of motion [
25]. Slow convergence, a tendency to become trapped in local optima, and its control parameters are the main disadvantages of GSA.
Teaching-Learning Based Optimization (TLBO) is a population-based optimization method that is designed based on simulation of behaviors of the teacher and students in a classroom. In TLBO, the best member of the population is considered the teacher and the rest of the population is considered the students of the class. TLBO has two phases called teaching phase and learning phase. In the teaching phase, the teacher teaches her/his knowledge to the students and in the learning phase, the students share their information with each other [
26]. The main disadvantages of TLBO are that it consumes a lot of memory space and involves many iterations, so TLBO is a time-consuming algorithm.
Gray Wolf Optimizer (GWO) is a nature-based optimizer that is introduced based on simulation of behaviors of grey wolves in nature. GWO mimics the hierarchical leadership and hunting mechanism of the gray wolves. Four types of gray wolves named alpha, beta, delta, and omega are used to simulate hierarchical leadership. In GWO, alpha is the best member of the population, beta and delta are the second and third best members of the population, and the rest of the wolves are omega. In addition, three stages including search for prey, encircling prey, and attacking prey are used to simulate hunting mechanism [
27]. Low solving accuracy, a slow convergence rate, and poor local search ability are several disadvantages of GWO.
Grasshopper Optimization Algorithm (GOA) is a swarm-based algorithm that is introduced based on the simulation of behavior of grasshopper swarms in nature. In GOA, the group movement of grasshoppers towards food sources is imitated and simulated. A mathematical model is proposed for simulation of attraction and repulsion forces between the grasshoppers. Attraction forces encouraged grasshoppers to exploit promising regions, whereas repulsion forces let them explore the search space [
28]. The most important disadvantages of the GOA are its slow convergence speed, the fact that it is time-consuming, and its control parameters.
Marine Predators Algorithm (MPA) is a nature-based algorithm that is inspired by the movement strategies that marine predators use when trapping their prey in the oceans. The predominant search behavior and strategy of marine predators for hunting is modeled using the Levy flight method. MPA has three phases due to the different speeds of the predators and the prey: Phase 1: When the prey moves faster than the predator, Phase 2: When the prey and the predator move at almost the same speed, and Phase 3: When the predator is moving faster than the prey [
29]. One of the main disadvantages of MPA is that it requires a huge number of iterations, especially for nonlinear optimization problems.
Tunicate Swarm Algorithm (TSA) is a bio-inspired algorithm that is introduced based on imitation of swarm behaviors of tunicates and jet propulsion during the navigation and foraging process. In TSA, two behaviors of tunicates including jet propulsion and swarm intelligence are employed for finding the food sources. In order to model the jet propulsion behavior, tunicates must comply with three conditions including remaining close to the best search agent, moving towards the position of best search agent, and avoiding the conflicts between search agents. In order to model the swarm intelligence behavior, positions of search agents should be updated based on the best optimal solution [
30]. Poor convergence in solving high-dimensional multimodal problems, having control parameters, and complex calculations are the main disadvantages of the TSA.
3. Snow Leopard Optimization Algorithm (SLOA)
In this section, the snow leopard is introduced first. Then, based on simulating the habits and natural behaviors of snow leopards, a new optimization algorithm called Snow Leopard Optimization Algorithm (SLOA) is developed. Mathematical modeling and formulation of the proposed SLOA for implementation in solving optimization problems is presented.
3.1. Snow Leopard
The snow leopard (Panthera uncia) is a species of the genus Panthera native to the high mountains of South and Central Asia. Snow leopards live in mountainous and alpine areas at altitudes of 3000 to 4500 m, from eastern Afghanistan, the Himalayas, and the Tibetan Plateau to southern Siberia, Mongolia, and western China [
31].
The fur of snow leopards is whitish to gray with black spots on neck and head, with larger rosettes on the back, flanks, and bushy tail. The snow leopard’s belly is whitish. Its eyes are grey or pale green. Its nasal cavities are large. Its forehead is domed and its muzzle is short. The fur is thick with hairs between 5 and 12 cm long. Its body is stocky, short-legged, and slightly smaller than the other cats of the genus Panthera, reaching a shoulder height of 56 cm, and ranging in head to body size from 75 to 150 cm. Its tail is 80 to 105 cm long [
32]. It weighs between 22 and 55 kg, with an occasional large male reaching 75 kg, and small female of under 25 kg. Its canine teeth are 28.6 mm long and are more slender than those of the other Panthera species [
33].
Snow leopards have different behaviors and habits, including their travel routes and how they move towards each other, how they hunt, how they reproduce, and their mortality. The modeling of these natural behaviors has been used in the design of the proposed SLOA. In this design, modeling of four optimal natural behaviors in the life of snow leopards is used.
The first behavior is travel routes and movement. Modeling the zig-zag pattern movements of snow leopards as they move and follow each other leads to a more efficient search of the search space and to passing through local optimal areas.
The second behavior is how to hunt. Modeling the movements of snow leopards in order to hunt prey leads to the convergence of the optimization algorithm towards the optimal areas.
The third behavior is reproduction. The reproduction of snow leopards can be modeled as a combination of two members of the population, which leads to the production of a new member that may improve the performance of the algorithm in achieving optimal areas.
The fourth behavior is mortality. Modeling the mortality of weak snow leopards leads to the elimination of solutions and inappropriate members of the algorithm. This will remove members in inappropriate areas from the search space. In addition, modeling this behavior leads to the algorithm population remaining constant during the algorithm iterations.
3.2. Mathematical Modeling
In the proposed SLOA, each snow leopard is a member of the algorithm population. A certain number of snow leopards as search agents are members of the SLOA. In population-based optimization algorithms, population members are identified using a matrix called the population matrix. The number of rows in the population matrix is equal to the number of members in the population, and the number of columns in this matrix is equal to the number of variables in the optimization problem. The population matrix is specified as a matrix representation using Equation (1).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m} \qquad (1)$$
where $X$ is the population of snow leopards, $X_i$ is the $i$th snow leopard, $x_{i,d}$ is the value of the $d$th problem variable suggested by the $i$th snow leopard, $N$ is the number of snow leopards in the algorithm population, and $m$ is the number of problem variables.
The position of each snow leopard as a member of the population in the problem-solving space determines the values for the problem variables. Therefore, for each snow leopard, a value can be calculated for the objective function of the problem. The values of the objective function are specified by a vector using Equation (2).
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} \qquad (2)$$
here, $F$ is the vector of objective function values and $F_i$ is the value of the objective function of the problem obtained based on the $i$th snow leopard.
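As a concrete illustration, the following minimal sketch (in Python with NumPy) shows how such a population matrix and its objective-function vector can be generated; the objective function `sphere` and the bounds `lower`/`upper` are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def sphere(x):
    """Illustrative objective function (not prescribed by the paper)."""
    return np.sum(x ** 2)

N, m = 30, 5                    # number of snow leopards and number of problem variables
lower, upper = -100.0, 100.0    # illustrative variable bounds

rng = np.random.default_rng(0)

# Population matrix X: each row is one snow leopard (Equation (1))
X = lower + rng.random((N, m)) * (upper - lower)

# Objective-function vector F: one value per snow leopard (Equation (2))
F = np.array([sphere(X[i]) for i in range(N)])

print(X.shape, F.shape)  # (30, 5) (30,)
```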
Members of the population are updated in the proposed SLOA based on simulating the natural behaviors of snow leopards in four phases: travel routes and movement, hunting, reproduction, and mortality. The mathematical modeling of these four phases and the mentioned natural behaviors is presented in the following subsections.
3.2.1. Phase 1: Travel Routes and Movement
Snow leopards, like other cats, use scent signs to show their locations and travel routes. These signs are usually caused by scraping the ground with the hind feet before depositing urine or scat [
34]. Snow leopards also move in a zig-zag pattern in indirect lines [
35]. So, snow leopards can move towards or follow each other based on this natural behavior.
This phase of the proposed SLOA is mathematically modeled using Equations (3)–(5).
here, $x_{i,d}^{P1}$ is the new value of the $d$th problem variable obtained by the $i$th snow leopard based on phase 1, $r$ is a random number in the interval $[0, 1]$, $k$ is the row number of the snow leopard selected to guide the $i$th snow leopard in the $d$th axis, $X_i^{P1}$ is the updated location of the $i$th snow leopard based on phase 1, and $F_i^{P1}$ is its objective function value.
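Since the display equations are not reproduced above, the following Python sketch shows only one plausible reading of this phase, stated as an assumption rather than the paper's exact formulation: each coordinate of the $i$th snow leopard is pulled toward the corresponding coordinate of a randomly selected guiding leopard, and the move is kept only if it improves the objective value. The function and variable names (`phase1_travel`, `sphere`, `rng`) are illustrative.

```python
import numpy as np

def sphere(x):
    """Illustrative objective function (not prescribed by the paper)."""
    return np.sum(x ** 2)

def phase1_travel(X, F, objective, rng):
    """Hedged sketch of phase 1 (travel routes and movement).

    Assumption: Equation (3) is read as moving the d-th coordinate of the
    i-th snow leopard toward a randomly chosen guiding leopard k, and the
    new position is kept only if it improves the objective value.
    """
    N, m = X.shape
    for i in range(N):
        x_new = X[i].copy()
        for d in range(m):
            k = rng.integers(N)          # row number of the guiding snow leopard
            r = rng.random()             # random number in [0, 1]
            x_new[d] = X[i, d] + r * (X[k, d] - X[i, d])
        f_new = objective(x_new)
        if f_new < F[i]:                 # accept only improving moves (assumed)
            X[i], F[i] = x_new, f_new
    return X, F

# illustrative usage
rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(30, 5))
F = np.array([sphere(x) for x in X])
X, F = phase1_travel(X, F, sphere, rng)
```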
3.2.2. Phase 2: Hunting
In the second phase of updating the members of the population, the behavior of snow leopards during hunting and attacking prey is used. The hunting process of a snow leopard, based on an observation recorded in Hemis National Park, is that the snow leopard uses rocky cliffs for cover when approaching its prey. In the recorded observation, after reaching a distance of 40 m from the prey, the snow leopard first walked slowly for the first 15 m, then ran the last 25 m, and finally killed the prey by biting its neck [
36].
The natural behavior of snow leopards during hunting is mathematically modeled using Equations (6)–(8). Equation (6) specifies the location of the prey for the $i$th snow leopard. Equation (7) simulates how a snow leopard moves toward its prey. According to the recorded observation, a snow leopard walks about 37.5% of the distance to the prey and then runs the remaining 62.5% of the distance. Therefore, a parameter called $P$ is used in Equation (7) to simulate this type of motion. The parameter $P$ represents the fraction of the distance to the prey that the snow leopard walks; in the simulations of the proposed SLOA, its value is set to 0.375 based on the observation. In Equation (8), the new position of the snow leopard after the attack on the prey is simulated. For this purpose, an effective update is used, in which the new position is accepted for an algorithm member only if the value of the objective function at the new position is better than at the previous position.
here, $p_{i,d}$ is the $d$th dimension of the location of the prey considered for the $i$th snow leopard, $F_i^{prey}$ is the objective function value based on the location of the prey, $x_{i,d}^{P2}$ is the new value of the $d$th problem variable obtained by the $i$th snow leopard based on phase 2, and $F_i^{P2}$ is its objective function value.
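As with phase 1, the equations themselves are not reproduced above, so the sketch below is only one plausible reading: the prey of the $i$th snow leopard (Equation (6)) is assumed to be a randomly selected population member with a better objective value, Equation (7) is read as covering a random share of the walking part ($P$ of the distance) plus a random share of the running part ($1-P$), and the greedy acceptance of Equation (8) follows the description in the text. All names are illustrative.

```python
import numpy as np

def sphere(x):
    """Illustrative objective function (not prescribed by the paper)."""
    return np.sum(x ** 2)

def phase2_hunting(X, F, objective, rng, p=0.375):
    """Hedged sketch of phase 2 (hunting); p is the walking fraction P."""
    N, m = X.shape
    for i in range(N):
        better = np.flatnonzero(F < F[i])   # candidate prey: members with a better objective (assumption)
        if better.size == 0:
            continue                        # the current best member has no prey to chase
        prey = X[rng.choice(better)]
        r1, r2 = rng.random(m), rng.random(m)
        # walk a share of p of the distance, then run a share of (1 - p) of it (assumed reading of Eq. (7))
        x_new = X[i] + r1 * p * (prey - X[i]) + r2 * (1 - p) * (prey - X[i])
        f_new = objective(x_new)
        if f_new < F[i]:                    # effective (greedy) update, Equation (8)
            X[i], F[i] = x_new, f_new
    return X, F

# illustrative usage
rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(30, 5))
F = np.array([sphere(x) for x in X])
X, F = phase2_hunting(X, F, sphere, rng)
```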
3.2.3. Phase 3: Reproduction
In this phase, based on the natural reproductive behavior of snow leopards, new members equal to half the total population are added to the population of the algorithm. In fact, it is assumed that a cub is born from the mating of each pair of snow leopards. The reproduction process of snow leopards is mathematically modeled based on the mentioned concepts using Equation (9).
here, $C_l$ is the $l$th cub, which is born from the mating of two snow leopards.
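Equation (9) is not reproduced above, so the following sketch is only one plausible reading of the reproduction phase: $0.5 \times N$ cubs are generated, each as a random per-dimension blend of two randomly paired parents. The combination rule is an assumption and the names are illustrative.

```python
import numpy as np

def phase3_reproduction(X, rng):
    """Hedged sketch of phase 3 (reproduction): one cub per mating pair,
    0.5 * N cubs in total.

    Assumption: Equation (9) is read as a random per-dimension convex
    combination of the two parents.
    """
    N, m = X.shape
    cubs = np.empty((N // 2, m))
    for l in range(N // 2):
        p1, p2 = rng.choice(N, size=2, replace=False)   # mating pair of snow leopards
        alpha = rng.random(m)                            # per-dimension mixing weights
        cubs[l] = alpha * X[p1] + (1 - alpha) * X[p2]
    return cubs

# illustrative usage
rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(30, 5))
cubs = phase3_reproduction(X, rng)   # shape (15, 5)
```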
3.2.4. Phase 4: Mortality
Living things are always in danger of dying. Although reproduction increases the population of snow leopards, the number of snow leopards remains constant during the iterations of the algorithm due to mortality and losses. In the proposed SLOA, it is assumed that, in each iteration after reproduction, exactly as many snow leopards die as cubs were born. The criterion for snow leopard mortality in the SLOA is the value of the objective function; therefore, snow leopards with a weaker objective function value are more prone to death. Also, some newly born cubs may die due to having a poor objective function value.
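A minimal sketch of this phase, following the description above and step 26 of Algorithm 1: the adults and the newly born cubs are pooled, and the population is cut back to $N$ members by discarding those with the weakest objective values (so some cubs may also die). Minimization is assumed and the names are illustrative.

```python
import numpy as np

def sphere(x):
    """Illustrative objective function (not prescribed by the paper)."""
    return np.sum(x ** 2)

def phase4_mortality(X, F, cubs, objective, N):
    """Hedged sketch of phase 4 (mortality): keep the N best members
    of the pooled population of adults and cubs (minimization assumed)."""
    F_cubs = np.array([objective(c) for c in cubs])
    X_all = np.vstack([X, cubs])
    F_all = np.concatenate([F, F_cubs])
    keep = np.argsort(F_all)[:N]   # indices of the N members with the best objective values
    return X_all[keep], F_all[keep]

# illustrative usage
rng = np.random.default_rng(0)
N = 30
X = rng.uniform(-100, 100, size=(N, 5))
F = np.array([sphere(x) for x in X])
cubs = rng.uniform(-100, 100, size=(N // 2, 5))
X, F = phase4_mortality(X, F, cubs, sphere, N)   # population size is back to N
```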
3.3. Flowchart of SLOA
In the proposed SLOA, the snow leopards are updated in each iteration according to the first and second phases; then, according to the third and fourth phases, the population of the algorithm undergoes the natural processes of reproduction and mortality.
These steps of the SLOA are repeated until the stop condition is reached. After fully implementing the SLOA on an optimization problem, the SLOA makes available the best obtained solution as the best quasi-optimal solution. The various stages of implementation of the SLOA are specified as flowcharts in
Figure 1 and its pseudocode is presented in Algorithm 1.
Algorithm 1 Pseudocode of SLOA
Start SLOA.
1. Input problem information: variables, objective function, and constraints.
2. Set the number of snow leopards (N) and the number of iterations (T).
3. Generate an initial population matrix at random.
4. Evaluate the objective function.
5. For t = 1:T
6.    Phase 1: travel routes and movement
7.    For i = 1:N
8.       For d = 1:m
9.          Calculate the new value of the dth variable for the ith snow leopard using Equations (3) and (5).
10.      End
11.      Update the position of the ith snow leopard using Equation (4).
12.   End
13.   Phase 2: hunting
14.   For i = 1:N
15.      For d = 1:m
16.         Calculate the location of the prey using Equation (6).
17.         Calculate the new value of the dth variable for the ith snow leopard using Equation (7).
18.      End
19.      Update the position of the ith snow leopard using Equation (8).
20.   End
21.   Phase 3: reproduction
22.   For l = 1:0.5 × N
23.      Generate a cub using Equation (9).
24.   End
25.   Phase 4: mortality
26.   Adjust the number of snow leopards to N due to mortality, based on the criterion of the objective function.
27.   Save the best quasi-optimal solution obtained with the SLOA so far.
28. End For t = 1:T
29. Output the best quasi-optimal solution obtained with the SLOA.
End SLOA.
4. Simulation Studies and Results
In this section, the performance of the proposed SLOA in optimization and providing effective solutions to optimization problems are studied. For this purpose, a standard set of twenty-three objective functions of three different types of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal has been used. The complete information of these objective functions is specified in
Appendix A and in
Table A1,
Table A2 and
Table A3. The optimization results obtained using the SLOA are compared with the performance of eight optimization algorithms including Genetic Algorithm (GA) [
23], Particle Swarm Optimization (PSO) [
24], Gravitational Search Algorithm (GSA) [
25], Teaching-Learning Based Optimization (TLBO) [
26], Gray Wolf Optimizer (GWO) [
27], Grasshopper Optimization Algorithm (GOA) [
28], Marine Predators Algorithm (MPA) [
29], and Tunicate Swarm Algorithm (TSA) [
30]. The proposed SLOA, as well as the eight compared algorithms, are executed in twenty independent runs on the optimization of the twenty-three objective functions F1 to F23. The most important criterion for determining the superiority of optimization algorithms is the value of the objective function. Although convergence speed is an important criterion in the performance of optimization algorithms, the main purpose of optimizing an optimization problem is to provide a suitable quasi-optimal solution. An algorithm may have a high convergence speed but converge to local or unsuitable solutions. In fact, in optimizing an objective function, the superior algorithm is the one that can offer a better solution, that is, a more optimal objective function value. For this reason, the results of the implementation and performance of the optimization algorithms on the set of the mentioned objective functions are reported using two indicators: the average of the best solutions obtained for the objective function over the twenty independent runs (ave) and the standard deviation of these best obtained solutions (std). These two indicators can be calculated using Equations (10) and (11). The values used for the main controlling parameters of the comparative optimization algorithms are specified in
Table 1.
where $BS_i$ is the best solution obtained in the $i$th independent run.
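For clarity, the two indicators can be computed as in the short sketch below. The run results shown are illustrative placeholders, not values from the paper, and whether Equation (11) uses the population or the sample standard deviation is not visible here, so the population form is an assumption.

```python
import numpy as np

def summarize_runs(best_solutions):
    """Average (ave) and standard deviation (std) of the best objective
    values obtained over the independent runs, as in Equations (10) and (11).

    Assumption: the population standard deviation (ddof=0) is used.
    """
    best_solutions = np.asarray(best_solutions, dtype=float)
    return best_solutions.mean(), best_solutions.std(ddof=0)

# illustrative usage with placeholder values (not results from the paper)
ave, std = summarize_runs([1.2e-3, 9.0e-4, 1.5e-3, 1.1e-3])
print(ave, std)
```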
4.1. Evaluation of Unimodal Objective Functions
Seven unimodal functions, F1 to F7, are selected to analyze the ability of the optimization algorithms to provide quasi-optimal solutions for unimodal objective functions. The results of optimizing these objective functions using the proposed SLOA as well as the eight other algorithms are presented in
Table 2. The optimization results show that the proposed SLOA has been able to converge to the global optimal solution in solving F1 and F6 functions. SLOA is the first best optimizer in optimizing the F2, F3, F4, F5, and F7 functions. The simulation results show that in optimizing the F1, F2, F3, and F4 functions, the performance of the proposed SLOA is significantly superior to the eight compared algorithms.
Analysis and comparison of the results obtained from the optimization algorithms show that the SLOA has a more effective ability to provide quasi-optimal solutions for this type of objective function than the similar algorithms.
4.2. Evaluation of High-Dimensional Multimodal Objective Functions
The second group of objective functions, comprising the six functions F8 to F13 of the high-dimensional multimodal type, is selected to analyze the power of optimization algorithms in solving this type of optimization problem. The performance results of the optimization algorithms and the SLOA on the objective functions F8 to F13 are presented in
Table 3. The results of this table show that the proposed algorithm has been able to provide the global optimal solution for F9 and F11 functions. The proposed SLOA is the first best optimizer for solving F10 and F12 functions. In optimizing the F8 function, SLOA is the sixth best optimizer after GA, TLBO, PSO, GWO, and TSA algorithms. GSA is the first best optimizer for optimizing the F13 function, while the proposed algorithm for solving this objective function is the second-best optimizer.
Analysis of the results of optimization of the high-dimensional multimodal objective functions shows the acceptable ability of the SLOA to solve such optimization problems.
4.3. Evaluation of Fixed-Dimensional Multimodal Objective Functions
Ten objective functions, including F14 to F23, are considered to analyze the performance of optimization algorithms in solving fixed-dimension multimodal optimization problems. The results of optimization of these objective functions using the proposed SLOA and eight other algorithms are presented in
Table 4. The optimization results indicate that the proposed SLOA is the first best optimizer for the F15, F16, F17, F19, and F20 functions. In optimizing the F14 function, SLOA has a similar performance to GA, GOA, and MPA in the Ave indicator. However, due to the lower std indicator, it is clear that the proposed SLOA is a more efficient method for solving F14. In optimizing the F18 function, SLOA with the lower std indicator is the first best optimizer. In optimizing the F21, F22, and F23 functions, SLOA and MPA provide similar performance in the Ave indicator. But since the proposed SLOA has a lower std indicator, it is the first-best optimizer and MPA is the second-best optimizer.
What is clear from the analysis and comparison of the performance of optimization algorithms is that the SLOA has a high ability to solve fixed-dimensional multimodal optimization problems and is able to provide more effective solutions with less standard deviation than similar algorithms.
The behavior and performance of the proposed SLOA and the eight compared optimization algorithms are presented in the form of a boxplot in
Figure 2. This figure intuitively demonstrates the superiority of the proposed SLOA in optimizing the functions of F1 to F7, F9 to F12, F14 to F23.
4.4. Statistic Analysis
Presenting the results of optimization of the objective functions using the standard statistical indicators of the average and standard deviation of the best solutions provides useful and valuable information. However, the superiority of an algorithm in solving an optimization problem may be coincidental, even after twenty independent runs. Therefore, the Wilcoxon rank-sum test is applied in order to statistically analyze the optimization results obtained from the proposed SLOA and the eight other optimization algorithms. The Wilcoxon rank-sum test is a non-parametric test used to compare two independent samples on a ranking scale. This analysis is applied to specify whether the results obtained using SLOA differ from those of the competing algorithms in a statistically significant way.
A p-value indicates whether the difference between SLOA and the given optimization algorithm is statistically significant or not. If the p-value for the given comparison is less than 0.05, the corresponding difference is statistically significant. The results of the statistical analysis based on the Wilcoxon rank-sum test are presented in
Table 5. Based on the analysis of the results in
Table 5, it is concluded that the proposed SLOA has a significant superiority over the other eight algorithms in cases where the
p-value is less than 0.05. Accordingly, the proposed SLOA has a significant superiority over all eight compared algorithms in optimizing the F1 to F7 unimodal functions. In optimizing high-dimensional multimodal functions, SLOA has a significant superiority over MPA and GOA with
p-value of less than 0.05. Also, in optimizing fixed-dimensional multimodal functions, the proposed SLOA with
p-value of less than 0.05 has a significant superiority over all eight algorithms compared.
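As an illustration of how such a test can be carried out, the sketch below uses SciPy's rank-sum implementation on two sets of per-run best values; the arrays are random placeholders, not the data behind Table 5.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)

# best objective values of 20 independent runs for two algorithms
# (random placeholders, not the values reported in the paper)
sloa_runs = rng.normal(loc=0.01, scale=0.002, size=20)
rival_runs = rng.normal(loc=0.05, scale=0.010, size=20)

statistic, p_value = ranksums(sloa_runs, rival_runs)
print(f"p-value = {p_value:.3e}, significant at 0.05: {p_value < 0.05}")
```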
4.5. Sensitivity Analysis
Sensitivity analysis means studying the changes in the output of a mathematical model due to changes in the values of its input parameters. In other words, it examines how the dependent variable changes, under specified and defined conditions, when the value of one independent parameter is changed while the other parameters are held constant. In order to further analyze the proposed SLOA, a sensitivity analysis is presented. For this purpose, the sensitivity of the SLOA to three parameters is investigated: the maximum number of iterations, the number of members of the snow leopard population, and the P parameter.
In order to analyze the sensitivity of the proposed algorithm to the maximum number of iterations parameter, the SLOA in independent runs for a maximum of 100, 500, 800, and 1000 iterations is implemented on the objective functions F1 to F23. The results of this analysis are presented in
Table 6 and the behavior of convergence curves under the influence of sensitivity analysis to the maximum number of iterations are presented in
Figure 3. The simulation results show that by increasing the maximum number of iterations, SLOA converges towards more suitable quasi-optimal solutions.
The proposed SLOA is implemented in independent runs for the population of snow leopards 20, 30, 50, and 80 on all objective functions F1 to F23 in order to analyze the sensitivity of the SLOA to the parameter of the number of members of the population. The simulation results of the sensitivity analysis to the number of population members are presented in
Table 7, and the behavior of the convergence curves is shown in
Figure 4. The simulation results show that as the number of snow leopard population members increases, the values of the objective function decrease and the algorithm converges towards solutions closer to the global optimal.
The P parameter represents the fraction of the distance to the prey that the snow leopard walks when attacking. In order to analyze the sensitivity of the proposed algorithm to this parameter, SLOA is run independently for different P values equal to 0.2, 0.375, 0.6, and 0.8. The simulation results of this analysis for all F1 to F23 objective functions are reported in
Table 8 and the behavior of convergence curves under the influence of sensitivity analysis to the
P parameter are presented in
Figure 5. The simulation results show that in the target objective functions of F6, F9, F11, F14, F16, F17, F18, F19, F20, and F23, the
P parameter changes had no effect on the performance of the proposed SLOA. In the F1, F5, F10, F21, and F22 functions, the
P parameter changes had very little effect on the performance of the proposed SLOA. According to the analysis of the optimization results, it is determined that the value of 0.375 for the
P parameter is a suitable value. Therefore, the authors suggest that researchers use the value 0.375 for the
P parameter in their research simulations.
5. Discussion
Exploitation and exploration are two important and key criteria in analyzing the performance of optimization algorithms and evaluating their ability to provide appropriate quasi-optimal solutions.
Exploitation means the ability of the algorithm to provide a suitable quasi-optimal solution that is close to the global optimal. Therefore, compared to the ability of several algorithms to solve an optimization problem, the algorithm that can provide the best quasi-optimal solution and the closest to the global optimal has a higher exploitation. Exploitation is very important especially for optimization problems that lack local optimal solutions. Unimodal objective functions, including F1 to F7, have only one main solution and lack local optimal solutions. Therefore, these types of objective functions are suitable for evaluating the exploitation of optimization algorithms. The results of optimization of these objective functions are presented in
Table 2. Based on the analysis of the results of this table and the comparison of the performance of optimization algorithms, it is clear that the proposed SLOA has provided more suitable quasi-optimal solutions and has a higher capability in the exploitation index than the other eight algorithms.
Exploration means the ability of the algorithm to accurately and effectively scan the search space of optimization problems. This index is especially important in solving optimization problems that have several local optimal solutions in addition to the global optimal. An optimization algorithm must be able to search the various regions of the problem-solving space well during execution and approach the global optimal by passing through the local optimal regions. Therefore, in evaluating the performance of several algorithms in solving an optimization problem, an algorithm that can scan the problem-solving space well has a higher ability in the exploration index. High-dimensional multimodal objective functions, including F8 to F13, as well as fixed-dimensional multimodal objective functions, including F14 to F23, are functions that have several optimal local solutions in addition to the main global optimal. Therefore, these types of objective functions are suitable for analyzing the exploration index in optimization algorithms. The performance results of optimization algorithms in solving these objective functions are presented in
Table 3 and
Table 4. The analysis of these results indicates that the proposed SLOA, with its high exploration power, has scanned the search space of optimization problems containing local optimal areas and has approached the global optimal by passing through the local areas.
Exploration power allows the algorithm to scan different areas of the search space with the aim of passing through local optimal areas and discovering the main optimal area. Exploitation power causes an algorithm to converge as much as possible towards the optimal solution after finding the optimal area in the search space. The main feature and advantage of the proposed SLOA is that, in the process of scanning the search space, it does not rely on a specific member such as the best member of the population. The first and second phases of SLOA, which simulate the snow leopards' zig-zag movement and hunting strategy, increase the exploitation power of the proposed algorithm in converging to the optimal solution. The effect of these two phases and the high convergence power of the proposed SLOA are significantly evident in the results of optimizing the F1 to F4 functions. The third and fourth phases of SLOA, which model reproduction and mortality, have a great impact on the algorithm's exploration power in scanning new areas of the search space and moving away from non-optimal areas. The high exploration power of the proposed SLOA is well evident in the optimization of the F9, F11, and F14 to F23 functions.
Therefore, with an overview of the optimization results obtained for the three different groups of objective functions, it is determined that the proposed SLOA has high capability and power in both the exploitation and exploration indicators. In fact, the main reason for the proposed algorithm's superiority over the compared algorithms is its exploitation and exploration abilities. SLOA, with its high exploration capability, can search different areas of the problem-solving space and, after crossing local optimal areas and approaching the global optimal solution, converge to the global optimal as much as possible thanks to its high exploitation capability. As specified in the simulations, SLOA has been able to provide the global optimal solution in optimizing the F1, F6, F9, and F11 functions. What is inferred from the simulation results is that the performance of the proposed SLOA in optimization is superior to the comparative algorithms and its results are much more competitive.
6. Conclusions and Future Studies
Numerous optimization problems in different sciences should be optimized with an appropriate technique. Population-based optimization algorithms are among the stochastic methods for solving optimization problems that can provide acceptable quasi-optimal solutions. In this paper, a new optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) has been presented in order to effectively solve optimization problems in various sciences and provide quasi-optimal solutions that are more desirable and closer to the global optimal. The theory and the different stages of the proposed SLOA have been stated, and then the mathematical model of the SLOA has been presented for implementation on optimization problems with the aim of achieving quasi-optimal solutions. The performance of the proposed SLOA has been tested on a standard set consisting of twenty-three objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. Also, in order to analyze the quality of the SLOA in providing quasi-optimal solutions, the results obtained from the SLOA have been compared with those of eight other well-known algorithms, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), Teaching-Learning Based Optimization (TLBO), the Gray Wolf Optimizer (GWO), the Grasshopper Optimization Algorithm (GOA), the Marine Predators Algorithm (MPA), and the Tunicate Swarm Algorithm (TSA).
The optimization results indicate the high ability of the SLOA to provide suitable quasi-optimal solutions for optimization problems of different types. Also, the results of the performance analysis and the comparison with the comparative optimization algorithms show that the SLOA presents better results and is much more competitive than these eight optimization algorithms.
The authors offer several suggestions and perspectives for future studies of this paper. The design of binary and multi-objective versions of SLOA is among the main potential extensions of the proposed algorithm. In addition, the use of the proposed SLOA in solving real-world optimization problems, as well as various other types of optimization problems, is suggested for future studies related to this paper; see, e.g., [
34,
35,
36,
37,
38,
39].
The important thing about all optimization algorithms is that one cannot claim that a particular algorithm is the best optimizer for all optimization problems. Therefore, one of the limitations of the proposed SLOA is that in optimizing some optimization problems, it may not be able to provide a quasi-optimal solution very close to the global optimal. Another limitation for SLOA is that it is always possible to design newer algorithms that have a higher ability to converge to the optimal solution. In addition, with the advancement of science and technology, more complex optimization issues arise that existing algorithms, such as the proposed algorithm, may not be able to solve and require the improvement of existing methods or the design of newer methods.