1 Introduction

Optimization problems are ubiquitous in engineering and must be solved efficiently. Optimization methods use mathematical techniques to realize a variety of search strategies for optimizing an objective function. However, for complex engineering problems, limitations of computing time, algorithmic conditions, and other factors make it challenging to obtain the optimal solution. Heuristic optimization methods design various search strategies based on intuition or prior knowledge of the problem, which is of practical significance for solving complex engineering optimization problems. In recent years, numerous meta-heuristic techniques have been developed and applied to real-world problems. Because of their simplicity, efficiency, flexibility, and low computational cost, meta-heuristic algorithms for solving intractable problems have become a hot research topic [1-3]. In [4], a genetic-moth swarm algorithm was proposed for determining suitable locations and capacities of distributed generations in distribution systems, and it successfully reduced electrical power loss without violating security constraints. In [5], a PSOGWO algorithm was introduced by combining the grey wolf optimizer and particle swarm optimization for parameter estimation of photovoltaic models. Compared with several well-performing methods, the experimental results demonstrated that the developed algorithm outperforms all competitors. Moreover, metaheuristic techniques have long been used in PV model parameter estimation and have provided accurate results that have contributed to the development of this field [6-10]. Meanwhile, metaheuristic techniques have become increasingly prevalent in the field of Wireless Sensor Networks (WSNs). For example, in [11], a modified version of the artificial bee colony algorithm was developed for optimal clustering in WSNs.
The proposed approach effectively balances the network load and extends the network lifetime; this promising performance proves that the method is an effective clustering tool for WSNs. In [12], the sine cosine algorithm and Harris hawks optimization were integrated to form a better-performing algorithm for low- and high-dimensional feature selection. Compared with other recognized hybrid algorithms and state-of-the-art feature selection approaches, the suggested method is competitive and useful for real-world applications. Furthermore, many metaheuristic-based feature selection approaches have recently been developed and provide superior performance [13-16]. In [17], an improved fractional-order cuckoo search algorithm was proposed for COVID-19 X-ray image classification. The outcomes verify that this method performs well in terms of classification accuracy. In addition, many metaheuristic algorithms have contributed to the battle against coronavirus disease, which greatly alleviated the workload of doctors while gaining more treatment time for patients [18-21]. In [22], a novel particle swarm optimization algorithm was developed for path planning and obstacle avoidance in autonomous mobile robots. The experimental results demonstrate that the proposed approach can generate an optimal collision-free route for mobile robots. Besides, a variety of nature-inspired swarm-based techniques have been employed to establish safe and reasonable routes for mobile robots, such as artificial bee colony [23], chemical reaction optimization [24], bat algorithm [25], ant colony optimization [26], and cuckoo search [27].
Due to their powerful ability to solve real-world problems, metaheuristic techniques are growing in popularity in many research areas, including image processing [28], economic dispatch [29], PID controller design [30], cloud computing [31], and environmental engineering [32]. According to the "no free lunch" (NFL) theorem [33], no single method can handle all optimization problems well. In other words, an optimization algorithm may achieve satisfactory results on some optimization problems but perform poorly on others. Therefore, the study of meta-heuristic algorithms has strong practical significance.

Nature-inspired meta-heuristic algorithms solve optimization problems by casting biological or physical phenomena as mathematical models, and they can be divided into four classes: evolution-based, physics-based, human-based, and swarm-based methods. As the most popular class of meta-heuristic algorithms, swarm-based methods simulate the social behavior of fauna in nature. The most representative algorithm is Particle Swarm Optimization (PSO), presented by Kennedy and Eberhart [34], which is inspired by the flocking behavior of birds. PSO uses several particles that travel around the search space to find the optimal solution. Another popular swarm-based method is the Firefly Algorithm (FA), first proposed by Yang [35], which is inspired by the swarming behavior of fireflies in nature. In fact, the collective intelligence of fireflies in finding food through specific ways of communication is the primary insight behind this optimizer; it uses several fireflies that move through the search space to find the optimal solution. In addition, the Artificial Bee Colony (ABC) algorithm [36], proposed by Karaboga et al. and inspired by the cooperative foraging behavior of bees in a colony, has been successfully applied in many disciplines.
In recent years, many novel swarm intelligence based algorithms have been presented, such as: Krill Herd (KH) [37], Bat Algorithm (BA) [38], Coyote Optimization Algorithm (COA) [39], Bacterial Foraging Optimization (BFO) Algorithm [40], Grey Wolf Optimization (GWO) [41], Fruit Fly Optimization Algorithm (FOA) [42], Spotted Hyena Optimizer (SHO) [43], Poor and Rich Optimization Algorithm (PRO) [44], Barnacles Mating Optimizer (BMO) [45], Multi-Verse Optimizer (MVO)  [46], Animal Migration Optimization (AMO) [47], Dragonfly Algorithm (DA) [48], Whale Optimization Algorithm (WOA) [49], Flower Pollination Algorithm (FPA) [50], Galactic Swarm Optimization (GSO) [51], Meerkat-Inspired Algorithm (MIA) [52], Farmland Fertility Algorithm (FFA) [53], Pathfinder Algorithm (PFA) [54], Falcon Optimization Algorithm (FOA) [55], Harris Hawks Optimizer (HHO) [56], Owl Optimization Algorithm (OOA) [57], Manta Ray Foraging Optimization (MRFO) [58], Monarch Butterfly Optimization (MBO) [59], Pity Beetle Algorithm (PBA) [60], Earthworm optimization algorithm (EOA) [61] and Salp Swarm Algorithm (SSA) [62]. Among these algorithms, SSA has been widely studied in recent years because of its distinctive perspective.

The SSA algorithm is a novel swarm-based stochastic optimization algorithm proposed by Mirjalili et al. in 2017, inspired by the phenomenon that salps follow each other to form a chain when navigating and foraging in the ocean. Through this process of leading and following, an efficient optimization scheme is constructed. The SSA algorithm has been applied to various optimization problems because of its simple mechanism, dynamic nature, and strong global search ability, such as extracting the parameters of photovoltaic cells [63, 64], optimization of software-defined networks [65], training neural networks [66, 67], optimizing parameters of soil water retention [68], tariff optimization in electrical systems [69], image segmentation [70, 71], target localization [72], the optimal power flow problem [73], estimating optimal parameters of polymer exchange membrane fuel cells [74], feature selection [75-80], and others. Although the SSA algorithm is very competitive, it still suffers from some limitations, such as poor convergence and unbalanced exploration and exploitation capacities, which may lead to local optimum stagnation when solving some intractable optimization problems. To address these drawbacks, many scholars have tried to adjust the exploration or exploitation strategies to enhance the comprehensive performance of the canonical SSA.

Qais et al. [81] modified the control parameters of SSA and introduced a random parameter into the follower position update formula to improve convergence performance. Gupta et al. [82] proposed a modified version of SSA in which the concept of opposition-based learning (OBL) is introduced to improve the ability of the canonical SSA to escape local optima, and the Lévy flight strategy is introduced to help explore the search space; the combined effect of the OBL and Lévy flight mechanisms strengthens the exploration-exploitation balance during the search process. Zhang et al. [83] proposed an enhanced SSA, which promotes the overall performance of the algorithm by embedding multiple strategies into the original SSA: OBL helps enrich the diversity of the initial population, orthogonal learning increases the probability of avoiding local optima, and quadratic interpolation improves the convergence accuracy. Ren et al. [84] proposed an improved version of SSA, which introduces an adaptive weight and a Lévy flight strategy into the traditional SSA to promote the comprehensive performance of the algorithm; the adaptive weight expands the exploration range, while the Lévy flight strategy strengthens the exploitation of the solution space. Gholami et al. [85] introduced a mutation mechanism to help the algorithm jump out of local optima, which alleviated the insufficient accuracy of the algorithm. Sayed et al. [86] proposed a chaotic SSA that introduces ten different chaotic maps when updating the control parameters of SSA, so as to reduce the risk of sinking into local optima. Wu et al. [87] proposed a new version of SSA, which reduces the probability of local optimal stagnation by introducing a dynamic weight factor and an adaptive mutation strategy. Ibrahim et al.
[88] proposed an improved SSA, called SSAPSO, which improves the flexibility of the algorithm in the exploration phase by hybridizing SSA and PSO. Singh et al. [89] proposed a novel method that mixes SSA with the Sine Cosine Algorithm (SCA) [90], which enhances the exploratory ability of the algorithm. Li et al. combined the Gravitational Search Algorithm (GSA) [91] with SSA to increase the probability of escaping local optima; GSA and SSA have a clear division of labor, the former being responsible for global exploration and the latter for local exploitation. Yang et al. [92] proposed a memetic SSA in which the convergence rate and convergence accuracy are enhanced through memetic behavior. Syed et al. [93] proposed a weighted SSA, which improves the ability to balance exploitation and exploration during the search process by introducing a weighted sum of optimal locations into the position update method. Bairathi et al. [94] proposed an OBL-based SSA variant, which enriches the diversity of the initial salp swarm by introducing OBL during the population initialization stage, thus increasing the convergence speed and enhancing the exploitation ability. Chen et al. [95] proposed an improved SSA that introduces a non-linear attenuation factor to control the search range and improve the local search ability, together with a dynamic learning strategy to enhance the assistance of elite individuals to the leader. Although each of the SSA variants discussed above improves the performance of the original SSA to some extent, some drawbacks remain, such as premature convergence and an unbalanced trade-off between global search and local exploitation. In addition, the above-mentioned improved algorithms are only applicable to specific optimization problems, and none of them can solve all optimization problems well. Therefore, it is of great practical significance to propose more effective algorithms.
Motivated by these two viewpoints, this study presents a modified SSA variant, called OOSSA, which improves the overall performance of the canonical SSA. The main contributions of this investigation are summarized as follows:

  • An improved variant of SSA is presented that does not alter the basic structure of the canonical SSA.

  • An adaptive technique is presented for the number of leaders in order to improve the global search ability and to promote the convergence performance.

  • A lens opposition-based learning mechanism is proposed and, combined with orthogonal experimental design, an orthogonal lens opposition-based learning technique is designed to overcome local optima stagnation.

  • A ranking-based dynamic learning strategy is presented to enhance the local exploitation capability.

  • The proposed methodology maintains a more stable balance between global exploration and local exploitation.

  • The efficiency of the proposed OOSSA is evaluated on a comprehensive set of 26 well-known benchmark functions with diverse difficulty levels, and compared with a variety of competitive metaheuristic techniques, including the canonical SSA, SSA variants, and other cutting-edge swarm-based methods.

  • Three real-world engineering optimization problems and parameter extraction problem of PV model are used to verify the efficiency and practicality of the proposed algorithm.

  • An OOSSA-based path planning approach is developed for solving the path planning and collision avoidance problem in autonomous mobile robots.

The remainder of this article is structured as follows: Section 2 illustrates the preliminary knowledge. The proposed SSA-based algorithm is specified in detail in Section 3. The experimentation and verification of the proposed method on benchmark functions are performed in Section 4. In Section 5, the proposed method is used to solve three real-life engineering problems and determine the parameters of PV models. In Section 6, the proposed OOSSA-based path planning and obstacle avoidance approach is described and simulation and comparative studies are discussed. Finally, Section 7 concludes this investigation. The detailed flow of the article is shown in Fig. 1.

Fig. 1 Outline of the paper

2 Preliminary knowledge

2.1 Salp swarm algorithm (SSA)

Salps are translucent, colloidal marine organisms that resemble jellyfish and move by inhaling and expelling seawater. Researchers modeled the chain structure of salp swarms mathematically and proposed SSA. SSA divides salps into two types: the leader and the followers. The leader is at the front of the salp chain, while the followers are at the back, as shown in Fig. 2.

Fig. 2 An illustration of salp chain

In SSA, the food source is the foraging target of the salp chain, which guides the leader to update the position. The mathematical model for updating the position of leader is as follows:

$$ {X}_j^1=\begin{cases}{F}_j+{c}_1\left(\left(u{b}_j-l{b}_j\right){c}_2+l{b}_j\right), & {c}_3\ge 0.5\\ {F}_j-{c}_1\left(\left(u{b}_j-l{b}_j\right){c}_2+l{b}_j\right), & {c}_3<0.5\end{cases} $$
(1)

where X^1_j and Fj indicate the positions of the leader and the food source in the jth dimension, respectively. ubj and lbj correspond to the upper and lower boundaries of the jth dimension of the search space, while c2 and c3 are random variables uniformly distributed in the interval [0, 1], which determine the step size and the moving direction, respectively.

c1 is an important parameter in SSA, which is called distance control factor. The c1 parameter is defined as shown in Eq. (2).

$$ {c}_1=2{e}^{-{\left(\frac{4t}{T}\right)}^2} $$
(2)

where t is the current iteration and T is the maximum number of iterations.
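To make the roles of these parameters concrete, the following minimal Python sketch evaluates Eqs. (1) and (2) for a single dimension (the function names and this framing are ours, not part of the original SSA description):

```python
import math
import random

def c1(t, T):
    """Distance control factor of Eq. (2): decays from about 2 toward 0."""
    return 2 * math.exp(-(4 * t / T) ** 2)

def leader_update(F_j, lb_j, ub_j, t, T):
    """Leader move of Eq. (1) in the jth dimension, around food source F_j."""
    c2, c3 = random.random(), random.random()
    step = c1(t, T) * ((ub_j - lb_j) * c2 + lb_j)
    return F_j + step if c3 >= 0.5 else F_j - step
```

Early iterations (large c1) produce long jumps around the food source; late iterations shrink the step, shifting the leader toward pure exploitation.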

To update the states of the followers, Eq. (3) is applied:

$$ {X}_j^i=\frac{1}{2}\left({X}_j^i+{X}_j^{i-1}\right) $$
(3)

where i ≥ 2 and X^i_j denotes the position of the ith follower salp in the jth dimension of the search space.

The search process of the SSA algorithm includes two stages: global exploration and local exploitation. During the initialization stage, the randomly generated initial population searches randomly in the search space to help the algorithm lock onto the region of the optimal solution. Then, the algorithm enters the exploitation stage and searches accurately within the limited area determined in the previous stage to improve the convergence accuracy. It should be pointed out that the distance control parameter c1 has an important influence on the search process. In the early stage of evolution, the value of c1 is large, which helps the algorithm explore the whole solution space; c1 then decreases adaptively over the course of iterations, and a small value of c1 helps the algorithm carry out accurate exploitation in a specific search area. Since there is no prior knowledge of the position of the food source, the global optimal solution obtained in each iteration is set as the current food source position.
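The two-stage process just described can be tied together in a compact, runnable sketch. This is a simplified single-leader variant with boundary clipping; the function names, default parameters, and the Sphere test function are our assumptions, not the reference implementation:

```python
import math
import random

def ssa(obj, lb, ub, N=30, D=3, T=200, seed=1):
    """Minimal SSA sketch: salp 0 is the leader, the rest are followers."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(D)] for _ in range(N)]
    F = min(X, key=obj)[:]                       # food source = best-so-far
    for t in range(T):
        c1 = 2 * math.exp(-(4 * t / T) ** 2)     # Eq. (2)
        for j in range(D):                       # Eq. (1): leader tracks the food
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub - lb) * c2 + lb)
            X[0][j] = F[j] + step if c3 >= 0.5 else F[j] - step
        for i in range(1, N):                    # Eq. (3): followers chain up
            X[i] = [(X[i][j] + X[i - 1][j]) / 2 for j in range(D)]
        X = [[min(max(v, lb), ub) for v in row] for row in X]  # clip to bounds
        best = min(X, key=obj)
        if obj(best) < obj(F):
            F = best[:]
    return F, obj(F)

sphere = lambda x: sum(v * v for v in x)
```

Running `ssa(sphere, -100.0, 100.0)` drives the chain from a scattered initial population toward the origin, illustrating the exploration-then-exploitation pattern.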

Fig. 3 demonstrates the flowchart of SSA.

2.2 Principle of lens imaging of light

The convex lens imaging (LI) law is an optical law stating that an object placed outside the focal point forms an inverted real image on the other side of the convex lens [96], as shown in Fig. 4.

Fig. 3 The flowchart of SSA

Fig. 4 The convex lens image of light

Fig. 5 Population distribution observed at various stages in SSA for solving Sphere function (D = 3)

Fig. 6 The number of leaders and followers changes adaptively over the course of iterations

Fig. 7 Lens opposition-based learning

Fig. 8 Construct experimental solution

Fig. 9 The flowchart of OOSSA

Fig. 10 Search space of some typical benchmark problems

Fig. 11 Convergence curves of SSA-based algorithms for twelve representative test functions

Fig. 12 Convergence curves of OOSSA and twelve cutting-edge algorithms for twelve representative test functions

Fig. 13 Pressure vessel design problem

Fig. 14 I-beam design problem

Fig. 15 Cantilever beam problem

Fig. 16 Equivalent circuit of SDM

The mathematical model of LI can be obtained from Fig. 4 as follows:

$$ \frac{1}{u}+\frac{1}{v}=\frac{1}{f} $$
(4)

where u and v are the object distance and image distance, respectively, and f is the focal length.
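As a quick numeric check of Eq. (4), one can solve for the image distance given the object distance and the focal length (the helper name is ours):

```python
def image_distance(u, f):
    """Solve the thin-lens equation of Eq. (4), 1/u + 1/v = 1/f, for v."""
    return 1 / (1 / f - 1 / u)
```

For an object at twice the focal length (u = 2f), the image also forms at 2f, the textbook symmetric case.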

2.3 Opposition-based learning

Opposition-based learning (OBL) is an effective tool for improving the performance of stochastic search algorithms. It was first proposed by Tizhoosh [97] and has been applied to many intelligent optimization techniques. Its main idea is to evaluate both the current feasible solution and its reverse solution, and choose the better one. Reference [98] shows that the reverse solution has a greater probability of approaching the global optimum than the current solution. Therefore, the OBL technique can effectively improve the comprehensive performance of stochastic algorithms. The concept of OBL is defined as follows:

Definition 1 (Opposite number) [99] Let x∈[a,b] be a real number. Its opposite number \( \tilde{x} \) is calculated by Eq. (5).

$$ \overset{\sim }{x}=a+b-x $$
(5)

Extending the concept of the opposite number to high-dimensional space, the opposite point is defined as follows:

Definition 2 (Opposite point) [99] Let X=(x1, x2,…, xD) be a point in D-dimensional space, where x1, x2,…, xD are real numbers and xi∈[ai,bi], i=1,2,…,D. Its opposite point \( \overset{\sim }{\mathrm{X}}=\left({\overset{\sim }{\mathrm{x}}}_1,{\overset{\sim }{\mathrm{x}}}_2,\dots, {\overset{\sim }{\mathrm{x}}}_D\right) \) is calculated by Eq. (6).

$$ \tilde{x}_{i}={a}_i+{b}_i-{x}_i $$
(6)
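Definitions 1 and 2 amount to a reflection about the interval midpoints; Eq. (6) can be sketched in one line of Python (the helper name is ours):

```python
def opposite_point(x, a, b):
    """Eq. (6): component-wise opposite of point x within the box [a_i, b_i]."""
    return [ai + bi - xi for xi, ai, bi in zip(x, a, b)]
```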

2.4 Orthogonal experimental design (OED)

OED is an efficient tool for finding the optimal combination in multi-factor, multi-level experiments through a small number of trials [100]. For example, a full factorial design for a 2-level, 7-factor experiment requires 2^7 = 128 tests to discover the optimal combination. If an orthogonal experimental design based on the orthogonal table L8(2^7) of Eq. (7) is used instead, an optimal or near-optimal combination can be found with only 8 tests, which greatly improves the experimental efficiency. Since there is no guarantee that the optimal combination appears in the orthogonal table [101], a factor analysis is usually performed to derive the theoretically optimal combination, which is then compared with all combinations in the orthogonal table to determine the best solution of the experiment. Hence, for the 2-level, 7-factor experiment mentioned above, eight candidate combinations are first obtained according to the orthogonal table L8(2^7), then factor analysis is carried out to find a theoretically optimal combination, and finally nine combinations are evaluated to find the best combination for the experiment.

$$ {L}_8\left({2}^7\right)=\left[\begin{array}{ccccccc}1& 1& 1& 1& 1& 1& 1\\ {}1& 1& 1& 2& 2& 2& 2\\ {}1& 2& 2& 1& 1& 2& 2\\ {}1& 2& 2& 2& 2& 1& 1\\ {}2& 1& 2& 1& 2& 1& 2\\ {}2& 1& 2& 2& 1& 2& 1\\ {}2& 2& 1& 1& 2& 2& 1\\ {}2& 2& 1& 2& 1& 1& 2\end{array}\right]. $$
(7)
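The defining property of the table in Eq. (7) can be checked mechanically: every column is balanced, and every pair of columns contains each level combination equally often. The sketch below (helper names are ours) verifies this for L8(2^7):

```python
from itertools import product

# The L8(2^7) table of Eq. (7); rows are trials, columns are factors.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_orthogonal(table):
    """Each pair of columns contains (1,1), (1,2), (2,1), (2,2) equally often."""
    cols = list(zip(*table))
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            pairs = list(zip(cols[i], cols[j]))
            if any(pairs.count(p) != len(table) // 4
                   for p in product((1, 2), repeat=2)):
                return False
    return True
```

This balance is what lets 8 trials stand in for the full factorial of 128.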

3 Proposed OOSSA approach

3.1 Motivation

According to the previous section, the leader constantly moves toward the food source, and the followers follow closely behind the leader. In this way, the salp chain completes the foraging process under the leadership of the leading salp. As can be seen from Eqs. (1) and (3), the current optimal solution, i.e., the position of the food source, has a direct impact only on the leading salp, while its impact on the followers is indirect and relatively weak. The food source provides the best guidance when the salp swarm forages in the ocean, so it is very important that the food source be located at the global optimum. However, due to the lack of prior knowledge, it is difficult to determine whether the current food source is globally optimal. If the food source lies in a locally optimal region, the salp chain will gather there, resulting in the loss of population diversity, and finally the algorithm converges to a local optimum. Fig. 5 shows the positions of 30 salps for the three-dimensional Sphere problem in the interval [-100, 100], observed at various stages of SSA.

From Fig. 5, at the early stage of SSA (Fig. 5(a)), 30 individuals are scattered in the search space, and the rich population diversity gives the algorithm good global exploration ability. As the number of evolutions increases (Fig. 5(b)), the leading salp continues to approach the food source and the followers follow each other; all salps form a chain and move around the food source. In the middle stage of SSA (Fig. 5(c)), all individuals continue to contract toward the food source position, and the search range keeps narrowing, until at the end of the optimization process (Fig. 5(d)) all individuals gather near the current optimal solution and population diversity is lost. If the food source is located at a local optimum, the leader will lead the salp chain to gradually move into the local optimum area, eventually causing SSA to fall into a local optimum. Therefore, it is necessary to adjust the leader's position update strategy to improve the diversity of the SSA algorithm. The basic SSA is prone to search stagnation, making it difficult to accomplish the goal of balancing global search with fast convergence. In addition to the reasons analyzed above, an unreasonable ratio of leaders to followers is also an important cause of this deficiency. In the canonical SSA algorithm, the leader is responsible for global exploration, and the followers are responsible for local exploitation. That is to say, the leader first locates a rough approximation of the global optimum, and then the followers search this area accurately to improve solution accuracy. There is always only one leader in the standard SSA, which means that in the early stage of evolution the global search performed by a single leader is insufficient, while too many followers cause excessive local exploitation, leading to premature convergence. This situation also persists in later iterations.
To solve this problem, we propose an adaptive mechanism for the number of leaders, i.e., the number of leaders changes adaptively over the course of iterations, aiming to achieve an exploration-exploitation balance. Additionally, the basic SSA performs poorly in terms of convergence accuracy. To address this problem, we adjust the follower position update mechanism and propose a ranking-based dynamic learning strategy.

The points discussed above indicate that the basic SSA has some limitations, and they are the motivation behind our proposal of an improved version of SSA. The new SSA-based structure aims to overcome the drawbacks of SSA through different modifications, so as to improve the overall performance of the basic SSA. A detailed discussion of each of the introduced operators is provided in the following subsections.

3.2 The adaptive mechanism for the number of leaders

In the basic SSA algorithm, the leader is the search agent at the front of the salp chain, and the other salps are followers. The working nature of a salp is determined by its role: the leader updates its state based on the position information of the food source, and the followers update their positions by following each other. As the number of evolutions increases, the salp chain keeps moving closer to the food source under the leadership of the leading salp. However, it can be seen from Eq. (3) that when a follower updates its position, it is only affected by its own state and the position of the individual in front of it, while the global optimal solution, i.e., the food source position, directly affects only the leader's position update. Therefore, if there is only one leader in the salp chain, the global optimal solution provides too little help to the salp swarm during the search process. Once the leader falls into a local optimum, the followers must follow it into the locally optimal region, resulting in premature convergence. To solve this problem, we make two improvements to the numbers of leaders and followers. First, the number of leaders is increased, which helps improve the global search ability and accelerates the convergence of the algorithm. Then, an adaptive technique is presented for the number of leaders, which helps enhance the exploration-exploitation balance and improve solution accuracy. Fig. 6 shows how the numbers of leaders and followers change adaptively over the course of iterations. To calculate the numbers of leaders and followers, Eq. (8) is utilized.

$$ \begin{cases} leaderno= ceil\left(N\cdotp b\cdotp \tan \left(\frac{-\pi \left(l-1\right)}{4T}+\frac{\pi }{4}\right)\right)\\ followerno=N- leaderno\end{cases} $$
(8)

where leaderno and followerno denote the numbers of leaders and followers, respectively, l is the current iteration, N is the population size, T is the maximum number of iterations, and b is a parameter adjusting the number of leaders. After a large number of experiments, the value of b is set to 0.55.

Equation (8) shows that the number of leaders decreases adaptively over the course of iterations, while the number of followers increases accordingly. In the early iteration, a sufficient number of leaders perform global exploration and a suitable number of followers perform local exploitation. In this case, multiple leaders effectively explore unknown areas, thus improving the algorithm's ability to jump out of local optimum. At the same time, an appropriate number of followers can ensure that the algorithm has a strong exploitation capacity. As the number of iterations increases, the number of leaders decreases adaptively and the number of followers increases accordingly. In this case, a sufficient number of followers can accurately search within the global optimum area determined at the early iteration, so as to improve the convergence accuracy.
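Eq. (8) can be sketched directly (the function name and the illustrative settings N = 30, T = 200 are ours):

```python
import math

def leader_follower_counts(l, T, N=30, b=0.55):
    """Eq. (8): the leader count shrinks adaptively as iteration l runs from 1 to T."""
    leaderno = math.ceil(N * b * math.tan(-math.pi * (l - 1) / (4 * T) + math.pi / 4))
    return leaderno, N - leaderno
```

With N = 30 and b = 0.55, the swarm starts with 17 leaders and 13 followers and ends with a single leader, matching the trend shown in Fig. 6.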

3.3 Orthogonal lens opposition-based learning strategy

In the canonical SSA, at the end of the search process, the salp chain tends to move within a small area near the food source. If the food source is at a local optimum, the population will converge to that local optimum, resulting in search stagnation. Therefore, when dealing with intractable multi-modal problems, SSA is prone to premature convergence, and the ability to escape local optima has become the most urgent problem for SSA to solve. Consequently, this study presents a new Orthogonal Lens Opposition-Based Learning (OLOBL) mechanism to help the leading salps migrate to a potentially more promising region.

Firstly, a Lens Opposition-Based Learning (LOBL) strategy is presented as the technique used to generate reverse solutions in the OLOBL strategy. In essence, LOBL is a dynamic opposition-based learning strategy designed by combining OBL with the optical LI principle, and its performance is better than that of OBL. Next, the definition of LOBL is illustrated in detail:

Definition 3 (Cardinal point) [102] Let o1,o2,…,om be several points in D-dimensional space. The opposite of point X=(x1,x2,…,xD) is \( \overset{\sim }{\mathrm{X}}=\left({\overset{\sim }{\mathrm{x}}}_1,{\overset{\sim }{\mathrm{x}}}_2,\dots, {\overset{\sim }{\mathrm{x}}}_D\right) \), and the Euclidean distances from these two points to oi (i=1,2,…,m) are di and di*, respectively. Let ki=di/di*; then oi is designated the cardinal point of X and \( \tilde{X} \) with scale factor k=ki.

Consider a one-dimensional search space as an example. Suppose there is an object M with height h located at x on the coordinate axis, with x∈[a,b], and a lens with focal length r is fixed at the cardinal point o; it should be pointed out that this paper takes the midpoint of the interval [a, b] as the cardinal point. Based on the LI principle, an image M* with height h* can be obtained. The one-dimensional LOBL process for the leading salp (x) is illustrated in Fig. 7.

In Fig. 7, x takes o as the cardinal point to get the corresponding opposite point \( \tilde{x} \), the geometric relationships can be specified as follows:

$$ \frac{\frac{\left(a+b\right)}{2}-x}{\overset{\sim }{x}-\frac{\left(a+b\right)}{2}}=\frac{h}{h^{\ast }} $$
(9)

Let h/h*=k, Eq. (9) can be modified as:

$$ \overset{\sim }{x}=\frac{\left(a+b\right)}{2}+\frac{\left(a+b\right)}{2k}-\frac{x}{k} $$
(10)

Letting k=1, Eq. (10) reduces to

$$ \overset{\sim }{x}=a+b-x $$
(11)

Equation (11) is exactly the OBL equation in [103].

Equations (10) and (11) show that the reverse individual obtained from the OBL strategy is fixed, whereas the reverse individual obtained from LOBL is dynamic, depending on the value of k.

Extending Eq. (10) to D-dimensional space yields Eq. (12).

$$ \overset{\sim }{x_i}=\frac{\left({a}_i+{b}_i\right)}{2}+\frac{\left({a}_i+{b}_i\right)}{2k}-\frac{x_i}{k} $$
(12)

where xi and \( \overset{\sim }{x_i} \) are the ith-dimensional components of x and \( \overset{\sim }{x} \), respectively, and ai and bi are the lower and upper boundaries of the ith dimension.
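Eq. (12) is easily sketched; with k = 1 it degenerates to the plain OBL reflection of Eq. (6), while other k values move the reverse point dynamically (the helper name is ours):

```python
def lobl(x, a, b, k):
    """Eq. (12): lens opposition-based reverse solution with scale factor k."""
    return [(ai + bi) / 2 + (ai + bi) / (2 * k) - xi / k
            for xi, ai, bi in zip(x, a, b)]
```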

OLOBL is a technique produced by combining OED and LOBL. Compared with OBL, LOBL can further enrich the diversity of the population, thus enhancing the global exploration capability of the algorithm; therefore, LOBL is used to generate the reverse solution in the OLOBL mechanism. However, according to the research of Park et al. [104], a reverse solution is better than the current solution only in some dimensions. Taking the reverse value in all dimensions of an individual therefore causes dimension degradation, that is, some dimensions move far away from the global optimal solution. To solve this problem, OED and LOBL are combined into an orthogonal lens opposition-based learning (OLOBL) strategy, which fully considers each dimensional component of the current individual and the reverse individual and combines their dominant dimensions to produce a partial reverse solution.

The OLOBL strategy is embedded in the canonical SSA algorithm: the dimension D of the problem corresponds to the factors of the OED, and the individual and its opposite individual are the two levels. The specific process of constructing the partial reverse solutions is as follows. A 2-level, D-factor orthogonal experiment is designed for the current individual and its opposite individual, and M partial reverse solutions are generated, where M is calculated according to Eq. (13). When a partial reverse solution is generated according to the orthogonal table, if an element of the orthogonal table is 1, the partial reverse solution takes the value of the current individual in the corresponding dimension; if the element is 2, it takes the value of the opposite individual in that dimension. Taking a 7-dimensional problem as an example, the process of producing partial reverse solutions with a 2-level, 7-factor orthogonal experiment is illustrated in Fig. 8.

$$ M={2}^{\left\lceil {\log}_2\left(D+1\right)\right\rceil } $$
(13)

According to the characteristics of the OED, the elements in the first row of the orthogonal table are all 1, which means that the first set of experimental solutions is the current individual itself and does not need to be evaluated. The other M-1 sets of experimental solutions are combinations of the dominant dimensions of the current individual and its reverse individual, that is, partial reverse solutions, which need to be evaluated. In addition, as mentioned above, using the OED requires a factor analysis to construct a theoretically optimal combination that is not in the orthogonal table, and this combination must also be evaluated. Therefore, the number of function evaluations (FEs) required for each execution of OLOBL is M. To balance enhancing the exploration ability of the algorithm against the number of FEs consumed, only one leader is randomly selected to perform the OLOBL strategy in each iteration, while the other leaders perform only the LOBL tactic; the better of each leader and its reverse (or partial reverse) individual enters the next iteration.
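The construction above can be sketched as follows. The parity-based generator of the two-level orthogonal table is a standard textbook construction for M = 2^m runs, not necessarily the exact table used by the authors; the function names are ours:

```python
import numpy as np

def orthogonal_table(D):
    """Two-level orthogonal table with M = 2^ceil(log2(D+1)) rows (Eq. (13))
    and D columns, entries in {1, 2}. Built from the standard L_M(2^(M-1))
    construction: entry (i, j) is the parity of the bitwise AND of i and j,
    so the first row (i = 0) is all 1s, matching the text."""
    M = 2 ** int(np.ceil(np.log2(D + 1)))
    table = np.zeros((M, D), dtype=int)
    for i in range(M):
        for j in range(1, D + 1):
            table[i, j - 1] = bin(i & j).count("1") % 2 + 1
    return table

def partial_reverse_solutions(x, x_rev, table):
    """Combine current individual x and its opposite x_rev dimension-wise:
    level 1 -> take x, level 2 -> take x_rev (cf. Fig. 8)."""
    x, x_rev = np.asarray(x, float), np.asarray(x_rev, float)
    return np.where(table == 1, x, x_rev)
```

For the 7-dimensional example, `orthogonal_table(7)` yields the 8-row, 7-column table (M = 8 from Eq. (13)), and the first row reproduces the current individual unchanged.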

3.4 Ranking-based dynamic learning strategy

According to Eq. (3), in the basic SSA each follower learns from the previous individual while retaining its own characteristics to complete the position update. This update mechanism is relatively simple: once the leaders fall into a local optimum, the followers must follow them into the local optimum region. To enhance the flexibility of the followers' position update mechanism, this paper proposes a ranking-based dynamic learning strategy. First, the rankings of the current search agent and the previous search agent in the salp swarm are evaluated based on fitness value; then an influence weight is designed according to the ranking and applied to the corresponding individual.

After introducing the ranking-based dynamic learning strategy, Eq. (3) can be modified as:

$$ {X}_j^i=\frac{1}{2}\left(\frac{\mathit{\operatorname{ran}}{k}_{i-1}}{\mathit{\operatorname{ran}}{k}_i+\mathit{\operatorname{ran}}{k}_{i-1}}{X}_j^i+\frac{\mathit{\operatorname{ran}}{k}_i}{\mathit{\operatorname{ran}}{k}_i+\mathit{\operatorname{ran}}{k}_{i-1}}{X}_j^{i-1}\right) $$
(14)

where ranki and ranki-1 represent the fitness-based rankings of the two individuals, respectively.

The ingenuity of Eq. (14) is that, for the current individual i and the previous individual i-1, the individual with better fitness ranks higher but has the smaller ranking value; therefore, the ranking value of the worse individual is used as the coefficient of the better individual, so that the better individual receives the larger influence weight, and vice versa.
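Read literally, Eq. (14) can be coded as below (the function name is ours); note how the worse individual's ranking value weights the better individual:

```python
def follower_update(x_cur, x_prev, rank_cur, rank_prev):
    """Ranking-based dynamic learning update (Eq. (14)), taken literally:
    the ranking value of one salp weights the *other* salp's position,
    so the better-ranked (smaller rank value) individual contributes more."""
    s = rank_cur + rank_prev
    return 0.5 * (rank_prev / s * x_cur + rank_cur / s * x_prev)
```

For example, with rank_cur = 1 (better) and rank_prev = 3, the current position receives weight 3/4 and the previous position weight 1/4 inside the parentheses.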

3.5 Proposed OOSSA

To enhance the comprehensive performance of SSA, we analyze the drawbacks of the method, propose three adjustment mechanisms, and embed the improved search mechanisms into the basic SSA to obtain a novel and efficient SSA-based algorithm called OOSSA. The detailed process of OOSSA is as follows:

  • Step 1 Initialize the parameters of the OOSSA method, including population size N, maximal number of FEs Fmax, problem dimension D, and the lower and upper boundaries of the ith-dimensional space, lbi and ubi. Randomly generate N individuals in the search space.

  • Step 2 Evaluate the initial population based on the fitness value, and the position of the search agent with the best fitness is set as the current food source.

  • Step 3 Calculate the number of leaders and followers according to Eq. (8), and randomly select a leader and mark it as OLOBL-Leader.

  • Step 4 Judge the role of each salp. If the salp is a leader and is not OLOBL-Leader, go to Step 5; if the salp is OLOBL-Leader, go to Step 6; if the salp is a follower, go to Step 7.

  • Step 5 Use Eq. (1) to amend the state of the leading salp to generate candidate solution 1. The current leader executes the LOBL strategy according to Eq. (12) to generate candidate solution 2, and the search agent with better fitness value is chosen as the new solution among candidate solution 1 and candidate solution 2.

  • Step 6 Leader OLOBL-Leader executes the OLOBL strategy.

  • Step 7 Update the state of the follower by Eq. (14).

  • Step 8 The one with the better fitness value among the food source and the current optimal individual is set as the food source.

  • Step 9 If the number of FEs does not exceed Fmax, return to Step 3. Otherwise, output the food source position.

The flowchart of OOSSA is illustrated in Fig. 9.
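Steps 1-9 can be condensed into a minimal Python sketch. This is not the authors' implementation: the leader fraction is fixed at N/2 instead of Eq. (8), the follower ranks are index-based stand-ins for the fitness-based ranking, OLOBL on the chosen leader is approximated by plain LOBL, and c1 follows the canonical SSA schedule c1 = 2*exp(-(4t/T)^2):

```python
import numpy as np

def oossa_sketch(f, lb, ub, N=30, Fmax=15000, k=10000.0, seed=0):
    """Hedged sketch of the OOSSA flow (Steps 1-9) on a minimization task."""
    rng = np.random.default_rng(seed)
    D = lb.size
    X = rng.uniform(lb, ub, (N, D))              # Step 1: random initial population
    fit = np.apply_along_axis(f, 1, X)
    fes = N
    best = X[fit.argmin()].copy()                # Step 2: food source
    fbest = float(fit.min())
    while fes < Fmax:                            # Step 9: FE budget
        n_lead = N // 2                          # Step 3 (fixed split, not Eq. (8))
        c1 = 2 * np.exp(-(4 * fes / Fmax) ** 2)  # assumed canonical SSA schedule
        for i in range(N):
            if i < n_lead:                       # Steps 5/6: leader move + LOBL (Eq. (12))
                step = c1 * ((ub - lb) * rng.random(D) + lb)
                cand1 = np.clip(np.where(rng.random(D) < 0.5,
                                         best + step, best - step), lb, ub)
                mid = (lb + ub) / 2
                cand2 = mid + mid / k - cand1 / k
                f1, f2 = f(cand1), f(cand2)
                fes += 2
                X[i], fit[i] = (cand1, f1) if f1 < f2 else (cand2, f2)
            else:                                # Step 7: follower, Eq. (14) with index ranks
                r_cur, r_prev = i + 1, i
                s = r_cur + r_prev
                X[i] = 0.5 * (r_prev / s * X[i] + r_cur / s * X[i - 1])
                fit[i] = f(X[i])
                fes += 1
        if fit.min() < fbest:                    # Step 8: update food source
            fbest = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, fbest
```

On a simple sphere function the sketch converges quickly, but it should be read as an illustration of the control flow rather than a faithful reproduction of OOSSA.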

While keeping the basic framework and overall flow of the original SSA algorithm unchanged, OOSSA introduces the LOBL strategy in the leader position update phase, randomly selects one individual to perform the OLOBL strategy, and modifies the follower position update equation. Suppose the dimension of the problem is D, the population size is N, and the maximum number of iterations is T. The computational complexity of population position updating is O(TND), and the computational complexity of the leaders executing LOBL or OLOBL is O(TD^2). Therefore, the overall computational complexity of OOSSA is O(T(ND + D^2)); when D does not exceed N, this is O(TND), the same as that of the original SSA. This shows that the algorithm is improved without a significant increase in computational overhead.

3.6 Justification of OOSSA

In this subsection, the proposed OOSSA algorithm is justified in a metaphor-free way. A proportion of individuals in the population search the solution space according to Eq. (1), and this search equation can be rewritten as

$$ {X}_j^1=\left\{\begin{array}{ll}{F}_j+{V}_j, & {c}_3\ge 0.5\\ {}{F}_j-{V}_j, & {c}_3<0.5\end{array}\right. $$
(15)

To calculate Vj, the following equation is employed

$$ {V}_j={c}_1\left(\left(u{b}_j-l{b}_j\right){c}_2+l{b}_j\right) $$
(16)

where c2 is a random number in [0,1], so c1((ubj-lbj)c2+lbj) can be rewritten as c1((ubj-lbj)rand(0,1)+lbj). This expression is the same as the one used to allocate the initial positions of individuals, which means that its outcome is a randomly generated position in the jth-dimensional search space; it can also be interpreted as a step size. c1 is an important parameter that decreases adaptively during the iterations and controls the magnitude of the step size. Therefore, according to Eq. (16), Vj can be regarded as a step size that decreases adaptively over the course of iterations and has a stochastic property.

In Eq. (15), Fj is the current optimal solution, which searches the solution space by moving gradually in steps of Vj in the hope of finding more promising regions. The parameter c3 determines the movement direction of the current optimal solution. Since c3 is a random number in [0,1], the current optimal solution searches in the positive or negative direction with equal probability, which ensures that the whole solution space is adequately covered.

Based on the above analysis, from Eq. (15), for the jth dimensional search space, Fj is the current optimal position and Vj is the step size. In the early iteration, larger c1 values generate larger step sizes Vj, which can help the current optimal solution to discover new promising regions in the search space by moving significantly, thus identifying the region where the global optimal solution is located. In the later iteration, smaller c1 values produce smaller step sizes Vj, which can help the current optimal individual to discover the global optimal solution by exploiting the promising regions identified in the early iteration. In summary, Eq. (15) is responsible for thoroughly searching the entire solution space in the early evolution to pinpoint the potentially global optimal region and finely exploiting this region in the later evolution to find the optimal solution.
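To make this analysis concrete, the step of Eq. (16) can be written out directly; the decay schedule c1 = 2*exp(-(4t/T)^2) is the one from the canonical SSA and is assumed here, and the function name is ours:

```python
import numpy as np

def leader_step(lb, ub, t, T, rng):
    """V_j of Eq. (16): a random step whose scale is governed by c1,
    which decreases adaptively over the iterations."""
    c1 = 2 * np.exp(-(4 * t / T) ** 2)   # large early, vanishing late
    c2 = rng.random(lb.shape)            # c2 ~ U[0, 1], per dimension
    return c1 * ((ub - lb) * c2 + lb)
```

Early in the run (t near 0) the step can span essentially the whole range; late in the run (t near T) it is vanishingly small, matching the exploration-to-exploitation transition described above.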

After an individual updates its position according to Eq. (15), it jumps to an OLOBL individual based on the OLOBL strategy, and this operator is analyzed below.

The literature [98] demonstrates that the reverse solution has a higher probability of reaching the global optimum than the current solution, and the literature [104] further demonstrates that the reverse solution outperforms the current solution only in certain dimensions, i.e., opposition-based learning may cause the dimension degradation problem. The OED technique can solve this problem by discovering and combining the dominant dimensions of the current and reverse solutions, which is the OOBL strategy mentioned above. The OLOBL strategy proposed in this paper is a generalized version of OOBL, i.e., OOBL is a special case of OLOBL, so OLOBL has the same properties as OOBL and is more effective. Consequently, an individual can improve the convergence speed of the algorithm by executing the OLOBL strategy to produce an OLOBL individual that is closer to the global optimum. In addition, if an individual is trapped in a local optimum, executing the OLOBL strategy allows it to jump out of this trap and move to a more favorable area. In summary, OLOBL can improve the convergence speed of the algorithm and enhance its ability to escape local optima.

The remaining individuals in the population update their position according to Eq. (14), which will be analyzed below to reveal the search properties it implies.

As Eq. (14) is a modification of Eq. (3), we first analyze the search properties implied by Eq. (3). This search equation is derived from Newton's law of motion, based on which the distance moved by an individual is calculated as follows.

$$ D(t)=\frac{1}{2}a\varDelta {t}^2+{v}_0\varDelta t,\kern1em i\ge 2 $$
(17)

In optimization, time corresponds to iterations and Δt represents the difference between successive generations, so Δt=1; v0 is the initial velocity, taken as v0=0; a is the acceleration, calculated according to the following equation

$$ a=\frac{v_{final}-{v}_0}{\varDelta t} $$
(18)

When the current individual moves to the position of its previous individual, its velocity is calculated as follows.

$$ {v}_{final}=\frac{x_j^{i-1}\left(t-1\right)-{x}_j^i\left(t-1\right)}{\varDelta t} $$
(19)

where \( {x}_j^{i-1} \) and \( {x}_j^i \) represent the positions of the previous individual and the current individual in the jth dimension at the previous time step (t-1), respectively. Therefore, the individual moves to the next position according to the following equation

$$ {\displaystyle \begin{array}{c}{x}_j^i(t)={x}_j^i\left(t-1\right)+D(t)\\ {}={x}_j^i\left(t-1\right)+\frac{1}{2}\left({x}_j^{i-1}\left(t-1\right)-{x}_j^i\left(t-1\right)\right)\end{array}} $$
(20)

The search equation used by the algorithm can be obtained by simplifying Eq. (20), which is Eq. (3) as presented previously.

Based on the above analysis, the individual moves to the next position according to Newton's laws of motion, and although this gradual movement can search the solution space, this pattern is too rigid. Therefore, we improve it by using the fitness value-based ranking of two individuals in the population as a weighting factor to dynamically adjust the movement direction and step size of the current individual so that the next position is closer to the better of the two search agents, which is the proposed search Eq. (14). In the experimental section, we will verify the validity of the proposed search model by rigorous experimentation on benchmark functions.
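The chain of Eqs. (17)-(20) can be followed literally in code (the function name is ours); simplifying shows the result is just the midpoint of the two salps, i.e. Eq. (3):

```python
def newton_follower_step(x_cur, x_prev):
    """Follow Eqs. (17)-(20) literally with dt = 1 and v0 = 0:
    v_final = x_prev - x_cur (Eq. (19)), a = (v_final - v0)/dt (Eq. (18)),
    D = a*dt^2/2 + v0*dt (Eq. (17)), new position = x_cur + D (Eq. (20))."""
    dt, v0 = 1.0, 0.0
    v_final = (x_prev - x_cur) / dt
    a = (v_final - v0) / dt
    D = 0.5 * a * dt**2 + v0 * dt
    return x_cur + D

# algebraically this equals (x_cur + x_prev) / 2, which is Eq. (3)
```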

Next, we will present a theoretical convergence proof of OOSSA.

OOSSA is a swarm intelligence optimization algorithm, so the convergence property of OOSSA is analyzed using the general approach of analyzing the theoretical convergence of population-based techniques.

Theorem 1: If the basic SSA is convergent, the developed OOSSA is also convergent.

Proof of Theorem 1: Let X(t) be the current solution at generation t, which jumps to \( \overset{\sim }{\mathrm{X}}(t) \) through OLOBL, and their jth dimension values are Xj(t) and \( {\overset{\sim }{\mathrm{X}}}_j(t) \), respectively; the global optimal is X*; the boundary of the search region in jth dimension is [aj,bj].

By the hypothesis of Theorem 1 (the basic SSA converges), any individual X(t) in generation t satisfies

$$ \underset{t\to \infty }{\lim }{\mathrm{X}}_j(t)={\mathrm{X}}_j^{\ast } $$
(21)

Since \( {a}_j(t)={\min}_i\left({x}_{i,j}(t)\right) \) and \( {b}_j(t)={\max}_i\left({x}_{i,j}(t)\right) \), it follows that

$$ \underset{t\to \infty }{\lim }{a}_j(t)=\underset{t\to \infty }{\lim }{b}_j(t)={\mathrm{X}}_j^{\ast } $$
(22)

For the OLOBL individual \( \overset{\sim }{\mathrm{X}}(t) \),

$$ {\overset{\sim }{\mathrm{X}}}_j(t)=\frac{\left({a}_j(t)+{b}_j(t)\right)}{2}+\frac{\left({a}_j(t)+{b}_j(t)\right)}{2k}-\frac{{\mathrm{X}}_j(t)}{k} $$
(23)

When t → ∞,

$$ {\displaystyle \begin{array}{c}\underset{t\to \infty }{\lim }{\overset{\sim }{\mathrm{X}}}_j(t)=\underset{t\to \infty }{\lim}\left(\frac{\left({a}_j(t)+{b}_j(t)\right)}{2}+\frac{\left({a}_j(t)+{b}_j(t)\right)}{(2k)}-\frac{{\mathrm{X}}_j(t)}{k}\right)\\ {}=\underset{t\to \infty }{\lim}\frac{\left({a}_j(t)+{b}_j(t)\right)}{2}+\underset{t\to \infty }{\lim}\frac{\left({a}_j(t)+{b}_j(t)\right)}{(2k)}-\underset{t\to \infty }{\lim}\frac{\left({\mathrm{X}}_j(t)\right)}{k}\\ {}=\frac{\left({\mathrm{X}}_j^{\ast }+{\mathrm{X}}_j^{\ast}\right)}{2}+\frac{\left({\mathrm{X}}_j^{\ast }+{\mathrm{X}}_j^{\ast}\right)}{2k}-\frac{{\mathrm{X}}_j^{\ast }}{k}={\mathrm{X}}_j^{\ast}\end{array}} $$
(24)

From Eq. (24), when Xj(t) converges to \( {\mathrm{X}}_j^{\ast } \), \( {\overset{\sim }{\mathrm{X}}}_j(t) \) also converges to \( {\mathrm{X}}_j^{\ast } \). Hence, if the basic SSA algorithm can converge to the global optimum X*, the OOSSA algorithm is also convergent. It should be noted that the proof does not guarantee that the algorithm converges to the global optimum.
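The limit argument of Eq. (24) can be checked numerically with a small sketch (symbols as in the proof; the contraction pattern is our choice): as the bounds and the individual contract toward X*, the LOBL image of Eq. (23) follows it.

```python
def lobl_image(x, a, b, k):
    """Eq. (23): the LOBL image of x for bounds [a, b]."""
    return (a + b) / 2 + (a + b) / (2 * k) - x / k

x_star, k = 3.0, 10000.0
# contract a_j(t), b_j(t), X_j(t) toward X* = 3 and measure the residual gap
gaps = [abs(lobl_image(x_star + e, x_star - e, x_star + e, k) - x_star)
        for e in (1.0, 1e-3, 1e-6)]
```

Here the gap shrinks like e/k as the contraction radius e goes to zero, in line with the limit in Eq. (24).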

4 Simulation results and discussions

4.1 Benchmark test functions

To authenticate the performance of OOSSA on global optimization problems, a set of 26 widely used benchmark test functions is employed in the experiment, the details of which are given in Table 1. These benchmark problems can be divided into three categories: unimodal, multimodal, and fixed-dimension multimodal. The unimodal problems (f1~f9) have no local optima, only one global optimum, and are used to assess the local exploitation ability of stochastic algorithms. The multimodal problems (f10~f19) have multiple local optima and are suitable for revealing how well stochastic algorithms balance global exploration and local exploitation. The fixed-dimension multimodal problems (f20~f26) contain a large number of local optima and more than one global optimum, making them very appropriate for verifying the local-optimum avoidance and stability of stochastic algorithms. The search spaces of some benchmark functions are illustrated in Fig. 10. The proposed algorithm is compared with a variety of algorithms, including SSA variants and other state-of-the-art swarm intelligence based methods; all comparison algorithms are listed in Table 2.

Table 1 26 widely used benchmark test functions
Table 2 Comparison algorithms

All algorithms are coded on MATLAB R2014b, and all of the simulation experiments are performed on a computer with Intel(R) Core(TM) i7-7700 CPU(3.60GHz) and 8.00 GB RAM.

4.2 Comparison with SSA and improved SSA

To demonstrate the performance of the proposed OOSSA, we tested it on the 26 widely used benchmarks reported in Table 1. The OOSSA algorithm was compared with thirteen other SSA-based methods: the original SSA algorithm [62], the enhanced SSA algorithm (ESSA) [81], the lifetime scheme-based SSA algorithm (LSSA) [105], the multi-subpopulation-based SSA with Gaussian mutation mechanism (MSNSSA) [106], the chaotic SSA algorithm (CSSA) [80], the self-adaptive SSA algorithm (ASSA) [107], the SSA algorithm based on PSO (SSAPSO) [88], the improved SSA algorithm based on opposition-based learning (ISSA) [108], the Gaussian-SSA algorithm (GSSA) [109], the enhanced opposition-based SSA algorithm (OBSSA) [110], the adaptive SSA algorithm with non-linear coefficient decreasing inertia weight (ASSO) [111], the SSA algorithm with random replacement tactic and double adaptive weighting mechanism (RDSSA) [112], and the hybrid enhanced whale optimization SSA algorithm (IWOSSA) [113]. For fair comparison, the population size N for all algorithms was set to 30, the maximal number of FEs to 15000, and the dimension to 100. The other parameter settings of SSA, ESSA, LSSA, MSNSSA, CSSA, ASSA, SSAPSO, ISSA, GSSA, OBSSA, ASSO, RDSSA, and IWOSSA are adopted from the original papers. In the proposed OOSSA algorithm, k=10000. Each approach runs 30 times independently on each benchmark problem, and the average and standard deviation of the objective function values found by the fourteen algorithms are used as performance metrics. At the same time, the Friedman rank (f-rank) test [114] was used to test the statistical significance of OOSSA. The simulation results on benchmark functions f1 to f19 with dimension 100 are shown in Table 3, and the statistical results on the fixed-dimension problems (f20~f26) are shown in Table 5.

Table 3 Comparisons of fourteen algorithms on 19 test functions with 100 dimensions

The statistical results in Table 3 show that, except for functions f6, f9, f11, and f14, the OOSSA algorithm converges to the global optimum on the other 15 test cases. Compared with ESSA, OOSSA finds similar results on six test functions and better results on 13; for functions f10, f12, f16, f17, and f18, both algorithms reach the theoretical optimum. Compared with MSNSSA, OOSSA provides similar and better results on four and 15 test functions, respectively. OOSSA significantly outperforms SSA, LSSA, CSSA, ASSA, SSAPSO, ISSA, GSSA, and IWOSSA in terms of solution accuracy on all test problems. With respect to OBSSA, OOSSA obtains better and similar values on 14 and four benchmarks, respectively; OBSSA performs better on f6, but the gap between the two approaches is negligible. Compared with ASSO, OOSSA gets similar and better results on four and 15 test cases, respectively. According to the comparison between OOSSA and RDSSA, the developed approach is better than its rival on 13 problems; of the remaining six functions, the two methods show similar performance on five, while RDSSA achieves the better value on the last one. Additionally, according to the average ranking values of all optimizers from the Friedman test, reported at the bottom of Table 3, OOSSA obtains the top rank, followed by RDSSA, ESSA, OBSSA, ASSO, MSNSSA, GSSA, IWOSSA, ASSA, LSSA, ISSA, CSSA, and SSA. In other words, OOSSA is the best optimizer among all its peers.

Moreover, the Wilcoxon signed-rank test (significance level 0.05) [114] is used to verify that the proposed method has significant advantages over the other competitors. The p-values calculated in the Wilcoxon signed-rank test between OOSSA and each compared algorithm on all benchmark functions with 100 dimensions are given in Table 4. For example, if the optimal algorithm is OOSSA, comparisons are made between OOSSA and SSA, OOSSA and ESSA, OOSSA and LSSA, and so on. Here, N/A stands for not available, meaning that the corresponding method performs best on that test function and there are no statistical data to compare with itself. In the statistical table, the symbols “-”, “+”, and “=” indicate that the performance of the corresponding approach is worse than, better than, and similar to that of OOSSA, respectively. In the Wilcoxon signed-rank test, when the p-value is less than 0.05 the null hypothesis is rejected, that is, there is a significant difference between the two methods [114]. Note that p-values greater than 0.05 are shown in bold.
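For reference, the signed-rank test reported here can be sketched with a normal-approximation p-value; this is a simplified illustration (no tie or zero-difference correction), and the exact procedure behind the tabulated p-values may differ:

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank_p(a, b):
    """Two-sided Wilcoxon signed-rank test between paired samples a and b,
    using the large-sample normal approximation. Ties in |d| are ranked in
    index order (no average-rank correction): a minimal sketch only."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]                                   # drop zero differences
    n = d.size
    ranks = np.abs(d).argsort(kind="stable").argsort() + 1
    w_plus = ranks[d > 0].sum()                     # sum of positive ranks
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

With 30 paired runs, a consistent one-sided difference yields a p-value far below 0.05, while symmetric differences do not.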

Table 4 Statistical conclusions based on Wilcoxon signed-rank test on 100-dimensional benchmark problems

From Table 4, the p-values are greater than 0.05 in the following cases: OOSSA versus ESSA on f6, OOSSA versus OBSSA on f6, and OOSSA versus ASSO on f6. Apart from these three comparisons, the p-values obtained in all other cases are less than 0.05. This means that the overall performance of OOSSA is clearly superior to that of the other rivals, that is, the superiority of OOSSA is statistically significant. Based on the above discussion, compared with the basic SSA algorithm and the other improved versions of SSA, the overall performance of OOSSA is highly competitive.

Another issue of interest is the performance of OOSSA on fixed-dimension problems. The comparison results between OOSSA and the thirteen SSA variants outlined in Table 5 address this concern. Because of the low dimensions of these test cases, the solutions obtained by the fourteen approaches on all functions basically reach the theoretical optimum. In terms of average values, the results of OOSSA on most functions are very close to the global optimum. With respect to standard deviation, OOSSA also obtains satisfactory results in most cases, indicating that the method has good stability. According to the average ranking values of OOSSA and the involved SSA variants on the seven test functions provided by the Friedman test, OOSSA ranks second, behind CSSA and followed by OBSSA, GSSA, IWOSSA, SSAPSO, ISSA, ESSA, ASSA, SSA, RDSSA, MSNSSA, LSSA, and ASSO. This further indicates that OOSSA balances diversification and intensification well and is also highly competitive in terms of stability.

Table 5 Comparisons of fourteen algorithms on 7 fixed-dimension test functions

4.3 The influence of improving mechanisms on SSA

In this section, we investigate the effectiveness of the different mechanisms of the proposed OOSSA algorithm. The three components of OOSSA that differ from SSA are: the adaptively changing number of leaders, the orthogonal lens opposition-based learning (OLOBL) strategy, and the dynamic learning (DL) mechanism. To verify the validity of the adjustment strategies, OLOBL and DL are embedded into the basic SSA algorithm separately, yielding the OLOBL-SSA and DL-SSA algorithms. In addition, to compare the effectiveness of the LOBL and OBL strategies, orthogonal opposition-based learning using OBL is embedded into the basic SSA to obtain OOBL-SSA. It should be pointed out that OLOBL-SSA, DL-SSA, and OOBL-SSA all adopt the multi-leader mechanism. All the unimodal and multimodal problems (f1~f19) in Table 1 with 100 dimensions are selected for simulation experiments comparing OOSSA, OLOBL-SSA, DL-SSA, and OOBL-SSA. The parameter settings are the same as those in the previous section. Each algorithm runs 30 times independently on each benchmark problem, and the average optimal objective function value, standard deviation, and Friedman ranking results are recorded, as shown in Table 6.

Table 6 Comparisons of OLOBL-SSA, DL-SSA, OOBL-SSA, and OOSSA on 19 test functions with 100 dimensions

According to the statistical results in Table 6, the convergence accuracy and stability of the OLOBL-SSA, DL-SSA, and OOBL-SSA algorithms, each embedded with a single component, are better than those of the basic SSA algorithm on all functions, which proves the effectiveness of the different mechanisms of the proposed method. Compared with DL-SSA, OOSSA achieved superior results on 16 test functions; on the remaining functions f10, f12, and f16, the two algorithms achieved similar and satisfactory results. Compared with OOBL-SSA, OOSSA achieved similar and better performance on four and 15 test functions, respectively. The OLOBL-SSA algorithm, like OOSSA, can find the theoretical optimum on most functions, but on the test functions f6, f9, and f14 the convergence accuracy and stability of OOSSA are clearly better than those of OLOBL-SSA; on f9 and f14 in particular, OOSSA shows significant advantages. Therefore, the two components each have a certain effect, and embedding both into the basic SSA algorithm further improves the overall performance. In addition, according to Table 6, OLOBL-SSA shows significant advantages over OOBL-SSA on 15 benchmark functions, while the two algorithms perform identically on f10, f12, f16, and f18. This shows that LOBL in orthogonal opposition-based learning is more helpful than OBL for the algorithm to jump out of local optima. According to the Friedman ranking results, OOSSA ranks first and OLOBL-SSA second, followed by DL-SSA, OOBL-SSA, and SSA, which further verifies the effectiveness of the components; the performance of OOSSA is clearly better than that of the SSA variants with a single component.

4.4 High-dimensional performance analysis

To demonstrate the feasibility of applying OOSSA to large-scale problems, the 19 test problems listed in Table 1 (f1~f19) are chosen for simulation experiments, with the function dimension set to 10,000. The parameter settings remain the same as in the previous experiment. Table 7 shows the best solution, worst solution, average, and standard deviation obtained by OOSSA over 30 independent runs on the 19 large-scale problems. In addition, the success rate (SR%) metric is used to evaluate the efficiency of the proposed methodology on large-scale optimization problems. The criterion for judging whether a run is successful is as follows:

$$ \left\{\begin{array}{ll}\frac{\mid {f}_A-{f}_T\mid }{\mid {f}_T\mid }<{10}^{-5}, & {f}_T\ne 0\\ {}\mid {f}_A-{f}_T\mid <{10}^{-5}, & {f}_T=0\end{array}\right. $$
(25)
Table 7 Results obtained by OOSSA on 10,000-dimensional functions

where fA is the result obtained by the algorithm on the test function, and fT is the theoretical optimal value of the test function.
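The success criterion of Eq. (25) and the resulting SR% translate directly into code (function names are ours):

```python
def is_success(f_a, f_t, tol=1e-5):
    """Success criterion of Eq. (25): relative error when f_T != 0,
    absolute error when f_T = 0."""
    if f_t != 0:
        return abs(f_a - f_t) / abs(f_t) < tol
    return abs(f_a - f_t) < tol

def success_rate(results, f_t):
    """SR% over a list of results from independent runs."""
    hits = sum(is_success(r, f_t) for r in results)
    return 100.0 * hits / len(results)
```

For example, with a theoretical optimum of 0, runs ending at 0.0 and 1e-6 count as successes while a run ending at 1.0 does not.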

According to the statistical results in Table 7, for large-scale numerical optimization problems OOSSA converges to the theoretical optimum on 16 test functions, the exceptions being f6, f9, f11, and f14. The standard deviations show that OOSSA retains superior stability while maintaining high solution accuracy. For functions f6, f9, f11, and f14, although OOSSA fails to find the theoretical optimum, the results obtained are still satisfactory. Compared with the results obtained on the 100-dimensional functions, the result on f6 is of the same order of magnitude; the result on f9 is slightly inferior, but the convergence accuracy is still considerable; on f11 the same result is obtained; and on f14 a similar result is obtained. This fully demonstrates that the OOSSA algorithm is strongly robust in dealing with large-scale optimization problems. In terms of success rate, OOSSA achieves 100% on 17 test functions, with 0% on f9 and 20% on f6, which verifies that OOSSA is an effective tool for solving large-scale optimization problems.

4.5 Comparison with other swarm-based intelligent algorithms

To further test the overall performance of OOSSA, it is compared with twelve state-of-the-art algorithms: the Marine Predators Algorithm (MPA) [116], Equilibrium Optimizer (EO) [117], Tunicate Swarm Algorithm (TSA) [129], Selective Opposition Based Grey Wolf Optimization (SOGWO) [130], Improved Grey Wolf Algorithm (IGWO) [131], Henry Gas Solubility (HGS) Optimization [132], Memetic Harris Hawk Optimization (EESHHO) [133], Arithmetic Optimization Algorithm (ArOA) [134], Archimedes Optimization Algorithm (AOA) [135], Moth-Flame Optimization Algorithm Based on Diversity and Mutation Mechanism (DMMFO) [136], Moth Flame Optimizer with Double Adaptive Weights (WEMFO) [137], and Opposition-based Learning Grey Wolf Optimizer (OGWO) [138]. These cutting-edge algorithms have been verified to have good optimization performance and have been successfully applied to a variety of optimization problems; comparing against them can therefore effectively authenticate the effectiveness and superiority of OOSSA. In this section, the test functions f1~f19 in Table 1 are selected for the simulation experiment, with the function dimension set to 100. To ensure fairness, the parameters of the compared algorithms are consistent with the original papers, which ensures that each method performs at its full potential. Table 8 shows the average values and standard deviations obtained from 30 independent runs by the thirteen algorithms on the 19 test functions, and the Friedman test is again employed to double-check the performance of OOSSA. In addition, the Wilcoxon signed-rank test at the 0.05 significance level is applied to analyze the performance gap between OOSSA and its peers.

Table 8 Comparisons of thirteen algorithms on 19 test functions with 100 dimensions

According to the comparison results in Table 8 between OOSSA and the renowned algorithms applied, OOSSA outperforms TSA, SOGWO, IGWO, DMMFO, and OGWO on all test cases. With respect to MPA, EO, AOA, and WEMFO, OOSSA provides better results on 15, 17, 16, and 12 problems, respectively, and similar performance on the remaining cases. Compared with HGS, OOSSA achieves better and similar values on 12 and five benchmarks, respectively; on the other two functions, f9 and f14, HGS yields slightly superior results. With regard to ArOA, OOSSA obtains better and comparable results on 15 and three problems, respectively, while ArOA gives the better result on the remaining function f6. From the comparison between OOSSA and EESHHO on the 19 benchmarks, the proposed methodology obtains better and similar results on seven and 11 test functions, respectively; on the remaining benchmark function f9, EESHHO performs better. Additionally, according to the Friedman test ranking of all methods, OOSSA obtains the top rank, followed by EESHHO, HGS, AOA, MPA, EO, ArOA, WEMFO, OGWO, SOGWO, IGWO, TSA, and DMMFO, which indicates that OOSSA has the most outstanding performance among all competitors.

Table 9 shows the p-values of the Wilcoxon test obtained for the thirteen algorithms on the 19 classical functions with 100 dimensions. As can be seen from the statistical results, the p-values are greater than 0.05 in the following cases: OOSSA versus HGS on f17; OOSSA versus EO on f9, f16, and f17; OOSSA versus EESHHO on f7 and f19; OOSSA versus ArOA on f6; and OOSSA versus AOA on f16. All other p-values are less than 0.05, which implies that the overall performance of the OOSSA algorithm has a significant advantage over the other twelve well-performing cutting-edge optimization algorithms.

Table 9 Statistical conclusions based on Wilcoxon signed-rank test on 100-dimensional benchmark problems

4.6 Convergence analysis

To test the convergence performance of the proposed OOSSA algorithm, we select representative 100-dimensional benchmark functions from Table 1. Fig. 11 shows the convergence curves of OOSSA, ESSA, LSSA, MSNSSA, CSSA, ASSA, SSAPSO, ISSA, GSSA, OBSSA, ASSO, RDSSA, IWOSSA, and SSA on these test cases, and Fig. 12 shows those of OOSSA, TSA, SOGWO, MPA, HGS, EO, EESHHO, ArOA, AOA, IGWO, WEMFO, DMMFO, and OGWO on the same benchmarks. The convergence plots help to analyze the convergence trend of OOSSA in a more intuitive way.

It can be clearly seen from Fig. 11 that OOSSA achieves higher solution accuracy and faster convergence than the other thirteen SSA-based competitors on all functions. It is worth noting that on test function f11, although ESSA obtains the same solution accuracy as OOSSA, it converges significantly more slowly. On the other hand, the curves presented in Fig. 12 show that the convergence trend of OOSSA outperforms all involved peers in most cases. On f11, OOSSA reaches the same solution accuracy as EESHHO and HGS, but the suggested method exhibits a substantial advantage in convergence speed. In addition, HGS achieves better solution accuracy on f14: OOSSA shows a competitive convergence rate in the early stages but falls into a local optimum in the later iterations. Based on the above assessment, we can assert that OOSSA has an outstanding convergence trend, facilitated by the fact that the developed approach maintains a delicate balance between exploratory and exploitative inclinations.

5 OOSSA for engineering design problems

To test the effectiveness of OOSSA in solving practical problems, we applied it to three classical engineering design problems: pressure vessel design, I-beam design, and cantilever beam design. Even though these problems have several constraints, OOSSA is still expected to handle them and obtain the optimal solution. The experimental data of the compared algorithms are taken from the original literature.

5.1 Pressure vessel design

The objective of the pressure vessel design problem is to minimize the fabrication cost, which includes the costs of welding, materials, and forming. As shown in Fig. 13, the problem is solved by determining the optimal values of four design variables: thickness of the head Th, thickness of the shell Ts, inner radius R, and length of the cylindrical shell L.

The mathematical formulations of this problem are as follows:

Consider \( \overrightarrow{x}=\left[{x}_1,{x}_2,{x}_3,{x}_4\right]=\left[{T}_s,{T}_h,R,L\right] \).

Minimize \( f\left(\overrightarrow{x}\right)=0.6224{x}_1{x}_3{x}_4+1.7781{x}_2{x}_3^2+3.1661{x}_1^2{x}_4+19.84{x}_1^2{x}_3 \).

Subject to

$$ {g}_1\left(\overrightarrow{x}\right)=-{x}_1+0.0193{x}_3\le 0, $$
$$ {g}_2\left(\overrightarrow{x}\right)=-{x}_2+0.00954{x}_3\le 0, $$
$$ {g}_3\left(\overrightarrow{x}\right)=-\pi {x}_3^2{x}_4-\frac{4}{3}\pi {x}_3^3+1296000\le 0, $$
$$ {g}_4\left(\overrightarrow{x}\right)={x}_4-240\le 0. $$

Variable range 0 ≤ x1, x2 ≤ 99, 10 ≤ x3, x4 ≤ 200.
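As an illustration, the standard formulation above can be sketched as follows. The candidate design is a near-optimal solution frequently reported in the literature for this problem, not a result produced by OOSSA here.

```python
import math

def pressure_vessel_cost(x):
    """Fabrication cost of the vessel for x = [Ts, Th, R, L]."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Values of g1..g4; each must be <= 0 for a feasible design."""
    x1, x2, x3, x4 = x
    return [-x1 + 0.0193 * x3,
            -x2 + 0.00954 * x3,
            -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000,
            x4 - 240]

# Near-optimal design from the literature; g1 and g3 are active,
# so their values sit at roughly zero up to rounding.
x = (0.8125, 0.4375, 42.0984, 176.6366)
cost = pressure_vessel_cost(x)
print(f"cost = {cost:.2f}")
print("constraints:", [round(g, 4) for g in pressure_vessel_constraints(x)])
```

The cost evaluates to about 6059.7, the value commonly cited as the best-known solution for this formulation.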

OOSSA is used to solve the pressure vessel design problem and ten other algorithms are selected for comparison, and the results obtained are listed in Table 10. It should be noted that the results of the compared algorithms are directly derived from the previous work. From Table 10, it can be seen that for the pressure vessel design problem, OOSSA obtains optimal results and has a greater advantage over the other compared algorithms.

Table 10 Comparison of optimization results for pressure vessel design

5.2 I-beam design problem

Another practical engineering problem used in this section is the I-beam design problem. The main objective is to design an I-shaped beam with minimum vertical deflection. As shown in Fig. 14, the problem involves four structural parameters: flange width (b), section height (h), and two thicknesses (tw and tf).

The mathematical model of this problem can be constructed as below:

(Consider) \( \overrightarrow{x}=\left[{x}_1,{x}_2,{x}_3,{x}_4\right]=\left[b,h,{t}_w,{t}_f\right]. \)

(Minimize) \( f\left(\overrightarrow{x}\right)=\frac{5000}{\frac{t_w{\left(h-2{t}_f\right)}^3}{12}+\frac{b{t}_f^3}{6}+2b{t}_f{\left(\frac{h-{t}_f}{2}\right)}^2}. \)

Subject to \( g\left(\overrightarrow{x}\right)=2b{t}_f+{t}_w\left(h-2{t}_f\right)\le 300 \) (the cross-sectional area limit).

Variable range 10 ≤ x1 ≤ 50, 10 ≤ x2 ≤ 80, 0.9 ≤ x3 ≤ 5, 0.9 ≤ x4 ≤ 5.
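A minimal sketch of this model, assuming the standard cross-sectional area cap of 300 used in the literature, is given below; the candidate is a near-optimal design reported for this problem, with the area constraint active.

```python
def ibeam_deflection(b, h, tw, tf):
    """Vertical deflection under the 5000-unit load (the objective)."""
    term = (tw * (h - 2 * tf)**3 / 12
            + b * tf**3 / 6
            + 2 * b * tf * ((h - tf) / 2)**2)
    return 5000.0 / term

def ibeam_area(b, h, tw, tf):
    """Cross-sectional area; the constraint requires it to be <= 300."""
    return 2 * b * tf + tw * (h - 2 * tf)

# Near-optimal design reported in the literature.
b, h, tw, tf = 50.0, 80.0, 0.9, 2.32179
d = ibeam_deflection(b, h, tw, tf)
a = ibeam_area(b, h, tw, tf)
print(f"deflection = {d:.6f}, area = {a:.2f}")
```

The deflection evaluates to roughly 0.01307, the order of magnitude reported as the best-known result for this problem.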

OOSSA was used to solve the I-beam design problem and compared with eight algorithms; the results are given in Table 11. For this problem, OOSSA obtained the same or similar results as BWOA and SSA and significantly outperformed the other six compared algorithms, which proves that OOSSA has a strong competitive edge on this problem.

Table 11 Comparison of optimization results for I-beam design problem

5.3 Cantilever beam design

This section tests the performance of OOSSA using the cantilever beam design problem whose main purpose is to minimize the weight of the cantilever beam. As illustrated in Fig. 15, the cantilever beam contains five hollow cells with square cross sections, each defined by a variable with a constant thickness, and thus contains a total of five structural parameters. The problem can be solved by determining the optimal values of the five structural parameters.

The mathematical expressions of the cantilever beam design problem are as follows:

(Minimize) \( f\left(\overrightarrow{x}\right)=0.0624\left({x}_1+{x}_2+{x}_3+{x}_4+{x}_5\right). \)

Subject to \( g\left(\overrightarrow{x}\right)=\frac{61}{x_1^3}+\frac{37}{x_2^3}+\frac{19}{x_3^3}+\frac{7}{x_4^3}+\frac{1}{x_5^3}\le 1. \)

Variable range 0.01 ≤ x1, x2, x3, x4, x5 ≤ 100.
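The model above can be sketched as follows, using the standard 0.0624 weight coefficient from the literature. The candidate is a near-optimal design reported for this problem, not an OOSSA output; at that point the constraint is active (g is approximately 1).

```python
def cantilever_weight(x):
    """Beam weight: 0.0624 times the sum of the five section variables."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Constraint value; feasible designs satisfy g(x) <= 1."""
    x1, x2, x3, x4, x5 = x
    return 61 / x1**3 + 37 / x2**3 + 19 / x3**3 + 7 / x4**3 + 1 / x5**3

# Near-optimal design from the literature.
x = (6.016, 5.309, 4.494, 3.502, 2.153)
w = cantilever_weight(x)
g = cantilever_constraint(x)
print(f"weight = {w:.5f}, g = {g:.5f}")
```

The weight evaluates to about 1.340, the value commonly cited as the best-known solution.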

OOSSA is used to solve the cantilever beam design problem and compared with other eleven advanced algorithms, and the obtained results are listed in Table 12. From the results listed in the table, OOSSA performs much better than other compared algorithms for the cantilever beam design problem, which further validates the superior performance of the proposed methodology for the practical engineering optimization problem.

Table 12 Comparison of results on cantilever beam design problem

5.4 OOSSA for parameter estimation of photovoltaic model

Solar energy is considered to be an environmentally friendly renewable energy source and has gained a lot of attention in recent years. By accurately predicting solar photovoltaic (PV) characteristics, the performance of PV cell systems can be optimized. Extracting PV parameters is a hot research problem in the field of solar PV systems. Among the models describing solar cell characteristics, the single diode model (SDM) is the most popular and its structure is shown in Fig. 16. To calculate the output current of the SDM, the following mathematical model is used [145].

$$ {\displaystyle \begin{array}{c}{I}_L={I}_{ph}-{I}_d-{I}_{sh}\\ {}={I}_{ph}-{I}_{sd}\cdotp \left[\exp \left(\frac{q\cdotp \left({V}_L+{R}_S\cdotp {I}_L\right)}{n\cdotp k\cdotp T}\right)-1\right]-\frac{V_L+{R}_S\cdotp {I}_L}{R_{sh}}\end{array}} $$
(26)

where IL represents the output current, Iph the photo-generated current, Id the diode current, Ish the shunt-resistor current, Isd the reverse saturation current of the diode, n the ideality factor, k the Boltzmann constant, q the electron charge, and Rs and Rsh the series and shunt resistances, respectively. In the SDM, five parameters (Iph, Isd, Rs, Rsh, and n) need to be identified.

By introducing root mean square error (RMSE), the PV parameter identification problem can be reformulated as an optimization problem and solved by the metaheuristic algorithm. The objective of the optimization is to minimize the error between the measured data and simulated data. The objective function of the optimization problem is defined as:

$$ F\left(\mathrm{X}\right)=\sqrt{\frac{1}{N}\sum \limits_{k=1}^Nf{\left({V}_L,{I}_L,\mathrm{X}\right)}^2} $$
(27)

where N represents the number of experimental data.

In Eq. (27), for the SDM:

$$ \Big\{{\displaystyle \begin{array}{c}f\left({V}_L,{I}_L,\mathrm{X}\right)={I}_{ph}-{I}_{sd}\cdotp \left[\exp \left(\frac{q\cdotp \left({V}_L+{R}_S\cdotp {I}_L\right)}{n\cdotp k\cdotp T}\right)-1\right]\\ {}-\frac{V_L+{R}_S\cdotp {I}_L}{R_{sh}}-{I}_L\\ {}\mathrm{X}=\left\{{I}_{ph},{I}_{sd},{R}_S,{R}_{sh},n\right\}\end{array}} $$
(28)

where 0≤Iph≤1, 0≤Isd≤1, 0≤RS≤0.5, 0≤Rsh≤100, and 1≤n≤2.
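A minimal sketch of this objective (not the authors' code) is shown below. Synthetic "measured" I-V pairs are produced from an assumed, hypothetical parameter set by fixed-point iteration of Eq. (26), so the RMSE of Eq. (27) evaluated at the true parameters is essentially zero.

```python
import math

K = 1.380649e-23     # Boltzmann constant (J/K)
Q = 1.602176634e-19  # electron charge (C)
T = 306.15           # cell temperature: 33 degrees C in kelvin

def sdm_residual(VL, IL, X):
    """f(VL, IL, X) from Eq. (28) for X = (Iph, Isd, Rs, Rsh, n)."""
    Iph, Isd, Rs, Rsh, n = X
    return (Iph
            - Isd * (math.exp(Q * (VL + Rs * IL) / (n * K * T)) - 1)
            - (VL + Rs * IL) / Rsh
            - IL)

def rmse(data, X):
    """Objective F(X) from Eq. (27) over measured (VL, IL) pairs."""
    return math.sqrt(sum(sdm_residual(v, i, X)**2 for v, i in data) / len(data))

def simulate_current(VL, X, iters=300):
    """Solve the implicit Eq. (26) for IL by fixed-point iteration."""
    IL = 0.0
    for _ in range(iters):
        IL = sdm_residual(VL, IL, X) + IL  # i.e. IL <- Iph - Id - Ish
    return IL

# Assumed (hypothetical) true parameters: Iph, Isd, Rs, Rsh, n.
X_true = (0.76, 3.2e-7, 0.036, 53.7, 1.48)
data = [(v / 10, simulate_current(v / 10, X_true)) for v in range(6)]
err = rmse(data, X_true)
print(f"RMSE at true parameters: {err:.2e}")
```

A metaheuristic such as OOSSA would minimize `rmse` over the bounded five-dimensional parameter space.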

We applied OOSSA to extract the parameters of the SDM and compared it with the canonical SSA and nine other well-known algorithms. Each algorithm was run 30 times independently, with 30,000 fitness evaluations per run. The experimental settings follow [146]: an irradiance of 1000 W/m2 and a temperature of 33 °C. The experimental results are shown in Table 13, and the I-V and P-V characteristic curves are plotted in Fig. 17.

Table 13 Comparison results among different algorithms on SDM
Fig. 17
figure 17

Comparison between measured data and simulated data obtained by OOSSA for SDM

6 Implementation of OOSSA for mobile robot path planning

With the continued advancement of intelligent technology, autonomous mobile robots (AMRs) are applied in an ever-increasing number of fields, such as intelligent transport and intelligent handling. Path planning is a fundamental and essential technology for AMRs, whose task is to generate the shortest obstacle-free route from a departure point to a target point. Nature-inspired population-based intelligence techniques are enjoying increasing popularity in mobile robot path planning (MRPP); e.g., PSO, GA, and ABC algorithms have been implemented to guide AMRs from one location to another and have successfully produced safe and less time-consuming tracks. In this section, we propose a novel OOSSA-based MRPP approach to plan the shortest collision-free path for AMRs in different workspaces.

6.1 Robot path-planning problem description

Topologically, the MRPP task corresponds to the shortest-path problem of finding a route between an initial point and a destination in a graph, which is generally expressed as an optimization problem and can be solved using swarm intelligence techniques. The optimization task is to find the shortest possible trail from the source to the terminal while avoiding all threatening regions. The core of solving this intractable problem is to establish an efficient fitness function, which the developed OOSSA-based MRPP approach evaluates repeatedly to derive the optimal solution, i.e., to generate the safe and shortest path. Based on the above analysis, we designed a fitness function that considers route length and conflict avoidance to evaluate the quality of the trajectories generated by the OOSSA-based MRPP method, calculated as follows:

$$ F=L\left(1+\varpi \cdotp \eta \right) $$
(29)

where ϖ is a control parameter that encourages safe paths and penalizes routes colliding with threatening areas, and L represents the path length, computed with the following equation.

$$ L=\sum \limits_{i=1}^n\sqrt{{\left({x}_{i+1}-{x}_i\right)}^2+{\left({y}_{i+1}-{y}_i\right)}^2} $$
(30)

where (xi,yi) denotes the coordinates of the ith interpolation point.

In Eq. (29), η is a penalty term that quantifies the safety status of the path. It is calculated with the following equation.

$$ \eta =\sum \limits_{k=1}^{nobs}\sum \limits_{j=1}^m\max \left(1-\frac{d_{j,k}}{rob{s}_k},0\right) $$
(31)

where nobs represents the number of obstacles in the workspace, m denotes the number of interpolation points in the route, dj,k indicates the distance from the jth interpolation point to the center of the kth obstacle, and robsk is the radius of the kth obstacle. From Eq. (31), η equals zero if the generated path contains no conflicts, and it increases as interpolation points intrude more deeply into obstacles.
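Eqs. (29)-(31) can be sketched as follows; the value of ϖ and the toy paths and obstacle are assumptions for illustration, with circular obstacles given as (cx, cy, radius) triples.

```python
import math

def path_length(points):  # Eq. (30)
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def violation(points, obstacles):  # Eq. (31)
    eta = 0.0
    for (cx, cy, r) in obstacles:
        for (x, y) in points:
            d = math.hypot(x - cx, y - cy)
            eta += max(1 - d / r, 0.0)  # nonzero only inside an obstacle
    return eta

def fitness(points, obstacles, varpi=100.0):  # Eq. (29); varpi is assumed
    return path_length(points) * (1 + varpi * violation(points, obstacles))

clear_path = [(0, 0), (1, 0), (2, 0)]  # stays away from the obstacle
risky_path = [(0, 0), (1, 1), (2, 0)]  # passes through the obstacle center
obstacles = [(1.0, 1.0, 0.5)]
f_clear = fitness(clear_path, obstacles)
f_risky = fitness(risky_path, obstacles)
print(f_clear)  # no penalty: equals the path length
print(f_risky)  # heavily penalised
```

A conflict-free path has η = 0, so its fitness reduces to the bare path length, which is exactly what the optimizer then minimizes.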

6.2 Experiment and results

In this subsection, the proposed OOSSA-based MRPP approach is implemented, and the obstacle-free shortest route for an AMR path planning problem is simulated on the Matlab 2014b platform. To validate the performance of the suggested methodology more authoritatively, five terrains provided by [147] are utilized for the simulation; these environment setups have various characteristics and complexity levels that help to test the method comprehensively. The routes obtained by OOSSA are compared with those achieved by several classical swarm-based metaheuristic techniques: PSO, GWO, FA, SSA, and ABC. For an unbiased comparison, the common parameters are set to the same values and the special ones are kept in line with the original literature so that the involved approaches can perform at their best. The details of the used terrains are given in Table 14. Each algorithm plans the path for the robot ten times in each environmental map and we record the optimal trajectory. The lengths of the paths produced by the respective methods are reported in Table 15, and the corresponding trajectories are drawn in Figs. 18, 19, 20, 21 and 22.

Table 14 Type of environment
Table 15 The minimum path length comparison of OOSSA-based MRPP approach and competitors under five terrains
Fig. 18
figure 18

Map 1 (a) PSO, (b) FA, (c) ABC, (d) GWO, (e) SSA and (f) OOSSA

Fig. 19
figure 19

Map 2 (a) PSO, (b) FA, (c) ABC, (d) GWO, (e) SSA and (f) OOSSA

Fig. 20
figure 20

Map 3 (a) PSO, (b) FA, (c) ABC, (d) GWO, (e) SSA and (f) OOSSA

Fig. 21
figure 21

Map 4 (a) PSO, (b) FA, (c) ABC, (d) GWO, (e) SSA and (f) OOSSA

Fig. 22
figure 22

Map 5 (a) PSO, (b) FA, (c) ABC, (d) GWO, (e) SSA and (f) OOSSA

From the optimal traces created by all algorithms in the five terrains, with path lengths recorded in Table 15, the OOSSA-based MRPP approach planned collision-free shortest paths for all environmental setups compared to the competitors. The optimal routes produced by all methods for each terrain are detailed below.

A comparison of the optimal routes developed by all methods for the first terrain is illustrated in Fig. 18(a-f). From the figure, FA, SSA, and OOSSA design the same trajectory to navigate the AMRs from the starting point to the destination, while PSO, ABC, and GWO provide an alternative route; the former choice is clearly wiser. The comparison between FA, SSA, and OOSSA shows that the suggested method is able to circumvent local optima, while the other two approaches try to give the best path for the AMRs but fall into local optima. In general, the OOSSA-based MRPP approach provides competitive paths under simple terrain.

Figure 19(a-f) depicts the optimal performance of all involved methods for the second terrain. From Fig. 19(a-e), the involved competitors can produce a safe path free of threatening areas to help the AMRs move from the initial point to the goal; however, getting stuck in local optima leads to path redundancy. In contrast, the OOSSA-based MRPP approach builds the most encouraging route, which helps the AMRs reach the destination safely while consuming less fuel. The comparison between the OOSSA-based MRPP algorithm and its peers shows that the developed approach can serve as an excellent tool for MRPP.

Figure 20(a-f) visualizes the optimal routes planned by all methods for the third landscape, which contains 13 threatening regions of different sizes. From the figure, various obstacle-free trajectories are established while manoeuvring from the starting point to the destination. Unlike the tortuous paths generated by PSO, FA, GWO, and SSA, OOSSA and ABC forge straightforward trajectories, which clearly contribute more to the robot's fuel savings. Furthermore, compared to OOSSA, ABC constructs a less sensible route, as it is clearly worse than OOSSA in terms of path length; the path lengths recorded in Table 15 confirm this conclusion.

The simulation results for the fourth scenario with 30 obstacles show that two different trajectories are established by the applied methods while manoeuvring from the starting location to the ending point, as shown in Fig. 21(a-f). On the one hand, PSO, ABC, GWO, and SSA devise similar tracks, but travelling along the terrain edge, although it avoids collision with threatening regions, is not conducive to saving fuel. On the contrary, better routes are built by OOSSA and FA; both algorithms are able to build paths that pass between obstacles, which validates their superiority. Moreover, compared to the sinuous route taken by FA, OOSSA yields a straightforward trajectory from the starting point to the destination. Overall, OOSSA is recommended as the optimal path planner under this complex terrain.

Optimal paths generated by the six applied approaches for the fifth terrain are plotted in Fig. 22(a-f). From the figure, all approaches can be recognized as reliable path planners, yet there is still scope for improvement. The statistical results listed in Table 15 show that the OOSSA-based MRPP approach achieved the shortest collision-free path, followed by PSO, ABC, FA, SSA, and GWO. The OOSSA method can handle the delicate balance between exploration and exploitation, thus jumping out of local optima and planning a straightforward path to navigate the AMRs from start to finish. Overall, the superior results make the recommended approach a promising tool for MRPP problems in AMRs.

7 Conclusions

In this study, the limitations of the canonical SSA are discussed in detail, including the unreasonable distribution of the numbers of leaders and followers, the monotonous follower position-update mechanism, and the lack of a technique to help the algorithm escape local optima; as a result, SSA, like many existing metaheuristic algorithms, suffers from premature convergence to local optima, slow convergence, and an imbalance between exploration and exploitation. To relieve these drawbacks, a novel version of SSA called OOSSA is developed, which introduces three reliable adjustment strategies into the basic SSA. First, a leader-follower number adaptive mechanism is presented to improve the method's global search ability in the early iterations and its local exploitation competence in the later iterations. Second, an enhanced ability to avoid local optima is obtained by introducing a lens opposition-based learning (LOBL) operator. In addition, an OLOBL strategy is constructed by combining LOBL and OED and embedded in the basic SSA to improve the exploratory ability of the algorithm while handling the dimensional degradation problem posed by OBL. To strike a trade-off between boosting the exploration potential of the method and limiting the number of FEs, only one leader is selected to perform OLOBL in each evolutionary iteration. Finally, a ranking-based dynamic learning strategy is introduced in the follower position-update phase, which effectively improves the local exploitation capability of the algorithm.

The performance of the developed OOSSA is verified on a set of 26 widely used test functions with various characteristics. The proposed method is compared with a comprehensive set of 13 well-performing SSA variants and 12 cutting-edge swarm-based metaheuristic approaches, and the comparison results show that it significantly outperforms its peers. OOSSA also exhibits competitive performance on practical engineering design problems, including pressure vessel, I-beam, and cantilever beam design. Additionally, the suggested approach is successfully applied to estimate the parameters of a PV model; the test results indicate that OOSSA can serve as a promising tool for solar PV parameter estimation.

Finally, an OOSSA-based path planning and collision avoidance approach for autonomous mobile robots is presented. The performance of the introduced path planning approach is tested on five environmental maps and the outcomes achieved by this method are compared with those produced from other nature-inspired swarm-based techniques, including PSO, GWO, ABC, FA, and SSA. The comparative study shows that the introduced OOSSA-based path planning approach can provide the shortest collision-free route among all competitors under all the environmental setups.

Overall, the developed strategies contribute significantly to the comprehensive performance of metaheuristic techniques, and research teams interested in nature-inspired swarm-based methods can integrate our strategies, or OOSSA itself, into their optimization workflows. In the future, we hope to generalize OOSSA and design effective constraint-handling techniques to solve multi-objective optimization problems; performing well on such assignments will require further balancing the diversity and convergence of OOSSA. Furthermore, we will also focus on measuring the effectiveness of the developed operators on other metaheuristic techniques.