Abstract
The Grey Wolf Optimization (GWO) is a highly effective meta-heuristic algorithm that leverages swarm intelligence to tackle real-world optimization problems. However, when confronted with large-scale problems, GWO encounters hurdles in convergence speed and problem-solving capability. To address this, we propose an Improved Adaptive Grey Wolf Optimization (IAGWO), which significantly enhances exploration of the search space through refined search mechanisms and an adaptive strategy. First, we incorporate velocity and the Inverse Multiquadratic Function (IMF) into the search mechanism. This integration not only accelerates convergence but also maintains accuracy. Second, we implement an adaptive strategy for population updates, dynamically enhancing the algorithm's search and optimization capabilities. The efficacy of the proposed IAGWO is demonstrated through comparative experiments on benchmark test sets, including the CEC 2017, CEC 2020, CEC 2022, and CEC 2013 large-scale global optimization suites. On CEC 2017, CEC 2020 (10/20 dimensions), CEC 2022 (10/20 dimensions), and CEC 2013, respectively, it outperformed the comparison algorithms by 88.2%, 91.5%, 85.4%, 96.2%, 97.4%, and 97.2%. The results affirm that our algorithm surpasses state-of-the-art approaches in addressing large-scale problems. Moreover, we showcase the broad application potential of the algorithm by successfully solving 19 real-world engineering challenges.
1 Introduction
The rapid advancement of science, technology, and industry has given rise to a multitude of intricate optimization problems. These problems frequently entail numerous variables, constraints, and objectives. Their solution spaces are huge and complex, and it is difficult for traditional deterministic optimization methods to obtain satisfactory solutions in acceptable time (Deng et al. 2022; Guo et al. 2023; Zhou et al. 2022). To cope with these challenges, researchers in the field of computational intelligence have started to search for new approaches. Among them, metaheuristic algorithms have attracted much attention due to their high efficiency, universal applicability and powerful global search capability (Aldosari et al. 2022; Chauhan et al., 2024; Chen et al. 2023).
When dealing with engineering problems, constraints are a crucial consideration. Constraints may be physical limitations or requirements and restrictions of the project. In the field of engineering, there are various techniques available for handling these constraints to ensure that projects proceed as expected and achieve their intended goals. One common technique for constraint handling is optimization algorithms. Optimization algorithms assist engineers in finding the best solution given certain constraints. These algorithms can be mathematical optimization methods such as linear programming, integer programming, or nonlinear programming, or they can be heuristic algorithms such as genetic algorithms, simulated annealing, or particle swarm optimization. By leveraging these algorithms, engineers can find the optimal design or decision solution while taking into account various constraints (Fu et al. 2024b; Li et al. 2023).
Metaheuristic (MH) algorithms are inspired by phenomena in nature; examples include PSO (Kennedy and Eberhart 1995b), the Firefly Algorithm (FA) (Yang 2009), the Sine Cosine Algorithm (SCA) (Mirjalili 2016), Wind Driven Optimization (WDO) (Bayraktar et al. 2010), the Fruit Fly Optimization Algorithm (FOA) (Pan 2012), the Competitive Swarm Optimizer (Chauhan et al. 2024), the Fox Optimizer (FOX) (Mohammed and Rashid 2023), and the Fitness Dependent Optimizer (FDO) (Abdullah and Ahmed 2019). These algorithms generally do not rely on the specific nature of the problem; instead they draw on nature's strategies for stochastic search, which can effectively avoid falling into local optima (Abdel-Basset et al. 2023). With the development of deep learning, neural networks, and other machine learning techniques, researchers have begun to combine these techniques with metaheuristic algorithms to further improve the efficiency of solving complex optimization problems (Garg et al. 2023). In recent years, with the wide application of heuristic intelligent optimization algorithms in numerical optimization, various swarm intelligence algorithms have been proposed (Fu et al. 2022).
The popularity of MH algorithms stems from four distinct advantages: practicality, generalizability, derivative-free operation, and avoidance of local optima (Fu et al. 2023a). First, thanks to their nature-inspired theoretical frameworks, these strategies are relatively intuitive to construct and deploy, allowing engineers and researchers to integrate them rapidly into concrete applications (Havaei & Sandidzadeh 2023). Second, since these algorithms treat the problem as a black box, they can be applied to a wide range of tasks such as selection (Said et al. 2023), shop visit balancing (Xia et al. 2023), and engineering problems (Nadimi-Shahraki et al. 2022). Third, these methods do not rely on derivative information and are particularly well suited to nonlinear problems (Aldosari et al. 2022). Finally, with the help of a global search strategy and a stochastic position-update strategy, they can efficiently escape local optima, which is particularly effective in scenarios with multiple locally optimal solutions.
The existing MH algorithms mainly fall into four categories: physics-based algorithms (PhA), swarm intelligence (SI) algorithms, evolutionary algorithms (EA), and human-based algorithms (Abualigah et al. 2021). Over the course of evolution, cooperative behavior between individuals has gradually formed through long-term natural selection. For example, Trojovský and Dehghani (2022) proposed the Pelican Optimization Algorithm (POA), inspired by pelican predation. The Genetic Algorithm (GA), proposed by John Holland and his colleagues, is a typical example inspired by Darwinian evolution (Bäck & Schwefel 1993). Other evolutionary methods include Differential Evolution (DE), based on the concepts of natural selection and reproduction (Storn and Price 1997); Genetic Programming (GP), inspired by biological evolutionary processes; and Evolution Strategies (ES) (Wei 2012). Among these, GA and DE are widely considered the most popular evolutionary algorithms, having garnered significant attention and numerous applications. Physics-based methods model the interaction of physical laws and chemical processes. For example, the Chernobyl Disaster Optimizer (CDO) is inspired by the core explosion at the Chernobyl nuclear power plant (Shehadeh 2023). Further examples are the Galaxy Swarm Optimization (GSO) algorithm, inspired by the motion of galaxies (Muthiah-Nakarajan and Noel 2016); the Farmland Fertility Algorithm (FFA), drawing inspiration from soil fertility in agriculture (Shayanfar and Gharehchopogh 2018); the Water Cycle Algorithm (WCA), inspired by the natural water cycle (Eskandar et al. 2012); and the Gravitational Search Algorithm (GSA), derived from Newton's law of universal gravitation and the laws of motion (Rashedi et al. 2009). In sharp contrast, human-based algorithms simulate human behavior patterns; an example is the Alpine Skiing Optimization (ASO) proposed by Yuan et al. (2022), inspired by the competitive behavior of athletes. Each of these metaheuristic algorithms has its own characteristics, and an appropriate algorithm can be selected according to the problem and requirements. Among swarm intelligence algorithms, Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995a) is inspired by the foraging behavior of bird flocks and fish schools. The Ant Colony Optimization (ACO) algorithm (Dorigo et al. 2006) is inspired by the social behavior of ant colonies during foraging. The Pathfinder Algorithm (PFA) (Yapici & Cetinkaya 2019) is inspired by the collective action of animal populations in finding optimal food areas or prey. The Harris Hawks Optimization algorithm (HHO) (Heidari et al. 2019) is based on the predatory process of Harris hawks hunting rabbits. The Sparrow Search Algorithm (SSA) (Xue and Shen 2020) is inspired by the foraging and anti-predatory behavior of sparrows. The Dung Beetle Optimization algorithm (DBO) (Xue and Shen 2022) is inspired by the rolling, dancing, foraging, stealing, and reproductive behaviors of dung beetles. The Remora Optimization Algorithm (ROA) (Jia et al. 2021) is inspired by remoras adhering to hosts of different sizes to facilitate foraging. The Black Widow Optimization algorithm (BWO) (Hayyolalam and Kazem 2020) is inspired by the unique reproductive behavior of black widow spiders. Chauhan and Yadav (2024a, b) proposed variants of the Artificial Electric Field Algorithm (AEFA) based on a series of learning strategies. Additionally, the Secretary Bird Optimization Algorithm (SBOA) was introduced based on the survival behavior of secretary birds in their natural environment (Fu et al. 2024b), while the Red-Billed Blue Magpie Optimizer (RBMO) was proposed by simulating the search, chase, prey-attack, and food-storage behaviors of red-billed blue magpies (Fu et al. 2024a).
Generally, the optimization process of an MH algorithm can be divided into two main phases (Saka et al. 2016): exploration and exploitation. In the exploration phase, the algorithm focuses on searching all corners of the solution space to ensure that no potentially optimal region is missed; in the exploitation phase, it concentrates on known high-quality solutions and deepens the search around them to find the true optimum. These two phases complement each other and give the algorithm both breadth and depth. GWO is inspired by the hunting behavior of grey wolves (Mirjalili et al. 2014). By combining the social behavior of grey wolves with a dynamically adjusted position-update strategy, GWO effectively balances exploration and exploitation, ensuring good global and local search capabilities.
Since its introduction in 2014, GWO has received widespread attention for its simplicity and efficiency and has become an important tool for solving complex optimization problems (Fan and Yu 2022). However, like other optimization algorithms, GWO has limitations despite its good performance on many problems. In particular, it is prone to premature convergence and entrapment in local optima when dealing with multimodal functions. As the iterative process progresses, the inherent social hierarchy within the wolf pack reduces diversity: the mechanism prioritizes the positions and decisions of the leading wolves (alpha, beta, and delta), so the entire pack tends to converge towards the leaders' positions. This strong hierarchy-driven convergence has a drawback: the population may aggregate too closely or blindly around the leaders' current positions. This phenomenon, often referred to as premature convergence, limits the algorithm's ability to explore the solution space thoroughly. Consequently, the algorithm may struggle to escape local optima, since the current best solutions guided by the leading wolves do not always represent the global optimum; the pack, following the leaders too closely, can become trapped in these local optima, lacking the diversity or exploratory behavior needed to discover better solutions elsewhere in the search space (Wang et al. 2018). In addition, when global exploration transitions to local exploitation, the algorithm may lose the ability to explore a wider solution space and over-concentrate its detailed search on a specific region.
Such a concentrated strategy, although helpful for accurately finding the optimal solution in a local region, may also cause the algorithm to ignore other promising regions (Wolpert and Macready 1997). Although various GWO variants have been proposed, such as the Advanced Grey Wolf Optimizer (AGWO) (Meng et al. 2021), Exponential Neighborhood Grey Wolf Optimization (EN-GWO) (Mohakud and Dash 2022), the Hybrid Grey Wolf Optimizer with Mutation Operator (DE-GWO) (Gupta and Deep 2017), and others (Ambika et al. 2022; Biabani et al. 2022), these improved versions achieve no breakthrough on the CEC 2022 suite and the CEC 2013 large-scale global optimization problems, and their performance on complex problems remains unsatisfactory.
To improve the performance of the GWO, this study incorporates several key enhancements. Firstly, the search mechanism from PSO is employed to increase population diversity. This addition helps in broadening the search scope of the algorithm. Secondly, the IMF is used to adjust inertia weights, a strategy that aids in fine-tuning the balance between exploration and exploitation. Lastly, an adaptive mechanism based on the Sigmoid function is introduced for updating the positions of individuals within the population. This adaptive update strategy strengthens the group's ability to escape local optima, enhancing the overall effectiveness of the GWO algorithm in finding optimal solutions.
An improved adaptive grey wolf optimization (IAGWO) is proposed to address the shortcomings of the GWO algorithm. The main contributions are as follows.
1) The PSO search mechanism is introduced to enhance the algorithm's search efficiency and robustness by updating grey wolf positions early in each iteration. In addition, the dynamic adjustment of inertia weights through the IMF boosts global search capability initially and local search effectiveness later.
2) An adaptive position-updating strategy based on the Sigmoid function balances the exploration and exploitation of IAGWO.
3) To evaluate the exploration and exploitation capabilities of IAGWO, extensive experiments are conducted on a suite of 67 test functions, including benchmarks from CEC 2014, CEC 2017, CEC 2020, CEC 2022, and the CEC 2013 large-scale global optimization suite.
4) The effectiveness and accuracy of IAGWO in solving practical engineering design problems are thoroughly assessed through its application to 19 diverse engineering design challenges.
The paper is organized as follows: Sect. 2 provides a brief review of the previous enhancements and potential application directions of the GWO. Section 3 details the original GWO algorithm and the proposed improvement strategy. Section 4 evaluates IAGWO performance through relevant experiments and in-depth analysis. Finally, Sect. 5 concludes this paper with a summary of the results and an outlook on future research directions.
2 Related work
In recent years, there has been a significant focus among researchers on enhancing the GWO. These improvements are aimed at boosting the algorithm's search performance and effectiveness. Scholars have explored various approaches to achieve this, including aspects such as adjusting the algorithm parameters, improving the speed and position equations, and combining it with other algorithms.
Yu et al. (2023) adopted a new update search mechanism, improved control parameters, a mutation-driven strategy, and a greedy selection strategy to improve GWO's search process. Singh and Bansal (2022a) proposed a hybrid GWO and Differential Evolution (HGWODE) algorithm and applied it to UAV path planning. Cuong-Le et al. (2022) introduced an equation to control the algorithm's moving strategy in each iteration and proposed the New Balance Grey Wolf Optimizer (NB-GWO), used to optimize the hyperparameters of a deep neural network for damage detection in two-dimensional concrete frames. Liu et al. (2023) proposed a hybrid differential evolution GWO (DE-GWO) algorithm and applied it to gas emission identification and localization. Luo et al. (2023) introduced the butterfly optimization algorithm together with an elite-strategy opposition-based learning method, an adaptive nonlinear inertia weight strategy, and a random walk law to address GWO's slow convergence and low accuracy on high-dimensional complex problems. To address the premature convergence that the classic GWO encounters in some situations due to stagnation at sub-optimal solutions, Gupta and Deep (2020) introduced an enhanced leadership-inspired grey wolf optimizer for global optimization problems (GLF-GWO). Addressing GWO's slow convergence and insufficient global exploration, which can cause it to settle in local optima and fail to reach the global optimum, Singh and Bansal (2022b) proposed a novel mutation-driven modified grey wolf optimizer (MDM-GWO), which integrates new update search mechanisms, modified control parameters, mutation-driven schemes, and greedy selection methods into GWO's search process. Targeting the same issues of slow convergence and susceptibility to local optima, Zhang et al. (2019) proposed a nonlinear control parameter strategy based on a sinusoidal function (GWO-SIN) and a nonlinear control parameter combination strategy (GWO-COM).
Soliman et al. (2022) proposed a novel hybrid African vultures–grey wolf optimizer (AV–GWO) to precisely estimate the electrical parameters of the TDM. Nadimi-Shahraki et al. (2021) introduced an enhanced GWO variant, termed I-GWO, based on a dimension learning-based hunting (DLH) search strategy that constructs a hunting neighborhood for each wolf and lets wolves share neighborhood information with each other. This enhances the algorithm's local and global search capabilities for more balanced performance, while also helping to maintain population diversity. Abushawish and Jarndal (2021) jointly proposed a new hybrid algorithm, GWO-CS, that combines the advantages of the Cuckoo Search (CS) algorithm and GWO; it primarily incorporates the position update equation from CS to further refine GWO's global search. To address GWO's poor stability and tendency to fall into local optima, Liu et al. (2020) proposed an improved GWO based on the differential evolution (DE) algorithm and the OTSU algorithm (DE-OTSU-GWO), combining multithreshold OTSU, Tsallis entropy, and DE with GWO: multithreshold OTSU computes the fitness of the initial population, keeping the initial stage stable; Tsallis entropy computes fitness quickly during the crossover steps in which the population is updated by GWO and DE; and the DE algorithm helps GWO escape local optima. To address GWO's susceptibility to local optima and low exploration capability, Mohammed et al. (2024) proposed the Enhanced GWO (EGWO), which employs diverse methods, utilizing gamma, the z-position, and the golden ratio, to improve GWO's performance.
Liu et al. (2022) introduced a novel improvement strategy for the GWO algorithm, known as the exponential convergence factor improvement strategy. This strategy is designed to more accurately simulate the actual search process of grey wolves. It incorporates dynamic weighting factors and enhances control parameters to reduce the likelihood of the GWO algorithm getting stuck in local optima. However, despite these improvements, experimental findings indicate that GWO still faces challenges in accurately handling high-dimensional functions. Şenel et al. (2019) integrated a differential disturbance operator into the GWO algorithm. This addition brought an element of exploration into the exploitation phase, thereby enhancing the GWO algorithm's overall optimization capabilities. Jangir and Jangir (2018) proposed a multi-objective version of the GWO algorithm, named NSGWO. This algorithm utilizes a crowding distance mechanism to select the optimal solution from a set of Pareto optimal solutions. This approach helps guide the search towards the dominant region in multi-objective search spaces. NSGWO was tested on a variety of standard unconstrained, constrained, and engineering design challenges, demonstrating its efficiency and effectiveness in diverse optimization scenarios.
In contrast to these variants, the present study enhances GWO along three complementary lines: a PSO-derived search mechanism that increases population diversity and broadens the search scope, IMF-adjusted inertia weights that fine-tune the balance between exploration and exploitation, and a Sigmoid-based adaptive position update that strengthens the pack's ability to escape local optima.
3 Methodology Overview: Standardized GWO and Proposed Enhancements
This section offers an overview of the hunting behavior and the mathematical model that forms the foundation of the original GWO. Additionally, we introduce the IAGWO, our proposed enhancement to GWO. IAGWO integrates the PSO search mechanism, the IMF strategy for inertia weighting, and an adaptive strategy for updating positions. These additions aim to refine and boost the efficiency of the original GWO algorithm.
3.1 The standardized GWO
3.1.1 Inspiration of grey wolf packs’ hunting activity behavior
The GWO algorithm draws inspiration from the hunting behavior of grey wolf packs. It mathematically simulates the way a pack hunts, encircles, and targets prey while adhering to a well-defined social hierarchy. In this hierarchy, the pack is led by three primary wolves, the alpha (α), beta (β), and delta (δ), each playing a crucial role in guiding the pack's movements and decisions. These wolves are the leaders, showcasing significant leadership abilities. Below them are the omega (ω) wolves, who occupy a subordinate role and follow the directives of the leading wolves. This hierarchical structure, integral to the functioning of the GWO algorithm, is depicted in Fig. 1.
3.1.2 Mathematical model: GWO
GWO simulates grey wolf leadership and hunting mechanisms by dividing the wolves according to their roles: a leader, α, who leads the entire pack; a facilitator, β, who helps α make decisions and replaces α upon its death; and an enforcer, δ, who follows the orders of α and β (Fan and Yu 2022). GWO searches for the optimum by modeling the wolf hunting process. Besides the social hierarchy, group hunting is another interesting social behavior of grey wolves, and it proceeds in three main phases. First, in the tracking, chasing, and approaching phase, each wolf searches the solution space and adjusts its position through search strategies to move closer to promising candidate solutions. Next, in the pursuing, encircling, and harassing phase, the pack collaborates to corner the prey into a smaller area and prevent its escape. Finally, in the attacking phase, once the prey is cornered and unable to escape, the wolves concentrate their attack, gradually refining the candidate solutions through strategies such as linear or leap searches until the optimal solution is found or a stopping criterion is met. These three phases mirror the algorithm's progression in the solution space from search to attack, gradually optimizing and approaching the optimal solution, analogous to the behavior of a grey wolf pack during hunting.
This section presents the calculation steps of the basic Grey Wolf Optimization algorithm; its pseudo-code is given in Algorithm 1. The GWO procedure is as follows:
1) Initialize each member using Eq. (1); set the population size N, the maximum number of iterations M, the dimension dim of a single grey wolf, and the parameters a, A, and C;
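Eq. (1) itself is not reproduced in this extraction; consistent with the description below and the standard GWO formulation (Mirjalili et al. 2014), the initialization takes the form

$$X_i = LB + \phi \cdot (UB - LB) \tag{1}$$

where each component of $X_i$ is drawn with an independent random $\phi \in [0, 1]$.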
where LB and UB are the lower and upper boundaries of the solution space, respectively, X represents the position of the current solution, and phi is a random number in [0, 1].
2) Calculate the fitness value of each individual using the test function. Then, based on the magnitude of the fitness values, select the best-fit individual as the α-wolf, the second-best individual as the β-wolf, and the third-best individual as the δ-wolf;
3) The mathematical model of the pack leaders tracking prey is given in Eq. (2), which computes the traction direction of the entire pack from the distance difference between the leading wolves and the pack; the pack's movement direction is then computed as in Eqs. (3) and (4). Update the current grey wolf position according to Eqs. (2)–(4).
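Eqs. (2)–(4) are not reproduced in this extraction; the standard GWO forms consistent with the surrounding description (Mirjalili et al. 2014) are

$$D_\alpha = |C_1 \cdot X_\alpha - X|,\quad D_\beta = |C_2 \cdot X_\beta - X|,\quad D_\delta = |C_3 \cdot X_\delta - X| \tag{2}$$

$$X_1 = X_\alpha - A_1 \cdot D_\alpha,\quad X_2 = X_\beta - A_2 \cdot D_\beta,\quad X_3 = X_\delta - A_3 \cdot D_\delta \tag{3}$$

$$X(t+1) = \frac{X_1 + X_2 + X_3}{3} \tag{4}$$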
where Dα, Dβ, and Dδ denote the distance differences between the α-wolf, β-wolf, and δ-wolf and the other individuals, respectively. Xα, Xβ, and Xδ indicate the current positions of the α-wolf, β-wolf, and δ-wolf, respectively, and X indicates the current position of the individual. C1, C2, and C3 satisfy the constraint of Eq. (6), and A1, A2, and A3 are random vectors satisfying the constraint of Eq. (5). X1, X2, and X3 are the traction directions of the three leading wolves, and X(t + 1) represents the next collective movement position of the wolf pack. As shown in Fig. 2, the final position of a wolf in the search space is randomly located within a circle defined by the positions of the α, β, and δ wolves. This graphical representation illustrates how the leaders' positions influence the movement and direction of the entire pack in pursuit of prey.
4) Update a, A, and C according to Eqs. (5)–(7);
where the parameter a plays a crucial role in balancing global search and local exploration. Its value decreases linearly from 2 to 0 over the course of the iterations. Initially, a higher value of a aids global convergence, guiding the wolf pack swiftly towards the region where the optimal solution might be found. As the algorithm progresses through its later iterations, the gradual decrease of a facilitates more refined exploration around the optimal solution, improving the convergence accuracy of the GWO algorithm and ensuring a more precise final result. r1 and r2 are random vectors with r1, r2 ∈ [0, 1].
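Eqs. (5)–(7) are not reproduced in this extraction; the standard GWO forms consistent with the description below are

$$A = 2a \cdot r_1 - a \tag{5}$$

$$C = 2 \cdot r_2 \tag{6}$$

$$a = 2\left(1 - \frac{t}{M}\right) \tag{7}$$

where t is the current iteration and M is the maximum number of iterations.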
5) Update the positions of the other individuals, calculate the updated fitness values at the new positions, and update the α-wolf, β-wolf, δ-wolf, and the global optimal solution; R represents the position vector of the optimization target;
6) Judge whether the specified stopping condition is reached (e.g., the maximum number of iterations is reached), if not, repeat steps 2 to 5. Otherwise, output the optimal result: the position of the α-wolf obtained at the end is the optimal solution, and the corresponding fitness value is the degree of superiority or inferiority of the optimal solution.
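Steps 1)–6) above can be sketched as a compact loop. The following is a minimal illustrative implementation of the standard GWO using the equations referenced in the text; the function and parameter names are our own, not the authors' code.

```python
import numpy as np

def gwo(fitness, dim, lb, ub, n_wolves=30, max_iter=200, seed=0):
    """Minimal sketch of the standard GWO loop.
    `fitness` maps a position vector to a scalar to be minimized."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize positions uniformly in [lb, ub] (Eq. 1).
    X = lb + rng.random((n_wolves, dim)) * (ub - lb)
    for t in range(max_iter):
        # Step 2: rank wolves by fitness; the best three lead the pack.
        f = np.apply_along_axis(fitness, 1, X)
        order = np.argsort(f)
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
        # Step 4: a decreases linearly from 2 to 0 (Eq. 7).
        a = 2 - 2 * t / max_iter
        # Steps 3 and 5: move every wolf toward the mean of three leader-guided points.
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            guided = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                 # Eq. (5)
                C = 2 * r2                         # Eq. (6)
                D = np.abs(C * leader - X[i])      # Eq. (2)
                guided.append(leader - A * D)      # Eq. (3)
            new_X[i] = np.mean(guided, axis=0)     # Eq. (4)
        X = np.clip(new_X, lb, ub)
    # Step 6: after the stopping condition, return the best wolf found.
    f = np.apply_along_axis(fitness, 1, X)
    best = X[np.argmin(f)]
    return best, fitness(best)
```

For example, minimizing a 5-dimensional sphere function with `gwo(lambda x: float(np.sum(x**2)), dim=5, lb=-10.0, ub=10.0)` converges close to the origin.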
3.2 Improved grey wolf optimization algorithm
3.2.1 PSO search mechanism
The GWO exhibits weak exploratory capability in its early stages and lacks diversity within the population, resulting in suboptimal solution quality. To enhance exploration, improve population diversity (Hu et al. 2022), and increase solution quality (Hakli and Kiran 2020), this study integrates the PSO search mechanism, introducing a velocity concept into the GWO. The positions of individual grey wolves are updated early in each iteration, and the velocity update introduces additional randomness. This prevents the algorithm from converging prematurely and encourages exploration of new areas, thereby increasing population diversity. By dynamically adjusting the velocity and position of each individual, this method can more effectively balance global exploration and local exploitation, leading to a wider search early in the iterations and helping to identify potential high-quality solutions. The computation is given in Eq. (8):
where t denotes the current iteration number; X and X_best represent the positions of the current solution and the best-performing solution, respectively; v_rand(t) is the velocity vector of the current solution at iteration t; phi is a random number in [0, 1]; and X_selfbest is the historical best position vector of the current solution.
In this study, at the start of each iteration, a PSO updating strategy is employed, with extra randomness added to stimulate a more extensive global search. This helps avoid local optima and increases population diversity. It not only accumulates a more diverse and higher-quality search experience for the GWO but also balances global exploration and local exploitation more effectively by dynamically adjusting the search behavior.
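Since Eq. (8) is not reproduced in this extraction, the following sketch illustrates a canonical PSO-style velocity and position update of the kind described above. The coefficient names (`w`, `c1`, `c2`) and the exact form are assumptions for illustration, not the paper's Eq. (8).

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_style_update(X, V, X_selfbest, X_best, w=0.7, c1=1.5, c2=1.5):
    """Hypothetical PSO-style update: velocities are pulled toward each
    wolf's historical best (X_selfbest) and the global best (X_best),
    with random coefficients phi in [0, 1] providing extra randomness."""
    phi1 = rng.random(X.shape)
    phi2 = rng.random(X.shape)
    V = w * V + c1 * phi1 * (X_selfbest - X) + c2 * phi2 * (X_best - X)
    return X + V, V
```

The random `phi1`/`phi2` terms are what keep the pack from collapsing onto a single trajectory early in the search.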
3.2.2 IMF inertia weighting strategy
The Inverse Multiquadratic Function (IMF) is a decreasing function based on the inverse-multiquadric principle. It is often used in machine learning, for example as a regularization method in neural networks and as a kernel function in support vector machines (Hu et al. 1998; Rathan et al. 2023). In line with the characteristics of the IMF, this paper incorporates it into the population position-update mechanism of the GWO given in Eq. (3). The IMF inertia weight ω and the revised formulas for the pack update process are given in Eqs. (9)–(10).
where the parameter group [a, b, c, d] is taken as [0.6, 0.02, 0.05, 0.3], and the curve of ω is shown in Fig. 3. As Fig. 3 indicates, during the early to mid phases of the iteration, the inertia weight ω is set to a higher value. The larger influence of the α-wolf, β-wolf, and δ-wolf on the updated positions helps the pack converge quickly towards the optimal solution, preventing the waste of search resources on blind searching and thus enhancing the quality of the pack. As the search progresses to the mid and late stages and the pack becomes densely concentrated, if the higher-ranking wolves become trapped in a local optimum, the lower-ranking wolves they lead cannot escape it either. At this point, ω should be reduced to a lower value, enlarging the pack's autonomous search capability and avoiding premature convergence.
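Eq. (9) is not reproduced in this extraction. The sketch below shows one plausible inertia weight shaped like an inverse multiquadratic kernel, 1/sqrt(x² + const): high early in the run and decaying later, matching the behavior described above. The parameter names follow the paper's [a, b, c, d] = [0.6, 0.02, 0.05, 0.3], but this exact formula is an assumption, not the paper's Eq. (9).

```python
import math

def imf_weight(t, max_iter, a=0.6, b=0.02, c=0.05, d=0.3):
    """Hypothetical IMF-shaped inertia weight: monotonically decreasing
    from roughly `a` at t = 0 toward a small value at t = max_iter."""
    x = t / max_iter                       # normalized iteration in [0, 1]
    return a / math.sqrt((x / d) ** 2 + 1.0) + b * c
```

Whatever the precise parameterization, the qualitative requirement is the same: a monotonically decreasing ω so that the leaders dominate early and the pack searches more autonomously late.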
3.2.3 Adaptive updating mechanism
The population updating mechanism based on the IMF inertia weight reduces the density of population clustering to a certain extent. However, due to the intrinsic dynamics of the GWO, the newly generated wolf packs still inevitably concentrate and migrate towards the positions directed by the α-wolf, β-wolf, and δ-wolf during the iterative process. In response, the present study defines the aggregation coefficient as the ratio of an individual's fitness value to the average population fitness value, which quantifies the degree of divergence between the current solution and the optimal solution. In minimization problems, the smaller the fitness value, the better the solution. A smaller aggregation coefficient indicates a more favorable current solution, allowing minor updates in the vicinity of the individual's current position; conversely, a larger aggregation coefficient suggests a poor location, warranting a significant perturbation to facilitate a jump to other positions. Based on this analysis, this paper introduces a Sigmoid function to construct the adaptive updating amplitude of the population under different aggregation coefficients, as depicted in Eqs. (11)–(12).
where \(f_i\) represents the fitness value of the ith individual, and \(f_{ave}\) denotes the average fitness value of the population. \(\theta\) is the exponential coefficient, taken as 0.5 in this paper.
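The aggregation coefficient and its sigmoid-shaped amplitude can be sketched as follows. Since Eqs. (11)–(12) are not reproduced in this excerpt, the exact sigmoid mapping is an assumption; the sketch only captures the stated behavior (small amplitude for better-than-average individuals, large amplitude for worse ones, with θ = 0.5).

```python
import math

def aggregation_coefficient(f_i, f_ave):
    """Ratio of an individual's fitness to the population average
    (minimization: k < 1 means better than average)."""
    return f_i / f_ave

def adaptive_amplitude(f_i, f_ave, theta=0.5):
    """Hypothetical sigmoid-shaped update amplitude: grows with the
    aggregation coefficient, so poorly placed individuals are perturbed
    more strongly. The paper's exact Eqs. (11)-(12) may differ."""
    k = aggregation_coefficient(f_i, f_ave)
    return 1.0 / (1.0 + math.exp(-theta * (k - 1.0)))
```

An individual exactly at the population average (k = 1) receives the midpoint amplitude 0.5; better individuals receive less, worse individuals more.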
In comparison to the standard GWO, IAGWO brings several significant advancements. Firstly, it introduces a novel search mechanism by incorporating a velocity concept. This helps prevent premature convergence and allows a more thorough exploration of the search space, while the velocity updates add an element of randomness that increases diversity within the population of solutions. Secondly, the IMF inertia weight strategy improves the balance between exploring the global search space and exploiting local solutions, significantly boosting the algorithm's convergence speed. Thirdly, IAGWO differentiates itself through its adaptive updating mechanism, which combines the aggregation coefficient with the Sigmoid function, enhancing the algorithm's ability to switch between broad search and detailed solution refinement. This improves the maintenance of diversity and achieves faster convergence, enabling IAGWO to search and optimize more efficiently within the problem's solution space. For an in-depth comprehension of the workings of IAGWO, the procedural flow is depicted in Fig. 4, its pseudocode is detailed in Algorithm 2, and the proposed IAGWO workflow (Chauhan and Yadav 2023b) is shown in Fig. 5.
3.3 Time complexity analysis
CEC17 (Competition on Evolutionary Computation) defines algorithm complexity as a measure of the computational resources an algorithm requires to solve a given problem instance. This section examines the computational complexity of IAGWO, which is primarily influenced by two factors: the initialization of solutions and the execution of the algorithm's core functions, namely fitness evaluation and solution updating. The complexity depends on the number of solutions \(N\), the maximum number of iterations \(T\), and the problem dimension \(D\). Initializing the solutions costs \(O(N)\): as \(N\) increases, the cost of the setup phase grows proportionally. The time complexity of the original GWO algorithm is \(O(T\times N\times D)\). IAGWO modifies this with Eqs. (8), (9), and (10)–(11): the PSO position updating strategy enhances population diversity, the IMF weights reduce the excessive influence of higher-ranked wolves on lower-ranked ones, and the sigmoid function drives an adaptive population update. The PSO position updating strategy requires calculations for each individual and each dimension, with a complexity of \(O(T\times N\times D)\). The update in Eq. (10) is independent of population size and search dimension, depending only on the iteration count, giving a time complexity of \(O(T)\).
The time complexity for Eq. (11) is \(O(N\times D)\). Consequently, the overall time complexity of IAGWO is \(O(\text{IAGWO})=O\left(T\times N\times D\right)+O\left(T\right)+O\left(N\times D\right)=O(T\times N\times D)\), consistent with the original algorithm.
4 Results and comprehensive analysis
The simulation for this study was carried out on a Windows 11 platform, operating on a 64-bit system. The analysis was performed using MATLAB 2023b, running on a machine equipped with an AMD Ryzen 7 4800H CPU at 2.30 GHz and 16 GB of RAM.
4.1 Test functions and parameter settings
In this paper, the CEC 2017 (Dim = 30) (Mallipeddi and Suganthan 2010), CEC 2020 (Dim = 10 and 20) (Liang et al. 2019), and CEC 2022 (Dim = 10 and 20) (Ahrari et al. 2022) test suites were employed to evaluate the performance of the proposed IAGWO algorithm. These suites cover four function types: unimodal, multimodal, hybrid, and composition, designed to comprehensively evaluate the performance and applicability of algorithms. Additionally, to assess the scalability of the IAGWO algorithm, we employed the CEC 2013 Large-Scale Global Optimization suite (800-dimensional) for simulation analysis (Li et al. 2013). The suite contains 15 highly complex benchmark functions grouped into four categories: fully separable, partially additively separable, overlapping, and fully non-separable. These benchmark functions provide a comprehensive experimental framework for evaluating the scalability of optimization algorithms, allowing us to assess the performance of the IAGWO algorithm on different problem types more accurately.
4.2 Comparison with other algorithms and parameter settings
The performance of the Improved Adaptive Grey Wolf Optimization (IAGWO) is benchmarked against 12 well-known algorithms, grouped into three categories for comparison:
High-citation algorithms: These include the Gravitational Search Algorithm (GSA) (Rashedi et al. 2009), Dolphin Echolocation Optimization (DMO) (Kaveh and Farhoudi 2013), Whale Optimization Algorithm (WOA) (Mirjalili and Lewis 2016), and Harris Hawks Optimization (HHO) (Tripathy et al. 2022).
Advanced algorithms: This category includes Combined Particle Swarm Optimization and Gravitational Search Algorithm (CPSOGSA), Crow Optimization Algorithm (COA) (Jia et al. 2023), African Vulture Optimization Algorithm (AVOA) (Abdollahzadeh et al. 2021), Optical Microscope Algorithm (OMA) (Cheng and Sholeh 2023), and Adaptive Artificial Electric Field Algorithm (iAEFA) (Chauhan and Yadav 2023a).
GWO and its variants: This includes the original Grey Wolf Optimization (GWO), the Adaptive GWO (AGWO) (Meidani et al. 2022), the Enhanced GWO (ENGWO)(Mohammed et al. 2024)and the Revised GWO (RGWO) (Banaie-Dezfouli et al. 2021).
Table 1 offers a comprehensive summary of the parameters of the 14 MH algorithms. For each algorithm, 30 independent runs were conducted, each limited to a maximum of 500 iterations, with the population size set to 30 and at most 30,000 function evaluations. The outcomes were recorded as the average value (denoted Ave) and the standard deviation (Std) for each algorithm. To facilitate comparison, the best result among the 14 algorithms is formatted in bold, providing a clear visual indicator of which algorithm performed most effectively under the given testing conditions.
4.3 Qualitative assessment of IAGWO
4.3.1 Exploring convergence patterns
To verify the convergence performance of IAGWO, we plotted its convergence behavior on the 30-dimensional CEC 2017 test functions, as shown in Fig. 6, which presents nine images covering three instances selected from the suite. The first column illustrates the two-dimensional profiles of the benchmark functions, conveying the characteristics and contours of each function being optimized and the challenges each presents. The second column depicts the final positions of the search agents at the end of the optimization process, with the location of the optimal solution marked by a red dot; this highlights both the end point of the agents' journey and the spot where they identified the most favorable solution. From these images we can see that the search agents approach the optimal solution in most cases, reflecting the strong exploration and exploitation ability of the IAGWO algorithm. The third column tracks the change of the average fitness value during the iterations: initially high, the values decrease and stabilize after about 100 iterations, albeit with minor fluctuations.
These fluctuations are normal in complex optimization problems; they indicate ongoing fine-grained searches for improvement and the maintenance of population diversity to prevent premature convergence to local optima. The fourth column reveals the search agents' trajectories in the first dimension: marked fluctuations in the early iterations level off, recur at intervals, and level off again, signifying a balance between exploration and exploitation. Finally, the convergence curve, smooth for unimodal functions, suggests optimal values are reachable through iteration; for multimodal functions, the step-like curve reflects the continual need to escape local optima in order to reach the global optimum. Together, these four metrics affirm IAGWO's robust convergence.
4.3.2 Analyzing the diversity of population
In optimization algorithms, population diversity is a matter of balance. Moderate diversity helps the algorithm avoid local optima, increasing search-space coverage, global search capability, convergence speed, and the quality of the optimization results. Excessive diversity, however, disperses the search, making it difficult for the algorithm to explore local regions deeply and thereby reducing convergence speed and the quality of the final solutions. Algorithm design must therefore balance population diversity against search efficiency, through appropriate parameter settings or strategies that maintain diversity. A highly diverse population exhibits significant differences among individuals, allowing broader exploration of the search space and avoiding premature convergence to local optima, so maintaining good population diversity is a crucial objective in metaheuristic algorithms. We measure population diversity using Eq. (13) and Eq. (14), a method proposed by Morrison in 2004, where \({I}_{C}\) represents the moment of inertia, \({x}_{id}\) denotes the ith search agent's value in the \({d}^{th}\) dimension at iteration t, and \({c}_{d}\) denotes the dth coordinate of the population's center of mass \(c\) at each iteration, as given in Eq. (14) (Fu et al. 2023a, b).
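The moment-of-inertia measure of Eqs. (13)–(14) can be implemented directly from Morrison's standard definition: the centroid is computed per dimension, and \(I_C\) sums the squared distances of every individual from it.

```python
def moment_of_inertia(population):
    """Morrison's moment-of-inertia diversity measure:
    I_C = sum over individuals i and dimensions d of (x_id - c_d)^2,
    where c_d is the d-th coordinate of the population centroid.
    population is a list of equal-length coordinate lists."""
    n = len(population)
    dims = len(population[0])
    # Center of mass of the population, one coordinate per dimension (Eq. (14)).
    centroid = [sum(ind[d] for ind in population) / n for d in range(dims)]
    # Sum of squared deviations from the centroid (Eq. (13)).
    return sum((ind[d] - centroid[d]) ** 2
               for ind in population for d in range(dims))
```

A collapsed population (all agents identical) yields \(I_C = 0\); larger values indicate a more widely spread, more diverse population, as tracked in Fig. 7.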
Figure 7 displays the comparative experimental outcomes on population diversity for IAGWO and GWO, measured by \({I}_{C}\). The figure reveals that IAGWO's diversity rises markedly during the early iterations and then settles into relative stability at an elevated level. This indicates increased variance among individuals within the IAGWO population early on, effectively exploring a vast search space; as iterations progress, the diversity stabilizes, which helps avert premature convergence to local optima. The minor fluctuations are normal and help the algorithm adapt to dynamically changing search regions. In contrast, GWO shows insufficient population diversity, highlighting IAGWO's effectiveness in maintaining the diversity crucial for exploring complex search spaces and avoiding local optima. These experimental outcomes demonstrate IAGWO's substantial potential in optimization.
4.3.3 Exploration and exploitation analysis
In optimization algorithms, managing the balance between exploration and exploitation is key to optimal performance (Saka et al. 2016). Exploration searches broadly through the solution space, while exploitation refines known good solutions. This section quantifies the extent of each: Eq. (15) gives the percentage of exploration and Eq. (16) the percentage of exploitation, while the dimension-diversity measure \(Div\left(t\right)\) is calculated using Eq. (17). The parameter \({\rm Div}_{max}\) is the peak diversity observed over the entire course of the iterations, which is essential for understanding how broadly and effectively the algorithm explores the solution space (Li et al. 2023; Nadimi-Shahraki et al. 2023).
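These measures can be sketched with the formulation common in the cited literature; since Eqs. (15)–(17) are not reproduced in this excerpt, the median-based diversity below is an assumption based on that standard form.

```python
import statistics

def dimension_diversity(population):
    """Div(t): mean absolute deviation from the per-dimension median,
    averaged over dimensions (the standard form assumed for Eq. (17)).
    population is a list of equal-length coordinate lists."""
    n, dims = len(population), len(population[0])
    total = 0.0
    for d in range(dims):
        med = statistics.median(ind[d] for ind in population)
        total += sum(abs(med - ind[d]) for ind in population) / n
    return total / dims

def exploration_exploitation(div_t, div_max):
    """Exploration % (assumed Eq. (15)) and exploitation % (assumed Eq. (16)):
    XPL = 100 * Div(t)/Div_max, XPT = 100 * |Div(t) - Div_max|/Div_max."""
    xpl = 100.0 * div_t / div_max
    xpt = 100.0 * abs(div_t - div_max) / div_max
    return xpl, xpt
```

When the current diversity equals its historical peak, exploration is 100% and exploitation 0%; as the population contracts around good solutions, the balance shifts toward exploitation, which is the pattern Fig. 8 reports for IAGWO.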
Figure 8 depicts the results of the experiments conducted. It shows that for various function types, as the number of iterations progresses, GWO consistently demonstrates a higher rate of exploration and a comparatively lower rate of exploitation. In contrast, IAGWO shows a changing pattern, with exploration decreasing and exploitation increasing as iterations progress. This observation suggests that GWO tends towards a broad search across the entire space, with less focus on local search and weaker performance in thoroughly exploiting the optimal regions found. In comparison, IAGWO demonstrates the ability to dynamically adjust its search strategy. This implies that the algorithm initially identifies potential good solution areas through extensive exploration and then finely tunes these solutions in the later stages through focused exploitation, potentially enhancing both the efficiency of the algorithm and the quality of solutions. Overall, while GWO shows commendable exploration capabilities, it lacks effective exploitation. In contrast, IAGWO effectively strikes a balance between exploration and exploitation. This balance is well-maintained across a variety of benchmark functions, showcasing IAGWO's adaptability and efficiency in different optimization scenarios. This attribute is particularly important as it ensures the algorithm can thoroughly search the solution space while also honing in on the most promising solutions.
4.3.4 Ablation experiments
In this section, a detailed analysis is conducted on the impact of three proposed improvement strategies on the GWO. These strategies include the PSO position updating mechanism, the introduction of IMF inertia weight strategy, and the adoption of a Sigmoid adaptive updating strategy. Based on these improvements, three new algorithm variants are named: PGWO for the PSO search mechanism, IGWO for the IMF inertia weight, and SGWO for the Sigmoid adaptive updating strategy. According to the experimental results in Fig. 9, all three strategies significantly enhance the convergence accuracy and speed of GWO, with IAGWO showing particularly notable performance.
Specifically, when dealing with unimodal and multimodal functions, the results of PGWO and IAGWO are relatively consistent, showing a more significant improvement over GWO compared to SGWO and IGWO. However, when dealing with more complex hybrid modal functions, the enhancement of PGWO on GWO diminishes, while the IAGWO algorithm, integrating all three strategies, continues to exhibit exceptional optimization performance. Overall, the IAGWO algorithm successfully overcomes challenges of local optima and premature convergence, significantly boosting the algorithm's convergence speed and accuracy. These findings provide valuable insights for the further development and application of the GWO.
4.4 Quantitative evaluation
In this section, the efficacy of IAGWO is scrutinized on the CEC 2017, CEC 2020, and CEC 2022 test suites, and its proficiency on large-scale problems is assessed with the CEC 2013 suite. To ease comparison, the best result among the algorithms is highlighted in bold in the tables. The parameters are standardized: a population size of 100, a maximum of 500 iterations, and 30 independent runs. The outcomes are presented in Tables 2 to 7 as average values (Ave) and standard deviations (Std) for each competing algorithm. A thorough statistical analysis highlights the standing of IAGWO: the first row of the results reports the three indicators (W|T|L), denoting on how many functions an algorithm was best (win), comparable (tie), or least effective (loss); the second row compiles the mean performance of all algorithms; and the third row gives the final Friedman ranking. Furthermore, the convergence curves of the algorithms are compared in Fig. 10, which aids in understanding the progression and efficiency of each algorithm in finding optimal solutions over the iterations. This detailed evaluation underscores the robustness and adaptability of IAGWO in varied optimization contexts.
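The Friedman mean-ranking row used throughout Tables 2 to 7 can be computed as follows; this is a generic sketch of the standard procedure (rank algorithms per function, average ranks for ties, then average over functions), not code from the paper.

```python
def friedman_mean_ranks(results):
    """results[a][f] = score of algorithm a on function f (lower is better,
    as in minimization benchmarks). Returns each algorithm's mean rank
    across functions, with average ranks assigned to ties."""
    n_alg = len(results)
    n_fun = len(results[0])
    ranks = [0.0] * n_alg
    for f in range(n_fun):
        scores = [results[a][f] for a in range(n_alg)]
        for a in range(n_alg):
            less = sum(s < scores[a] for s in scores)
            equal = sum(s == scores[a] for s in scores)
            # Average rank over the tied group: ranks less+1 .. less+equal.
            ranks[a] += less + (equal + 1) / 2.0
    return [r / n_fun for r in ranks]
```

The algorithm with the smallest mean rank is placed first overall, which is how the "Friedman mean ranking" values quoted in the following subsections (e.g. 3.00, 1.75, 2.25) are obtained.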
4.4.1 Assessing performance with CEC 2017 test suite
This section examines the efficacy of IAGWO using the CEC 2017 test suite with a dimensionality of 30, as detailed in Table 2. The results are quite telling: IAGWO recorded the highest number of best performances, leading in 16 out of the 30 functions tested. Notably, it did not register as the least effective in any of the functions. In terms of statistical standing, IAGWO's Friedman mean ranking is 3.00, earning it the top position. Further, a diverse range of functions from CEC 2017 (Dim = 30) were chosen for a more comprehensive evaluation. The comparative analysis of the convergence trends, depicted in Fig. 10, reveals that IAGWO consistently achieved the quickest convergence rate and maintained the highest level of accuracy in convergence. These results underscore IAGWO's exceptional proficiency in both global exploration and local exploitation. Collectively, these findings solidify the effectiveness and superiority of IAGWO as an optimization tool.
4.4.2 Assessing performance with CEC 2020 test suite
This section is dedicated to evaluating 13 algorithms with the utilization of the CEC 2020 test suite, which includes tests with dimensions of 10 and 20. The outcomes of this evaluation are systematically presented in Table 3 and Table 4. On the CEC 2020 tests, IAGWO mirrors the impressive results observed in the CEC 2017 suite, achieving the highest number of best performances while not being the least effective in any function. To provide a visual representation of these results, representative functions are chosen to illustrate the convergence curves, as depicted in Fig. 10. IAGWO consistently shows the quickest convergence speed and the highest accuracy in convergence, reaffirming its efficiency. Additionally, it's important to note the contrasting performance of GWO on the CEC 2020 suite. Despite its lower ranking in the Friedman rankings, indicating a comparatively poor performance, its improved variants, namely AGWO, ENGWO, and RGWO, show marked improvements. Remarkably, RGWO secures the second-highest ranking, closely following IAGWO, underscoring the substantial research value in enhancing the GWO algorithm. A comprehensive statistical analysis among the 13 algorithms tested places IAGWO at the forefront in the Friedman rankings. This achievement highlights its superiority not only over the original GWO but also over other well-regarded algorithms. These results collectively demonstrate the robustness and effectiveness of IAGWO in a competitive algorithmic landscape.
4.4.3 Assessing performance with CEC 2022 test suite
This section is dedicated to a thorough evaluation of the proposed IAGWO and 12 other comparative algorithms, utilizing the CEC 2022 test suite. The primary objective of this evaluation is to gauge the exploration and exploitation capabilities of these algorithms and assess their proficiency in avoiding local optima traps. The experiments are conducted under 10-dimensional and 20-dimensional scenarios, with corresponding results displayed in Tables 5 and 6, respectively. IAGWO ranks first in Friedman mean ranking in both dimensional settings, with ranking values of 1.75 and 2.25 respectively. Similarly, while GWO shows subpar performance, its variants enhance GWO's performance, emphasizing the research significance of GWO. The analysis of results depicted in Fig. 10 leads to a conclusive observation that IAGWO successfully evades getting stuck in local optima and avoids premature convergence. These findings serve not just as a testament to the excellence and robustness of IAGWO, but they also highlight its substantial performance benefits and the capability to yield enhanced solutions. This analysis underscores IAGWO's effectiveness in navigating complex optimization landscapes, further establishing its potential as a superior tool in optimization tasks.
4.4.4 Scalability evaluation using the CEC 2013 test suite
In real-world scenarios, solving optimization problems often requires adjusting multiple parameters at once. To test the scalability of the IAGWO for high-dimensional problems, we utilized the CEC 2013 suite for large-scale global optimization. The results of this testing are detailed in Table 7. This suite includes 15 highly complex test functions, each with up to 1000 dimensions, providing a robust challenge for assessing algorithmic performance. In our experiments, IAGWO was compared with 12 other algorithms. The population size was fixed at 100, and we limited the maximum number of iterations to 10 for each run. After conducting 30 independent runs for each algorithm, IAGWO achieved a Friedman mean rank value of 2.63. This score signifies a higher level of performance relative to the other algorithms in the competition. The findings from these experiments demonstrate that the IAGWO algorithm has significant scalability, effectively handling complex, high-dimensional optimization challenges. This capability distinguishes IAGWO from other algorithms, highlighting its suitability for practical, large-scale optimization applications.
4.5 Wilcoxon rank sum test
This study utilizes the non-parametric Wilcoxon rank sum test (Wilcoxon 1945) for comparative performance assessment, with the significance level set at 0.05. The symbols "+/=/−" denote whether IAGWO is superior, equivalent, or inferior to a competing algorithm. As shown in Table 8, the statistics indicate significant performance differences between IAGWO and the other competing algorithms in most cases; specifically, the comparative results are 344/0/46, 119/0/11, 111/13/6, 150/0/6, 152/0/4, and 175/0/5. This analysis demonstrates that the IAGWO method introduced in this study shows exceptional overall performance compared to the traditional GWO and the other rival algorithms, underscoring its distinct advantages.
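For reference, the two-sided rank-sum comparison underlying the +/=/− symbols can be sketched self-containedly with the normal approximation (no tie correction); in practice one would use a statistics library rather than this minimal version.

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation,
    without tie correction - a minimal sketch of the test used for the
    +/=/- comparisons at the 0.05 significance level."""
    n1, n2 = len(x), len(y)
    # Sort the pooled samples, remembering which came from x.
    combined = sorted((v, i < n1) for i, v in enumerate(list(x) + list(y)))
    # Rank sum W of the first sample (ranks start at 1).
    w = sum(rank + 1 for rank, (_, from_x) in enumerate(combined) if from_x)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # Two-sided p-value: 2 * (1 - Phi(|z|)), Phi via the error function.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

A p-value below 0.05 maps to "+" or "−" depending on which sample has the better mean, and otherwise to "=".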
4.6 Time comparison analysis of IAGWO and GWO
Building on the findings from previous sections, it is clear that IAGWO significantly surpasses the original GWO in terms of overall performance. In this section, we focus on a more detailed comparison of the computational costs of both algorithms, with particular emphasis on differences in computational time. To facilitate this comparison, we standardized the settings for both IAGWO and GWO: a population size of 50, a maximum of 1000 iterations, and 30 independent runs per algorithm. Table 9 presents the total time (in seconds) each algorithm took to complete all 30 runs, providing a clear basis for comparing their time-based efficiency.
Analysis of the experimental data on the CEC 2017 test suite (Dim = 30) indicates that, under the same experimental parameters, IAGWO and GWO perform almost equally in execution time on unimodal functions and some simpler multimodal functions, but on more complex multimodal and hybrid functions IAGWO generally consumes significantly less computational time than GWO. This suggests that in handling highly complex problems, IAGWO demonstrates greater computational efficiency: compared to the original GWO, its improved search strategies are more efficient, with better global search capability and faster local convergence. Overall, IAGWO not only excels in benchmark tests but also exhibits higher computational efficiency and better adaptability when addressing the more complex optimization problems that arise in practical applications.
However, on the CEC 2020 test suite (Dim = 10 and 20) and CEC 2022 test suite (Dim = 10 and 20), IAGWO generally exhibits higher computational times compared to GWO. This may indicate that the types of problems or characteristics included in CEC 2020 are not entirely compatible with the strategies of IAGWO, leading to a higher computational load.
4.7 Evaluating performance against CEC 2014 and CEC 2017 competition-winners
This section evaluates the performance of the proposed IAGWO using the CEC 2014 (Dim = 30) (Liang et al. 2013) and CEC 2017 (Dim = 30) test suites. Additionally, we compare the performance of IAGWO with the competition winners of these two suites in previous CEC competitions, including L-SHADE (Tanabe and Fukunaga 2014) and AL-SHADE (Li et al. 2022) from CEC 2014, and LSHADE-SPACMA (Mohamed et al. 2017) and LSHADE-cnEpSin (Awad et al. 2017) from CEC 2017. In the experimental setup, the population size is fixed at 30, the maximum iterations are limited to 500, and a total of 30 independent runs are performed.
Table 10 presents the results from testing IAGWO using the CEC 2014 suite. In these tests, IAGWO surpassed other algorithms in six different scenarios, though it showed slightly weaker performance in one. Notably, IAGWO achieved a Friedman mean ranking value of 1.71, which places it second after L-SHADE but ahead of AL-SHADE. Table 11 focuses on the performance of IAGWO in the CEC 2017 suite. Here, IAGWO showed strong results in 8 of the test cases, but its performance was less impressive in 10 others. In terms of the Friedman mean ranking, IAGWO scored 1.99, which is slightly better than LSHADE-SPACMA, but not quite as good as LSHADE-cnEpSin. These results provide a detailed comparison of IAGWO's performance relative to other algorithms in these specific test environments.
Combining these experimental outcomes, IAGWO can be positioned as a high-performing optimizer on the test functions. These results demonstrate IAGWO's strong capability across different types of optimization problems and its competitive standing against existing top-tier algorithms, emphasizing its potential application value in evolutionary computing and optimization. They simultaneously confirm the effectiveness of the three improvement strategies introduced here: the PSO search mechanism, the IMF inertia weighting strategy, and the adaptive updating mechanism, which together enhance the optimization performance of the algorithm.
4.8 IAGWO for 19 engineering design challenges
The constraint-handling technique used in the engineering design challenges is constraint relaxation: certain constraints are temporarily eased to explore alternative solutions, allowing a wider range of candidate designs to be generated without the search being overly restricted. Once promising solutions have been identified, the constraints are reintroduced and tightened to ensure the final design meets all requirements. Intelligent optimization algorithms can efficiently explore the design space and uncover potential solutions; by integrating constraint relaxation, they can handle constraints dynamically during the search, broadening the exploration of the design space and improving the efficiency of finding optimal solutions.
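One common way to realize constraint relaxation is a penalized objective with a relaxation tolerance that is tightened over the run. The paper does not give its exact formulation, so the penalty form, the tolerance `eps`, and the penalty factor `rho` below are illustrative assumptions.

```python
def relaxed_penalty(f, g_violations, h_violations, eps=1e-4, rho=1e6):
    """Hypothetical constraint-relaxation penalty for minimization.

    f            -- raw objective value
    g_violations -- values g_j(x); a constraint g_j(x) <= 0 is violated if > 0
    h_violations -- values h_k(x); an equality h_k(x) = 0 is relaxed to
                    |h_k(x)| <= eps, so only the excess beyond eps is penalized
    Shrinking eps over the iterations reintroduces the relaxed constraints.
    """
    penalty = sum(max(0.0, g) ** 2 for g in g_violations)
    penalty += sum(max(0.0, abs(h) - eps) ** 2 for h in h_violations)
    return f + rho * penalty
```

Feasible designs (all inequality constraints satisfied, equalities within the tolerance) keep their raw cost, so the optimizer compares them on the objective alone; infeasible ones are pushed away in proportion to their violation.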
In this section, the proficiency of IAGWO is evaluated through a set of 19 engineering design challenges (EDC) sourced from the CEC 2020 real-world optimization benchmarks (Kumar et al. 2020). A concise summary of these challenges is presented in Table 12, including their dimensions (D), the count of inequality constraints (g), equality constraints (h), and the known optimal cost (fmin). The evaluation parameters are: a population size of 50, a maximum of 1000 iterations, and 30 independent runs for each challenge.
Table 13 is dedicated to enumerating the performance metrics of IAGWO. This table encompasses various metrics including the best cost achieved (Best), the average cost (Ave), the cost's standard deviation (Std), and performance symbols (W|T|L), representing the number of wins, ties, and losses, respectively. Additionally, the evaluation includes a comprehensive analysis of the mean performance of all the algorithms involved in the testing. It also presents a ranking of these methods, providing a clear and structured comparison of their overall effectiveness and highlights instances where IAGWO achieves optimal results.
The statistical analysis of these results demonstrates IAGWO's superior ability to solve these real-world engineering design challenges. In overall effectiveness, competing algorithms such as OMA, DMO, RGWO, and ENGWO trail behind IAGWO, underscoring its robustness and efficacy.
5 Summary and future directions
In this study, we introduced an enhanced version of GWO that addresses its inherent limitations and improves its efficacy on contemporary optimization challenges. The original GWO, while promising, suffers from slow convergence and limited adaptability to intricate, high-dimensional problem landscapes. Our enhanced variant, the Improved Adaptive Grey Wolf Optimizer (IAGWO), fortifies these capabilities in three ways. First, borrowing from Particle Swarm Optimization (PSO), we introduced a velocity component that accelerates convergence and lets the algorithm traverse solution spaces with greater agility. Second, a novel search mechanism augments the algorithm's exploration and exploitation capabilities, allowing it to navigate complex problem domains more efficiently. Third, we devised new inertia-weighting and position-updating strategies, combining a nonlinear inertia weight based on the Inverse Multiquadratic Function (IMF) with sigmoid-based adaptive techniques. These refinements work together with the core algorithm to improve its performance across diverse optimization landscapes.
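The interplay of these pieces can be sketched as a single population update: the classic three-leader GWO move supplies a target, and a PSO-style velocity, damped by an inverse-multiquadratic inertia weight, carries each wolf toward it. Everything below is an illustrative reconstruction under stated assumptions (the weight schedule `imf_weight`, the uniform velocity coefficient, and the function names are ours), not the authors' exact update equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def imf_weight(t, t_max, c=2.0):
    # Assumed inverse-multiquadratic schedule: decays smoothly from 1 toward 0,
    # giving large inertia (exploration) early and small inertia (exploitation) late.
    return 1.0 / np.sqrt(1.0 + (c * t / t_max) ** 2)

def iagwo_step(X, V, alpha, beta, delta, t, t_max):
    """One velocity-augmented grey-wolf move (illustrative sketch).

    X, V: (n_wolves, dim) positions and velocities
    alpha, beta, delta: the three best positions found so far, shape (dim,)
    """
    a = 2.0 * (1 - t / t_max)                       # classic GWO control parameter
    cand = np.zeros((3,) + X.shape)
    for i, lead in enumerate((alpha, beta, delta)):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        A, C = 2 * a * r1 - a, 2 * r2               # standard GWO coefficient vectors
        cand[i] = lead - A * np.abs(C * lead - X)   # leader-guided candidate position
    target = cand.mean(axis=0)                      # average of the three guided moves
    w = imf_weight(t, t_max)
    V = w * V + rng.random(X.shape) * (target - X)  # PSO-style velocity toward target
    return X + V, V
```

Replacing the direct position assignment of plain GWO with this velocity update is what lets the swarm retain momentum across iterations while the IMF weight gradually shifts the balance from exploration to exploitation.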
To validate IAGWO, we conducted rigorous experiments on 52 test functions drawn from established benchmark suites. Comparative analysis against eight prominent metaheuristic (MH) algorithms, including the original GWO and three of its variants, confirmed IAGWO's advantage in convergence speed and solution precision. The algorithm was further tested on 15 demanding large-scale global optimization problems, affirming its ability to handle high-dimensional complexity, and it compared favorably with previous winners of several CEC competitions. Finally, IAGWO's real-world applicability was validated on 19 diverse engineering design challenges, where it outperformed established algorithms and delivered practical solutions.
Despite these results, work on IAGWO is ongoing. Time-comparison analyses revealed room for improvement, particularly the computational overhead observed on certain test suites. Future work will therefore focus on reducing computational complexity without compromising search efficacy. Beyond academic benchmarks, IAGWO's utility extends to many real-world applications, including feature extraction, operations research, classification, and logistics, and we aim to apply it to these problems in future studies.
Availability of data and materials
Enquiries about data availability should be directed to the authors.
References
Abdel-Basset M, Mohamed R, Jameel M, Abouhawwash M (2023) Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl Based Syst 262:110248
Abdollahzadeh B, Gharehchopogh FS, Mirjalili S (2021) African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput Ind Eng 158:107408
Abdullah JM, Ahmed T (2019) Fitness dependent optimizer: inspired by the bee swarming reproductive process. IEEE Access 7:43473–43486
Abualigah L, Yousri D, Abd Elaziz M, Ewees AA, Al-qaness MAA, Gandomi AH (2021) Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput Ind Eng 157:107250
Abushawish A, Jarndal A (2021) Hybrid GWOCS optimization based parameter extraction method applied to GaN devices. In: 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS)
Ahrari A, Elsayed S, Sarker R, Essam D, Coello CAC (2022) Problem definition and evaluation criteria for the CEC'2022 competition on dynamic multimodal optimization. In: Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2022), Padua, Italy
Aldosari F, Abualigah L, Almotairi KH (2022) A normal distributed dwarf Mongoose Optimization Algorithm for global optimization and data clustering applications. Symmetry 14(5):1021
Ambika V, Lim S-J (2022) Hybrid image embedding technique using Steganographic Signcryption and IWT-GWO methods. Microprocess Microsyst 95:104688
Awad NH, Ali MZ, Suganthan PN (2017) Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In: 2017 IEEE Congress on Evolutionary Computation (CEC)
Bäck T, Schwefel H-P (1993) An overview of evolutionary algorithms for parameter optimization. Evol Comput 1(1):1–23
Banaie-Dezfouli M, Nadimi-Shahraki MH, Beheshti Z (2021) R-GWO: Representative-based grey wolf optimizer for solving engineering problems. Appl Soft Comput 106:107328
Bayraktar Z, Komurcu M, Werner DH (2010) Wind Driven Optimization (WDO): a novel nature-inspired optimization algorithm and its application to electromagnetics. In: 2010 IEEE Antennas and Propagation Society International Symposium
Biabani F, Shojaee S, Hamzehei-Javaran S (2022) A new insight into metaheuristic optimization method using a hybrid of PSO, GSA, and GWO. Structures 44:1168–1189
Chauhan D, Yadav A (2024b) A comprehensive survey on artificial electric field algorithm: theories and applications. Arch Comput Methods Eng
Chauhan D, Shivani, Cheng R (2024) Competitive swarm optimizer: a decade survey. Swarm Evol Comput 87:101543
Chauhan D, Yadav A (2023a) An Adaptive Artificial Electric Field Algorithm for Continuous Optimization Problems 40(9):e13380
Chauhan D, Yadav A (2023b) Optimizing the parameters of hybrid active power filters through a comprehensive and dynamic multi-swarm gravitational search algorithm. Eng Appl Artif Intell 123:106469
Chauhan D, Yadav A (2024a) An archive-based self-adaptive artificial electric field algorithm with orthogonal initialization for real-parameter optimization problems. Appl Soft Comput 150:111109
Chen W, Wang H, Liu Z, Jiang K (2023) Time-energy-jerk optimal trajectory planning for high-speed parallel manipulator based on quantum-behaved particle swarm optimization algorithm and quintic B-spline. Eng Appl Artif Intell 126:107223
Cheng M-Y, Sholeh MN (2023) Optical microscope algorithm: A new metaheuristic inspired by microscope magnification for solving engineering optimization problems. Knowl-Based Syst 279:110939
Cuong-Le T, Minh H-L, Sang-To T, Khatir S, Mirjalili S, Abdel Wahab M (2022) A novel version of grey wolf optimizer based on a balance function and its application for hyperparameters optimization in deep neural network (DNN) for structural damage identification. Eng Fail Anal 142:106829
Deng W, Xu J, Gao X-Z, Zhao H (2022) An enhanced MSIQDE algorithm with novel multiple strategies for global optimization problems. IEEE Trans Syst Man Cybern Syst 52(3):1578–1587
Dorigo M, Birattari M, Stützle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39
Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110:151–166
Fan X, Yu M (2022) Coverage optimization of WSN based on improved grey wolf optimizer. Comput Sci 49:628–631
Fu H, Shi H, Xu Y, Shao J (2022) Research on gas outburst prediction model based on multiple strategy fusion improved snake optimization algorithm with temporal convolutional network. IEEE Access 10:117973–117984
Fu S, Huang H, Ma C, Wei J, Li Y, Fu Y (2023a) Improved dwarf mongoose optimization algorithm using novel nonlinear control and exploration strategies. Expert Syst Appl 233:120904
Fu S, Li K, Huang H, Ma C, Fan Q, Zhu Y (2024a) Red-billed blue magpie optimizer: a novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif Intell Rev 57(6):134
Fu Y, Liu D, Chen J, He L (2024b) Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artif Intell Rev 57(5):123
Garg V, Deep K, Bansal S (2023) Improved Teaching Learning Algorithm with Laplacian operator for solving nonlinear engineering optimization problems. Eng Appl Artif Intell 124:106549
Guo H-W, Sang H-Y, Zhang X-J, Duan P, Li J-Q, Han Y-Y (2023) An effective fruit fly optimization algorithm for the distributed permutation flowshop scheduling problem with total flowtime. Eng Appl Artif Intell 123:106347
Gupta S, Deep K (2017) Hybrid grey wolf optimizer with mutation operator. In: International Conference on Soft Computing for Problem Solving (SocProS), IIT Bhubaneswar, Bhubaneswar, India
Gupta S, Deep K (2020) Enhanced leadership-inspired grey wolf optimizer for global optimization problems. Engineering with Computers 36(4):1777–1800
Hakli H, Kiran MS (2020) An improved artificial bee colony algorithm for balancing local and global search behaviors in continuous optimization. Int J Mach Learn Cybern 11(9):2051–2076
Havaei P, Sandidzadeh MA (2023) Multi-objective train speed profile determination for automatic train operation with conscious search: a new optimization algorithm, a comprehensive study. Eng Appl Artif Intell 119:105756
Hayyolalam V, Kazem AAP (2020) Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Eng Appl Artif Intell 87:103249
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: Algorithm and applications. Futur Gener Comput Syst 97:849–872
Hu X-G, Ho T-S, Rabitz H (1998) The collocation method based on a generalized inverse multiquadric basis for bound-state problems. Comput Phys Commun 113(2–3):168–179
Hu G, Du B, Wang X, Wei G (2022) An enhanced black widow optimization algorithm for feature selection. Knowl Based Syst 235:107638
Jangir P, Jangir N (2018) A new Non-Dominated Sorting Grey Wolf Optimizer (NS-GWO) algorithm: Development and application to solve engineering designs and economic constrained emission dispatch problem with integration of wind power. Eng Appl Artif Intell 72:449–467
Jia H, Peng X, Lang C (2021) Remora optimization algorithm. Expert Syst Appl 185:115665
Jia H, Rao H, Wen C, Mirjalili S (2023) Crayfish optimization algorithm. Artif Intell Rev 56(Suppl 2):1919–1979
Kaveh A, Farhoudi N (2013) A new optimization method: Dolphin echolocation. Adv Eng Softw 59:53–70
Kennedy J, Eberhart R (1995a) Particle swarm optimization. In: Proceedings of ICNN'95 – International Conference on Neural Networks
Kennedy J, Eberhart R (1995b) Particle swarm optimization (PSO). In: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia
Kumar A, Wu G, Ali MZ, Mallipeddi R, Suganthan PN, Das S (2020) A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol Comput 56:100693
Li X, Tang K, Omidvar MN, Yang Z, Qin K (2013) Benchmark functions for the CEC 2013 special session and competition on large-scale global optimization. Technical report, RMIT University
Li Y, Han T, Zhou H, Tang S, Zhao H (2022) A novel adaptive L-SHADE algorithm and its application in UAV swarm resource configuration problem. Inf Sci 606:350–367
Li K, Huang H, Fu S, Ma C, Fan Q, Zhu Y (2023) A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput Methods Appl Mech Eng 415:116199
Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Comput Intell Lab 635(2):1–32
Liang J-J, Qu B, Gong D, Yue C (2019) Problem definitions and evaluation criteria for the CEC 2019 special session on multimodal multiobjective optimization. Zhengzhou University, Computational Intelligence Laboratory, pp 1–26
Liu Z, He L, Yuan L, Zhang H (2022) Path Planning of Mobile Robot Based on TGWO Algorithm. Hsi-Chiao Tung Ta Hsueh/J. Xi’an Jiaotong Univ 56:49–60
Liu Y, Jiang Y, Zhang X, Pan Y, Wang J (2023) An improved grey wolf optimizer algorithm for identification and location of gas emission. J Loss Prev Process Ind 82:105003
Liu YY, Sun JH, Yu HY, Wang YY, Zhou XK (2020) An improved grey wolf optimizer based on differential evolution and OTSU algorithm. Appl Sci-Basel 10(18)
Luo Y, Qin Q, Hu Z, Zhang Y (2023) Path planning for unmanned delivery robots based on EWB-GWO algorithm. Sensors 23(4):1867
Mallipeddi R, Suganthan PN (2010) Problem definitions and evaluation criteria for the CEC 2010 competition on constrained real-parameter optimization. Nanyang Technological University, Singapore 24:1–17
Meidani K, Hemmasian A, Mirjalili S, Barati Farimani A (2022) Adaptive grey wolf optimizer. Neural Comput Appl 34(10):7711–7731
Meng X, Jiang J, Wang H (2021) AGWO: Advanced GWO in multi-layer perception optimization. Expert Syst Appl 173:114676
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Mohakud R, Dash R (2022) Skin cancer image segmentation utilizing a novel EN-GWO based hyper-parameter optimized FCEDN. J King Saud Univ Comput Inf Sci 34(10):9889–9904
Mohamed AW, Hadi AA, Fattouh AM, Jambi KM (2017) LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In: 2017 IEEE Congress on Evolutionary Computation (CEC)
Mohammed H, Rashid T (2023) FOX: a FOX-inspired optimization algorithm. Appl Intell 53(1):1030–1050
Mohammed H, Abdul Z, Hamad Z (2024) Enhancement of GWO for solving numerical functions and engineering problems. Neural Comput Appl 36(7):3405–3413
Muthiah-Nakarajan V, Noel MM (2016) Galactic Swarm Optimization: a new global optimization metaheuristic inspired by galactic motion. Appl Soft Comput 38:771–787
Nadimi-Shahraki MH, Taghian S, Mirjalili S (2021) An improved grey wolf optimizer for solving engineering problems. Expert Syst Appl 166:113917
Nadimi-Shahraki MH, Taghian S, Mirjalili S, Zamani H, Bahreininejad A (2022) GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J Comput Sci 61:101636
Nadimi-Shahraki MH, Taghian S, Zamani H, Mirjalili S, Elaziz MA (2023) MMKE: Multi-trial vector-based monkey king evolution algorithm and its applications for engineering optimization problems. PLoS ONE 18(1):e0280006
Pan W-T (2012) A new Fruit Fly Optimization Algorithm: Taking the financial distress model as an example. Knowl Based Syst 26:69–74
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: A Gravitational Search Algorithm. Inf Sci 179(13):2232–2248
Rathan S, Shah D, Kumar TH, Charan KS (2023) Adaptive IQ and IMQ-RBFs for solving initial value problems: Adam-Bashforth and Adam-Moulton methods. arXiv preprint arXiv:2302.06113
Said R, Elarbi M, Bechikh S, Coello Coello CA, Said LB (2023) Discretization-based feature selection as a bilevel optimization problem. IEEE Trans Evol Comput 27(4):893–907
Saka MP, Hasançebi O, Geem ZW (2016) Metaheuristics in structural optimization and discussions on harmony search algorithm. Swarm Evol Comput 28:88–97
Şenel FA, Gökçe F, Yüksel AS, Yiğit T (2019) A novel hybrid PSO–GWO algorithm for optimization problems. Eng Comput 35(4):1359–1373
Shayanfar H, Gharehchopogh FS (2018) Farmland fertility: A new metaheuristic algorithm for solving continuous optimization problems. Appl Soft Comput 71:728–746
Shehadeh HA (2023) Chernobyl disaster optimizer (CDO): a novel meta-heuristic method for global optimization. Neural Comput Appl 35(15):10733–10749
Singh S, Bansal JC (2022a) Mutation-driven grey wolf optimizer with modified search mechanism. Expert Syst Appl 194:116450
Soliman MA, Hasanien HM, Turky RA, Muyeen SM (2022) Hybrid African vultures–grey wolf optimizer approach for electrical parameters extraction of solar panel models. Energy Rep 8:14888–14900
Storn R, Price K (1997) Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
Tanabe R, Fukunaga AS (2014) Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE Congress on Evolutionary Computation (CEC)
Tripathy B, Reddy Maddikunta PK, Pham Q-V, Gadekallu TR, Dev K, Pandya S, ElHalawany BM (2022) Harris hawk optimization: a survey on variants and applications. Comput Intell Neurosci 2022
Wang Q, Xu J, Zhang W, Mao M, Wei Z, Wang L, Cui C, Zhu Y, Ma J (2018) Research progress on vanadium-based cathode materials for sodium ion batteries. J Mater Chem A 6(19):8815–8838
Wei G (2012) Study on genetic algorithm and evolutionary programming. In: 2nd IEEE International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India
Wilcoxon F (1945) Individual comparisons by ranking methods. Biom Bull 1(6):80–83
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Xia X, Fu X, Zhong S, Bai Z, Wang Y (2023) Gravity particle swarm optimization algorithm for solving shop visit balancing problem for repairable equipment. Eng Appl Artif Intell 117:105543
Xue J, Shen B (2020) A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng 8:22–34
Xue J, Shen B (2022) Dung beetle optimizer: a new meta-heuristic algorithm for global optimization. J Supercomput
Yang X-S (2009) Firefly algorithms for multimodal optimization. In: International Symposium on Stochastic Algorithms
Yapici H, Cetinkaya N (2019) A new meta-heuristic optimizer: Pathfinder algorithm. Appl Soft Comput 78:545–568
Yu X, Jiang N, Wang X, Li M (2023) A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst Appl 215:119327
Yuan Y, Ren J, Wang S, Wang Z, Mu X, Zhao W (2022) Alpine skiing optimization: A new bio-inspired optimization algorithm. Adv Eng Softw 170:103158
Zhang MJ, Long DY, Wang X, Yu LZ, Wu JW, Li DH, Yang J (2019) Improved grey wolf algorithm based on nonlinear control parameter strategy. In: Chinese Automation Congress (CAC), Hangzhou, China
Zhou Y, He X, Chen Z, Jiang S (2022) A neighborhood regression optimization algorithm for computationally expensive optimization problems. IEEE Trans Cybern 52(5):3018–3031
Acknowledgements
There is no acknowledgement involved in this work.
Funding
This work was supported by Natural Science Foundation of Tianjin Municipality (21JCYBJC00110) and China Postdoctoral Science Foundation (2023M731803).
Author information
Authors and Affiliations
Contributions
MY: conceptualization, methodology, writing—original draft, formal analysis, data curation, writing—review & editing, software. WL: visualization, formal analysis, writing—review & editing. JX: conceptualization, resources, supervision, formal analysis. YQ: software, writing—review & editing, resources. SB: methodology, visualization resources, software. LT: visualization resources.
Corresponding author
Ethics declarations
Competing interests
The authors have no relevant financial or non-financial interests to disclose.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent
This article does not contain any studies with human participants; therefore, informed consent is not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yu, M., Xu, J., Liang, W. et al. Improved multi-strategy adaptive Grey Wolf Optimization for practical engineering applications and high-dimensional problem solving. Artif Intell Rev 57, 277 (2024). https://doi.org/10.1007/s10462-024-10821-3