1 Introduction

The rapid advancement of science, technology, and industry has given rise to a multitude of intricate optimization problems. These problems frequently entail numerous variables, constraints, and objectives. Their solution spaces are huge and complex, and it is difficult for traditional deterministic optimization methods to obtain satisfactory solutions in acceptable time (Deng et al. 2022; Guo et al. 2023; Zhou et al. 2022). To cope with these challenges, researchers in the field of computational intelligence have started to search for new approaches. Among them, metaheuristic algorithms have attracted much attention due to their high efficiency, universal applicability and powerful global search capability (Aldosari et al. 2022; Chauhan et al., 2024; Chen et al. 2023).

When dealing with engineering problems, constraints are a crucial consideration. Constraints may be physical limitations or project requirements and restrictions, and various techniques exist for handling them so that projects proceed as expected and achieve their intended goals. One common approach is to formulate the task as a constrained optimization problem and solve it with an optimization algorithm, which helps engineers find the best solution under the given constraints. Such algorithms may be mathematical optimization methods, such as linear programming, integer programming, or nonlinear programming, or heuristic algorithms, such as genetic algorithms, simulated annealing, or particle swarm optimization. By leveraging these algorithms, engineers can find the optimal design or decision while taking various constraints into account (Fu et al. 2024b; Li et al. 2023).

MH algorithms are inspired by phenomena in nature; examples include PSO (Kennedy and Eberhart 1995b), the Firefly Algorithm (FA) (Yang 2009), the Sine Cosine Algorithm (SCA) (Mirjalili 2016), Wind Driven Optimization (WDO) (Bayraktar et al. 2010), the Fruit Fly Optimization Algorithm (FOA) (Pan 2012), the Competitive Swarm Optimizer (Chauhan et al. 2024), the Fox Optimizer (FOX) (Mohammed and Rashid 2023), and the Fitness Dependent Optimizer (FDO) (Abdullah and Ahmed 2019). These algorithms generally do not rely on the specific nature of the problem; instead, they draw on nature's strategies for stochastic search, which helps them avoid falling into local optima (Abdel-Basset et al. 2023). With the development of deep learning, neural networks, and other machine learning techniques, researchers have begun to combine these techniques with metaheuristic algorithms to further improve the efficiency of solving complex optimization problems (Garg et al. 2023). In recent years, with the wide application of heuristic intelligent optimization algorithms in numerical optimization, a variety of swarm intelligence algorithms have been proposed (Fu et al. 2022).

The popularity of MH algorithms rests on four distinct advantages: practicality, generalizability, derivation-free operation, and avoidance of local optima (Fu et al. 2023a). First, owing to their nature-inspired theoretical frameworks, these methods are relatively intuitive to construct and deploy, allowing engineers and researchers to integrate them rapidly into concrete applications (Havaei & Sandidzadeh 2023). Second, since these algorithms treat the problem as a black box, they can be applied to a wide range of tasks, such as selection (Said, Elarbi, Bechikh, Coello Coello, & Said, 2023), shop visit balancing (Xia et al. 2023), and engineering problems (Nadimi-Shahraki et al. 2022). Third, these methods do not rely on derivative information and are therefore particularly suitable for nonlinear problems (Aldosari et al. 2022). Finally, with the help of a global search strategy and stochastic position updates, they can efficiently escape local optima, which is particularly effective in scenarios with multiple locally optimal solutions.

Existing MH algorithms fall into four main categories: physics-based algorithms (PhA), swarm intelligence (SI) algorithms, evolutionary algorithms (EA), and human-based algorithms (Abualigah et al. 2021). Evolutionary algorithms mimic the cooperative behaviors among individuals that natural selection has shaped over long periods of evolution. For example, Trojovský and Dehghani (2022) proposed the pelican optimization algorithm (POA), inspired by pelican predation. The Genetic Algorithm (GA), proposed by John Holland and his colleagues, is a typical example inspired by Darwinian evolution (Bäck & Schwefel 1993). Other representatives include Differential Evolution (DE), based on the Darwinian concepts of natural selection and reproduction (Storn and Price 1997); Genetic Programming (GP), inspired by biological evolution; and Evolution Strategies (ES) (Wei 2012). Among these, GA and DE are widely considered the most popular evolutionary algorithms and have been applied in numerous domains. Physics-based methods model physical laws and chemical processes. For example, the Chernobyl Disaster Optimizer (CDO) is inspired by the core explosion at the Chernobyl nuclear power plant (Shehadeh 2023); the Galaxy Swarm Optimization (GSO) algorithm is inspired by the motion of galaxies (Muthiah-Nakarajan and Noel 2016); the Farmland Fertility Algorithm (FFA) draws inspiration from soil fertility in agriculture (Shayanfar and Gharehchopogh 2018); the Water Cycle Algorithm (WCA) is inspired by the water cycle in nature (Eskandar et al. 2012); and the Gravitational Search Algorithm (GSA) is derived from Newton's law of universal gravitation and the laws of motion (Rashedi et al. 2009). In contrast, human-based algorithms simulate human behavior patterns; for instance, Alpine Skiing Optimization (ASO), proposed by Professor Yuan, is a new idea influenced by the competitive behavior of athletes (Yuan et al. 2022). Each metaheuristic algorithm has its own characteristics, and an appropriate algorithm can be selected according to the problem and its requirements. Swarm intelligence algorithms are equally numerous. Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995a) is inspired by the foraging behavior of bird flocks and fish schools. The Ant Colony Optimization (ACO) algorithm (Dorigo et al. 2006) is inspired by the social behavior of foraging ant colonies. The Pathfinder Algorithm (PFA) (Yapici & Cetinkaya 2019) is inspired by the collective action of animal populations searching for optimal food areas or prey. The Harris Hawks Optimization algorithm (HHO) (Heidari et al. 2019) is based on the predatory process of Harris hawks hunting rabbits. The Sparrow Search Algorithm (SSA) (Xue and Shen 2020) is inspired by the foraging and anti-predation behavior of sparrows. The Dung Beetle Optimization algorithm (DBO) (Xue and Shen 2022) is inspired by the rolling, dancing, foraging, stealing, and reproductive behaviors of dung beetles. The Remora Optimization Algorithm (ROA) (Jia et al. 2021) is inspired by remoras adhering to hosts of different sizes to facilitate foraging. The Black Widow Optimization algorithm (BWO) (Hayyolalam and Kazem 2020) is inspired by the unique reproductive behavior of black widow spiders. Chauhan and Yadav proposed learning-strategy-based variants of the Artificial Electric Field Algorithm (AEFA) (Chauhan and Yadav 2024a, b). Additionally, the Secretary Bird Optimization Algorithm (SBOA) was introduced based on the survival behavior of secretary birds in their natural environment (Fu et al. 2024b), while the Red-Billed Blue Magpie Optimizer (RBMO) was proposed by simulating the search, chase, prey-attack, and food-storage behaviors of red-billed blue magpies (Fu et al. 2024a).

Generally, the optimization process of an MH algorithm can be divided into two main steps (Saka et al. 2016): exploration and exploitation. In the exploration phase, the algorithm focuses on searching all corners of the solution space to ensure that no potentially optimal region is missed; in the exploitation phase, it concentrates on known high-quality solutions and deepens the search around them in order to find the true optimal solution. The two phases complement each other, giving the algorithm both breadth and depth. GWO is inspired by the hunting behavior of grey wolves (Mirjalili et al. 2014). It balances exploration and exploitation by combining the social behavior of grey wolves with a dynamically adjusted position-update strategy, thereby ensuring good global and local search capabilities.

Since its introduction in 2014, GWO has received widespread attention for its simplicity and efficiency and has become an important tool for solving complex optimization problems (Fan and Yu 2022). However, like other optimization algorithms, GWO has limitations despite its good performance on many problems. In particular, it is prone to premature convergence and local optima when dealing with multimodal functions. As the iterations progress, the inherent social hierarchy of the wolf pack reduces population diversity: the positions and decisions of the leading wolves (Alpha, Beta, and Delta) dominate the movement of the entire pack, so the population converges towards the leaders' positions. This strong hierarchy-driven convergence becomes a drawback when the pack aggregates too closely or blindly around the leaders' current positions. This phenomenon, often referred to as premature convergence, limits the algorithm's ability to explore the solution space thoroughly; because the leaders' current best solutions do not always represent the global optimum, the pack can become trapped in local optima, lacking the diversity and exploratory behavior needed to discover better solutions elsewhere in the search space (Wang et al. 2018). In addition, when global exploration transitions to local exploitation, the algorithm may lose the ability to explore a wider solution space and concentrate excessively on a specific region. Such a centralized strategy, although helpful for accurately locating the optimum within a local region, may lead the algorithm to ignore other promising regions (Wolpert and Macready 1997). Although various GWO variants exist, such as the Advanced Grey Wolf Optimizer (AGWO) (Meng et al. 2021), Exponential Neighborhood Grey Wolf Optimization (EN-GWO) (Mohakud and Dash 2022), and the Hybrid Grey Wolf Optimizer with Mutation Operator (DE-GWO) (Gupta and Deep 2017), among others (Ambika et al. 2022; Biabani et al. 2022), these improved versions achieve no breakthrough on the large-scale global optimization problems of CEC 2022 and CEC 2013, and their performance on complex problems remains unsatisfactory.

To improve the performance of the GWO, this study incorporates several key enhancements. First, the search mechanism from PSO is employed to increase population diversity, which broadens the search scope of the algorithm. Second, the Inverse Multiquadric Function (IMF) is used to adjust the inertia weights, a strategy that helps fine-tune the balance between exploration and exploitation. Finally, an adaptive mechanism based on the Sigmoid function is introduced for updating the positions of individuals within the population. This adaptive update strategy strengthens the population's ability to escape local optima, enhancing the overall effectiveness of GWO in finding optimal solutions.

An improved adaptive grey wolf optimization (IAGWO) is proposed to address the shortcomings of the GWO algorithm. The main contributions are as follows.

1) The PSO search mechanism is introduced to enhance the algorithm's search efficiency and robustness by updating grey wolf positions early in each iteration. Additionally, the dynamic adjustment of inertia weights through the IMF boosts global search capability initially and local search effectiveness later.

2) An adaptive position-updating strategy based on the Sigmoid function is proposed to balance the exploration and exploitation of IAGWO.

3) To evaluate the exploration and exploitation capabilities of IAGWO, extensive experiments are conducted on a suite of 67 test functions, including benchmarks from CEC 2014, CEC 2017, CEC 2020, and CEC 2022, as well as the CEC 2013 large-scale global optimization problems.

4) The effectiveness and accuracy of IAGWO in solving practical engineering design challenges are thoroughly assessed through its application to 19 diverse engineering design problems.

The paper is organized as follows: Sect. 2 provides a brief review of the previous enhancements and potential application directions of the GWO. Section 3 details the original GWO algorithm and the proposed improvement strategy. Section 4 evaluates IAGWO performance through relevant experiments and in-depth analysis. Finally, Sect. 5 concludes this paper with a summary of the results and an outlook on future research directions.

2 Related work

In recent years, there has been a significant focus among researchers on enhancing the GWO. These improvements are aimed at boosting the algorithm's search performance and effectiveness. Scholars have explored various approaches to achieve this, including aspects such as adjusting the algorithm parameters, improving the speed and position equations, and combining it with other algorithms.

Yu et al. (2023) adopted a new update search mechanism, improved control parameters, a mutation-driven strategy, and a greedy selection strategy to improve the GWO search process. Singh and Bansal (2022a) proposed a hybrid GWO and Differential Evolution (HGWODE) algorithm and applied it to UAV path planning. Cuong-Le et al. (2022) introduced an equation to control the moving strategy of the algorithm in each iteration and proposed the New Balance Grey Wolf Optimizer (NB-GWO), which was used to optimize the hyperparameters of a deep neural network for damage detection in two-dimensional concrete frames. Liu et al. (2023) proposed a hybrid differential evolution GWO (DE-GWO) algorithm and applied it to gas emission identification and localization. Luo et al. (2023) introduced the butterfly optimization algorithm, an elite-strategy opposition-based learning method, an adaptive nonlinear inertia weight strategy, and a random walk law to remedy the slow convergence and low accuracy of GWO on high-dimensional complex problems. To address the premature convergence that the classic GWO encounters in some situations due to the stagnation of sub-optimal solutions, Gupta and Deep (2020) introduced an enhanced leadership-inspired grey wolf optimizer for global optimization problems (GLF-GWO). Addressing GWO's slow convergence and insufficient global exploration, which can cause it to settle in local optima and fail to reach the global optimum, Singh and Bansal (2022b) proposed a novel mutation-driven modified grey wolf optimizer (MDM-GWO), which integrates new update search mechanisms, modified control parameters, mutation-driven schemes, and greedy selection methods into the GWO search process. To counter GWO's slow convergence and susceptibility to local optima, Zhang et al. (2019) proposed a nonlinear control parameter strategy based on a sinusoidal function (GWO-SIN) and a nonlinear control parameter combination strategy (GWO-COM).

Soliman et al. (2022) proposed a novel hybrid African vultures–grey wolf optimizer (AV–GWO) to precisely estimate the electrical parameters of such a TDM. Nadimi-Shahraki et al. (2021) introduced an enhanced variant of the Grey Wolf Optimization algorithm, termed I-GWO, based on a dimension learning-based hunting and searching (DLH) strategy that constructs a hunting neighborhood for each wolf and lets wolves share information about neighboring domains with each other. This enhances the algorithm's local and global search capabilities for more balanced performance while also helping to maintain population diversity. Abushawish and Jarndal (2021) jointly proposed a new hybrid algorithm, GWO-CS, that combines the advantages of the Cuckoo Search (CS) algorithm and GWO; it primarily incorporates the position update equation from CS to further refine the global search process of GWO. To address the poor stability of GWO and its tendency to fall into local optima, Liu et al. (2020) proposed an improved GWO based on the differential evolution (DE) algorithm and the OTSU algorithm (DE-OTSU-GWO), combining multithreshold OTSU, Tsallis entropy, and DE with GWO. The multithreshold OTSU algorithm computes the fitness of the initial population, making the initial stage basically stable; Tsallis entropy computes fitness quickly; the population is updated by the GWO and DE algorithms through Tsallis-entropy-based crossover steps; and the DE component helps GWO escape local optima. To address GWO's susceptibility to local optima and its low exploration capability, Hardi Mohammed et al. proposed the Enhanced GWO (EGWO) (Mohammed et al. 2024), which employs diverse methods, utilizing gamma, the z-position, and the golden ratio, to improve the performance of GWO.

Liu et al. (2022) introduced a novel improvement strategy for the GWO algorithm, known as the exponential convergence factor improvement strategy. This strategy is designed to more accurately simulate the actual search process of grey wolves. It incorporates dynamic weighting factors and enhances control parameters to reduce the likelihood of the GWO algorithm getting stuck in local optima. However, despite these improvements, experimental findings indicate that GWO still faces challenges in accurately handling high-dimensional functions. Şenel et al. (2019) integrated a differential disturbance operator into the GWO algorithm. This addition brought an element of exploration into the exploitation phase, thereby enhancing the GWO algorithm's overall optimization capabilities. Jangir and Jangir (2018) proposed a multi-objective version of the GWO algorithm, named NSGWO. This algorithm utilizes a crowding distance mechanism to select the optimal solution from a set of Pareto optimal solutions. This approach helps guide the search towards the dominant region in multi-objective search spaces. NSGWO was tested on a variety of standard unconstrained, constrained, and engineering design challenges, demonstrating its efficiency and effectiveness in diverse optimization scenarios.


3 Methodology Overview: Standard GWO and Proposed Enhancements

This section offers an overview of the hunting behavior and the mathematical model that forms the foundation of the original GWO. Additionally, we introduce the IAGWO, our proposed enhancement to GWO. IAGWO integrates the PSO search mechanism, the IMF strategy for inertia weighting, and an adaptive strategy for updating positions. These additions aim to refine and boost the efficiency of the original GWO algorithm.

3.1 The standard GWO

3.1.1 Inspiration from grey wolf packs’ hunting behavior

The GWO algorithm draws inspiration from the hunting behavior of grey wolf packs. It mathematically simulates the way a group of grey wolves hunts, encircles, and targets its prey while adhering to a well-defined social hierarchy. In this hierarchy, the pack is led by three primary wolves: the Alpha (α), Beta (β), and Delta (δ), each playing a crucial role in guiding the pack's movements and decisions. These wolves are the leaders, showcasing significant leadership abilities. Below them are the Omega (ω) wolves, who occupy a subordinate role and follow the directives of the leading wolves. This hierarchical structure, integral to the functioning of the GWO algorithm, is depicted in Fig. 1.

Fig. 1
figure 1

Hierarchy of the grey Wolf Pack

3.1.2 Mathematical model: GWO

GWO simulates grey wolf leadership and hunting mechanisms by dividing the wolves, according to their roles, into a leader α, who rules over the entire pack; a facilitator β, who helps α make decisions and replaces α when α dies; and an enforcer δ, who follows the orders of α and β (Fan and Yu 2022). GWO searches for the optimum by modeling the wolf-pack hunting process. Besides the social hierarchy, group hunting is another interesting social behavior of grey wolves, and it proceeds in three main phases. First, in the "tracking, chasing, and approaching prey" phase, each wolf searches for potential solutions in the solution space and adjusts its position through search strategies to approach promising candidate solutions. Next, in the "pursuing, encircling, and harassing prey until it stops moving" phase, the pack collaborates to corner the prey into a smaller area and prevent its escape. Finally, in the "attacking prey" phase, once the prey is cornered and unable to escape, the wolves concentrate their attack, gradually refining the positions of candidate solutions until the optimal solution is found or a stopping criterion is met. These three phases correspond to the algorithm's progression from search to attack in the solution space, analogous to the behavior of a grey wolf pack during hunting, gradually optimizing and approaching the optimal solution.

The calculation steps of the basic GWO algorithm and its pseudo-code (Algorithm 1) are as follows:

1) Determine the population size N, the maximum number of iterations M, the dimension dim of each grey wolf, and the parameters ɑ, A, and C; initialize each member using Eq. (1);

$$X=\left(UB-LB\right)\times phi+LB$$
(1)

where LB and UB are the lower and upper boundaries of the solution space, respectively, X represents the position of the current solution, and phi is a random number in [0, 1].

2) Calculate the fitness value of each individual using the test function. Then, based on the magnitude of the fitness values, select the best-fit individual as the α-wolf, the second-best individual as the β-wolf, and the third-best individual as the δ-wolf;

3) The mathematical model of the leading wolves tracking prey is given in Eq. (2), which computes the distance differences between the leading wolves and the rest of the pack. From these, the traction directions of the pack, i.e., its movement direction information, are calculated as shown in Eqs. (3) and (4). Update the current grey wolf position according to Eqs. (2)–(4).

$$\left\{ \begin{gathered} {D_\alpha } = |{C_1}{X_\alpha } - X| \hfill \\ {D_\beta } = |{C_2}{X_\beta } - X| \hfill \\ {D_\delta } = |{C_3}{X_\delta } - X| \hfill \\ \end{gathered} \right.$$
(2)
$$\left\{ \begin{gathered} {X_1} = {X_\alpha } - {A_1}{D_\alpha } \hfill \\ {X_2} = {X_\beta } - {A_2}{D_\beta } \hfill \\ {X_3} = {X_\delta } - {A_3}{D_\delta } \hfill \\ \end{gathered} \right.$$
(3)
$$X(t + 1) = \frac{{{X_1} + {X_2} + {X_3}}}{3}$$
(4)

where Dα, Dβ, and Dδ denote the distance differences between the α-wolf, β-wolf, and δ-wolf and the other individuals, respectively; Xα, Xβ, and Xδ indicate the current positions of the α-wolf, β-wolf, and δ-wolf; X indicates the current position of an individual; C1, C2, and C3 are coefficient vectors satisfying Eq. (6); A1, A2, and A3 are random vectors satisfying Eq. (5); X1, X2, and X3 are the traction directions of the three leading wolves; and X(t + 1) represents the next collective movement position of the wolf pack. As shown in Fig. 2, the final position of a wolf in the search space lies randomly within a circle defined by the positions of α, β, and δ. This illustrates how the leaders' positions influence the movement and direction of the entire pack in pursuit of the prey.

Fig. 2
figure 2

Position update of wolf groups in GWO algorithm

4) Update ɑ, A and C according to Eqs. (5)–(7);

$$A = 2a \cdot {r_1} - a$$
(5)
$$C = 2{r_2}$$
(6)
$$a = 2 - \frac{2t}{M}$$
(7)

where, the parameter ɑ plays a crucial role in balancing global search and local exploration. Its value is set to decrease linearly from 2 to 0 over the course of the algorithm's iterations. Initially, a higher value of ɑ aids in the global convergence of the algorithm, guiding the wolf pack swiftly towards the region where the optimal solution might be found. As the algorithm progresses through its later iterations, the gradual decrease in the value of ɑ facilitates more refined exploration in the area of the optimal solution. This helps improve the convergence accuracy of the GWO algorithm, ensuring a more precise final result. r1 and r2 are random vectors and r1, r2 ∈ [0, 1].

5) Update the positions of the other individuals, calculate the updated fitness values based on the new positions, and update the α-wolf, β-wolf, δ-wolf, and the global optimal solution, where \(R\) represents the position vector of the optimization target;

6) Judge whether the specified stopping condition is reached (e.g., the maximum number of iterations). If not, repeat steps 2 to 5; otherwise, output the optimal result: the final position of the α-wolf is the optimal solution, and its fitness value measures the quality of that solution.
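As a rough illustration only, steps 1)–6) and Eqs. (1)–(7) can be sketched in Python as below; the function and variable names are ours, not the authors' implementation, and boundary clipping is an added assumption:

```python
import numpy as np

def gwo(fitness, lb, ub, dim, n_wolves=30, max_iter=500):
    """Minimal sketch of the standard GWO loop (Eqs. (1)-(7))."""
    rng = np.random.default_rng(0)
    # Eq. (1): initialize positions uniformly in [LB, UB]
    X = (ub - lb) * rng.random((n_wolves, dim)) + lb
    fit = np.apply_along_axis(fitness, 1, X)
    order = np.argsort(fit)
    alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()

    for t in range(max_iter):
        a = 2 - 2 * t / max_iter                  # Eq. (7): a decreases linearly 2 -> 0
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                # Eq. (5)
                C = 2 * r2                        # Eq. (6)
                D = np.abs(C * leader - X[i])     # Eq. (2): distance to a leader
                X_new += leader - A * D           # Eq. (3): pull toward that leader
            X[i] = np.clip(X_new / 3, lb, ub)     # Eq. (4), kept inside the bounds
        # Re-evaluate fitness and refresh the three leading wolves
        fit = np.apply_along_axis(fitness, 1, X)
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
    return alpha, fitness(alpha)
```

For instance, minimizing a sphere function with this sketch drives the α-wolf close to the origin within a few hundred iterations.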

Algorithm 1:
figure a

GWO

3.2 Improved grey wolf optimization algorithm

3.2.1 PSO search mechanism

The GWO (Grey Wolf Optimizer) exhibits weak exploratory capability in its early stages and lacks population diversity, which results in suboptimal solution quality. To enhance exploration, improve population diversity (Hu et al. 2022), and increase solution quality (Hakli and Kiran 2020), this study integrates PSO, introducing a velocity concept that provides a new search mechanism for GWO. The positions of individual grey wolves are updated during the early iterations, and the velocity update introduces additional randomness. This prevents premature convergence and encourages exploration of new areas, thereby increasing population diversity. By dynamically adjusting the velocity and position of each individual, the method helps balance global exploration and local exploitation more effectively, leading to a wider search in the early iterations and assisting in identifying potential high-quality solutions. The update is computed as in Eq. (8):

$$\left\{\begin{array}{c}{v}_{rand}\left(t\right)={v}_{rand}\left(t\right)+phi*\left({X}_{best}-X\right)+phi*\left({X}_{selfbest}-X\right)\\ X=X+{v}_{rand}\left(t\right)\end{array}\right.$$
(8)

where t represents the current iteration number; X and \({X}_{best}\) represent the positions of the current solution and the best-performing solution, respectively; \({v}_{rand}\left(t\right)\) is the velocity vector of the current solution at iteration t; phi is a random number in [0, 1]; and \({X}_{selfbest}\) is the best position in the history of the current solution.

In this study, at the start of each iteration, the PSO updating strategy is employed, with extra randomness added to stimulate a more extensive global search. This helps avoid local optima and increases population diversity; it not only accumulates more diverse, higher-quality search experience for the GWO but also balances global exploration and local exploitation more effectively by dynamically adjusting the search behavior.
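A minimal sketch of the Eq. (8) update, assuming the population is stored as NumPy arrays (all names are illustrative; here phi is drawn per element, whereas the paper simply specifies a random number in [0, 1]):

```python
import numpy as np

def pso_style_update(X, V, X_best, X_selfbest, rng):
    """Sketch of the PSO-inspired velocity update of Eq. (8).

    X          : (N, D) current positions
    V          : (N, D) per-individual velocity vectors
    X_best     : (D,)   best position found by the population
    X_selfbest : (N, D) personal-best position of each individual
    """
    phi1 = rng.random(X.shape)
    phi2 = rng.random(X.shape)
    V = V + phi1 * (X_best - X) + phi2 * (X_selfbest - X)  # Eq. (8), first row
    X = X + V                                              # Eq. (8), second row
    return X, V
```

Applied at the start of each iteration, this moves every wolf towards a random blend of the global and personal bests before the GWO leader-based update takes over.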

3.2.2 IMF inertia weighting strategy

The Inverse Multiquadric Function (IMF) is a decreasing function based on the inverse-multiquadric principle; it is often used for regularization in neural networks and as a kernel function in support vector machines (Hu et al. 1998; Rathan et al. 2023). Exploiting these characteristics, this paper incorporates the IMF into the population position-update mechanism of GWO given in Eq. (3). The IMF inertia weight ω and the revised formulae for the wolf-pack update are given in Eqs. (9)–(10).

$$\omega =a\cdot {e}^{(-b\cdot {e}^{(-c\cdot t)})}+d$$
(9)
$$\left\{\begin{array}{c}{X}_{1}={X}_{\alpha }-{\omega A}_{1}{D}_{\alpha }\\ {X}_{2}={X}_{\beta }-\omega {A}_{2}{D}_{\beta }\\ {X}_{3}={X}_{\delta }-{\omega A}_{3}{D}_{\delta }\end{array}\right.$$
(10)

where the parameter group [a, b, c, d] is taken as [0.6, 0.02, 0.05, 0.3], and the curve of ω is shown in Fig. 3. As indicated by Fig. 3, during the early-to-mid phases of the iteration, the inertia weight ω takes a relatively high value; the stronger influence of the α-wolf, β-wolf, and δ-wolf on the updated positions helps the pack converge quickly towards the optimal solution and prevents search resources from being wasted on blind searching, thus enhancing the quality of the pack. As the search progresses to the mid and late stages and the pack becomes densely concentrated, if the higher-ranking wolves become trapped in a local optimum, the lower-ranking wolves they lead cannot escape it either. At this point, ω should be reduced to a lower level, enlarging the pack's autonomous search capability and avoiding premature convergence.
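The weight of Eq. (9) and the ω-scaled leader pull of Eq. (10) might be sketched as follows (function names are ours; the parameter defaults follow the paper's [a, b, c, d] group):

```python
import math

def imf_weight(t, a=0.6, b=0.02, c=0.05, d=0.3):
    """IMF inertia weight of Eq. (9) with the paper's parameter group
    [a, b, c, d] = [0.6, 0.02, 0.05, 0.3]; t is the iteration number."""
    return a * math.exp(-b * math.exp(-c * t)) + d

def weighted_pull(leader_pos, A, D, omega):
    """Eq. (10): a leader's pull on an individual, scaled by omega
    (shown here per scalar component for clarity)."""
    return leader_pos - omega * A * D
```

In the full update, `weighted_pull` is applied to each of the three leaders and the results are averaged as in Eq. (4).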

Fig. 3
figure 3

IMF Inertia Weight Graph

3.2.3 Adaptive updating mechanism

The population updating mechanism based on IMF inertia weight effectively reduces the density of population clustering to a certain extent. However, due to the intrinsic dynamics of the GWO, the newly generated wolf packs are still inevitably concentrated and migrate towards the positions directed by the α-wolf, β-wolf, and δ-wolf during the iterative process. In response to this, the present study defines the aggregation coefficient as the ratio of an individual's fitness value to the average population fitness value, which serves to quantify the degree of divergence between the current solution and the optimal solution. In minimization problems, the smaller the fitness value, the better the solution. A smaller aggregation coefficient indicates a more favorable current solution, thus allowing for minor updates in the vicinity of the individual's current position. Conversely, a larger aggregation coefficient suggests a poor location of the individual, warranting a significant perturbation to facilitate a jump to other positions. Based on this analysis, this paper introduces a Sigmoid function to construct the adaptive updating amplitude of the population under different aggregation coefficients, as depicted in Eqs. (11)-(12).

$$\phi =\frac{1}{1+({e}^{-{f}_{i}/{f}_{ave}}{)}^{\theta }}$$
(11)
$${Y}_{i}={Y}_{i}\cdot \phi$$
(12)

where fi represents the fitness value of the ith individual, and fave denotes the average fitness value of the population. θ is the exponential coefficient, which is taken as 0.5 in this paper.
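As a concrete illustration, Eqs. (11)-(12) can be mirrored in a few lines. The paper's experiments use MATLAB; the Python sketch below, with hypothetical function and variable names, only transcribes the two formulas:

```python
import numpy as np

def sigmoid_adaptive_update(Y, fitness, theta=0.5):
    """Sketch of the Sigmoid adaptive update of Eqs. (11)-(12).

    Y       : (n, dim) array of current wolf positions
    fitness : (n,) array of fitness values f_i
    theta   : exponential coefficient (0.5 in this paper)
    """
    f_ave = fitness.mean()                       # average population fitness
    # Eq. (11): phi = 1 / (1 + (e^{-f_i / f_ave})^theta)
    phi = 1.0 / (1.0 + np.exp(-fitness / f_ave) ** theta)
    # Eq. (12): scale each individual's position by its own phi
    return Y * phi[:, None]
```

Individuals with fitness well below the population average receive a phi close to 1 (a small perturbation), while poorly placed individuals are rescaled more strongly.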

In comparison with the standard GWO, IAGWO brings several significant advancements. First, it introduces a novel search mechanism by incorporating a velocity concept; this helps prevent premature convergence, allows a more thorough exploration of the search space, and adds an element of randomness that increases the diversity of the solution population. Second, the IMF inertia weight strategy improves the balance between global exploration and local exploitation, significantly boosting the convergence speed of the algorithm. Third, the adaptive updating mechanism, which combines the aggregation coefficient with the Sigmoid function, enhances the algorithm's ability to switch between broad search and detailed solution refinement, improving both diversity maintenance and convergence rate. Together, these changes enable IAGWO to search the solution space more efficiently. For an in-depth understanding of IAGWO, the procedural flow is depicted in Fig. 4, its pseudocode is detailed in Algorithm 2, and the proposed IAGWO workflow (Chauhan and Yadav 2023b) is shown in Fig. 5.

Fig. 4

Implementation Process for IAGWO

Fig. 5

Working procedure of the proposed IAGWO algorithm

Algorithm 2:

IAGWO

3.3 Time complexity analysis

CEC 2017 (IEEE Congress on Evolutionary Computation) defines algorithm complexity as a measure of the computational resources an algorithm requires to solve a given problem instance. This section analyzes the computational complexity of IAGWO, which is governed by two main factors: the initialization of solutions and the execution of the algorithm's core functions, namely fitness evaluation and solution updating. The relevant variables are the number of solutions \(N\), the maximum number of iterations \(T\), and the dimension \(D\) of the problem being tackled. The complexity of initializing the solutions is \(O(N)\): as the number of solutions grows, the cost of the setup phase grows proportionally. The time complexity of the original GWO is \(O(T\times N\times D)\). IAGWO modifies this with Eqs. (8)-(11): the PSO position updating strategy to enhance population diversity, the IMF inertia weight to reduce the excessive influence of higher-ranked wolves on lower-ranked ones, and the Sigmoid-based adaptive population update. The PSO position updating strategy requires calculations for each individual in each dimension, with a complexity of \(O(T\times N\times D)\). The update of Eq. (10) is independent of the population size and the search dimension, depending only on the maximum number of iterations, for a time complexity of \(O(T)\).
The time complexity for Eq. (11) is \(O(N\times D)\). Consequently, the overall time complexity of IAGWO is \(O(\text{IAGWO})=O\left(T\times N\times D\right)+O\left(T\right)+O\left(N\times D\right)=O(T\times N\times D)\), consistent with the original algorithm.

4 Results and comprehensive analysis

The simulation for this study was carried out on a Windows 11 platform, operating on a 64-bit system. The analysis was performed using MATLAB 2023b, running on a machine equipped with an AMD Ryzen 7 4800H CPU at 2.30 GHz and 16 GB of RAM.

4.1 Test functions and parameter settings

In this paper, the CEC 2017 (Dim = 30) (Mallipeddi and Suganthan 2010), CEC 2020 (Dim = 10 and 20) (Liang et al. 2019), and CEC 2022 (Dim = 10 and 20) (Ahrari et al. 2022) test suites were employed to evaluate the performance of the proposed IAGWO algorithm. These suites cover four function types: unimodal, multimodal, hybrid, and composition, and are designed to comprehensively evaluate the performance and applicability of algorithms. Additionally, to assess the scalability of IAGWO, we employed the CEC 2013 large-scale global optimization suite (800-dimensional) for simulation analysis (Li et al. 2013). The suite contains 15 highly complex benchmark functions grouped into four categories: fully separable, partially additively separable, overlapping, and fully non-separable. Together they provide a comprehensive experimental framework for evaluating the scalability of optimization algorithms across different problem types.

4.2 Comparison with other algorithms and parameter settings

The performance of the Improved Adaptive Grey Wolf Optimization (IAGWO) is benchmarked against 13 well-known algorithms, grouped into three categories for comparison:

High-citation algorithms: These include the Gravitational Search Algorithm (GSA) (Rashedi et al. 2009), Dolphin Echolocation Optimization (DMO) (Kaveh and Farhoudi 2013), Whale Optimization Algorithm (WOA) (Mirjalili and Lewis 2016), and Harris Hawks Optimization (HHO) (Tripathy et al. 2022).

Advanced algorithms: This category includes Combined Particle Swarm Optimization and Gravitational Search Algorithm (CPSOGSA), Crow Optimization Algorithm (COA) (Jia et al. 2023), African Vulture Optimization Algorithm (AVOA) (Abdollahzadeh et al. 2021), Optical Microscope Algorithm (OMA) (Cheng and Sholeh 2023), and Adaptive Artificial Electric Field Algorithm (iAEFA) (Chauhan and Yadav 2023a).

GWO and its variants: This includes the original Grey Wolf Optimization (GWO), the Adaptive GWO (AGWO) (Meidani et al. 2022), the Enhanced GWO (ENGWO)(Mohammed et al. 2024)and the Revised GWO (RGWO) (Banaie-Dezfouli et al. 2021).

Table 1 offers a comprehensive summary of the parameters for the 14 MH algorithms. For each algorithm, 30 independent runs were conducted, with each run limited to a maximum of 500 iterations, a population size of 30, and a maximum of 30,000 function evaluations. The outcomes of these runs were recorded as the average values (Ave) and standard deviations (Std) for each algorithm. To facilitate comparison, the best results among the 14 algorithms are formatted in bold, providing a clear visual indicator of which algorithms performed most effectively under the given testing conditions.

Table 1 Parameter configurations for competing algorithms

4.3 Qualitative assessment of IAGWO

4.3.1 Exploring convergence patterns

To verify the convergence performance of IAGWO, its behavior on the 30-dimensional CEC 2017 test functions is plotted in Fig. 6, which presents nine panels covering three functions selected from the suite. The first column shows the two-dimensional profiles of the benchmark functions, conveying the characteristics and contours of each optimization landscape. The second column depicts the final positions of the search agents at the end of the optimization process, with the location of the optimal solution marked by a red dot; in most cases the agents cluster close to the optimum, reflecting IAGWO's strength in both exploration and exploitation. The third column tracks the average fitness value over the iterations: initially high, it decreases and stabilizes after about 100 iterations, with only minor fluctuations. Such fluctuations are normal in complex optimization problems, indicating continued fine-grained search and the maintenance of population diversity to prevent premature convergence to local optima. The fourth column shows the search agents' trajectories in the first dimension, which fluctuate markedly in the early iterations, level off, and then fluctuate again at intervals before settling, signifying a balance between exploration and exploitation. Finally, the convergence curves are smooth for unimodal functions, indicating that the optimum can be approached steadily through iteration, whereas the step-like curves for multimodal functions reflect repeated escapes from local optima on the way to the global optimum. Together, these four diagnostics confirm IAGWO's robust convergence.

Fig. 6

The convergence behavior of IAGWO

4.3.2 Analyzing the diversity of population

In optimization algorithms, population diversity is a matter of balance. Moderate diversity helps the algorithm avoid local optima, increases search-space coverage and global search capability, and improves both convergence speed and solution quality. Excessive diversity, however, disperses the search and prevents deep exploitation of local regions, slowing convergence and degrading the final solutions. A suitable balance between diversity and search efficiency must therefore be maintained, through appropriate parameter settings or dedicated strategies. A highly diverse population exhibits significant differences among individuals, allowing broader exploration of the search space and avoiding premature convergence to local optima; maintaining good population diversity is thus a crucial objective in metaheuristic algorithms. Population diversity is typically measured with Eqs. (13) and (14), a method proposed by Morrison in 2004. Here, \({I}_{C}\) denotes the moment of inertia, which quantifies the spread of the population around its center of mass \(c\); \({x}_{id}\) is the value of the ith search agent in the \(d\)th dimension at iteration \(t\); and \({c}_{d}\) is the \(d\)th coordinate of the center of mass, computed in every iteration as shown in Eq. (14) (Fu et al. 2023a, b).

$${I}_{C}\left(t\right)=\sqrt{\sum_{i=1}^{n}\sum_{d=1}^{dim}{\left({x}_{id}\left(t\right)-{c}_{d}\left(t\right)\right)}^{2}}$$
(13)
$${c}_{d}\left(t\right)=\frac{1}{n}\sum_{i=1}^{n}{x}_{id}\left(t\right)$$
(14)
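The diversity measure of Eqs. (13)-(14) is straightforward to compute. A minimal Python sketch (hypothetical function name; the center of mass is taken as the per-dimension mean over the \(n\) individuals, the conventional Morrison definition):

```python
import numpy as np

def moment_of_inertia(X):
    """Population diversity I_C of Eq. (13) (Morrison 2004).

    X : (n, dim) array of search-agent positions at iteration t.
    """
    # Eq. (14): center of mass c_d, one coordinate per dimension
    c = X.mean(axis=0)
    # Eq. (13): root of the summed squared deviations from the center of mass
    return np.sqrt(np.sum((X - c) ** 2))
```

A perfectly clustered population yields I_C = 0, while widely scattered agents yield a large I_C, which is how the curves in Fig. 7 are read.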

Figure 7 displays the comparative experimental outcomes regarding population diversity for IAGWO and GWO, measured through \({I}_{C}\). Observations from Fig. 7 reveal that IAGWO demonstrates a marked increase in diversity during the early phases of iteration, which then transitions to relative stability at an elevated level. This indicates growing variance among individuals within the IAGWO population during the early iterations, effectively exploring a vast search space. As iterations progress, the population diversity stabilizes, which helps avert premature convergence to local optima. The minor fluctuations are normal and beneficial, allowing the algorithm to adapt to dynamically changing search regions. In contrast, GWO shows insufficient population diversity, highlighting IAGWO's effectiveness in maintaining the diversity that is crucial for exploring complex search spaces and avoiding local optima. These experimental outcomes demonstrate IAGWO's substantial potential in optimization.

Fig. 7

The population diversity of IAGWO and GWO

4.3.3 Exploration and exploitation analysis

In optimization algorithms, managing the balance between exploration and exploitation is key to optimal performance (Saka et al. 2016). Exploration involves searching broadly through the solution space, while exploitation focuses on refining known good solutions. This section quantifies the extent of exploration and exploitation in the algorithm: Eq. (15) computes the percentage of exploration and Eq. (16) the percentage of exploitation. The dimension-wise diversity \(Div\left(t\right)\) is calculated with Eq. (17), and \({\rm Div}_{max}\) denotes the peak diversity observed over the entire course of iterations, which is essential for understanding how broadly and effectively the algorithm explores the solution space (Li et al. 2023; Nadimi-Shahraki et al. 2023).

$$Exploration\left(\%\right)=\frac{Div\left(t\right)}{Di{v}_{max}}\times 100$$
(15)
$$Exploitation\left(\%\right)=\frac{\left|Div\left(t\right)-Di{v}_{max}\right|}{Di{v}_{max}}\times 100$$
(16)
$$Div\left(t\right)=\frac{1}{dim}\sum_{d=1}^{dim}\frac{1}{n}\sum_{i=1}^{n}\left|median({x}_{d}\left(t\right))-{x}_{id}\left(t\right)\right|$$
(17)
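Given the position history of a run, Eqs. (15)-(17) can be evaluated directly. The Python sketch below (hypothetical function name) transcribes the three formulas:

```python
import numpy as np

def exploration_exploitation(history):
    """Per-iteration exploration/exploitation percentages, Eqs. (15)-(17).

    history : sequence of (n, dim) position arrays, one per iteration.
    Returns (exploration, exploitation) arrays in percent.
    """
    # Eq. (17): mean absolute deviation from the per-dimension median,
    # averaged over all n individuals and all dim dimensions
    div = np.array([
        np.mean(np.abs(np.median(X, axis=0) - X))
        for X in history
    ])
    div_max = div.max()                                      # peak diversity
    exploration = div / div_max * 100.0                      # Eq. (15)
    exploitation = np.abs(div - div_max) / div_max * 100.0   # Eq. (16)
    return exploration, exploitation
```

By construction the two percentages sum to 100 at every iteration, so the curves in Fig. 8 mirror one another.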

Figure 8 depicts the results of the experiments conducted. It shows that for various function types, as the number of iterations progresses, GWO consistently demonstrates a higher rate of exploration and a comparatively lower rate of exploitation. In contrast, IAGWO shows a changing pattern, with exploration decreasing and exploitation increasing as iterations progress. This observation suggests that GWO tends towards a broad search across the entire space, with less focus on local search and weaker performance in thoroughly exploiting the optimal regions found. In comparison, IAGWO demonstrates the ability to dynamically adjust its search strategy. This implies that the algorithm initially identifies potential good solution areas through extensive exploration and then finely tunes these solutions in the later stages through focused exploitation, potentially enhancing both the efficiency of the algorithm and the quality of solutions. Overall, while GWO shows commendable exploration capabilities, it lacks effective exploitation. In contrast, IAGWO effectively strikes a balance between exploration and exploitation. This balance is well-maintained across a variety of benchmark functions, showcasing IAGWO's adaptability and efficiency in different optimization scenarios. This attribute is particularly important as it ensures the algorithm can thoroughly search the solution space while also honing in on the most promising solutions.

Fig. 8

The exploration and exploitation of IAGWO and GWO

4.3.4 Ablation experiments

In this section, a detailed analysis is conducted on the impact of three proposed improvement strategies on the GWO. These strategies include the PSO position updating mechanism, the introduction of IMF inertia weight strategy, and the adoption of a Sigmoid adaptive updating strategy. Based on these improvements, three new algorithm variants are named: PGWO for the PSO search mechanism, IGWO for the IMF inertia weight, and SGWO for the Sigmoid adaptive updating strategy. According to the experimental results in Fig. 9, all three strategies significantly enhance the convergence accuracy and speed of GWO, with IAGWO showing particularly notable performance.

Fig. 9

Comparison of different improvement strategies

Specifically, when dealing with unimodal and multimodal functions, the results of PGWO and IAGWO are relatively consistent, showing a more significant improvement over GWO compared to SGWO and IGWO. However, when dealing with more complex hybrid modal functions, the enhancement of PGWO on GWO diminishes, while the IAGWO algorithm, integrating all three strategies, continues to exhibit exceptional optimization performance. Overall, the IAGWO algorithm successfully overcomes challenges of local optima and premature convergence, significantly boosting the algorithm's convergence speed and accuracy. These findings provide valuable insights for the further development and application of the GWO.

4.4 Quantitative evaluation

In this section, the efficacy of IAGWO is scrutinized using a series of test suites: CEC 2017, CEC 2020, and CEC 2022. Moreover, its proficiency in handling large-scale problems is assessed with the CEC 2013 suite. To clearly compare performance, the best results among the algorithms are highlighted in bold in the tables. The parameters are standardized with a population size of 100, a maximum iteration limit of 500, and a total of 30 independent runs. The performance outcomes are systematically presented in Tables 2 to 7, which illustrate the average values (Ave) and the standard deviations (Std) for each competing algorithm. A thorough statistical analysis is conducted to highlight the superiority of IAGWO. This includes an initial evaluation represented by three indicators (W|T|L) in the first line of the results, denoting the algorithms' performance as best (win), comparable (tie), or least effective (loss) for specific functions. The second row compiles the mean performance of all algorithms, while the third row offers insights into the overall standings through the final Friedman ranking. The tables distinctly highlight the top results, emphasizing their significance. Furthermore, the comparative analysis of the convergence curves for each algorithm is depicted in Fig. 10. This visual representation aids in understanding the progression and efficiency of each algorithm in finding optimal solutions over the course of iterations. This detailed evaluation underscores the robustness and adaptability of IAGWO in varied optimization contexts.
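The Friedman mean rank reported in the result tables can be reproduced from a matrix of per-function average errors. A minimal pure-NumPy sketch (hypothetical function name; ties get the average of the tied ranks, the standard convention):

```python
import numpy as np

def friedman_mean_ranks(errors):
    """Friedman mean ranks for an (n_algorithms, n_functions) error matrix.

    For each test function the algorithms are ranked 1 (smallest mean
    error) to k; tied values receive the average of the tied ranks.
    An algorithm's Friedman mean rank is its rank averaged over all
    functions -- the statistic reported in the comparison tables.
    """
    errors = np.asarray(errors, dtype=float)
    k, m = errors.shape
    ranks = np.empty_like(errors)
    for j in range(m):                       # rank within each function
        col = errors[:, j]
        order = np.argsort(col, kind="stable")
        r = np.empty(k)
        r[order] = np.arange(1, k + 1)
        for v in np.unique(col):             # average ranks of ties
            tied = col == v
            r[tied] = r[tied].mean()
        ranks[:, j] = r
    return ranks.mean(axis=1)
```

The algorithm with the smallest mean rank is placed first in the final Friedman ranking.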

Table 2 Comparison of results on CEC 2017 (Dim = 30)
Fig. 10

Convergence curves of different algorithms

4.4.1 Assessing performance with CEC 2017 test suite

This section examines the efficacy of IAGWO using the CEC 2017 test suite with a dimensionality of 30, as detailed in Table 2. The results are quite telling: IAGWO recorded the highest number of best performances, leading in 16 out of the 30 functions tested. Notably, it did not register as the least effective in any of the functions. In terms of statistical standing, IAGWO's Friedman mean ranking is 3.00, earning it the top position. Further, a diverse range of functions from CEC 2017 (Dim = 30) were chosen for a more comprehensive evaluation. The comparative analysis of the convergence trends, depicted in Fig. 10, reveals that IAGWO consistently achieved the quickest convergence rate and maintained the highest level of accuracy in convergence. These results underscore IAGWO's exceptional proficiency in both global exploration and local exploitation. Collectively, these findings solidify the effectiveness and superiority of IAGWO as an optimization tool.

4.4.2 Assessing performance with CEC 2020 test suite

This section evaluates 13 algorithms using the CEC 2020 test suite, which includes tests with dimensions of 10 and 20; the outcomes are presented in Table 3 and Table 4. On the CEC 2020 tests, IAGWO mirrors the impressive results observed on the CEC 2017 suite, achieving the highest number of best performances while never being the least effective on any function. Representative functions are chosen to illustrate the convergence curves, as depicted in Fig. 10: IAGWO consistently shows the quickest convergence speed and the highest convergence accuracy, reaffirming its efficiency. It is also worth noting the contrasting performance of GWO on the CEC 2020 suite. Despite its low Friedman ranking, indicating comparatively poor performance, its improved variants, AGWO, ENGWO, and RGWO, show marked improvements; remarkably, RGWO secures the second-highest ranking, closely following IAGWO, underscoring the substantial research value in enhancing the GWO algorithm. A comprehensive statistical analysis among the 13 algorithms tested places IAGWO at the forefront of the Friedman rankings, highlighting its superiority not only over the original GWO but also over other well-regarded algorithms. These results collectively demonstrate the robustness and effectiveness of IAGWO in a competitive algorithmic landscape.

Table 3 Comparison of results on CEC2020 (Dim = 10)
Table 4 Comparison of results on CEC2020 (Dim = 20)

4.4.3 Assessing performance with CEC 2022 test suite

This section is dedicated to a thorough evaluation of the proposed IAGWO and 12 other comparative algorithms, utilizing the CEC 2022 test suite. The primary objective of this evaluation is to gauge the exploration and exploitation capabilities of these algorithms and assess their proficiency in avoiding local optima traps. The experiments are conducted under 10-dimensional and 20-dimensional scenarios, with corresponding results displayed in Tables 5 and 6, respectively. IAGWO ranks first in Friedman mean ranking in both dimensional settings, with ranking values of 1.75 and 2.25 respectively. Similarly, while GWO shows subpar performance, its variants enhance GWO's performance, emphasizing the research significance of GWO. The analysis of results depicted in Fig. 10 leads to a conclusive observation that IAGWO successfully evades getting stuck in local optima and avoids premature convergence. These findings serve not just as a testament to the excellence and robustness of IAGWO, but they also highlight its substantial performance benefits and the capability to yield enhanced solutions. This analysis underscores IAGWO's effectiveness in navigating complex optimization landscapes, further establishing its potential as a superior tool in optimization tasks.

Table 5 Comparison of results on CEC2022 (Dim = 10)
Table 6 Comparison of results on CEC2022 (Dim = 20)

4.4.4 Scalability evaluation using the CEC 2013 test suite

In real-world scenarios, solving optimization problems often requires adjusting multiple parameters at once. To test the scalability of the IAGWO for high-dimensional problems, we utilized the CEC 2013 suite for large-scale global optimization. The results of this testing are detailed in Table 7. This suite includes 15 highly complex test functions, each with up to 1000 dimensions, providing a robust challenge for assessing algorithmic performance. In our experiments, IAGWO was compared with 12 other algorithms. The population size was fixed at 100, and we limited the maximum number of iterations to 10 for each run. After conducting 30 independent runs for each algorithm, IAGWO achieved a Friedman mean rank value of 2.63. This score signifies a higher level of performance relative to the other algorithms in the competition. The findings from these experiments demonstrate that the IAGWO algorithm has significant scalability, effectively handling complex, high-dimensional optimization challenges. This capability distinguishes IAGWO from other algorithms, highlighting its suitability for practical, large-scale optimization applications.

Table 7 Experimental results of 13 algorithms on the CEC 2013 large-scale global optimization suite

4.5 Wilcoxon rank sum test

This study utilizes the non-parametric Wilcoxon rank sum test (Wilcoxon 1945) to conduct comparative performance assessments of the algorithms, with the significance level set at 0.05. To succinctly represent the performance of IAGWO relative to its competitors, the symbols "+/=/-" denote whether IAGWO is superior to, equivalent to, or inferior to the competing algorithm. As shown in Table 8, the statistics indicate significant performance differences between IAGWO and the other algorithms in most cases. Specifically, the comparative results are 344/0/46, 119/0/11, 111/13/6, 150/0/6, 152/0/4, and 175/0/5. This analysis demonstrates that IAGWO, as introduced in this study, shows exceptional overall performance compared with the traditional GWO and the other rival algorithms, underscoring its distinct advantages.
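The "+/=/-" tally for one rival can be sketched as follows. This illustrative Python version (hypothetical function name) uses SciPy's two-sided Wilcoxon rank-sum test, `scipy.stats.ranksums`, rather than the authors' own code:

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_tally(iagwo_runs, rival_runs, alpha=0.05):
    """Tally '+ / = / -' for IAGWO against one rival, per function.

    iagwo_runs, rival_runs : (n_functions, n_runs) arrays of errors
    from the independent runs (minimization: smaller is better).
    """
    plus = equal = minus = 0
    for a, b in zip(iagwo_runs, rival_runs):
        _, p = ranksums(a, b)           # two-sided Wilcoxon rank-sum test
        if p >= alpha:
            equal += 1                  # '=': no significant difference
        elif np.mean(a) < np.mean(b):
            plus += 1                   # '+': IAGWO significantly better
        else:
            minus += 1                  # '-': IAGWO significantly worse
    return plus, equal, minus
```

Summing the tallies over all functions and all rivals reproduces aggregate counts of the form reported in Table 8.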

Table 8 Statistical findings from Wilcoxon rank sum test

4.6 Time comparison analysis of IAGWO and GWO

Building on the findings from previous sections, it is clear that IAGWO significantly surpasses the original GWO in overall performance. This section provides a more detailed comparison of the computational costs of the two algorithms, with particular emphasis on differences in computational time. To facilitate the comparison, the settings for IAGWO and GWO were standardized: the population size was set to 50, the maximum number of iterations to 1000, and each algorithm underwent 30 independent runs. Table 9 presents the total time (in seconds) each algorithm took to complete all 30 runs, providing a clear basis for comparing their time-based efficiency.

Table 9 Execution time comparison: IAGWO vs. GWO

Analysis of the experimental data on the CEC 2017 test suite (Dim = 30) indicates that, under the same experimental parameters, IAGWO and GWO require almost equal execution time on unimodal functions and some simpler multimodal functions. On the more complex multimodal and hybrid functions, however, IAGWO generally consumes significantly less computational time than GWO, suggesting greater computational efficiency on highly complex problems. Compared with the original GWO, IAGWO's improved search strategies are more efficient, offering better global search capability and faster local convergence. Overall, IAGWO not only excels in benchmark tests but also exhibits higher computational efficiency and better adaptability when addressing the more complex optimization problems that arise in practical applications.

However, on the CEC 2020 test suite (Dim = 10 and 20) and CEC 2022 test suite (Dim = 10 and 20), IAGWO generally exhibits higher computational times compared to GWO. This may indicate that the types of problems or characteristics included in CEC 2020 are not entirely compatible with the strategies of IAGWO, leading to a higher computational load.

4.7 Evaluating performance against CEC 2014 and CEC 2017 competition-winners

This section evaluates the performance of the proposed IAGWO using the CEC 2014 (Dim = 30) (Liang et al. 2013) and CEC 2017 (Dim = 30) test suites. Additionally, we compare IAGWO with the winners of these two suites in previous CEC competitions: L-SHADE (Tanabe and Fukunaga 2014) and AL-SHADE (Li et al. 2022) from CEC 2014, and LSHADE-SPACMA (Mohamed et al. 2017) and LSHADE-cnEpSin (Awad et al. 2017) from CEC 2017. In the experimental setup, the population size is fixed at 30, the maximum number of iterations is limited to 500, and 30 independent runs are performed.

Table 10 presents the results from testing IAGWO using the CEC 2014 suite. In these tests, IAGWO surpassed other algorithms in six different scenarios, though it showed slightly weaker performance in one. Notably, IAGWO achieved a Friedman mean ranking value of 1.71, which places it second after L-SHADE but ahead of AL-SHADE. Table 11 focuses on the performance of IAGWO in the CEC 2017 suite. Here, IAGWO showed strong results in 8 of the test cases, but its performance was less impressive in 10 others. In terms of the Friedman mean ranking, IAGWO scored 1.99, which is slightly better than LSHADE-SPACMA, but not quite as good as LSHADE-cnEpSin. These results provide a detailed comparison of IAGWO's performance relative to other algorithms in these specific test environments.

Table 10 Comparison with the 2014 CEC winner results
Table 11 Comparison with the 2017 CEC winner results

Combining experimental outcomes, IAGWO can be positioned as a high-performing optimizer in test functions. These results not only demonstrate IAGWO's strong capability in handling different types of optimization problems but also indicate its competitive standing against existing top-tier algorithms. These findings emphasize the potential application value of IAGWO in the field of evolutionary computing and optimization. This simultaneously demonstrates the effectiveness of the three improvement strategies we introduced: the PSO Search Mechanism, the IMF Inertia Weighting Strategy, and the Adaptive Updating Mechanism, enhancing the optimization performance of the algorithm.

4.8 IAGWO for 19 engineering design challenges

The specific constrained handling technique used in engineering design challenges is called "constraint relaxation." Constraint relaxation involves temporarily easing or loosening certain constraints within the design problem to explore alternative solutions. This allows designers to generate a wider range of potential solutions without being overly restricted by strict constraints. Once various solutions have been identified, designers can then reintroduce and refine the constraints to ensure that the final design meets all necessary requirements. Intelligent optimization algorithms can efficiently explore the design space and uncover potential solutions. By integrating constraint relaxation techniques, these algorithms can dynamically handle constraints during the search process, allowing for a broader exploration of the design space and enhancing the efficiency of finding optimal solutions.
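One common way to realize constraint relaxation inside a metaheuristic is a penalized fitness whose constraint bounds are temporarily loosened. The Python sketch below is purely illustrative, with hypothetical names, and is not the paper's exact constraint-handling code:

```python
import numpy as np

def relaxed_penalty_fitness(x, objective, g_constraints,
                            penalty=1e6, relax=0.0):
    """Illustrative penalty-based constraint relaxation.

    objective     : callable f(x) to minimize
    g_constraints : callables g_j(x) <= 0 when feasible
    relax         : tolerance by which each constraint is temporarily
                    loosened; relax > 0 lets the search explore slightly
                    infeasible designs, and tightening relax back to 0
                    restores the true problem.
    """
    # total violation of the relaxed constraints g_j(x) <= relax
    violation = sum(max(0.0, g(x) - relax) for g in g_constraints)
    # infeasible points pay a large penalty on top of the objective
    return objective(x) + penalty * violation
```

Shrinking `relax` towards zero over the iterations reintroduces the strict constraints, so the final design satisfies all requirements.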

In this section, the proficiency of IAGWO is evaluated on a set of 19 engineering design challenges (EDC) sourced from the CEC 2020 real-world optimization benchmarks, as outlined by Kumar et al. (2020). A concise summary of these challenges is presented in Table 12, including their dimensions (D), the count of inequality constraints (g) and equality constraints (h), and the known optimal cost (fmin). The evaluation parameters are: a population size of 50, a maximum of 1000 iterations, and 30 independent runs per challenge.

Table 12 Overview of 19 EDC from CEC 2020

Table 13 is dedicated to enumerating the performance metrics of IAGWO. This table encompasses various metrics including the best cost achieved (Best), the average cost (Ave), the cost's standard deviation (Std), and performance symbols (W|T|L), representing the number of wins, ties, and losses, respectively. Additionally, the evaluation includes a comprehensive analysis of the mean performance of all the algorithms involved in the testing. It also presents a ranking of these methods, providing a clear and structured comparison of their overall effectiveness and highlights instances where IAGWO achieves optimal results.

Table 13 Comparison of 14 algorithms in EDC
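The statistics reported in Table 13 can be computed directly from the per-run costs. The helper below shows how Best, Ave, and Std are derived from 30 independent runs and how a W|T|L tally is formed by comparing mean costs per problem (minimization, so lower wins); the function names and the tie tolerance are our illustrative choices.

```python
import numpy as np

def summarize_runs(costs):
    """Best, average, and standard deviation over independent runs."""
    costs = np.asarray(costs, dtype=float)
    return costs.min(), costs.mean(), costs.std()

def win_tie_loss(ours, theirs, tol=1e-8):
    """Count problems where `ours` beats / ties / loses to `theirs`
    by mean cost (minimization: lower is better)."""
    w = t = l = 0
    for a, b in zip(ours, theirs):
        if abs(a - b) <= tol:
            t += 1
        elif a < b:
            w += 1
        else:
            l += 1
    return w, t, l

best, ave, std = summarize_runs([3.1, 2.9, 3.0])
w, t, l = win_tie_loss([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
```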

The statistical analysis drawn from these results clearly demonstrates IAGWO's superior ability in solving these real-world engineering design challenges, effectively outshining other methods. In terms of overall effectiveness, other algorithms like OMA, DMO, RGWO, and ENGWO trail behind IAGWO. This comprehensive analysis accentuates the robustness and efficacy of IAGWO.

5 Summary and future directions

In this study, we introduced the Improved Adaptive Grey Wolf Optimizer (IAGWO), an enhanced version of the GWO designed to overcome its inherent limitations and improve its efficacy on contemporary optimization challenges. The original GWO, while promising, exhibits deficiencies, notably in its convergence speed and its adaptability to intricate, high-dimensional problem landscapes. Central to our enhancement strategy is a velocity component borrowed from Particle Swarm Optimization (PSO), which expedites convergence and enables the algorithm to traverse solution spaces with greater agility. We also devised a novel search mechanism that augments the algorithm's exploration and exploitation capabilities for complex problem domains. In addition to these fundamental alterations, we introduced new Inertia Weighting and Position Updating strategies, leveraging Nonlinear Inertia Weighting for Intermediary Fitness (IMF) and Sigmoid adaptive techniques. These refinements were designed to work in concert with the core algorithm, strengthening its performance across diverse optimization landscapes.
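The PSO-inspired velocity update combined with a nonlinear, sigmoid-shaped inertia weight can be sketched as follows. This is an illustrative sketch only: the coefficient values, the particular sigmoid decay, and the leader-centroid attraction term are our placeholder assumptions, not the exact IAGWO update equations.

```python
import numpy as np

def velocity_gwo_step(X, V, alpha, beta, delta, t, T,
                      w_max=0.9, w_min=0.4, rng=None):
    """One velocity-augmented wolf update (illustrative sketch).

    X, V               : (n, d) positions and velocities
    alpha, beta, delta : (d,) positions of the three leading wolves
    t, T               : current iteration and iteration budget
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Sigmoid-shaped nonlinear inertia weight, decaying from ~w_max to ~w_min
    w = w_min + (w_max - w_min) / (1.0 + np.exp(10.0 * (t / T - 0.5)))
    leaders = (alpha + beta + delta) / 3.0          # leader centroid
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # PSO-style update: inertia term plus attraction toward the leaders
    V = w * V + 2.0 * r1 * (leaders - X)
    X = X + r2 * V
    return X, V

X0 = np.zeros((4, 3))
V0 = np.zeros((4, 3))
leader = np.ones(3)
X1, V1 = velocity_gwo_step(X0, V0, leader, leader, leader, t=1, T=100, rng=0)
```

Starting from zero velocity, the wolves drift toward the leader centroid, while the decaying inertia weight shifts the balance from exploration early on to exploitation late in the run.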

To validate IAGWO, we conducted rigorous experiments on 52 test functions drawn from established benchmark suites. Comparative analysis against eight prominent metaheuristic (MH) algorithms, including the original GWO and three of its variants, confirmed IAGWO's advantages in convergence speed and solution precision. The algorithm was further tested on 15 large-scale global optimization problems, affirming its adeptness at handling high-dimensional complexity, and was also compared against previous winners of the CEC competitions across several editions, where its performance held firm against these strong competitors. Beyond academic benchmarks, the real-world applicability of IAGWO was validated through its deployment in 19 diverse engineering design challenges, where it outperformed established algorithms and offered tangible solutions to practical problems.

Despite these results, IAGWO leaves room for improvement, particularly in computational efficiency. Time comparison analyses revealed noticeable computational overhead on certain test suites, so future work will focus on reducing computational complexity without compromising search efficacy. Beyond academic benchmarks, IAGWO's utility extends to a range of real-world applications, including feature extraction, operations research, classification, and logistics, and applying it to these domains is a natural next step.