A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization—Part II

Published: 01 October 2022
Published in: IEEE Transactions on Evolutionary Computation, Volume 26, Issue 5, Oct. 2022. Publisher: IEEE Press.

Abstract

This article is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely, decomposition methods and hybridization methods such as memetic algorithms and local search. In this part, we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas related to large-scale global optimization, such as multiobjective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The article also includes a discussion of the pitfalls and challenges of current research and identifies several potential areas of future research.

