Abstract
Brain storm optimization (BSO) algorithms have shown great potential in many global black-box optimization problems. However, existing BSO variants can suffer from three problems: (1) poor performance on large-scale optimization problems; (2) several hyperparameters of the algorithm itself that need to be tuned; (3) the high computational cost of the clustering operations. To address these problems, in this paper we propose a simple yet effective BSO variant named Brain Storm Optimization Algorithm with Cooperative Learning Strategy (BSO-CLS). It is inspired by the idea-generation process of brainstorming, in which participants propose their own ideas by cooperatively learning from the ideas of other participants. Accordingly, BSO-CLS iteratively updates the candidate solutions by linearly combining other solutions with weights derived from the fitness values of those solutions. To validate the effectiveness of the proposed method, we test it on 6 benchmark functions with 1000 dimensions. The experimental results show that BSO-CLS can outperform the vanilla BSO and another BSO variant with a learning strategy.
1 Introduction
Large-scale black-box optimization problems play a very important role in many scientific and engineering applications. They can be formulated as follows [12]:

\(x^{*} = \mathop {\arg \min }\limits _{x \in R^{D}} f(x)\)    (1)

where \(x^{*}\) is the global minimizer to be searched and \(f: R^D \rightarrow R\) is the evaluation function. In this paper, we focus on large-scale black-box optimization problems [11], i.e., the evaluation function f typically has the following three properties (a minimal code sketch of this interface is given after the list):
- Black-box: When a candidate solution \(x \in R^{D}\) is evaluated by f, we can only obtain the function evaluation value f(x). Any other information about f, such as gradient information, is unavailable.
- High-dimension: The dimension D of the candidate solution x is typically larger than or equal to 1000 (i.e., \(D \ge 1000\)).
- Multimodal: The evaluation function f can have many local optima as well as multiple global optima.
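To make these properties concrete, the following is a minimal Python sketch of such a black-box interface, using the Rastrigin benchmark (one of the functions tested in Sect. 4) with \(D = 1000\); the search range and variable names are illustrative assumptions, not part of the original paper.

```python
import numpy as np

D = 1000  # dimension of the candidate solution

def rastrigin(x: np.ndarray) -> float:
    # Multimodal benchmark: many local optima, global minimum f(0) = 0.
    # Only the scalar value f(x) is returned; no gradients are exposed.
    return 10.0 * x.size + float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

x = np.random.uniform(-5.12, 5.12, size=D)  # a random candidate solution
print(rastrigin(x))  # the only feedback a black-box optimizer can observe
```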
Recently, swarm intelligence (SI) [6, 7] has become a promising technique for solving large-scale black-box optimization problems [2, 8]. Most existing SI algorithms, such as particle swarm optimization (PSO) [14] and ant colony optimization (ACO) [5], are inspired by the swarm behaviours of simple creatures in nature. In contrast, BSO [3, 4, 15], a young and promising SI algorithm, is inspired by the brainstorming process of high-level human beings: it utilizes a clustering operation to simulate the brainstorming process and generates new candidate solutions by randomly learning from intra-cluster or inter-cluster solutions. In general, the existing BSO algorithms can suffer from three problems: (1) they cannot obtain good performance on high-dimension optimization problems; (2) several hyperparameters of the algorithm itself need to be tuned; (3) the clustering operation (e.g., k-means [10]) incurs a high computational cost.
To address the aforementioned problems, in this paper we propose a novel BSO variant named BSO-CLS. It is inspired by the way participants in a brainstorming session generate ideas, i.e., each new idea is generated by cooperatively learning from the ideas of other participants. Thus, each candidate solution of BSO-CLS iteratively updates itself by linearly combining other solutions with weights derived from the function evaluation values of those solutions.
The contributions of this paper are summarized as follows:
- We introduce a novel cooperative learning strategy to simulate the idea-generation process of brainstorming. The analysis and experiments show that this learning strategy is effective and efficient.
- To validate the effectiveness of the proposed method, we test it on 6 benchmark functions with 1000 dimensions. The empirical results show that BSO-CLS can outperform the vanilla BSO and another BSO variant with a learning strategy.
The rest of this paper is organized as follows. Section 2 introduces the representative BSO variants. Section 3 introduces BSO-CLS in detail, and Sect. 4 presents and discusses the experimental setting and results. The conclusions are drawn in Sect. 5.
2 Related Work
In this section, we will introduce some representative BSO variants. In order to reduce the high computational cost of clustering operations, BSO-OS [13] proposed to cluster individuals in the objective space instead of in the solution space. MBSO [17] is a modified BSO that introduced the simple grouping method (SGM) with the idea difference strategy (IDS). RGBSO [1] proposed to utilize a random grouping strategy for clustering.
Recently, some learning-based strategies for generating new solutions have been proposed. BSOLS [16] proposed a novel learning strategy applied after the update operator to improve the diversity of the population. OLBSO [9] introduced an orthogonal learning (OL) framework and incorporated it into BSO to improve performance.
3 Proposed Method
In this section, we will introduce the proposed method in detail. The key idea of BSO-CLS is that it utilizes a novel method to simulate the idea-generation process of brainstorming. Specifically, as shown in Fig. 1, during a brainstorming session the participants generally go through three processes to cooperatively learn from other participants' ideas before proposing their own. Taking participant P0 as an example, the first process is to listen to the other participants' ideas and select those that inspire him. The second process is to analyze the selected ideas: some of them have a greater impact on him, while others have a smaller impact. The third process is to come up with his own ideas based on this analysis and share them with the other participants. The other participants can then follow the same three processes to come up with their own ideas until the brainstorming session is finished.
Based on the processes introduced above, the BSO-CLS can be implemented as follows:
Step 1: Randomly initialize N candidate solutions \(X \in R^{N \times D}\), each with D dimensions.
Step 2: Evaluate the fitness \(Y = f(X)\) using the evaluation function f.
Step 3.1: For each candidate solution, randomly select m other candidate solutions \(x_{1},x_{2}, \cdots , x_{m}\) as the learning objects.
Step 3.2: For each learning object \(x_{i}\), calculate the learning weight \(w_{i}\) with formula (2) by normalizing the fitness values \(f(x_{1}),f(x_{2}), \cdots , f(x_{m})\) of the learning objects:

\(w_{i} = \frac{f(x_{i}) - \mu }{\sigma }\)    (2)

where \(\mu \) and \(\sigma \) are the mean and standard deviation of the fitness values \(f(x_{1}),f(x_{2}), \cdots , f(x_{m})\), respectively.
Step 3.3: Generate the new candidate solution \(x_{new}\) with formula (3) and evaluate its fitness value:

\(x_{new} = \sum _{i=1}^{m} w_{i} x_{i}\)    (3)
Step 3.4: If N new solutions have not been generated, go to Step 3.1.
Step 4: If the stop condition is not met, return to Step 3.1.
The complete procedure of BSO-CLS is given in Algorithm 1; a code sketch of the procedure follows.
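Since Algorithm 1 appears as a figure in the original paper, the following is a minimal Python sketch of Steps 1–4 under stated assumptions: the z-score weights reconstructed as formula (2), the linear combination of formula (3), a greedy replace-if-better update, and illustrative bounds and hyperparameter values (the original paper does not specify these details in the text).

```python
import numpy as np

def bso_cls(f, dim, bounds=(-5.12, 5.12), n_pop=100, m=5, max_evals=100_000):
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_pop, dim))  # Step 1: initialize N solutions
    Y = np.array([f(x) for x in X])              # Step 2: evaluate fitness
    evals = n_pop
    while evals < max_evals:                     # Step 4: stop condition
        for i in range(n_pop):                   # Step 3.4: generate N new solutions
            # Step 3.1: randomly pick m other solutions as learning objects
            others = np.delete(np.arange(n_pop), i)
            idx = np.random.choice(others, m, replace=False)
            # Step 3.2: weights from z-score-normalized fitness (formula (2));
            # the epsilon guard against zero deviation is an assumption
            fit = Y[idx]
            w = (fit - fit.mean()) / (fit.std() + 1e-12)
            # Step 3.3: linear combination of the learning objects (formula (3))
            x_new = w @ X[idx]
            y_new = f(x_new)
            evals += 1
            # Replace-if-better update (an assumption of this sketch)
            if y_new < Y[i]:
                X[i], Y[i] = x_new, y_new
            if evals >= max_evals:
                break
    best = int(Y.argmin())
    return X[best], Y[best]
```

Note that the sketch keeps the population update in place, so every new solution can immediately serve as a learning object for the solutions generated after it, which mirrors the idea-sharing step of the brainstorming analogy.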
4 Experiments
In this section, we will present and discuss the experimental settings and results.
4.1 Benchmark Functions
To validate the effectiveness of BSO-CLS, we test it on 6 benchmark functions, including a unimodal function (Sphere) and multimodal functions (Rosenbrock, Ackley, Rastrigin, Schwefel, and Ellipsoid). Furthermore, in order to test the proposed method's performance across various dimensions, we vary the dimension from 10 to 1000 for all benchmark functions. The benchmark functions are summarized in Table 1.
4.2 Comparison Method
This paper aims to propose a novel learning mechanism for generating new candidate solutions in BSO algorithms. Therefore, we choose as comparison methods the most representative SI algorithm, PSO [14], the original BSO algorithm [15], and BSOLS [16], another BSO variant with a learning strategy. The hyperparameter settings for each algorithm are given in Table 2.
4.3 Experimental Setting
In order to compare all algorithms fairly, they are tested with the same population size of 100 and the same maximum number of function evaluations, \(1E4 \times D\), in each run for each test function, where D is the dimension. To reduce statistical error, each algorithm is run 30 times independently, and the mean and standard deviation (std) are reported for comparison.
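As an illustration of this protocol, the following sketch reuses the hypothetical bso_cls and rastrigin functions from the earlier sketches to perform 30 independent runs and report the mean and std; the evaluation budget is reduced from the paper's \(1E4 \times D\) so the sketch runs quickly.

```python
import numpy as np

results = []
for seed in range(30):  # 30 independent runs to reduce statistical error
    np.random.seed(seed)  # fixed seeds make the sketch reproducible
    _, best_y = bso_cls(rastrigin, dim=1000, bounds=(-5.12, 5.12),
                        n_pop=100, max_evals=100_000)  # paper: 1E4 * D evals
    results.append(best_y)

print(f"mean = {np.mean(results):.3e}, std = {np.std(results):.3e}")
```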
4.4 Results and Discussion
The experimental results are given in Table 3, with the best results marked in bold.
As shown in Table 3, we can clearly observe that: (1) the proposed method obtains competitive performance on the low-dimension problems compared with the other methods; (2) the proposed method outperforms all comparison methods on the high-dimension problems; (3) as the dimension increases, the proposed method obtains increasingly better results on the Rosenbrock test function.
5 Conclusion
In this paper, we proposed a novel BSO variant named BSO-CLS, which introduces a simple yet effective cooperative learning mechanism. The key concept is inspired by the idea-generation process of brainstorming, in which participants propose their own ideas by cooperatively learning from the ideas of other participants. Thus, BSO-CLS iteratively updates the candidate solutions by linearly combining other solutions with weights derived from the fitness values of those solutions. The effectiveness of the proposed method was validated on 6 benchmark functions in both low-dimension and high-dimension settings. The empirical results showed that BSO-CLS can outperform the vanilla BSO and another BSO variant with a learning strategy, especially on large-scale optimization problems.
In the future, we plan to validate the proposed method on more complex problems, such as rotated and shifted benchmark functions. We also intend to explore the reasons behind its improved performance on the Rosenbrock function as the dimension increases. Furthermore, we will test the influence of different normalization functions.
References
Cao, Z., Shi, Y., Rong, X., Liu, B., Du, Z., Yang, B.: Random grouping brain storm optimization algorithm with a new dynamically changing step size. In: Tan, Y., Shi, Y., Buarque, F., Gelbukh, A., Das, S., Engelbrecht, A. (eds.) ICSI 2015. LNCS, vol. 9140, pp. 357–364. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-20466-6_38
Cheng, R., Jin, Y.: A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 45(2), 191–204 (2014)
Cheng, S., Qin, Q., Chen, J., Shi, Y.: Brain storm optimization algorithm: a review. Artif. Intell. Rev. 46(4), 445–458 (2016). https://doi.org/10.1007/s10462-016-9471-0
Cheng, S., Shi, Y. (eds.): Brain Storm Optimization Algorithms. ALO, vol. 23. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-15070-9
Dorigo, M., Birattari, M., Stutzle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006)
Eberhart, R.C., Shi, Y., Kennedy, J.: Swarm Intelligence. Elsevier (2001)
Kennedy, J.: Swarm intelligence. In: Zomaya, A.Y. (ed.) Handbook of Nature-Inspired and Innovative Computing, pp. 187–219. Springer, Boston (2006). https://doi.org/10.1007/0-387-27705-6_6
Li, X., Yao, X.: Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans. Evol. Comput. 16(2), 210–224 (2011)
Ma, L., Cheng, S., Shi, Y.: Enhancing learning efficiency of brain storm optimization via orthogonal learning design. IEEE Trans. Syst. Man Cybern. Syst. (2020)
MacQueen, J., et al.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, vol. 1, pp. 281–297 (1967)
Omidvar, M.N., Li, X., Mei, Y., Yao, X.: Cooperative co-evolution with differential grouping for large scale optimization. IEEE Trans. Evol. Comput. 18(3), 378–393 (2013)
Qian, H., Hu, Y.Q., Yu, Y.: Derivative-free optimization of high-dimensional non-convex functions by sequential random embeddings. In: IJCAI, pp. 1946–1952 (2016)
Shi, Y.: Brain storm optimization algorithm in objective space. In: 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 1227–1234, May 2015. https://doi.org/10.1109/CEC.2015.7257029
Shi, Y., Eberhart, R.: A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), pp. 69–73, May 1998. https://doi.org/10.1109/ICEC.1998.699146
Shi, Y.: Brain storm optimization algorithm. In: Tan, Y., Shi, Y., Chai, Y., Wang, G. (eds.) ICSI 2011. LNCS, vol. 6728, pp. 303–309. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21515-5_36
Wang, H., Liu, J., Yi, W., Niu, B., Baek, J.: An improved brain storm optimization with learning strategy. In: Tan, Y., Takagi, H., Shi, Y. (eds.) ICSI 2017. LNCS, vol. 10385, pp. 511–518. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61824-1_56
Zhan, Z., Zhang, J., Shi, Y., Liu, H.: A modified brain storm optimization. In: 2012 IEEE Congress on Evolutionary Computation, pp. 1–8. IEEE (2012)
Acknowledgement
This work is supported by the National Science Foundation of China under the Grant No. 61761136008, the Shenzhen Peacock Plan under the Grant No. KQTD2016112514355531, the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under the Grant No. 2017ZT07X386, the Science and Technology Innovation Committee Foundation of Shenzhen under the Grant No. ZDSYS201703031748284, Guangdong Provincial Key Laboratory under Grant No. 2020B121201001.