Abstract
In this study, we propose a novel deep learning-based method for predicting an optimized structure for a given boundary condition and optimization setting without any iterative scheme. For this purpose, datasets of optimized structures paired with the corresponding boundary conditions and optimization settings are first generated at low (32 × 32) and high (128 × 128) resolutions using open-source topology optimization code. To construct the artificial neural network for the proposed method, a convolutional neural network (CNN)-based encoder-decoder network is trained on the low-resolution training dataset. Then, as a two-stage refinement, a conditional generative adversarial network (cGAN) is trained on pairs of low- and high-resolution optimized structures and is connected to the trained CNN-based encoder-decoder network. The performance evaluation of the integrated network demonstrates that the proposed method determines a near-optimal structure, in terms of both pixel values and compliance, with negligible computational time.
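The pipeline described above amounts to a single forward pass through two stacked networks: a CNN encoder-decoder that predicts a 32 × 32 density field, followed by a cGAN generator that refines it to 128 × 128. Below is a minimal tf.keras sketch of that structure, assuming the boundary conditions and optimization settings are encoded as image-like channels on the 32 × 32 design domain; the layer counts, filter sizes, and conditioning format are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of the two-stage pipeline summarized in the abstract.
# Assumptions (not from the paper): inputs are encoded as 3 image-like
# channels on the 32x32 domain; layer/filter sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_encoder_decoder(in_channels=3):
    """Stage 1: CNN encoder-decoder predicting a 32x32 density field."""
    x_in = layers.Input(shape=(32, 32, in_channels))
    # Encoder: downsample to a compact feature map.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x_in)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: upsample back to the 32x32 design domain.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    rho = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # densities in [0, 1]
    return Model(x_in, rho, name="stage1_encoder_decoder")


def build_upscaling_generator():
    """Stage 2: cGAN generator refining a 32x32 structure to 128x128."""
    lr_in = layers.Input(shape=(32, 32, 1))
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(lr_in)
    x = layers.UpSampling2D(2)(x)   # 32 -> 64
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)   # 64 -> 128
    hr = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return Model(lr_in, hr, name="stage2_generator")


# Integrated network: the stage-1 prediction is fed directly into the
# stage-2 refiner, so a 128x128 structure is obtained in one forward pass.
stage1 = build_encoder_decoder()
stage2 = build_upscaling_generator()
cond_in = layers.Input(shape=(32, 32, 3))
integrated = Model(cond_in, stage2(stage1(cond_in)), name="integrated")
```

In this sketch, the stage-2 generator would first be trained adversarially against a discriminator on paired 32 × 32 / 128 × 128 optimized structures, and only then chained behind the trained stage-1 network, mirroring the two-stage refinement described in the abstract.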
Acknowledgments
This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2018R1C1B6005157) and National Institute of Supercomputing and Network (NISN)/Korea Institute of Science and Technology Information (KISTI) with supercomputing resources including technical support (KSC-2017-S1-0029).
Additional information
Responsible Editor: Hyunsun Alicia Kim
Cite this article
Yu, Y., Hur, T., Jung, J. et al. Deep learning for determining a near-optimal topological design without any iteration. Struct Multidisc Optim 59, 787–799 (2019). https://doi.org/10.1007/s00158-018-2101-5