Layout Congestion Prediction Based on Regression-ViT

Published: 09 November 2024

Abstract

To accelerate the back-end design flow of integrated circuits (ICs), numerous studies have made exploratory advances in applying machine learning (ML) to electronic design automation (EDA). However, most existing work is limited to deep learning (DL) models built predominantly on convolutional neural networks, and these models often generalize poorly because of data scarcity. In this study, we propose a Double Generative Adversarial Network (D-GAN) model to enrich the dataset and a Regression Vision Transformer (R-ViT) model to predict layout congestion. Compared with the baseline model, experimental results show improvements of 3.03% and 2.64% in the area under the receiver operating characteristic curve (ROC-AUC) and the area under the precision-recall curve (PRC-AUC), respectively. To further improve prediction accuracy, an adaptive Huber loss function is designed to optimize the training process, yielding an improvement of up to 11.03% in ROC-AUC over the baseline model. Finally, extended experiments study the effects of model parameters and convolutional kernel size on performance and identify a better-performing configuration.
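For intuition, the sketch below shows one way an "adaptive" Huber loss can be realized for a congestion-regression model: the switching threshold delta is re-estimated from the batch's residual distribution instead of being fixed. This is a minimal illustration only; the quantile-based adaptation rule, the function name, and the parameter q are assumptions for the example, not the formulation used in the paper.

```python
import torch

def adaptive_huber_loss(pred: torch.Tensor, target: torch.Tensor,
                        q: float = 0.9) -> torch.Tensor:
    """Illustrative adaptive Huber loss (not the paper's exact formulation).

    The Huber threshold delta is re-estimated each batch as the q-th
    quantile of the absolute residuals, so the quadratic/linear switch
    point tracks the current error distribution.
    """
    residual = torch.abs(pred - target)
    # Assumed adaptation rule: delta follows the batch residual distribution.
    delta = torch.quantile(residual.detach(), q)
    quadratic = 0.5 * residual ** 2             # small errors: L2-like
    linear = delta * (residual - 0.5 * delta)   # large errors: L1-like
    return torch.where(residual <= delta, quadratic, linear).mean()
```

The intended effect of such an adaptation is to keep small congestion-prediction errors in the quadratic regime while damping the influence of outlier tiles; whether the paper's loss follows this particular rule is not specified here.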


Published In

ACM Transactions on Design Automation of Electronic Systems, Volume 30, Issue 1
January 2025
198 pages
EISSN: 1557-7309
DOI: 10.1145/3697150

Publisher

Association for Computing Machinery

New York, NY, United States


Publication History

Published: 09 November 2024
Online AM: 30 September 2024
Accepted: 15 September 2024
Revised: 30 May 2024
Received: 29 January 2024
Published in TODAES Volume 30, Issue 1


Author Tags

  1. Integrated circuit
  2. layout congestion
  3. deep learning
  4. regression vision transformer
  5. double generative adversarial network

Qualifiers

  • Research-article

Funding Sources

  • Key-Area Research and Development Program of Guangdong Province
