
DOI: 10.1145/3060403.3060423

Design Space Exploration of TAGE Branch Predictor with Ultra-Small RAM

Published: 10 May 2017

Abstract

In embedded processors, the RAM resources available to the branch predictor fall far short of those in desktop or server processors, so designing a high-accuracy predictor within this limited budget has become a pressing challenge. In this paper, we explore the performance of the complex TAGE predictor when it is implemented with ultra-small RAM. We first define the design space exploration problem for TAGE under the constraints of a given RAM size and a maximum global history register length. Then, based on trace-driven simulation, an improved Particle Swarm Optimization algorithm is used to efficiently explore the specific parameters, favoring design points with high prediction accuracy for RAM budgets ranging from 0.125 KB to 4 KB. We found that, for the traces used in this paper, the parameters our algorithm finds under a 1.5 KB budget achieve adequate accuracy; the performance loss is considerably small when the RAM is reduced from 8 KB to 1.5 KB. In addition, the misprediction rate of the 1.5 KB TAGE is 63.41% lower than that of a 1.5 KB Bi-mode predictor, and a 0.25 KB TAGE has almost the same accuracy as a 4 KB GShare.
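
The exploration described in the abstract combines a RAM-cost model for the TAGE tables with an iterative particle swarm search. The sketch below is a minimal, illustrative version of that idea, not the authors' implementation: the parameter ranges, the storage-cost formula, the PSO constants, and the synthetic fitness function (a stand-in for the paper's trace-driven misprediction-rate simulation) are all assumptions made for readability, and the maximum-global-history-length constraint is omitted.

```python
# Illustrative sketch only, not the authors' implementation: a minimal
# particle swarm optimization (PSO) loop searching TAGE design parameters
# under a fixed RAM budget.  Parameter ranges, the storage-cost model, the
# PSO constants, and the stand-in fitness function are assumptions; the
# paper obtains its fitness from trace-driven simulation and also bounds
# the maximum global history register length, which is omitted here.

import random

RAM_BUDGET_BITS = int(1.5 * 1024 * 8)  # e.g. the 1.5 KB configuration from the abstract

# Hypothetical design point: (log2 entries per table, tag width in bits,
# number of tagged tables).
BOUNDS = [(4, 10), (5, 12), (2, 8)]


def ram_cost_bits(log_entries, tag_bits, num_tables):
    """Rough TAGE storage estimate: a 2-bit bimodal base table plus
    `num_tables` tagged components whose entries hold a tag, a 3-bit
    counter and a 2-bit useful field (a simplification of the real layout)."""
    base = (1 << log_entries) * 2
    tagged = num_tables * (1 << log_entries) * (tag_bits + 3 + 2)
    return base + tagged


def fitness(design):
    """Placeholder objective.  In the paper this would be the misprediction
    rate reported by a trace-driven simulator; here infeasible points are
    rejected and a synthetic score is returned so the sketch runs stand-alone."""
    log_entries, tag_bits, num_tables = design
    if ram_cost_bits(log_entries, tag_bits, num_tables) > RAM_BUDGET_BITS:
        return float("inf")  # violates the RAM budget
    return 1.0 / (log_entries * tag_bits * num_tables)  # stand-in, lower is better


def clamp(v, lo, hi):
    return max(lo, min(hi, v))


def pso(num_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Standard inertia-weight PSO over the integer design space above."""
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(num_particles)]
    vel = [[0.0] * len(BOUNDS) for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness([round(x) for x in p]) for p in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(num_particles):
            for d, (lo, hi) in enumerate(BOUNDS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = clamp(pos[i][d] + vel[i][d], lo, hi)
            f = fitness([round(x) for x in pos[i]])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return [round(x) for x in gbest], gbest_f


if __name__ == "__main__":
    best, score = pso()
    print("best design point (log2 entries, tag bits, tagged tables):", best)
```

In the paper's setting, the fitness evaluation would invoke the trace-driven simulator on the benchmark traces, and the search would be repeated for each RAM budget between 0.125 KB and 4 KB.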




Information

Published In

GLSVLSI '17: Proceedings of the Great Lakes Symposium on VLSI 2017
May 2017
516 pages
ISBN:9781450349727
DOI:10.1145/3060403
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 10 May 2017


Author Tags

  1. branch prediction
  2. design space exploration
  3. particle swarm optimization
  4. tage

Qualifiers

  • Research-article

Funding Sources

  • NSFC

Conference

GLSVLSI '17
Sponsor:
GLSVLSI '17: Great Lakes Symposium on VLSI 2017
May 10 - 12, 2017
Banff, Alberta, Canada

Acceptance Rates

GLSVLSI '17 Paper Acceptance Rate: 48 of 197 submissions (24%)
Overall Acceptance Rate: 312 of 1,156 submissions (27%)



Article Metrics

  • Downloads (Last 12 months)26
  • Downloads (Last 6 weeks)4
Reflects downloads up to 27 Feb 2025

Cited By

  • (2024) CMA-BP: A Clustered Multi-Task Learning and Branch Attention Based Branch Predictor. 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1370-1376. DOI: 10.1109/SMC54092.2024.10831163. Online publication date: 6-Oct-2024.
  • (2019) BRB: Mitigating Branch Predictor Side-Channels. 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), pp. 466-477. DOI: 10.1109/HPCA.2019.00058. Online publication date: Feb-2019.
  • (2018) Improving Branch Prediction Accuracy on Multi-Core Architectures for Big Data. 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), pp. 377-382. DOI: 10.1109/BDCloud.2018.00065. Online publication date: Dec-2018.
  • (2017) Effective Optimization of Branch Predictors through Lightweight Simulation. 2017 IEEE International Conference on Computer Design (ICCD), pp. 653-656. DOI: 10.1109/ICCD.2017.114. Online publication date: Nov-2017.
  • (2017) Improving Branch Prediction for Thread Migration on Multi-core Architectures. Network and Parallel Computing, pp. 87-99. DOI: 10.1007/978-3-319-68210-5_8. Online publication date: 20-Oct-2017.
