DOI: 10.1145/3579371.3589348
Research Article, ISCA Conference Proceedings

MTIA: First Generation Silicon Targeting Meta's Recommendation Systems

Published: 17 June 2023

Abstract

Meta has traditionally relied on CPU-based servers for running inference workloads, in particular Deep Learning Recommendation Models (DLRM), but the increasing compute and memory requirements of these models have pushed the company towards specialized solutions such as GPUs and other hardware accelerators. This paper describes the company's effort in building its first silicon designed specifically for recommendation systems: it presents the accelerator architecture and platform design, describes the software stack for enabling and optimizing PyTorch-based models, and provides an initial performance evaluation. With our emerging software stack, we have made significant progress towards matching or exceeding GPU efficiency: we average 0.9x perf/W across various DLRMs, and benchmarks show operators such as GEMMs reaching 2x perf/W. Finally, the paper describes the lessons we learned during this journey, which can improve the performance and programmability of future generations of the architecture.
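The sparse portion of a DLRM is dominated by pooled embedding-table lookups, the operator class the abstract's efficiency claims center on. The following is a minimal, illustrative sketch of that operator in pure Python; the function name, table contents, and shapes are hypothetical, and production DLRMs use PyTorch's `nn.EmbeddingBag` or table-batched kernels rather than anything like this loop.

```python
# Illustrative sketch of DLRM's core sparse operator: a sum-pooled
# embedding-bag lookup. Bag i pools the table rows selected by
# indices[offsets[i] : offsets[i+1]].

def embedding_bag_sum(table, indices, offsets):
    """Return one pooled vector per bag (sum of the selected table rows)."""
    dim = len(table[0])
    bags = []
    # Pair each bag's start offset with the next bag's start (or end of indices).
    for start, end in zip(offsets, list(offsets[1:]) + [len(indices)]):
        pooled = [0.0] * dim
        for idx in indices[start:end]:
            pooled = [p + r for p, r in zip(pooled, table[idx])]
        bags.append(pooled)
    return bags

# Tiny example: a 4-row, 2-dim embedding table and two bags of two indices each.
table = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 1.5]]
indices = [0, 2, 1, 3]   # bag 0 -> rows {0, 2}; bag 1 -> rows {1, 3}
offsets = [0, 2]         # bag boundaries into `indices`

print(embedding_bag_sum(table, indices, offsets))
# -> [[3.0, 2.0], [3.0, 2.5]]
```

The memory-bound, gather-heavy access pattern of this operator (many small random reads followed by a reduction) is what distinguishes DLRM inference from the dense GEMM workloads that GPUs are optimized for, and motivates purpose-built silicon.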




    Published In

    ISCA '23: Proceedings of the 50th Annual International Symposium on Computer Architecture
    June 2023, 1225 pages
    ISBN: 9798400700958
    DOI: 10.1145/3579371

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. accelerators
    2. machine learning
    3. inference
    4. recommendation systems
    5. performance
    6. programmability

    Qualifiers

    • Research-article

    Conference

    ISCA '23

    Acceptance Rates

    Overall Acceptance Rate 543 of 3,203 submissions, 17%


    Article Metrics

    • Downloads (last 12 months): 1,051
    • Downloads (last 6 weeks): 106

    Reflects downloads up to 03 Oct 2024.

    Cited By

    • (2024) Forward Learning of Large Language Models by Consumer Devices. Electronics 13(2):402. DOI: 10.3390/electronics13020402. Online publication date: 18-Jan-2024.
    • (2024) Tandem Processor: Grappling with Emerging Operators in Neural Networks. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, pp. 1165-1182. DOI: 10.1145/3620665.3640365. Online publication date: 27-Apr-2024.
    • (2024) Intel Accelerators Ecosystem: An SoC-Oriented Perspective: Industry Product. 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pp. 848-862. DOI: 10.1109/ISCA59077.2024.00066. Online publication date: 29-Jun-2024.
    • (2024) MAD-Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems. 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pp. 818-833. DOI: 10.1109/ISCA59077.2024.00064. Online publication date: 29-Jun-2024.
    • (2024) FEATHER: A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching. 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pp. 198-214. DOI: 10.1109/ISCA59077.2024.00024. Online publication date: 29-Jun-2024.
    • (2024) Data Motion Acceleration: Chaining Cross-Domain Multi Accelerators. 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 1043-1062. DOI: 10.1109/HPCA57654.2024.00083. Online publication date: 2-Mar-2024.
    • (2024) Efficient Approaches for GEMM Acceleration on Leading AI-Optimized FPGAs. 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), pp. 54-65. DOI: 10.1109/FCCM60383.2024.00015. Online publication date: 5-May-2024.
