DOI: 10.1145/3528233.3530722
Research article

ShaderTransformer: Predicting Shader Quality via One-shot Embedding for Fast Simplification

Published: 24 July 2022

Abstract

Given specific scene configurations and target functions, automatic shader simplification searches for the best simplified shader variant in an optimization space with many candidates. Although various speedup methods have been proposed, a costly render-and-evaluate process is still required to obtain each variant's performance and quality, especially when the scene changes.
In this paper, we present a deep learning-based framework for predicting a shader's simplification space, in which all of the shader's variants are embedded into a metric space at once for efficient quality evaluation. The framework thus enables one-shot embedding of an entire space rather than of a single instance. In addition, simplification errors can be interpreted through mutual attention between shader fragments, yielding an informative, focus-aware simplification framework that can assist experts in optimizing their code. The results show that the new framework achieves a significant speedup over existing search approaches, and the focus-aware simplification framework reveals new possibilities for interpreting shaders in various applications.
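
To make the one-shot embedding idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the paper's actual architecture): a transformer encoder embeds every tokenized variant of a shader into a shared metric space in a single batched forward pass, and each variant's simplification error is then approximated by its distance to the embedding of a reference variant. All names and hyperparameters (ShaderSpaceEncoder, the toy vocabulary, mean pooling, Euclidean distance) are illustrative assumptions.

# Hypothetical sketch of one-shot embedding of a shader's simplification space.
# Not the authors' implementation; names, sizes, and the distance metric are assumptions.
import torch
import torch.nn as nn

class ShaderSpaceEncoder(nn.Module):
    """Maps a batch of tokenized shader variants into a shared metric space."""
    def __init__(self, vocab_size=512, d_model=128, nhead=4, num_layers=2, embed_dim=64):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.project = nn.Linear(d_model, embed_dim)

    def forward(self, tokens):                      # tokens: (num_variants, seq_len) int64
        x = self.token_embed(tokens)                # (num_variants, seq_len, d_model)
        x = self.encoder(x)                         # contextualize shader fragments
        x = x.mean(dim=1)                           # pool fragments into one vector per variant
        return self.project(x)                      # (num_variants, embed_dim)

if __name__ == "__main__":
    torch.manual_seed(0)
    num_variants, seq_len = 8, 32
    tokens = torch.randint(0, 512, (num_variants, seq_len))  # placeholder shader tokens
    model = ShaderSpaceEncoder()                    # untrained; positional encoding omitted for brevity
    with torch.no_grad():
        embeddings = model(tokens)                  # one forward pass embeds the whole variant space
    reference = embeddings[0]                       # variant 0 stands in for the original shader
    error = torch.norm(embeddings - reference, dim=1)   # predicted quality loss per variant
    ranking = torch.argsort(error)                  # smaller distance = higher predicted quality
    print("variants ranked by predicted quality:", ranking.tolist())

In a trained system of this kind, the ranking by embedding distance would stand in for the per-variant render-and-evaluate step, and the encoder's attention weights between shader fragments could be inspected to localize where simplification error originates; the training loss and the attention analysis are omitted from the sketch.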

Supplementary Material

  • Supplemental file (supplementary_document.pdf)
  • Shader variant codes (shader-variant-codes.zip)
  • Supplemental video (variants_on_Pareto.mp4)

Published In

SIGGRAPH '22: ACM SIGGRAPH 2022 Conference Proceedings
July 2022
553 pages
ISBN:9781450393379
DOI:10.1145/3528233
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 July 2022

Permissions

Request permissions for this article.

Author Tags

  1. Real-time Rendering
  2. Shader Simplification
  3. Transformer

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SIGGRAPH '22

Acceptance Rates

Overall Acceptance Rate 1,822 of 8,601 submissions, 21%

Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (last 12 months): 98
  • Downloads (last 6 weeks): 3
Reflects downloads up to 23 Nov 2024

Citations

Cited By

  • (2024) Development of Software for 3D Well Visualization Modeling Using Acoustic, Gamma, Neutron and Density Logging for Fossil Energy Sources Sustainable Production. Energies 17(3), 613. https://doi.org/10.3390/en17030613. Online publication date: 26-Jan-2024.
  • (2024) X-TED: Massive Parallelization of Tree Edit Distance. Proceedings of the VLDB Endowment 17(7), 1683–1696. https://doi.org/10.14778/3654621.3654634. Online publication date: 30-May-2024.
  • (2024) ShaderPerFormer: Platform-independent Context-aware Shader Performance Predictor. Proceedings of the ACM on Computer Graphics and Interactive Techniques 7(1), 1–17. https://doi.org/10.1145/3651295. Online publication date: 13-May-2024.
  • (2023) Development of a Software Tool for Visualizing a Mine (Wellbore) in the Industrial Drilling of Oil Wells. Processes 11(2), 624. https://doi.org/10.3390/pr11020624. Online publication date: 18-Feb-2023.
