DOI: 10.1145/3637528.3671890
Research article · Open access

Attacking Graph Neural Networks with Bit Flips: Weisfeiler and Leman Go Indifferent

Published: 24 August 2024

Abstract

Prior attacks on graph neural networks have focused on graph poisoning and evasion, neglecting the network's weights and biases. For convolutional neural networks, however, the risk arising from bit flip attacks is well recognized. We show that the direct application of a traditional bit flip attack to graph neural networks has limited effectiveness. Hence, we introduce the Injectivity Bit Flip Attack, the first bit flip attack designed specifically for graph neural networks. Our attack targets the learnable neighborhood aggregation functions in quantized message passing neural networks, degrading their ability to distinguish graph structures and impairing their Weisfeiler-Leman expressivity. We find that exploiting mathematical properties specific to certain graph neural networks significantly increases their vulnerability to bit flip attacks. The Injectivity Bit Flip Attack can degrade maximally expressive Graph Isomorphism Networks trained on graph property prediction datasets to random output by flipping only a small fraction of the network's bits, demonstrating its higher destructive power compared to traditional bit flip attacks transferred from convolutional neural networks. Our attack is transparent, motivated by theoretical insights, and confirmed by extensive empirical results.
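The abstract's core idea — that flipping a few bits in a quantized network's weights can destroy the injectivity of sum-style neighborhood aggregation — can be illustrated with a toy sketch. This is not the paper's attack algorithm (which uses gradient-based bit search over a trained quantized network); the `flip_bit` helper, the one-hot node features, and the single ReLU "neuron" below are illustrative assumptions only.

```python
import numpy as np

def flip_bit(w: int, bit: int) -> int:
    """Flip one bit of an int8 (two's-complement) quantized weight."""
    u = np.array(w, dtype=np.int8).view(np.uint8)
    return int((u ^ np.uint8(1 << bit)).view(np.int8))

# A single flip of the sign bit turns a small positive weight into a
# large-magnitude negative one.
w = 3
w_bad = flip_bit(w, 7)  # 3 -> -125

# Two distinct neighborhoods with one-hot node features:
# the multiset {a, a, b, b} versus {a, b}.
nbrs_big = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=np.int64)
nbrs_small = np.array([[1, 0], [0, 1]], dtype=np.int64)

# Sum aggregation (GIN-style) separates the two multisets; mean does not.
sum_big, sum_small = nbrs_big.sum(axis=0), nbrs_small.sum(axis=0)      # [2,2] vs [1,1]
mean_big, mean_small = nbrs_big.mean(axis=0), nbrs_small.mean(axis=0)  # both [0.5,0.5]

relu = lambda x: np.maximum(x, 0)
# Intact weight: the two aggregated sums remain distinguishable.
out_big, out_small = relu(w * sum_big), relu(w * sum_small)            # [6,6] vs [3,3]
# Flipped weight: ReLU zeroes both outputs, so this toy update
# "goes indifferent" -- the structural distinction is lost.
bad_big, bad_small = relu(w_bad * sum_big), relu(w_bad * sum_small)    # [0,0] twice
```

Note that mean aggregation already conflates the two neighborhoods before any attack, which is why GIN's injective sum aggregation is both more expressive and, per the abstract, the more rewarding target for bit flips.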

Supplemental Material

MP4 File - Attacking Graph Neural Networks with Bit Flips: Promotional Video
In our promotional video, we introduce the expressivity of Graph Neural Networks (GNNs) as a potential security vulnerability exploitable by attackers using gradient-based bit-search methods. We demonstrate that our dedicated attack can degrade GNNs significantly faster than attacks adapted from Convolutional Neural Networks (CNNs). Additionally, our attack results in distinct distributions of bit flips across the network compared to existing methods.



Published In

KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2024, 6901 pages
ISBN: 9798400704901
DOI: 10.1145/3637528
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. bit flip attacks
  2. graph neural network



Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions (13%)

