Research Article
DOI: 10.1145/3503161.3548390

Rethinking the Vulnerability of DNN Watermarking: Are Watermarks Robust against Naturalness-aware Perturbations?

Published: 10 October 2022

Abstract

Training Deep Neural Networks (DNNs) is a time-consuming process that requires a large amount of training data, which has motivated studies on protecting the intellectual property (IP) of DNN models through various watermarking techniques. Unfortunately, in recent years, adversaries have been exploiting the vulnerabilities of these watermarking techniques to remove the embedded watermarks. In this paper, we investigate and introduce a novel watermark removal attack, called AdvNP, against all four existing types of DNN watermarking schemes via input preprocessing, by injecting Adversarial Naturalness-aware Perturbations. In contrast to prior studies, our proposed method is the first that generalizes well to all four existing types of watermarking schemes without involving any model modification, which preserves the fidelity of the target model. We conduct experiments against four state-of-the-art (SOTA) watermarking schemes on two real tasks (i.e., image classification on ImageNet and face recognition on CelebA) across multiple DNN models. Overall, our proposed AdvNP significantly invalidates the watermarks of the four watermarking schemes on the two real-world datasets, achieving an average attack success rate of 60.9% and up to 97% in the worst case. Moreover, AdvNP survives image denoising techniques well and outperforms the baseline in both fidelity preservation and watermark removal. Furthermore, we introduce two defense methods to enhance the robustness of DNN watermarking against AdvNP. Our experimental results pose real threats to existing watermarking schemes and call for more practical and robust watermarking techniques to protect the copyright of pre-trained DNN models. The source code and models are available at https://github.com/GitKJ123/AdvNP.
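
To make the threat model above concrete, the following is a minimal, illustrative PyTorch sketch of input-preprocessing-based watermark removal. The class name and the simple gamma/gain "relighting" transform are hypothetical stand-ins chosen for illustration, not the paper's actual AdvNP perturbation generator. It demonstrates the point stated in the abstract: the adversary never modifies the model's weights, so fidelity on clean inputs is largely preserved, while verification queries (e.g., backdoor trigger images) are perturbed before they reach the model and may no longer activate the embedded watermark.

import torch
import torch.nn as nn


class PreprocessedModel(nn.Module):
    """Wraps a (stolen) watermarked model so that every query is first passed
    through a naturalness-aware transform. The wrapped model itself is never
    modified (illustrative sketch only; not the paper's AdvNP generator)."""

    def __init__(self, model: nn.Module, gamma: float = 0.8, gain: float = 1.1):
        super().__init__()
        self.model = model
        self.gamma = gamma  # hypothetical relighting-style parameters
        self.gain = gain

    def relight(self, x: torch.Tensor) -> torch.Tensor:
        # Simple gamma/gain adjustment as a stand-in for a learned
        # naturalness-aware perturbation; inputs are assumed to lie in [0, 1].
        return torch.clamp(self.gain * x.clamp(min=1e-6) ** self.gamma, 0.0, 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Both clean queries and the owner's watermark-verification queries
        # are transformed before inference, e.g.:
        #   wrapped = PreprocessedModel(stolen_classifier)
        #   logits = wrapped(batch_of_images)
        return self.model(self.relight(x))

Because such perturbations are designed to look natural rather than noise-like, simple countermeasures such as image denoising are less likely to undo them, which is consistent with the abstract's observation that AdvNP survives denoising techniques.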

Supplementary Material

MP4 File (MM22-fp2962.mp4)
Here is the video presentation of our work: Rethinking the Vulnerability of DNN Watermarking: Are Watermarks Robust against Naturalness-aware Perturbations? In this paper, we investigate and introduce a novel watermark removal attack, called AdvNP, against all four existing types of DNN watermarking schemes via input preprocessing, by injecting Adversarial Naturalness-aware Perturbations. We conduct experiments against four state-of-the-art watermarking schemes on two real tasks (i.e., image classification on ImageNet and face recognition on CelebA) across multiple DNN models. Overall, our proposed AdvNP invalidates the watermarks of the four watermarking schemes on two real-world datasets, survives image denoising techniques well, and outperforms the baseline in both fidelity preservation and watermark removal. We also introduce two defense methods to enhance the robustness of DNN watermarking against AdvNP. Thanks for watching our presentation.


Cited By

  • (2023) A Privacy-Preserving Testing Framework for Copyright Protection of Deep Learning Models. Electronics, 13(1), 133. DOI: 10.3390/electronics13010133. Online publication date: 28-Dec-2023.
  • (2023) Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks. Proceedings of the 31st ACM International Conference on Multimedia, 8463-8474. DOI: 10.1145/3581783.3612331. Online publication date: 26-Oct-2023.
  • (2023) What can Discriminator do? Towards Box-free Ownership Verification of Generative Adversarial Networks. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 4986-4996. DOI: 10.1109/ICCV51070.2023.00462. Online publication date: 1-Oct-2023.


Information

Published In

MM '22: Proceedings of the 30th ACM International Conference on Multimedia
October 2022
7537 pages
ISBN:9781450392037
DOI:10.1145/3503161
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 10 October 2022


Author Tags

  1. DNN watermarking
  2. naturalness-aware perturbations
  3. relighting

Qualifiers

  • Research-article


Conference

MM '22

Acceptance Rates

Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


