Abstract
Modern AI is largely driven by machine learning. Recent machine learning algorithms such as deep neural networks (DNNs) have become highly effective in many recognition tasks, e.g., object recognition, face recognition, and speech recognition. Due to their effectiveness, these models already serve users in the real world. To handle service requests from a large number of users and to meet round-the-clock demand, these models are usually hosted on cloud platforms (e.g., Microsoft Azure ML Studio). Hosting a model on the cloud raises security concerns: during transit to the cloud, a malicious third party can alter the model, or the cloud provider itself may apply lossy compression to the model to manage server resources efficiently. We propose a method to detect such model compromises via sensitive samples. Finding the best sensitive sample reduces to an optimization problem in which the sensitive sample maximizes the difference between the predictions of the original and the modified model. The optimization problem is challenging because (1) the altered model is unknown, (2) the sensitive sample must be searched for in a high-dimensional data space, and (3) the problem is non-convex. To overcome these challenges, we first use a variational autoencoder to transform the high-dimensional data into a non-linear low-dimensional space, and then use Bayesian optimization to find the optimal sensitive sample. Our proposed method generates a sensitive sample that can detect model compromise without incurring the cost of many queries.
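The sketch below illustrates the pipeline the abstract describes: search a VAE latent space with Bayesian optimization for a sample whose prediction is maximally sensitive to the model's weights. It is a minimal toy, not the paper's implementation: since the altered model is unknown at search time, we assume a gradient-norm sensitivity proxy (the norm of the gradient of the top logit with respect to the clean model's weights) as the objective, and we use untrained toy networks for the decoder and classifier. The names `latent_dim`, `neg_sensitivity`, and the latent search box are illustrative assumptions; `skopt.gp_minimize` stands in for the paper's BO routine.

```python
import torch
import torch.nn as nn
from skopt import gp_minimize  # GP-based Bayesian optimization

torch.manual_seed(0)

# Toy stand-ins (hypothetical sizes): the real method uses a VAE decoder
# trained on the data distribution and the actual DNN to be deployed.
latent_dim, input_dim, n_classes = 4, 16, 3
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                        nn.Linear(32, input_dim))    # VAE decoder g(z)
model = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU(),
                      nn.Linear(32, n_classes))      # clean model f_W(x)

def neg_sensitivity(z):
    # Decode the low-dimensional candidate into a data-space sample.
    x = decoder(torch.tensor(z, dtype=torch.float32))
    logits = model(x)
    # Proxy objective (assumption): gradient norm of the top logit w.r.t.
    # the clean model's weights. A sample whose prediction is highly
    # sensitive to weight changes is likely to expose a tampered model.
    # gp_minimize minimizes, hence the negation.
    grads = torch.autograd.grad(logits.max(), list(model.parameters()))
    g = torch.cat([gr.flatten() for gr in grads])
    return -g.norm().item()

# Bayesian optimization over a bounded box in the VAE latent space.
res = gp_minimize(neg_sensitivity,
                  dimensions=[(-3.0, 3.0)] * latent_dim,
                  n_calls=40, random_state=0)
with torch.no_grad():
    sensitive_sample = decoder(torch.tensor(res.x, dtype=torch.float32))
```

At verification time, the owner queries the cloud-hosted model with the sensitive sample and compares the returned prediction against the locally stored original model's prediction; a mismatch signals a compromised model, and only a few such queries are needed.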
Acknowledgment
This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Kuttichira, D.P., Gupta, S., Nguyen, D., Rana, S., Venkatesh, S. (2019). Detection of Compromised Models Using Bayesian Optimization. In: Liu, J., Bailey, J. (eds.) AI 2019: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11919. Springer, Cham. https://doi.org/10.1007/978-3-030-35288-2_39