
FirewaLLM: A Portable Data Protection and Recovery Framework for LLM Services

  • Conference paper
Data Mining and Big Data (DMBD 2023)

Abstract

In the era of rapidly proliferating large language models such as ChatGPT and GPT-4, concern about user privacy is mounting. These models can inadvertently expose sensitive information, including personal identities, health details, and financial data; such exposure or misuse can cause serious privacy breaches and leave model owners facing legal liability. This underscores the need to strengthen and evaluate data privacy and security protections for large language models. Yet a comprehensive framework for safeguarding user security and privacy is still absent, as are established standards for evaluating the privacy and security of large predictive models. To address this gap, we propose FirewaLLM, a portable framework that protects user data within Large Language Model services through data protection and recovery measures that mitigate potential vulnerabilities. In this framework, users employ a smaller model to locally desensitize the sensitive parts of a text before submitting it to the large language model, so potentially identifying information is obfuscated before it ever leaves the user's side. The responses returned by the large language model are then matched against the original local text to restore the private information, yielding the desired output while preserving the confidentiality of sensitive data. We further introduce a bespoke benchmark that assesses large language models from two perspectives: security and accuracy. Using this benchmark, we conduct a detailed evaluation and analysis of the security of our local text desensitization tool in conjunction with ChatGPT-3.5. In summary, this work addresses the pressing privacy concerns associated with large language models by employing a relatively small model for local desensitization, providing a robust safeguard for user data and a practical approach to evaluating model performance. We believe this study holds significant practical implications for upholding user privacy and data security within the context of LLM services. FirewaLLM is publicly released at https://github.com/ysy1216/FirewaLLM.
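The abstract describes a two-stage workflow: a smaller local model masks the sensitive spans of the input before it is sent to the remote LLM, and the LLM's response is matched against the locally retained original to restore the masked details. The sketch below illustrates that flow in Python under the assumption that the sensitive spans have already been identified; the function names, placeholder format, and the query_llm call are hypothetical illustrations, not the API of the released FirewaLLM code.

```python
# Minimal sketch of the desensitize -> query -> restore flow described above.
# All names here (desensitize, restore, query_llm, [ENTITY_i] placeholders)
# are hypothetical; the released FirewaLLM implementation may differ.
from typing import Dict, List, Tuple


def desensitize(text: str, sensitive_spans: List[str]) -> Tuple[str, Dict[str, str]]:
    """Replace locally detected sensitive spans with opaque placeholders.

    In the paper's framework a smaller local model detects the spans;
    here they are assumed to be given.
    """
    mapping: Dict[str, str] = {}
    masked = text
    for i, span in enumerate(sensitive_spans):
        placeholder = f"[ENTITY_{i}]"
        mapping[placeholder] = span
        masked = masked.replace(span, placeholder)
    return masked, mapping


def restore(response: str, mapping: Dict[str, str]) -> str:
    """Map placeholders in the LLM response back to the original private text."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response


# Usage: only the masked text leaves the user's machine.
masked, mapping = desensitize(
    "Patient John Doe, SSN 123-45-6789, reports chest pain.",
    sensitive_spans=["John Doe", "123-45-6789"],
)
print(masked)  # Patient [ENTITY_0], SSN [ENTITY_1], reports chest pain.
# response = query_llm(masked)        # remote LLM call (not implemented in this sketch)
# print(restore(response, mapping))   # private details re-inserted locally
```

Restoring by exact placeholder matching keeps the remote model unaware of the real values while still letting the user read a fully specified answer locally.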

Supported by the National Natural Science Foundation of China (No. 62102108, No. 62372120) and the Natural Science Foundation of Guangdong Province of China (No. 2022A1515010061).



Author information


Correspondence to Shaowei Wang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Huang, B. et al. (2024). FirewaLLM: A Portable Data Protection and Recovery Framework for LLM Services. In: Tan, Y., Shi, Y. (eds) Data Mining and Big Data. DMBD 2023. Communications in Computer and Information Science, vol 2018. Springer, Singapore. https://doi.org/10.1007/978-981-97-0844-4_2


  • DOI: https://doi.org/10.1007/978-981-97-0844-4_2

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0843-7

  • Online ISBN: 978-981-97-0844-4

  • eBook Packages: Computer Science, Computer Science (R0)
