
Verifiable Fairness: Privacy–preserving Computation of Fairness for Machine Learning Systems

  • Conference paper
  • First Online:
Computer Security. ESORICS 2023 International Workshops (ESORICS 2023)

Abstract

Fair machine learning is a thriving and vibrant research topic. In this paper, we propose Fairness as a Service (FaaS), a secure, verifiable and privacy-preserving protocol to compute and verify the fairness of any machine learning (ML) model. In the design of FaaS, the data and outcomes are represented through cryptograms to ensure privacy, and zero-knowledge proofs guarantee the well-formedness of the cryptograms and the underlying data. FaaS is model-agnostic and can support various fairness metrics; hence, it can be used as a service to audit the fairness of any ML model. Our solution requires no trusted third party or private channels for the computation of the fairness metric. The security guarantees and commitments are implemented so that every step of the process is transparent and verifiable from start to end. The cryptograms of all input data are publicly available for everyone, e.g., auditors, social activists and experts, to verify the correctness of the process. We implemented FaaS to investigate its performance and demonstrate its successful use on a publicly available dataset with thousands of entries.
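The abstract describes auditing a model against a group fairness metric. As an illustrative sketch only (not the authors' implementation, which operates over cryptograms rather than plaintext records), the plaintext version of one such metric, demographic parity difference between two protected groups, can be computed as:

```python
from collections import defaultdict

def demographic_parity_difference(records):
    """Return |P(y=1 | group=a) - P(y=1 | group=b)| for two protected groups.

    records: iterable of (group, outcome) pairs, with outcome in {0, 1}.
    In FaaS these per-group counts would be recovered from publicly
    verifiable cryptograms rather than raw data.
    """
    totals = defaultdict(int)     # records seen per group
    positives = defaultdict(int)  # favourable outcomes per group
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    a, b = sorted(rates)          # the two group labels
    return abs(rates[a] - rates[b])

# Hypothetical audit data: (protected group, model decision)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 0),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(demographic_parity_difference(data))  # 0.5 - 0.25 = 0.25
```

The function names and data here are hypothetical; the point is that the metric reduces to per-group counts, which is what makes it amenable to the homomorphic tallying over cryptograms that the paper proposes.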



Acknowledgement

The authors of this project were funded by UK EPSRC grant "FinTrust: Trust Engineering for the Financial Industry" (EP/R033595/1); UK EPSRC grant "AGENCY: Assuring Citizen Agency in a World with Complex Online Harms" (EP/W032481/1); and the PETRAS National Centre of Excellence for IoT Systems Cybersecurity, funded by UK EPSRC under grant number EP/S035362/1.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Ehsan Toreini.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Toreini, E., Mehrnezhad, M., van Moorsel, A. (2024). Verifiable Fairness: Privacy–preserving Computation of Fairness for Machine Learning Systems. In: Katsikas, S., et al. Computer Security. ESORICS 2023 International Workshops. ESORICS 2023. Lecture Notes in Computer Science, vol 14399. Springer, Cham. https://doi.org/10.1007/978-3-031-54129-2_34

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-54129-2_34

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-54128-5

  • Online ISBN: 978-3-031-54129-2

  • eBook Packages: Computer Science (R0)
