Abstract
Fair machine learning is a thriving and vibrant research topic. In this paper, we propose Fairness as a Service (FaaS), a secure, verifiable and privacy-preserving protocol to compute and verify the fairness of any machine learning (ML) model. In the design of FaaS, the data and outcomes are represented through cryptograms to preserve privacy, and zero-knowledge proofs guarantee the well-formedness of the cryptograms and the underlying data. FaaS is model-agnostic and supports various fairness metrics; hence, it can be used as a service to audit the fairness of any ML model. Our solution requires no trusted third party or private channels for the computation of the fairness metric. The security guarantees and commitments are implemented so that every step of the process is transparent and verifiable from start to end. The cryptograms of all input data are publicly available, so anyone, e.g., auditors, social activists and experts, can verify the correctness of the process. We implemented FaaS to investigate its performance and demonstrate its successful use on a publicly available dataset with thousands of entries.
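To make the notion of a "fairness metric" concrete, the sketch below computes demographic parity difference, one standard group fairness metric that a metric-agnostic service like FaaS could support. This is an illustration only, not the paper's protocol: it operates on plaintext outcomes, whereas FaaS computes over cryptograms; the function name and toy data are our own.

```python
# Illustrative only: demographic parity difference, one group fairness
# metric an auditing service could compute. A model satisfies demographic
# parity when the rate of positive outcomes is equal across groups.

def demographic_parity_difference(outcomes, groups):
    """outcomes: 0/1 model decisions; groups: protected attribute per record."""
    tallies = {}  # group -> (positive count, total count)
    for y, g in zip(outcomes, groups):
        pos, n = tallies.get(g, (0, 0))
        tallies[g] = (pos + y, n + 1)
    positive_rates = [pos / n for pos, n in tallies.values()]
    # 0.0 means perfect parity; larger values mean a bigger gap.
    return max(positive_rates) - min(positive_rates)

# Toy audit data: group "a" receives positives at 3/4, group "b" at 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

In the actual protocol, the per-group tallies would be derived from publicly posted cryptograms rather than raw outcomes, so any observer could recompute and verify the metric without learning individual records.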
Acknowledgement
The authors of this work were funded by the UK EPSRC grant "FinTrust: Trust Engineering for the Financial Industry" under grant number EP/R033595/1, the UK EPSRC grant "AGENCY: Assuring Citizen Agency in a World with Complex Online Harms" under grant number EP/W032481/1, and the PETRAS National Centre of Excellence for IoT Systems Cybersecurity, funded by the UK EPSRC under grant number EP/S035362/1.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Toreini, E., Mehrnezhad, M., van Moorsel, A. (2024). Verifiable Fairness: Privacy–preserving Computation of Fairness for Machine Learning Systems. In: Katsikas, S., et al. Computer Security. ESORICS 2023 International Workshops. ESORICS 2023. Lecture Notes in Computer Science, vol 14399. Springer, Cham. https://doi.org/10.1007/978-3-031-54129-2_34
Print ISBN: 978-3-031-54128-5
Online ISBN: 978-3-031-54129-2