Abstract
Representation learning is a core component of any machine learning model, yet learning privacy-preserving, discriminative representations that are invariant to nuisance factors remains an open problem. Achieving this requires removing sensitive information from the learned representation; such privacy-preserving representations are believed to benefit medical and federated learning applications. In this paper, we propose a framework for learning invariant fair representations by decomposing the learned representation into a target code and a sensitive code, and imposing an entropy-maximization constraint that forces the target code to be invariant to sensitive information. We evaluate the proposed model on three applications derived from two medical datasets, for autism detection and healthcare insurance. We compare against two existing methods and achieve state-of-the-art performance on the trade-off between task performance and sensitive-information leakage. We conclude with a discussion of the difficulties of applying fair representation learning to medical data, and of when it is desirable.
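The abstract describes the mechanism only at a high level. As a rough illustration of how an entropy-maximization constraint on a decomposed representation can be wired up, below is a minimal PyTorch sketch; the encoder layout, the head names (task_head, sens_head, sens_probe), the dimensions, and the weight lam are all our own assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch layout, not the authors' code):
# decompose x into a target code z_t and a sensitive code z_s, and
# penalize the negative entropy of a sensitive-attribute probe on z_t.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedEncoder(nn.Module):
    """Encodes input x into a target code z_t and a sensitive code z_s."""
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.to_target = nn.Linear(128, code_dim)     # z_t: task-relevant
        self.to_sensitive = nn.Linear(128, code_dim)  # z_s: sensitive info

    def forward(self, x):
        h = self.shared(x)
        return self.to_target(h), self.to_sensitive(h)

def entropy_maximization_loss(sensitive_logits_from_zt):
    """Negative entropy of p(s | z_t); minimizing it pushes the
    sensitive-attribute prediction from the target code toward a
    uniform distribution, i.e. z_t carries no information about s."""
    p = F.softmax(sensitive_logits_from_zt, dim=1)
    log_p = F.log_softmax(sensitive_logits_from_zt, dim=1)
    return (p * log_p).sum(dim=1).mean()  # = -H(p), averaged over batch

# One illustrative training step (all sizes and labels are synthetic):
enc = DecomposedEncoder(in_dim=100, code_dim=32)
task_head = nn.Linear(32, 2)   # predicts the target label from z_t
sens_head = nn.Linear(32, 2)   # predicts the sensitive attribute from z_s
sens_probe = nn.Linear(32, 2)  # probes the sensitive attribute from z_t

x = torch.randn(16, 100)
y = torch.randint(0, 2, (16,))  # target labels
s = torch.randint(0, 2, (16,))  # sensitive-attribute labels
lam = 1.0                       # assumed trade-off weight

z_t, z_s = enc(x)
loss = (F.cross_entropy(task_head(z_t), y)      # target task on z_t
        + F.cross_entropy(sens_head(z_s), s)    # sensitive task on z_s
        + lam * entropy_maximization_loss(sens_probe(z_t)))
loss.backward()
```

Note that in practice the probe is usually trained in alternation to predict s while the encoder minimizes the entropy term; the sketch collapses both into one step for brevity.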
Notes
1. Heritage Health dataset, www.heritagehealthprize.com.
Acknowledgments
S.A. is supported by the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Sarhan, M.H., Navab, N., Eslami, A., Albarqouni, S. (2020). On the Fairness of Privacy-Preserving Representations in Medical Applications. In: Albarqouni, S., et al. (eds) Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning. DART 2020, DCL 2020. Lecture Notes in Computer Science, vol 12444. Springer, Cham. https://doi.org/10.1007/978-3-030-60548-3_14
DOI: https://doi.org/10.1007/978-3-030-60548-3_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60547-6
Online ISBN: 978-3-030-60548-3
eBook Packages: Computer Science, Computer Science (R0)