Abstract
Recent research has identified discriminatory behavior of automated prediction algorithms towards groups defined by specific protected attributes (e.g., gender, ethnicity, age group). When deployed in real-world scenarios, such techniques may produce biased predictions that result in unfair outcomes. Recent literature has proposed algorithms for mitigating such biased behavior, mostly by adding convex surrogates of fairness metrics such as demographic parity or equalized odds to the loss function; these surrogates are often difficult to estimate. This research proposes a novel in-processing GroupMixNorm layer for mitigating bias in deep learning models. The GroupMixNorm layer probabilistically mixes group-level feature statistics of samples across different groups defined by the protected attribute. The proposed method improves upon several fairness metrics with minimal impact on overall accuracy. Analysis on benchmark tabular and image datasets demonstrates the efficacy of the proposed method in achieving state-of-the-art performance. Further, the experimental analysis also suggests the robustness of the GroupMixNorm layer against new protected attributes during inference and its utility in eliminating bias from a pre-trained network.
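The following is a minimal PyTorch sketch of the core idea as described in the abstract: normalize each sample's features with its own protected group's batch statistics, then re-scale with statistics mixed across groups. This is not the authors' reference implementation; the Beta-sampled mixing coefficient, the random pairing of samples, and all names and shapes below are illustrative assumptions.

```python
# Sketch of a GroupMixNorm-style layer: per-group normalization followed by
# probabilistic mixing of group-level statistics. Assumptions, not the paper's code.
import torch
import torch.nn as nn


class GroupMixNorm(nn.Module):
    def __init__(self, num_features: int, alpha: float = 0.2, eps: float = 1e-5):
        super().__init__()
        self.alpha = alpha          # Beta-distribution parameter (assumed)
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); group: (batch,) protected-attribute ids
        if not self.training:
            # At inference the layer reduces to plain normalization + affine.
            mu = x.mean(0, keepdim=True)
            var = x.var(0, keepdim=True, unbiased=False)
            return self.gamma * (x - mu) / torch.sqrt(var + self.eps) + self.beta

        # Per-group feature statistics within the mini-batch.
        stats = {}
        for g in group.unique():
            xg = x[group == g]
            stats[int(g)] = (xg.mean(0), xg.var(0, unbiased=False))

        # Normalize each sample with its own group's statistics.
        mu_own = torch.stack([stats[int(g)][0] for g in group])
        var_own = torch.stack([stats[int(g)][1] for g in group])
        x_norm = (x - mu_own) / torch.sqrt(var_own + self.eps)

        # Mix group statistics across randomly paired samples with
        # lam ~ Beta(alpha, alpha), blurring group-specific feature shifts.
        lam = torch.distributions.Beta(self.alpha, self.alpha).sample().to(x.device)
        perm = torch.randperm(x.size(0), device=x.device)
        mu_mix = lam * mu_own + (1 - lam) * mu_own[perm]
        var_mix = lam * var_own + (1 - lam) * var_own[perm]

        # Re-scale with the mixed statistics, then apply the learned affine map.
        x_mixed = x_norm * torch.sqrt(var_mix + self.eps) + mu_mix
        return self.gamma * x_mixed + self.beta
```

In this sketch the layer would be inserted after an intermediate representation of the classifier during training, so that downstream layers see features whose group-dependent statistics have been randomized.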