Abstract
The maximum mean discrepancy (MMD), a representative metric of the distribution discrepancy between the source and target domains, has been widely applied in unsupervised domain adaptation (UDA), where the two domains follow different distributions and labels are available only for the source domain. However, MMD and its class-wise variants may ignore intra-class compactness and thus weaken the discriminability of the feature representation. In this paper, we endeavor to improve the discriminative ability of MMD from two aspects: 1) we re-design the weights for MMD so as to align the distributions of relatively hard classes across domains; 2) we explore a focally contrastive loss that trades off positive and negative sample pairs for better discrimination. The integration of the two losses pulls intra-class features together while pushing inter-class features far apart. Moreover, the improved loss is simple yet effective. Our model achieves state-of-the-art performance compared with most domain adaptation methods.
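For context, the quantities the abstract builds on are the kernel two-sample MMD statistic and a contrastive-style pair loss. The following is a minimal PyTorch sketch of the standard, unweighted versions of both terms; the class-wise weighting toward hard-to-align classes and the focal trade-off between positive and negative pairs are the paper's contributions and are not reproduced here. The function names (`rbf_kernel`, `mmd2`, `contrastive_pair_loss`), the single-bandwidth Gaussian kernel, and the margin value are illustrative assumptions, not the authors' implementation.

```python
import torch

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of a and b."""
    d2 = torch.cdist(a, b) ** 2                      # pairwise squared Euclidean distances
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased empirical estimate of squared MMD between two feature batches.

    Plain, class-agnostic statistic; the paper's loss additionally re-weights
    class-wise terms toward classes that are hard to align across domains.
    """
    k_ss = rbf_kernel(source, source, sigma).mean()
    k_tt = rbf_kernel(target, target, sigma).mean()
    k_st = rbf_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

def contrastive_pair_loss(f1, f2, same_class, margin=1.0):
    """Standard contrastive loss on feature pairs (not the focal variant).

    Pulls same-class pairs together and pushes different-class pairs at least
    `margin` apart; the paper further re-balances positive and negative pairs.
    """
    d = torch.norm(f1 - f2, dim=1)                   # Euclidean distance per pair
    pos = same_class.float() * d.pow(2)
    neg = (1.0 - same_class.float()) * torch.clamp(margin - d, min=0.0).pow(2)
    return (pos + neg).mean()

# Illustrative usage: batches of source/target features of shape (batch, feat).
src, tgt = torch.randn(32, 256), torch.randn(32, 256)
alignment_loss = mmd2(src, tgt)
```

This sketch uses the biased MMD estimator with a single kernel bandwidth for brevity; deep adaptation methods in this line of work typically average over multiple bandwidths, and the discriminative term is applied on labeled source pairs (and pseudo-labeled target pairs) alongside the alignment term.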