Abstract
In unsupervised domain adaptation for remote sensing images, the source and target domains differ not only in image features but also in resolution. An end-to-end unsupervised domain adaptation segmentation model for remote sensing images is proposed to reduce the style and resolution differences between the source and target domains. First, a generative adversarial style transfer network with residual connections, a scale consistency module, and a perceptual loss with class-balance weights is proposed; it reduces the style and resolution differences between the two domains while preserving the original structural information during transfer. Second, the visual attention network (VAN), which considers both spatial and channel attention, is used as the feature extraction backbone to improve feature extraction capability. Finally, the style transfer and segmentation tasks are unified in a single end-to-end network. Experimental results show that the proposed model effectively alleviates the performance degradation caused by differences in features and resolution, and its segmentation performance is significantly better than that of advanced domain adaptation segmentation methods.
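To make the end-to-end idea concrete, the following is a minimal PyTorch-style sketch of how a residual style-and-resolution transfer generator could feed a segmentation network within one joint training step. All module names, channel sizes, the WGAN-style adversarial term, and the class-balanced cross-entropy (used here as a stand-in for the paper's perceptual loss with class-balance weights) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch; names, sizes, and loss weighting are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Residual block inside the style-transfer generator."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection helps preserve source structure


class StyleResolutionGenerator(nn.Module):
    """Translates a source image toward the target style and rescales it to the
    assumed target resolution (the scale-consistency idea)."""
    def __init__(self, ch=64, n_blocks=4, scale=2.0):
        super().__init__()
        self.scale = scale  # assumed source-to-target resolution ratio
        self.head = nn.Sequential(nn.Conv2d(3, ch, 7, padding=3), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 7, padding=3)

    def forward(self, x):
        y = self.tail(self.blocks(self.head(x)))
        # resize the translated image so source and target resolutions match
        return F.interpolate(y, scale_factor=self.scale, mode="bilinear",
                             align_corners=False)


def end_to_end_step(gen, seg, disc, src_img, src_lbl, class_w):
    """One simplified joint update: translate the source image, then supervise the
    segmentation network on the translated image with class-balanced weights.
    Assumes `seg` outputs logits at the same spatial size as the rescaled labels."""
    fake_tgt = gen(src_img)                        # style + resolution transfer
    logits = seg(fake_tgt)                         # segmentation of translated image
    lbl = F.interpolate(src_lbl.unsqueeze(1).float(), scale_factor=gen.scale,
                        mode="nearest").squeeze(1).long()  # labels follow the rescaling
    seg_loss = F.cross_entropy(logits, lbl, weight=class_w)
    adv_loss = -disc(fake_tgt).mean()              # generator tries to fool the discriminator
    return seg_loss + 0.01 * adv_loss              # illustrative loss weighting
```

Cycle-consistency and perceptual terms of the full model are omitted; the sketch only shows how the transfer and segmentation objectives can be optimized through a single backward pass.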
Acknowledgements
This work was supported by the Second Tibetan Plateau Scientific Expedition and Research (2019QZKK0405).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.