Multi-focus image fusion method based on adaptive weighting and interactive information modulation

  • Regular Paper
  • Published in: Multimedia Systems

Abstract

Multi-focus image fusion is an important computer vision task that synthesizes multiple images with different focal points into a single image with higher clarity and a wider depth of field. Existing methods, however, face challenges in complex scenes, such as information loss, edge blurring, and insufficient extraction of image detail. To address these challenges, this paper proposes a multi-focus image fusion method based on adaptive weighting and interactive information modulation. Specifically, the method uses a detail and structure information enhancement block that applies the dual-tree complex wavelet transform (DTCWT) to decompose the spectral information of the image. The DTCWT balances directional selectivity, multi-scale analysis, and shift invariance, so spectral information is captured efficiently while noise is suppressed and edge information is preserved. Learnable weights then adaptively reweight the frequency components, allowing the network to adjust the importance of each sub-band and enhance feature information. In addition, an interactive information modulation structure strengthens local feature extraction through multi-level, multi-dimensional feature interaction and information flow. Feature fusion is also optimized to better aggregate information across scales and channels, improving the network's ability to generate accurate and natural fusion results. Evaluated on both visual quality and objective metrics, the method outperforms other state-of-the-art approaches on multi-focus image fusion tasks.
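To make the abstract's two mechanisms concrete, the sketch below pairs a fixed DTCWT decomposition with learnable per-sub-band gains, and adds a gated two-stream exchange as one plausible reading of the interactive information modulation. This is a minimal illustration, not the authors' published code: it assumes the third-party pytorch_wavelets package for the transform, and the module names AdaptiveDTCWTBlock and InteractiveModulation, along with their weighting scheme, are hypothetical.

    # Minimal sketch (not the authors' code). Assumes the third-party
    # `pytorch_wavelets` package; module names and gains are illustrative.
    import torch
    import torch.nn as nn
    from pytorch_wavelets import DTCWTForward, DTCWTInverse

    class AdaptiveDTCWTBlock(nn.Module):
        """DTCWT analysis, learned per-sub-band gains, DTCWT synthesis."""
        def __init__(self, levels: int = 2):
            super().__init__()
            self.dtcwt = DTCWTForward(J=levels)  # 6 oriented complex sub-bands per level
            self.idtcwt = DTCWTInverse()
            self.band_gain = nn.ParameterList(
                [nn.Parameter(torch.ones(6)) for _ in range(levels)])
            self.low_gain = nn.Parameter(torch.ones(1))

        def forward(self, x):
            yl, yh = self.dtcwt(x)               # yl: low-pass; yh[j]: (N, C, 6, Hj, Wj, 2)
            yl = yl * self.low_gain              # reweight the low-frequency band
            yh = [h * g.view(1, 1, 6, 1, 1, 1)   # one learned gain per orientation and level
                  for h, g in zip(yh, self.band_gain)]
            return self.idtcwt((yl, yh))         # reconstruct from the reweighted bands

    class InteractiveModulation(nn.Module):
        """Hypothetical gated exchange: each stream is modulated by a
        sigmoid gate computed from the other stream."""
        def __init__(self, channels: int):
            super().__init__()
            self.gate_a = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
            self.gate_b = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

        def forward(self, fa, fb):
            return fa * self.gate_b(fb), fb * self.gate_a(fa)

    if __name__ == "__main__":
        x = torch.randn(1, 1, 128, 128)               # one grayscale source image
        print(AdaptiveDTCWTBlock(levels=2)(x).shape)  # torch.Size([1, 1, 128, 128])
        fa, fb = torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64)
        ma, mb = InteractiveModulation(16)(fa, fb)    # mutually gated feature streams
        print(ma.shape, mb.shape)

In the paper the modulation is described as multi-level and multi-dimensional; the two-stream gate above only conveys the flavor of that feature interaction, and the enhancement block would sit inside a larger fusion network rather than reconstruct the image directly.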


Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant No. 62003065); Natural Science Foundation of Chongqing (General Program) (Grant No. CSTB2024NSCQ-MSX0527); Innovation and Development Joint Fund of Chongqing Natural Science Foundation (Grant No. 2023NSCQ-LZX0029); Chongqing Postgraduate Joint Training Base Project (Grant No. 2019-45); the Fund Project of Chongqing Normal University (Grant No. 21XLB047).

Author information

Contributions

Jinyuan Jiang: Conceptualization, Methodology, Software, Writing - review & editing. Hao Zhai: Data curation, Resources, Writing - review & editing. You Yang: Writing - review & editing. Xuan Xiao: Writing - review & editing. Xinbo Wang: Software, Writing - review & editing.

Corresponding author

Correspondence to Hao Zhai.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Communicated by Junyu Gao.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Jiang, J., Zhai, H., Yang, Y. et al. Multi-focus image fusion method based on adaptive weighting and interactive information modulation. Multimedia Systems 30, 290 (2024). https://doi.org/10.1007/s00530-024-01506-6

