DOI: 10.1145/3651671.3651683

Automatic Machine Learning based Real Time Multi-Tasking Image Fusion

Published: 07 June 2024

Abstract

Imaging systems operate in diverse ways across the image processing domain, and each system has its own characteristics. Models are developed to fuse images from different sensors and environments and obtain promising outcomes for various computer vision applications. Multiple unified models have been developed for tasks such as multi-focus (MF), multi-exposure (ME), and multi-modal (MM) image fusion. Such models require careful tuning to produce optimal results, and they still do not generalize across diverse applications. To overcome this problem, we propose an automatic machine learning (AML) based multi-tasking image fusion approach. First, we evaluate the source images with AML and route them to the appropriate task-based models. The source images are then fused with the pre-trained and fine-tuned models. The experimental results confirm the effectiveness of our proposed approach compared to generic approaches.
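The routing idea in the abstract (evaluate source images, pick a fusion task, then apply a task-specific model) can be sketched as below. This is a minimal illustrative sketch, not the paper's method: the function names, thresholds, and per-task fusion rules are all hypothetical stand-ins for the AML evaluator and the pre-trained, fine-tuned models the authors describe.

```python
import numpy as np

# Hypothetical task labels for the three fusion tasks named in the abstract.
TASKS = ("multi_focus", "multi_exposure", "multi_modal")

def classify_task(img_a: np.ndarray, img_b: np.ndarray) -> str:
    """Toy stand-in for the AML evaluation step: choose a fusion task
    from simple image statistics (thresholds are illustrative only)."""
    # A large brightness gap between the inputs suggests differing exposures.
    if abs(img_a.mean() - img_b.mean()) > 0.25:
        return "multi_exposure"
    # Low correlation suggests different modalities (e.g. infrared/visible).
    corr = np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]
    if corr < 0.3:
        return "multi_modal"
    return "multi_focus"

def fuse(img_a: np.ndarray, img_b: np.ndarray, task: str) -> np.ndarray:
    """Placeholder per-task fusion rules standing in for the
    pre-trained, fine-tuned models of the proposed approach."""
    if task == "multi_exposure":
        return (img_a + img_b) / 2.0      # simple exposure averaging
    if task == "multi_modal":
        return np.maximum(img_a, img_b)   # max-selection rule
    # Multi-focus: keep the locally sharper pixel via gradient magnitude.
    ga = np.abs(np.gradient(img_a)[0]) + np.abs(np.gradient(img_a)[1])
    gb = np.abs(np.gradient(img_b)[0]) + np.abs(np.gradient(img_b)[1])
    return np.where(ga >= gb, img_a, img_b)

def fuse_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Route a source-image pair to a task, then fuse."""
    return fuse(img_a, img_b, classify_task(img_a, img_b))
```

In the paper the classification step is performed by an AutoML model and the per-task fusion by learned networks; the heuristics above only mimic the control flow of evaluate, route, fuse.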



Published In

ICMLC '24: Proceedings of the 2024 16th International Conference on Machine Learning and Computing
February 2024
757 pages
ISBN:9798400709234
DOI:10.1145/3651671
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. automatic ML
  2. imaging systems
  3. multi-tasking image fusion

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Shenzhen Science and Technology Innovation Committee


