Decomposed Multilateral Filtering for Accelerating Filtering with Multiple Guidance Images
Figure 1. Algorithm overview. n-lateral filtering denotes multilateral filtering that multiplies a spatial filter and n − 1 range filters. Examples of multiple guidance information are flash images, segmentation masks, and depth maps. The key point of the proposed algorithm is that it decomposes multilateral filtering into a set of constant-time filters. For more information on the implementation, our code is available at https://fukushimalab.github.io/dmf/ (accessed on 16 January 2024).
Figure 2. Difference of filtering weights between HDF and MF. The HDF weights are computed using the method in [30]. The red point represents the target pixel for which the kernel weights are computed.
Figure 3. Procedure of splatting in DMF.
Figure 4. Multilateral rolling guidance filtering. Note that the constraining information J2 is the same as the filtering image I in the first iteration of the filtering process.
Figure 5. PSNR accuracy with respect to the number of coefficient images. The parameters are σ_r1 = 32, σ_r2 = 32, and σ_s = 8. SS denotes the spatial subsampling rate (×1/16). Four input images were tested.
Figure 6. PSNR accuracy with respect to the smoothing parameters. (a) σ_r1 = 32 and σ_r2 = 32. (b) σ_s = 8 and σ_r2 = 32. (c) σ_s = 8 and σ_r1 = 32. SS denotes the spatial subsampling rate (×1/16). The input images are the same as in Figure 5.
Figure 7. Processing time. The input image size is 1 megapixel (1024 × 1024). The parameters are T^3rd = 8 and T^2nd = 8. SS denotes the spatial subsampling rate (×1/16).
Figure 8. Flash/no-flash denoising without the false edge in the flash image. The parameters are σ_r1 = 64 (for joint bilateral filtering and DTF), σ_r2 = 16, σ_s = 8, and T^3rd = 16.
Figure 9. Depth map refining. (b) is coded by JPEG (quality factor = 50). The parameters are σ_r1 = 16, σ_r2 = 16, σ_s = 2, T^3rd = 16, and T^2nd = 16. The ratios of bad pixels [25] (error threshold 1.0) for (b–d) are 12.47, 9.24, and 5.38, respectively.
Figure 10. Guided feathering. MRGF's results are shown in Figure 4. The parameters are r = 20, ε = 10^−6, σ_r = 160, and T^3rd = 4. Red boxes indicate magnified areas.
Figure 11. Haze removal. (f–h) show the details of the red boxes in (a–c), respectively. Our result is computed by MRGF (Figure 4). The parameters are r = 60, ε = 10^−6, σ_r = 60, and T^3rd = 16.
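The n-lateral kernel described in the overview above is the product of one spatial Gaussian and n − 1 range Gaussians, each evaluated on its own guidance image (e.g., a flash image and a depth map). As a reference point, the brute-force form that the proposed decomposition accelerates can be sketched as follows; this is our own minimal NumPy illustration, not the paper's implementation, and the function and parameter names are ours:

```python
import numpy as np

def multilateral_filter(I, guides, sigma_s, sigma_rs, radius):
    """Brute-force n-lateral filter: one spatial Gaussian kernel
    multiplied by one Gaussian range kernel per guidance image.
    I: 2D float image; guides: list of 2D guidance images;
    sigma_rs: one range sigma per guide."""
    H, W = I.shape
    out = np.zeros_like(I)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    for y in range(H):
        for x in range(W):
            # clip the kernel window at the image borders
            y0, y1 = max(y - radius, 0), min(y + radius + 1, H)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, W)
            w = spatial[y0 - y + radius:y1 - y + radius,
                        x0 - x + radius:x1 - x + radius].copy()
            # multiply in one range weight per guidance image
            for G, sr in zip(guides, sigma_rs):
                d = G[y0:y1, x0:x1] - G[y, x]
                w *= np.exp(-d**2 / (2.0 * sr**2))
            out[y, x] = np.sum(w * I[y0:y1, x0:x1]) / np.sum(w)
    return out
```

With `guides=[I]` this reduces to an ordinary bilateral filter. The per-pixel cost grows with the kernel radius and with every added guidance image, which is exactly what the constant-time decomposition avoids.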
Abstract
1. Introduction
1. Introducing a constant-time algorithm for multilateral filtering (Section 5);
2. Extending various filters (e.g., guided image filtering [20] and domain transform filtering [33]) to deal with multiple guidance information (Section 6.1);
3. Proposing a multilateral extension to filters that use the filtering output as a guidance image, such as the rolling guidance filter [52] (Section 6.2).
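To make the constant-time idea in the first contribution concrete: one established strategy is range-quantized decomposition in the spirit of PBFICs [43] — splat the input into per-level coefficient images, smooth each with a constant-time spatial Gaussian, and slice the result by interpolating along the guide value. The sketch below is our own simplified single-guide illustration of that strategy, not the proposed multi-guide DMF algorithm:

```python
import numpy as np

def _gauss_blur(img, sigma):
    """Separable spatial Gaussian; stands in for any O(1) Gaussian
    implementation (e.g., recursive or sliding-DCT filters)."""
    r = max(int(3 * sigma), 1)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    blur = lambda m: np.convolve(m, k, mode='same')
    return np.apply_along_axis(blur, 1, np.apply_along_axis(blur, 0, img))

def constant_time_joint_filter(I, G, sigma_s, sigma_r, n_bins=8):
    """Joint bilateral filtering by range decomposition: splat into
    per-level coefficient images, blur each, then slice by linear
    interpolation. Cost is n_bins spatial filters, independent of
    the kernel radius. Assumes the guide G is not constant."""
    levels = np.linspace(G.min(), G.max(), n_bins)
    num = np.empty((n_bins,) + I.shape)
    den = np.empty_like(num)
    for k, L in enumerate(levels):              # splat + blur per guide level
        w = np.exp(-(G - L)**2 / (2.0 * sigma_r**2))
        num[k] = _gauss_blur(w * I, sigma_s)
        den[k] = _gauss_blur(w, sigma_s)
    comp = num / np.maximum(den, 1e-12)         # per-level filtered components
    t = (G - levels[0]) / (levels[1] - levels[0])
    k0 = np.clip(np.floor(t).astype(int), 0, n_bins - 2)
    frac = t - k0
    iy, ix = np.indices(I.shape)                # slice: interpolate between bins
    return (1 - frac) * comp[k0, iy, ix] + frac * comp[k0 + 1, iy, ix]
```

Roughly speaking, a multi-guide decomposition extends this splatting so that coefficient images are indexed by a tuple of quantized levels, one per guidance image, while each spatial filter remains constant-time.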
2. Related Work
3. Preliminaries
4. Relationship between Multilateral Filtering and Higher-Dimensional Filtering
5. Proposed Method: Decomposed Multilateral Filtering
5.1. Definition of Multilateral Filtering
Algorithm 1 Decomposed Multilateral Filtering
5.2. Recursive Representation for Decomposed Multilateral Filtering
5.3. Tonal Range Subsampling
5.4. Spatial Domain Subsampling
6. Extension of Decomposed Multilateral Filtering
6.1. Beyond Gauss Transform
6.2. Multilateral Rolling Guidance Filtering
7. Experimental Results
7.1. Accuracy Evaluation
7.2. Efficiency Evaluation
7.3. Denoising Performance Evaluation
7.4. Channel Performance Evaluation
7.5. Memory Usage Analysis
8. Multilateral Filtering for Computational Photography
8.1. Flash/No-Flash Denoising
8.2. Depth Map Refining
8.3. Feathering
8.4. Haze Removal
9. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
CFJIR | cross-field joint image restoration |
CT | computed tomography |
DCT | discrete cosine transform |
DMF | decomposed multilateral filtering |
DTF | domain transform filtering |
FIR | finite impulse response |
GIF | guided image filtering |
HDF | high-dimensional filtering |
HDKF | high-dimensional kernel filtering |
HSV | hue, saturation and value |
JPEG | joint photographic experts group |
LTI | linear-time invariant |
LTV | linear-time variant |
MF | multilateral filtering |
MPEG | moving picture experts group |
MRI | magnetic resonance imaging |
MRGF | multilateral rolling guidance filtering |
PBFICs | principal bilateral filtered image components |
PSNR | peak signal-to-noise ratio |
RGB | red, green and blue |
RGB-D | red, green, blue and depth |
RTBF | real-time bilateral filtering |
SIMD | single instruction, multiple data |
References
- Jia, W.; Song, Z.; Li, Z. Multi-scale Fusion of Stretched Infrared and Visible Images. Sensors 2022, 22, 6660. [Google Scholar] [CrossRef]
- Li, H.; Xiao, Y.; Cheng, C.; Song, X. SFPFusion: An Improved Vision Transformer Combining Super Feature Attention and Wavelet-Guided Pooling for Infrared and Visible Images Fusion. Sensors 2023, 23, 7870. [Google Scholar] [CrossRef]
- Chen, H.; Deng, L.; Zhu, L.; Dong, M. ECFuse: Edge-Consistent and Correlation-Driven Fusion Framework for Infrared and Visible Image Fusion. Sensors 2023, 23, 8071. [Google Scholar] [CrossRef] [PubMed]
- Monno, Y.; Kiku, D.; Tanaka, M.; Okutomi, M. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking. Sensors 2017, 17, 2787. [Google Scholar] [CrossRef] [PubMed]
- Morillas, S.; Gregori, V.; Sapena, A. Adaptive Marginal Median Filter for Colour Images. Sensors 2011, 11, 3205–3213. [Google Scholar] [CrossRef]
- Morillas, S.; Gregori, V. Robustifying Vector Median Filter. Sensors 2011, 11, 8115–8126. [Google Scholar] [CrossRef]
- Le, A.V.; Jung, S.W.; Won, C.S. Directional Joint Bilateral Filter for Depth Images. Sensors 2014, 14, 11362–11378. [Google Scholar] [CrossRef] [PubMed]
- Chen, L.; Li, Q. An Adaptive Fusion Algorithm for Depth Completion. Sensors 2022, 22, 4603. [Google Scholar] [CrossRef]
- Takeda, J.; Fukushima, N. Poisson disk sampling with randomized satellite points for projected texture stereo. Opt. Contin. 2022, 1, 974–988. [Google Scholar] [CrossRef]
- Cheong, H.; Chae, E.; Lee, E.; Jo, G.; Paik, J. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor. Sensors 2015, 15, 880–898. [Google Scholar] [CrossRef]
- Yang, Y.; Tong, S.; Huang, S.; Lin, P. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks. Sensors 2014, 14, 22408–22430. [Google Scholar] [CrossRef]
- Li, Q.; Yang, X.; Wu, W.; Liu, K.; Jeon, G. Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter. Sensors 2018, 18, 2143. [Google Scholar] [CrossRef]
- Oishi, S.; Fukushima, N. Retinex-Based Relighting for Night Photography. Appl. Sci. 2023, 13, 1719. [Google Scholar] [CrossRef]
- Huang, D.; Tang, Y.; Wang, Q. An Image Fusion Method of SAR and Multispectral Images Based on Non-Subsampled Shearlet Transform and Activity Measure. Sensors 2022, 22, 7055. [Google Scholar] [CrossRef]
- Xiao, Y.; Guo, Z.; Veelaert, P.; Philips, W. General Image Fusion for an Arbitrary Number of Inputs Using Convolutional Neural Networks. Sensors 2022, 22, 2457. [Google Scholar] [CrossRef] [PubMed]
- Eisemann, E.; Durand, F. Flash Photography Enhancement via Intrinsic Relighting. ACM Trans. Graph. 2004, 23, 673–678. [Google Scholar] [CrossRef]
- Petschnigg, G.; Agrawala, M.; Hoppe, H.; Szeliski, R.; Cohen, M.; Toyama, K. Digital Photography with Flash and No-flash Image Pairs. ACM Trans. Graph. 2004, 23, 664–672. [Google Scholar] [CrossRef]
- Kopf, J.; Cohen, M.; Lischinski, D.; Uyttendaele, M. Joint Bilateral Upsampling. ACM Trans. Graph. 2007, 26, 6497. [Google Scholar] [CrossRef]
- Wada, N.; Kazui, M.; Haseyama, M. Extended Joint Bilateral Filter for the Reduction of Color Bleeding in Compressed Image and Video. ITE Trans. Media Technol. Appl. 2015, 3, 95–106. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Guided Image Filtering. In Proceedings of the European Conference on Computer Vision (ECCV), Crete, Greece, 5–11 September 2010; pp. 1–14. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single Image Haze Removal using Dark Channel Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 2341–2353. [Google Scholar]
- Shi, Z.; Li, Y.; Zhang, C.; Zhao, M.; Feng, Y.; Jiang, B. Weighted median guided filtering method for single image rain removal. EURASIP J. Image Video Process. 2018, 2018, 35. [Google Scholar] [CrossRef]
- Eichhardt, I.; Chetverikov, D.; Janko, Z. Image-guided ToF depth upsampling: A survey. Mach. Vis. Appl. 2017, 28, 267–282. [Google Scholar] [CrossRef]
- Matsuo, T.; Fukushima, N.; Ishibashi, Y. Weighted Joint Bilateral Filter with Slope Depth Compensation Filter for Depth Map Refinement. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), Barcelona, Spain, 21–24 February 2013; pp. 300–309. [Google Scholar]
- Scharstein, D.; Szeliski, R. A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
- Hosni, A.; Rhemann, C.; Bleyer, M.; Rother, C.; Gelautz, M. Fast Cost-Volume Filtering for Visual Correspondence and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 504–511. [Google Scholar] [CrossRef] [PubMed]
- Baker, S.; Scharstein, D.; Lewis, J.P.; Roth, S.; Black, M.J.; Szeliski, R. A Database and Evaluation Methodology for Optical Flow. Int. J. Comput. Vis. 2011, 92, 1–31. [Google Scholar] [CrossRef]
- Tomasi, C.; Manduchi, R. Bilateral Filtering for Gray and Color Images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Bombay, India, 4–7 January 1998; pp. 839–846. [Google Scholar] [CrossRef]
- Lai, P.; Tian, D.; Lopez, P. Depth Map Processing with Iterative Joint Multilateral Filtering. In Proceedings of the Picture Coding Symposium (PCS), Nagoya, Japan, 8–10 December 2010; pp. 9–12. [Google Scholar] [CrossRef]
- Gastal, E.S.L.; Oliveira, M.M. Adaptive Manifolds for Real-Time High-Dimensional Filtering. ACM Trans. Graph. 2012, 31, 2185529. [Google Scholar] [CrossRef]
- Butt, I.T.; Rajpoot, N.M. Multilateral Filtering: A Novel Framework for Generic Similarity-Based Image Denoising. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 2981–2984. [Google Scholar] [CrossRef]
- Choudhury, P.; Tumblin, J. The Trilateral Filter for High Contrast Images and Meshes. In Proceedings of the Eurographics Workshop on Rendering, Leuven, Belgium, 25–27 June 2003; pp. 186–196. [Google Scholar]
- Gastal, E.S.L.; Oliveira, M.M. Domain Transform for Edge-Aware Image and Video Processing. ACM Trans. Graph. 2011, 30, 1–12. [Google Scholar] [CrossRef]
- Sugimoto, K.; Fukushima, N.; Kamata, S. Fast bilateral filter for multichannel images via soft-assignment coding. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Republic of Korea, 13–15 December 2016; pp. 1–4. [Google Scholar] [CrossRef]
- Nair, P.; Chaudhury, K.N. Fast High-Dimensional Kernel Filtering. IEEE Signal Process. Lett. 2019, 26, 377–381. [Google Scholar] [CrossRef]
- Miyamura, T.; Fukushima, N.; Waqas, M.; Sugimoto, K.; Kamata, S. Image Tiling for Clustering to Improve Stability of Constant-time Color Bilateral Filtering. In Proceedings of the International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 1038–1042. [Google Scholar] [CrossRef]
- Oishi, S.; Fukushima, N. Tiling and PCA strategy for Clustering-based High-Dimensional Gaussian Filtering. SN Comput. Sci. 2023, 5, 40. [Google Scholar] [CrossRef]
- Lin, G.S.; Chen, C.Y.; Kuo, C.T.; Lie, W.N. A Computing Framework of Adaptive Support-Window Multi-Lateral Filter for Image and Depth Processing. IEEE Trans. Broadcast. 2014, 60, 452–463. [Google Scholar] [CrossRef]
- Yang, Y.; Liu, Q.; He, X.; Liu, Z. Cross-View Multi-Lateral Filter for Compressed Multi-View Depth Video. IEEE Trans. Image Process. 2019, 28, 302–315. [Google Scholar] [CrossRef]
- Durand, F.; Dorsey, J. Fast Bilateral Filtering for the Display of High-Dynamic-Range Images. ACM Trans. Graph. 2002, 21, 257–266. [Google Scholar] [CrossRef]
- Paris, S.; Durand, F. A Fast Approximation of the Bilateral Filter using A Signal Processing Approach. Int. J. Comput. Vis. 2009, 81, 24–52. [Google Scholar] [CrossRef]
- Yang, Q.; Ahuja, N.; Tan, K.H. Constant Time Median and Bilateral Filtering. Int. J. Comput. Vis. 2014, 112, 307–318. [Google Scholar] [CrossRef]
- Yang, Q.; Tan, K.H.; Ahuja, N. Real-Time O(1) Bilateral Filtering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 557–564. [Google Scholar] [CrossRef]
- Ghosh, S.; Chaudhury, K.N. On Fast Bilateral Filtering Using Fourier Kernels. IEEE Signal Process. Lett. 2016, 23, 570–573. [Google Scholar] [CrossRef]
- Sugimoto, K.; Fukushima, N.; Kamata, S. 200 FPS Constant-time Bilateral Filter Using SVD and Tiling Strategy. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 190–194. [Google Scholar] [CrossRef]
- Sumiya, Y.; Fukushima, N.; Sugimoto, K.; Kamata, S. Extending Compressive Bilateral Filtering for Arbitrary Range Kernel. In Proceedings of the International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 1018–1022. [Google Scholar] [CrossRef]
- Adams, A.; Gelfand, N.; Dolson, J.; Levoy, M. Gaussian KD-Trees for Fast High-Dimensional Filtering. ACM Trans. Graph. 2009, 28, 1531327. [Google Scholar] [CrossRef]
- Adams, A.; Baek, J.; Davis, M.A. Fast High-Dimensional Filtering Using the Permutohedral Lattice. Comput. Graph. Forum 2010, 29, 753–762. [Google Scholar] [CrossRef]
- Maeda, Y.; Fukushima, N.; Matsuo, H. Taxonomy of Vectorization Patterns of Programming for FIR Image Filters Using Kernel Subsampling and New One. Appl. Sci. 2018, 8, 1985. [Google Scholar] [CrossRef]
- Maeda, Y.; Fukushima, N.; Matsuo, H. Effective Implementation of Edge-Preserving Filtering on CPU Microarchitectures. Appl. Sci. 2018, 8, 1235. [Google Scholar] [CrossRef]
- Naganawa, Y.; Kamei, H.; Kanetaka, Y.; Nogami, H.; Maeda, Y.; Fukushima, N. SIMD-Constrained Lookup Table for Accelerating Variable-Weighted Convolution on x86/64 CPUs. IEEE Access 2024. [Google Scholar] [CrossRef]
- Zhang, Q.; Shen, X.; Xu, L.; Jia, J. Rolling Guidance Filter. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014. [Google Scholar] [CrossRef]
- Pajares, G.; De La Cruz, J.M. A wavelet-based image fusion tutorial. Pattern Recognit. 2004, 37, 1855–1872. [Google Scholar] [CrossRef]
- Wang, Z.; Ziou, D.; Armenakis, C.; Li, D.; Li, Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402. [Google Scholar] [CrossRef]
- James, A.P.; Dasarathy, B.V. Medical image fusion: A survey of the state of the art. Inf. Fusion 2014, 19, 4–19. [Google Scholar] [CrossRef]
- Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112. [Google Scholar] [CrossRef]
- Zhang, H.; Xu, H.; Tian, X.; Jiang, J.; Ma, J. Image fusion meets deep learning: A survey and perspective. Inf. Fusion 2021, 76, 323–336. [Google Scholar] [CrossRef]
- Kaur, H.; Koundal, D.; Kadyan, V. Image fusion techniques: A survey. Arch. Comput. Methods Eng. 2021, 28, 4425–4447. [Google Scholar] [CrossRef] [PubMed]
- Hermessi, H.; Mourali, O.; Zagrouba, E. Multimodal medical image fusion review: Theoretical background and recent advances. Signal Process. 2021, 183, 108036. [Google Scholar] [CrossRef]
- Azam, M.A.; Khan, K.B.; Salahuddin, S.; Rehman, E.; Khan, S.A.; Khan, M.A.; Kadry, S.; Gandomi, A.H. A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics. Comput. Biol. Med. 2022, 144, 105253. [Google Scholar] [CrossRef]
- Kalamkar, S. Multimodal image fusion: A systematic review. Decis. Anal. J. 2023, 9, 100327. [Google Scholar] [CrossRef]
- Singh, S.; Singh, H.; Bueno, G.; Deniz, O.; Singh, S.; Monga, H.; Hrisheekesha, P.; Pedraza, A. A review of image fusion: Methods, applications and performance metrics. Digit. Signal Process. 2023, 137, 104020. [Google Scholar] [CrossRef]
- Crow, F.C. Summed-Area Tables for Texture Mapping. In Proceedings of the ACM SIGGRAPH, Minneapolis, MN, USA, 23–27 July 1984; pp. 207–212. [Google Scholar] [CrossRef]
- Fukushima, N.; Maeda, Y.; Kawasaki, Y.; Nakamura, M.; Tsumura, T.; Sugimoto, K.; Kamata, S. Efficient Computational Scheduling of Box and Gaussian FIR Filtering for CPU Microarchitecture. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018. [Google Scholar]
- Deriche, R. Recursively Implementating the Gaussian and its Derivatives. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Singapore, 7–11 September 1992; pp. 263–267. [Google Scholar]
- Sugimoto, K.; Kamata, S. Fast Gaussian Filter with Second-Order Shift Property of DCT-5. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, 15–18 September 2013; pp. 514–518. [Google Scholar] [CrossRef]
- Takagi, H.; Fukushima, N. An Efficient Description with Halide for IIR Gaussian Filter. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Auckland, New Zealand, 7–10 December 2020. [Google Scholar]
- Otsuka, T.; Fukushima, N.; Maeda, Y.; Sugimoto, K.; Kamata, S. Optimization of Sliding-DCT based Gaussian Filtering for Hardware Accelerator. In Proceedings of the International Conference on Visual Communications and Image Processing (VCIP), Macau, China, 1–4 December 2020; pp. 423–426. [Google Scholar] [CrossRef]
- Sayed, A.H. Fundamentals of Adaptive Filtering; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
- Yoshizawa, S.; Belyaev, A.; Yokota, H. Fast Gauss Bilateral Filtering. Comput. Graph. Forum 2010, 29, 60–74. [Google Scholar] [CrossRef]
- Fujita, S.; Fukushima, N. Extending Guided Image Filtering for High-Dimensional Signals. In Communications in Computer and Information Science Book Series, Revised Selected Papers in 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016), Rome, Italy, 27–29 February 2016; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; Volume 693, pp. 439–453. [Google Scholar] [CrossRef]
- Fattal, R.; Carroll, R.; Agrawala, M. Edge-Based Image Coarsening. ACM Trans. Graph. 2009, 29, 1640449. [Google Scholar] [CrossRef]
- Fattal, R. Edge-Avoiding Wavelets and Their Applications. ACM Trans. Graph. 2009, 28, 1531328. [Google Scholar] [CrossRef]
- Fukushima, N.; Kawasaki, Y.; Maeda, Y. Accelerating Redundant DCT Filtering for Deblurring and Denoising. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 4175–4179. [Google Scholar] [CrossRef]
- Gavaskar, R.G.; Chaudhury, K.N. Fast Adaptive Bilateral Filtering. IEEE Trans. Image Process. 2019, 28, 779–790. [Google Scholar] [CrossRef] [PubMed]
- Aubry, M.; Paris, S.; Hasinoff, S.W.; Kautz, J.; Durand, F. Fast Local Laplacian Filters: Theory and Applications. ACM Trans. Graph. 2014, 33, 2629645. [Google Scholar] [CrossRef]
- Sumiya, Y.; Otsuka, T.; Maeda, Y.; Fukushima, N. Gaussian Fourier Pyramid for Local Laplacian Filter. IEEE Signal Process. Lett. 2022, 29, 11–15. [Google Scholar] [CrossRef]
- Hayashi, K.; Maeda, Y.; Fukushima, N. Local Contrast Enhancement with Multiscale Filtering. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Taipei, Taiwan, 31 October–3 November 2023. [Google Scholar]
- Mishiba, K. Fast Guided Median Filter. IEEE Trans. Image Process. 2023, 32, 737–749. [Google Scholar] [CrossRef] [PubMed]
- Tsubokawa, T.; Tajima, H.; Maeda, Y.; Fukushima, N. Local look-up table upsampling for accelerating image processing. Multimed. Tools Appl. 2023. [Google Scholar] [CrossRef]
- Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation. ACM Trans. Graph. 2008, 27, 1360666. [Google Scholar] [CrossRef]
- Min, D.; Choi, S.; Lu, J.; Ham, B.; Sohn, K.; Do, M. Fast Global Image Smoothing Based on Weighted Least Squares. IEEE Trans. Image Process. 2014, 23, 5638–5653. [Google Scholar] [CrossRef]
- Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image Smoothing via L0 Gradient Minimization. ACM Trans. Graph. 2011, 30, 2024208. [Google Scholar] [CrossRef]
- Kanetaka, Y.; Takagi, H.; Maeda, Y.; Fukushima, N. SlidingConv: Domain-Specific Description of Sliding Discrete Cosine Transform Convolution for Halide. IEEE Access 2024, 12, 7563–7583. [Google Scholar] [CrossRef]
- Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar] [CrossRef]
- Honda, S.; Maeda, Y.; Fukushima, N. Dataset of Subjective Assessment for Visually Near-Lossless Image Coding based on Just Noticeable Difference. In Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), Ghent, Belgium, 20–22 June 2023; pp. 236–239. [Google Scholar] [CrossRef]
- Yan, Q.; Shen, X.; Xu, L.; Zhuo, S.; Zhang, X.; Shen, L.; Jia, J. Cross-Field Joint Image Restoration via Scale Map. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, NSW, Australia, 1–8 December 2013. [Google Scholar] [CrossRef]
- Ishikawa, K.; Oishi, S.; Fukushima, N. Principal Component Analysis for Accelerating Color Bilateral Filtering. In Proceedings of the International Workshop on Advanced Imaging Technology (IWAIT), Jeju, Republic of Korea, 9–11 January 2023; Volume 12592, p. 125921F. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Otsuka, T.; Fukushima, N. Vectorized Implementation of K-means. In Proceedings of the International Workshop on Advanced Image Technology (IWAIT), Kagoshima, Japan, 5–6 January 2021. [Google Scholar] [CrossRef]
Image | DCT | DMF-DCT | DTF | DMF-DTF | GIF | DMF-GIF | CFJIR * | HDKF * |
---|---|---|---|---|---|---|---|---|
0 | 36.73 | 36.73 | 34.41 | 35.11 | 34.35 | 35.26 | 35.22 | 35.55 |
1 | 33.23 | 33.41 | 31.37 | 31.97 | 32.34 | 33.14 | 32.13 | 33.53 |
2 | 36.16 | 36.13 | 34.50 | 34.72 | 34.45 | 35.10 | 34.54 | 35.67 |
3 | 39.56 | 40.54 | 37.53 | 37.88 | 37.30 | 39.38 | 40.52 | 40.49 |
4 | 36.31 | 36.47 | 34.11 | 34.73 | 33.90 | 35.97 | 35.46 | 36.77 |
5 | 35.51 | 35.46 | 33.76 | 34.26 | 33.82 | 34.68 | 34.38 | 34.57 |
6 | 34.06 | 33.99 | 32.21 | 32.85 | 32.49 | 33.25 | 32.93 | 33.05 |
7 | 36.56 | 36.69 | 34.69 | 35.25 | 34.60 | 37.28 | 37.52 | 37.70 |
8 | 34.50 | 34.38 | 33.04 | 33.67 | 33.56 | 32.99 | 32.80 | 33.42 |
9 | 36.58 | 36.82 | 33.28 | 34.10 | 33.77 | 37.03 | 37.49 | 37.62 |
Average | 35.92 | 36.06 | 33.89 | 34.45 | 34.06 | 35.41 | 35.30 | 35.84 |
Image | DCT | DMF-DCT | DTF | DMF-DTF | GIF | DMF-GIF | CFJIR * | HDKF * |
---|---|---|---|---|---|---|---|---|
0 | 32.81 | 32.89 | 29.90 | 31.42 | 29.98 | 31.91 | 32.26 | 32.09 |
1 | 29.50 | 29.92 | 27.76 | 28.82 | 28.28 | 29.75 | 29.50 | 30.31 |
2 | 31.64 | 32.28 | 29.47 | 30.34 | 29.66 | 31.44 | 30.86 | 31.60 |
3 | 36.24 | 37.34 | 34.18 | 34.58 | 33.45 | 36.04 | 38.59 | 36.54 |
4 | 32.45 | 32.85 | 29.18 | 31.05 | 29.09 | 32.31 | 32.42 | 33.13 |
5 | 31.67 | 31.85 | 29.56 | 30.49 | 29.63 | 31.02 | 31.05 | 31.07 |
6 | 30.03 | 30.22 | 27.50 | 29.07 | 27.77 | 29.90 | 29.84 | 29.77 |
7 | 32.92 | 33.27 | 31.30 | 32.24 | 31.03 | 34.03 | 34.98 | 34.31 |
8 | 30.77 | 30.99 | 28.56 | 29.97 | 29.07 | 30.08 | 29.99 | 30.32 |
9 | 32.61 | 33.21 | 29.15 | 30.74 | 29.32 | 33.51 | 34.47 | 34.02 |
Average | 32.06 | 32.48 | 29.66 | 30.87 | 29.73 | 32.00 | 32.40 | 32.32 |
Image | DCT | DMF-DCT | DTF | DMF-DTF | GIF | DMF-GIF | CFJIR * | HDKF * |
---|---|---|---|---|---|---|---|---|
0 | 30.35 | 30.55 | 27.24 | 29.50 | 27.61 | 30.21 | 30.42 | 30.16 |
1 | 27.64 | 28.28 | 26.03 | 27.44 | 26.42 | 28.27 | 27.86 | 28.15 |
2 | 28.84 | 29.83 | 26.08 | 27.63 | 26.88 | 29.12 | 28.44 | 28.75 |
3 | 33.93 | 35.01 | 32.54 | 33.00 | 30.82 | 34.99 | 36.41 | 34.29 |
4 | 30.18 | 30.88 | 26.28 | 29.36 | 26.58 | 30.63 | 30.36 | 30.48 |
5 | 29.32 | 29.71 | 26.79 | 28.07 | 27.27 | 28.48 | 29.05 | 29.09 |
6 | 27.72 | 28.18 | 24.65 | 27.09 | 25.39 | 27.88 | 28.00 | 27.84 |
7 | 30.64 | 31.19 | 28.94 | 30.51 | 28.82 | 32.60 | 32.85 | 31.71 |
8 | 28.68 | 29.16 | 25.98 | 28.12 | 26.71 | 28.31 | 28.45 | 28.73 |
9 | 30.14 | 31.22 | 26.94 | 29.22 | 27.18 | 32.17 | 31.80 | 30.82 |
Average | 29.74 | 30.40 | 27.15 | 28.99 | 27.37 | 30.27 | 30.36 | 30.00 |
Image | Noise | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|---|
0 | 28.82 | 30.92 | 30.96 | 31.84 | 33.33 | 33.33 | 33.32 |
1 | 28.47 | 33.95 | 34.55 | 36.34 | 36.35 | 36.34 | 36.33 |
2 | 28.16 | 39.91 | 40.91 | 41.32 | 41.30 | 41.29 | 41.28 |
3 | 28.27 | 37.66 | 39.56 | 39.80 | 39.80 | 39.79 | 39.79 |
4 | 28.20 | 36.34 | 38.01 | 38.70 | 38.80 | 38.76 | 38.74 |
5 | 28.16 | 38.35 | 40.43 | 40.73 | 40.70 | 40.67 | 40.66 |
6 | 28.18 | 40.85 | 42.20 | 42.31 | 42.29 | 42.27 | 42.27 |
7 | 28.26 | 37.44 | 39.48 | 39.84 | 39.85 | 39.85 | 39.84 |
8 | 28.28 | 35.19 | 37.38 | 37.52 | 37.60 | 37.59 | 37.59 |
9 | 28.15 | 39.35 | 41.42 | 41.40 | 41.38 | 41.36 | 41.35 |
Average | 28.30 | 37.00 | 38.49 | 38.98 | 39.14 | 39.12 | 39.12 |
Image | Noise | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|---|
0 | 0.932 | 0.965 | 0.965 | 0.967 | 0.975 | 0.974 | 0.974 |
1 | 0.863 | 0.970 | 0.974 | 0.984 | 0.984 | 0.983 | 0.983 |
2 | 0.723 | 0.983 | 0.984 | 0.987 | 0.987 | 0.987 | 0.987 |
3 | 0.751 | 0.973 | 0.983 | 0.983 | 0.983 | 0.983 | 0.983 |
4 | 0.769 | 0.977 | 0.985 | 0.985 | 0.985 | 0.984 | 0.984 |
5 | 0.693 | 0.975 | 0.985 | 0.985 | 0.985 | 0.984 | 0.984 |
6 | 0.650 | 0.974 | 0.984 | 0.983 | 0.983 | 0.982 | 0.982 |
7 | 0.762 | 0.976 | 0.984 | 0.985 | 0.985 | 0.985 | 0.985 |
8 | 0.796 | 0.961 | 0.981 | 0.981 | 0.980 | 0.980 | 0.980 |
9 | 0.676 | 0.973 | 0.984 | 0.984 | 0.984 | 0.983 | 0.983 |
Average | 0.762 | 0.973 | 0.981 | 0.982 | 0.983 | 0.983 | 0.983 |
Channels | Time (msec) |
---|---|
1 | 29.7 |
2 | 79.1 |
3 | 455.9 |
4 | 3380.4 |
5 | 27,224.7 |
6 | 224,146.0 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nogami, H.; Kanetaka, Y.; Naganawa, Y.; Maeda, Y.; Fukushima, N. Decomposed Multilateral Filtering for Accelerating Filtering with Multiple Guidance Images. Sensors 2024, 24, 633. https://doi.org/10.3390/s24020633