Segmentation-Based Color Channel Registration for Disparity Estimation of Dual Color-Filtered Aperture Camera
Figure 1. Imaging system with a single aperture: (a) paths of light rays in a conventional optical system; (b) paths of light rays in a single off-axis aperture system.
Figure 2. The dual color-filtered aperture (DCA) system: (a) the DCA configuration with an object at the in-focus position; and (b) the DCA configuration with an object at the out-of-focus position.
Figure 3. Left: image acquired using the DCA camera; upper right: the red channel image; lower right: the green channel image.
Figure 4. Block diagram of the proposed system. CMM, color mapping matrix.
Figure 5. Poor disparity estimation results for a DCA image using traditional methods: (a,b) matching results using the sum of absolute differences (SAD) and normalized cross-correlation (NCC), respectively.
Figure 6. Gradient magnitude and local binary pattern (LBP) images of the channels shown in Figure 3: gradient magnitude (top) and LBP (bottom) of the (a) red and (b) green channels.
Figure 7. Comparison of the gradient magnitude and LBP features: (a) two blocks selected in the gradient magnitude image and (b) the LBP feature image.
Figure 8. Process of the distance transform (DT): (a) binary image and (b) the result of the DT.
Figure 9. Disparity map comparison: (a) disparity generated by the cross-channel normalized gradient (CCNG); (b) disparity generated by our method with a constant weight matrix whose pixel values are set to 0.5; and (c) disparity generated by our method with adaptive weights.
Figure 10. Segmentation process of the DCA image and the CMM: (a) reference channel R initially aligned using the estimated disparity; (b) a target channel segmented using superpixels; (c) segmentation results of all three channels; and (d) the segment-wise CMM.
Figure 11. Comparison of color registration results: (a) registration using the initially aligned reference image; (b) refinement with the CMM; and (c) refinement with the enhanced CMM.
Figure 12. Color mapping matrices: (a) the CMM and (b) local points without corresponding pixels in the initially aligned reference channel.
Figure 13. Proposed cross-channel disparity estimation and registration results: (a) input images; (b) disparity maps; (c) registered images; and (d,e) magnified regions of (a,c), respectively.
Figure 14. Disparity extraction using various matching methods on the Middlebury stereo images 'Rocks', 'Cloth', 'Aloe', and 'Wood' [17]: (a) input DCA image; (b–e) disparity maps obtained with SAD, NCC, Holloway's CCNG, and the proposed method; and (f) the ground-truth disparity.
Figure 15. Color registration using the corresponding disparity priors shown in Figure 14 and the proposed cross-channel registration and refinement strategy: (a) input DCA image; (b–e) registration by SAD, NCC, Holloway's CCNG, and the proposed method; and (f) the ground-truth color image.
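The captions above (Figures 6 and 7) refer to the gradient-magnitude and local binary pattern (LBP) features used for cross-channel matching between the red and green channels. The sketch below is a minimal Python illustration of such features for a single channel, assuming an 8-bit grayscale array; the plain 8-neighbor LBP and finite-difference gradients are illustrative choices, not necessarily the exact configuration used by the authors.

```python
import numpy as np

def gradient_magnitude(channel):
    """Per-pixel gradient magnitude of a single color channel."""
    gy, gx = np.gradient(channel.astype(np.float64))
    return np.hypot(gx, gy)

def lbp_8neighbor(channel):
    """Basic 8-neighbor LBP code per pixel (illustrative variant)."""
    img = channel.astype(np.float64)
    padded = np.pad(img, 1, mode='edge')
    # offsets of the 8 neighbors, visited clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        # set the bit where the neighbor is at least as bright as the center
        code |= (neighbor >= img).astype(np.uint8) << bit
    return code
```

Because the two color channels of a DCA image differ in overall intensity, matching LBP codes and gradient magnitudes rather than raw intensities is far less sensitive to the per-channel brightness offset, which is consistent with the poor raw SAD/NCC results shown in Figure 5.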
Abstract
1. Introduction
2. Background
3. Cross-Channel Disparity Estimation and Color Channel Alignment
3.1. Feature Extraction
3.2. Disparity Extraction
3.3. Registration and Refinement
4. Experimental Results
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42.
- Wiley, W.C.; McLaren, I.H. Time-of-flight mass spectrometer with improved resolution. Rev. Sci. Instrum. 1955, 26, 1150–1157.
- Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926.
- Scharstein, D.; Szeliski, R. High-accuracy stereo depth maps using structured light. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 1, p. I.
- Asada, N.; Fujiwara, H.; Matsuyama, T. Edge and depth from focus. Int. J. Comput. Vis. 1998, 26, 153–163.
- Lee, E.; Chae, E.; Cheong, H.; Jeon, S.; Paik, J. Depth-based defocus map estimation using off-axis apertures. Opt. Express 2015, 23, 21958–21971.
- Park, S.; Jang, J.; Lee, S.; Paik, J. Optical range-finding system using a single-image sensor with liquid crystal display aperture. Opt. Lett. 2016, 41, 5154–5157.
- Lee, S.; Kim, N.; Jung, K.; Hayes, M.H.; Paik, J. Single image-based depth estimation using dual off-axis color filtered aperture camera. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 2247–2251.
- Lee, E.; Kang, W.; Kim, S.; Paik, J. Color shift model-based image enhancement for digital multifocusing based on a multiple color-filter aperture camera. IEEE Trans. Consum. Electron. 2010, 56, 317–323.
- Kuglin, C. The phase correlation image alignment method. In Proceedings of the 1975 International Conference on Cybernetics and Society, San Francisco, CA, USA, 23–25 September 1975; pp. 163–165.
- Holloway, J.; Mitra, K.; Koppal, S.J.; Veeraraghavan, A.N. Generalized assorted camera arrays: Robust cross-channel registration and applications. IEEE Trans. Image Process. 2015, 24, 823–835.
- Vanne, J.; Aho, E.; Hamalainen, T.D.; Kuusilinna, K. A high-performance sum of absolute difference implementation for motion estimation. IEEE Trans. Circuits Syst. Video Technol. 2006, 16, 876–883.
- San, T.T.; War, N. Feature based disparity estimation using hill-climbing algorithm. In Proceedings of the 2017 IEEE 15th International Conference on Software Engineering Research, Management and Applications (SERA), London, UK, 7–9 June 2017; pp. 129–133.
- Briechle, K.; Hanebeck, U.D. Template matching using fast normalized cross correlation. Proc. SPIE 2001, 4387, 95–103.
- Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin, Germany, 2004; pp. 469–481.
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
- Hirschmuller, H.; Scharstein, D. Evaluation of cost functions for stereo matching. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
| Error Rate | SAD | NCC | CCNG | Proposed Method |
|---|---|---|---|---|
| Rocks | 0.8167 | 0.5459 | 0.3388 | 0.1832 |
| Cloth | 0.9060 | 0.7374 | 0.4900 | 0.3571 |
| Aloe | 0.6057 | 0.5128 | 0.2124 | 0.2029 |
| Wood | 0.9901 | 0.7062 | 0.2109 | 0.1843 |

| Error Rate | SAD | NCC | CCNG | Proposed Method |
|---|---|---|---|---|
| Rocks | 0.7062 | 0.3878 | 0.0990 | 0.0407 |
| Cloth | 0.8280 | 0.5278 | 0.2506 | 0.0627 |
| Aloe | 0.4615 | 0.3259 | 0.1435 | 0.1111 |
| Wood | 0.9798 | 0.5171 | 0.1142 | 0.0599 |
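The two tables above presumably report the bad-pixel error rate, i.e., the fraction of pixels whose estimated disparity deviates from the Middlebury ground truth by more than a fixed threshold; the thresholds used for each table are not restated here. The sketch below shows how such a rate can be computed, with the threshold value and the occlusion handling being stated assumptions rather than the paper's exact protocol.

```python
import numpy as np

def bad_pixel_rate(est_disp, gt_disp, threshold=1.0, valid_mask=None):
    """Fraction of valid pixels whose disparity error exceeds `threshold`.

    `threshold=1.0` and the occlusion handling below are illustrative
    assumptions; the paper's exact settings may differ.
    """
    est = est_disp.astype(np.float64)
    gt = gt_disp.astype(np.float64)
    if valid_mask is None:
        # skip pixels with unknown/occluded ground-truth disparity
        valid_mask = np.isfinite(gt) & (gt > 0)
    err = np.abs(est - gt)
    return float(np.mean(err[valid_mask] > threshold))
```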
| Image | Metric | SAD | NCC | CCNG | Proposed Method |
|---|---|---|---|---|---|
| Rocks | PSNR (dB) | 26.3083 | 30.8668 | 35.8874 | 35.9138 |
|  | SSIM | 0.8990 | 0.9580 | 0.9784 | 0.9792 |
| Cloth | PSNR (dB) | 25.7239 | 29.8107 | 33.4656 | 35.6846 |
|  | SSIM | 0.8257 | 0.8970 | 0.9496 | 0.9733 |
| Aloe | PSNR (dB) | 31.9656 | 34.3602 | 35.2166 | 36.0752 |
|  | SSIM | 0.9275 | 0.9499 | 0.9654 | 0.9698 |
| Wood | PSNR (dB) | 20.9945 | 25.1666 | 40.9911 | 41.7191 |
|  | SSIM | 0.7902 | 0.8908 | 0.9892 | 0.9904 |
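The PSNR and SSIM values above measure how closely each registered color image matches the ground-truth color image (cf. Hore and Ziou's PSNR/SSIM comparison in the references). A minimal way to reproduce such scores with scikit-image is sketched below; 8-bit images, the `data_range` value, and the default SSIM parameters are assumptions, and `channel_axis` requires scikit-image 0.19 or newer.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def registration_quality(registered, ground_truth):
    """PSNR (dB) and SSIM between a registered image and the ground truth.

    Assumes 8-bit RGB arrays; SSIM uses scikit-image defaults with the
    color channels along the last axis.
    """
    psnr = peak_signal_noise_ratio(ground_truth, registered, data_range=255)
    ssim = structural_similarity(ground_truth, registered,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```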
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Song, S.; Park, S.; Paik, J. Segmentation-Based Color Channel Registration for Disparity Estimation of Dual Color-Filtered Aperture Camera. Sensors 2018, 18, 3174. https://doi.org/10.3390/s18103174