Motion Blur Kernel Rendering Using an Inertial Sensor: Interpreting the Mechanism of a Thermal Detector
Figure 1. The mechanisms of the two different sensors and the causes of motion blur. (a) The cause of motion blur in the photon detector is the integration time, (b) the cause of motion blur in the thermal detector is the response time of the temperature change.
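The contrast between the two mechanisms in Figure 1 can be made concrete with a small simulation. The sketch below uses illustrative assumptions throughout (time step, integration time, edge speed), except the 8 ms thermal time constant, which appears in the camera specification table later in this paper: a moving step edge is blurred once by a uniform average over an integration window, as in a photon detector, and once by a first-order temperature lag, as in a thermal detector.

```python
import numpy as np

dt = 1e-4          # simulation time step (s), assumed
T_int = 5e-3       # photon-detector integration time (s), assumed
tau = 8e-3         # thermal time constant (s), from the camera table
speed = 2000.0     # edge speed in pixels/s, assumed
x = np.arange(256, dtype=float)

def sharp(t):
    """Ideal sharp scene at time t: a hot step edge sweeping right."""
    return (x < 60.0 + speed * t).astype(float)

n = int(T_int / dt)
# Photon detector: equal-weight average over the integration window.
photon = np.mean([sharp(k * dt) for k in range(n)], axis=0)

# Thermal detector: each pixel's temperature follows the scene with a
# first-order lag, y' = (x - y) / tau, discretized with forward Euler.
thermal = np.zeros_like(x)
for k in range(n):
    thermal += (dt / tau) * (sharp(k * dt) - thermal)

# 'photon' is a symmetric ramp (box blur over the swept region);
# 'thermal' keeps an asymmetric exponential tail behind the edge.
```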
Figure 2. Two kinds of cameras simultaneously take an image of the aircraft’s twin-jet engine flames. Both images have motion blur, but with different motion blur patterns. (a) LWIR camera using a thermal detector, (b) MWIR camera using a photon detector.
Figure 3. (a) Microbolometer structure and schematic model, (b) microbolometer scanning electron microscope (SEM) image [53].
Figure 4. Examples of motionless and moving pattern images. (a) 4-bar pattern, (b) Point source, (c) 4-bar pattern at 40°/s, (d) Point source at 40°/s.
Figure 5. Examples of stepping effects. (a) Shifting one pixel between adjacent frames, (b) Shifting two pixels between adjacent frames, (c) Shifting four pixels between adjacent frames, (d) Shifting eight pixels between adjacent frames.
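The stepping effect of Figure 5 is easy to reproduce: when a synthetic blurry frame is assembled from a finite number of sharp frames, any shift larger than one pixel between adjacent frames leaves discrete steps rather than a continuous streak. A minimal sketch, with the test image and frame count assumed:

```python
import numpy as np

def synthesize(sharp, k_frames, shift_px):
    """Average k_frames copies of `sharp`, each shifted shift_px pixels
    further along the horizontal axis (simple discrete blur synthesis)."""
    acc = np.zeros_like(sharp, dtype=float)
    for i in range(k_frames):
        acc += np.roll(sharp, i * shift_px, axis=1)
    return acc / k_frames

bar = np.zeros((64, 256)); bar[:, 120:124] = 1.0  # thin vertical bar, assumed
smooth  = synthesize(bar, 8, 1)  # 1-px shifts: quasi-continuous streak
stepped = synthesize(bar, 8, 4)  # 4-px shifts: visible stepping artifacts
```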
Figure 6. Comparison of real blurry images and synthetic blurry images. (a) 4-bar pattern, (b) Point source.
Figure 7. Illustration of camera rotation. (a) 3-axis rotation model, (b) Rotation motion measured by the gyroscope sensor, (c) Blur kernel rendering result using the thermal detector model, (d) Blur kernel rendering result using the photon detector model.
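A hedged sketch of how kernels like those in Figure 7c,d can be rasterized from gyroscope samples. The assumptions here are a pinhole projection, pure rotation about the camera center, small angles, and an exponential readout weighting for the thermal detector; the paper's exact formulation may differ.

```python
import numpy as np

def render_kernel(omega, dt, f_px, tau=None, size=31):
    """omega: (N, 2) yaw/pitch rates in rad/s sampled every dt seconds;
    f_px: focal length in pixels. Returns a size x size blur kernel."""
    angles = np.cumsum(omega * dt, axis=0)   # integrate angular velocity
    traj = f_px * angles                     # small-angle pixel displacements
    traj -= traj.mean(axis=0)                # center the trajectory

    n = len(traj)
    if tau is None:                          # photon detector: uniform weights
        w = np.ones(n)
    else:                                    # thermal detector: recent samples
        t = np.arange(n) * dt                # dominate via exponential decay
        w = np.exp(-(t[-1] - t) / tau)
    w /= w.sum()

    kernel = np.zeros((size, size))
    c = size // 2
    for (dx, dy), wi in zip(traj, w):        # splat each sample into the grid
        ix, iy = int(round(c + dx)), int(round(c + dy))
        if 0 <= ix < size and 0 <= iy < size:
            kernel[iy, ix] += wi
    return kernel / max(kernel.sum(), 1e-12)
```

For example, with the 1000 Hz gyroscope of the experimental setup, one 20 ms camera frame contributes 20 samples to `omega`.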
Figure 8. The calibration pattern for a thermal signal. (a) An ordinary checkerboard pattern (captured in the visible band and the infrared band), (b) The checkerboard pattern improved by attaching aluminum material (captured in the visible band and the infrared band).
Figure 9. (a) Blur kernel before refinement, (b) Blur kernel after refinement (given λ = 10 μm, f/# = 1.0, β = 0.6).
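One plausible reading of the refinement in Figure 9, sketched here with heavy hedging: smooth the rasterized trajectory kernel with a Gaussian approximation of the diffraction-limited optics PSF. The Airy pattern for wavelength λ and f-number is commonly fit by a Gaussian with σ ≈ 0.45·λ·(f/#); dividing by the pixel pitch converts it to pixels. The pixel pitch value and the use of β as a strength factor are assumptions, not the authors' definitions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refine(kernel, lam_um=10.0, f_num=1.0, pitch_um=17.0, beta=0.6):
    """Hypothetical refinement: Gaussian-PSF smoothing of the blur kernel.
    pitch_um (pixel pitch) and beta's role as a scale are assumptions."""
    sigma_px = beta * 0.45 * lam_um * f_num / pitch_um
    refined = gaussian_filter(kernel, sigma=sigma_px)
    return refined / refined.sum()
```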
Figure 10. Overview of the STI and SBTI datasets.
Figure 11. Qualitative comparison of deblurring results on the SBTI dataset (subset [1-4], 54th image). (a) Synthetic blurry thermal image, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours, (g) GT.
Figure 12. Qualitative comparison of deblurring results on the SBTI dataset (subset [2-5], 49th image). (a) Synthetic blurry thermal image, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours, (g) GT.
Figure 13. Qualitative comparison of deblurring results on the SBTI dataset (subset [3-4], 51st image). (a) Synthetic blurry thermal image, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours, (g) GT.
Figure 14. Qualitative comparison of deblurring results on the SBTI dataset (subset [4-4], 91st image). (a) Synthetic blurry thermal image, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours, (g) GT.
Figure 15. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 31°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 16. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 39°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 17. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 43°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 18. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 44°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 19. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 84°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 20. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 85°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 21. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 100°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Figure 22. Qualitative comparison of motion deblurring results on a real blurry thermal image. (a) Real blurry thermal image acquired with a camera rotating at 40°/s, (b) SRN [33], (c) SIUN [36], (d) DeblurGAN.v2 [35], (e) CDVD [34], (f) Ours.
Abstract
1. Introduction
- We propose a novel synthesis method for the blurring effect in thermal images by interpreting the operating properties of a microbolometer (a minimal sketch of this model follows this list).
- We propose a blur kernel rendering method for thermal images by combining gyroscope sensor information with the motion blur model.
- We acquire and publicly release both real thermal images and synthetic blurry thermal images for the construction of a dataset for thermal image deblurring.
- Our method quantitatively and qualitatively outperforms the latest state-of-the-art deblurring methods.
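As a pointer from the first contribution, here is a minimal sketch of the synthesis idea, reflecting our reading of the thermal detector model; the function and parameter names are assumptions, not the authors' code. A blurry thermal frame is produced by running each pixel of a sharp high-frame-rate sequence through a first-order temperature response and sampling the output.

```python
import numpy as np

def synthesize_thermal_blur(sharp_frames, frame_dt, tau=8e-3):
    """sharp_frames: (N, H, W) array of sharp frames spaced frame_dt seconds
    apart; tau: thermal time constant. Returns the final response frame."""
    alpha = frame_dt / (tau + frame_dt)        # discrete first-order lag gain
    y = sharp_frames[0].astype(float)
    for f in sharp_frames[1:]:
        y = y + alpha * (f.astype(float) - y)  # exponential tracking of scene
    return y
```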
2. Image Generation and Motion Blur Model
2.1. Photon Detector Model
2.2. Thermal Detector Model
2.3. Generating the Synthetic Blurry Image in a Thermal Image
2.4. Verification of Thermal Detector Blur Model
2.4.1. Acquiring a Real Blurry Image
2.4.2. Obtaining a Synthetic Blurry Image from Sharp Images
2.4.3. Comparing Real and Synthetic Blurry Images
3. Blur Kernel Rendering Using a Gyroscope Sensor for a Thermal Detector
3.1. Blur Kernel Rendering and Gyroscope Data Selection
3.2. Calibration and Blur Kernel Refinement
4. Experimental Setup
4.1. Construction of Synthetic Blurry Thermal Image Dataset
4.2. Construction of Real Blurry Thermal Image Dataset
4.3. Our Deblurring Procedure
4.4. Evaluation Environment
5. Experimental Results
5.1. Performance Evaluation on SBTI Dataset
5.2. Performance Evaluation on Real Blurry Thermal Images
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Huda, A.N.; Taib, S. Application of infrared thermography for predictive/preventive maintenance of thermal defect in electrical equipment. Appl. Therm. Eng. 2013, 61, 220–227. [Google Scholar] [CrossRef]
- Mayer, S.; Lischke, L.; Woźniak, P.W. Drones for search and rescue. In Proceedings of the 1st International Workshop on Human-Drone Interaction, Glasgow, UK, 4–9 May 2019. [Google Scholar]
- Apvrille, L.; Tanzi, T.; Dugelay, J.L. Autonomous drones for assisting rescue services within the context of natural disasters. In Proceedings of the 2014 XXXIth URSI General Assembly and Scientific Symposium (URSI GASS), Beijing, China, 16–23 August 2014; pp. 1–4. [Google Scholar]
- Pinchon, N.; Cassignol, O.; Nicolas, A.; Bernardin, F.; Leduc, P.; Tarel, J.P.; Brémond, R.; Bercier, E.; Brunet, J. All-weather vision for automotive safety: Which spectral band? In International Forum on Advanced Microsystems for Automotive Applications; Springer: Berlin, Germany, 2018; pp. 3–15. [Google Scholar]
- Wikipedia. Infrared — Wikipedia, The Free Encyclopedia. 2021. Available online: http://en.wikipedia.org/w/index.php?title=Infrared&oldid=1052704429 (accessed on 3 November 2021).
- Kimata, M. Uncooled infrared focal plane arrays. IEEJ Trans. Electr. Electron. Eng. 2018, 13, 4–12. [Google Scholar] [CrossRef] [Green Version]
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 60–65. [Google Scholar]
- Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Image Restoration via Reconciliation of Group Sparsity and Low-Rank Models. IEEE Trans. Image Process. 2021, 30, 5223–5238. [Google Scholar] [CrossRef]
- Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhang, J.; Zhu, C. From Rank Estimation to Rank Approximation: Rank Residual Constraint for Image Restoration. IEEE Trans. Image Process. 2020, 29, 3254–3269. [Google Scholar] [CrossRef] [Green Version]
- Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process. 2000, 9, 889–896. [Google Scholar] [CrossRef] [Green Version]
- Jung, C.; Jiao, L.; Qi, H.; Sun, T. Image deblocking via sparse representation. Signal Process. Image Commun. 2012, 27, 663–677. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X.; Wen, B.; Zhang, J.; Zhou, J.; Zhu, C. Image Restoration Using Joint Patch-Group-Based Sparse Representation. IEEE Trans. Image Process. 2020, 29, 7735–7750. [Google Scholar] [CrossRef]
- Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424. [Google Scholar]
- Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhang, J.; Zhu, C. A Benchmark for Sparse Coding: When Group Sparsity Meets Rank Minimization. IEEE Trans. Image Process. 2020, 29, 5094–5109. [Google Scholar] [CrossRef] [Green Version]
- Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636. [Google Scholar]
- Yan, Y.; Ren, W.; Guo, Y.; Wang, R.; Cao, X. Image deblurring via extreme channels prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4003–4011. [Google Scholar]
- Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.T.; Zhou, J.; Zhu, C. Triply Complementary Priors for Image Restoration. IEEE Trans. Image Process. 2021, 30, 5819–5834. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X.; Zhou, J.; Zhu, C.; Wen, B. Image Restoration via Simultaneous Nonlocal Self-Similarity Priors. IEEE Trans. Image Process. 2020, 29, 8561–8576. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group Sparsity Residual Constraint with Non-Local Priors for Image Restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975. [Google Scholar] [CrossRef]
- Zhang, J.; Ghanem, B. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1828–1837. [Google Scholar]
- Han, J.; Lee, H.; Kang, M.G. Thermal Image Restoration Based on LWIR Sensor Statistics. Sensors 2021, 21, 5443. [Google Scholar] [CrossRef]
- Morris, N.J.W.; Avidan, S.; Matusik, W.; Pfister, H. Statistics of Infrared Images. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 18–23 June 2007; pp. 1–7. [Google Scholar] [CrossRef] [Green Version]
- Huang, Y.; Bi, D.; Wu, D. Infrared and visible image fusion based on different constraints in the non-subsampled shearlet transform domain. Sensors 2018, 18, 1169. [Google Scholar] [CrossRef] [Green Version]
- Ban, Y.; Lee, K. Multi-Scale Ensemble Learning for Thermal Image Enhancement. Appl. Sci. 2021, 11, 2810. [Google Scholar] [CrossRef]
- Choi, Y.; Kim, N.; Hwang, S.; Kweon, I.S. Thermal image enhancement using convolutional neural network. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 223–230. [Google Scholar]
- Lee, K.; Lee, J.; Lee, J.; Hwang, S.; Lee, S. Brightness-based convolutional neural network for thermal image enhancement. IEEE Access 2017, 5, 26867–26879. [Google Scholar] [CrossRef]
- Oswald-Tranta, B. Temperature reconstruction of infrared images with motion deblurring. J. Sens. Sens. Syst. 2018, 7, 13–20. [Google Scholar] [CrossRef] [Green Version]
- Nihei, R.; Tanaka, Y.; Iizuka, H.; Matsumiya, T. Simple correction model for blurred images of uncooled bolometer type infrared cameras. In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXX, International Society for Optics and Photonics, Baltimore, MD, USA, 14–18 April 2019; Volume 11001, pp. 420–427. [Google Scholar]
- Ramanagopal, M.S.; Zhang, Z.; Vasudevan, R.; Johnson-Roberson, M. Pixel-Wise Motion Deblurring of Thermal Videos. In Proceedings of the Robotics: Science and Systems XVI, Cambridge, MA, USA, 12–16 July 2020; Volume 16. [Google Scholar]
- Zhao, Y.; Fu, G.; Wang, H.; Zhang, S.; Yue, M. Infrared Image Deblurring Based on Generative Adversarial Networks. Int. J. Opt. 2021, 2021, 9946809. [Google Scholar] [CrossRef]
- Batchuluun, G.; Lee, Y.W.; Nguyen, D.T.; Pham, T.D.; Park, K.R. Thermal image reconstruction using deep learning. IEEE Access 2020, 8, 126839–126858. [Google Scholar] [CrossRef]
- Tao, X.; Gao, H.; Shen, X.; Wang, J.; Jia, J. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8174–8182. [Google Scholar]
- Pan, J.; Bai, H.; Tang, J. Cascaded deep video deblurring using temporal sharpness prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3043–3051. [Google Scholar]
- Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 8878–8887. [Google Scholar]
- Ye, M.; Lyu, D.; Chen, G. Scale-iterative upscaling network for image deblurring. IEEE Access 2020, 8, 18316–18325. [Google Scholar] [CrossRef]
- Wang, S.; Zhang, S.; Ning, M.; Zhou, B. Motion Blurred Star Image Restoration Based on MEMS Gyroscope Aid and Blur Kernel Correction. Sensors 2018, 18, 2662. [Google Scholar] [CrossRef] [Green Version]
- Liu, D.; Chen, X.; Liu, X.; Shi, C. Star Image Prediction and Restoration under Dynamic Conditions. Sensors 2019, 19, 1890. [Google Scholar] [CrossRef] [Green Version]
- Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles. Sensors 2017, 17, 1646. [Google Scholar] [CrossRef] [Green Version]
- Bae, H.; Fowlkes, C.C.; Chou, P.H. Accurate motion deblurring using camera motion tracking and scene depth. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013; pp. 148–153. [Google Scholar]
- Zhang, Y.; Hirakawa, K. Combining inertial measurements with blind image deblurring using distance transform. IEEE Trans. Comput. Imaging 2016, 2, 281–293. [Google Scholar] [CrossRef]
- Hu, Z.; Yuan, L.; Lin, S.; Yang, M.H. Image deblurring using smartphone inertial sensors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1855–1864. [Google Scholar]
- Hee Park, S.; Levoy, M. Gyro-based multi-image deconvolution for removing handshake blur. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3366–3373. [Google Scholar]
- Mustaniemi, J.; Kannala, J.; Särkkä, S.; Matas, J.; Heikkilä, J. Inertial-aided motion deblurring with deep networks. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 8–10 January 2019; pp. 1914–1922. [Google Scholar] [CrossRef] [Green Version]
- Joshi, N.; Kang, S.B.; Zitnick, C.L.; Szeliski, R. Image deblurring using inertial measurement sensors. ACM Trans. Graph. (TOG) 2010, 29, 1–9. [Google Scholar]
- Ji, S.; Hong, J.-P.; Lee, J.; Baek, S.-J.; Ko, S.-J. Robust Single Image Deblurring Using Gyroscope Sensor. IEEE Access 2021, 9, 80835–80846. [Google Scholar] [CrossRef]
- Sindelar, O.; Sroubek, F. Image deblurring in smartphone devices using built-in inertial measurement sensors. J. Electron. Imaging 2013, 22, 011003. [Google Scholar] [CrossRef]
- Nah, S.; Hyun Kim, T.; Mu Lee, K. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3883–3891. [Google Scholar]
- Zhang, K.; Luo, W.; Zhong, Y.; Ma, L.; Stenger, B.; Liu, W.; Li, H. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2737–2746. [Google Scholar]
- Navarro, F.; Serón, F.J.; Gutierrez, D. Motion blur rendering: State of the art. In Computer Graphics Forum; Wiley: Hoboken, NJ, USA, 2011; Volume 30, pp. 3–26. [Google Scholar]
- Lancelle, M.; Dogan, P.; Gross, M. Controlling motion blur in synthetic long time exposures. In Computer Graphics Forum; Wiley: Hoboken, NJ, USA, 2019; Volume 38, pp. 393–403. [Google Scholar]
- Kruse, P.W. Chapter 2 Principles of Uncooled Infrared Focal Plane Arrays. In Uncooled Infrared Imaging Arrays and Systems; Kruse, P.W., Skatrud, D.D., Eds.; Elsevier: Amsterdam, The Netherlands, 1997; Volume 47, pp. 17–42. [Google Scholar] [CrossRef]
- Oh, J.; Song, H.S.; Park, J.; Lee, J.K. Noise Improvement of a-Si Microbolometers by the Post-Metal Annealing Process. Sensors 2021, 21, 6722. [Google Scholar] [CrossRef]
- Butcher, J.C. Numerical Differential Equation Methods. In Numerical Methods for Ordinary Differential Equations; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2008. [CrossRef]
- Pradham, P.; Younan, N.H.; King, R.L. 16—Concepts of image fusion in remote sensing applications. In Image Fusion; Stathaki, T., Ed.; Academic Press: Oxford, UK, 2008; pp. 393–428. [Google Scholar] [CrossRef]
- Hartley, R.; Zisserman, A. Scene planes and homographies. In Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2004; pp. 325–343. [Google Scholar] [CrossRef]
- Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 27–40. [Google Scholar]
- Whyte, O.; Sivic, J.; Zisserman, A.; Ponce, J. Non-uniform deblurring for shaken images. Int. J. Comput. Vis. 2012, 98, 168–186. [Google Scholar] [CrossRef] [Green Version]
- Bell, S.; Troccoli, A.; Pulli, K. A non-linear filter for gyroscope-based video stabilization. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2014; pp. 294–308. [Google Scholar]
- Hu, Z.; Cho, S.; Wang, J.; Yang, M.H. Deblurring low-light images with light streaks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3382–3389. [Google Scholar]
- Bouguet, J.Y. Camera Calibration Toolbox for Matlab. 2004. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html (accessed on 4 November 2021).
- Kino, G.S.; Corle, T.R. Confocal Scanning Optical Microscopy and Related Imaging Systems; Academic Press: Cambridge, MA, USA, 1996. [Google Scholar]
- Zhang, B.; Zerubia, J.; Olivo-Marin, J.C. Gaussian approximations of fluorescence microscope point-spread function models. Appl. Opt. 2007, 46, 1819–1829. [Google Scholar] [CrossRef]
- Guenther, B.D.; Steel, D. Encyclopedia of Modern Optics; Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. Adv. Neural Inf. Process. Syst. 2009, 22, 1033–1041. [Google Scholar]
- Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. Deblurring text images via L0-regularized intensity and gradient prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2901–2908. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
| Camera Parameters | Value | Gyroscope Parameters | Value |
|---|---|---|---|
| Resolution (pixel) | 640 × 480 | Resolution (°/s) | 0.0076 |
| Frame rate (Hz) | 50 | Frame rate (Hz) | 1000 |
| FOV/IFOV (°) | 25 × 19/0.0391 | Range (°/s) | ±200 |
| Thermal time constant (ms) | 8 | Bias drift (°/s) | 0.12 |
| Focal length (mm)/f-number | 24.6/1.0 | Total RMS noise (°/s) | 0.05 |
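These numbers permit a back-of-envelope check of blur severity (illustrative arithmetic, not taken from the paper): at 50 Hz, a camera rotating at 50°/s sweeps 1° per frame, which at an IFOV of 0.0391°/pixel corresponds to roughly 26 pixels of motion, before the 8 ms thermal time constant stretches the trail further.

```python
rate_dps = 50.0        # camera rotation speed (°/s), example value
frame_time_s = 1 / 50  # from the 50 Hz frame rate above
ifov_deg = 0.0391      # instantaneous field of view per pixel (°)
blur_px = rate_dps * frame_time_s / ifov_deg
print(f"motion per frame: {blur_px:.1f} px")   # ≈ 25.6 px
```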
| STI Dataset | Subject | # of Images | # of Gyro. Samples | Collection Environment | Bit Depth |
|---|---|---|---|---|---|
| [1] | Test pattern | 1400 | 28,000 | Indoor | 16 bits |
| [2] | Vehicle, Road | 1600 | 32,000 | Outdoor | 16 bits |
| [3] | Person, Road | 2000 | 40,000 | Outdoor | 16 bits |
| [4] | Person, Vehicle | 2000 | 40,000 | Outdoor | 16 bits |
SBTI dataset subsets, indexed by STI subset (rows) and maximum camera rotation speed in °/s (columns):

| STI Dataset | 6.25 | 9.375 | 12.5 | 25 | 50 | 75 | 100 |
|---|---|---|---|---|---|---|---|
| [1] | [1-1] | [1-2] | [1-3] | [1-4] | [1-5] | [1-6] | [1-7] |
| [2] | [2-1] | [2-2] | [2-3] | [2-4] | [2-5] | [2-6] | [2-7] |
| [3] | [3-1] | [3-2] | [3-3] | [3-4] | [3-5] | [3-6] | [3-7] |
| [4] | [4-1] | [4-2] | [4-3] | [4-4] | [4-5] | [4-6] | [4-7] |
| SBTI Dataset | SRN [33] (PSNR/SSIM) | SIUN [36] (PSNR/SSIM) | DeblurGAN.v2 [35] (PSNR/SSIM) | CDVD [34] (PSNR/SSIM) | Ours (PSNR/SSIM) |
|---|---|---|---|---|---|
| [1-1] | 40.33 / 0.9881 | 41.03 / 0.9914 | 41.30 / 0.9910 | 39.62 / 0.9905 | 41.57 / 0.9926 |
| [1-2] | 37.96 / 0.9849 | 38.45 / 0.9889 | 38.37 / 0.9872 | 37.09 / 0.9874 | 38.79 / 0.9906 |
| [1-3] | 35.94 / 0.9815 | 36.35 / 0.9858 | 36.13 / 0.9835 | 35.05 / 0.9840 | 36.42 / 0.9880 |
| [1-4] | 30.97 / 0.9675 | 31.11 / 0.9714 | 30.91 / 0.9695 | 30.36 / 0.9699 | 31.06 / 0.9756 |
| [1-5] | 26.69 / 0.9419 | 26.74 / 0.9476 | 26.64 / 0.9456 | 26.32 / 0.9453 | 26.65 / 0.9526 |
| [1-6] | 24.59 / 0.9221 | 24.67 / 0.9298 | 24.57 / 0.9273 | 24.34 / 0.9271 | 24.52 / 0.9337 |
| [1-7] | 23.21 / 0.9049 | 23.33 / 0.9141 | 23.22 / 0.9118 | 23.07 / 0.9130 | 23.11 / 0.9165 |
| Average | 31.38 / 0.9558 | 31.67 / 0.9613 | 31.59 / 0.9594 | 30.84 / 0.9596 | 31.73 / 0.9642 |
| SBTI Dataset | SRN [33] (PSNR/SSIM) | SIUN [36] (PSNR/SSIM) | DeblurGAN.v2 [35] (PSNR/SSIM) | CDVD [34] (PSNR/SSIM) | Ours (PSNR/SSIM) |
|---|---|---|---|---|---|
| [2-1] | 28.66 / 0.8573 | 29.74 / 0.9026 | 32.25 / 0.9458 | 28.12 / 0.8358 | 32.98 / 0.9600 |
| [2-2] | 27.06 / 0.8247 | 27.97 / 0.8719 | 30.06 / 0.9221 | 26.54 / 0.8076 | 30.93 / 0.9504 |
| [2-3] | 26.02 / 0.8048 | 26.72 / 0.8455 | 28.69 / 0.9014 | 25.57 / 0.7891 | 29.55 / 0.9396 |
| [2-4] | 23.82 / 0.7603 | 24.32 / 0.7805 | 25.81 / 0.8405 | 24.04 / 0.7679 | 26.38 / 0.9034 |
| [2-5] | 21.78 / 0.7128 | 22.54 / 0.7421 | 23.36 / 0.7738 | 22.74 / 0.7674 | 23.49 / 0.8492 |
| [2-6] | 20.29 / 0.6743 | 21.01 / 0.7063 | 21.74 / 0.7262 | 21.53 / 0.7450 | 21.86 / 0.8104 |
| [2-7] | 19.11 / 0.6487 | 19.66 / 0.6776 | 20.28 / 0.6902 | 20.47 / 0.7204 | 20.61 / 0.7757 |
| Average | 23.82 / 0.7547 | 24.56 / 0.7895 | 26.03 / 0.8286 | 24.14 / 0.7762 | 26.54 / 0.8841 |
| SBTI Dataset | SRN [33] (PSNR/SSIM) | SIUN [36] (PSNR/SSIM) | DeblurGAN.v2 [35] (PSNR/SSIM) | CDVD [34] (PSNR/SSIM) | Ours (PSNR/SSIM) |
|---|---|---|---|---|---|
| [3-1] | 29.20 / 0.8606 | 29.64 / 0.8862 | 35.69 / 0.9603 | 34.03 / 0.9240 | 36.56 / 0.9600 |
| [3-2] | 27.93 / 0.8305 | 28.66 / 0.8597 | 33.79 / 0.9368 | 32.43 / 0.9081 | 35.02 / 0.9525 |
| [3-3] | 27.05 / 0.8053 | 27.92 / 0.8394 | 32.66 / 0.9201 | 31.45 / 0.8965 | 33.95 / 0.9452 |
| [3-4] | 25.34 / 0.7556 | 26.25 / 0.7961 | 30.10 / 0.8772 | 29.21 / 0.8657 | 31.10 / 0.9177 |
| [3-5] | 24.29 / 0.7348 | 24.90 / 0.7656 | 27.27 / 0.8237 | 26.72 / 0.8263 | 28.00 / 0.8786 |
| [3-6] | 23.38 / 0.7196 | 23.90 / 0.7435 | 25.52 / 0.7882 | 25.14 / 0.7982 | 25.93 / 0.8427 |
| [3-7] | 22.48 / 0.7034 | 22.94 / 0.7215 | 24.21 / 0.7605 | 23.82 / 0.7726 | 24.53 / 0.8128 |
| Average | 25.67 / 0.7728 | 26.32 / 0.8017 | 29.89 / 0.8667 | 28.97 / 0.8559 | 30.73 / 0.9013 |
| SBTI Dataset | SRN [33] (PSNR/SSIM) | SIUN [36] (PSNR/SSIM) | DeblurGAN.v2 [35] (PSNR/SSIM) | CDVD [34] (PSNR/SSIM) | Ours (PSNR/SSIM) |
|---|---|---|---|---|---|
| [4-1] | 30.37 / 0.8925 | 31.42 / 0.9271 | 33.63 / 0.9552 | 32.19 / 0.9258 | 34.05 / 0.9640 |
| [4-2] | 29.02 / 0.8742 | 29.78 / 0.9066 | 31.78 / 0.9373 | 30.77 / 0.9177 | 32.34 / 0.9589 |
| [4-3] | 28.14 / 0.8620 | 28.71 / 0.8900 | 30.67 / 0.9262 | 29.86 / 0.9110 | 31.22 / 0.9532 |
| [4-4] | 25.98 / 0.8294 | 26.40 / 0.8531 | 27.87 / 0.8923 | 27.44 / 0.8937 | 28.20 / 0.9312 |
| [4-5] | 23.88 / 0.7947 | 24.22 / 0.8137 | 25.19 / 0.8506 | 24.81 / 0.8636 | 25.02 / 0.8956 |
| [4-6] | 22.53 / 0.7731 | 22.82 / 0.7869 | 23.53 / 0.8216 | 23.22 / 0.8390 | 23.41 / 0.8704 |
| [4-7] | 21.52 / 0.7567 | 21.74 / 0.7662 | 22.33 / 0.8022 | 22.06 / 0.8175 | 22.30 / 0.8460 |
| Average | 25.92 / 0.8261 | 26.44 / 0.8491 | 27.86 / 0.8836 | 27.19 / 0.8812 | 28.08 / 0.9170 |
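For reference, the PSNR and SSIM [124] figures in the four tables above can in principle be reproduced with standard implementations; the sketch below assumes 16-bit images and is not the authors' evaluation script.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt, data_range=65535):
    """PSNR/SSIM for a 16-bit image pair (data_range assumed)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    ssim = structural_similarity(gt, pred, data_range=data_range)
    return psnr, ssim
```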
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).