Pixel-Level and Robust Vibration Source Sensing in High-Frame-Rate Video Analysis
Figure 1. Concept of vibration features with pixel-level digital filters.
Figure 2. Robustness of vibration features with pixel-level digital filters.
Figure 3. Overview of high-frame-rate video shooting.
Figure 4. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 5. Averaged amplitudes and extracted region sizes with aperture variation. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios on the extracted pixels; (b) diameters of extracted vibration region.
Figure 6. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 7. Averaged amplitudes and extracted region sizes with focus distance variation. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios on the extracted pixels; (b) diameters of extracted vibration region.
Figure 8. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 9. Averaged amplitudes and extracted region sizes with focal length variation. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios on the extracted pixels; (b) diameters of extracted vibration region.
Figure 10. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 11. Averaged amplitude values and extracted region sizes with orientation variation. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios on the extracted pixels; (b) minor axis lengths of extracted vibration region.
Figure 12. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 13. Averaged amplitude values and number of extracted pixels with rotation speed variation. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios; (b) number of extracted pixels as vibration region.
Figure 14. Moving fan against three-blades-patterned background.
Figure 15. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 16. Averaged amplitude values and number of extracted pixels when a rotating fan moves. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios on the extracted pixels; (b) number of extracted pixels as vibration region and slider speeds.
Figure 17. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features.
Figure 18. Averaged amplitude values and number of extracted pixels with moving background. (a) Averaged amplitudes of input and pixel-wise filtered images and their ratios on the extracted pixels; (b) number of extracted pixels as vibration region and slider speeds.
Figure 19. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features; (f) tracked positions; (g) magnified images.
Figure 20. xy trajectory of extracted vibration region in the "trees-and-building background" experiment. (a) x- and y-coordinates and number of pixels; (b) xy trajectory.
Figure 21. (a) Input images; amplitude of (b) input and (c) pixel-wise filtered images; (d) amplitude ratios; (e) extracted vibration features; (f) tracked positions; (g) magnified images.
Figure 22. xy trajectory of extracted vibration region in the "walking-persons background" experiment. (a) x- and y-coordinates and number of pixels; (b) xy trajectory.
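The panel sequence that recurs in the captions above (input amplitude, pixel-wise filtered amplitude, amplitude ratio, extracted vibration features) corresponds to a per-pixel temporal filtering pipeline. The following is a minimal sketch of such a pipeline, assuming a Butterworth band-pass around the expected vibration frequency; the function `extract_vibration_mask`, its parameters, and the 0.5 ratio threshold are illustrative assumptions, not the authors' published implementation.

```python
# Sketch of pixel-level vibration feature extraction, as suggested by the
# figure captions: band-pass filter every pixel's intensity signal over time,
# then compare filtered vs. raw amplitude. Filter order, pass band, and the
# ratio threshold are illustrative assumptions.
import numpy as np
from scipy.signal import butter, lfilter

def extract_vibration_mask(frames, fps, f_lo, f_hi, ratio_thresh=0.5):
    """frames: (T, H, W) grayscale high-frame-rate sequence."""
    # Remove the per-pixel DC component so amplitude reflects oscillation only.
    x = frames.astype(np.float64)
    x -= x.mean(axis=0, keepdims=True)

    # Second-order Butterworth band-pass around the expected vibration band,
    # applied independently to every pixel's time series.
    b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fps)
    y = lfilter(b, a, x, axis=0)

    # Amplitudes of the input and the pixel-wise filtered signal (RMS over time).
    amp_in = np.sqrt((x ** 2).mean(axis=0))
    amp_filt = np.sqrt((y ** 2).mean(axis=0))

    # Pixels whose energy concentrates in the pass band are vibration sources.
    ratio = amp_filt / np.maximum(amp_in, 1e-6)
    return ratio > ratio_thresh, ratio

# Example: a 2000-fps clip of a three-blade fan rotating at ~30 Hz has a
# blade-passing frequency near 90 Hz, so a 70-110 Hz pass band might be used:
# mask, ratio = extract_vibration_mask(frames, fps=2000, f_lo=70, f_hi=110)
```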
Abstract
1. Introduction
2. Vibration Feature with Pixel-Level Digital Filters
3. Experiments for a Rotating Fan
3.1. Image Intensity
3.2. Defocus Blur
3.3. Apparent Scale
3.4. Orientation
3.5. Rotation Speed
3.6. Moving Fan
3.7. Moving Background
4. Experiment for a Flying Multicopter
4.1. Trees-and-Building Background
4.2. Walking-Persons Background
5. Conclusions and Future Work
Author Contributions
Conflicts of Interest
References
| Image Size | 64 × 64 | 128 × 128 | 256 × 256 | 512 × 512 | 1024 × 1024 | 2048 × 2048 |
|---|---|---|---|---|---|---|
| Execution Time | 0.16 ms | 0.66 ms | 2.69 ms | 10.47 ms | 39.78 ms | 157.38 ms |
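Read as a throughput bound, the execution times above grow roughly fourfold per resolution step, i.e., linearly in pixel count. Assuming the listed times are per frame and counting processing alone (capture and I/O excluded), the implied maximum frame rates are:

```latex
% Frame-rate bound f_max = 1 / t_exec implied by the measured per-frame times
\begin{aligned}
128 \times 128   &: \; 1/0.66\ \mathrm{ms}   \approx 1500\ \mathrm{fps} \\
512 \times 512   &: \; 1/10.47\ \mathrm{ms}  \approx 96\ \mathrm{fps}   \\
2048 \times 2048 &: \; 1/157.38\ \mathrm{ms} \approx 6.4\ \mathrm{fps}
\end{aligned}
```

On the hardware used for these measurements, frame rates at or above 1000 fps are therefore feasible only up to roughly 128 × 128 pixels.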
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jiang, M.; Aoyama, T.; Takaki, T.; Ishii, I. Pixel-Level and Robust Vibration Source Sensing in High-Frame-Rate Video Analysis. Sensors 2016, 16, 1842. https://doi.org/10.3390/s16111842