Editorial

Special Issue on Visual Sensors

Oscar Reinoso * and Luis Payá *
Department of Systems Engineering and Automation, Miguel Hernández University, 03202 Elche, Spain
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(3), 910; https://doi.org/10.3390/s20030910
Submission received: 4 February 2020 / Accepted: 6 February 2020 / Published: 8 February 2020
(This article belongs to the Special Issue Visual Sensors)

1. Introduction

Visual sensors have characteristics that make them attractive as sources of information for many processes and systems. On the one hand, they capture precise, high-resolution information about the environment while remaining small and inexpensive. On the other hand, they gather a large amount of information about the environment around them. These properties explain why they have been employed for several decades to solve many different tasks, and their versatility means they are increasingly used across a wide variety of fields of application.
Nowadays, a wide variety of visual systems can be found, from classical monocular systems to omnidirectional, RGB-D, and more sophisticated 3D systems. Each configuration presents specific characteristics that make it useful for solving different problems. Their range of applications is wide and varied, including robotics, industry, agriculture, quality control, visual inspection, surveillance, autonomous driving, and navigation aid systems.
Visual systems can be used to obtain relevant information from the environment, which can then be processed to solve a specific problem. The aim of this Special Issue is to present some of the possibilities that vision systems offer, focusing on the different configurations that can be used and on novel applications in diverse fields.
In this Special Issue, 63 contributions were submitted and 36 of them were published (i.e., a 57% acceptance rate). The published articles offer a representative view of how visual sensors are used in very different fields of application, from mapping for mobile robot navigation to object recognition and scene reconstruction.

2. Contributions to the Special Issue on Visual Sensors

In the field of visual navigation of mobile robots, including SLAM (Simultaneous Localization and Mapping) and visual odometry, several alternatives are presented in the papers of this Special Issue. In [1], an RGB-D SLAM algorithm is presented that uses the concept of orientation relevance, taking the Manhattan Frame Estimation into account. Teng et al. [2] provided a method for aircraft pose estimation that does not rely on 3D models, using two widely separated cameras to acquire the pose information. In [3], a new framework for online visual object tracking is proposed; a motion-aware strategy predicts the possible region and scale of the target in each frame by utilizing the previously estimated 3D motion information. Wang et al. [4] provided an improved indoor visual SLAM method that uses point and line segment features extracted by stereo cameras, achieving robust results. In [5], an RGB-D sensor is employed to build a dense 3D semantic map of the environment by means of a Pixel-Voxel network. Aladem et al. [6] proposed a low-overhead, real-time ego-motion estimation (visual odometry) system based on either a stereo or an RGB-D sensor; the proposed algorithm maintains a local map, requiring significantly less memory and computational power. Nawaf et al. [7] provided the details of a visual odometry method adapted to the underwater context; they employed the captured stereo images to provide real-time navigation and a site coverage map, which is necessary to conduct a complete underwater survey. Valiente et al. [8] presented a visual information fusion approach for robust, probability-oriented feature matching that can be used within a more general SLAM procedure; this strategy permits obtaining relevant areas in the image reference system, from which probable matches can be detected.
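To make this kind of pipeline concrete, the following is a minimal sketch of frame-to-frame visual odometry in Python with OpenCV. It is purely illustrative: the feature type (ORB), the matching strategy, and all thresholds are our own assumptions, and none of the cited systems works exactly this way.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate the relative camera rotation and translation between two frames."""
    # Detect and describe keypoints in both grayscale frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Match binary descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # A RANSAC-estimated essential matrix rejects remaining outlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is recovered only up to scale with a single camera
```

A complete system would chain these relative poses over consecutive frames and, as several of the works above do, maintain a local map or add loop closure to limit drift.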
Image retrieval aims at browsing, searching, and retrieving images from a large database of digital images. Proposing new descriptors that capture the characteristics of an image can be key in this regard. García-Olalla et al. [9] presented a new texture descriptor booster based on statistical information of the image, which is employed in texture-based image classification. Fareed et al. [10] proposed a framework for salient region detection that uses appearance-based and regression-based schemes to reduce the computational complexity and focus on the salient parts of the image. In this sense, Feng et al. [11] proposed a texture descriptor for image retrieval by designing a local parallel cross pattern in which the local binary pattern map is fused with the color map. In addition, Feng et al. [12] proposed a hybrid histogram descriptor for image retrieval; the proposed descriptor jointly comprises two histograms, a perceptually uniform histogram and a motif co-occurrence histogram that includes the probability of a pair of motif patterns. Finally, García-Olalla et al. [13] proposed a method for textile image retrieval in indoor environments based on describing the images in different channels (RGB, HSV, etc.) and combining two different descriptors of the image.
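As a concrete illustration of this family of descriptors, the sketch below computes a uniform local binary pattern (LBP) histogram in Python. This is the generic textbook LBP, not the exact descriptor of any cited work, and the parameter values are arbitrary assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    """Normalized histogram of uniform LBP codes for a grayscale image."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()  # normalize so images of any size are comparable
```

Retrieval then reduces to comparing such histograms between a query and the database, e.g., with a chi-squared distance; descriptors such as those in [11,12] enrich this basic scheme with color and co-occurrence information.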
Visual sensors can also be an important source of information to support other tasks. In [14], a novel global point cloud descriptor is proposed for reliable object recognition and pose estimation, which can be applied to robot grasping operations. Martinez-Martin et al. [15] provided an approach based on depth cameras to robustly evaluate manipulation success in robotic object manipulation; the proposed method allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. Xue et al. [16] presented a vision system capable of automatic 3D joint detection, applied in a robotic seam tracking system for gas tungsten arc welding.
The calibration of vision systems plays a very important role in the applications where these types of sensors are used, since a well-calibrated system permits more robust results to be achieved in later stages. Zhang et al. [17] presented a simple calibration method for laser range finder and camera combination systems that needs only a calibration board. In [18], an alternative approach is provided that uses gray-code patterns displayed on an LCD screen to determine the camera parameters; the proposed approach is 1.5 times more precise than standard calibration with a checkerboard pattern. Finally, Choi et al. [19] proposed a method that automatically calibrates the four cameras of an around view monitor system in a natural driving situation.
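For reference, the classical checkerboard calibration that these works extend or compare against can be sketched in a few lines with OpenCV; the board size and square length below are illustrative assumptions.

```python
import cv2
import numpy as np

def calibrate(images, board=(9, 6), square=0.025):
    """Estimate camera intrinsics from grayscale views of a checkerboard."""
    # 3D coordinates of the inner corners on the board plane (z = 0), in meters.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts = [], []
    for gray in images:
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            # Refine the detected corner locations to sub-pixel accuracy.
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)

    # Returns the RMS reprojection error, intrinsic matrix, and distortion.
    err, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
    return err, K, dist
```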
Object recognition is a task in which a vision system is almost always involved, and during the past few years many proposals have been made in this area, including different methods to recognize the objects present in an image. Kapuscinski et al. [20] presented a method for hand shape recognition based on skeletal data, which encodes the relative differences between vectors associated with the pointing directions of the individual fingers and the palm normal. Wang et al. [21] presented a new spatiotemporal action localization detector that consists of sequences of per-frame segmentation masks; the proposed detector can pinpoint the starting or ending frame of each action category in untrimmed videos. In [22], a system is presented that automatically designs the field of view of a camera, the illumination strength, and the parameters of a recognition algorithm. Nguyen et al. [23] proposed a new presentation attack detection method for iris recognition systems using a near-infrared light camera image; this method counters the effect that attack images captured from high-quality printed images can have on classic iris recognition systems. Fu et al. [24] presented an approach for pedestrian detection that combines several previously proposed methods with an efficient sliding window classification strategy; the detector achieves fast detection speed together with state-of-the-art accuracy. Wang et al. [25] proposed a model to solve the 3D reconstruction problem for dynamic non-rigid objects with a single RGB-D sensor.
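The sliding window strategy mentioned for [24] can be sketched generically as follows; here `score_window` stands in for any trained classifier, and the window size, stride, and scales are arbitrary assumptions rather than values from the cited paper.

```python
import cv2

def sliding_window_detect(image, score_window, win=(64, 128), stride=8,
                          scales=(1.0, 0.75, 0.5), threshold=0.9):
    """Return (x, y, w, h, score) boxes whose window score exceeds the threshold."""
    detections = []
    for s in scales:
        # Rescale the image so a fixed-size window covers objects of varying size.
        scaled = cv2.resize(image, None, fx=s, fy=s)
        for y in range(0, scaled.shape[0] - win[1] + 1, stride):
            for x in range(0, scaled.shape[1] - win[0] + 1, stride):
                crop = scaled[y:y + win[1], x:x + win[0]]
                score = score_window(crop)  # placeholder classifier
                if score >= threshold:
                    # Map the window back to original image coordinates.
                    detections.append((int(x / s), int(y / s),
                                       int(win[0] / s), int(win[1] / s), score))
    return detections  # typically followed by non-maximum suppression
```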
Over the past few years, the field of visual systems has been shifting from classical statistical methods to deep learning methods. Video-based person detection and recognition is an important task with many challenges, such as lighting variation, occlusion, and human appearance similarity. In [26], a video-based person re-identification method with hybrid deep appearance-temporal features is proposed. Another application using deep learning methods was presented by Arsalan et al. [27]; the authors proposed a densely connected fully convolutional network, which can determine the true iris boundary even in inferior-quality images thanks to the improved information and gradient flow between the dense blocks. Liu et al. [28] proposed a method to improve the performance of star sensors under dynamic conditions based on an ensemble back-propagation neural network.
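The dense connectivity pattern behind networks such as the one in [27] can be sketched in PyTorch as below: every layer receives the concatenated feature maps of all preceding layers, which is what improves information and gradient flow. The growth rate and depth are illustrative; this is not the cited architecture itself.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """A minimal DenseNet-style block with num_layers 3x3 convolutions."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Each layer consumes everything produced so far.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False)))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenating earlier outputs creates the dense skip connections.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```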
Scene reconstruction is a key task that is necessary to accomplish more complex problems such as mobile robot navigation. Xia et al. [29] presented a visual-inertial odometry approach as a solution for a robot navigation system. Cheng et al. [30] presented a high-accuracy method for globally consistent surface reconstruction using a single fringe projection profilometry sensor. Lane marking detection and localization are crucial for autonomous driving and lane-based pavement surveys; in [31], a novel methodology is presented for automated lane marking identification and reconstruction, together with a case study that validates it. Finally, Zhang et al. [32] proposed an improved method for seamline searching in UAV images; the experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.
Finally, one of the most widely discussed topics regarding vision systems is their use to take measurements, and some of the papers of the Special Issue revolve around this problem. In [33], the authors presented an improved rotation-angle measurement method based on geometric moments that is suitable for automatic sorting systems. In [34], a stereo vision system is employed for measuring the ram speed of steam hammers while reducing the influence of strong vibration; the accuracy and effectiveness of the method were experimentally verified. Li et al. [35] proposed a pose estimation method for sweet pepper detachment in which the acquired point cloud is separated into candidate planes that are evaluated separately using a scoring strategy. Yang et al. [36] presented a comparative analysis of stereo 3D shape measurements based on digital image correlation.
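As background for moment-based angle measurement, the textbook relation recovers an object's orientation from its central moments as θ = ½·atan2(2μ11, μ20 − μ02). The sketch below applies it to a binary object mask; it illustrates the starting point that methods such as [33] refine, not their actual algorithm.

```python
import cv2
import numpy as np

def orientation_deg(mask):
    """Orientation (degrees) of the principal axis of a binary object mask."""
    m = cv2.moments(mask, binaryImage=True)
    # Central moments mu11, mu20, mu02 define the principal-axis angle.
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return np.degrees(theta)
```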

Acknowledgments

This Special Issue would not have been possible without the valuable contributions of the authors, peer reviewers, and editorial team of Sensors. Our most sincere thanks go to all the authors for their hard work, regardless of the final decision on their submitted manuscripts. We are also grateful to the peer reviewers for their help and fruitful feedback to the authors. Finally, our warmest thanks go to the editorial team for their untiring support and hard work during all stages of the development of this Special Issue and, in general, our congratulations on the great success of the journal Sensors.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, L.; Wu, Z. RGB-D SLAM with Manhattan Frame Estimation Using Orientation Relevance. Sensors 2019, 19, 1050.
2. Teng, X.; Yu, Q.; Luo, J.; Zhang, X.; Wang, G. Pose Estimation for Straight Wing Aircraft Based on Consistent Line Clustering and Planes Intersection. Sensors 2019, 19, 342.
3. Zhang, Y.; Yang, Y.; Zhou, W.; Shi, L.; Li, D. Motion-Aware Correlation Filters for Online Visual Tracking. Sensors 2018, 18, 3937.
4. Wang, R.; Di, K.; Wan, W.; Wang, Y. Improved Point-Line Feature Based Visual SLAM Method for Indoor Scenes. Sensors 2018, 18, 3559.
5. Zhao, C.; Sun, L.; Purkait, P.; Duckett, T.; Stolkin, R. Dense RGB-D Semantic Mapping with Pixel-Voxel Neural Network. Sensors 2018, 18, 3099.
6. Aladem, M.; Rawashdeh, S. Lightweight Visual Odometry for Autonomous Mobile Robots. Sensors 2018, 18, 2837.
7. Nawaf, M.; Merad, D.; Royer, J.; Boï, J.; Saccone, M.; Ben Ellefi, M.; Drap, P. Fast Visual Odometry for a Low-Cost Underwater Embedded Stereo System. Sensors 2018, 18, 2313.
8. Valiente, D.; Payá, L.; Jiménez, L.; Sebastián, J.; Reinoso, Ó. Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching. Sensors 2018, 18, 2041.
9. García-Olalla, Ó.; Fernández-Robles, L.; Alegre, E.; Castejón-Limas, M.; Fidalgo, E. Boosting Texture-Based Classification by Describing Statistical Information of Gray-Levels Differences. Sensors 2019, 19, 1048.
10. Fareed, M.; Chun, Q.; Ahmed, G.; Murtaza, A.; Asif, M.; Fareed, M. Appearance-Based Salient Regions Detection Using Side-Specific Dictionaries. Sensors 2019, 19, 421.
11. Feng, Q.; Hao, Q.; Sbert, M.; Yi, Y.; Wei, Y.; Dai, J. Local Parallel Cross Pattern: A Color Texture Descriptor for Image Retrieval. Sensors 2019, 19, 315.
12. Feng, Q.; Hao, Q.; Chen, Y.; Yi, Y.; Wei, Y.; Dai, J. Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval. Sensors 2018, 18, 1943.
13. García-Olalla, O.; Alegre, E.; Fernández-Robles, L.; Fidalgo, E.; Saikia, S. Textile Retrieval Based on Image Content from CDC and Webcam Cameras in Indoor Environments. Sensors 2018, 18, 1329.
14. Wang, F.; Liang, C.; Ru, C.; Cheng, H. An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System. Sensors 2019, 19, 2225.
15. Martinez-Martin, E.; del Pobil, A. Vision for Robust Robot Manipulation. Sensors 2019, 19, 1648.
16. Xue, B.; Chang, B.; Peng, G.; Gao, Y.; Tian, Z.; Du, D.; Wang, G. A Vision Based Detection Method for Narrow Butt Joints and a Robotic Seam Tracking System. Sensors 2019, 19, 1144.
17. Zhang, Z.; Zhao, R.; Liu, E.; Yan, K.; Ma, Y. A Convenient Calibration Method for LRF-Camera Combination Systems Based on a Checkerboard. Sensors 2019, 19, 1315.
18. Sels, S.; Ribbens, B.; Vanlanduit, S.; Penne, R. Camera Calibration Using Gray Code. Sensors 2019, 19, 246.
19. Choi, K.; Jung, H.; Suhr, J. Automatic Calibration of an Around View Monitor System Exploiting Lane Markings. Sensors 2018, 18, 2956.
20. Kapuscinski, T.; Organisciak, P. Handshape Recognition Using Skeletal Data. Sensors 2018, 18, 2577.
21. Wang, L.; Duan, X.; Zhang, Q.; Niu, Z.; Hua, G.; Zheng, N. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation. Sensors 2018, 18, 1657.
22. Chen, Y.; Ogata, T.; Ueyama, T.; Takada, T.; Ota, J. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images. Sensors 2018, 18, 1656.
23. Nguyen, D.; Baek, N.; Pham, T.; Park, K. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor. Sensors 2018, 18, 1315.
24. Fu, X.; Yu, R.; Zhang, W.; Wu, J.; Shao, S. Delving Deep into Multiscale Pedestrian Detection via Single Scale Feature Maps. Sensors 2018, 18, 1063.
25. Wang, S.; Zuo, X.; Du, C.; Wang, R.; Zheng, J.; Yang, R. Dynamic Non-Rigid Objects Reconstruction with a Single RGB-D Sensor. Sensors 2018, 18, 886.
26. Sun, R.; Huang, Q.; Xia, M.; Zhang, J. Video-Based Person Re-Identification by an End-To-End Learning Architecture with Hybrid Deep Appearance-Temporal Feature. Sensors 2018, 18, 3669.
27. Arsalan, M.; Naqvi, R.; Kim, D.; Nguyen, P.; Owais, M.; Park, K. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors. Sensors 2018, 18, 1501.
28. Liu, D.; Chen, X.; Liu, X.; Shi, C. Star Image Prediction and Restoration under Dynamic Conditions. Sensors 2019, 19, 1890.
29. Xia, L.; Meng, Q.; Chi, D.; Meng, B.; Yang, H. An Optimized Tightly-Coupled VIO Design on the Basis of the Fused Point and Line Features for Patrol Robot Navigation. Sensors 2019, 19, 2004.
30. Cheng, X.; Liu, X.; Li, Z.; Zhong, K.; Han, L.; He, W.; Gan, W.; Xi, G.; Wang, C.; Shi, Y. High-Accuracy Globally Consistent Surface Reconstruction Using Fringe Projection Profilometry. Sensors 2019, 19, 668.
31. Li, L.; Luo, W.; Wang, K. Lane Marking Detection and Reconstruction with Line-Scan Imaging Data. Sensors 2018, 18, 1635.
32. Zhang, W.; Guo, B.; Li, M.; Liao, X.; Li, W. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow. Sensors 2018, 18, 1214.
33. Cao, C.; Ouyang, Q. 2D Rotation-Angle Measurement Utilizing Least Iterative Region Segmentation. Sensors 2019, 19, 1634.
34. Chen, R.; Li, Z.; Zhong, K.; Liu, X.; Wu, Y.; Wang, C.; Shi, Y. A Stereo-Vision System for Measuring the Ram Speed of Steam Hammers in an Environment with a Large Field of View and Strong Vibrations. Sensors 2019, 19, 996.
35. Li, H.; Zhu, Q.; Huang, M.; Guo, Y.; Qin, J. Pose Estimation of Sweet Pepper through Symmetry Axis Detection. Sensors 2018, 18, 3083.
36. Yang, X.; Chen, X.; Xi, J. Comparative Analysis of Warp Function for Digital Image Correlation-Based Accurate Single-Shot 3D Shape Measurement. Sensors 2018, 18, 1208.
