Search Results (222)

Search Parameters:
Keywords = 6D virtual sensor

37 pages, 1218 KiB  
Review
FPGA-Based Sensors for Distributed Digital Manufacturing Systems: A State-of-the-Art Review
by Laraib Khan, Sriram Praneeth Isanaka and Frank Liou
Sensors 2024, 24(23), 7709; https://doi.org/10.3390/s24237709 (registering DOI) - 2 Dec 2024
Viewed by 220
Abstract
The combination of distributed digital factories (D2Fs) with sustainable practices has been proposed as a revolutionary technique in modern manufacturing. This review paper explores the convergence of D2Fs with innovative sensor technology, concentrating on the role of Field Programmable Gate Arrays (FPGAs) in promoting this paradigm. A D2F is defined as an integrated framework in which digital twins (DTs), sensors, laser additive manufacturing (laser-AM), and subtractive manufacturing (SM) work in synchronization. Here, DTs serve as virtual replicas of physical machines, allowing accurate monitoring and control of a given manufacturing process. These DTs are supplemented by sensors that provide near-real-time data to assure the effectiveness of the manufacturing processes. FPGAs, known for their re-programmability, reduced power usage, and enhanced processing compared to traditional processors, are increasingly being used to develop near-real-time monitoring systems within manufacturing networks. This review identifies recent advances in FPGA-based sensors and their use within D2F operations. The primary topics include eco-efficient data management and near-real-time monitoring, targeted at lowering waste and optimizing resources. The review also identifies future research directions in this field. By incorporating advanced sensors, DTs, laser-AM, and SM processes, it outlines a path toward more sustainable and resilient D2F operations. Full article
(This article belongs to the Special Issue Feature Review Papers in Optical Sensors)
18 pages, 12610 KiB  
Article
Automatic Registration of Panoramic Images and Point Clouds in Urban Large Scenes Based on Line Features
by Panke Zhang, Hao Ma, Liuzhao Wang, Ruofei Zhong, Mengbing Xu and Siyun Chen
Remote Sens. 2024, 16(23), 4450; https://doi.org/10.3390/rs16234450 - 27 Nov 2024
Viewed by 320
Abstract
As the combination of panoramic images and laser point clouds becomes more and more widely used as a technique, the accurate determination of external parameters has become essential. However, due to the relative position change of the sensor and the time synchronization error, the automatic and accurate matching of the panoramic image and the point cloud is very challenging. In order to solve this problem, this paper proposes an automatic and accurate registration method for panoramic images and point clouds of urban large scenes based on line features. Firstly, the multi-modal point cloud line feature extraction algorithm is used to extract the edge of the point cloud. Based on the point cloud intensity orthoimage (an orthogonal image based on the point cloud’s intensity values), the edge of the road markings is extracted, and the geometric feature edge is extracted by the 3D voxel method. Using the established virtual projection correspondence for the panoramic image, the panoramic image is projected onto the virtual plane for edge extraction. Secondly, the accurate matching relationship is constructed by using the feature constraint of the direction vector, and the edge features from both sensors are refined and aligned to realize the accurate calculation of the registration parameters. The experimental results show that the proposed method shows excellent registration results in challenging urban scenes. The average registration error is better than 3 pixels, and the root mean square error (RMSE) is less than 1.4 pixels. Compared with the mainstream methods, it has advantages and can promote the further research and application of panoramic images and laser point clouds. Full article
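The virtual-projection step described above (projecting the panoramic image onto a flat view before edge extraction) can be illustrated with a minimal sketch. The code below assumes an equirectangular panorama and a pinhole-style virtual view; the field of view, view size, and yaw values are illustrative and not taken from the paper.

```python
# Minimal sketch of re-projecting an equirectangular panorama onto a virtual
# perspective plane before edge extraction. Parameters are illustrative.
import numpy as np

def panorama_to_virtual_view(pano, fov_deg=90.0, size=512, yaw_deg=0.0):
    """Sample a pinhole-style virtual view from an equirectangular panorama."""
    h_pano, w_pano = pano.shape[:2]
    f = (size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)     # virtual focal length

    # Pixel grid of the virtual plane, centred on the optical axis.
    xs, ys = np.meshgrid(np.arange(size) - size / 2.0,
                         np.arange(size) - size / 2.0)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)      # unit ray directions

    # Rotate rays by the requested yaw so several views cover the panorama.
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                    [0, 1, 0],
                    [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ rot.T

    # Ray direction -> spherical coordinates -> panorama pixel.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])              # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))         # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (w_pano - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h_pano - 1)).astype(int)
    return pano[v, u]

# Toy usage on a fake panorama: four yaw angles give front/right/back/left views.
pano = np.random.rand(1024, 2048, 3)
views = [panorama_to_virtual_view(pano, yaw_deg=a) for a in (0, 90, 180, 270)]
print(views[0].shape)
```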
Show Figures (Figures 1–10: registration framework; virtual projection segmentation of the panoramic image; panoramic image and point cloud coordinate transformation; overview of the experimental areas in Beijing, Guangzhou, and Hong Kong; point cloud road marking edge detection; point cloud geometric edge; panoramic segmentation and edge line extraction; visualization of the registration process; panorama and point cloud registration effect; visualization effects of different methods)
13 pages, 6389 KiB  
Article
Finite Element Simulation and Piezoelectric Sensor Array-Driven Two-Stage Impact Location on Composite Structures
by Zhiling Wang and Yongteng Zhong
Processes 2024, 12(12), 2675; https://doi.org/10.3390/pr12122675 - 27 Nov 2024
Viewed by 332
Abstract
Impact monitoring is an effective approach to ensuring the safety of composite structures. The accuracy of current algorithms mostly depends on the number of physical sensors, which is not an economical way for large-area composite structures. In order to combine the advantages of sparse and dense arrays, a two-stage collaborative approach is proposed to locate the general areas and precise positions of impacts on composite structures. In Stage I, the steering vector information of the possible position is simulated according to the principle of array sensor signal processing, and a virtual array sparse feature map is constructed. When an actual impact arrives, a similarity algorithm is then used to find the suspected area in the map, which narrows down the search area to a large extent. In Stage II, a compensated two-dimensional multiple signal classification (2D-MUSIC) algorithm-based imaging method is applied to estimate the precise position of the impact in the suspected area. Finally, the accuracy and effectiveness of the proposed method are validated by numerical simulation and experiments on a carbon fiber composite panel. Both numerical and experimental results verify that the two-stage impact location method can effectively monitor composite structures with sufficient accuracy and efficiency. Full article
(This article belongs to the Special Issue Reliability and Engineering Applications (Volume II))
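Stage II of the method relies on a 2D-MUSIC style spatial spectrum. The following is a minimal, uncompensated sketch of that idea under simplifying assumptions (narrow-band snapshots, a known wave velocity, and a linear PZT array along the x-axis); the array geometry, frequency, and velocity in the toy usage are illustrative, not the paper's settings.

```python
# Compact sketch of a 2D-MUSIC spatial spectrum for impact imaging.
import numpy as np

def music_spectrum(snapshots, sensor_x, freq, velocity, grid_x, grid_y):
    """snapshots: (num_sensors, num_samples) complex narrow-band array data."""
    m = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, v = np.linalg.eigh(r)
    en = v[:, :m - 1]                                         # noise subspace (one source)

    spectrum = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            d = np.hypot(x - sensor_x, y)                     # candidate-to-sensor distances
            # Steering vector: phase delays relative to the first sensor.
            a = np.exp(-2j * np.pi * freq * (d - d[0]) / velocity)
            denom = a.conj() @ en @ en.conj().T @ a
            spectrum[iy, ix] = 1.0 / np.real(denom)
    return spectrum   # the peak marks the estimated impact position

# Toy usage with synthetic snapshots, purely to exercise the shapes involved.
rng = np.random.default_rng(0)
sensor_x = np.arange(8) * 0.01
snapshots = rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))
spec = music_spectrum(snapshots, sensor_x, freq=150e3, velocity=5400.0,
                      grid_x=np.linspace(0, 0.5, 40), grid_y=np.linspace(0, 0.5, 40))
print(spec.shape)
```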
Show Figures (Figures 1–10: observed array signal model; two-stage impact localization procedure; panel FEM model; array sensor signals and wave fronts of the S1 simulated impact; Stage I area localization and Stage II precise position of the S1 simulated impact; experiment setup; narrow-band signals and their envelopes; TOF comparison of simulation and experimental signals; area localization and precise position of experimental impact cases)
14 pages, 5550 KiB  
Article
Design of a Single-Edge Nibble Transmission Signal Simulation and Acquisition System for Power Machinery Virtual Testing
by He Li, Zhengyu Li, Yanbin Cai, Jiwei Zhang, Hongyu Liu, Wei Cui, Qingxin Wang, Shutao Zhang, Wenrui Cui, Feiyang Zhao and Wenbin Yu
Designs 2024, 8(6), 124; https://doi.org/10.3390/designs8060124 - 21 Nov 2024
Viewed by 557
Abstract
With the advancement of technology, the Single-Edge Nibble Transmission (SENT) protocol has become increasingly prevalent in automotive sensor applications, highlighting the need for a robust SENT signal simulation and acquisition system. This paper presents a real-time SENT signal acquisition system based on NI Field Programmable Gate Array (FPGA) technology. The system supports a range of message and data frame formats specified by the SAE J2716 SENT protocol, operates autonomously within a LabVIEW self-compiled environment, and is compatible with NI hardware-in-the-loop (HIL) systems for virtual electronic control units (ECU) calibration. This innovative, self-developed SENT system accommodates four message formats, seven data frame formats, and three pause pulse modes. Benchmarking tests were conducted by integrating this system with the dSPACE SCALEXIO HIL (located in Paderborn, Germany) system for SENT signal simulation and acquisition. The results confirm that the system effectively simulates and acquires SENT signals in accordance with SAE J2716 standards, establishing it as an invaluable asset in the electronification, intelligentization, informatization, and smart sensing of automotive and agricultural machinery. Full article
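For readers unfamiliar with the SAE J2716 framing the abstract refers to, the sketch below shows how a fast-channel message maps to pulse widths: a 56-tick sync/calibration pulse followed by nibble pulses of (12 + value) ticks. The tick time and nibble values are illustrative, and CRC generation is omitted; a real implementation should follow the J2716 specification.

```python
# Minimal sketch of SENT (SAE J2716) fast-channel frame timing.
def sent_frame_ticks(status, data_nibbles, crc):
    """Return the pulse widths (in ticks) of one SENT fast message."""
    for nib in [status, *data_nibbles, crc]:
        if not 0 <= nib <= 15:
            raise ValueError("nibbles must be 4-bit values")
    widths = [56]                                     # sync / calibration pulse
    widths += [12 + nib for nib in [status, *data_nibbles, crc]]
    return widths

def sent_frame_microseconds(widths, tick_us=3.0):
    """Convert tick counts to durations for a chosen tick time (e.g. 3 us)."""
    return [w * tick_us for w in widths]

# Example: status nibble 0, six data nibbles, CRC assumed precomputed elsewhere.
ticks = sent_frame_ticks(0x0, [0xA, 0x5, 0x3, 0xF, 0x0, 0x1], crc=0x7)
print(sent_frame_microseconds(ticks))
```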
Show Figures (Figures 1–14: complete SENT signal packet frame; short and enhanced serial message formats; system architecture; SENT signal simulation and acquisition systems; functional test system architecture; fast and serial message format results of simulation and acquisition testing; status pulse values in serial message format; accuracy and reliability tests of the simulation and acquisition systems)
17 pages, 13829 KiB  
Article
Advanced Virtual Fit Technology for Precision Pressure Application in Medical Compression Waistbands
by Seonyoung Youn, Sheng Zhan and Kavita Mathur
Appl. Sci. 2024, 14(22), 10697; https://doi.org/10.3390/app142210697 - 19 Nov 2024
Viewed by 438
Abstract
The design of medical-grade compression garments is essential for therapeutic efficacy, requiring precise pressure distribution on specific body areas. This study evaluates the effectiveness of virtual fit technology, focusing on CLO3D, in designing these garments. Simulated strain and pressure values from CLO3D were compared to experimental measurements, alongside the development of a CP model using CLO3D’s digitized stretch stiffness (Youn’s CP model). Using a 3D-scanned manikin, the mechanical behavior of eight knit fabrics, including composite structures, was assessed under strain of 5%, 10%, 15%, and 20%. The results showed that CLO3D’s built-in pressure simulation overestimated the pressure, especially in plaited fabrics such as SJP and INTP, with discrepancies of up to 10 kPa at strain levels above 15%. In contrast, the experimental pressure measurements using the Kikuhime and PPS sensors varied within 0.13 to 2.59 kPa. Youn’s CP model provided a closer fit to the experimental data, with deviations limited to within 1.9 kPa. This finding highlights the limitations of CLO3D for precision-required applications and underscores the need for more advanced, customized algorithms in virtual fit technology to ensure reliable compression garment design, particularly in medical contexts, where precise pressure control is critical for patient outcomes. Full article
(This article belongs to the Special Issue Innovative Functional Textiles and Their Applications)
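Compression-pressure (CP) models of this kind are typically rooted in the Laplace-law relation between fabric tension and limb radius; the sketch below shows that relation only, not Youn's CP model itself, and the stiffness and radius values are illustrative assumptions.

```python
# Back-of-the-envelope Laplace-law estimate of garment interface pressure.
def interface_pressure_kpa(stretch_stiffness_n_per_m, strain, radius_m):
    """
    stretch_stiffness_n_per_m: tension developed per unit strain per unit
                               fabric width (N/m per unit strain), assumed linear
    strain:                    applied strain, e.g. 0.15 for 15 %
    radius_m:                  local radius of the body part (m)
    """
    tension = stretch_stiffness_n_per_m * strain       # N per metre of fabric width
    pressure_pa = tension / radius_m                    # Laplace's law for a cylinder
    return pressure_pa / 1000.0                         # convert Pa -> kPa

# Example: a fabric developing 300 N/m per unit strain, stretched 15 %
# around a waist region of radius 0.14 m.
print(round(interface_pressure_kpa(300.0, 0.15, 0.14), 2), "kPa")
```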
Show Figures (Figures 1–8: experimental methodology overview; grid system setup, sample orientation, fabric loading, and simulated strain map; comparison of physical and virtual strain for four fabric samples; experimental (Kikuhime, PPS) versus CLO3D-simulated pressure; experimental versus Youn's CP model pressure; compression band pattern design and simulation setup; pressure comparisons with and without electrode application for the CLO3D built-in feature and Youn's CP model)
26 pages, 6588 KiB  
Article
A Coverage Hole Recovery Method for 3D UWSNs Based on Virtual Force and Energy Balance
by Luoheng Yan and Zhongmin Huangfu
Electronics 2024, 13(22), 4446; https://doi.org/10.3390/electronics13224446 - 13 Nov 2024
Viewed by 354
Abstract
Underwater wireless sensor networks (UWSNs) have been applied in many fields. However, coverage holes are often caused by the complex underwater environment. Coverage holes seriously affect UWSNs’ performance and quality of service; thus, their recovery is crucial for 3D UWSNs. Most current recovery algorithms demand hole detection, require too many additional mobile nodes, incur high communication and computing costs, and achieve poor coverage and energy balance; therefore, these methods are not suitable for repairing UWSN holes. To enhance the performance of hole recovery, a coverage hole recovery method for 3D UWSNs in complex underwater environments based on virtual force guidance and energy balance (CHRVE) is proposed. The proposed method closely couples node energy with complex environmental factors: a series of multi-dimensional virtual force models is established based on the forces between nodes, area boundaries, zero-energy holes, low-energy coverage holes, underwater terrain, and obstacles. The direction and step size of mobile repairing node movement are guided by distributed computation of the virtual forces, and the nodes are driven toward the target location by AUVs or other carrier devices, yielding positions that improve the coverage rate and node force balance. Simulation experiments show good adaptability and robustness to complex underwater terrain and different environments. The algorithm does not require precise coverage hole boundary detection and significantly balances the network energy distribution, thereby reducing the frequency of coverage hole emergence and network maintenance costs. Full article
(This article belongs to the Section Networks)
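The virtual-force idea behind the method can be sketched in a few lines: neighbouring nodes repel each other below a distance threshold, uncovered hole grid points attract the repair node, and the node takes a bounded step along the resultant force. The coefficients and thresholds below are illustrative, not the CHRVE parameters.

```python
# Stripped-down sketch of virtual-force-guided node movement in 3D.
import numpy as np

def virtual_force_step(node, neighbours, hole_points,
                       d_th=20.0, k_rep=50.0, k_att=1.0, max_step=2.0):
    force = np.zeros(3)
    for nb in neighbours:                      # node-to-node repulsion
        diff = node - nb
        dist = np.linalg.norm(diff)
        if 0 < dist < d_th:
            force += k_rep * (d_th - dist) / dist * diff / dist
    for q in hole_points:                      # attraction toward coverage-hole points
        diff = q - node
        dist = np.linalg.norm(diff)
        if dist > 0:
            force += k_att * diff / dist
    norm = np.linalg.norm(force)
    if norm == 0:
        return node                            # force balance reached
    step = min(max_step, norm)                 # bounded step size
    return node + step * force / norm

node = np.array([10.0, 10.0, -5.0])
neighbours = [np.array([12.0, 10.0, -5.0])]
holes = [np.array([30.0, 25.0, -8.0])]
print(virtual_force_step(node, neighbours, holes))
```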
Show Figures (Figures 1–18: virtual repulsive and attractive forces; boundary repulsion forces; attractive force of hole grid points; force at the bottom or obstacles; coverage hole recovery with CHRVE; coverage rate versus iterations; recovery of multiple non-closed and central closure coverage holes; coverage holes formed by the remaining 38 nodes; coverage rate and energy density variance analyses; experiments with complex terrain and obstacles; movement energy consumption; comparisons of coverage rate and energy density variance for different algorithms)
15 pages, 7931 KiB  
Article
Color Models in the Process of 3D Digitization of an Artwork for Presentation in a VR Environment of an Art Gallery
by Irena Drofova and Milan Adamek
Electronics 2024, 13(22), 4431; https://doi.org/10.3390/electronics13224431 - 12 Nov 2024
Viewed by 637
Abstract
This study deals with the color reproduction of a work of art to digitize it into a 3D realistic model. The experiment aims to digitize a work of art for application in a virtual reality environment concerning faithful color reproduction. Photogrammetry and scanning with a LiDAR sensor are used to compare the methods and work with colors during the reconstruction of the 3D model. An innovative tablet with a camera and LiDAR sensor is used for both methods. At the same time, current findings from the field of color vision and colorimetry are applied to 3D reconstruction. The experiment focuses on working with the RGB and L*a*b* color models and, simultaneously, on the sRGB, CIE XYZ, and Rec.2020(HDR) color spaces for transforming colors into a virtual environment. For this purpose, the color is defined in the Hex Color Value format. This experiment is a starting point for further research on color reproduction in the digital environment. This study represents a partial contribution to the much-discussed area of forgeries of works of art in current trends in forensics and forgery. Full article
(This article belongs to the Section Electronic Multimedia)
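The colour-model chain used in the study (Hex Color Value, RGB, CIE XYZ, L*a*b*) follows standard colorimetric transforms. A minimal sketch using the usual sRGB/D65 constants is shown below; it is a generic conversion, not the authors' code.

```python
# Hex colour value -> sRGB -> linear RGB -> CIE XYZ (D65) -> CIE L*a*b*.
def hex_to_lab(hex_value):
    r, g, b = (int(hex_value.lstrip("#")[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

    def to_linear(c):                         # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = map(to_linear, (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl       # linear RGB -> XYZ
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    xn, yn, zn = 0.95047, 1.0, 1.08883                # D65 reference white

    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

print(hex_to_lab("#758605"))   # example Hex Color Value
```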
Show Figures (Figures 1–11: digitization of an art object; principle of the Structure from Motion (SfM) method; 3D model creation using SfM photogrammetry and using a LiDAR sensor; generated dense clouds for both methods; RGB color model and sRGB gamut; point segmentation by color #758605 for SfM and LiDAR; CIE XYZ 1931 color space, L*a*b* model, and Rec.2020 gamut; visual comparison of reproduction quality; realistic 3D reconstruction of the artwork)
21 pages, 2544 KiB  
Article
An Energy-Efficient Dynamic Feedback Image Signal Processor for Three-Dimensional Time-of-Flight Sensors
by Yongsoo Kim, Jaehyeon So, Chanwook Hwang, Wencan Cheng and Jong Hwan Ko
Sensors 2024, 24(21), 6918; https://doi.org/10.3390/s24216918 - 28 Oct 2024
Viewed by 661
Abstract
With the recent prominence of artificial intelligence (AI) technology, various research outcomes and applications in the field of image recognition and processing utilizing AI have been continuously emerging. In particular, the domain of object recognition using 3D time-of-flight (ToF) sensors has been actively researched, often in conjunction with augmented reality (AR) and virtual reality (VR). However, for more precise analysis, high-quality images are required, necessitating significantly more parameters and computation. These requirements can pose challenges, especially in developing AR and VR technologies for low-power portable devices. Therefore, we propose a dynamic feedback configuration image signal processor (ISP) for 3D ToF sensors. The ISP achieves both accuracy and energy efficiency through dynamic feedback. The proposed ISP employs dynamic area extraction to perform computations and post-processing only for pixels within the valid area used by the application in each frame. Additionally, it uses dynamic resolution to determine and apply the appropriate resolution for each frame. This approach enhances energy efficiency by avoiding the processing of all sensor data while maintaining or surpassing accuracy levels. Furthermore, these functionalities are designed for hardware-efficient implementation, improving processing speed and minimizing power consumption. The results show a maximum performance of 178 fps and a high energy efficiency of up to 123.15 fps/W. When connected to the hand pose estimation (HPE) accelerator, it demonstrates an average mean squared error (MSE) of 10.03 mm, surpassing the baseline ISP value of 20.25 mm. Therefore, the proposed ISP can be effectively utilized in low-power, small form-factor devices. Full article
(This article belongs to the Special Issue Vision Sensors for Object Detection and Tracking)
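A high-level sketch of the two mechanisms named in the abstract, dynamic area extraction and dynamic resolution, is given below: keep only the bounding box of valid depth pixels and pick a per-frame downsampling factor from its size. The depth thresholds and resolution ladder are illustrative assumptions, not the hardware's actual parameters.

```python
# Sketch of dynamic area extraction and dynamic resolution on a depth frame.
import numpy as np

def dynamic_area(depth, near=100, far=600):
    """Return the bounding box (top, bottom, left, right) of valid pixels."""
    valid = (depth > near) & (depth < far)
    if not valid.any():
        return None
    rows = np.where(valid.any(axis=1))[0]
    cols = np.where(valid.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

def dynamic_resolution(depth, box, ladder=(1, 2, 4)):
    """Crop to the valid area, then downsample more aggressively for larger boxes."""
    top, bottom, left, right = box
    crop = depth[top:bottom, left:right]
    area_ratio = crop.size / depth.size
    stride = ladder[0] if area_ratio < 0.1 else ladder[1] if area_ratio < 0.4 else ladder[2]
    return crop[::stride, ::stride]

frame = np.random.randint(0, 1000, size=(480, 640))   # fake ToF depth frame (mm)
box = dynamic_area(frame)
if box is not None:
    print(dynamic_resolution(frame, box).shape)
```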
Show Figures (Figures 1–9: operation comparison between a conventional ISP and the proposed dynamic feedback ISP; pixel usage comparison; dynamic feedback flow chart; dynamic area extraction technique; bounding box decision; resolution decision; overall hardware block diagram; dynamic pixel and depth controllers; sequential versus pipelined operation flow)
19 pages, 4717 KiB  
Article
Suitability of UR5 Robot for Robotic 3D Printing
by Martin Pollák, Marek Kočiško, Sorin D. Grozav, Vasile Ceclan and Alexandru D. Bogdan
Appl. Sci. 2024, 14(21), 9845; https://doi.org/10.3390/app14219845 - 28 Oct 2024
Viewed by 538
Abstract
The present paper describes the measurement of the drift of unidirectional pose accuracy, repeatability, and static compliance of a collaborative robot employing a measurement methodology that relies on the description of a virtual ISO cube placed in the robot’s workspace. The measurements aimed to investigate and assess the suitability of the UR5 six-axis collaborative robot for its application in robotic 3D printing. An experimental laboratory measurement workstation was constructed to perform the measurements, and control measurements were performed. The measurements involved describing the TCP point of the robot tool at five measurement points located in a virtual ISO cube during a minimum of 30 repeated measurement cycles. A camera and six linear incremental sensors with assessment units were used for the measurements. The measurements were performed in compliance with the regulations of STN ISO 9283 standard for this type of measurement. As a result of the measurements, the technical specifications of the drift and static compliance of the controlled robotic arm were verified, and the results were compared with the values specified by the manufacturer. Following the measurements and assessment of the results, it was possible to assess the suitability of the used UR5 robotic arm for its application in robotic 3D printing and to propose possible recommendations for the calibration of the robot and the process settings of the printing system for the production of objects using FDM technology. Full article
(This article belongs to the Section Additive Manufacturing Technologies)
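As background for the ISO 9283-style quantities being measured, the sketch below computes pose accuracy (distance from the barycentre of attained positions to the commanded pose) and repeatability (mean plus three standard deviations of the distances to that barycentre); the drift variants dAP and dRP track how these evolve over repeated cycles. The sample positions are made up, and the standard calls for at least 30 measurement cycles rather than the three shown.

```python
# Basic ISO 9283-style pose accuracy and repeatability from repeated measurements.
import numpy as np

def pose_accuracy_repeatability(attained, commanded):
    attained = np.asarray(attained, dtype=float)       # (n_cycles, 3) attained positions
    barycentre = attained.mean(axis=0)
    ap = np.linalg.norm(barycentre - np.asarray(commanded, dtype=float))
    dists = np.linalg.norm(attained - barycentre, axis=1)
    rp = dists.mean() + 3.0 * dists.std(ddof=1)        # mean + 3*std of radial spread
    return ap, rp

measured = [[100.02, 200.01, 299.98],                  # illustrative TCP positions (mm)
            [100.05, 199.97, 300.03],
            [ 99.98, 200.04, 300.01]]
print(pose_accuracy_repeatability(measured, commanded=[100.0, 200.0, 300.0]))
```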
Show Figures (Figures 1–12: common process errors in robotic 3D printing; robotic 3D printing with the UR5; measurement points on the virtual ISO cube; drift measurement scheme; UR5 arm with the measuring cube mounted; drift of unidirectional pose accuracy (dAP) and repeatability (dRP) and their components; measurements at point P1; schematic and setup of the static compliance measurement)
23 pages, 10682 KiB  
Article
VFLD: Voxelized Fractal Local Descriptor
by Francisco Gomez-Donoso, Felix Escalona, Florian Dargère and Miguel Cazorla
Appl. Sci. 2024, 14(20), 9414; https://doi.org/10.3390/app14209414 - 15 Oct 2024
Viewed by 532
Abstract
A variety of methods for 3D object recognition and registration based on a deep learning pipeline have recently emerged. Nonetheless, these methods require large amounts of data that are not easy to obtain, sometimes rendering them virtually useless in real-life scenarios due to a lack of generalization capabilities. To counter this, we propose a novel local descriptor that takes advantage of the fractal dimension. For each 3D point, we create a descriptor by computing the fractal dimension of the neighbors at different radii. Our method has many benefits, such as being agnostic to the sensor of choice and robust to noise up to a level, and having few parameters to tinker with. Furthermore, it requires no training and does not rely on semantic information. We test our descriptor using well-known datasets and it largely outperforms Fast Point Feature Histogram, which is the state-of-the-art descriptor for 3D data. We also apply our descriptor to a registration pipeline and achieve accurate three-dimensional representations of the scenes, which are captured with a commercial sensor. Full article
(This article belongs to the Special Issue Current Advances in 3D Scene Classification and Object Recognition)
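The descriptor is built on the box-counting fractal dimension, which can be sketched as follows: voxelise the neighbourhood at several resolutions, count occupied boxes, and fit the slope of log(count) against log(1/box size). The iteration count and synthetic points below are illustrative.

```python
# Box-counting fractal dimension of a 3D point set.
import numpy as np

def fractal_dimension(points, n_iters=5):
    points = np.asarray(points, dtype=float)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extent = (maxs - mins).max() + 1e-9                 # cube-shaped bounding box
    sizes, counts = [], []
    for k in range(1, n_iters + 1):
        divisions = 2 ** k                              # boxes per axis at this iteration
        box = extent / divisions
        idx = np.floor((points - mins) / box).astype(int)
        idx = np.clip(idx, 0, divisions - 1)
        counts.append(len({tuple(i) for i in idx}))     # number of occupied boxes
        sizes.append(box)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope                                        # the fractal dimension

# A VFLD-style descriptor would stack this value computed over neighbourhoods
# at several search radii around each point.
rng = np.random.default_rng(0)
print(fractal_dimension(rng.random((500, 3))))
```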
Show Figures (Figures 1–22: occupied voxel boxes of a point cloud; effect of the nIters parameter in box-counting; log–log fractal dimension plot; VFLD generation diagram; descriptor computation steps; samples from the ModelNet10, Simple Figures, ScanNet, and ViDRILO datasets; accuracy and precision–recall curves for different search radii, increments, box-counting iterations, radius counts, sampling densities, and noise levels; comparison with state-of-the-art methods; ScanNet scene splitting and downsampling; error-rate histograms of VFLD versus FPFH; registration results of two environments)
15 pages, 807 KiB  
Article
PointCloud-At: Point Cloud Convolutional Neural Networks with Attention for 3D Data Processing
by Saidu Umar and Aboozar Taherkhani
Sensors 2024, 24(19), 6446; https://doi.org/10.3390/s24196446 - 5 Oct 2024
Viewed by 1062
Abstract
The rapid growth in technologies for 3D sensors has made point cloud data increasingly available in different applications such as autonomous driving, robotics, and virtual and augmented reality. This raises a growing need for deep learning methods to process the data. Point clouds are difficult to use directly as inputs in several deep learning techniques, a difficulty that arises from their unstructured and unordered nature, so machine learning models built for images or videos cannot be used directly on point cloud data. Although research on point clouds has gained high attention and different methods have been developed over the past decade, very few works operate directly on point cloud data, and most of them convert the point clouds into 2D images or voxels through pre-processing that causes information loss. Methods that work directly on point clouds are at an early stage, which affects the performance and accuracy of the models. Advanced techniques from classical convolutional neural networks, such as the attention mechanism, need to be transferred to the methods working directly with point clouds. In this research, an attention mechanism is proposed for deep convolutional neural networks that process point clouds directly. The attention module is based on specific pooling operations designed to be applied directly to point clouds to extract vital information from them. Segmentation of the ShapeNet dataset was performed to evaluate the method. The mean intersection over union (mIoU) score of the proposed framework increased after applying the attention method compared to a base state-of-the-art framework without the attention mechanism. Full article
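A toy sketch of pooling-based channel attention on per-point features is shown below: pool over the point dimension with max and average, pass the pooled vectors through a small shared projection, apply a sigmoid, and rescale the channels. The shapes and random weights are illustrative, and the paper builds its module around ConvPoint layers rather than the dense projection used here.

```python
# Toy channel attention over per-point features using pooling + sigmoid gating.
import numpy as np

def channel_attention(features, w1, w2):
    """features: (num_points, channels) per-point feature matrix."""
    max_pool = features.max(axis=0)                  # (channels,)
    avg_pool = features.mean(axis=0)                 # (channels,)

    def project(v):                                  # shared two-layer projection
        return np.maximum(v @ w1, 0.0) @ w2          # ReLU in between

    logits = project(max_pool) + project(avg_pool)
    weights = 1.0 / (1.0 + np.exp(-logits))          # sigmoid channel weights
    return features * weights                        # reweighted per-point features

rng = np.random.default_rng(0)
pts, ch, hidden = 1024, 96, 12
feats = rng.standard_normal((pts, ch))
w1 = rng.standard_normal((ch, hidden)) * 0.1
w2 = rng.standard_normal((hidden, ch)) * 0.1
print(channel_attention(feats, w1, w2).shape)
```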
Show Figures (Figures 1–2: structure of the proposed channel attention mechanism for the point cloud convolutional layer; semantic segmentation network)
15 pages, 5925 KiB  
Article
Inertial Motion Capture-Driven Digital Human for Ergonomic Validation: A Case Study of Core Drilling
by Quan Zhao, Tao Lu, Menglun Tao, Siyi Cheng and Guojun Wen
Sensors 2024, 24(18), 5962; https://doi.org/10.3390/s24185962 - 13 Sep 2024
Viewed by 961
Abstract
In the evolving realm of ergonomics, there is a growing demand for enhanced comfortability, visibility, and accessibility in the operation of engineering machinery. This study introduces an innovative approach to assess the ergonomics of a driller’s cabin by utilizing a digital human. Through the utilization of inertial motion capture sensors, the method enables the operation of a virtual driller animated by real human movements, thereby producing more precise and realistic human–machine interaction data. Additionally, this study develops a simplified model for the human upper limbs, facilitating the calculation of joint forces and torques. An ergonomic analysis platform, encompassing a virtual driller’s cabin and a digital human model, is constructed using Unity 3D. This platform enables the quantitative evaluation of comfortability, visibility, and accessibility. Its versatility extends beyond the current scope, offering substantial support for product development and enhancement. Full article
(This article belongs to the Special Issue Advances in Human Locomotion Using Sensor-Based Approaches)
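The joint-force and torque calculation for the simplified upper-limb model can be illustrated, in a very reduced static form, as the sum of segment weights times their horizontal moment arms; the segment masses, lever arms, and joint angle below are illustrative values, not the paper's model parameters.

```python
# Very reduced static estimate of gravitational torque at an upper-limb joint.
import math

def static_joint_torque(segments, limb_angle_deg):
    """
    segments: list of (mass_kg, distance_from_joint_m) along the limb,
              e.g. forearm centre of mass, hand, and a hand-held load.
    limb_angle_deg: elevation from vertical (0 = hanging, 90 = horizontal).
    """
    g = 9.81
    lever_scale = math.sin(math.radians(limb_angle_deg))     # horizontal moment arm factor
    return sum(m * g * d * lever_scale for m, d in segments)  # torque in N*m

forearm_hand_load = [(1.5, 0.15), (0.6, 0.35), (2.0, 0.40)]   # illustrative segments
print(round(static_joint_torque(forearm_hand_load, 75.0), 2), "N*m at the elbow")
```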
Show Figures (Figures 1–14: virtual human model; motion capture sensor and wear effect; parent–child relationships of the body skeleton and upper limb bones; bone weights; joint angle and torque analysis diagrams; 3D structure of the core drilling driller's cabin; test platform; joint angle and torque curves for rocker, knob, and button operations; analyses of visible and reachable domains)
16 pages, 8376 KiB  
Article
Virtual Tours as Effective Complement to Building Information Models in Computer-Aided Facility Management Using Internet of Things
by Sergi Aguacil Moreno, Matthias Loup, Morgane Lebre, Laurent Deschamps, Jean-Philippe Bacher and Sebastian Duque Mahecha
Appl. Sci. 2024, 14(17), 7998; https://doi.org/10.3390/app14177998 - 7 Sep 2024
Viewed by 1014
Abstract
This study investigates the integration of Building Information Models (BIMs) and Virtual Tour (VT) environments in the Architecture, Engineering and Construction (AEC) industry, focusing on Computer-Aided Facility Management (CAFM), Computerized Maintenance Management Systems (CMMSs), and data Life-Cycle Assessment (LCA). The interconnected nature of tasks throughout a building’s life cycle increasingly demands a seamless integration of real-time monitoring, 3D models, and building data technologies. While there are numerous examples of effective links between IoT and BIMs, as well as IoT and VTs, a research gap exists concerning VT-BIM integration. This article presents a technical solution that connects BIMs and IoT data using VTs to enhance workflow efficiency and information transfer. The VT is developed upon a pilot based on the Controlled Environments for Living Lab Studies (CELLS), a unique facility designed for flexible monitoring and remote-control processes that incorporate BIMs and IoT technologies. The findings offer valuable insights into the potential of VTs to complement and connect to BIMs from a life-cycle perspective, improving the usability of digital twins for beginner users and contributing to the advancement of the AEC and CAFM industries. Our technical solution helps complete the connectivity of BIMs-VT-IoT, providing an intuitive interface (VT) for rapid data visualisation and access to dashboards, models and building databases. The practical field of application is facility management, enhancing monitoring and asset management tasks. This includes (a) sensor data monitoring, (b) remote control of connected equipment, and (c) centralised access to asset-space information bridging BIM and visual (photographic/video) data. Full article
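The kind of glue layer the VT provides (a clickable hotspot resolving to an IoT datapoint, a BIM element, and a dashboard) can be sketched as a simple lookup; all identifiers, endpoints, and readings below are hypothetical placeholders, not the CELLS implementation.

```python
# Schematic mapping from a VT hotspot to IoT data and BIM references.
HOTSPOTS = {
    "room2_temp_tag": {
        "sensor_id": "cells/room2/temperature",       # IoT time-series key (assumed)
        "bim_guid": "2O2Fr$t4X7Zf8NOew3FNr2",          # BIM element GUID (placeholder)
        "dashboard_url": "https://example.org/dashboards/room2",
    },
}

def resolve_hotspot(hotspot_id, latest_readings):
    """Return what a VT click would surface: live value plus BIM/dashboard links."""
    meta = HOTSPOTS[hotspot_id]
    return {
        "value": latest_readings.get(meta["sensor_id"]),
        "bim_guid": meta["bim_guid"],
        "dashboard_url": meta["dashboard_url"],
    }

print(resolve_hotspot("room2_temp_tag", {"cells/room2/temperature": 21.4}))
```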
Show Figures (Figures 1–8: added value of BIMs and VTs over the building life cycle; AEC and CAFM entry points to the information-control loop; VT construction workflow; historical temperature values accessed from the VT; 'Room 2' in the VT and in the BIM CDE platform; VT interface with the site orthophoto; VT connected to live data and BIMs)
21 pages, 10469 KiB  
Article
RGB Color Model: Effect of Color Change on a User in a VR Art Gallery Using Polygraph
by Irena Drofova, Paul Richard, Martin Fajkus, Pavel Valasek, Stanislav Sehnalek and Milan Adamek
Sensors 2024, 24(15), 4926; https://doi.org/10.3390/s24154926 - 30 Jul 2024
Cited by 1 | Viewed by 2391
Abstract
This paper presents computer and color vision research focusing on human color perception in VR environments. A VR art gallery with digital twins of original artworks is created for this experiment. In this research, the field of colorimetry and the application of the L*a*b* and RGB color models are applied. The inter-relationships of the two color models are applied to create a color modification of the VR art gallery environment using C# Script procedures. This color-edited VR environment works with a smooth change in color tone in a given time interval. At the same time, a sudden change in the color of the RGB environment is defined in this interval. This experiment aims to record a user’s reaction embedded in a VR environment and the effect of color changes on human perception in a VR environment. This research uses lie detector sensors that record the physiological changes of the user embedded in VR. Five sensors are used to record the signal. An experiment on the influence of the user’s color perception in a VR environment using lie detector sensors has never been conducted. This research defines the basic methodology for analyzing and evaluating the recorded signals from the lie detector. The presented text thus provides a basis for further research in the field of colors and human color vision in a VR environment and lays an objective basis for use in many scientific and commercial areas. Full article
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
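The two colour manipulations described (a smooth tone change over a time interval and a sudden switch to an absolute RGB process colour) amount to simple interpolation and switching logic; the sketch below uses illustrative timings and colours, and is not the C# script used in the study.

```python
# Smooth RGB tone interpolation with a sudden switch to an absolute colour.
def smooth_tone(start_rgb, end_rgb, t, duration):
    """Linearly interpolate an RGB tone at time t within [0, duration] seconds."""
    k = max(0.0, min(1.0, t / duration))
    return tuple(round(s + (e - s) * k) for s, e in zip(start_rgb, end_rgb))

def background_color(t, switch_at=45.0):
    if t >= switch_at:                        # sudden change to an absolute process colour
        return (255, 0, 0)
    return smooth_tone((255, 255, 255), (0, 0, 255), t, switch_at)

for t in (0.0, 15.0, 30.0, 45.0):
    print(t, background_color(t))
```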
Show Figures (Figures 1–13: RGB/sRGB and Rec.2020/L*a*b* gamuts in the CIE 1931 chromaticity diagram; digitization of the artwork into a digital twin; the virtual art gallery environment; initial white background and smooth color tone change; static absolute RGB process colors; user connected to the lie detector sensors; recorded polygraph signals, including the EDA, respiration, plethysmograph, and activity sensor traces)
23 pages, 3780 KiB  
Article
An Efficient Approach for Localizing Sensor Nodes in 2D Wireless Sensor Networks Using Whale Optimization-Based Naked Mole Rat Algorithm
by Goldendeep Kaur, Kiran Jyoti, Samer Shorman, Anas Ratib Alsoud and Rohit Salgotra
Mathematics 2024, 12(15), 2315; https://doi.org/10.3390/math12152315 - 24 Jul 2024
Viewed by 605
Abstract
Localization has emerged as an important and critical component of research in Wireless Sensor Networks (WSNs). A WSN is a network of numerous sensors distributed across broad areas to conduct numerous activities, including sensing data and transferring it to various devices. Most applications, such as animal tracking, object monitoring, and countless resources deployed in indoor as well as outdoor locations, need to identify the position of an occurring incident. The primary objective of localization is to identify the locations of sensor nodes installed in a network so that the location of a particular event can be traced. Different optimization approaches for solving the localization challenge in WSNs and assigning suitable positions to undiscovered sensor nodes are examined in this work. This research localizes sensor nodes on a 2D platform using a single static anchor node and virtual anchors to detect dynamic target nodes: six virtual anchors are projected hexagonally at different orientations, and the estimated target node coordinates are then optimized with the Whale Optimization-based Naked Mole Rat Algorithm (WONMRA). Moreover, the effectiveness of a variety of optimization strategies employed for localization is compared to the WONMRA strategy in terms of localization error and the number of nodes localized; the average localization error with WONMRA is 0.1999, which is lower than that of all the other optimization techniques. Full article
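The set-up described (one static anchor plus six hexagonally projected virtual anchors, with the estimated coordinates refined by an optimizer) can be sketched with a squared range-error objective; a naive grid search stands in for WONMRA below, and the anchor spacing, noise level, and node position are illustrative.

```python
# Hexagonal virtual anchors and the range-error objective an optimiser would minimise.
import numpy as np

def hexagonal_virtual_anchors(anchor, radius):
    angles = np.radians(np.arange(0, 360, 60))
    return anchor + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def range_error(candidate, anchors, measured_dists):
    est = np.linalg.norm(anchors - candidate, axis=1)
    return np.mean((est - measured_dists) ** 2)        # fitness for the optimiser

rng = np.random.default_rng(1)
static_anchor = np.array([50.0, 50.0])
anchors = np.vstack([static_anchor, hexagonal_virtual_anchors(static_anchor, 20.0)])
true_node = np.array([63.0, 41.0])
measured = np.linalg.norm(anchors - true_node, axis=1) + rng.normal(0, 0.5, len(anchors))

# A naive grid search stands in for WONMRA here, purely to show the objective.
grid = np.mgrid[0:100:1.0, 0:100:1.0].reshape(2, -1).T
best = grid[np.argmin([range_error(p, anchors, measured) for p in grid])]
print("estimated node position:", best)
```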
Show Figures (Figures 1–13: localization algorithms; process of localization; sensor field representation; centroid calculation; WONMRA implementation around the centroid; error estimation using WONMRA; node localization using FA, PSO, BBO, HPSO, NMRA, WOA, and WONMRA)