Search Results (2,299)

Search Parameters:
Keywords = depth sensor

20 pages, 6981 KiB  
Article
Spatial, Vertical, and Temporal Soil Water Content Variability Affected by Low-Pressure Drip Irrigation in Sandy Loam Soil: A Soil Bin Experimental Study
by Mohammod Ali, Md Asrakul Haque, Md Razob Ali, Md Aminur Rahman, Hongbin Jin, Young Yoon Jang and Sun-Ok Chung
Agronomy 2024, 14(12), 2848; https://doi.org/10.3390/agronomy14122848 - 28 Nov 2024
Viewed by 190
Abstract
Drip irrigation pressure is considered a key parameter for controlling and designing drip irrigation systems in sandy soils. Understanding soil water content (SWC) movement under varying pressures can enhance water use efficiency and support sustainable irrigation strategies for crops in arid regions. The objective of this study was to investigate the effects of irrigation pressure on the spatial, vertical, and temporal variability of SWC in sandy loam soil under surface drip irrigation. Experiments were carried out in a soil bin located in a greenhouse. SWC sensors were placed at depths of 10, 20, 30, 40, and 50 cm to monitor SWC variability under low, medium, and high drip irrigation pressures (25, 50, and 75 kPa) at a constant emitter flow rate of 3 L/h. A pressure controller was used to regulate drip irrigation pressure, while microcontrollers communicated with the SWC sensors, collected experimental data, and automatically recorded the outputs. At low irrigation pressure, water content began to increase at 0.53 h and saturated at 3.5 h; both values were significantly lower at medium and high pressures. The results indicated that lower pressures led to significant variability in water movement at shallow depths (10 to 30 cm), becoming uniform at deeper layers but requiring longer irrigation times. Comparatively higher pressures showed statistically uniform water distribution and retention throughout the soil profile with shorter irrigation times. The variation in water distribution, which results in non-uniform coverage across the irrigated area, demonstrates how pressure changes affect the emitter flow rate. The results provide soil water maps that can be adjusted with irrigation pressure to maximize water use efficiency in sandy loam soils, aiding farmers in better irrigation scheduling for different crops using surface drip irrigation in arid environments.
(This article belongs to the Section Water Use and Irrigation)
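As a rough illustration of the per-depth, per-pressure summary described in this abstract (time to first SWC rise, time to near-saturation, and a simple variability measure), the sketch below assumes a logged table with hypothetical columns `time_h`, `depth_cm`, `pressure`, and `swc`; it is not the authors' processing code.

```python
# Minimal sketch (not the authors' code): summarising logged SWC readings per
# depth and pressure level. Column names are hypothetical placeholders for
# whatever the DAQ actually records.
import pandas as pd

def summarise_swc(log: pd.DataFrame, saturation_frac: float = 0.95) -> pd.DataFrame:
    """Per (pressure, depth): time of first SWC increase, time to near-saturation,
    and the coefficient of variation as a simple variability measure."""
    rows = []
    for (pressure, depth), g in log.groupby(["pressure", "depth_cm"]):
        g = g.sort_values("time_h")
        baseline = g["swc"].iloc[0]
        peak = g["swc"].max()
        rising = g[g["swc"] > baseline + 0.01]             # first detectable rise (arbitrary threshold)
        saturated = g[g["swc"] >= saturation_frac * peak]   # near-saturation
        rows.append({
            "pressure": pressure,
            "depth_cm": depth,
            "t_rise_h": rising["time_h"].iloc[0] if not rising.empty else None,
            "t_sat_h": saturated["time_h"].iloc[0] if not saturated.empty else None,
            "cv_percent": 100 * g["swc"].std() / g["swc"].mean(),
        })
    return pd.DataFrame(rows)
```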
Figures:
Figure 1. A schematic diagram of an automatic drip irrigation system for monitoring sandy SWC inside the soil bin. DAQ: data acquisition; R: Raspberry Pi with the display screen; W: water pump; C: pump power regulator; F: water flow control; D: display (irrigation pressure, emitter flow rate, temperature, and humidity data); A–C: microcontroller; and P1–P3: emitter positions 1, 2, and 3.
Figure 2. The diagram illustrates the experimental soil bin (A), the measurement for sensor placement (B), the data acquisition system (C), and the positioning of the sensors at a distance of 15 cm (D). The system includes a DAQ box (1), a USB cable connector (2), an Arduino Mega 2560 (3), a voltage controller (4), a circuit breaker (5), a Raspberry Pi display screen (6), a USB hub (7), sensor connectors (8), an electric port and charger (9), and a Raspberry Pi power supply (10). The red arrows indicate the positions of the emitters.
Figure 3. Schematic diagram of sensor interfacing with the Arduino and Raspberry Pi (A). PS: power source; PC: power converter; CB: circuit board; USB: universal serial bus; A1–A3: USB ports for connecting the Arduino USB hub; SDL: simple direct media layer; SCL: serial clock; VCC: voltage common collector; and GND: ground. Data collection and remote monitoring system (B).
Figure 4. Box plot illustrating the distribution of SWC at different pressures (low, medium, and high) in a sandy soil bin. Each box represents the interquartile range, with the median marked by a horizontal line.
Figure 5. Surface maps of the water distribution of the SWC at the low water pressure level in sandy loam soil after the 6-h irrigation. The variability in the spatial distribution occurs at soil depths of 10 cm (A), 20 cm (B), 30 cm (C), 40 cm (D), and 50 cm (E).
Figure 6. Surface maps of the soil water distribution at the medium water pressure level in sandy loam soil after the 6-h irrigation. The variability in the spatial distribution occurs at soil depths of 10 cm (A), 20 cm (B), 30 cm (C), 40 cm (D), and 50 cm (E).
Figure 7. Surface maps of the soil water distribution at the high water pressure level in sandy loam soil after the 6-h irrigation. The variability in the spatial distribution occurs at soil depths of 10 cm (A), 20 cm (B), 30 cm (C), 40 cm (D), and 50 cm (E).
Figure 8. Vertical distribution of the SWC under different water pressures at lower (A), medium (B), and higher (C) pressure levels in sandy loam soil after the 6-h irrigation.
Figure 9. Temporal distribution of the average SWC under different water pressures at lower (A), medium (B), and higher (C) levels in sandy loam soil.
Figure 10. Time vs. depth for the SWC distribution at low (A), medium (B), and high (C) pressure levels.
20 pages, 2291 KiB  
Article
Development of a Multi-Source Satellite Fusion Method for XCH4 Product Generation in Oil and Gas Production Areas
by Lu Fan, Yong Wan and Yongshou Dai
Appl. Sci. 2024, 14(23), 11100; https://doi.org/10.3390/app142311100 - 28 Nov 2024
Viewed by 180
Abstract
Methane (CH4) is the second-largest greenhouse gas contributing to global climate warming. As of 2022, methane emissions from the oil and gas industry amounted to 3.586 million tons, representing 13.24% of total methane emissions and ranking second among all methane emission sources. To effectively control methane emissions in oilfield regions, this study proposes a multi-source remote sensing data fusion method targeting high-emission areas such as oil and gas fields, with the aim of constructing an XCH4 remote sensing dataset that meets the requirements of high resolution, wide coverage, and high accuracy. Initially, XCH4 data products from the GOSAT satellite and the TROPOMI sensor are matched both spatially and temporally. Subsequently, variables such as longitude, latitude, aerosol optical depth, surface albedo, digital elevation model (DEM), and month are incorporated. Using a local random forest (LRF) model for fusion, the resulting product combines the high accuracy of GOSAT data with the wide coverage of TROPOMI data. On this basis, ΔXCH4 is derived using GF-5. Combined with the GFEI prior emission inventory, the high-precision fusion dataset output by the LRF model is redistributed grid by grid in oilfield areas, producing a 1 km resolution XCH4 grid product and thereby constructing a high-precision, high-resolution dataset for oilfield regions. Finally, the challenges that emerged from the study are discussed and summarized; with advances in satellite technology and algorithms, more accurate and higher-resolution methane concentration datasets are expected to become available and to be applied across a wide range of fields, contributing to reducing methane emissions and combating climate change.
(This article belongs to the Section Environmental Sciences)
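A minimal sketch of the fusion step described above, using an ordinary random forest regressor on collocated GOSAT/TROPOMI samples. The feature list follows the abstract (longitude, latitude, aerosol optical depth, surface albedo, DEM, month), but the spatial partitioning that makes the LRF "local" and all variable names are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: fusing collocated GOSAT/TROPOMI XCH4 samples with a
# random forest, in the spirit of the LRF fusion step described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_fusion_model(X_tropomi_features: np.ndarray, y_gosat_xch4: np.ndarray):
    """X columns (assumed): lon, lat, aerosol optical depth, surface albedo,
    DEM elevation, month. Target: collocated GOSAT XCH4 (ppb)."""
    model = RandomForestRegressor(n_estimators=300, min_samples_leaf=5, n_jobs=-1)
    model.fit(X_tropomi_features, y_gosat_xch4)
    return model

# Usage idea: predict a gap-free, GOSAT-calibrated XCH4 field on the TROPOMI grid.
# xch4_fused = train_fusion_model(X_train, y_train).predict(X_grid)
```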
Figures:
Figure 1. Overview of the methodology workflow in this study.
Figure 2. Structure of the LRF model.
Figure 3. Location of the study area (Dongying, Shandong Province, China).
Figure 4. Presentation of the XCH4 dataset fused by the LRF model in Shandong Province.
Figure 5. Scatter density plot of fused data vs. original TROPOMI data, with the black dashed line representing the 1:1 line and the blue dashed line representing the fitted line.
Figure 6. Comparison and fitting plots of pre-merger and post-merger data with TCCON station data for each model, where (a) shows the pre-merger TROPOMI data compared to the TCCON station data, (b) represents the linear regression model, (c) represents the localized random forest model, and (d) represents the random forest model.
Figure 7. Example of ΔXCH4 retrieval results, where (b) is a satellite map of the region, (a,c) are zoomed presentations of the plant area in (b); (e) is a map of the matched filter results of the region, and (d,f) are zoomed presentations of the same region as in (a,c); (h) is a display of the matched filter results superimposed on the satellite map and hiding the background values, and (g,i) are zoomed presentations of the plant area in (h).
Figure 8. High-resolution and high-precision XCH4 dataset for oilfield regions. (a) shows the low-resolution data of a region after the fusion of the LRF model, (b) shows the high-resolution data of the same region, (c,d) are zoomed presentations of the two high-emission plant areas in (b), and (e,f) are satellite maps corresponding to the areas presented in (c) and (d), respectively.
Figure 9. High-resolution and high-precision XCH4 dataset for oilfield regions. (a) shows the low-resolution data of an area different from Figure 8 after LRF model fusion, (b) shows the high-resolution data of the same area, (c) shows a zoomed-in display of a high-emission plant area in (b), and (d) is a satellite map corresponding to the area displayed in (c).
23 pages, 10810 KiB  
Article
A Multi-Sensor Fusion Autonomous Driving Localization System for Mining Environments
by Yi Wang, Chungming Own, Haitao Zhang and Minzhou Luo
Electronics 2024, 13(23), 4717; https://doi.org/10.3390/electronics13234717 - 28 Nov 2024
Viewed by 205
Abstract
We propose a multi-sensor fusion localization framework for autonomous heavy-duty trucks in mining scenarios that enables high-precision, real-time trajectory generation and map construction. Motion estimated through pre-integration of the inertial measurement unit (IMU) can eliminate distortions in the point cloud and provide an initial guess for LiDAR odometry optimization. The point cloud information obtained from the LiDAR can assist in recovering depth information from image features extracted by the monocular camera. To ensure real-time performance, we introduce an iKD-tree to organize the point cloud data. To address issues arising from bumpy road segments and long-distance driving in practical mining scenarios, we incorporate a large number of relative and absolute measurements from different sources, such as GPS information and AprilTag-assisted localization data, including loop closures, as factors in the system. The proposed method has been extensively evaluated on public datasets and on self-collected datasets from mining sites.
(This article belongs to the Special Issue Unmanned Vehicles Systems Application)
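To illustrate the role of IMU pre-integration mentioned in the abstract, here is a deliberately simplified, rotation-only de-skewing sketch for a single LiDAR scan; it ignores IMU biases, gravity, and translation, and it is not the framework's actual pipeline.

```python
# Minimal sketch of using integrated gyro rates to de-skew a LiDAR scan
# (rotation only); not the authors' implementation.
import numpy as np
from scipy.spatial.transform import Rotation as R

def deskew_scan(points, point_times, imu_times, gyro):
    """points: (N,3) LiDAR points; point_times: (N,) stamps within the scan;
    imu_times: (M,) IMU stamps; gyro: (M,3) angular rates in rad/s.
    Returns points rotated back into the frame of the first point."""
    dt = np.diff(imu_times, prepend=imu_times[0])
    increments = R.from_rotvec(gyro * dt[:, None])   # small rotations between IMU samples
    cumulative = [R.identity()]                      # orientation relative to scan start
    for inc in increments:
        cumulative.append(cumulative[-1] * inc)
    corrected = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, point_times)):
        k = min(np.searchsorted(imu_times, t), len(cumulative) - 1)
        corrected[i] = cumulative[k].inv().apply(p)  # undo motion since scan start
    return corrected
```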
27 pages, 14376 KiB  
Article
Investigating Synoptic Influences on Tropospheric Volcanic Ash Dispersion from the 2015 Calbuco Eruption Using WRF-Chem Simulations and Satellite Data
by Douglas Lima de Bem, Vagner Anabor, Franciano Scremin Puhales, Damaris Kirsch Pinheiro, Fabio Grasso, Luiz Angelo Steffenel, Leonardo Brenner and Umberto Rizza
Remote Sens. 2024, 16(23), 4455; https://doi.org/10.3390/rs16234455 - 27 Nov 2024
Viewed by 329
Abstract
We used WRF-Chem to simulate ash transport from the eruptions of Chile's Calbuco volcano on 22–23 April 2015. Massive ash and SO2 ejections reached the upper troposphere, and particulates transported over South America were observed over Argentina, Uruguay, and Brazil via satellite and surface data. Numerical simulations with the coupled Weather Research and Forecasting-Chemistry (WRF-Chem) model from 22 to 27 April covered the eruptions and particle propagation. Chemical and aerosol parameters used the GOCART (Goddard Chemistry Aerosol Radiation and Transport) model, while the meteorological conditions came from the NCEP-FNL reanalysis. In WRF-Chem, we implemented a more efficient methodology to determine the Eruption Source Parameters (ESPs). This permitted each simulation to consider a sequence of eruptions and time-varying ESPs, such as the eruption height and mass and the SO2 eruption rate. We used two simulations (GCTS1 and GCTS2), differing in the ash mass fraction assigned to the finest bins (0–15.6 µm) by 2.4% and 16.5%, respectively, to assess model efficiency in representing plume intensity and propagation. Analysis of the active synoptic components revealed their impact on particle transport and the Andes' role as a natural barrier. We evaluated and compared the simulated Aerosol Optical Depth (AOD) with VIIRS Deep Blue Level 3 data and SO2 data from the Ozone Mapping and Profiler Suite (OMPS) Limb Profiler (LP), both sensors onboard the Suomi National Polar-orbiting Partnership (NPP) spacecraft. The model successfully reproduced ash and SO2 transport, effectively representing the influencing synoptic systems. Both simulations showed similar propagation patterns, with GCTS1 yielding better results when compared with AOD retrievals. These results indicate the necessity of specifying a lower mass fraction in the finest bins. Comparison with VIIRS Brightness Temperature Difference data confirmed the model's efficiency in representing particle transport. Overestimation of SO2 may stem from the emission inputs. This study demonstrates the feasibility of our WRF-Chem implementation for reproducing ash and SO2 patterns after a multi-eruption event, enabling further studies into aerosol-radiation and aerosol-cloud interactions and atmospheric behavior following volcanic eruptions.
(This article belongs to the Section Environmental Remote Sensing)
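A small sketch of the kind of grid-to-grid comparison used to judge the simulations against the VIIRS AOD product; the array names and the assumption of a common grid are illustrative, and this is not the authors' evaluation code.

```python
# Hedged sketch: simple skill scores between modelled AOD and a satellite AOD
# product (e.g. VIIRS Deep Blue), assuming both are regridded to one grid.
import numpy as np

def aod_scores(aod_model: np.ndarray, aod_obs: np.ndarray) -> dict:
    """Mean bias, RMSE and Pearson correlation over cells where both are valid."""
    valid = np.isfinite(aod_model) & np.isfinite(aod_obs)
    m, o = aod_model[valid], aod_obs[valid]
    return {
        "bias": float(np.mean(m - o)),
        "rmse": float(np.sqrt(np.mean((m - o) ** 2))),
        "corr": float(np.corrcoef(m, o)[0, 1]),
    }
```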
Figures:
Figure 1. Synoptic fields on 23 April at 00:00 UTC from the WRF model: (a) wind field at 850 hPa; (b) wind field up to 200 hPa (the solid arrows indicate the subtropical jet); (c) surface pressure (isolines) and thickness 500–1000 hPa field (shaded contours); (d) geopotential at 500 hPa (isolines) and relative vorticity field (shaded contours). The blue dot indicates the location of the Calbuco volcano.
Figure 2. Synoptic fields on 24 April at 00:00 UTC from the WRF model: (a) wind field at 850 hPa; (b) wind field up to 200 hPa (the solid arrows indicate the subtropical jet and the dashed arrows indicate the polar jet); (c) surface pressure (isolines) and thickness 500–1000 hPa field (shaded contours); (d) geopotential at 500 hPa (isolines) and relative vorticity field (shaded contours). The blue dot indicates the location of the Calbuco volcano.
Figure 3. Synoptic fields on 25 April at 00:00 UTC from the WRF model: (a) wind field at 850 hPa; (b) wind field up to 200 hPa (the solid arrows indicate the subtropical jet and the dashed arrows indicate the polar jet); (c) surface pressure (isolines) and thickness 500–1000 hPa field (shaded contours); (d) geopotential at 500 hPa (isolines) and relative vorticity field (shaded contours). The red dashed line indicates the trough axis, while the blue dot indicates the location of the Calbuco volcano.
Figure 4. Synoptic fields on 26 April at 00:00 UTC from the WRF model: (a) wind field at 850 hPa; (b) wind field up to 200 hPa (the solid arrows indicate the subtropical jet and the dashed arrows indicate the polar jet); (c) surface pressure (isolines) and thickness 500–1000 hPa field (shaded contours); (d) geopotential at 500 hPa (isolines) and relative vorticity field (shaded contours). The red dashed line indicates the trough axis, while the blue dot indicates the location of the Calbuco volcano.
Figure 5. Domain numerical grid showing the model representation of elevation and the location of Calbuco (blue star) along with the main cities in South America (red points).
Figure 6. Daily average AOD from AERDB (SNPP-D3-VIIRS): (a) 23 April, (b) 24 April, (c) 25 April, and (d) 26 April.
Figure 7. Brightness Temperature Difference (BTD) along the southern region of South America for (a) 23 April; (b) 24 April; (c) 25 April; and (d) 26 April. Units are K.
Figure 8. Model maps of AOD at 0.55 µm on 23 April for (a) AOD-GCTS2 and (b) AOD-GCTS1; on 24 April for (c) AOD-GCTS2 and (d) AOD-GCTS1; and on 25 April for (e) AOD-GCTS2 and (f) AOD-GCTS1.
Figure 9. (a) OMI+OMPS SO2 retrievals in Dobson Units for 23 April 2015 between 17:08 and 20:26 UTC (source: https://so2.gsfc.nasa.gov/, accessed on 1 October 2024) and (b) WRF-Chem SO2 prediction for 23 April at 19 UTC.
Figure 10. (a) OMI+OMPS SO2 retrievals in Dobson Units for 24 April 2015 between 16:10 and 19:32 UTC (source: https://so2.gsfc.nasa.gov/, accessed on 1 October 2024) and (b) WRF-Chem SO2 prediction for 24 April at 19 UTC.
Figure 11. (a) OMI+OMPS SO2 retrievals in Dobson Units for 25 April 2015 between 15:13 and 20:11 UTC (source: https://so2.gsfc.nasa.gov/, accessed on 1 October 2024) and (b) WRF-Chem SO2 prediction for 25 April at 16 UTC.
Figure 12. (a) OMI+OMPS SO2 retrievals in Dobson Units for 26 April 2015 between 12:39 and 19:18 UTC (source: https://so2.gsfc.nasa.gov/, accessed on 1 October 2024) and (b) WRF-Chem SO2 prediction for 26 April at 16 UTC.
Figure A1. Daily averaged vertical integrated concentration (p10) between (a) 0–17 km and (b) 17–20 km for 23 April; units are µg m⁻².
Figure A2. Daily averaged vertical integrated concentration (p10) between (a) 0–17 km and (b) 17–20 km for 24 April; units are µg m⁻².
Figure A3. Daily averaged vertical integrated concentration (p10) between (a) 0–17 km and (b) 17–20 km for 25 April; units are µg m⁻².
Figure A4. Daily averaged vertical integrated concentration (p10) between (a) 0–17 km and (b) 17–20 km for 26 April; units are µg m⁻².
Figure A5. Daily averaged vertical integrated concentration (p10) between (a) 0–17 km and (b) 17–20 km for 27 April; units are µg m⁻².
Figure A6. Skew-T for the Calbuco volcano region prior to the first eruption; the red line indicates the temperature, the green line the dew point temperature, and the black line the Lifted Condensation Level (LCL). The barbs on the right side indicate the direction and speed of the wind. (a) 00:00; (b) 03:00; (c) 06:00; (d) 09:00; (e) 12:00; (f) 15:00; (g) 18:00; (h) 21:00.
19 pages, 3625 KiB  
Article
EBFA-6D: End-to-End Transparent Object 6D Pose Estimation Based on a Boundary Feature Augmented Mechanism
by Xinbei Jiang, Zichen Zhu, Tianhan Gao and Nan Guo
Sensors 2024, 24(23), 7584; https://doi.org/10.3390/s24237584 - 27 Nov 2024
Viewed by 250
Abstract
Transparent objects, commonly encountered in everyday environments, present significant challenges for 6D pose estimation due to their unique optical properties. The lack of inherent texture and color complicates traditional vision methods, while transparency prevents depth sensors from accurately capturing geometric details. We propose EBFA-6D, a novel end-to-end 6D pose estimation framework that directly predicts the 6D poses of transparent objects from a single RGB image. To overcome the challenges introduced by transparency, we leverage the high contrast at object boundaries inherent to transparent objects through a boundary feature augmented mechanism. We further conduct bottom-up feature fusion to enhance the localization capability of EBFA-6D. EBFA-6D is evaluated on the ClearPose dataset, outperforming existing methods in accuracy while achieving near-real-time inference speed. The results demonstrate that EBFA-6D provides an efficient and effective solution for accurate 6D pose estimation of transparent objects.
(This article belongs to the Section Sensors and Robotics)
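As a loose illustration of exploiting high-contrast object boundaries, the sketch below derives a gradient-magnitude boundary map from the RGB image and appends it to a feature tensor. This conveys the general idea only and is not the boundary feature augmented mechanism of EBFA-6D.

```python
# Illustrative sketch of a boundary prior for transparent objects, not the
# paper's mechanism: an edge-strength map concatenated with backbone features.
import numpy as np
from scipy import ndimage

def boundary_map(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) float image in [0, 1]. Returns an (H, W) edge-strength map."""
    gray = rgb.mean(axis=2)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)   # normalise to [0, 1]

def augment_features(features: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Concatenate the boundary map as an extra channel (features: (H, W, C))."""
    b = boundary_map(rgb)[..., None]
    return np.concatenate([features, b], axis=-1)
```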
26 pages, 8281 KiB  
Review
Research Progress of Automation Ergonomic Risk Assessment in Building Construction: Visual Analysis and Review
by Ruize Qin, Peng Cui and Jaleel Muhsin
Buildings 2024, 14(12), 3789; https://doi.org/10.3390/buildings14123789 - 27 Nov 2024
Viewed by 286
Abstract
In recent years, the increasing demand for worker safety and workflow efficiency in the construction industry has drawn considerable attention to the application of automated ergonomic technologies. To gain a comprehensive understanding of the current research landscape in this field, this study conducts an in-depth visual analysis of the literature on automated ergonomic risk assessment published between 2001 and 2024 in the Web of Science database using CiteSpace and VOSviewer. The analysis systematically reviews key research themes, collaboration networks, keywords, and citation patterns. Building on this, a SWOT analysis is employed to evaluate the core technologies currently widely adopted in the construction sector. By focusing on the integrated application of wearable sensors, artificial intelligence (AI), big data analytics, virtual reality (VR), and computer vision, this research highlights the significant advantages of these technologies in enhancing worker safety and optimizing construction processes. It also examines potential challenges related to the complexity of these technologies, high implementation costs, and concerns regarding data privacy and worker health. While these technologies hold immense potential to transform the construction industry, future efforts will need to address these challenges through technological optimization and policy support to ensure broader adoption.
Figures:
Figure 1. Data collection process.
Figure 2. Annual publication trends of articles from 2001 to 2024 (October).
Figure 3. Top 10 subject categories of the Web of Science for automated ergonomic risk from 2001 to 2024.
Figure 4. Analysis of published journals.
Figure 5. Analysis of published journals (2001–2024).
Figure 6. Keyword clustering diagram for the research of automated ergonomic risk evaluation.
Figure 7. Visual representation of the rule compliance module outcomes [65].
Figure 8. Principles of wearable sensor technology [68].
Figure 9. Computer vision-based motion capture [74].
Figure 10. Wearable sensor model diagram [75].
12 pages, 2236 KiB  
Article
Evaluation of Electrical Characteristics of Weft−Knitted Strain Sensors for Joint Motion Monitoring: Focus on Plating Stitch Structure
by You-Kyung Oh and Youn-Hee Kim
Sensors 2024, 24(23), 7581; https://doi.org/10.3390/s24237581 - 27 Nov 2024
Viewed by 248
Abstract
We developed a sensor optimized for joint motion monitoring by exploring the effects of the stitch pattern, yarn thickness, and NP number on the performance of knitted strain sensors. We conducted stretching experiments with basic weft-knit patterns to select the optimal stitch pattern and analyze its sensitivity and reproducibility. The plain stitch with the conductive yarn located on the reverse side exhibited the highest gauge factor (GF) value (143.68) and achieved excellent performance, with a stable change in resistance even after repeated sensing. For an in-depth analysis, we developed six sensors using this pattern with different combinations of yarn thickness (1-ply, 2-ply) and NP numbers (12, 13, 14). In bending experiments, the GF across all sensors was 60.2–1092, indicating noticeable differences in sensitivity. However, no significant differences were observed in reproducibility, reliability, and responsiveness, confirming that all the sensors are capable of joint motion monitoring. Therefore, the plain-patterned plating stitch structure with conductive yarn on the reverse side is optimal for joint motion monitoring, and the yarn thickness and NP number can be adjusted to suit different purposes. This study provides basic data for developing knitted strain sensors and offers insights into how knitting methods impact sensor performance.
(This article belongs to the Special Issue Wearable Systems for Monitoring Joint Kinematics)
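For reference, the gauge factor (GF) quoted above is the relative resistance change per unit strain, GF = (ΔR/R0)/ε. The sketch below fits it from resistance and strain pairs; the example numbers are purely illustrative and not data from the paper.

```python
# Small sketch of the standard gauge-factor calculation, GF = (ΔR / R0) / ε.
import numpy as np

def gauge_factor(resistance: np.ndarray, strain: np.ndarray) -> float:
    """Slope of relative resistance change vs. strain, via least squares."""
    r0 = resistance[0]                      # unstrained baseline resistance
    rel_change = (resistance - r0) / r0     # ΔR / R0
    gf, _ = np.polyfit(strain, rel_change, 1)
    return float(gf)

# Example (illustrative values): resistance rising from 100 Ω to 115 Ω at 10%
# strain gives GF ≈ 1.5.
# gauge_factor(np.array([100.0, 107.5, 115.0]), np.array([0.0, 0.05, 0.10]))
```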
20 pages, 9751 KiB  
Article
6D Pose Estimation of Industrial Parts Based on Point Cloud Geometric Information Prediction for Robotic Grasping
by Qinglei Zhang, Cuige Xue, Jiyun Qin, Jianguo Duan and Ying Zhou
Entropy 2024, 26(12), 1022; https://doi.org/10.3390/e26121022 - 26 Nov 2024
Viewed by 322
Abstract
In industrial robotic arm gripping operations within disordered environments, the loss of physical information on the object's surface is often caused by factors such as varying lighting conditions, weak surface textures, and sensor noise, which leads to inaccurate object detection and pose estimation. A method for industrial object pose estimation using point cloud data is proposed to improve pose estimation accuracy. During feature extraction, both global and local information are captured by integrating the appearance features of RGB images with the geometric features of point clouds. Integrating semantic information with instance features effectively distinguishes instances of similar objects. The fusion of depth information and RGB color channels enriches the spatial context and structure. A cross-entropy loss function is employed for multi-class target classification, and a discriminative loss function enables instance segmentation. A novel point cloud registration method is also introduced to address the re-projection errors that arise when mapping 3D keypoints to 2D planes. This method utilizes 3D geometric information, extracting edge features using point cloud curvature and normal vectors and registering them with models to obtain accurate pose information. Experimental results demonstrate that the proposed method is effective and superior on the LineMod and YCB-Video datasets. Finally, objects are grasped by deploying a robotic arm on the grasping platform.
(This article belongs to the Section Multidisciplinary Applications)
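One ingredient named in the abstract is edge extraction from point cloud curvature and normals. The following sketch flags edge points via a PCA-based surface-variation proxy over k-nearest neighbours; the threshold and neighbourhood size are arbitrary assumptions, and this is not the paper's registration method.

```python
# Hedged sketch: mark likely edge points of a point cloud from local surface
# variation (smallest eigenvalue fraction of the neighbourhood covariance).
import numpy as np
from scipy.spatial import cKDTree

def edge_points(cloud: np.ndarray, k: int = 20, thresh: float = 0.08) -> np.ndarray:
    """cloud: (N, 3). Returns a boolean mask of likely edge points."""
    tree = cKDTree(cloud)
    _, idx = tree.query(cloud, k=k)              # k nearest neighbours of each point
    curvature = np.empty(len(cloud))
    for i, nbrs in enumerate(idx):
        pts = cloud[nbrs] - cloud[nbrs].mean(axis=0)
        cov = pts.T @ pts / k
        eig = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
        curvature[i] = eig[0] / (eig.sum() + 1e-12)  # surface variation
    return curvature > thresh                    # high variation -> edge/corner
```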
Figures:
Figure 1. Posture estimation network structure. Feature extraction is performed on the given scene RGB and point cloud images, matching the target pose through semantic and instance prediction and designing a priority grasping strategy to achieve accurate grasping by the robotic arm.
Figure 2. Instance segmentation branch network.
Figure 3. Hash table framework. For two point pairs in the model edge appearance, keypoint pair features are calculated and saved in the hash table for subsequent feature matching of the template point cloud with the instance point cloud.
Figure 4. Definition of B2B-DL descriptors. In order to find corresponding pairs of points between the scene and the model, a descriptor was devised by calculating the tangent lines and considering their direction as the direction of the points.
Figure 5. The coordinate transformation relationship between the instance and model point clouds. The inverse transformation T_{p→g}^{-1} repositions the reference point p_r of the instance point cloud to the origin, and the normal direction is aligned parallel to the x-axis of the coordinate system.
Figure 6. Positional information for each instance obtained by aligning the model point cloud with the field instance point cloud.
Figure 7. The experimental platform, consisting of the 3D industrial camera, KUKA robotic arm, target objects, and PLC S7-1200, is mainly used for disordered gripping of industrial parts.
Figure 8. In the suction device, each part corresponds to one kind of suction device; the first row consists of two sets of fixtures: the beam fixture and the roller fixture. The second row includes the suction devices for the wheel fixture and the base fixture; the left side of each set of fixtures is its 3D model, and the right side is the real fixture.
Figure 9. Coordinate transformation between the camera, robotic arm, and object of the robotic gripping platform.
Figure 10. Four low-textured industrial parts. The first row shows, from left to right, the real beam, hub, roller, and base. The second row shows the CAD models of the beam, hub, roller, and base from left to right.
Figure 11. The robot arm first gets the signal from the PLC to grasp the target workpiece, then sucks up the corresponding fixture, returns to the home position, waits for the gripping position information, and executes the gripping operation; the whole process is shown above.
Figure 12. Pose estimation results on the LineMod dataset.
Figure 13. Pose estimation results on the YCB-Video dataset.
Figure 14. Pose estimation results on a real dataset. The results after instance segmentation clustering are shown in the second column, and the third column shows the attitude estimation results of the target object.
Figure 15. Pose estimation results obtained from testing in cluttered and occluded scenes. Each column gives the RGB image, depth map, point cloud converted from the depth map, instance point cloud clustering, edge estimation, and pose estimation results.
Figure 16. Some results of the robotic arm gripping experiments performed in the real scene. Two kinds of workpieces are selected as gripping objects: a hub and eight wheels. Each column shows the process of converting the positional results obtained with the algorithm into positional information in robotic arm coordinates to make the robotic arm grasp each object.
8 pages, 4421 KiB  
Article
Chromatic Aberration in Wavefront Coding Imaging with Trefoil Phase Mask
by Miguel Olvera-Angeles, Justo Arines and Eva Acosta
Photonics 2024, 11(12), 1117; https://doi.org/10.3390/photonics11121117 - 26 Nov 2024
Viewed by 284
Abstract
The refractive index of the lenses used in optical designs varies with wavelength, causing light rays to fail to focus on a single plane. This phenomenon is known as chromatic aberration (CA), chromatic distortion, or color fringing, among other terms. Images affected by CA display colored halos and experience a loss of resolution. Fully achromatic systems can be achieved through complex and costly lens designs and/or computationally when digital sensors capture the image. In this work, we propose using the wavefront coding (WFC) technique with a trefoil-shaped phase modulation plate in the optical system to effectively increase the resolution of images affected by longitudinal chromatic aberration (LCA), significantly simplifying the optical design and reducing costs. Experimental results with three LEDs simulating RGB images verify that WFC with trefoil phase plates effectively corrects longitudinal chromatic aberration. Transverse chromatic aberration (TCA) is corrected computationally. Furthermore, we demonstrate that the optical system maintains depth of focus (DoF) for color images.
(This article belongs to the Special Issue Adaptive Optics Imaging: Science and Applications)
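A numerical sketch of the idea behind the trefoil phase mask: a Zernike-trefoil (ρ³cos 3θ) phase is applied over a circular pupil and the point spread function (PSF) is obtained by Fourier optics. The grid size and phase amplitude below are assumptions for illustration, not the parameters of the fabricated plate.

```python
# Illustrative Fourier-optics sketch, not the experimental setup: trefoil phase
# mask over a circular pupil and the resulting incoherent PSF.
import numpy as np

def trefoil_psf(n: int = 512, a: float = 30.0) -> np.ndarray:
    """a: peak trefoil phase in radians (assumed value for illustration)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (rho <= 1.0).astype(float)
    phase = a * rho**3 * np.cos(3 * theta)          # Zernike-trefoil wavefront shape
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()                          # normalised intensity PSF
```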
Figures:
Figure 1. WFC-based optical system.
Figure 2. (a) Cubic PSF for circular pupil, (b) restored image for circular pupil, and (c) restored image for square pupil. (d) Trefoil PSF for circular pupil, (e) restored image for circular pupil, and (f) restored image for square pupil.
Figure 3. Scheme of the experimental device.
Figure 4. Properties of fabricated PM: (a) PM and center of PSF, (b) trefoil aberration, and (c) other aberrations.
Figure 5. RGB LED images and merged color images: optical system (upper row), intermediate images (middle row), and decoded images (lower row).
Figure 6. Decoded color images with trefoil PM (lower row) and corresponding images with chromatic aberration for the optical system (upper row) for different defocused recording planes.
Figure 7. Color image of decoded image with trefoil PM (a), and corresponding image with TCA reduced by image processing (b).
Figure 8. RGB for white LED images and merged color images: optical system (upper row) and decoded images (lower row).
21 pages, 16398 KiB  
Article
Assessing the Effect of Water on Submerged and Floating Plastic Detection Using Remote Sensing and K-Means Clustering
by Lenka Fronkova, Ralph P. Brayne, Joseph W. Ribeiro, Martin Cliffen, Francesco Beccari and James H. W. Arnott
Remote Sens. 2024, 16(23), 4405; https://doi.org/10.3390/rs16234405 - 25 Nov 2024
Viewed by 523
Abstract
Marine and freshwater plastic pollution is a worldwide problem affecting ecosystems and human health. Although remote sensing has been used to map large floating plastic rafts, there are research gaps in detecting submerged plastic due to the limited amount of in situ data. This study is the first to collect in situ data on submerged and floating plastics in a freshwater environment and analyse the effect of water submersion on the strength of the plastic signal. A large 10 × 10 m artificial polymer tarpaulin was deployed in a freshwater lake for a two-week period and was captured by multi-sensor, multi-resolution unmanned aerial vehicle (UAV) and satellite platforms. Spectral analysis was conducted to assess the attenuation of individual wavelengths over the submerged tarpaulin in UAV hyperspectral and Sentinel-2 multispectral data. A K-Means unsupervised clustering algorithm was used to classify the images into two clusters: plastic and water. Additionally, we estimated the optimal number of clusters present in the hyperspectral dataset and found that classifying the image into four classes (water, submerged plastic, near-surface plastic, and buoys) significantly improved the accuracy of the K-Means predictions. The submerged plastic tarpaulin was detectable to ~0.5 m below the water surface in near-infrared (NIR) (~810 nm) and red edge (~730 nm) wavelengths. However, the red band (~669 nm) performed best, with ~84% true plastic positives, classifying plastic pixels correctly even to ~1 m depth. These individual bands outperformed the dedicated Plastic Index (PI) derived from the UAV dataset. Additionally, this study showed that neither the Sentinel-2 bands nor the derived indices (PI or the Floating Debris Index (FDI)) currently make it possible to determine whether, and to what extent, the 10 × 10 m tarpaulin was under the water surface. Overall, this paper showed that spatial resolution was more important than spectral resolution in detecting the submerged tarpaulin. These findings contribute directly to Sustainable Development Goal 14.1 on mapping large marine plastic patches of 10 × 10 m and could be used to better define systems for monitoring submerged and floating plastic pollution.
(This article belongs to the Section Environmental Remote Sensing)
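A minimal sketch of the unsupervised classification step described above: scikit-learn K-Means applied to a single reflectance band to separate pixels into clusters such as water and plastic. The band choice and variable names are placeholders rather than the authors' processing chain.

```python
# Minimal sketch: K-Means clustering of one hyperspectral band into classes
# such as water vs. plastic (two clusters) or four classes as in the study.
import numpy as np
from sklearn.cluster import KMeans

def cluster_band(band: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    """band: (H, W) reflectance. Returns an (H, W) integer label map."""
    X = band.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(band.shape)

# e.g. labels_nir = cluster_band(reflectance_810nm, n_clusters=2), and
# n_clusters=4 for water / submerged plastic / near-surface plastic / buoys.
```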
Figures:
Graphical abstract.
Figure 1. A composite figure showing (A) the location of the area of interest; (B) the tarpaulin after it was installed on 5 August 2021; (C) the tarpaulin after it had sunk on 10 August 2021; and (D) a summary of the datasets collected on 19 August 2021 (including RGB, thermal, hyperspectral, multispectral and Sentinel-2 MSI). * ESRI Satellite basemap was sourced from: https://qms.nextgis.com/geoservices/1300/ (accessed on 18 November 2023).
Figure 2. A diagram of the K-Means classification process and the plastic versus non-plastic hyperspectral mask created using the visible spectrum.
Figure 3. The number of plastic pixels detected for two K-Means classes, plastic versus water (A). The bottom panel (B) shows the results of the same K-Means predictions after removing the highly reflective buoy pixels. The percentages of true plastic pixels predicted were extracted from confusion matrices for individual hyperspectral bands.
Figure 4. UAV-collected imagery of the tarpaulin using: (A) a true colour composite from hyperspectral data; (B) a testing mask derived from hyperspectral data; (C) thermal camera data; (D) predictions for band 187 at 810 nm (NIR); (E) band 187 at 810 nm in greyscale, which performs best at predicting the submerged tarpaulin before removing buoy pixels for two clusters; and (F) an RGB image showing the colour-coded crosses where the reflectance was extracted for Figure 5.
Figure 5. Reflectance extracted for specific points across the tarpaulin imagery (marked in Figure 4F), showing an average reflectance across the electromagnetic spectrum for buoys, near-surface tarpaulin, submerged tarpaulin, and water. Buoys act almost as perfect reflectors, with an average reflectance of ~1.
Figure 6. Reflectance across 400 to 1000 nm extracted from the point locations depicted in Figure 4F (excluding buoys) for near-surface and submerged tarpaulin and water, as well as the results of the K-Means predictions for two clusters (plastic versus water). The number of correctly classified plastic pixels peaked at ~810 nm (NIR) and gradually dropped, exhibiting strong salt-and-pepper noise from 900 nm onwards.
Figure 7. K-Means plastic predictions for four clusters peaked at the red band (~669 nm), followed by the red edge and NIR. These individual wavelengths outperformed predictions of the dedicated PI. PI predicted most of the near-surface and submerged tarpaulin pixels; however, it also predicted many false positives in the water class, creating distinct salt-and-pepper noise.
Figure 8. Reflectance of submerged and floating tarpaulin and water pixels extracted from hyperspectral UAV data and multispectral Sentinel-2 satellite data.
Figure 9. Sentinel-2 bands for water and partially submerged tarpaulin (visible in the zoomed red rectangle) in the Whitlingham Great Broad area of interest (LAKE AOI), as well as derived PI and FDI from 11 August 2021.
Figure 10. Boxplots showing that water and submerged and floating plastic tarpaulin have a small spread/variation in FDI, but their means overlap. The distributions of the two classes differ from each other in PI, which distinguishes tarpaulin from water better than FDI.
Figure 11. K-Means predictions with two clusters for Sentinel-2 imagery from 11 August 2021. The Sentinel-2 bands were chosen for comparison with the best-performing UAV-collected hyperspectral wavelengths and the bands used for the floating litter index calculations (Section 2.3).
19 pages, 15630 KiB  
Review
Review of Automated Operations in Drilling and Mining
by Athanasios Kokkinis, Theodore Frantzis, Konstantinos Skordis, George Nikolakopoulos and Panagiotis Koustoumpardis
Machines 2024, 12(12), 845; https://doi.org/10.3390/machines12120845 - 25 Nov 2024
Viewed by 376
Abstract
Current advances and trends in the fields of mechanical, material, and software engineering have allowed mining technology to undergo a significant transformation. Aiming to maximize the efficiency and safety of the mining process, several enabling technologies, such as recent advances in artificial intelligence, IoT, sensor fusion, computational modeling, and advanced robotics, are being progressively adopted in mining machine manufacturing, replacing conventional parts and approaches that used to be the norm in the rock ore extraction industry. This article provides an overview of research trends and state-of-the-art technologies in face exploration and drilling operations in order to define the vision toward the realization of fully autonomous mining exploration machines of the future, capable of operating without any external infrastructure. As the trend of mining at large depths increases and the re-opening of abandoned mines gains more interest, near-to-face mining exploration approaches for identifying new ore bodies need to undergo significant revision. This article thereby aims to contribute to future developments in the use of fully autonomous and cooperative smaller mining exploration machines.
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)
Figures:
Figure 1. The Schlumberger PowerDrive system. Figure taken from [8].
Figure 2. The Halliburton Sperry-Sun Geo-Pilot system. Figure taken from [8].
Figure 3. A longwall system with most of its parts. Figure taken from [12].
Figure 4. Autonomous scraper with human-control capability. Figure taken from [16].
Figure 5. Obtaining a point cloud map with a LIDAR sensor mounted on a quadruped robot. (a) A visual representation of the experimental environment (mine). (b) The estimated pose in the global point cloud map as perceived by the sensors on the robot. The estimated pose is depicted in the form of XYZ axes (X-axis in red, Y-axis in green, and Z-axis in blue). (c) The created point cloud map. Figures taken from [80].
Figure 6. The proposed DT-based architecture for the mining industry. Figure taken from [37].
Figure 7. e-Drilling's wellAhead module's digital twin in action. Figure taken from [81].
Figure 8. A YOLOv8 model detects personnel and hazardous areas. Figure taken from [89].
Figure 9. Automatic drill operating through pistons with water. Figure taken from [55].
14 pages, 7266 KiB  
Article
Femtosecond Laser Introduced Cantilever Beam on Optical Fiber for Vibration Sensing
by Jin Qiu, Zijie Wang, Zhihong Ke, Tianlong Tao, Shuhui Liu, Quanrong Deng, Wei Huang and Weijun Tong
Sensors 2024, 24(23), 7479; https://doi.org/10.3390/s24237479 - 23 Nov 2024
Viewed by 293
Abstract
An all-fiber vibration sensor based on a Fabry-Perot interferometer (FPI) is proposed and experimentally evaluated in this study. The sensor is fabricated by introducing a Fabry-Perot cavity into a single-mode fiber using femtosecond laser ablation. The cavity and the fiber tail act together as a cantilever beam, which serves as the vibration receiver. When mechanical vibrations are applied, the cavity length of the Fabry-Perot interferometer changes accordingly, altering the interference fringes. Due to the low moment of inertia of the fiber-optic cantilever beam, the sensor can achieve a broadband frequency response and high vibration sensitivity without an external vibration receiver structure. The detectable frequency range of the sensor is 70 Hz–110 kHz, and its sensitivity is 60 mV/V. The sensor's signal-to-noise ratio (SNR) can reach 56 dB. The influence of the sensor parameters (cavity depth and fiber tail length) on the sensing performance is also investigated. The sensor has the advantages of a compact structure, high sensitivity, and wideband frequency response, making it a promising candidate for vibration sensing.
(This article belongs to the Special Issue Recent Advances in Micro- and Nanofiber-Optic Sensors)
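For orientation, the cavity length of such a Fabry-Perot sensor can be estimated from two adjacent fringe wavelengths in the reflection spectrum as L = λ1·λ2 / (2(λ2 − λ1)) for an air gap. The sketch below applies this relation with illustrative wavelengths (not values from the paper) that land in the 65–75 µm range of cavity depths studied.

```python
# Back-of-the-envelope sketch: Fabry-Perot cavity length from two adjacent
# fringe wavelengths (air cavity, refractive index ~1). Inputs are assumed
# readings from a measured spectrum, not data from the paper.
def fp_cavity_length(lambda1_nm: float, lambda2_nm: float) -> float:
    """Cavity length in micrometres from two adjacent fringe wavelengths (nm)."""
    l1, l2 = sorted((lambda1_nm, lambda2_nm))
    return (l1 * l2) / (2.0 * (l2 - l1)) * 1e-3   # nm -> µm

# Example: adjacent fringes at 1540 nm and 1556 nm give L ≈ 75 µm.
```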
Figures:
Figure 1. (a) The structure of the sensor. (b) Microscope image of the sensing structure. (c) The manufacturing process of the sensor.
Figure 2. Experimental set-up for the fabrication of the in-fiber cavity.
Figure 3. Reflectance spectra of sensors with cavity depths of 75 µm (a), 70 µm (b), and 65 µm (c). (d) Microscope images of the sensors with different cavity depths.
Figure 4. Vibration sensing experiment platform. The inset is a microscope image of the fiber sensor fixed at the PZT. (Note that the distance from the fiber cavity to the edge of the PZT is 5 mm, and the whole structure cannot be visualized at such a scale with the microscope. The inset comprises two microscope images separated by an ellipsis to illustrate the whole structure.)
Figure 5. Time domain diagrams (a–c) and frequency domain diagrams (d–f) of sensors with cavity depths of 75, 70, and 65 µm for a frequency of 1000 Hz and a voltage amplitude of 10 V.
Figure 6. The SNR of sensors with cavity depths of 75, 70, and 65 µm, when the voltage is adjusted from 1 V to 10 V (the frequency is 1000 Hz).
Figure 7. The SNR of sensors with tail fiber lengths of 15, 10, and 5 mm, when the voltage is adjusted from 1 V to 10 V (the frequency is 1000 Hz).
Figure 8. Time domain (a,b) and frequency domain (c,d) of sensors with tail lengths of 10 mm and 5 mm when the frequency is 1000 Hz and the voltage amplitude is 10 V.
Figure 9. Vibration time domain diagrams (a,b) at 70 Hz and 110 kHz and frequency domain diagrams (c,d) after FFT.
Figure 10. Time domain response of a sensor with a cavity depth of 75 µm and a tail length of 15 mm at different amplitudes.
Figure 11. Sensitivity characteristic curve of the vibration signal as it increases from 1 V to 10 V.
Figure 12. Vibration model of the fiber cantilever beam.
Figure 13. Sensitivity characteristic curves of sensors with different structural parameters. (a) Sensitivity characteristic curves of sensors with cavity depths of 75, 70, and 65 µm. (b) Sensitivity characteristic curves of sensors with tail lengths of 15, 10, and 5 mm.
Figure 14. Stability experiments from 0 to 60 min, frequency range 200–1000 Hz.
16 pages, 5102 KiB  
Article
Machine Learning-Based Structural Health Monitoring Technique for Crack Detection and Localisation Using Bluetooth Strain Gauge Sensor Network
by Tahereh Shah Mansouri, Gennady Lubarsky, Dewar Finlay and James McLaughlin
J. Sens. Actuator Netw. 2024, 13(6), 79; https://doi.org/10.3390/jsan13060079 - 23 Nov 2024
Viewed by 600
Abstract
Within the domain of Structural Health Monitoring (SHM), conventional approaches are generally complicated, destructive, and time-consuming. They also necessitate an extensive array of sensors to effectively evaluate and monitor structural integrity. In this research work, we present a novel, non-destructive SHM framework based on machine learning (ML) for the accurate detection and localisation of structural cracks. The approach leverages a minimal number of strain gauge sensors linked via Bluetooth Low Energy (BLE) communication. The framework is validated with empirical data collected from 3D carbon fibre-reinforced composites, comprising three distinct specimens that range from crack-free samples to specimens with up to ten cracks of varying lengths and depths. The methodology integrates an analytical examination of the Shewhart chart, Grubbs' test (GT), and a hierarchical clustering (HC) algorithm, tailored to the metrics of fracture measurement and classification. Our novel ML framework makes it possible to replace exhausting laboratory procedures with a modern and quick assessment mechanism for a material with unprecedented properties, offering potential applications in the composites industry.
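Of the three analysis steps named above, Grubbs' test is the most self-contained; the following sketch implements a single-pass, two-sided test for flagging one outlying strain reading among the BLE sensor outputs. The data example is illustrative rather than taken from the study's measurements.

```python
# Hedged sketch of a two-sided Grubbs' test for a single outlier, one of the
# three analysis steps named in the abstract. Thresholds and data are illustrative.
import numpy as np
from scipy import stats

def grubbs_outlier(x: np.ndarray, alpha: float = 0.05):
    """Return (index, is_outlier) for the most extreme value in x."""
    n = len(x)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    g, idx = z.max(), int(z.argmax())
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)                 # critical t value
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx, g > g_crit

# e.g. grubbs_outlier(np.array([102., 98., 101., 99., 165.])) flags index 4.
```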
Figure 1. (a) Ss sample cut from 3D layer-to-layer composite material and its developed length crack (red arrows show increasing cracks); the position of the lift and drag forces on the hydrofoil is shown at the upper right. (b) Schematic of the three-specimen geometry and a crack position. (c) Load configuration during the experiment. Note that the crack location is identified via the algorithms defined in the framework in the following sections.
Figure 2. (a) Removable electronic enclosure attached to the strain gauge electrodes using a holder permanently fixed on the sample surface; the node can be moved to a new location. (b,c) The enclosure accommodates a wireless module (BLE), a coin cell battery, supporting electronics, and a spring-loaded connector. (d) Spring-loaded interconnecting header in contact with copper pads deposited on the sample surface and connected to the strain gauge sensor. (e) Electronic module that accommodates the BLE sensors, shown next to a pen for scale. (f) The front side of the bridging board has four resistors; it connects the strain gauge sensors to the transmitting board.
Figure 3. (a) Measured strain vs. sensor number as cracks developed through Bs. The figure illustrates the sensor outputs for the experimental configurations labelled Exp 1–9, which correspond to specific load conditions and crack positions: Exp 1 represents a scenario with minimal strain applied, Exp 2–4 involve moderate strain with an incipient crack on both sides, Exp 5–8 indicate higher strain with multiple surface cracks, and Exp 9 denotes maximum load conditions under which significant crack propagation was observed after 48 h. Each experiment's setup is detailed to provide insight into the strain responses across varying crack severities. (b) The average value of each BLE sensor as cracks developed through Bs [equivalent experiment to that shown in (a)].
Figure 4. Machine learning-based SHM framework.
Figure 5. (a) Comparison of the five sensors' outputs for Ss before crack development (sensors under load but in the normal phase). (b) Anomalous sensors after crack development through Ss. (c,d) Comparison of the five sensor outputs for Rs before and after crack development, respectively. As discussed above, a sensor close to a crack usually falls outside the UCL/LCL interval.
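Figure 5 refers to sensors falling outside the UCL/LCL interval, i.e., a Shewhart-style control chart applied to the strain readings. A minimal sketch of how such limits can be derived and applied (the baseline window and the 3-sigma rule here are assumptions, not taken from the paper) is:

import numpy as np

def shewhart_limits(baseline, k=3.0):
    """Compute Shewhart control limits (UCL, LCL) from baseline readings
    taken before any crack is present."""
    mu, sigma = np.mean(baseline), np.std(baseline, ddof=1)
    return mu + k * sigma, mu - k * sigma

def out_of_control(readings, ucl, lcl):
    """Return indices of sensors whose current reading violates the limits."""
    r = np.asarray(readings, dtype=float)
    return np.flatnonzero((r > ucl) | (r < lcl))

# Hypothetical baseline strain from an undamaged specimen, then a later scan
baseline = np.array([100.2, 99.7, 100.5, 99.9, 100.1, 100.4, 99.8])
ucl, lcl = shewhart_limits(baseline)
later_scan = [100.3, 99.6, 104.9, 100.0, 99.9]   # sensor 2 sits near a crack
print(out_of_control(later_scan, ucl, lcl))       # -> [2]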
Figure 6. A dendrogram of six initial surface cracks (left) and the equivalent Venn diagram (right). The dendrogram shows classificatory relationships between samples; the height of the links increases as the crack depth progresses.
Figure 7. (a) Normal probability plot of developed cracks for Rs before GT application. (b,c) The same samples after applying the GT threshold (α) of 0.05% and 10%, with 90% and 99.5% confidence, respectively. Note: GT was applied to the two other sample forms and similar results were achieved.
Figure 8. Probability density belonging to sensor 3 (square) on the left and sensor 2 (rectangle) on the right, before and after crack development. For the square, the process shift (Δ) is positive, while the rectangle shows negative variance. This is because the IoT sensors were located on the surface of the square specimen, while the same sensors were located beneath the surface of the rectangle specimen, resulting in negative recordings.
Figure 9. Probability Density Function (PDF) of the prediction error for the R, S, and B samples. The bars represent cracks within each sample, and the curve shows the PDF prediction. Greater coverage of the bars by the curve indicates better Shewhart accuracy in predicting the crack location.
Figure 10. (a) The difference between link heights for 'ss' using the HC algorithm during crack growth. (b) Repetition of experiment (a) for 'rs'. (c) The blue line shows a polynomial model fitted to the cluster links using 'ss' with R² = 97% to evaluate prediction accuracy; each dot represents a link between developed cracks, and all can be located within the 99% prediction bounds. (d) Repetition of experiment (c) for 'bs' and 'rs'; R² = 91% for prediction bounds of 95%.
Figure 11. (a) Applying the G-test with α = 0.5 to anomalous sensors of three different specimen shapes (specimen 1: square, specimen 2: rectangle, specimen 3: beam). The selected α defines one outlier for specimens 2 and 3 and none for specimen 1. (b) Equivalent experiment with α = 10; here, five outliers (shown inside the red circle) were selected via the G-test from the various specimens.
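Figures 6 and 10 rely on hierarchical clustering link heights to group detected cracks by severity. A short, generic sketch of that workflow with SciPy (the crack feature vectors and the linkage method are placeholders for illustration, not the paper's data) could be:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature vectors per detected crack: [strain shift, crack length (mm)]
cracks = np.array([
    [0.8, 2.1],
    [0.9, 2.4],
    [2.7, 6.0],
    [2.9, 6.3],
    [5.5, 11.8],
    [5.8, 12.2],
])

# Agglomerative clustering; Z[:, 2] holds the link heights drawn in a dendrogram
Z = linkage(cracks, method="ward")
print("link heights:", np.round(Z[:, 2], 3))

# Cut the tree into three severity groups (e.g., shallow / medium / deep)
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster labels:", labels)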
14 pages, 4501 KiB  
Article
Moisture Distribution and Ice Front Identification in Freezing Soil Using an Optimized Circular Capacitance Sensor
by Xing Hu, Qiao Dong, Bin Shi, Kang Yao, Xueqin Chen and Xin Yuan
Sensors 2024, 24(22), 7392; https://doi.org/10.3390/s24227392 - 20 Nov 2024
Viewed by 263
Abstract
As the interface between frozen and unfrozen soil, the ice front is not only a spatial location concept but also a potentially dangerous interface where the mechanical properties of soil can change abruptly. Accurately identifying its spatial position is essential for the safe and efficient execution of large-scale frozen soil engineering projects. Electrical capacitance tomography (ECT) is a promising method for the visualization of frozen soil due to its non-invasive nature, low cost, and rapid response. This paper presents the design and optimization of a mobile circular capacitance sensor (MCCS). The MCCS was used to measure frozen soil samples along the depth direction to obtain the moisture distribution and three-dimensional images of the ice front. Finally, the experimental results were compared with simulation results from COMSOL Multiphysics to analyze the deviations. It was found that the fuzzy optimization design, based on multi-criteria orthogonal experiments, enables the MCCS to meet various performance requirements. The average permittivity distribution was proposed to reflect the moisture distribution along the depth direction and showed good correlation. Three-dimensional reconstructed images can provide the precise position of the ice front. The simulation results indicate that the MCCS has a low deviation margin in identifying the position of the ice front. Full article
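The abstract's idea of an average permittivity distribution along depth lends itself to a simple post-processing step on a reconstructed ECT volume: average each depth slice and find where the profile crosses a frozen/unfrozen threshold. The sketch below is only a hedged illustration of that logic (the example volume shape, threshold value, and linear-interpolation step are assumptions, not the authors' reconstruction pipeline):

import numpy as np

def depth_profile(perm_volume):
    """Average relative permittivity per depth slice.

    perm_volume: array shaped (n_depths, ny, nx) from an ECT reconstruction.
    """
    return perm_volume.reshape(perm_volume.shape[0], -1).mean(axis=1)

def ice_front_depth(profile, depths, threshold):
    """Depth at which the averaged permittivity first rises above `threshold`
    (frozen soil shows a much lower permittivity than unfrozen, wet soil)."""
    above = np.flatnonzero(profile > threshold)
    if above.size == 0 or above[0] == 0:
        return None                       # entirely frozen or entirely unfrozen
    i = above[0]
    # Linear interpolation between the last frozen and first unfrozen slice
    f = (threshold - profile[i - 1]) / (profile[i] - profile[i - 1])
    return depths[i - 1] + f * (depths[i] - depths[i - 1])

# Hypothetical 6-slice profile: frozen (low permittivity) on top, unfrozen below
depths = np.array([0, 20, 40, 60, 80, 100])             # mm
profile = np.array([4.0, 4.2, 5.1, 14.0, 19.5, 21.0])   # averaged permittivity
print(ice_front_depth(profile, depths, threshold=9.0))  # -> ~48.8 mm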
Figure 1. Schematic diagram of ECT's forward and inverse problems [17].
Figure 2. Compaction curve of loess.
Figure 3. The diagram of the testing procedure.
Figure 4. C_s and moisture content variation in different specimens: (a) 10% initial moisture content; (b) 15% initial moisture content; (c) 20% initial moisture content.
Figure 5. ε_m and moisture content variation in different specimens: (a) 10% initial moisture content; (b) 15% initial moisture content; (c) 20% initial moisture content.
Figure 6. Ice fronts of different specimens: (a) 10% initial moisture content; (b) 15% initial moisture content; (c) 20% initial moisture content.
Figure 7. 2D image of the relative permittivity distribution at different specimen heights with 10% initial moisture content.
Figure 8. Three-dimensional interpolated cloud plot of relative permittivity: (a) 10% initial moisture content; (b) 15% initial moisture content; (c) 20% initial moisture content.
Figure 9. Simulation results of the temperature field after 24 h of freezing: (a) 10% initial moisture content; (b) 15% initial moisture content; (c) 20% initial moisture content.
Figure 10. Normalized simulation results of moisture content compared with measured ε_m after 24 h of freezing: (a) 10% initial moisture content; (b) 15% initial moisture content; (c) 20% initial moisture content.
22 pages, 5386 KiB  
Article
A Novel Multi-Sensor Nonlinear Tightly-Coupled Framework for Composite Robot Localization and Mapping
by Lu Chen, Amir Hussain, Yu Liu, Jie Tan, Yang Li, Yuhao Yang, Haoyuan Ma, Shenbing Fu and Gun Li
Sensors 2024, 24(22), 7381; https://doi.org/10.3390/s24227381 - 19 Nov 2024
Viewed by 396
Abstract
Composite robots often encounter difficulties due to changes in illumination, external disturbances, reflective surface effects, and cumulative errors. These challenges significantly hinder their capabilities in environmental perception and the accuracy and reliability of pose estimation. To overcome these issues, we propose a nonlinear optimization approach for an integrated localization and navigation framework, IIVL-LM (IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping). This framework achieves tightly coupled integration at the data level using inputs from an IMU (Inertial Measurement Unit), an infrared camera, an RGB (Red, Green and Blue) camera, and LiDAR. We propose a real-time luminance calculation model and verify its conversion accuracy. Additionally, we design a fast approximation method for the nonlinear weighted fusion of features from infrared and RGB frames based on luminance values. Finally, we optimize the VIO (Visual-Inertial Odometry) module in the R3LIVE++ (Robust, Real-time, Radiance Reconstruction with LiDAR-Inertial-Visual state Estimation) framework based on the infrared camera's capability to acquire depth information. In a controlled study using a simulated indoor rescue scenario dataset, the IIVL-LM system demonstrated significant performance enhancements in challenging luminance conditions, particularly in low-light environments. Specifically, the average RMSE ATE (Root Mean Square Error of the Absolute Trajectory Error) improved by 23% to 39%, with reductions of 0.006 to 0.013. We also conducted comparative experiments on the publicly available TUM-VI (Technical University of Munich Visual-Inertial) dataset without the infrared image input; no leading results were achieved there, which confirms the importance of infrared image fusion. By keeping at least three sensors actively engaged at all times, the IIVL-LM system significantly boosts its robustness in both unknown and expansive environments while ensuring high precision. This enhancement is particularly critical for applications in complex environments, such as indoor rescue operations. Full article
(This article belongs to the Special Issue New Trends in Optical Imaging and Sensing Technologies)
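The abstract describes a nonlinear weighted fusion of infrared and RGB features driven by a normalized luminance value: bright scenes favour RGB features, dark scenes favour infrared ones. A toy sketch of one way such a weighting could be implemented (the logistic weighting function, its parameters, and the feature-score inputs are assumptions for illustration, not the IIVL-LM formulation) is:

import numpy as np

def fusion_weight(luminance, midpoint=0.5, steepness=10.0):
    """Map normalized luminance in [0, 1] to an RGB weight in (0, 1).

    A logistic curve gives a smooth, nonlinear transition: dark scenes
    push the weight toward 0 (favour infrared), bright scenes toward 1.
    """
    return 1.0 / (1.0 + np.exp(-steepness * (luminance - midpoint)))

def fuse_feature_scores(rgb_scores, ir_scores, luminance):
    """Blend per-feature confidence scores from RGB and infrared frames."""
    w = fusion_weight(luminance)
    return w * np.asarray(rgb_scores) + (1.0 - w) * np.asarray(ir_scores)

# Hypothetical matched feature scores under two lighting conditions
rgb = np.array([0.9, 0.8, 0.7])
ir = np.array([0.6, 0.7, 0.9])
print(fuse_feature_scores(rgb, ir, luminance=0.15))  # dark: mostly infrared
print(fuse_feature_scores(rgb, ir, luminance=0.85))  # bright: mostly RGB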
Figure 1. IIVL-LM system framework applied to the composite robot.
Figure 2. Schematic diagram of each module of the IIVL-LM system.
Figure 3. Feature extraction performance of RGB and infrared images under extreme illuminance values. (a) Feature extraction performance of RGB images at a normalized illuminance value of 0.148. (b) Feature extraction performance of infrared images at a normalized illuminance value of 0.853.
Figure 4. Weight-based nonlinear interpolation frame method.
Figure 5. VIO (Optimized Visual-Inertial Odometry).
Figure 6. Schematic diagram of the IIVL-LM system and sensors deployed on composite robots. (a) Multi-sensor. (b) Composite robots.
Figure 7. Comparison of X/Y-axis data and the actual trajectory of composite robots under the IIVL-LM system. (a) Comparison of X-axis data. (b) Comparison of Y-axis data. (c) Actual testing and running trajectory of the composite robots.
Figure 8. Feature extraction results in the VIO module using RGB, infrared, and depth images under different lighting conditions in a small-scale indoor simulated environment. (a) Extraction of environmental features from RGB frames during the day. (b) Extraction of environmental features from infrared frames during the day. (c) Extraction of environmental features from infrared frames during the night. (d) Feature coordinates with depth values in the depth image.
Figure 9. Real-time reconstruction process and radiance map of the small-scale indoor environment. (a) Real-time reconstruction process of the map. (b) Reconstructed radiance map of the small-scale indoor environment.
Figure 10. Test conclusions and comparison under different illuminances. (a) RMSE ATE of all methods under different illuminance values. (b) Comparison between the various methods and the overall average.
Figure 11. Test conclusions and comparison over multiple sequences of the TUM-VI dataset. (a) RMSE ATE of all methods over multiple sequences of the TUM-VI dataset. (b) Comparison between the various methods and the overall average.
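Figures 10 and 11 report RMSE ATE, the root mean square of the translational error between estimated and ground-truth poses after trajectory alignment. A bare-bones version of that metric (assuming time synchronization and the usual Umeyama/SE(3) alignment have already been applied) can be written as:

import numpy as np

def rmse_ate(estimated, ground_truth):
    """RMSE of the Absolute Trajectory Error over aligned, time-synchronized
    position sequences, each shaped (n_poses, 3)."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)       # per-pose translation error
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy trajectories (metres); a small constant offset gives a small RMSE ATE
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.01, -0.02, 0.005])
print(round(rmse_ate(est, gt), 4))   # -> 0.0229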
Figure 12. The test scenario on ORB-SLAM3.