Search Results (20)

Search Parameters:
Keywords = AMCL

24 pages, 8552 KiB  
Article
Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms
by Jorge Galarza-Falfan, Enrique Efrén García-Guerrero, Oscar Adrian Aguirre-Castro, Oscar Roberto López-Bonilla, Ulises Jesús Tamayo-Pérez, José Ricardo Cárdenas-Valdez, Carlos Hernández-Mejía, Susana Borrego-Dominguez and Everardo Inzunza-Gonzalez
Technologies 2024, 12(6), 82; https://doi.org/10.3390/technologies12060082 - 3 Jun 2024
Viewed by 1467
Abstract
Machine learning technologies are being integrated into robotic systems at an increasing pace to enhance their efficacy and adaptability in dynamic environments. The primary goal of this research was to propose a method to develop an Autonomous Mobile Robot (AMR) that integrates Simultaneous Localization and Mapping (SLAM), odometry, and artificial vision based on deep learning (DL). All are executed on a high-performance Jetson Nano embedded system, with particular emphasis on SLAM-based obstacle avoidance and path planning using the Adaptive Monte Carlo Localization (AMCL) algorithm. Two Convolutional Neural Networks (CNNs) were selected due to their proven effectiveness in image and pattern recognition tasks. The ResNet18 and YOLOv3 algorithms facilitate scene perception, enabling the robot to interpret its environment effectively. Both algorithms were implemented for real-time object detection, identifying and classifying objects within the robot’s environment. These algorithms were selected to evaluate their performance metrics, which are critical for real-time applications. A comparative analysis of the proposed DL models focused on enhancing vision systems for autonomous mobile robots. Several simulations and real-world trials were conducted to evaluate the performance and adaptability of these models in navigating complex environments. The proposed vision system with the CNN ResNet18 achieved an average accuracy of 98.5%, a precision of 96.91%, a recall of 97%, and an F1-score of 98.5%. In comparison, the YOLOv3 model achieved an average accuracy of 96%, a precision of 96.2%, a recall of 96%, and an F1-score of 95.99%. These results underscore the effectiveness of the proposed intelligent algorithms, robust embedded hardware, and sensors in robotic applications. This study shows that advanced DL algorithms work well in robots and could be used in many fields, such as transportation and assembly. As a consequence of these findings, intelligent systems could be implemented more widely in the operation and development of AMRs. Full article
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
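The reported accuracy, precision, recall, and F1-score all follow from the confusion matrices shown in Figure 19. As a quick illustration of how these metrics are derived for the two path classes (free/blocked), here is a minimal Python sketch; the counts in the matrix are made-up placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical 2x2 confusion matrix for the "free path" / "blocked path" classes.
# Rows = true class, columns = predicted class; the numbers are placeholders.
cm = np.array([[485, 15],    # true "free":    485 correct, 15 misclassified
               [ 12, 488]])  # true "blocked": 12 misclassified, 488 correct

tp, fn = cm[1, 1], cm[1, 0]   # treat "blocked" as the positive class
fp, tn = cm[0, 1], cm[0, 0]

accuracy  = (tp + tn) / cm.sum()
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")
```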
Figures:
Figure 1. Trend of publications found on Scopus by year based on the following keywords: Path Planning AND Autonomous Mobile Robot AND Reinforcement Learning.
Figure 2. VOSviewer map of the connections between the 40 most relevant keywords found in the database acquired from Scopus.
Figure 3. Block diagram of the proposed AMR and its remote communication. Note: white arrows represent ROS communications; black arrows represent other types of communications such as I2C or digital I/O.
Figure 4. Graphic representation of the global and local coordinate systems.
Figure 5. Graphic representation of the robot's kinematic model.
Figure 6. Bottom view of the robot (units are in millimeters).
Figure 7. Control scheme used to adjust the output velocity signal based on the AMR's interaction with the environment.
Figure 8. ROS nodes (ovals) and their respective topics (rectangles).
Figure 9. Structure of YOLOv3, also referred to as Darknet-53, taken from [19].
Figure 10. Architecture of the ResNet18 CNN, taken from [51].
Figure 11. Robot in its environment.
Figure 12. Map used for tests, seen through Rviz. Red is the X_R-axis, green is the Y_R-axis, blue is the Z_R-axis, the yellow curve is the planned path, the cyan space is the map area where the robot can move, the purple objects are the obstacles, and pink is the boundary around the obstacles.
Figure 13. Real environment for the robot to navigate through during tests.
Figure 14. Local map that contains the dynamic obstacles. Red is the X_R-axis, green is the Y_R-axis, blue is the Z_R-axis, the cyan area in the map is LiDAR-detected free space, black is the dynamic obstacles, and grey is the detection area of the LiDAR.
Figure 15. Map created using the Hector SLAM algorithm. Dark blue is the boundary of the map and the fixed obstacles, the purple curve indicates the tracked path of the robot during the mapping process, and the yellow arrow is the final position and orientation of the robot.
Figure 16. Command velocity read during the movement of the AMR (yellow), the PID-controlled signal (blue), and the controller error (green).
Figure 17. Odometry readings of the X-axis (blue) and Y-axis (orange) during the AMR's movement.
Figure 18. Odometry orientation reading from ROS during the AMR's movement. The blue curve represents the adjustment of the odometry.
Figure 19. Confusion matrices showing the correctly identified classes in green and the incorrect ones in red for the CNN artificial vision models.
Figure 20. Robot's field of view with the two possible classes.
Figure 21. Predictions of the YOLOv3 model for both classes. The red square represents a blocked path, and the pink square represents a free path.
Figure 22. Pre-trained YOLOv3 model, capable of detecting up to 80 classes.
23 pages, 2622 KiB  
Article
L-PCM: Localization and Point Cloud Registration-Based Method for Pose Calibration of Mobile Robots
by Dandan Ning and Shucheng Huang
Information 2024, 15(5), 269; https://doi.org/10.3390/info15050269 - 10 May 2024
Viewed by 998
Abstract
The autonomous navigation of mobile robots comprises three parts: map building, global localization, and path planning. Precise pose data directly affect the accuracy of global localization. However, cumulative sensor errors and the variety of estimation strategies lead to large gaps in pose accuracy. To address these problems, this paper proposes a pose calibration method based on localization and point cloud registration, called L-PCM. Firstly, the method obtains the odometer and IMU (inertial measurement unit) data through the sensors mounted on the mobile robot and uses the UKF (unscented Kalman filter) algorithm to filter and fuse the odometer and IMU data to obtain the estimated pose of the mobile robot. Secondly, the AMCL (adaptive Monte Carlo localization) is improved by combining the UKF fusion model of the IMU and odometer to obtain the modified global initial pose of the mobile robot. Finally, PL-ICP (point to line-iterative closest point) point cloud registration is used to calibrate the modified global initial pose to obtain the global pose of the mobile robot. Through simulation experiments, it is verified that the UKF fusion algorithm can reduce the influence of cumulative errors and the improved AMCL algorithm can optimize the pose trajectory. The average position error is about 0.0447 m, and the average angle error stabilizes at about 0.0049 degrees. Meanwhile, it has been verified that L-PCM is significantly better than the existing AMCL algorithm, with a position error of about 0.01726 m and an average angle error of about 0.00302 degrees, effectively improving the accuracy of the pose. Full article
(This article belongs to the Section Artificial Intelligence)
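The abstract summarizes the method's quality with average position and angle errors between estimated and ground-truth trajectories. Below is a minimal sketch of how such averages might be computed; the trajectory arrays are illustrative placeholders, and the angle wrapping is a standard choice rather than anything stated in the paper.

```python
import numpy as np

def trajectory_errors(gt_xy, est_xy, gt_yaw, est_yaw):
    """Mean Euclidean position error and mean absolute angle error (degrees)."""
    pos_err = np.linalg.norm(gt_xy - est_xy, axis=1).mean()
    # Wrap angle differences to [-pi, pi] before averaging their magnitudes.
    dyaw = np.arctan2(np.sin(gt_yaw - est_yaw), np.cos(gt_yaw - est_yaw))
    ang_err = np.degrees(np.abs(dyaw)).mean()
    return pos_err, ang_err

# Placeholder trajectories (N x 2 positions in meters, N yaw angles in radians).
gt_xy  = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]])
est_xy = gt_xy + np.array([[0.02, -0.01], [0.01, 0.03], [-0.02, 0.02]])
gt_yaw  = np.array([0.0, 0.1, 0.2])
est_yaw = gt_yaw + np.array([0.001, -0.002, 0.001])

print(trajectory_errors(gt_xy, est_xy, gt_yaw, est_yaw))
```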
Figures:
Figure 1. The architecture of the L-PCM pose calibration.
Figure 2. Simulation environment. This environment simulates an indoor setting, i.e., an enclosed space. Cylinders and squares in the room are obstacles.
Figure 3. True and fused pose trajectories. Black is the true pose trajectory, blue is the EKF-estimated pose trajectory, and red is the UKF-estimated pose trajectory.
Figure 4. Error between the true and estimated poses. Blue represents the error of the EKF-estimated pose versus the true pose, and red represents the error of the UKF-estimated pose versus the true pose.
Figure 5. Pose tracking. The left figure shows the particle distribution during initialization, and the right figure shows the particle distribution after the mobile robot has moved a certain distance. Black represents the obstacles. Red shows the improved AMCL algorithm's particle swarm.
Figure 6. Global localization. The left figure shows the particle distribution during initialization, and the right figure shows the particle distribution after the mobile robot has moved a certain distance. Red shows the improved AMCL algorithm's particle swarm. Black represents the obstacles.
Figure 7. Original AMCL pose trajectory and improved AMCL pose trajectory. Black is the true pose trajectory, blue is the original AMCL pose trajectory, and red is the improved AMCL pose trajectory.
Figure 8. Original AMCL pose error and improved AMCL pose error; (a) is the position error of the original AMCL and the improved AMCL, and (b) is the angle error of the original AMCL and the improved AMCL. Blue shows the error between the estimated and true pose of the original AMCL, and red represents the error between the estimated and true pose of the improved AMCL.
Figure 9. L-PCM calibrated pose trajectory and improved AMCL pose trajectory. Black represents the real pose trajectory, red represents the L-PCM calibrated pose trajectory, and blue represents the improved AMCL pose trajectory.
Figure 10. Odom pose error and L-PCM pose error; (a) is the position error of odom and L-PCM, and (b) is the angle error of odom and L-PCM. Blue indicates the error of odom and red indicates the error of L-PCM.
Figure 11. Gmapping, Cartographer, and L-PCM pose error; (a) is the position error of Gmapping, Cartographer, and L-PCM, and (b) is the angle error of Gmapping, Cartographer, and L-PCM. Green indicates the error of Gmapping, blue indicates the error of Cartographer, and red indicates the error of L-PCM.
30 pages, 12209 KiB  
Article
Application and Research on Improved Adaptive Monte Carlo Localization Algorithm for Automatic Guided Vehicle Fusion with QR Code Navigation
by Bowen Zhang, Shiyun Li, Junting Qiu, Gang You and Lishuang Qu
Appl. Sci. 2023, 13(21), 11913; https://doi.org/10.3390/app132111913 - 31 Oct 2023
Cited by 5 | Viewed by 1367
Abstract
SLAM (simultaneous localization and mapping) technology incorporating QR code navigation has been widely used in the mobile robotics industry. However, the particle kidnapping problem, positioning accuracy, and navigation time are still urgent issues to be solved. In this paper, a SLAM-fused QR code navigation method is proposed, and an improved adaptive Monte Carlo localization algorithm is used to fuse the QR code information. Firstly, the generation and resampling methods of the initialized particle swarms are refined to improve the robustness and weights of the swarms and to avoid the kidnapping problem. Secondly, the Gmapping scan data and the data generated by the improved AMCL algorithm are fused using the extended Kalman filter to improve the accuracy and stability of the state estimation. Finally, in terms of the positioning system, Gmapping is used to obtain QR code data as marker positions on static maps, and the improved adaptive Monte Carlo localization particle positioning algorithm is matched with a library of QR code templates, which corrects for offset distances and achieves precise point-to-point positioning on grey-valued raster maps. The experimental results show that particles subjected to kidnapping can quickly recover their position, with a 68.73% improvement in adjustment time, a 64.27% improvement in navigation and positioning accuracy, and a 42.81% reduction in positioning time. Full article
(This article belongs to the Special Issue Advances in Robot Path Planning, Volume II)
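The method depends on reading QR code landmarks with an on-board camera and matching them against a template library. A hedged sketch of just the detection step using OpenCV's QRCodeDetector is shown below; the image path and the idea of looking the payload up in a landmark map are assumptions for illustration, not details taken from the paper.

```python
import cv2

detector = cv2.QRCodeDetector()
frame = cv2.imread("floor_tag.png")  # placeholder path, not from the paper

if frame is None:
    print("Could not read the test image.")
else:
    # detectAndDecode returns the decoded payload, the 4 corner points,
    # and a rectified binary QR image (unused here).
    payload, corners, _ = detector.detectAndDecode(frame)
    if payload:
        # In the paper's pipeline the payload would index a template library
        # of QR landmarks on the static map; here we only report it.
        print("QR landmark payload:", payload)
        print("Corner pixel coordinates:\n", corners.reshape(-1, 2))
    else:
        print("No QR code detected in this frame.")
```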
Figures:
Figure 1. System framework.
Figure 2. The overall architecture of the AMCL workflow.
Figure 3. AGV structure and motion modelling: (a) chassis structure; (b) movement model.
Figure 4. The effect of the improved algorithm.
Figure 5. Simulation results: (a) comparison of mean square error results; (b) simulation results of heading angle error.
Figure 6. Initial particle generation: (a) obtaining the covariance matrix of the position; (b) updating and re-prediction of particle positions.
Figure 7. Particle weights and updating of iterative particles; the weight of yellow particles is smaller than that of light green particles, and the weight of dark green particles is the largest: (a) update particle weights; (b) filter less-weighted particles.
Figure 8. Analysis of clustering statistics results: (a) sampled particles are divided into regions based on weights; (b) clustering to find particles with higher weights; (c) comparison of weighting information for different clustered regions.
Figure 9. Simulation path planning: (a) from the starting point to the target point; (b) return from the target point to the starting point.
Figure 10. AMCL algorithm simulation: (a) trace of error; (b) trajectory map; (c) x-axis speed; (d) y-axis speed.
Figure 11. QR code scanning recognition process.
Figure 12. Coordinate transformation between QR code and AGV. (O_W − X_W Y_W Z_W) represents the world coordinate system describing the camera position, (O_C − X_C Y_C Z_C) represents the camera coordinate system, (o − xy) represents the image coordinate system, and f is the camera focal length, equal to the distance from o to O_C. (a) Relationship between the world, camera, image-physical, and pixel coordinate systems; (b) relationship between the camera coordinate system and the image physical coordinate system.
Figure 13. Identification code used in navigation: (a) foundation template; (b) composite template; (c) template pasted in the experiment.
Figure 14. Process of AMCL particle distribution and triangulation matching. The numbers of the QR code reflection surfaces detected in (a–c) are the IDs of the QR code landmarks in the template library: (a) particles spread over the entire map; (b) particles converge to the landmark position of the QR code; (c) triangular matching positioning is used to increase the weight when scanning the QR code; (d) a, b, and c are the detected QR code reflecting surfaces; (e) the template library contains all kinds of triangles; (f) the two triangles are proportional in length and match in shape.
Figure 15. Building a simulation environment: (a) synchronized environment in Rviz; (b) synchronized environment in Gazebo; (c) simulation model.
Figure 16. Improved AMCL algorithm localization process for QR code navigation fusion.
Figure 17. Simulation test: (a) convergence to the true value; (b) the estimated value is near the true value.
Figure 18. AMCL algorithm and improved AMCL algorithm particle changes: (a) AMCL algorithm particle changes at the starting point; (b) AMCL algorithm particle changes at the end point; (c) improved AMCL algorithm particle changes at the starting point; (d) improved AMCL algorithm particle changes at the end point.
Figure 19. Particle distribution and position error: (a) particle distributions generated by the AMCL algorithm and the improved algorithm on trajectories; (b) AMCL and improved AMCL algorithm processing of information in the case of kidnapping.
Figure 20. Comparison of simulation experiment data: (a) distance offset on both sides when travelling; (b) relationship between error and time; (c) line speed fluctuation during scanning; (d) angular velocity fluctuation during scanning.
Figure 21. Real scene testing: (a) AGV front; (b) AGV side; (c) experimental site and AGVs used; (d) a pasted column of QR code landmarks.
Figure 22. Gmapping build synchronization action: (a) a half-turn from 0 to 1; (b) a half-turn from 1 to 0.
Figure 23. The scanning process: (a) synchronized Rviz; (b) scanning the QR code.
Figure 24. Kidnapping and location recovery: (a) initial stage; (b) travelling to QR2; (c) kidnapping situation; (d) resolution of the kidnapping.
19 pages, 11821 KiB  
Article
Fabrication of Bimetallic High-Strength Low-Alloy Steel/Si-Bronze Functionally Graded Materials Using Wire Arc Additive Manufacturing
by Marwan M. El-Husseiny, Abdelrahman A. Baraka, Omar Oraby, Ehab A. El-Danaf and Hanadi G. Salem
J. Manuf. Mater. Process. 2023, 7(4), 138; https://doi.org/10.3390/jmmp7040138 - 31 Jul 2023
Cited by 3 | Viewed by 2414
Abstract
In this paper, bimetallic functionally graded structures were fabricated using wire and arc additive manufacturing (WAAM). The bimetallic walls were built by depositing Si-Bronze and high-strength low-alloy (HSLA) steel, successively. The microstructural evolution of the built structures, especially within the fusion zone between the dissimilar alloys, was investigated in relation to their mechanical properties. The built bimetallic walls showed a high level of integrity. An overall interface length of 9 mm was investigated for microstructural evolution, elemental mapping and microhardness measurements along the building direction. Microhardness profiles showed a gradual transition in hardness passing through the diffusion zone with no evidence for intermetallic compounds. Failure of the tensile specimens occurred at the Si-Bronze region, as expected. Bending tests confirmed good ductility of the joint between the dissimilar alloys. Direct shear test results proved a shear strength comparable to that of HSLA steel. The obtained results confirm that it is appropriate to fabricate HSLA steel/Si-Bronze FGMs using WAAM technology. Full article
(This article belongs to the Special Issue Advances in Metal Additive Manufacturing/3D Printing)
Figures:
Figure 1. The additive/subtractive robotic-controlled manufacturing system, AMCL [20].
Figure 2. (a) Schematic for the deposited FGM walls and their deposition strategy, (b) schematic showing the first deposited HSLA steel overlapping double beads on the mild steel substrate, and (c) schematic for the first deposited Si-Bronze single bead on the HSLA steel wall.
Figure 3. Schematic representation of the characterization samples cut from the deposited functionally graded material of HSLA steel/Si-Bronze walls.
Figure 4. Three-point bending test setup.
Figure 5. (a) Direct single shear test setup and (b) shear test specimen.
Figure 6. A sample of the deposited walls (a) as-WAAM-ed and (b) after machining.
Figure 7. (a) Panoramic image of the interface cross section and (b) a bar chart showing the corresponding elemental distribution for each of the ten shown locations (i–x) along the build direction.
Figure 8. SEM micrograph (left) together with EDS mapping (right) for HSLA steel locations (i, ii).
Figure 9. SEM micrograph together with EDS elemental area mapping of the intermixing zone along the interface boundary.
Figure 10. Iron–copper phase diagram showing the wide miscibility gap between both metals [24].
Figure 11. SEM images together with elemental mapping of formed structures between HSLA steel and Si-Bronze along the building direction pointed out by the arrow on the left-hand side of the figure; (a–c) locations x and ix; (d–f) locations viii and vii; (g–i) location vi; (j–l) location v; (m–o) location iv; (p–r) location iii.
Figure 12. XRD analysis in different regions along the build direction together with the corresponding microstructures: (a) HSLA steel region, (b) fusion zone, (c) Si-Bronze region.
Figure 13. Micro-hardness color contour shown on the right of the interface microstructure.
Figure 14. Engineering stress–strain curves for three WAAM-ed samples: (a) unmixed HSLA steel, (b) HSLA steel/Si-Bronze FGM, and (c) unmixed Si-Bronze.
Figure 15. SEM micrographs of the fractured FGM specimen shown in Figure 14b.
Figure 16. Flexural stress–average displacement curve for the bimetallic tested sample.
Figure 17. Three-point bending tested specimen (a) after dye penetrant test and (b) image of the specimen at a 90° bending angle.
Figure 18. Fractured surfaces after the direct single shear tests. (a,b) Macro-images and (c–h) SEM micrographs at different magnifications.
17 pages, 1935 KiB  
Article
An Evaluation of Radon in Drinking Water Supplies in Major Cities of the Province of Chimborazo, Central Andes of Ecuador
by Jheny Orbe, José Luis Herrera-Robalino, Gabriela Ureña-Callay, Jonatan Telenchano-Ilbay, Shirley Samaniego-León, Augusto Fienco-Bacusoy, Andrea Cando-Veintimilla and Theofilos Toulkeridis
Water 2023, 15(12), 2255; https://doi.org/10.3390/w15122255 - 16 Jun 2023
Cited by 2 | Viewed by 1782
Abstract
The activity concentrations of 222Rn were measured in 53 public water supplies of underground (50) and surface (3) origin, and their relation to the five geological units in which these supplies are located, in the central Ecuadorian Andes, was also explored. These units supply drinking water to 10 cities located between 1500 and 3120 m a.s.l. The experimental setup consisted of the RAD7 radon detector and the RAD H2O degassing system. The 222Rn levels measured in groundwater ranged from 0.53 to 14.78 Bq/L, while surface waters did not show detectable radon levels. The radon concentrations were below the parametric value of 100 Bq/L for water intended for human consumption, recommended by the European Atomic Energy Community (EURATOM) in its Directive 2013/51, and the alternative maximum contaminant level (AMCL) of 150 Bq/L proposed by the Environmental Protection Agency (EPA). The Pisayambo Volcanic unit, mapped as intermediate volcaniclastic to felsic deposits, presented a mean radon concentration higher than the other geological units and lithologies (9.58 ± 3.04 Bq/L). The Cunupogyo well (11.36 ± 0.48 Bq/L) presented a radon concentration more than 70% higher than the neighboring springs, which may be explained by its proximity to the Pallatanga geological fault. The maximum annual effective doses, by city, due to the ingestion and inhalation of radon ranged from 0.010 to 0.108 mSv and from 0.008 to 0.091 mSv, respectively; therefore, these waters do not represent a risk to the health of the population. In addition, a correlation was observed between the activity concentration of 222Rn and the activity concentration of the parent 226Ra in samples collected from some springs. Full article
(This article belongs to the Section Water Quality and Contamination)
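The annual effective doses quoted above combine the measured radon concentration with intake and dose-conversion assumptions. The sketch below only shows the general structure of such an ingestion-dose estimate; every numeric coefficient in it is a placeholder to be replaced with the UNSCEAR/EURATOM values actually used by the authors, which are not given in this listing.

```python
# Hypothetical parameters -- placeholders, NOT the values used in the paper.
RN_CONC_BQ_PER_L = 9.58           # measured 222Rn activity concentration (Bq/L)
WATER_INTAKE_L_PER_YEAR = 60.0    # assumed annual intake of untreated tap water
DCF_INGESTION_SV_PER_BQ = 3.5e-9  # ingestion dose coefficient; verify against
                                  # UNSCEAR/EURATOM before relying on this number

# Annual effective dose from ingestion: concentration x intake x dose coefficient.
annual_dose_msv = (RN_CONC_BQ_PER_L * WATER_INTAKE_L_PER_YEAR
                   * DCF_INGESTION_SV_PER_BQ) * 1e3  # Sv -> mSv
print(f"Annual ingestion dose (placeholder inputs): {annual_dose_msv:.4f} mSv")
```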
Figures:
Figure 1. Geological map of the province of Chimborazo with the sampling points monitored in this study. The inset indicates the position of Ecuador and the Province of Chimborazo.
Figure 2. Mean values of radon concentrations of springs and wells, according to each geological unit–lithology in the Chimborazo province (L1–L5). Error bars represent the standard deviation.
Figure 3. Comparison between the ordinary mean and the weighted mean of radon concentrations of three samples collected in the same drinking water supplies: (a) Plaza de Rastro reservoir; (b) Catequilla spring.
Figure 4. Correlation of 226Ra and 222Rn concentrations, radionuclides measured in water samples from the springs detailed in Table 7, located in geological units–lithologies consisting of volcaniclastic deposits.
18 pages, 8783 KiB  
Article
Indoor Localization Method for a Mobile Robot Using LiDAR and a Dual AprilTag
by Yuan-Heng Huang and Chin-Te Lin
Electronics 2023, 12(4), 1023; https://doi.org/10.3390/electronics12041023 - 18 Feb 2023
Cited by 9 | Viewed by 2819
Abstract
Global localization is one of the most important issues for mobile robots to achieve indoor navigation. Nowadays, most mobile robots rely on light detection and ranging (LiDAR) and adaptive Monte Carlo localization (AMCL) to realize their localization and navigation. However, the reliability and performance of global localization using LiDAR alone are restricted by the monotony of the sensed features. This study proposes a global localization approach that improves mobile robot global localization using LiDAR and a dual AprilTag. Firstly, the spatial coordinate system constructed with two neighboring AprilTags is applied as the reference basis for global localization. Then, the robot pose can be estimated by generating a precise initial particle distribution for AMCL based on the relative tag positions. Finally, in pose tracking, the count and distribution of AMCL particles, which evaluate the certainty of localization, are continuously monitored to update the real-time position of the robot. The contributions of this study are listed as follows. (1) Compared to the localization method using only LiDAR, the proposed method can locate the robot's position within a few iterations and with less computing power. (2) The failure of localization caused by many similar indoor features can be avoided. (3) The error of global localization can be limited to an acceptable range compared to the result obtained using a single tag. Full article
(This article belongs to the Special Issue Selected Papers from Advanced Robotics and Intelligent Systems 2021)
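The key geometric step is recovering the robot pose from two tags whose map positions are known (the "alignment with two reference points" of Figure 3c). A minimal sketch of that two-point rigid alignment is given below; the tag coordinates are placeholders and the function is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def robot_pose_from_two_tags(m1, m2, r1, r2):
    """Estimate the planar robot pose (x, y, yaw) in the map frame.

    m1, m2: known map-frame positions of two neighboring AprilTags.
    r1, r2: the same tags as observed in the robot frame.
    Assumes the rigid 2D relation p_map = R(yaw) @ p_robot + t, where
    (t, yaw) is the robot pose; a geometric sketch of the two-reference-point
    idea, not the paper's exact algorithm.
    """
    m1, m2, r1, r2 = (np.asarray(p, dtype=float) for p in (m1, m2, r1, r2))
    d_m, d_r = m2 - m1, r2 - r1
    yaw = np.arctan2(d_m[1], d_m[0]) - np.arctan2(d_r[1], d_r[0])
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    t = m1 - R @ r1
    return float(t[0]), float(t[1]), float(yaw)

# Placeholder coordinates in meters: tags at known map positions,
# observed from the (unknown) robot pose.
print(robot_pose_from_two_tags((2.0, 3.0), (2.0, 4.0), (0.5, -1.0), (1.5, -1.0)))
```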
Figures:
Figure 1. The procedure of the localization system in this study.
Figure 2. Detailed process of the localization system in this study. (a) Global localization; (b) pose tracking; (c) kidnapping problem check.
Figure 3. Schematic diagram of the concept of pose estimation: (a) misalignment; (b) location correction with one reference point; (c) alignment with two reference points.
Figure 4. Approaches to determine the robot position using the dual AprilTag: (a) localization problem; (b) shift; (c) rotate; (d) result.
Figure 5. Schematic diagram of the inaccuracy of the single-tag global positioning algorithm due to observation error.
Figure 6. The relationship between the observed distance and the translation and rotation error under ε_x = ε_y = 1, k = 100, θ_{M,Ta} = 30°.
Figure 7. FESTO Robotino with LiDAR and camera.
Figure 8. Digital map and tag database.
Figure 9. Results of global localization using the dual AprilTag.
Figure 10. Results of global localization using multiple AprilTags: (a) a mobile robot and three AprilTags; (b) before localization; (c) result.
Figure 11. Factory case field: (a) a factory; (b) robot moving path.
Figure 12. Trap case field: (a) a designed rectangular bound built from boxes; (b) robot moving path.
Figure 13. Particle distributions at path points during global localization in the factory case study. (a) Classic; (b) proposed.
Figure 14. Convergence of the particle state at path points during global localization in the factory case study. Variance of particles on (a) the X-axis and (b) the Y-axis.
Figure 15. Particles used during global localization in the factory case (the setting range is 500–2000).
Figure 16. Particle distributions at path points during global localization in the trap case study. (a) Classic; (b) proposed.
Figure 17. Convergence of the particle state at path points during global localization in the trap case study. Particle variance on (a) the X-axis and (b) the Y-axis.
Figure 18. Experimental environment for the stability of a single tag and two tags.
Figure 19. The relationships between localization error and relative distance. (a) Error in the x direction; (b) error in the y direction; (c) error in rotation; (d) rotation error from the AprilTag.
22 pages, 18228 KiB  
Article
Integration of Real-Time Semantic Building Map Updating with Adaptive Monte Carlo Localization (AMCL) for Robust Indoor Mobile Robot Localization
by Matthew Peavy, Pileun Kim, Hafiz Oyediran and Kyungki Kim
Appl. Sci. 2023, 13(2), 909; https://doi.org/10.3390/app13020909 - 9 Jan 2023
Cited by 10 | Viewed by 3558
Abstract
A robot can accurately localize itself and navigate in an indoor environment based on information about the operating environment, often called a world or a map. While typical maps describe structural layouts of buildings, the accuracy of localization is significantly affected by non-structural building elements and common items, such as doors, appliances, and furniture. This study enhances the robustness and accuracy of indoor robot localization by dynamically updating the semantic building map with non-structural elements detected by sensors. We propose modified Adaptive Monte Carlo Localization (AMCL), integrating object recognition and map updating into the traditional probabilistic localization. With the proposed approach, a robot can automatically correct errors caused by non-structural elements by updating a semantic building map reflecting the current state of the environment. Evaluations in kidnapped robot and traditional localization scenarios indicate that more accurate and robust pose estimation can be achieved with the map updating capability. Full article
(This article belongs to the Section Robotics and Automation)
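The core of the approach is writing detected non-structural objects back into the map the robot localizes against. The sketch below shows one simple way such an update could look on a ROS-style occupancy grid, assuming an axis-aligned footprint for the detected object; the grid size, resolution, and bounding box are placeholders, and the paper's semantic/BIM update is considerably richer than this.

```python
import numpy as np

def update_map_with_object(grid, resolution, origin_xy, bbox_min_xy, bbox_max_xy,
                           occupied_value=100):
    """Mark the footprint of a detected object as occupied in a 2D grid.

    grid:       2D numpy array with ROS-style occupancy values (0 free, 100 occupied).
    resolution: meters per cell.
    origin_xy:  world coordinates of grid cell (0, 0).
    bbox_*:     axis-aligned footprint of the detected object in world coordinates.
    A simplified stand-in for the paper's semantic map updating step.
    """
    (x0, y0), (x1, y1) = bbox_min_xy, bbox_max_xy
    ox, oy = origin_xy
    c0, c1 = int((x0 - ox) / resolution), int(np.ceil((x1 - ox) / resolution))
    r0, r1 = int((y0 - oy) / resolution), int(np.ceil((y1 - oy) / resolution))
    grid[max(r0, 0):r1, max(c0, 0):c1] = occupied_value
    return grid

# 10 m x 10 m map at 5 cm resolution; insert a sofa-sized footprint.
grid = np.zeros((200, 200), dtype=np.int8)
update_map_with_object(grid, 0.05, (0.0, 0.0), (2.0, 1.0), (3.8, 1.9))
print("occupied cells:", int((grid == 100).sum()))
```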
Figures:
Figure 1. Overall framework.
Figure 2. AMCL localization errors (a) due to the interpretation of the sofa front as a wall and (b) doors being mistakenly recognized as a contiguous wall.
Figure 3. URDF building model creation for semantic localization: (a) conventional static door element in BIM, (b) dynamic door element with joint control, (c) 3D URDF building model with static and dynamic building elements, and (d) part of the URDF building file presenting dynamic building elements.
Figure 4. Point cloud processing pipeline for isolating and determining the location of a general object.
Figure 5. Point cloud isolation and boundary detection for identified objects.
Figure 6. PCL pipeline for isolating and determining the location of a door.
Figure 7. Pseudo-code for identifying doors and inserting them into the navigation map and BIM.
Figure 8. Example of object recognition and navigation map updating. (a) Robot observing a sofa in a Gazebo simulation, (b) robot's internal state before map updating, and (c) robot's internal state after map updating.
Figure 9. (a) Furnished apartment for the experiment and (b) Turtlebot2 with RGBD and 2D LiDAR sensors.
Figure 10. (a) Navigation path for traditional localization and (b) robot locations for kidnapped localization.
Figure 11. Robot navigation along a given path with object recognition and map updating.
Figure 12. Traditional localization scenario: navigation without map updating (top) and traditional navigation with map updating (bottom).
Figure 13. Evolution of particle swarms for the actual door configuration OOOOXO using an accurate map (OOOOXO) and an inaccurate map (OOOOOO).
Full article ">
27 pages, 22585 KiB  
Article
Improved LiDAR Localization Method for Mobile Robots Based on Multi-Sensing
by Yanjie Liu, Chao Wang, Heng Wu, Yanlong Wei, Meixuan Ren and Changsen Zhao
Remote Sens. 2022, 14(23), 6133; https://doi.org/10.3390/rs14236133 - 3 Dec 2022
Cited by 30 | Viewed by 3516
Abstract
In this paper, we propose a localization method applicable to 3D LiDAR that improves a LiDAR localization algorithm such as AMCL (Adaptive Monte Carlo Localization). The method utilizes multiple sources of sensing information, including 3D LiDAR, an IMU and the odometer, and can be used without GNSS. Firstly, the wheel-speed odometer and IMU data of the mobile robot are fused by an EKF (Extended Kalman Filter), and the sensor data obtained after multi-source fusion are used as the motion model for the positional prediction of the particle set in AMCL to obtain the initial positioning information of the mobile robot. Then, the pose difference values output by AMCL at adjacent moments are substituted into the PL-ICP algorithm as the initial pose transformation matrix, the 3D laser point clouds are aligned using the PL-ICP algorithm, and the resulting nonlinear system is solved by LM (Levenberg–Marquardt) iteration to obtain the three-dimensional laser odometer. Finally, the initial pose output by AMCL is corrected by the three-dimensional laser odometer, and the AMCL particles are weighted and resampled to output the final positioning result of the mobile robot. Through simulation and practical experiments, it is verified that the improved AMCL algorithm has higher positioning accuracy and stability than the original AMCL algorithm. Full article
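The pipeline passes the AMCL pose difference between adjacent moments to PL-ICP as the initial transformation and uses the refined result to correct the pose. A hedged sketch of that "initial guess, then scan matching" step is shown below, using Open3D's point-to-point ICP as a stand-in for PL-ICP; the clouds, the 0.3 m correspondence distance, and the identity initial guess are placeholders.

```python
import numpy as np
import open3d as o3d

def refine_pose_with_icp(source_pts, target_pts, init_T):
    """Refine an AMCL pose guess by aligning the current scan to a reference cloud.

    source_pts, target_pts: (N, 3) numpy arrays.
    init_T: 4x4 initial transform built, e.g., from the AMCL pose difference
            between adjacent moments, analogous to the paper's use of PL-ICP.
    Open3D's point-to-point ICP is used here for brevity as a stand-in for
    the point-to-line (PL-ICP) variant described in the abstract.
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, 0.3, init_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined 4x4 correction to apply to the pose

# Placeholder clouds: the "scan" is the reference cloud shifted by a few cm.
reference = np.random.rand(500, 3)
scan = reference + np.array([0.05, 0.02, 0.0])
print(refine_pose_with_icp(scan, reference, np.eye(4)))
```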
Figures:
Figure 1. Framework of the improved AMCL localization algorithm based on multi-sensing.
Figure 2. AMCL algorithm flow based on the EKF fusion improvement.
Figure 3. Principle of the PL-ICP algorithm.
Figure 4. Simulation experiment environment. (a) is the virtual simulation scene, and (b) is the related model of the mobile robot in Gazebo. In (b), the yellow module is the mobile robot chassis, the blue module is the Velodyne 16-line LiDAR, and the red module is the IMU inertial measurement unit.
Figure 5. Point cloud correction before and after on the navigation map. The left figure shows before correction and the right figure after correction; white shows the 2D laser point cloud data obtained after conversion of the 3D laser point cloud data, green shows the AMCL algorithm particle swarm, and light purple shows the cost map after the expansion.
Figure 6. Results of the first group of AMCL simulation positioning experiments.
Figure 7. Results of the second group of AMCL simulation positioning experiments.
Figure 8. Results of the third group of AMCL simulation positioning experiments.
Figure 9. Results of the first group of improved AMCL simulation positioning experiments.
Figure 10. Results of the second group of improved AMCL simulation positioning experiments.
Figure 11. Results of the third group of improved AMCL simulation positioning experiments.
Figure 12. Results of the first group of Cartographer simulation positioning experiments.
Figure 13. Results of the second group of Cartographer simulation positioning experiments.
Figure 14. Results of the third group of Cartographer simulation positioning experiments.
Figure 15. Results of the first group of hdl_localization simulation positioning experiments.
Figure 16. Results of the second group of hdl_localization simulation positioning experiments.
Figure 17. Results of the third group of hdl_localization simulation positioning experiments.
Figure 18. Experimental equipment. (a) shows the mobile robot platform and (b) shows the Leica Geosystems absolute tracker (AT960).
Figure 19. Point cloud correction before and after on the navigation map. The left figure shows before correction and the right figure after correction; the colored points show the initial 3D laser point cloud data and white shows the 2D laser point cloud data obtained after conversion.
Figure 20. Actual positioning error of the mobile robot. The left figure shows the position error of the mobile robot platform and the right figure shows the angle error of the mobile robot platform.
14 pages, 8912 KiB  
Article
Vision-Sensor-Assisted Probabilistic Localization Method for Indoor Environment
by Hui Shi, Jianyu Yang, Jiashun Shi, Lida Zhu and Guofa Wang
Sensors 2022, 22(19), 7114; https://doi.org/10.3390/s22197114 - 20 Sep 2022
Cited by 4 | Viewed by 1477
Abstract
Among the numerous indoor localization methods, Light-Detection-and-Ranging (LiDAR)-based probabilistic algorithms have been extensively applied to indoor localization due to their real-time performance and high accuracy. Nevertheless, these methods are challenged in symmetrical environments when tackling global localization and the robot kidnapping problem. In this paper, a novel hybrid method that combines visual and probabilistic localization results is proposed. Augmented Monte Carlo Localization (AMCL) is improved for continual position tracking. The uncertainty of LiDAR-based measurements is evaluated in order to incorporate discrete visual-based results; therefore, better particle diversity can be maintained. The robot kidnapping problem can be detected and solved by preventing premature convergence of the particle filter. Extensive experiments were implemented to validate the robustness and accuracy of the method. Meanwhile, the localization error was reduced from 30 mm to 9 mm during a 600 m tour. Full article
(This article belongs to the Section Navigation and Positioning)
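The visual part of the method recognizes deployed markers, and Figure 4 shows HOG features at different cell sizes. A minimal sketch of extracting a HOG descriptor with scikit-image follows; the sample image and the cell/block sizes are placeholders, not the parameters used in the paper.

```python
from skimage import data
from skimage.feature import hog

# Grayscale test image standing in for a camera view of a wall-mounted marker.
image = data.camera()

# HOG descriptor; the orientation, cell, and block sizes here are illustrative.
features, hog_image = hog(
    image,
    orientations=9,
    pixels_per_cell=(16, 16),
    cells_per_block=(2, 2),
    visualize=True,
)
print("HOG feature vector length:", features.shape[0])
```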
Figures:
Figure 1. Confusing range-finder measurements in a repetitive environment.
Figure 2. Architecture of the proposed method.
Figure 3. Markers deployed in the environment.
Figure 4. HOG features with different cell sizes.
Figure 5. Global localization with markers.
Figure 6. Experiment environment and platform.
Figure 7. Position tracking comparison of different algorithms.
Figure 8. Localization error of different algorithms.
Figure 9. Particle number applied by the conventional AMCL. (a) Particle number with respect to sample steps during the whole process with a maximum of 5000 particles. (b) Particle number with respect to sample steps during the whole process with a maximum of 1000 particles.
Figure 10. Kidnapping problem with conventional AMCL. (a–c) show the particle distribution when the kidnapping problem happens; (1–3) show the robot's corresponding real pose.
Figure 11. Kidnapping problem with the proposed method. (a) is the real pose in the environment shown in (b); (c) illustrates the estimation deviation when the marker is not recognized.
26 pages, 24597 KiB  
Article
Performance Analysis of Localization Algorithms for Inspections in 2D and 3D Unstructured Environments Using 3D Laser Sensors and UAVs
by Paul Espinosa Peralta, Marco Andrés Luna, Paloma de la Puente, Pascual Campoy, Hriday Bavle, Adrián Carrio and Christyan Cruz Ulloa
Sensors 2022, 22(14), 5122; https://doi.org/10.3390/s22145122 - 7 Jul 2022
Cited by 3 | Viewed by 2227
Abstract
One of the most relevant problems related to Unmanned Aerial Vehicle (UAV) autonomous navigation for industrial inspection is localization, or pose estimation, relative to significant elements of the environment. This paper analyzes two different approaches in this regard, focusing on their application to unstructured scenarios where objects of considerable size are present, such as a truck, a wind tower, an airplane, or a building. The presented methods require a previously developed Computer-Aided Design (CAD) model of the main object to be inspected. The first approach is based on an occupancy map built from a horizontal projection of this CAD model and the Adaptive Monte Carlo Localization (AMCL) algorithm, which reaches convergence by considering the likelihood field observation model between the 2D projection of the 3D sensor data and the created map. The second approach uses a point cloud prior map of the 3D CAD model and a scan-matching algorithm based on the Iterative Closest Point (ICP) algorithm and the Unscented Kalman Filter (UKF). The presented approaches have been extensively evaluated using simulated as well as previously recorded real flight data. We focus on aircraft inspection as a test example, but our results and conclusions can be directly extended to other applications. To support this assertion, a truck inspection has also been performed. Our tests showed that creating a 2D or 3D map from a standard CAD model and using a 3D laser scan on the created maps can reduce processing time and resource usage and improve robustness. The techniques used to segment unexpected objects in 2D maps improved the performance of AMCL. In addition, we showed that moving around locations with relevant geometry after take-off while running AMCL enabled faster convergence and high accuracy; hence, it could be used as an initial position estimation method for other localization algorithms. The ICP-NL method works well in environments with elements other than the object to inspect, but it can provide better results if techniques to segment the new objects are applied. Furthermore, the proposed ICP-NL scan-matching method together with the UKF performed faster and in a more robust manner than NDT. Moreover, it is not affected by flight height. However, the ICP-NL error may still be too high for applications requiring increased accuracy. Full article
(This article belongs to the Special Issue Aerial Robotics: Navigation and Path Planning)
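The first approach builds its 2D occupancy map from a horizontal projection of the CAD model. A minimal sketch of such a projection, binning the points of a cloud sampled from the model into a 2D grid, is given below; the resolution, height band, and the randomly generated cloud are placeholders rather than the paper's settings.

```python
import numpy as np

def project_cloud_to_grid(points, resolution=0.1, z_band=(0.2, 3.0)):
    """Project a 3D point cloud onto a 2D occupancy grid (horizontal projection).

    points: (N, 3) array sampled from the CAD model or a LiDAR scan.
    Only points inside the z band are kept, mimicking the idea of building a
    2D map from the relevant horizontal slice of the 3D model.
    """
    pts = points[(points[:, 2] >= z_band[0]) & (points[:, 2] <= z_band[1])]
    x_edges = np.arange(pts[:, 0].min(), pts[:, 0].max() + resolution, resolution)
    y_edges = np.arange(pts[:, 1].min(), pts[:, 1].max() + resolution, resolution)
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=(x_edges, y_edges))
    return (counts > 0).astype(np.int8)  # 1 = occupied cell, 0 = free/unknown

# Placeholder cloud roughly the size of a fuselage bounding box (meters).
cloud = np.random.rand(10000, 3) * np.array([40.0, 6.0, 5.0])
grid = project_cloud_to_grid(cloud)
print("grid shape:", grid.shape, "occupied cells:", int(grid.sum()))
```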
Figures:
Figure 1. Kinds of maps used. (a) Three-dimensional occupancy grid map (Octomap). (b) Two-dimensional occupancy grid map. (c) Graph SLAM map.
Figure 2. Example of 2D localization using the AMCL algorithm (red), a 2D laser scan (green), and an occupancy grid map (black).
Figure 3. Proposed inspection trajectory (green). Three-dimensional CAD model (cyan).
Figure 4. Example of 3D localization using the nonlinear ICP algorithm (white), a 3D laser scan (green), and a 3D occupancy grid map (red).
Figure 5. Proposed system architecture for simulations and real flights.
Figure 6. Components used in the simulation (a,b) and real flights (c,d). (a) Velodyne sensor and Hummingbird UAV. (b) Aircraft Airbus A330 model. (c) Three-dimensional LiDAR Ouster and DJI Matrice 100 UAV. (d) Airbus A330 [58].
Figure 7. 2D occupancy grid map creation process. (a) Airbus A330 Gazebo model. (b) Octomap generated with 0.1 cm voxel resolution. (c) Octomap horizontal projection. (d) Modified 2D occupancy grid map.
Figure 8. AMCL convergence process. 2D laser scan (blue), AMCL particle cloud (red). (a) UAV takes off; particles are distributed all over the map. (b) UAV moves forward; particle scattering begins to decrease, and the 2D laser scan does not match the map. (c) AMCL converges; particle scattering is small, and the laser data match the map.
Figure 9. (a) AMCL simulation test with the lowest ATE. AMCL (blue), ground truth (green), take-off position (blue circle), error (red). (b) AMCL errors.
Figure 10. (a) AMCL simulation test with the highest ATE. AMCL (blue), ground truth (green), take-off position (blue circle), error (red). (b) AMCL errors.
Figure 11. Proposed initial flight (green), aircraft (cyan), take-off point (red). (a) Squared path. (b) Circular path.
Figure 12. 2D occupancy grid map creation process. (a) Truck Gazebo model. (b) Octomap generated with 0.1 cm voxel resolution. (c) Octomap horizontal projection. (d) Modified 2D occupancy grid map.
Figure 13. Proposed inspection trajectory (green). Truck 3D CAD model (cyan).
Figure 14. (a) AMCL simulation test with the lowest ATE. AMCL (blue), ground truth (green), take-off position (blue circle), error (red). (b) AMCL errors.
Figure 15. (a) AMCL simulation test with the highest ATE. AMCL (blue), ground truth (green), take-off position (blue circle), error (red). (b) AMCL errors.
Figure 16. Proposed trajectories for real flights. Take-off and landing point (red circle). (a) Path 1. (b) Path 2. (c) Path 3.
Figure 17. Map update, stairs (red). (a) 3D laser scan real data. (b) New occupancy grid map.
Figure 18. Wall removal. Three-dimensional LiDAR point cloud (blue), 2D laser scan (red). (a) Original 2D laser scan data and 3D LiDAR point cloud. (b) Wall segmentation by RANSAC PCL. (c) Wall segmentation by limiting the 2D laser scan data. Range [0.1, 15] meters, angular field of view [−135, 135] degrees.
Figure 19. Convergence tests on the Path 1 (Figure 16a) real flight with limited laser ranges. Three-dimensional LiDAR point cloud (blue), 2D laser scans (red), AMCL particles (green). (a) UAV position before algorithm convergence. (b) UAV position when the algorithm converged at 37 iterations.
Figure 20. Convergence tests on the Path 1 (Figure 16a) real flight with RANSAC PCL segmentation. Three-dimensional LiDAR point cloud (blue), 2D laser scans (red), AMCL particles (green). (a) UAV position before algorithm convergence. (b) UAV position when the algorithm converges after 250 iterations.
Figure 21. Convergence tests on the Path 2 (Figure 16b) real flight with RANSAC PCL segmentation. Three-dimensional LiDAR point cloud (blue), 2D laser scans (red), AMCL particles (green). (a) UAV position before algorithm convergence. (b) UAV position when the algorithm converges after 71 iterations.
Figure 22. Convergence tests on the Path 3 (Figure 16c) real flight with RANSAC PCL segmentation. Three-dimensional LiDAR point cloud (blue), 2D laser scans (red), AMCL particles (green). (a) UAV position before algorithm convergence. (b) The algorithm does not converge during the flight.
Figure 23. Aircraft 3D map generated by graph SLAM.
Figure 24. Position estimates by the nonlinear ICP (a) and NDT (b) algorithms. Three-dimensional graph SLAM map (red), position estimate by the algorithm (white), ground truth (green).
Figure 25. ICP-NL simulation test with the lowest ATE. (a) Trajectories: ICP-NL (blue), ground truth (green), take-off position (blue circle), error (red). (b) ICP-NL errors.
Figure 26. ICP-NL simulation test with the highest ATE. (a) Trajectories: ICP-NL (blue), ground truth (green), take-off position (blue circle), error (red). (b) ICP-NL errors.
Figure 27. Truck 3D map generated by graph SLAM.
Figure 28. Pose estimation by ICP-NL. Three-dimensional map (red), ground truth path (green), ICP-NL path (blue).
Figure 29. ICP-NL simulation test with the lowest ATE. (a) Trajectories: ICP-NL (blue), ground truth (green), take-off position (blue circle), error (red). (b) ICP-NL errors.
Figure 30. ICP-NL simulation test with the highest ATE. (a) Trajectories: ICP-NL (blue), ground truth (green), take-off position (blue circle), error (red). (b) ICP-NL errors.
Figure 31. Performance results of the 3D localization algorithm with real data. Position estimate by the algorithm (white), 3D point cloud (green–yellow). (a) Path 1 test (Figure 16a), 3D graph SLAM map (blue). (b) Path 2 test (Figure 16b), 3D graph SLAM map (red).
11 pages, 3351 KiB  
Article
Real-Time 3D Mapping in Isolated Industrial Terrain with Use of Mobile Robotic Vehicle
by Tomasz Buratowski, Jerzy Garus, Mariusz Giergiel and Andrii Kudriashov
Electronics 2022, 11(13), 2086; https://doi.org/10.3390/electronics11132086 - 3 Jul 2022
Cited by 8 | Viewed by 2081
Abstract
Simultaneous localization and mapping (SLAM) is a dual process responsible for the ability of a robotic vehicle to build a map of its surroundings and estimate its position on that map. This paper presents a novel concept for creating a 3D map based on adaptive Monte Carlo localization (AMCL) and the extended Kalman filter (EKF). The approach is intended for inspection or rescue operations in closed or isolated areas where there is a risk to humans. The proposed solution uses particle filters together with data from on-board sensors to estimate the local position of the robot; its global position is determined through the Rao–Blackwellized technique. The developed system was implemented on a wheeled mobile robot equipped with a sensing system consisting of a laser scanner (LIDAR) and an inertial measurement unit (IMU), and was tested in the real conditions of an underground mine. One of the contributions of this work is a low-complexity, low-cost solution for real-time 3D-map creation. The experimental trials confirmed that the three-dimensional mapping achieved high accuracy and proved useful for recognition and inspection tasks in an unknown industrial environment. Full article
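The AMCL-EKF coupling described in this abstract lends itself to a compact illustration. The sketch below is a minimal, assumption-laden example and not the authors' implementation: an EKF propagates a planar pose [x, y, theta] with a unicycle odometry model and then corrects it with an absolute pose measurement standing in for the particle-filter (AMCL) estimate; all noise values and inputs are invented for illustration.

```python
# Minimal EKF sketch: odometry prediction + absolute-pose correction (illustrative only).
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose [x, y, theta] with a unicycle odometry model."""
    th = x[2]
    x_pred = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    x_pred[2] = wrap(x_pred[2])
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_pose(x, P, z, R):
    """Correct the prediction with an absolute pose measurement (e.g., an AMCL estimate)."""
    H = np.eye(3)
    y = z - x
    y[2] = wrap(y[2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P

# Toy usage with assumed noise levels.
x, P = np.zeros(3), np.eye(3) * 0.1
Q = np.diag([0.02, 0.02, 0.01])   # odometry/process noise (assumed)
R = np.diag([0.05, 0.05, 0.02])   # pose-measurement noise (assumed)
x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
x, P = ekf_update_pose(x, P, z=np.array([0.05, 0.0, 0.01]), R=R)
print(x)
```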
Show Figures

Figure 1. Schematic diagram of local 3D pose estimation by the AMCL-EKF algorithm.
Figure 2. Block diagram of hybrid 3D-map building based on 2D SLAM and reduced Octomap algorithms.
Figure 3. Mobile robotic vehicle used for tests.
Figure 4. Underground inspection scenarios: (a) smooth terrain before the accident; (b) rough terrain after the accident.
Figure 5. Maps obtained before the accident: (a) created with the 2D SLAM technique; (b) created with the proposed hybrid 3D SLAM approach.
Figure 6. Photographic image of the mine corridor after the accident.
Figure 7. Maps obtained during underground inspection of the mine terrain after the accident: (a) created with the 2D SLAM technique; (b) created with the proposed hybrid 3D SLAM approach.
14 pages, 3561 KiB  
Article
Integrated Indoor Positioning System of Greenhouse Robot Based on UWB/IMU/ODOM/LIDAR
by Zhenhuan Long, Yang Xiang, Xiangming Lei, Yajun Li, Zhengfang Hu and Xiufeng Dai
Sensors 2022, 22(13), 4819; https://doi.org/10.3390/s22134819 - 25 Jun 2022
Cited by 21 | Viewed by 3554
Abstract
Conventional mobile robots employ LIDAR for indoor global positioning and navigation and therefore place strict requirements on the ground environment. Under the complicated ground conditions in a greenhouse, the accumulated odometry (ODOM) error caused by wheel slip builds up during long-term operation of the robot, which decreases the accuracy of robot positioning and mapping. To solve this problem, an integrated positioning system based on UWB (ultra-wideband)/IMU (inertial measurement unit)/ODOM/LIDAR is proposed. First, UWB/IMU/ODOM measurements are fused by the Extended Kalman Filter (EKF) algorithm to obtain the estimated positioning information. Second, LIDAR is combined with the established two-dimensional (2D) map by the Adaptive Monte Carlo Localization (AMCL) algorithm to achieve global positioning of the robot. As indicated by the experiments, the integrated positioning system based on UWB/IMU/ODOM/LIDAR effectively reduces the accumulated positioning error of the robot in the greenhouse environment. At the three moving speeds of 0.3 m/s, 0.5 m/s, and 0.7 m/s, the maximum lateral error is lower than 0.1 m, and the maximum lateral root mean square error (RMSE) reaches 0.04 m. For global positioning, the RMSEs in the x-axis direction, the y-axis direction, and the overall positioning are 0.092, 0.069, and 0.079 m, respectively, and the average positioning time of the system is 72.1 ms. This is sufficient for robot operation in greenhouse applications that require precise positioning and navigation. Full article
(This article belongs to the Section Remote Sensors)
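To make the error figures quoted above concrete, here is a minimal sketch of how per-axis and overall position RMSE can be computed from matched estimate and ground-truth trajectories. It is not the paper's evaluation code; the toy data and the choice of the Euclidean form for the overall RMSE are assumptions.

```python
# Illustrative RMSE computation for a 2D trajectory estimate vs. ground truth.
import numpy as np

def rmse_metrics(estimate, ground_truth):
    """estimate, ground_truth: (N, 2) arrays of [x, y] positions at matched timestamps."""
    err = estimate - ground_truth
    rmse_x = np.sqrt(np.mean(err[:, 0] ** 2))
    rmse_y = np.sqrt(np.mean(err[:, 1] ** 2))
    rmse_xy = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))  # overall (Euclidean) position RMSE
    return rmse_x, rmse_y, rmse_xy

# Toy data standing in for integrated-positioning estimates along a straight row.
gt = np.column_stack([np.linspace(0, 5, 50), np.zeros(50)])
est = gt + np.random.default_rng(0).normal(scale=[0.09, 0.07], size=gt.shape)
print(rmse_metrics(est, gt))
```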
Show Figures

Figure 1. Schematic diagram of the positioning system.
Figure 2. Integrated positioning frame diagram based on UWB/IMU/ODOM/LIDAR.
Figure 3. Schematic diagram of the greenhouse experiment site.
Figure 4. Greenhouse mapping with different combinations of sensors. (a) IMU/ODOM/LIDAR. (b) UWB/IMU/ODOM/LIDAR.
Figure 5. Comparison of the trajectories of different positioning methods. (a) Trajectories. (b) Lateral error.
Figure 6. Comparison of lateral error between the two integrated algorithms.
Figure 7. Lateral error at different moving speeds.
Figure 8. Comparison of target-point positioning accuracy across positioning methods. (a) Positioning results. (b) Positioning error.
Figure 9. Positioning time.
30 pages, 11380 KiB  
Article
A Robust Localization System Fusion Vision-CNN Relocalization and Progressive Scan Matching for Indoor Mobile Robots
by Yanjie Liu, Changsen Zhao and Yanlong Wei
Appl. Sci. 2022, 12(6), 3007; https://doi.org/10.3390/app12063007 - 16 Mar 2022
Cited by 6 | Viewed by 2047
Abstract
Map-based, high-precision dynamic pose tracking and rapid relocalization from an unknown pose are very important for indoor navigation robots. This paper proposes a robust, high-precision indoor robot positioning algorithm that combines vision and laser sensor information. The algorithm comprises two parts: initialization and real-time pose tracking. The initialization component addresses the uncertainty of the robot's initial pose and the loss of pose tracking. First, laser information is added as a geometric constraint to the posenetLSTM neural network, which otherwise considers only image information, and the loss function is redesigned, thereby improving global positioning accuracy. Second, starting from the coarse visual position, the branch-and-bound method is used to quickly search for a high-precision pose of the robot. In the real-time tracking component, small-scale correlative sampling is performed on a high-resolution environment grid map, and the robot's pose is dynamically tracked in real time. When the score of the tracked pose falls below a certain threshold, nonlinear graph optimization is used to refine the pose. To demonstrate the robustness, high precision, and real-time performance of the algorithm, a simulation environment was first built in Gazebo for evaluation, and the relevant performance was then verified on the Mir robot platform. Both simulations and experiments show that introducing laser information into the neural network greatly improves the accuracy of visual relocalization, and the system can quickly perform high-precision repositioning when the camera is not severely occluded. Compared with the pose tracking performance of the adaptive Monte Carlo localization (AMCL) algorithm, the proposed algorithm also improves accuracy and real-time performance. Full article
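The correlative sampling and bilinear interpolation mentioned above can be pictured with a short sketch. The following is a toy illustration, not the authors' code: a candidate pose is scored by transforming laser endpoints into the map frame and bilinearly sampling an occupancy grid at the transformed points; the map, resolution, and scan points are all invented for the example.

```python
# Illustrative correlative scan-matching score on an occupancy grid.
import numpy as np

def bilinear(grid, x, y):
    """Bilinearly interpolate grid values at continuous cell coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, grid.shape[1] - 1), min(y0 + 1, grid.shape[0] - 1)
    tx, ty = x - x0, y - y0
    return ((1 - tx) * (1 - ty) * grid[y0, x0] + tx * (1 - ty) * grid[y0, x1] +
            (1 - tx) * ty * grid[y1, x0] + tx * ty * grid[y1, x1])

def scan_match_score(grid, resolution, pose, scan_xy):
    """Score a candidate pose [x, y, theta] by projecting scan points into the map."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    score = 0.0
    for px, py in scan_xy:                       # scan endpoints in the robot frame
        wx, wy = x + c * px - s * py, y + s * px + c * py
        gx, gy = wx / resolution, wy / resolution
        if 0 <= gx < grid.shape[1] - 1 and 0 <= gy < grid.shape[0] - 1:
            score += bilinear(grid, gx, gy)      # occupied cells contribute more
    return score / len(scan_xy)

# Toy map: one occupied wall row; a well-aligned pose places the scan hits on the wall.
grid = np.zeros((50, 50)); grid[30, :] = 1.0
scan = [(1.0, 0.0), (1.0, 0.2), (1.0, -0.2)]     # assumed scan endpoints (meters)
print(scan_match_score(grid, 0.1, pose=(2.0, 2.0, np.pi / 2), scan_xy=scan))
```

In a branch-and-bound or coarse-to-fine search, this kind of score would be evaluated over progressively finer pose grids, keeping only the branches whose upper bound exceeds the best score found so far.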
Show Figures

Figure 1. System framework.
Figure 2. Convolutional neural network structure.
Figure 3. Grid map preprocessing. (a) h = 0; (b) h = 2; (c) h = 4; (d) original map.
Figure 4. Correlative particle sampling.
Figure 5. Bilinear interpolation.
Figure 6. (a) Obtained image information; (b) laser measurement information; (c) the robot's trajectory; (d) simulation environment in Gazebo.
Figure 7. Visual pose tracking results based on the neural network. (a,c,e) Comparison of pose estimation results in the x, y, and yaw directions, respectively; (b,d,f) comparison of pose estimation errors in the x, y, and yaw directions, respectively.
Figure 8. Recorded relocalization results. (a) The posenetLSTM method; (b) the method proposed in this paper.
Figure 9. The change of pose from coarse to fine during relocalization combining laser and visual information. (a,c,e) Pose change in the x, y, and yaw directions, respectively, when using the posenetLSTM method; (b,d,f) pose change in the x, y, and yaw directions, respectively, when using the method proposed in this paper.
Figure 10. Search time from coarse to fine pose.
Figure 11. Pose estimation results of case 1. (a) Estimated error in the x direction; (b) estimated error in the y direction; (c) estimated error in the yaw direction; (d) true trajectory and estimated trajectory.
Figure 12. Pose estimation results of case 2. (a) Estimated error in the x direction; (b) estimated error in the y direction; (c) estimated error in the yaw direction; (d) true trajectory and estimated trajectory.
Figure 13. Comparison of time cost. (a) Case 1; (b) case 2.
Figure 14. Experiment setup. (a) The experimental platform; (b) the measuring equipment.
Figure 15. (a) Absolute pose error of posenetLSTM; (b) absolute pose error of the proposed neural network; (c) statistics of absolute pose error; (d) the robot's trajectory.
Figure 16. (a,c,e) Comparison of pose estimation in the x, y, and yaw directions, respectively; (b,d,f) comparison of pose estimation errors in the x, y, and yaw directions, respectively.
Figure 17. (a) Robot at the test position; (b) image acquired by the camera; (c) coarse relocalization result based on vision; (d) accurate relocalization result fused with laser information.
Figure 18. (a) Locations of the 11 test points; (b) search time from coarse to fine pose.
Figure 19. (a–c) Image obtained by the camera, coarse visual positioning, and accurate relocalization result fused with laser information at test point 1, respectively; (d–f) the corresponding results at test point 9.
Figure 20. The robot's two test trajectories.
Figure 21. Comparison of the pose estimates and their errors for the two test trajectories. (a,b) Comparison of predicted trajectories; (c,d) position deviation in the x direction; (e,f) position deviation in the y direction.
Figure 22. Comparison of the computation time for the two test trajectories. (a) Trajectory 1; (b) trajectory 2.
21 pages, 24254 KiB  
Article
Text-MCL: Autonomous Mobile Robot Localization in Similar Environment Using Text-Level Semantic Information
by Gengyu Ge, Yi Zhang, Wei Wang, Qin Jiang, Lihe Hu and Yang Wang
Machines 2022, 10(3), 169; https://doi.org/10.3390/machines10030169 - 23 Feb 2022
Cited by 21 | Viewed by 2914
Abstract
Localization is one of the most important issues in mobile robotics, especially when an autonomous mobile robot performs a navigation task. The currently popular occupancy grid map, built with 2D LiDAR simultaneous localization and mapping (SLAM), is well suited to path planning, and the adaptive Monte Carlo localization (AMCL) method can localize the robot in most rooms of an indoor environment. However, the conventional method fails to locate the robot when there are similar and repeated geometric structures, such as long corridors. To solve this problem, we present Text-MCL, a new method for robot localization based on text information and laser scan data. A coarse-to-fine localization paradigm is used: first, a coarse place estimate for global localization is obtained from text-level semantic information, and then fine local localization is performed using the Monte Carlo localization (MCL) method based on laser data. Extensive experiments demonstrate that our approach improves the global localization speed and raises the success rate to 96.2% with few particles. In addition, a mobile robot using the proposed approach can recover from kidnapping after a short movement, whereas conventional MCL methods converge to the wrong position. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
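The coarse-to-fine idea is easy to picture in code. The sketch below is a toy illustration under stated assumptions and is not the authors' implementation: the text-to-pose lookup table, map bounds, and spread parameters are invented. When a sign such as a room number is recognized, particles are drawn around that sign's known map pose instead of uniformly over the whole map, after which ordinary MCL refines the estimate.

```python
# Illustrative text-conditioned particle initialization for Monte Carlo localization.
import numpy as np

# Assumed lookup table built while mapping: recognized text -> (x, y, theta) on the map.
TEXT_LANDMARKS = {"ROOM 302": (12.5, 3.0, 0.0), "LAB 1": (4.0, 7.5, np.pi / 2)}

def init_particles(detected_text, n=500, map_bounds=((0, 20), (0, 10)),
                   pos_std=0.5, ang_std=0.3, rng=np.random.default_rng(0)):
    """Return (n, 3) particles [x, y, theta]; fall back to a uniform spread if the text is unknown."""
    if detected_text in TEXT_LANDMARKS:
        x, y, th = TEXT_LANDMARKS[detected_text]
        particles = np.column_stack([
            rng.normal(x, pos_std, n),
            rng.normal(y, pos_std, n),
            rng.normal(th, ang_std, n),
        ])
    else:  # no usable text: spread particles over the whole map, as plain MCL does
        (xmin, xmax), (ymin, ymax) = map_bounds
        particles = np.column_stack([
            rng.uniform(xmin, xmax, n),
            rng.uniform(ymin, ymax, n),
            rng.uniform(-np.pi, np.pi, n),
        ])
    return particles

print(init_particles("ROOM 302").mean(axis=0))  # cluster centered near the recognized sign
```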
Show Figures

Figure 1. Text-level semantic information in similar environments.
Figure 2. Laser scans in a long corridor environment. (a) Laser scans at two similar locations; (b) laser data coordinates and scanning result; (c) dimensions of the corridor.
Figure 3. Text detection from the best position and perspective.
Figure 4. Illustration of text extraction. (a) Original image; (b) MSER; (c) MSER + NMS; (d) recognition from a fixed area.
Figure 5. Map-building procedure.
Figure 6. Localization procedure.
Figure 7. Initial potential area of the particles.
Figure 8. The mobile robot platform.
Figure 9. Occupancy grid map of the experimental environment.
Figure 10. Global localization results using the AMCL method. (a) Real pose in the map. (b) Initial particle distribution with a pose given manually. (c) Initial particle distribution without a known pose. (d) Particle distribution after moving a short distance. (e) Particle distribution when the robot reached position #12.
Figure 11. Global localization using the proposed Text-MCL method. (a) Initial state of the robot; (b) moving to a doorway area; (c) initializing the particles after text recognition.
Figure 12. Relationship between successful localization rate and moving distance.
Figure 13. Schematic diagram of the robot kidnapping problem.
Figure 14. Results for the kidnapped-robot problem. (a) Result of the proposed method; (b) result of the conventional method.
19 pages, 2896 KiB  
Article
Hydrogeochemical Characteristics of Uranium and Radon in Groundwater from the Goesan Area of the Ogcheon Metamorphic Belt (OMB), Korea
by Byong-Wook Cho, Dong-Soo Kim, Moon-Su Kim, Jae-Hong Hwang and Chang-Oh Choo
Sustainability 2021, 13(20), 11261; https://doi.org/10.3390/su132011261 - 13 Oct 2021
Cited by 2 | Viewed by 1675
Abstract
Uranium and radon concentrations in groundwater from the Goesan area of the Ogcheon Metamorphic Belt (OMB), central Korea, whose bedrock is known to contain the highest uranium levels in Korea, were analyzed from 200 wells. We also measured the uranium concentrations in the bedrock near the investigated wells to infer a relationship between the bedrock geology and the groundwater. The five bedrock units in the Goesan area consist of Cretaceous granite (Kgr), Jurassic granite (Jgr), and three types of metasedimentary rocks (og1, og2, and og3). Of the 200 groundwater samples, 2.0% exceeded 30 μg/L (the US EPA maximum contaminant level, MCL); by geology, 12% of Kgr and 1.8% of Jgr samples exceeded the MCL. Overall, 16.5% of the 200 groundwater samples exceeded 148 Bq/L (the US EPA alternative maximum contaminant level, AMCL); 60.0% of Kgr and 25.0% of Jgr samples exceeded the AMCL, whereas none of the og1 samples and only 7.9% of og2 and 2.6% of og3 samples did. No direct correlation was found between uranium and radon concentrations in the water samples. Radon shows only weak linear correlations with Na (0.31), Mg (−0.30), and F (0.36), and uranium behavior in groundwater was independent of the other components. Based on thermodynamic calculations, uranium chemical speciation was dominated by carbonate complexes, namely the Ca2UO2(CO3)3(aq) and CaUO2(CO3)3^2− species. Although the uraniferous mineral phases were greatly undersaturated according to their saturation indices, uranium hydroxides such as schoepite, UO2(OH)2, and U(OH)3 remain possible phases. Uranium-containing bedrock in the OMB did not significantly affect radioactivity levels in the groundwater, possibly due to adsorption effects related to organic matter and geochemical reduction. Nevertheless, prevention of oxidation of the uranium-containing bedrock needs to be systematically managed, together with monitoring of the possible migration of uranium into groundwater. Full article
(This article belongs to the Section Hazards and Sustainability)
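For readers unfamiliar with the saturation indices referred to in the abstract, the standard geochemical definition (a general convention, not a value reported by this paper) is:

\[
  \mathrm{SI} \;=\; \log_{10}\!\left(\frac{\mathrm{IAP}}{K_{\mathrm{sp}}}\right),
  \qquad
  \mathrm{SI}<0 \ \text{(undersaturated)}, \quad
  \mathrm{SI}\approx 0 \ \text{(equilibrium)}, \quad
  \mathrm{SI}>0 \ \text{(supersaturated)},
\]

where IAP is the ion activity product of the mineral's dissolution reaction and K_sp is its solubility product, so a strongly negative SI indicates that the mineral phase tends to dissolve rather than precipitate.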
Show Figures

Figure 1. Spatial distribution of equivalent uranium (eU) levels at 200 measurement points on the geological map. The study area is marked by an arrow on the inset map of Korea.
Figure 2. Histogram of eU concentrations in bedrock. Kgr: Cretaceous granite; Jgr: Jurassic granite; og1: Ogcheon metasedimentary rocks 1; og2: Ogcheon metasedimentary rocks 2; og3: Ogcheon metasedimentary rocks 3. Max: maximum. eU unit is ppm (mg/kg).
Figure 3. Water types plotted on the Piper diagram showing variations according to geology. Kgr: Cretaceous granite; Jgr: Jurassic granite; og1: Ogcheon metasedimentary rocks 1; og2: Ogcheon metasedimentary rocks 2; og3: Ogcheon metasedimentary rocks 3.
Figure 4. Box-whisker plots showing uranium concentration in the groundwater for each geology. Kgr: Cretaceous granite; Jgr: Jurassic granite; og1: Ogcheon metasedimentary rocks 1; og2: Ogcheon metasedimentary rocks 2; og3: Ogcheon metasedimentary rocks 3.
Figure 5. Spatial distribution of uranium levels in the groundwater on the area map.
Figure 6. Box-whisker plots showing radon concentration in the groundwater for each geology. Kgr: Cretaceous granite; Jgr: Jurassic granite; og1: Ogcheon metasedimentary rocks 1; og2: Ogcheon metasedimentary rocks 2; og3: Ogcheon metasedimentary rocks 3.
Figure 7. Spatial distribution of radon levels in the groundwater on the area map.