Search Results (1)

Search Parameters:
Keywords = adaptive Gaussian weight-based fast point feature histogram

21 pages, 6969 KiB  
Article
A Point Cloud Data-Driven Pallet Pose Estimation Method Using an Active Binocular Vision Sensor
by Yiping Shao, Zhengshuai Fan, Baochang Zhu, Jiansha Lu and Yiding Lang
Sensors 2023, 23(3), 1217; https://doi.org/10.3390/s23031217 - 20 Jan 2023
Cited by 6 | Viewed by 2656
Abstract
Pallet pose estimation is one of the key technologies for automated fork pickup by driverless industrial trucks. Due to the complex working environment and the enormous amount of data, existing pose estimation approaches cannot meet the requirements of intelligent logistics equipment for high accuracy and real-time performance. A point cloud data-driven pallet pose estimation method using an active binocular vision sensor is proposed, which consists of point cloud preprocessing, Adaptive Gaussian Weight-based Fast Point Feature Histogram (AGWF) extraction, and point cloud registration. The proposed method overcomes the shortcomings of traditional pose estimation methods, such as poor robustness, long computation time, and low accuracy, and realizes efficient and accurate estimation of pallet pose for driverless industrial trucks. Experimental results show that the proposed approach outperforms both the traditional Fast Point Feature Histogram (FPFH) and the Signature of Histograms of Orientations (SHOT), improving accuracy by over 35% and reducing feature extraction time by over 30%, thereby verifying its effectiveness and superiority.
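The abstract's central idea is weighting a key point's neighbours by distance with a Gaussian when building the FPFH descriptor. The paper's exact adaptive formula is not reproduced on this page, so the following is only a minimal numpy sketch of distance-based Gaussian weighting; setting the scale `sigma` from the mean neighbour distance is a hypothetical choice, not the authors' stated rule.

```python
import numpy as np

def gaussian_weights(distances, sigma=None):
    """Normalised Gaussian weights for a key point's neighbourhood.

    distances: 1-D array of Euclidean distances from the key point to
    each neighbour. If sigma is None it is set adaptively to the mean
    neighbour distance (an assumption for this sketch; the paper's
    adaptive rule may differ).
    """
    d = np.asarray(distances, dtype=float)
    if sigma is None:
        sigma = d.mean() if d.mean() > 0 else 1.0  # hypothetical adaptive scale
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))     # Gaussian kernel: nearer points weigh more
    return w / w.sum()                             # normalise so the weights sum to 1
```

The effect is the trend shown in Figure 5: the contribution of a neighbour decays smoothly with its distance from the key point, which suppresses noisy far-away points in the histogram.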
(This article belongs to the Special Issue Applications of Manufacturing and Measurement Sensors)
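The registration stage (coarse registration followed by accurate registration, Figures 11 and 12) is not spelled out on this page. At the core of most ICP-style accurate registration is the closed-form rigid alignment of corresponded points (the Kabsch/SVD solution); the sketch below shows that step only, under the assumption that point correspondences have already been established.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst in the
    least-squares sense (Kabsch/SVD), the inner step of ICP-style
    accurate registration. src, dst: (N, 3) arrays of corresponding points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

A full pipeline would iterate: find nearest-neighbour correspondences between the descriptor-matched clouds, solve for (R, t) with this step, transform the source cloud, and repeat until convergence.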
Figures

Figure 1. Diagram of pallet position deviation. (a) Correct pose. (b) Deviation pose.
Figure 2. Flowchart of the pallet pose estimation method.
Figure 3. Schematic diagram of the FPFH neighborhood.
Figure 4. A neighborhood point diagram of a key point.
Figure 5. Weight variation trend of the neighborhood points of a key point.
Figure 6. Diagram of the local coordinate system (u, v, w).
Figure 7. Structure of the Percipio FM851-E2 vision sensor.
Figure 8. Relative position of the unmanned industrial vehicle and pallet in the standard state.
Figure 9. (a) The source point cloud. (b) The key points of the source point cloud.
Figure 10. Processing results of the source point cloud. (a) Adaptive optimal neighborhood distribution. (b) The AGWF feature descriptor.
Figure 11. Point cloud processing results at a 5° deflection angle. (a) Scene point cloud. (b) Initial relative position. (c) Target point cloud. (d) Key points of the target point cloud. (e) Coarse registration. (f) Accurate registration.
Figure 12. Scene point cloud and registration results. (a) 5° deflection angle. (b) 10° deflection angle. (c) 15° deflection angle. (d) 20° deflection angle. (e) Deviation of 0.05 m. (f) Deviation of 0.10 m. (g) Deviation of 0.15 m. (h) Deviation of 0.20 m.
Figure 13. Color image of the shelf.
Figure 14. Shelf scene point cloud.
Figure 15. Pass-through filtering.
Figure 16. Key point extraction for the target point cloud. (a) Target pallet point cloud. (b) Target pallet key points.
Figure 17. Adaptive optimal neighborhood radius of the target pallet point cloud in the shelf scene.
Figure 18. The AGWF feature descriptor for a point.
Figure 19. Visualization of pallet pose estimation results. (a) Registration result. (b) Pallet pose estimation.
Figure 20. Comparison of FPFH and AGWF at a point. (a) The FPFH feature descriptor for a point. (b) The AGWF feature descriptor for a point.
Figure 21. Comparison of coarse registration results between FPFH and AGWF. (a) Coarse registration based on FPFH. (b) Coarse registration based on AGWF.