Point Cloud Semantic Segmentation Network Based on Multi-Scale Feature Fusion
Figure 1. Illustration of the proposed multi-scale feature fusion network (MSSCN). First, downsampling is performed on the original point cloud with sampling proportions of k1 and k2. The chosen points are stored in Index_1 and Index_2 for the first and second downsampling processes, respectively. Then, feature extraction is performed using a Spatial Aggregation Net (SAN) backbone, where d1 and d2 are the dimensionalities of the features for each downsampled point cloud. Finally, feature fusion is performed to obtain a relevant set of features, where '−' indicates the deletion of descriptors that do not exist in Index_1 according to Index_2 and '+' indicates feature fusion. Based on the extracted features, a multilayer perceptron (MLP) is used to obtain the score of each point for each of the K object categories.

Figure 2. Feature fusion process: (a) feature interpolation based on distance and (b) direct mapping.

Figure 3. Segmentation results of point clouds with different densities: (a) ground truth; (b) point cloud at Level-1 (P1), where the sampling proportion is 1/2; (c) point cloud at Level-2 (P2), where the sampling proportion is 1/4; and (d) MSSCN.

Figure 4. Segmentation results on the S3DIS-1 dataset: (a) input, (f) ground truth, (b,g) PointNet++, (c,h) PointSIFT, (d,i) SAN, and (e,j) MSSCN.

Figure 5. Segmentation results on the S3DIS-2 dataset: (a) input, (f) ground truth, (b,g) PointNet++, (c,h) PointSIFT, (d,i) SAN, and (e,j) MSSCN.

Figure 6. Segmentation results on the S3DIS-3 dataset: (a) input, (e) ground truth, (b,f) Level-1, (c,g) Level-2, and (d,h) MSSCN.

Figure 7. Segmentation results on the S3DIS-4 dataset: (a) input, (e) ground truth, (b,f) Level-1, (c,g) Level-2, and (d,h) MSSCN.

Figure 8. Segmentation results on the S3DIS-5 dataset: (a) input, (e) ground truth, (b,f) Level-1, (c,g) Level-2, and (d,h) MSSCN.

Figure 9. Segmentation results on the S3DIS-6 dataset: (a) input, (e) ground truth, (b,f) Level-1, (c,g) Level-2, and (d,h) MSSCN.

Figure 10. Segmentation results on the ScanNet-1 dataset: (a) input, (f) ground truth, (b,g) PointNet++, (c,h) PointSIFT, (d,i) SAN, and (e,j) MSSCN.

Figure 11. Segmentation results on the ScanNet-2 dataset: (a) input, (f) ground truth, (b,g) PointNet++, (c,h) PointSIFT, (d,i) SAN, and (e,j) MSSCN.
Figure 12. Results of our approach on S3DIS using SAN, PointNet, and PointNet++ as the backbone network; 'original' denotes the result of using the corresponding backbone network directly.
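The two fusion modes in Figure 2 admit a compact illustration. Below is a minimal NumPy sketch (our own, not the authors' released code): variant (a) propagates features from the sparse level to the dense level by inverse-distance weighting over the k nearest sparse neighbours, and variant (b) exploits the fact that the second sampling selects a subset of the first, so features align by index lookup alone. The helper names and the choice of k = 3 are assumptions.

```python
import numpy as np

def interpolate_features(dense_xyz, sparse_xyz, sparse_feat, k=3, eps=1e-8):
    """Figure 2a (sketch): inverse-distance-weighted interpolation.

    Each dense point receives the features of its k nearest sparse
    points, weighted by inverse Euclidean distance."""
    # Pairwise distances, shape (N_dense, N_sparse).
    d = np.linalg.norm(dense_xyz[:, None, :] - sparse_xyz[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]                    # k nearest sparse indices
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps)  # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)                     # normalise per dense point
    return (sparse_feat[idx] * w[..., None]).sum(axis=1)  # (N_dense, feature_dim)

def direct_mapping(dense_feat, index2):
    """Figure 2b (sketch): because Index_2 selects a subset of the
    Level-1 points, Level-1 features align with Level-2 points by a
    plain index lookup (the '−' deletion step in Figure 1)."""
    return dense_feat[index2]
```

With N1 Level-1 points and N2 Level-2 points, direct mapping costs O(N2) lookups while interpolation requires O(N1·N2) distance evaluations, which is what makes the subset structure of the two samplings attractive.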
Abstract
1. Introduction
2. Related Work
3. The Proposed Approach
3.1. Multiscale Point Feature Extraction
3.2. Feature Fusion and Loss Function
3.3. Algorithm Summary
Algorithm 1 Multi-Scale Feature Fusion Semantic Segmentation Network
Input: point cloud P of shape (N, 3)
Output: per-point class scores of shape (N × k1 × k2, K)
1: Downsample P with sampling proportions k1 and k2, storing the selected points in Index_1 and Index_2
2: Extract features of dimensionality d1 and d2 for the two downsampled point clouds with the SAN backbone
3: Fuse the two feature sets, deleting descriptors that do not exist in Index_1 according to Index_2
4: Score each point for each of the K object categories with an MLP
return the score matrix
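Read end to end, Algorithm 1 can be sketched as follows. This is a minimal illustration under stated assumptions: `downsample` is a hypothetical stand-in for the paper's sampling step (shown here as uniform random choice), `backbone` and `classifier` are caller-supplied stand-ins for the SAN feature extractor and the per-point MLP, the second sampling is assumed to draw from the Level-1 subset so the stored indices compose, and the '+' fusion is shown as concatenation, which is an assumption about the paper's operator.

```python
import numpy as np

def downsample(points, m, rng=np.random.default_rng(0)):
    """Hypothetical sampler: uniform random choice of m point indices.
    (The paper's actual sampling strategy is not reproduced here.)"""
    return rng.choice(points.shape[0], size=m, replace=False)

def msscn_forward(P, k1, k2, K, backbone, classifier):
    """Sketch of Algorithm 1. P is an (N, 3) point cloud; `backbone`
    (e.g., SAN) maps points to per-point features and `classifier`
    (an MLP) maps fused features to K class scores. Both are
    caller-supplied stand-ins, not functions from the paper's code."""
    N = P.shape[0]
    # Two rounds of downsampling; the chosen indices (Index_1, Index_2)
    # are stored so the two feature sets can be aligned later.
    index1 = downsample(P, int(N * k1))           # Level-1 points
    P1 = P[index1]
    index2 = downsample(P1, int(N * k1 * k2))     # Level-2 points, a subset of Level-1
    P2 = P1[index2]
    # Per-point features of dimensionality d1 and d2.
    f1 = backbone(P1)                             # (N*k1, d1)
    f2 = backbone(P2)                             # (N*k1*k2, d2)
    # '−': keep only Level-1 descriptors whose points survived the
    # second sampling; '+': fuse the aligned descriptors.
    fused = np.concatenate([f1[index2], f2], axis=-1)
    return classifier(fused, K)                   # (N*k1*k2, K) scores
```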
4. Results and Discussion
4.1. Experimental Setup
4.2. Results on S3DIS
4.3. Results on ScanNet
4.4. Controlled Experiment
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
SAN: Spatial Aggregation Net
MSSCN: Multi-Scale Feature Fusion Semantic Segmentation Network
CNNs: Convolutional Neural Networks
FCN: Fully Convolutional Networks
GNSS: Global Navigation Satellite System
GCNs: Graph Convolutional Networks
CAGQ: Coverage Aware Grid Query
GCA: Grid Context Aggregation
SPVConv: Sparse Point-Voxel Convolution
3D-NAS: 3D Neural Architecture Search
References
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Kampffmeyer, M.; Salberg, A.B.; Jenssen, R. Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1–9. [Google Scholar]
- Hamaguchi, R.; Fujita, A.; Nemoto, K.; Imaizumi, T.; Hikosaka, S. Effective use of dilated convolutions for segmenting small object instances in remote sensing imagery. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1442–1450. [Google Scholar]
- Deng, Z.; Sun, H.; Zhou, S.; Zhao, J.; Lei, L.; Zou, H. Multi-scale object detection in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2018, 145, 3–22. [Google Scholar] [CrossRef]
- Ding, P.; Zhang, Y.; Deng, W.J.; Jia, P.; Kuijper, A. A light and faster regional convolutional neural network for object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2018, 141, 208–218. [Google Scholar] [CrossRef]
- Ren, Y.; Zhu, C.; Xiao, S. Small object detection in optical remote sensing images via modified faster R-CNN. Appl. Sci. 2018, 8, 813. [Google Scholar] [CrossRef] [Green Version]
- Gong, Z.; Lin, H.; Zhang, D.; Luo, Z.; Zelek, J.; Chen, Y.; Nurunnabi, A.; Wang, C.; Li, J. A Frustum-based probabilistic framework for 3D object detection by fusion of LiDAR and camera data. ISPRS J. Photogramm. Remote Sens. 2020, 159, 90–100. [Google Scholar] [CrossRef]
- Su, H.; Maji, S.; Kalogerakis, E.; Learned-Miller, E. Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 945–953. [Google Scholar]
- Lawin, F.J.; Danelljan, M.; Tosteberg, P.; Bhat, G.; Khan, F.S.; Felsberg, M. Deep Projective 3D Semantic Segmentation. In Computer Analysis of Images and Patterns; Felsberg, M., Heyden, A., Krüger, N., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 95–107. [Google Scholar]
- Feng, Y.; Zhang, Z.; Zhao, X.; Ji, R.; Gao, Y. GVCNN: Group-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 264–272. [Google Scholar]
- Guo, H.; Wang, J.; Gao, Y.; Li, J.; Lu, H. Multi-view 3D object retrieval with deep embedding network. IEEE Trans. Image Process. 2016, 25, 5526–5537. [Google Scholar] [CrossRef] [PubMed]
- Boulch, A.; Le Saux, B.; Audebert, N. Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks. 3DOR 2017, 2, 7. [Google Scholar]
- Zhang, R.; Li, G.; Li, M.; Wang, L. Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning. ISPRS J. Photogramm. Remote Sens. 2018, 143, 85–96. [Google Scholar] [CrossRef]
- Maturana, D.; Scherer, S. Voxnet: A 3d convolutional neural network for real-time object recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 922–928. [Google Scholar]
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1912–1920. [Google Scholar]
- Gadelha, M.; Wang, R.; Maji, S. Multiresolution tree networks for 3d point cloud processing. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 103–118. [Google Scholar]
- Qi, C.R.; Su, H.; Nießner, M.; Dai, A.; Yan, M.; Guibas, L.J. Volumetric and multi-view cnns for object classification on 3d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5648–5656. [Google Scholar]
- Lin, Y.; Wang, C.; Zhai, D.; Li, W.; Li, J. Toward better boundary preserved supervoxel segmentation for 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 39–47. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5099–5108. [Google Scholar]
- Contreras, J.; Denzler, J. Edge-Convolution Point Net for Semantic Segmentation of Large-Scale Point Clouds. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5236–5239. [Google Scholar]
- Jia, M.; Li, A.; Wu, Z. A Global Point-Sift Attention Network for 3D Point Cloud Semantic Segmentation. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5065–5068. [Google Scholar]
- Zhao, H.; Jiang, L.; Fu, C.W.; Jia, J. PointWeb: Enhancing Local Neighborhood Features for Point Cloud Processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5565–5573. [Google Scholar]
- Landrieu, L.; Simonovsky, M. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4558–4567. [Google Scholar]
- Pham, Q.H.; Nguyen, T.; Hua, B.S.; Roig, G.; Yeung, S.K. JSIS3D: Joint Semantic-Instance Segmentation of 3D Point Clouds with Multi-Task Pointwise Networks and Multi-Value Conditional Random Fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8827–8836. [Google Scholar]
- Yi, L.; Zhao, W.; Wang, H.; Sung, M.; Guibas, L.J. Gspn: Generative shape proposal network for 3d instance segmentation in point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3947–3956. [Google Scholar]
- Li, G.; Muller, M.; Thabet, A.; Ghanem, B. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–3 November 2019; pp. 9267–9276. [Google Scholar]
- Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. Pointcnn: Convolution on x-transformed points. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 820–830. [Google Scholar]
- Jiang, M.; Wu, Y.; Zhao, T.; Zhao, Z.; Lu, C. Pointsift: A sift-like network module for 3d point cloud semantic segmentation. arXiv 2018, arXiv:1807.00652. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Milz, S.; Simon, M.; Fischer, K.; Pöpperl, M. Points2Pix: 3D Point-Cloud to Image Translation using conditional Generative Adversarial Networks. arXiv 2019, arXiv:1901.09280. [Google Scholar]
- You, Y.; Lou, Y.; Liu, Q.; Ma, L.; Wang, W.; Tai, Y.; Lu, C. PRIN: Pointwise Rotation-Invariant Network. arXiv 2018, arXiv:1811.09361. [Google Scholar]
- Kanezaki, A.; Matsushita, Y.; Nishida, Y. Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5010–5019. [Google Scholar]
- Barnea, S.; Filin, S. Segmentation of terrestrial laser scanning data using geometry and image information. ISPRS J. Photogramm. Remote Sens. 2013, 76, 33–48. [Google Scholar] [CrossRef]
- Che, E.; Olsen, M.J. An Efficient Framework for Mobile Lidar Trajectory Reconstruction and Mo-norvana Segmentation. Remote Sens. 2019, 11, 836. [Google Scholar] [CrossRef] [Green Version]
- Kundu, A.; Yin, X.; Fathi, A.; Ross, D.A.; Brewington, B.; Funkhouser, T.A.; Pantofaru, C. Virtual Multi-view Fusion for 3D Semantic Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Volume 12369, pp. 518–535. [Google Scholar]
- Li, Y.; Pirk, S.; Su, H.; Qi, C.R.; Guibas, L.J. Fpnn: Field probing neural networks for 3d data. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 307–315. [Google Scholar]
- Tatarchenko, M.; Dosovitskiy, A.; Brox, T. Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2088–2096. [Google Scholar]
- Wu, W.; Qi, Z.; Fuxin, L. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9621–9630. [Google Scholar]
- Cai, G.; Jiang, Z.; Wang, Z.; Huang, S.; Chen, K.; Ge, X.; Wu, Y. Spatial Aggregation Net: Point Cloud Semantic Segmentation Based on Multi-Directional Convolution. Sensors 2019, 19, 4329. [Google Scholar] [CrossRef] [Green Version]
- Li, Y.; Ma, L.; Zhong, Z.; Cao, D.; Li, J. TGNet: Geometric Graph CNN on 3-D Point Cloud Segmentation. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3588–3600. [Google Scholar] [CrossRef]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph cnn for learning on point clouds. arXiv 2018, arXiv:1801.07829. [Google Scholar] [CrossRef] [Green Version]
- Lan, S.; Yu, R.; Yu, G.; Davis, L.S. Modeling local geometric structure of 3d point clouds using geo-cnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 998–1008. [Google Scholar]
- Thomas, H.; Qi, C.R.; Deschaud, J.E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. KPConv: Flexible and Deformable Convolution for Point Clouds. arXiv 2019, arXiv:1904.08889. [Google Scholar]
- Liu, Y.; Fan, B.; Xiang, S.; Pan, C. Relation-Shape Convolutional Neural Network for Point Cloud Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8895–8904. [Google Scholar]
- Lin, Y.; Yan, Z.; Huang, H.; Du, D.; Liu, L.; Cui, S.; Han, X. FPConv: Learning Local Flattening for Point Convolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 4292–4301. [Google Scholar]
- Xu, Q.; Sun, X.; Wu, C.; Wang, P.; Neumann, U. Grid-GCN for Fast and Scalable Point Cloud Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 5660–5669. [Google Scholar]
- Tang, H.; Liu, Z.; Zhao, S.; Lin, Y.; Lin, J.; Wang, H.; Han, S. Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Volume 12373, pp. 685–702. [Google Scholar]
- Hu, Z.; Zhen, M.; Bai, X.; Fu, H.; Tai, C. JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Volume 12365, pp. 222–239. [Google Scholar]
- Eldar, Y.; Lindenbaum, M.; Porat, M.; Zeevi, Y.Y. The farthest point strategy for progressive image sampling. IEEE Trans. Image Process. 1997, 6, 1305–1315. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Armeni, I.; Sener, O.; Zamir, A.R.; Jiang, H.; Brilakis, I.; Fischer, M.; Savarese, S. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1534–1543. [Google Scholar]
- Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Nießner, M. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5828–5839. [Google Scholar]
- Zhang, Z.; Hua, B.; Yeung, S. ShellNet: Efficient Point Cloud Convolutional Neural Networks Using Concentric Shells Statistics. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–3 November 2019; pp. 1607–1616. [Google Scholar]
- Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11105–11114. [Google Scholar]
| Method | Accuracy without RGB (%) | Accuracy with RGB (%) |
|---|---|---|
| PointNet [19] | 70.46 | 78.62 |
| PointNet++ [20] | 75.66 | 82.23 |
| PointSIFT [29] | 76.61 | 82.33 |
| SPG [24] | - | 85.50 |
| SAN [44] | 78.39 | 82.93 |
| DGCNN [46] | - | 84.10 |
| ShellNet [57] | - | 87.10 |
| RandLA-Net [58] | - | 88.00 |
| Level-1 | 84.64 | 88.51 |
| Level-2 | 84.66 | 87.46 |
| MSSCN | 87.41 | 89.80 |
| Class | Level-1 (%) | Level-2 (%) | MSSCN (%) |
|---|---|---|---|
| ceiling | 97.65 | 97.54 | 97.77 |
| floor | 99.20 | 98.57 | 98.86 |
| wall | 93.44 | 92.57 | 93.63 |
| beam | 81.05 | 85.92 | 81.99 |
| column | 70.42 | 74.08 | 76.67 |
| window | 80.33 | 82.15 | 89.11 |
| door | 83.30 | 85.63 | 85.86 |
| table | 79.50 | 80.65 | 83.48 |
| chair | 88.42 | 88.02 | 90.19 |
| sofa | 81.30 | 70.26 | 81.58 |
| bookcase | 84.28 | 81.40 | 84.16 |
| board | 75.98 | 73.21 | 77.52 |
| clutter | 80.37 | 79.28 | 80.16 |
| Method | Accuracy (%) |
|---|---|
| 3DCNN [56] | 73.0 |
| PointNet [19] | 73.9 |
| PointNet++ [20] | 84.5 |
| PointCNN [28] | 85.1 |
| SAN [44] | 85.1 |
| PointSIFT [29] | 86.0 |
| MSSCN | 86.3 |
| Weight 1 | Weight 2 | Weight 3 | Weight 4 | Accuracy (%) |
|---|---|---|---|---|
| 0.0 | 0.4 | 0.4 | 0.1 | 86.96 |
| 0.1 | 0.4 | 0.4 | 0.1 | 87.15 |
| 0.3 | 0.4 | 0.4 | 0.1 | 86.67 |
| 0.5 | 0.4 | 0.4 | 0.1 | 87.41 |
| 0.7 | 0.4 | 0.4 | 0.1 | 86.96 |
| 0.4 | 0.0 | 0.4 | 0.1 | 86.92 |
| 0.4 | 0.1 | 0.4 | 0.1 | 86.88 |
| 0.4 | 0.3 | 0.4 | 0.1 | 86.94 |
| 0.4 | 0.5 | 0.4 | 0.1 | 87.07 |
| 0.4 | 0.7 | 0.4 | 0.1 | 86.62 |
| 0.4 | 0.4 | 0.1 | 0.1 | 86.91 |
| 0.4 | 0.4 | 0.3 | 0.1 | 86.57 |
| 0.4 | 0.4 | 0.5 | 0.1 | 87.04 |
| 0.4 | 0.4 | 0.7 | 0.1 | 86.67 |
| 0.4 | 0.4 | 0.4 | 0.0 | 87.00 |
| 0.4 | 0.4 | 0.4 | 0.1 | 87.35 |
| 0.4 | 0.4 | 0.4 | 0.3 | 86.86 |
| 0.4 | 0.4 | 0.4 | 0.5 | 86.82 |
| 0.4 | 0.4 | 0.4 | 0.7 | 86.69 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Du, J.; Jiang, Z.; Huang, S.; Wang, Z.; Su, J.; Su, S.; Wu, Y.; Cai, G. Point Cloud Semantic Segmentation Network Based on Multi-Scale Feature Fusion. Sensors 2021, 21, 1625. https://doi.org/10.3390/s21051625