Leveraging Deep Convolutional Neural Network for Point Symbol Recognition in Scanned Topographic Maps
Figure 1. Examples of point symbols on the 1955 STM of Grand Island. (a,b) Locally enlarged map areas with point symbols. Map source: USGS-HTMC.
Figure 2. Sample-count statistics for the training set and the full scanned map dataset.
Figure 3. Three annotation samples from the scanned map dataset.
Figure 4. Statistics for the entire vectorized map dataset.
Figure 5. Three annotation samples from the vectorized map dataset.
Figure 6. Annotation samples from the PASCAL VOC dataset, which is mainly used for natural object detection.
Figure 7. Overview of the YOLOv4 framework combined with ASPP. The backbone is the feature extraction network CSPDarknet53, the neck is PANet combined with ASPP, and three heads are used for prediction.
Figure 8. Illustration of the SPP module structure, redrawn according to [41]. Features are extracted by multiple pooling layers at different scales and fused into a 21-dimensional vector that feeds the fully connected layer.
Figure 9. Illustration of the proposed ASPP module structure, comprising multiple parallel branches: a 1 × 1 convolution; three 3 × 3 dilated convolutions with dilation rates of 1, 3, and 5; and image pooling.
Figure 10. Images obtained by applying the data augmentation methods to the original image (a): (b) horizontal flip, (c) vertical flip, (d) cropping, (e) ToGray, and (f) small-angle rotation.
Figure 11. Images obtained by applying Gaussian blur combined with color jitter to the original image (a): (b) brightness transformation, (c) hue transformation, (d) contrast transformation, (e) saturation transformation, (f) Gaussian blur, and (g) Gaussian blur combined with color jitter.
Figure 12. AP values for the ten point symbols in Table 1 obtained with different models.
Figure 13. Results obtained by different models for Map A: (a) Faster R-CNN, (b) VGG-SSD, (c) YOLOv3, (d) YOLOv4, and (e) ASPP-YOLOv4.
Figure 14. Results obtained by different models for Map B: (a) Faster R-CNN, (b) VGG-SSD, (c) YOLOv3, (d) YOLOv4, and (e) ASPP-YOLOv4.
Figure 15. Results obtained by different models for Map C: (a) Faster R-CNN, (b) VGG-SSD, (c) YOLOv3, (d) YOLOv4, and (e) ASPP-YOLOv4.
Figure 16. Comparison of Grad-CAM results obtained with the YOLO heads of three YOLOv4-based models. Warmer colors indicate stronger attention to an area, and cooler colors the opposite; the white arrow points to the object of study, point symbol III.
Figure 17. Results obtained for different input image sizes on Map D: (a) 320 × 320, (b) 512 × 512, and (c) 640 × 640 pixels.
Figure 18. Results obtained for different input image sizes on Map E: (a) 320 × 320, (b) 512 × 512, and (c) 640 × 640 pixels.
Figure 19. Symbol detection processing for large map images: "first cut, then zoom, detect, and finally stitch".
Figure 20. Results obtained by the proposed ASPP-YOLOv4 for Map F (1241 × 955 pixels): (a) original image and (b) image skewed during scanning. The red circle indicates a missed recognition.
Figure 21. Recognition results of different methods for Map G: (a) template matching, (b) GHT, and (c) the proposed ASPP-YOLOv4. In (a,b), green boxes indicate correct recognitions and red circles indicate missed recognitions.
Figure 22. Symbol recognition results of the template matching method for different scenes. Green boxes indicate correct recognitions; red circles indicate missed recognitions. (a) Template. (b) Results for a different map background. (c) Results for the symbol scaled to 0.8× size. (d) Results for a 5° rotation.
Figure 23. Recognition results on the scanned Map H of the model trained on the vectorized map dataset: (a) without and (b) with the Gaussian blur and color jitter data augmentation.
Figure 24. Statistics for different rotation angles on our test set.
Figure 25. Recognition results for different rotation angles of Map I: (a) original image, (b) rotated by 2.5°, and (c) rotated by 5°.
Abstract
1. Introduction
- (1) Point symbols are small, have similar shapes, and are generally composed of simple geometric patterns.
- (2) They are difficult to separate from other geographic elements, such as contour lines, using image segmentation [10] or other existing methods.
- (3) Depending on the map quality, the symbols may be blurred, deformed, or discontinuous after scanning.
- (1) Some STMs cannot be released publicly; consequently, point symbol samples are scarce, and studies lack the necessary data support.
- (2) Because symbols frequently overlap with other geographic elements, they are difficult to recognize in their entirety; thus, it is challenging to achieve both efficient and accurate recognition.
- (3) One centimeter on a topographic map (TM) represents several kilometers in the field. However, existing methods position point symbols with a slight shift, so the determination of symbol anchors must be improved.
- (1) STM point symbol datasets were constructed: 1909 scanned map images and 2505 vectorized map images at different scales were annotated. We developed a DCNN model based on YOLOv4 for symbol recognition, adopting the ASPP module, which enlarges the receptive field to capture more information about the symbol object, to improve the accuracy and efficiency of symbol recognition.
- (2) To solve the cross-domain problem caused by datasets comprising maps of different styles and to improve model robustness, two data augmentation methods were designed: Gaussian blur combined with color jitter, and small-angle rotation with an affine transformation. Additionally, to improve positioning accuracy, k-means++ clustering was adopted to generate anchor boxes better suited to our point symbol datasets. A cut-stitch approach was designed for large map images and validated through several experiments on our dataset (a tiling sketch of this cut-stitch idea follows this list).
- (3) We achieved a mean average precision (mAP) of 98.11% and a mean intersection over union (mIoU) of 0.876 for point symbol recognition on our test set. The proposed method is the first to employ DCNNs for recognizing point symbols in STMs, and it achieves higher recognition and positioning accuracy than mainstream detectors and classical algorithms.
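The cut-stitch approach can be sketched as follows. This is a minimal illustration of the "first cut, then zoom, detect, and finally stitch" idea: the tile size, overlap, and the `detect` callable are our assumptions for exposition, not the paper's exact configuration.

```python
# Hypothetical sketch of the cut-stitch pipeline for large map images.
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[float, float, float, float, float, int]  # x1, y1, x2, y2, score, class

def iou(a: Box, b: Box) -> float:
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / area if area > 0 else 0.0

def nms(boxes: List[Box], iou_thr: float = 0.5) -> List[Box]:
    """Greedy NMS to merge duplicate detections from overlapping tiles."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept: List[Box] = []
    for b in boxes:
        if all(iou(b, k) < iou_thr for k in kept):
            kept.append(b)
    return kept

def detect_large_map(image: np.ndarray,
                     detect: Callable[[np.ndarray], List[Box]],
                     tile: int = 512, overlap: int = 64) -> List[Box]:
    """Cut the map into overlapping tiles, detect per tile, stitch back."""
    h, w = image.shape[:2]
    stride = tile - overlap
    out: List[Box] = []
    for y0 in range(0, max(h - overlap, 1), stride):
        for x0 in range(0, max(w - overlap, 1), stride):
            patch = image[y0:y0 + tile, x0:x0 + tile]  # detector resizes it (the "zoom" step)
            for x1, y1, x2, y2, s, c in detect(patch):
                out.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, s, c))  # back to map coords
    return nms(out)
```

The overlap between tiles ensures that symbols falling on a tile boundary appear whole in at least one tile; the final NMS pass removes the resulting duplicates.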
2. Point Symbol Datasets
2.1. Scanned Map Dataset for Training and Testing
2.2. Vectorized Map Dataset for Training
3. Methods
3.1. Dilated Convolution
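A dilated (atrous) 3 × 3 kernel with dilation rate r covers an effective window of k_eff = k + (k − 1)(r − 1) pixels at no extra parameter cost, which is how the receptive field is enlarged. A minimal PyTorch check (the framework choice is ours):

```python
# Effective receptive field of a dilated 3x3 convolution for rates 1, 3, 5.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 64, 64)
for r in (1, 3, 5):
    conv = nn.Conv2d(3, 16, kernel_size=3, dilation=r, padding=r)  # padding=r preserves 64x64
    k_eff = 3 + (3 - 1) * (r - 1)  # effective kernel size of the dilated filter
    print(f"rate={r}: effective kernel {k_eff}x{k_eff}, output {tuple(conv(x).shape)}")
```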
3.2. Atrous Spatial Pyramid Pooling (ASPP)
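Following the structure described for Figure 9 (a 1 × 1 convolution, three 3 × 3 dilated convolutions with rates 1, 3, and 5, and image pooling), a minimal PyTorch sketch of such an ASPP block might look as follows; the channel widths, the omission of normalization, and the placement in the PANet neck are our assumptions.

```python
# Sketch of an ASPP block with parallel branches, per the Figure 9 description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +            # 1x1 branch
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
             for r in rates])                                      # dilated 3x3 branches
        self.image_pool = nn.Sequential(                           # global image pooling
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))    # fuse all branches

# e.g., applied to a 13x13x1024 neck feature map:
y = ASPP(1024, 256)(torch.randn(1, 1024, 13, 13))  # -> (1, 256, 13, 13)
```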
3.3. Data Augmentation for Generalization Capability
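A pipeline combining the transformations shown in Figures 10 and 11 (flips, ToGray, small-angle rotation, color jitter, and Gaussian blur) could be assembled as below. The choice of the albumentations library and all probabilities and magnitudes are illustrative assumptions, not the paper's exact settings; `BboxParams` keeps the symbol annotations consistent under the geometric transforms.

```python
# Minimal augmentation pipeline approximating Figures 10-11 (assumed parameters).
import numpy as np
import albumentations as A

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.ToGray(p=0.1),
        A.Rotate(limit=5, p=0.5),                       # small-angle rotation
        A.ColorJitter(brightness=0.2, contrast=0.2,     # color jitter
                      saturation=0.2, hue=0.05, p=0.5),
        A.GaussianBlur(blur_limit=(3, 7), p=0.3),       # Gaussian blur
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

image = np.zeros((512, 512, 3), dtype=np.uint8)         # placeholder map patch
out = augment(image=image, bboxes=[(100, 100, 140, 140)], labels=[2])
```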
3.4. Anchor Boxes Based on K-Means++ Clustering
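The following sketch illustrates k-means++-style anchor clustering on the annotated box (width, height) pairs, using the customary 1 − IoU distance from the YOLO literature; the distance choice and refinement details are our assumptions, as the paper may differ.

```python
# K-means++ anchor clustering on box sizes with a 1 - IoU distance (assumed).
import numpy as np

def wh_iou(wh: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (N,2) box sizes and (K,2) anchors, assuming shared centers."""
    inter = (np.minimum(wh[:, None, 0], anchors[None, :, 0]) *
             np.minimum(wh[:, None, 1], anchors[None, :, 1]))
    union = (wh[:, 0] * wh[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_pp_anchors(wh: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    anchors = wh[rng.integers(len(wh))][None]            # first center at random
    for _ in range(k - 1):                               # k-means++ seeding: sample the
        d = 1.0 - wh_iou(wh, anchors).max(axis=1)        # next center proportionally to
        anchors = np.vstack([anchors,                    # its 1 - IoU distance
                             wh[rng.choice(len(wh), p=d / d.sum())]])
    for _ in range(iters):                               # Lloyd refinement
        assign = wh_iou(wh, anchors).argmax(axis=1)
        anchors = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                            else anchors[i] for i in range(k)])
    return anchors, wh_iou(wh, anchors).max(axis=1).mean()  # anchors and avgIoU
```

With k = 9 and the annotated box sizes, this kind of procedure yields anchor sets and avgIoU scores analogous to those reported in Table 2.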
4. Experimental Results and Analysis
4.1. Comparison of Recognition Models Based on Different Object Detectors
4.2. Feature Analysis Based on Gradient-Weighted Class Activation Mapping (Grad-CAM)
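Grad-CAM [Selvaraju et al.] weights each activation map of a chosen layer by the global-average-pooled gradient of the class score and passes the weighted sum through a ReLU. A minimal PyTorch sketch is below; it assumes a classification-style (1, num_classes) output for brevity, whereas for a YOLO head one would backpropagate the objectness or class score of a chosen prediction instead.

```python
# Minimal Grad-CAM: channel weights from pooled gradients, ReLU of weighted sum.
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]        # assumed (1, num_classes) output
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    A, dA = acts[0], grads[0]                      # activations/gradients, (1, C, H, W)
    alpha = dA.mean(dim=(2, 3), keepdim=True)      # channel weights: GAP of gradients
    cam = F.relu((alpha * A).sum(dim=1))           # weighted sum over channels, then ReLU
    return cam / cam.max().clamp(min=1e-8)         # normalized (1, H, W) heat map
```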
4.3. Comparison of Different Input Image Scales
4.4. Classical Algorithms versus Learning-Based Models for Point Symbol Recognition
- Long runtime for point symbol recognition. Existing methods require denoising and preprocessing of the map before recognition; the image must then be segmented to extract candidate target areas, which is time-consuming. Moreover, point symbols may be erroneously removed during preprocessing. In contrast, the proposed deep learning method is an end-to-end process: input RGB images pass through the model to directly produce recognition results, with good recognition performance. The proposed method also recognizes multiple point symbols simultaneously, overcoming the limitation of existing methods that recognize symbols one at a time. The inference times of the different methods are presented in Table 5.
- Relatively poor generalization capability. Conventional methods can detect only point symbols of the same color, size, orientation, and shape. If the paper TM is tilted during scanning, recognition performance may degrade. As shown in Figure 22, if the size, orientation, shape, or background of the symbols on the map changes slightly, the point symbols may not be accurately recognized. In contrast, as described in Sections 4.3 and 4.5, the proposed model can accurately recognize point symbols that are rotated, scaled, or drawn on different basemaps. Therefore, the proposed method has good generalization capability.
- Complexity of manually defined rules. Classical methods approximate point symbols as compositions of line elements based on the symbols' skeleton information and represent manually defined rules in a parameter space to recognize them. When symbols consist of multiple graphical elements, they are difficult to represent accurately. Thus, classical algorithms are unstable and highly complex.
4.5. Ablation Experiments
4.5.1. Direct Transfer Learning from Vectorized Maps to Scanned Maps via Gaussian Blur and Color Jitter Augmentation
4.5.2. Improving Model Rotation Robustness via Small-Angle Rotation Data Augmentation
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Liu, T.; Xu, P.; Zhang, S. A review of recent advances in scanned topographic map processing. Neurocomputing 2019, 328, 75–87.
- Lin, S.S.; Lin, C.H.; Hu, Y.J.; Lee, T.Y. Drawing road networks with mental maps. IEEE Trans. Vis. Comput. Graph. 2014, 20, 1241–1252.
- Lladós, J.; Valveny, E.; Sánchez, G.; Martí, E. Symbol recognition: Current advances and perspectives. In Proceedings of the International Workshop on Graphics Recognition, Kingston, ON, Canada, 7–8 September 2001; pp. 104–128.
- Uhl, J.H.; Leyk, S.; Chiang, Y.Y.; Knoblock, C.A. Towards the automated large-scale reconstruction of past road networks from historical maps. Comput. Environ. Urban Syst. 2022, 94, 101794.
- Burghardt, K.; Uhl, J.H.; Lerman, K.; Leyk, S. Road network evolution in the urban and rural United States since 1900. Comput. Environ. Urban Syst. 2022, 95, 101803.
- Leyk, S.; Chiang, Y.Y. Information extraction based on the concept of geographic context. In Proceedings of AutoCarto 2016, Reston, VA, USA, 9–11 December 2016; pp. 100–110.
- Khan, I.; Islam, N.; Ur Rehman, H.; Khan, M. A comparative study of graphic symbol recognition methods. Multimed. Tools Appl. 2020, 79, 8695–8725.
- Song, J.; Zhang, Z.; Qi, Y.; Miao, Q. Point symbol recognition algorithm based on improved generalized Hough transform and nonlinear mapping. In Proceedings of the 2018 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), Jinan, China, 14–18 December 2018; pp. 134–139.
- Nass, A.; van Gasselt, S. Dynamic cartography: A concept for multidimensional point symbols. In Progress in Cartography; Springer: Berlin/Heidelberg, Germany, 2016; pp. 17–30.
- Leyk, S.; Boesch, R. Colors of the past: Color image segmentation in historical topographic maps based on homogeneity. GeoInformatica 2010, 14, 1–21.
- Szendrei, R.; Elek, I.; Márton, M. A knowledge-based approach to raster-vector conversion of large scale topographic maps. Acta Cybern. 2011, 20, 145–165.
- Camassa, R.; Kuang, D.; Lee, L. A geodesic landmark shooting algorithm for template matching and its applications. SIAM J. Imaging Sci. 2017, 10, 303–334.
- Shen, J.; Du, Y.; Wang, W.; Li, X. Lazy random walks for superpixel segmentation. IEEE Trans. Image Process. 2014, 23, 1451–1462.
- Tian, F.; Wei, R.; Ding, Q.; Xiong, L. New approach for oil-well symbol recognition in petroleum geological structure map. In Proceedings of the 2010 International Conference on Electrical and Control Engineering, Wuhan, China, 25–27 June 2010; pp. 5357–5360.
- Miao, Q.; Xu, P.; Li, X.; Song, J.; Li, W.; Yang, Y. The recognition of the point symbols in the scanned topographic maps. IEEE Trans. Image Process. 2017, 26, 2751–2766.
- Reiher, E.; Li, Y.; Delle Donne, V.; Lalonde, M.; Hayne, C.; Zhu, C. A system for efficient and robust map symbol recognition. In Proceedings of the 13th International Conference on Pattern Recognition, Washington, DC, USA, 25–29 August 1996; Volume 3, pp. 783–787.
- Pezeshk, A.; Tutwiler, R.L. Automatic feature extraction and text recognition from scanned topographic maps. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5047–5063.
- Pezeshk, A.; Tutwiler, R. Extended character defect model for recognition of text from maps. In Proceedings of the 2010 IEEE Southwest Symposium on Image Analysis & Interpretation (SSIAI), Austin, TX, USA, 23–25 May 2010; pp. 85–88.
- Pezeshk, A.; Tutwiler, R.L. Improved multi angled parallelism for separation of text from intersecting linear features in scanned topographic maps. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 1078–1081.
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as points. arXiv 2019, arXiv:1904.07850.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497.
- Qiu, C.; Li, H.; Guo, W.; Chen, X.; Yu, A.; Tong, X.; Schmitt, M. Transferring transformer-based models for cross-area building extraction from remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4104–4116.
- Li, S.; Liao, C.; Ding, Y.; Hu, H.; Jia, Y.; Chen, M.; Xu, B.; Ge, X.; Liu, T.; Wu, D. Cascaded residual attention enhanced road extraction from remote sensing images. ISPRS Int. J. Geo-Inf. 2021, 11, 9.
- Cheng, G.; Zhou, P.; Han, J. Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7405–7415.
- Rahimzadegan, M.; Sadeghi, B. Development of the iterative edge detection method applied on blurred satellite images: State of the art. J. Appl. Remote Sens. 2016, 10, 035018.
- Rahimzadegan, M.; Sadeghi, B.; Masoumi, M.; Taghizadeh Ghalehjoghi, S. Application of target detection algorithms to identification of iron oxides using ASTER images: A case study in the north of Semnan province, Iran. Arab. J. Geosci. 2015, 8, 7321–7331.
- Zhang, P.; Gong, M.; Su, L.; Liu, J.; Li, Z. Change detection based on deep feature representation and mapping transformation for multi-spatial-resolution remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 116, 24–41.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Quan, Y.; Shi, Y.; Miao, Q.; Qi, Y. A combinatorial solution to point symbol recognition. Sensors 2018, 18, 3403.
- Guo, M.; Bei, W.; Huang, Y.; Chen, Z.; Zhao, X. Deep learning framework for geological symbol detection on geological maps. Comput. Geosci. 2021, 157, 104943.
- Kim, H.; Lee, W.; Kim, M.; Moon, Y.; Lee, T.; Cho, M.; Mun, D. Deep-learning-based recognition of symbols and texts at an industrially applicable level from images of high-density piping and instrumentation diagrams. Expert Syst. Appl. 2021, 183, 115337.
- Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Liu, Q.; Fan, X.; Xi, Z.; Yin, Z.; Yang, Z. Object detection based on YOLOv4-Tiny and improved bidirectional feature pyramid network. J. Phys. Conf. Ser. 2022, 2209, 012023.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–10 October 2016; pp. 21–37.
- Chen, C.; Zhong, J.; Tan, Y. Multiple-oriented and small object detection with convolutional neural networks for aerial image. Remote Sens. 2019, 11, 2176.
- Chen, L.; Shi, W.; Deng, D. Improved YOLOv3 based on attention mechanism for fast and accurate ship detection in optical remote sensing images. Remote Sens. 2021, 13, 660.
- Li, G.; Xie, H.; Yan, W.; Chang, Y.; Qu, X. Detection of road objects with small appearance in images for autonomous driving in various traffic situations using a deep learning based approach. IEEE Access 2020, 8, 211164–211172.
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
- Wang, P.; Chen, P.; Yuan, Y.; Liu, D.; Huang, Z.; Hou, X.; Cottrell, G. Understanding convolution for semantic segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1451–1460.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
- Angeles, J.; Pasini, D. Affine transformations. In Fundamentals of Geometry Construction: The Math Behind the CAD; Springer International Publishing: Cham, Switzerland, 2020; pp. 103–163.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
Table 1. The ten point symbols and their meanings.

| Identifier | I | II | III | IV | V | VI | VII | VIII | IX | X |
|---|---|---|---|---|---|---|---|---|---|---|
| Meaning | Triangulation point | Independent astronomical point | Mine shaft | Television transmission tower | Power plant | Substation | Oil well | Scientific observation station | Monument | Waterworks |
Table 2. Anchor boxes and average IoU obtained by clustering for different k values (each column lists the k anchors for that k).

| k-Values | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Anchors | 62.53 | 33.43 | 27.40 | 25.38 | 21.43 | 21.44 | 19.40 | 19.41 | 19.40 |
| | | 58.64 | 47.51 | 33.62 | 40.40 | 37.37 | 37.37 | 34.34 | 33.34 |
| | | | 67.72 | 48.48 | 33.62 | 34.63 | 27.54 | 27.54 | 26.54 |
| | | | | 66.71 | 54.54 | 48.48 | 48.48 | 43.42 | 43.42 |
| | | | | | 71.77 | 59.59 | 40.72 | 52.52 | 37.67 |
| | | | | | | 73.80 | 59.59 | 40.72 | 51.52 |
| | | | | | | | 74.79 | 62.62 | 62.61 |
| | | | | | | | | 76.81 | 53.92 |
| | | | | | | | | | 79.78 |
| avgIoU | 0.598 | 0.698 | 0.745 | 0.766 | 0.799 | 0.818 | 0.831 | 0.843 | 0.852 |
Table 3. Comparison of different detection models on our test set.

| Model | Precision (%) | Recall (%) | F1 Score (%) | mAP (%) | mIoU |
|---|---|---|---|---|---|
| Faster R-CNN | 97.21 | 96.44 | 96.82 | 97.81 | 0.874 |
| VGG-SSD | 96.68 | 96.59 | 96.63 | 97.58 | 0.865 |
| YOLOv3 | 92.89 | 95.68 | 94.26 | 97.35 | 0.824 |
| YOLOv4 | 96.38 | 96.40 | 96.40 | 97.86 | 0.851 |
| ASPP-YOLOv4 | 97.64 | 96.48 | 97.06 | 98.11 | 0.876 |
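For reference, the quantities reported in Table 3 and the tables below follow the standard detection-metric definitions:

$$
\mathrm{Precision}=\frac{TP}{TP+FP},\qquad
\mathrm{Recall}=\frac{TP}{TP+FN},\qquad
F_1=\frac{2\cdot\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},
$$

$$
\mathrm{IoU}(A,B)=\frac{|A\cap B|}{|A\cup B|},\qquad
\mathrm{mAP}=\frac{1}{N}\sum_{i=1}^{N}\mathrm{AP}_i,
$$

where $\mathrm{AP}_i$ is the area under the precision-recall curve for symbol class $i$ and $N=10$ is the number of classes in Table 1.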
Table 4. Results for different input image sizes.

| Input Size | Precision (%) | Recall (%) | F1 Score (%) | mAP (%) | MACs (G) | Inference Time (ms) |
|---|---|---|---|---|---|---|
| 320 × 320 | 97.53 | 89.26 | 93.29 | 97.04 | 37.92 | 47.66 |
| 512 × 512 | 98.05 | 92.89 | 95.32 | 97.86 | 97.06 | 62.88 |
| 640 × 640 | 98.67 | 94.01 | 96.46 | 98.24 | 151.66 | 73.18 |
Table 5. Inference times of template matching, GHT, and the proposed method.

| Map | Size (pixels) | Template Matching (s) | GHT (s) | Ours (s) |
|---|---|---|---|---|
| 1 | 1110 × 738 | 0.965 | 4.431 | 0.085 |
| 2 | 504 × 484 | 0.453 | 1.730 | 0.057 |
| 3 | 389 × 385 | 0.418 | 1.670 | 0.049 |
Table 6. Ablation of the Gaussian blur combined with color jitter augmentation.

| Gaussian Blur Combined with Color Jitter | Precision (%) | Recall (%) | F1 Score (%) | mAP (%) |
|---|---|---|---|---|
| × | 97.10 | 48.05 | 58.10 | 76.40 |
| √ | 96.72 | 65.61 | 74.43 | 80.09 |
Table 7. Ablation of the small-angle rotation data augmentation: per-class AP (%) for symbols I–X and mAP (%).

| Small-Angle Rotation Data Augmentation | I | II | III | IV | V | VI | VII | VIII | IX | X | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| × | 79.01 | 91.33 | 35.90 | 69.99 | 40.93 | 14.58 | 17.92 | 46.32 | 77.03 | 72.56 | 54.56 |
| √ | 100.0 | 99.74 | 92.63 | 99.91 | 92.22 | 88.32 | 93.38 | 92.96 | 99.88 | 100.0 | 95.90 |