Article

A Hybrid Discrete Artificial Bee Colony Algorithm Based on Label Similarity for Solving Point-Feature Label Placement Problem

Wen Cao, Jiaqi Xu, Yong Zhang, Siqi Zhao, Chu Xu and Xiaofeng Wu

1 School of Geoscience and Technology, Zhengzhou University, Zhengzhou 450001, China
2 Zhengzhou Zhonghe Jing Xuan Information Technology Co., Ltd., Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2023, 12(10), 429; https://doi.org/10.3390/ijgi12100429
Submission received: 26 June 2023 / Revised: 6 October 2023 / Accepted: 11 October 2023 / Published: 17 October 2023

Abstract

The artificial bee colony (ABC) algorithm is a promising metaheuristic for continuous optimization problems, but it performs poorly on discrete problems. To address this issue, this paper proposes a hybrid discrete artificial bee colony (HDABC) algorithm based on label similarity for the point-feature label placement (PFLP) problem. Firstly, to better adapt to PFLP, we modify the update mechanisms of the employed bees and onlooker bees. Employed bees learn the label positions of better individuals, while onlooker bees perform dynamic-probability searches using two neighborhood operators. Additionally, the onlooker bees' selection method chooses the most promising solutions based on label similarity, which improves the algorithm's search capability. Finally, the original greedy acceptance strategy is replaced by the Metropolis acceptance strategy to avoid premature convergence. Systematic experiments verify the effectiveness of the neighborhood solution generation method, the selection operation based on label similarity, and the Metropolis acceptance strategy. In addition, experimental comparisons were made on different instances and label densities. The results show that the proposed algorithm outperforms or is competitive with the compared algorithms.

1. Introduction

Combinatorial optimization problems are among the most challenging and widely applicable mathematical problems today. Dealing with a combinatorial optimization problem means choosing the optimal solution to maximize or minimize the objective function under the given conditions. There are many practical application scenarios for such problems, such as graph coloring [1], workshop scheduling [2], the traveling salesman problem [3,4], and label placement [5]. Solving such problems is significant for productivity improvement and cost reduction. Label placement is one of the most attractive branches of discrete combinatorial optimization. In practical applications, its solution space is usually very large and grows exponentially with the problem dimension, making it a typical Non-deterministic Polynomial-hard (NP-hard) problem [6]. Label placement can be understood as assigning labels to each feature on the map according to cartographic rules and preferences while ensuring maximum freedom from conflict, ultimately obtaining a clear, attractive, and easy-to-read map. According to the type of map feature, labeling problems can be divided into three kinds: point features [7] (hospitals, travel spots, etc.), line features [8] (rivers, roads, etc.), and area features [9] (continents, countries, oceans, etc.). All three types can be abstracted as combinatorial optimization problems via a label candidate model and a label quality evaluation function. Since the number of labels for point features is the largest, the most extensive research has been conducted on the placement of point-feature labels.
The difficulties of point-feature label placement are label conflict, label-feature conflict, and label correlation. In addition, the difficulty of solving PFLP increases exponentially with the size of the problem. The PFLP can be solved by both exact and metaheuristic algorithms; exact algorithms are only suitable for small-scale optimization problems and are extremely time-consuming on large-scale ones. A metaheuristic algorithm can obtain optimal or near-optimal solutions in an acceptable time and is a general heuristic strategy [10]. The rules of a metaheuristic use the current search information to adjust the search, forming an intelligent iterative search mechanism. Such rules can effectively avoid falling into local optima and improve search efficiency; metaheuristics offer better efficiency and applicability than exact algorithms for complex optimization problems and have become the mainstream methods for solving the PFLP, examples being simulated annealing [11,12], tabu search [13], and genetic algorithms [14]. Metaheuristic algorithms fall into two main categories: single-solution-based and population-based approaches. Population-based approaches are further divided into evolutionary algorithms and swarm intelligence algorithms. The most common single-solution-based approaches are tabu search and simulated annealing. Alvim and Taillard [15] used POPMUSIC to divide the problem into subproblems solved separately using tabu search. Rabello et al. [16] combined a clustering search algorithm with simulated annealing to solve the point-feature label placement problem. Araujo et al. [17] improved Rabello's clustering search by proposing a density clustering search that uses three methods, density-based clustering (DBSCAN), natural group identification (NGI), and label propagation (LP), to detect promising solutions. Guerine et al. [18] combined data mining techniques with clustering search to achieve faster convergence and better label results than the previous clustering search. Cravo et al. [19] applied a greedy randomized adaptive search procedure to point-feature label placement. Among evolutionary algorithms, genetic algorithms and differential evolution are two typical examples: Lu et al. [20] proposed a hybrid of differential evolution and a genetic algorithm for the multiple geographical feature label placement problem, and Deng et al. [21] improved the method of Lu et al. in three aspects: selection of candidate positions, evaluation of label quality, and sequential iteration. Li et al. [22] combined a genetic algorithm and tabu search to solve the point-feature label placement problem. Swarm intelligence metaheuristics are rarely applied to label placement; only the ant colony algorithm [23] has been applied to point-feature label placement, mainly because most swarm intelligence algorithms were designed for continuous optimization problems and need some improvements to solve discrete problems.
This paper focuses on an innovative application of the artificial bee colony (ABC) algorithm, an excellent swarm intelligence optimization algorithm for continuous optimization problems [24], to point-feature label placement. In ABC, solutions are updated in cooperation among the bees: the onlooker bees strengthen exploitation while the scout bees preserve diversity, and the algorithm has few parameters and is easy to implement. The algorithm was originally proposed to solve numerical optimization problems and outperformed many other algorithms when tested on complex mathematical benchmarks. The method was subsequently applied successfully to similar combinatorial optimization problems such as the traveling salesman problem [25,26] and job-shop scheduling [27], so we believe it is equally well suited to the point-feature label placement problem. Since the method was originally designed for continuous problems, some improvements are needed to apply it to combinatorial optimization; we therefore propose a hybrid discrete artificial bee colony (HDABC) algorithm based on label similarity to solve the point-feature label placement problem. In HDABC, a new solution update method is designed for employed bees and onlooker bees to avoid the loss of population diversity caused by a single update formula. In the selection phase of the onlooker bees, a combination of label similarity and roulette selection is used to choose more promising solutions for optimization. Furthermore, the greedy acceptance strategy is replaced by the Metropolis acceptance strategy to further improve the balance between exploration and diversity. The effectiveness of the algorithm is verified by comparing it with other metaheuristics on the tested instances. Our main contributions are as follows:
  • A new discrete optimization algorithm (HDABC) is proposed for solving the point-feature label placement problem;
  • The update methods for employed and onlooker bees are redesigned to suit the point-feature label placement problem. The employed bees learn from good individuals, and the onlooker bees search alternately with two search operators based on dynamic probabilities;
  • Onlooker bees select more promising solutions for updating based on label similarity to improve the performance of the algorithm;
  • Replace the greedy acceptance strategy of employed bees and onlooker bees with the Metropolis acceptance strategy to avoid the premature convergence problem.

2. Point-Feature Label Placement Problem

Point-feature label placement refers to the assignment of label text to point features on the map according to certain rules such as minimization of conflicts, label preference, and non-ambiguity. The point-feature label placement based on the label candidate model and the label quality evaluation function can be abstracted into a combinatorial optimization problem. In the following, we focus on a brief description of the label candidate model and the label quality evaluation function.

2.1. Label Candidate Model

Point-feature label placement requires a label candidate model to provide candidate label positions, and the quality of this model directly affects the labeling result, so selecting a suitable model is important. Label candidate models are mainly divided into fixed models [28] and sliding models [29]. The sliding model can make full use of the gap areas around labels through its sliding strategy, but its computational complexity is higher. The common fixed models are the 4-orientation and 8-orientation models. Zhou et al. [23] proposed a multi-level multi-orientation model. To fully utilize the blank areas around labels, more label orientations can be employed, but this may increase the running time. To balance quality and efficiency, the label candidate model used in this paper adopts eight orientations. As shown in Figure 1, the shaded part of the figure is the point-feature symbol, which cannot exceed the minimum enclosing circle of the point feature. The rectangular areas 0–7 represent the candidate positions of the point; each position has a priority, the position directly to the right is optimal, and a smaller number indicates a higher priority. The dashed rectangular box in the figure is the minimum enclosing rectangle containing the point feature and the label candidate rectangles.

2.2. Label Quality Evaluation Function

Point-feature label placement aims to obtain conflict-free, aesthetically pleasing, and ambiguity-free point-feature label maps, and the quality evaluation function is constructed mainly by considering conflict, label priority, and label correlation [30,31]. The evaluation function abstracts the PFLP problem into a minimization objective: smaller function values indicate better solutions. For a solution x, the function is defined as follows:
$$\min f(x) = \sum_{i=1}^{n}\sum_{j=1}^{m} Q(i,j)\,\alpha_{ij} \times 1000/n \tag{1}$$
$$Q(i,j) = \omega_1 Q_{con}(i,j) + \omega_2 Q_{pri}(i,j) + \omega_3 Q_{ass}(i,j) \tag{2}$$
Constraints:
$$\sum_{j=1}^{m} \alpha_{ij} = 1 \tag{3}$$
Here n is the number of point features, m is the number of label candidates, $Q(i,j)$ is the evaluation function when the i-th point feature is at the j-th candidate position, and $\alpha_{ij}$ is a switch variable: $\alpha_{ij} = 1$ when the i-th point feature's label is at the j-th candidate position, and $\alpha_{ij} = 0$ otherwise. $Q_{con}(i,j)$ is the conflict score between the i-th point feature and the other point features when its label is at the j-th candidate position; it is set to 1 if there is a conflict with any other point and to 0 otherwise. $Q_{pri}(i,j)$ is the label priority score when the i-th point feature's label is at the j-th candidate position. Taking the 8-orientation label candidate model as an example, $Q_{pri}(i,j)$ is assigned as follows: the rightmost direction is 0, the top direction is 1/8, the leftmost direction is 2/8, the bottom direction is 3/8, the top-right direction is 4/8, the top-left direction is 5/8, the bottom-left direction is 6/8, and the bottom-right direction is 7/8. $Q_{ass}(i,j)$ is the label relevance score at the j-th candidate position. $\omega_1$, $\omega_2$, and $\omega_3$ are the weights of conflict, label priority, and label correlation, respectively, and usually take the values 0.5, 0.3, and 0.2. Since label correlation depends on the minimum distance recognizable by the human eye, the ambiguity distance cannot be accurately measured; therefore, we do not consider label correlation in this paper and set it to 0.
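To make the objective function concrete, the following is a minimal Python sketch of Equations (1)–(3). The rectangle geometry, the direction table, and the overlap test are simplified assumptions of ours (the paper does not specify conflict detection at this level of detail); only the 0/1 conflict score, the k/8 priority scheme, the weights, and the 1000/n scaling follow the text.

```python
from itertools import combinations

# Weights from Section 2.2; label correlation (W3) is set to 0 in the paper.
W1, W2, W3 = 0.5, 0.3, 0.2
PRIORITY = [k / 8 for k in range(8)]  # Q_pri of positions 0..7 (0 = best)

def label_rect(point, pos, w=40, h=12):
    """Toy geometry (our assumption): center-based label rectangle for pos."""
    dx = [1, 0, -1, 0, 1, -1, -1, 1][pos]  # right, top, left, bottom, ...
    dy = [0, 1, 0, -1, 1, 1, -1, -1][pos]
    x, y = point
    return (x + dx * w / 2, y + dy * h / 2, w, h)

def overlap(r1, r2):
    """Axis-aligned overlap test on center-based rectangles."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = r1, r2
    return abs(x1 - x2) < (w1 + w2) / 2 and abs(y1 - y2) < (h1 + h2) / 2

def quality(solution, points):
    """f(x) of Eq. (1): smaller is better; Q_ass is omitted (set to 0)."""
    n = len(points)
    q_con = [0.0] * n
    for i, j in combinations(range(n), 2):  # Q_con: 1 if any conflict, else 0
        if overlap(label_rect(points[i], solution[i]),
                   label_rect(points[j], solution[j])):
            q_con[i] = q_con[j] = 1.0
    total = sum(W1 * q_con[i] + W2 * PRIORITY[solution[i]] for i in range(n))
    return total * 1000 / n

print(quality([0, 4, 1], [(0, 0), (30, 5), (200, 200)]))  # two labels clash
```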

3. A Hybrid Discrete Artificial Bee Colony Algorithm

3.1. Standard Artificial Bee Colony Algorithm

The artificial bee colony (ABC) algorithm is a swarm intelligence optimization algorithm derived from the social behavior of honey bees and is used to solve numerical optimization problems. A solution of the problem to be optimized represents the location of a food source, and the amount of nectar in the food source represents the fitness of the corresponding solution. The colony consists of employed bees, onlooker bees, and scout bees, with equal numbers of employed and onlooker bees. The algorithm is divided into four phases: the initialization phase, the employed bee phase, the onlooker bee phase, and the scout bee phase.
1. Initialization phase
Randomly initialize the population P, consisting of N individuals, each of which is a d-dimensional vector constructed as follows:
$$x_{ij} = l_j + r\,(u_j - l_j) \tag{4}$$
where $x_{ij}$ is the jth dimension of the ith solution, $i \in \{1, 2, \ldots, N\}$ and $j \in \{1, 2, \ldots, d\}$. N is the population size, $l_j$ and $u_j$ are the lower and upper bounds of $x_{ij}$, and r is a random number in [0, 1].
2. Employed bee phase
Employed bees play a vital role in searching for food sources, gathering information on their location and quality, and sharing this information with other bees. Each employed bee is assigned to a specific food source, meaning that there are an equal number of employed bees and food sources. Neighborhood search can be performed according to Equation (5) to generate new solutions to find better food sources.
$$v_{ij} = \begin{cases} x_{ij}, & \text{if } j \neq q \\ x_{ij} + \varphi\,(x_{ij} - x_{kj}), & \text{otherwise} \end{cases} \tag{5}$$
$x_i$ represents the food source to be updated, $x_k$ is a randomly selected food source, and $v_i$ is the newly generated food source. $v_{ij}$, $x_{ij}$, and $x_{kj}$ are the jth dimensions of $v_i$, $x_i$, and $x_k$, respectively, where $i, k \in \{1, 2, \ldots, N\}$, $j \in \{1, 2, \ldots, d\}$, and $k \neq i$. q is a randomly chosen dimension, $q \in \{1, 2, \ldots, d\}$, and $\varphi$ is a random number in [−1, 1]. After the new candidate food source $v_i$ is produced, its label quality evaluation function is calculated, and a greedy selection is applied between $v_i$ and $x_i$. At this stage, every solution has an opportunity to be improved.
3. Onlooker bee phase
When all the employed bees have completed their searches, they share information about the food sources in the dance area, and the onlooker bees evaluate this information to select food sources by roulette. The higher the fitness of a nectar source, the greater its probability of being selected by an onlooker bee. The fitness is calculated by Equation (6), and the selection probability is given in Equation (7), with N being the number of employed bees. After selecting a food source $x_i$, an onlooker bee performs a search to generate a new solution according to Equation (5) and greedily accepts the new solution. Some solutions may receive multiple opportunities for improvement during this phase, while others may receive none.
$$fit(x_i) = \frac{1}{f(x_i)} \tag{6}$$
$$p_i = \frac{fit(x_i)}{\sum_{i=1}^{N} fit(x_i)} \tag{7}$$
where $f(x_i)$ is the label quality evaluation function value of the ith solution and $fit(x_i)$ is the fitness of $x_i$. Since the label quality evaluation function defines a minimization problem, smaller values indicate better solutions, so the inverse of $f(x_i)$ is taken as $fit(x_i)$ for roulette selection; the larger $fit(x_i)$, the better the solution. $p_i$ is the selection probability of the ith solution.
4. Scout bee phase
During this phase, scout bees are used to find new food sources not found by the employed and onlooker bees, preventing the search from falling into a local optimum. When a solution has not improved after a set number of trials L, it is considered fully explored, and its employed bee becomes a scout bee that uses Equation (4) to generate a random solution to replace it. Meanwhile, onlooker bees continue to choose better solutions for exploration with higher probability. A compact sketch of this standard loop is given below.
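The following is an illustrative implementation of the standard continuous ABC loop described above, not the paper's code; the sphere objective, the population size, and the +1 guard in the fitness (the paper's Equation (6) uses 1/f directly) are our choices.

```python
import random

def abc(f, dim, lb, ub, n=20, limit=50, iters=500):
    """Standard ABC (Eqs. 4-7) minimizing a continuous function f."""
    rand_solution = lambda: [lb + random.random() * (ub - lb) for _ in range(dim)]
    X = [rand_solution() for _ in range(n)]           # initialization, Eq. (4)
    trials = [0] * n

    def try_improve(i):
        k = random.choice([s for s in range(n) if s != i])
        q = random.randrange(dim)
        v = X[i][:]                                   # neighborhood search, Eq. (5)
        v[q] = X[i][q] + random.uniform(-1, 1) * (X[i][q] - X[k][q])
        if f(v) < f(X[i]):                            # greedy acceptance
            X[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n):                            # employed bee phase
            try_improve(i)
        fits = [1 / (1 + f(x)) for x in X]            # fitness, Eq. (6) + guard
        for _ in range(n):                            # onlooker phase, Eq. (7)
            try_improve(random.choices(range(n), weights=fits)[0])
        for i in range(n):                            # scout bee phase
            if trials[i] > limit:
                X[i], trials[i] = rand_solution(), 0
    return min(X, key=f)

best = abc(lambda x: sum(v * v for v in x), dim=5, lb=-5, ub=5)  # sphere test
```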

3.2. A Hybrid Discrete Artificial Bee Colony Algorithm Based on Label Similarity

3.2.1. Coding and Initialization

In the proposed algorithm, a real-number encoding is used to represent a solution of PFLP. We adopt the 8-orientation label candidate model, so the 8 candidate positions of each point feature are encoded with the numbers 0 to 7. The standard artificial bee colony algorithm uses Equation (4) to initialize the colony; however, PFLP is a typical discrete optimization problem, and Equation (4) is no longer suitable. Therefore, the swarm is initialized randomly: each initial solution is generated by randomly selecting one of the eight label candidate positions for every point feature.
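A minimal sketch of this random initialization follows; the population size N = 50 is taken from Table 1, while the 103-point instance size is just an example.

```python
import random

NUM_POSITIONS = 8  # candidate positions 0-7 of each point feature

def init_swarm(num_solutions, num_points):
    """Each solution is a vector of label positions, one per point feature."""
    return [[random.randrange(NUM_POSITIONS) for _ in range(num_points)]
            for _ in range(num_solutions)]

swarm = init_swarm(num_solutions=50, num_points=103)
```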

3.2.2. Generation of Neighborhood Solutions

The standard ABC solution update equation was designed for continuous optimization problems. For a discrete combinatorial optimization problem like PFLP, the update equation needs to be redesigned. First, subtraction is redefined; assume that $e_i$ and $g_i$ are the i-th dimensions of solutions e and g, respectively.
$$e_i - g_i = \begin{cases} g_i, & \text{if } e_i \neq g_i \text{ and } f(e) > f(g) \\ \text{rand}, & \text{otherwise} \end{cases} \tag{8}$$
The standard ABC algorithm essentially learns from other solutions, so subtraction is defined as learning the label positions of another solution. If the learned label position differs from the original and the other bee has better fitness, the label position of the better bee is adopted; otherwise, a random label position is assigned. Combined with the above subtraction, the update formula is shown in Equation (9), where the addition indicates replacing the label position in the i-th dimension of e with the newly learned label position.
$$e = e + (e_i - g_i) \tag{9}$$
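A sketch of this discrete learning operator is given below. Equations (8)–(9) are stated per dimension; following the one-dimension update of standard ABC, we assume a single randomly chosen dimension is updated per call (this granularity is our reading, not stated explicitly).

```python
import random

def employed_update(e, g, f_e, f_g, num_positions=8):
    """Eqs. (8)-(9): e learns a label position from g in one random dimension.
    f_e and f_g are the cached evaluation values of e and g (smaller = better)."""
    i = random.randrange(len(e))
    v = e[:]
    if e[i] != g[i] and f_e > f_g:        # g is fitter: copy its label position
        v[i] = g[i]
    else:                                  # otherwise assign a random position
        v[i] = random.randrange(num_positions)
    return v
```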
However, if both employed bees and onlooker bees update label positions by Equation (9), the update scheme is too uniform to fully explore the solution space. Therefore, we integrate transformation operators into the proposed algorithm for neighborhood solution generation: Equation (9) is used for the employed bee updates, while two operators, shift and conflict-shift, are used by the onlooker bees to enhance the search. Generating a neighboring solution changes the label positions of some points, which may reduce conflicts and change label priorities, thereby decreasing the label quality evaluation function.
Shift: a point $p_i$ is randomly selected from the n dimensions, and a new label position is randomly generated to replace the original one. This is the most commonly used operator for generating new solutions in point-feature label placement. The shift operator updates only one dimension, which guarantees that the solution space is explored at a fine granularity. Figure 2 is a schematic diagram of the shift neighborhood transformation operator: there are eight points in total, and the point at index 2 is selected, transforming label position 5 into label position 4.
Conflict-shift: randomly select a point $p_i$ from the n dimensions, determine the set C of point features that conflict with it, and generate new label positions for $p_i$ and all points in C to replace their original positions. Compared with the shift operator, the conflict-shift operator perturbs the solution more strongly and can effectively jump out of local optima. Figure 3 is a schematic diagram of the conflict-shift operator: there are eight points in total; the point with index 4 is selected, and its set of conflicting point features C contains the points with indices 1 and 3. New label orientations are randomly generated for the points with indices 1, 3, and 4 in an attempt to eliminate the conflicts between them.
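The two operators can be sketched as follows; `conflict_set(sol, i)` is an assumed helper that returns the indices of points whose labels currently clash with point i (e.g., via the rectangle-overlap test from Section 2.2).

```python
import random

def shift(sol, num_positions=8):
    """Shift: re-draw the label position of one randomly chosen point."""
    v = sol[:]
    i = random.randrange(len(sol))
    v[i] = random.randrange(num_positions)
    return v

def conflict_shift(sol, conflict_set, num_positions=8):
    """Conflict-shift: re-draw the positions of a random point and of all
    points currently in conflict with it."""
    v = sol[:]
    i = random.randrange(len(sol))
    for j in {i} | set(conflict_set(sol, i)):
        v[j] = random.randrange(num_positions)
    return v
```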
For neighborhood solution generation, a higher degree of perturbation is desirable at the beginning of the iteration to improve solution quality quickly and jump out of local optima, while strong perturbation late in the iteration may fail to produce better solutions: late in the search the solution is close to the approximate global optimum, and a large perturbation would move it away. Therefore, the two operators are initially selected with equal probability; since conflict-shift perturbs the solution more strongly, its selection probability is continuously reduced, and that of shift correspondingly increased, as the iteration proceeds, but the probability of conflict-shift is not allowed to fall below a threshold. In summary, the operator selection probability is adjusted dynamically with the iteration count as an adaptive parameter; the dynamic probabilities of the two operators are as follows:
$$D_{cs} = min_{cs} + \frac{max_{cs} - min_{cs}}{iter_{max}}\,(iter_{max} - iter) \tag{10}$$
$$D_c = 1 - D_{cs} \tag{11}$$
where $D_{cs}$ is the dynamic selection probability of the conflict-shift operator, $D_c$ is the selection probability of the shift operator, $iter_{max}$ is the maximum number of iterations, $iter$ is the current iteration, and $max_{cs}$ and $min_{cs}$ are the maximum and minimum selection probabilities of the conflict-shift operator.
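A sketch of Equations (10)–(11) follows; the max_cs and min_cs values used here are illustrative placeholders, as the paper does not list them.

```python
def operator_probs(iteration, iter_max, max_cs=0.5, min_cs=0.1):
    """Eqs. (10)-(11): conflict-shift probability decays linearly from
    max_cs at iteration 0 to min_cs at iter_max; shift gets the remainder."""
    d_cs = min_cs + (max_cs - min_cs) / iter_max * (iter_max - iteration)
    return d_cs, 1.0 - d_cs

assert abs(operator_probs(0, 1000)[0] - 0.5) < 1e-12      # equal odds at start
assert abs(operator_probs(1000, 1000)[0] - 0.1) < 1e-12   # floor at the end
```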

3.2.3. Selection Operation Based on Label Similarity

In the standard ABC algorithm, onlooker bees are used to enhance the search capability, and roulette selection is commonly used to choose the food sources associated with the employed bees. However, the differences between the fitness values of the solutions are not large, so the differences between their selection probabilities are not large either. The key to ABC is that the neighborhood of a good solution offers a higher chance of finding a better solution than the neighborhood of a poor one, so good solutions need more exploration. The roulette selection method therefore cannot generate sufficient selection pressure, which weakens the performance of the artificial bee colony algorithm. To improve performance, we select the most promising solutions for neighborhood generation based on the label similarity between solutions, enhancing the search capability of the onlooker bees. Solutions are classified into superior and inferior food sources according to the mean of their evaluation values: those below the mean are classified as superior, and the rest as inferior. Based on the solution information shared by the employed bees, if the selected food source is inferior, the onlooker bee visually inspects the surrounding food sources with a certain detection probability pb to select the most suitable one; that is, it evaluates similar food sources and selects the best among them. This method concentrates improvement effort on more promising solutions and effectively improves the search capability. Since a solution is encoded as the label position of each point, the Hamming distance can be used to measure the similarity between solutions; Equations (12) and (13) measure the similarity between solutions $x_i$ and $x_j$.
$$dv_{ij}(k) = \begin{cases} 1, & \text{if } S_i[k] = S_j[k] \\ 0, & \text{otherwise} \end{cases} \tag{12}$$
$$d(i,j) = \sum_{k=1}^{n} dv_{ij}(k) \tag{13}$$
where $d(i,j)$ is the similarity of solutions $x_i$ and $x_j$, n is the dimension of the point-feature label placement problem, k indexes the kth dimension, $dv_{ij}(k)$ is the similarity of $x_i$ and $x_j$ in the kth dimension, and $S_i[k]$ and $S_j[k]$ are the label positions of $x_i$ and $x_j$ in the kth dimension, respectively. Label positions are encoded as 0–7.
Figure 4 gives a simple example of the label similarity measure for a PFLP problem of dimension 10. The similarity contributed by each dimension of $x_i$ and $x_j$ is calculated according to Equation (12): if the label positions in a dimension are the same, it is set to 1, and otherwise to 0. Finally, the total similarity d(i,j) = 2 is obtained according to Equation (13).
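A direct sketch of Equations (12)–(13) is shown below; the two vectors are illustrative, not the actual Figure 4 data, but like the figure they agree in exactly two dimensions.

```python
def similarity(s_i, s_j):
    """Eqs. (12)-(13): number of dimensions where two solutions place the
    label in the same candidate position (larger = more similar)."""
    return sum(1 for a, b in zip(s_i, s_j) if a == b)

assert similarity([0, 3, 3, 5, 5, 5, 1, 5, 4, 0],
                  [1, 4, 2, 0, 5, 6, 7, 3, 4, 1]) == 2
```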
The selection operation based on label similarity (SOLS) proceeds as follows. First, the food sources are classified into superior and inferior ones according to the fitness of the solutions. Let $x_i$ be the solution obtained by roulette selection; if $x_i$ is an inferior food source, then with detection probability pb the similarity between $x_i$ and each remaining solution is calculated. The solutions are ranked by similarity from largest to smallest, and the top p solutions are added to the candidate table of $x_i$. If the candidate table contains a solution with better fitness than $x_i$, the best solution $x_l$ in the table replaces the roulette-selected $x_i$ and is updated with the two operators under the dynamic probabilities. A sketch of this procedure follows.
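The following is a minimal sketch of SOLS under the reading above, with p = 4 and pb = 0.5 taken from Table 1; the roulette weighting 1/(1+f) adds a division-by-zero guard that the paper does not mention.

```python
import random

def similarity(a, b):  # Eqs. (12)-(13), as in the sketch above
    return sum(x == y for x, y in zip(a, b))

def sols_select(swarm, f_vals, p=4, pb=0.5):
    """Pick the index of the solution an onlooker bee will improve.
    f_vals are evaluation values (smaller = better)."""
    mean_f = sum(f_vals) / len(f_vals)
    weights = [1.0 / (1.0 + f) for f in f_vals]        # roulette on inverted f
    i = random.choices(range(len(swarm)), weights=weights)[0]
    if f_vals[i] > mean_f and random.random() < pb:    # inferior food source
        others = sorted((j for j in range(len(swarm)) if j != i),
                        key=lambda j: similarity(swarm[i], swarm[j]),
                        reverse=True)
        candidates = others[:p]                        # most similar solutions
        best = min(candidates, key=lambda j: f_vals[j])
        if f_vals[best] < f_vals[i]:
            i = best                                   # switch to the fitter one
    return i
```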

3.2.4. Metropolis Acceptance Strategy

In the standard ABC algorithm, both employed bees and onlooker bees use a greedy strategy to decide whether to accept a new solution. However, when solving a discrete combinatorial optimization problem like PFLP, the greedy acceptance strategy usually leads to premature convergence. Therefore, this paper uses the Metropolis acceptance criterion to decide whether to accept worse solutions, increasing the diversity of solutions. Let f(A) be the evaluation function of the current solution A and f(B) that of the new solution B. If the new solution is better than the old one, it directly replaces the old solution; otherwise, the Metropolis criterion determines whether the worse solution is accepted. The acceptance probability is as follows:
$$p = \begin{cases} 1, & \text{if } f(B) < f(A) \\ e^{-(f(B) - f(A))/T}, & \text{otherwise} \end{cases}, \qquad T = T \times \alpha \tag{14}$$
where T is the current temperature and α is the cooling parameter; an initial temperature T0 is set and continuously cooled during the iterative process. Each time the number of iterations reaches the predefined annealing length (SAmax), the temperature is reduced. Cooling continues until the specified minimum temperature (Tmin) is reached, at which point the cooling process stops.
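A sketch of Equation (14) with its cooling schedule; T0 = 1, α = 0.95, Tmin = 0.01, and SAmax = 20 follow Table 1.

```python
import math
import random

def metropolis_accept(f_old, f_new, T):
    """Eq. (14): always accept an improvement; accept a worse solution
    with probability exp(-(f_new - f_old) / T)."""
    if f_new < f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / T)

T, alpha, T_min, sa_max = 1.0, 0.95, 0.01, 20   # Table 1 values for HDABC
# after every sa_max iterations: T = max(T * alpha, T_min)
```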

3.2.5. Reset the Scout Bee

In the standard ABC, when a solution has not improved within a certain number of trials, its employed bee abandons the nectar source and becomes a scout, which randomly generates a new solution to replace the original one. For PFLP, however, randomly generating a new solution is not a good strategy, because the neighborhood moves used for PFLP cannot quickly improve a completely random solution. Instead, we apply the conflict-shift operator t times to the solution to be abandoned, attempting to jump out of the current stagnant state.
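A possible sketch of this modified scout phase; the paper does not state the value of t or whether each perturbation is accepted greedily, so both choices below (t = 3, keeping the best of t perturbations) are our assumptions.

```python
def scout_reset(sol, f, conflict_shift, t=3):
    """Apply the conflict-shift operator t times to a stagnant solution,
    keeping the best perturbation found. `conflict_shift` is the operator
    from Section 3.2.2 with its conflict helper pre-bound
    (e.g., via functools.partial)."""
    best = sol
    for _ in range(t):
        cand = conflict_shift(best)
        if f(cand) < f(best):
            best = cand
    return best
```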
The overall flowchart of the hybrid discrete ABC algorithm based on label similarity is shown in Figure 5.

4. Experimental Results and Analysis

This section focuses on presenting the experimental results of the HDABC based on label similarity. We compare the performance of our algorithm with some existing algorithms in the literature to evaluate the effectiveness of HDABC. Furthermore, we analyze and discuss several essential components of HDABC.

4.1. Instance and Parameter Settings

To verify the performance of HDABC, we employed web crawling techniques to retrieve POI data from Kaifeng and Beijing in China, from which portions were selected to create real point-feature datasets of 103, 266, and 525 points, respectively. We compared HDABC with the genetic algorithm (GA) [14], simulated annealing (SA) [32], tabu search (TS) [13], and the discrete differential evolution and genetic algorithm (DDEGA) [21]. The specific experimental parameters are listed in Table 1. The maximum number of iterations for HDABC, GA, TS, and DDEGA is 10,000; a run ends when the algorithm reaches the maximum number of iterations or when the best solution remains unchanged for 2000 iterations, since further iterations then produce no change. For SA, a run ends when the temperature drops to the minimum temperature or when the best solution remains unchanged for 30,000 iterations. The symbol size r and the distance from the coordinate point to the label are 5 and 10 pixels, respectively, and the label font is 12 pixels. All methods were implemented in C++ with Microsoft Visual Studio, and the experiments were carried out on an Intel (R) Core (TM) i5-8500 3.0 GHz processor with 8 GB of RAM.

4.2. Experimental Results and Comparison with Other Algorithms

To verify the effectiveness of the proposed algorithm, HDABC was compared with GA, SA, TS, and DDEGA at eight commonly used label densities: 5%, 10%, 15%, 20%, 25%, 30%, 35%, and 40%. Label density ρ refers to the ratio of the total symbol and label area to the total map area, reflecting the density of the feature and label distribution. The comparison considers two aspects: the number of conflict-free labels and the label quality evaluation function. To ensure reliable results, each algorithm was run independently 10 times and the data were averaged. Table 2 and Table 3 compare the number of conflict-free labels of HDABC, GA, SA, TS, and DDEGA for label densities from 5% to 40%; a higher number of conflict-free labels indicates a better result. Table 4 and Table 5 compare the label quality evaluation function values for the same densities; here a smaller value indicates a better result. In these tables, "Instance" denotes the test case, where I1, I2, and I3 correspond to the datasets of 103, 266, and 525 points, respectively; "Algorithm" denotes the algorithm used; "Best" is the best result over the ten runs; and "Average" is the average over the ten runs. Figure 6 shows the rankings of the algorithms by average label number and quality evaluation function.
In terms of the number of labels, Table 2 and Table 3 show that the differences between algorithms are not significant on the 103-point dataset, where in some cases the average and best label numbers coincide: the labeling situation for small datasets is relatively simple. As the dataset size increases, however, the differences become apparent. On the 266-point and 525-point datasets, the proposed algorithm achieved higher average and best label numbers than GA, TS, and DDEGA. Compared with SA, HDABC performed better in most cases and was only slightly worse at certain instances and label densities. Overall, HDABC provided higher-quality solutions for PFLP in terms of label number for the vast majority of test cases and label densities. Additionally, Figure 6a displays the rankings of the algorithms by label number, with lower values indicating better performance. As shown in Figure 6a, HDABC had the best overall rank, followed by SA, TS, and GA, with DDEGA ranked last.
Based on Table 4 and Table 5, HDABC outperforms the other algorithms in the majority of cases in terms of both the average and the best label quality evaluation function, across label densities and instances. At the 5% density of the 103-point dataset, HDABC, GA, SA, and DDEGA all achieved good results on the best label quality evaluation function. As the dataset size and label density increase, label placement becomes more complex and dense, and the advantages of the proposed algorithm become more apparent; it provides higher-quality solutions for the vast majority of instances and densities. In terms of average label quality, HDABC is inferior to SA only at the 40% label density, and in all other scenarios it is superior or equal to all other algorithms. In terms of best label quality, HDABC is slightly inferior to SA in only a few cases and superior or equal to all other algorithms otherwise. Figure 6b displays the rankings of the algorithms by the label quality evaluation function, with lower values indicating better performance. According to Figure 6b, HDABC ranks best across datasets and label densities, followed by SA, TS, and GA, with DDEGA ranked last.

4.3. Analysis and Discussion

4.3.1. Analysis of Neighborhood Solution Generation

In the traditional artificial bee colony algorithm, employed bees and onlooker bees generate new solutions mainly by Equation (5), which is too uniform and not well suited to discrete problems. Therefore, we use Equation (9) for the employed bee updates and two search operators with dynamic probabilities for the onlooker bees to better fit the discrete PFLP. To verify the effectiveness of the proposed neighborhood solution generation, we compared it with the Equation (5) update; the results are shown in Figure 7. The HDABC update approach produces significantly better results than the traditional ABC update of Equation (5) across instances and densities, with an average reduction of 20.6 in the average label quality score. The employed bees' learning from good food sources is consistent with the features of PFLP while preserving the good properties of the original update equation, and the dynamic, alternating use of two neighborhood operators by the onlooker bees gives the algorithm a certain randomness, effectively avoiding local optima traps.

4.3.2. The Role of Selection Operations Based on Label Similarity

The onlooker bee phase of the ABC algorithm is intended to enhance exploitation, so a suitable selection operation is needed. The traditional selection operation is roulette selection, and we compare the selection operation based on label similarity (SOLS) proposed in this paper against it; the results are shown in Table 6. First, in terms of label quality scores, SOLS performs better than the original roulette wheel selection at most instances and label densities; on the 525-point dataset, the average label quality of SOLS is superior at every density. By updating more promising solutions, SOLS achieves better label results in most instances and densities. We can also observe that on the 103-point dataset, the running time of SOLS is similar to that of roulette selection, but as the dataset grows, the running time of SOLS decreases by 45.1% compared with roulette selection. As the dataset size increases, so does the difficulty of the point-feature label problem. Because the iteration terminates when the best solution remains unchanged for a given number of iterations or the maximum number of iterations is reached, SOLS, which gives more opportunities to more promising solutions, accelerates the convergence of the algorithm.

4.3.3. The Role of Metropolis Acceptance Strategy

In the standard artificial bee colony algorithm, the employed bees and onlooker bees accept new solutions through a greedy acceptance criterion. However, this acceptance strategy can quickly lead to premature convergence on discrete problems. We therefore replaced the greedy criterion with the Metropolis acceptance criterion and compared the algorithm with greedy acceptance against the algorithm with Metropolis acceptance; the results are shown in Figure 8. The Metropolis acceptance strategy is significantly better than the greedy strategy, with an average decrease of 14.6 in the average label quality score. This suggests that accepting lower-quality solutions under the Metropolis criterion enhances the algorithm's ability to escape local optima and effectively avoids premature convergence, achieving a good balance between exploration and exploitation.

5. Conclusions and Future Outlook

In this paper, we propose a hybrid discrete artificial bee colony algorithm based on label similarity for point-feature label placement problems. Because the ABC algorithm was originally designed for continuous problems, we adapted several of its steps to suit discrete problems. Specifically, neighborhood solution generation was modified by introducing a learning mechanism for the employed bees and a dynamic-probability use of two neighborhood search operators for the onlooker bees; the selection operation was improved with label similarity to identify more promising solutions for updating; and the Metropolis acceptance criterion replaced the original greedy criterion for a better balance between exploration and exploitation. To validate the method, we compared it with other algorithms on various instances and label densities. The experimental results demonstrate that our approach is a qualified and competitive solution for point-feature label placement problems. In future work, we will devise a more reasonable ambiguity factor for the label quality evaluation function and explore the application of ABC to other combinatorial optimization problems.

Author Contributions

Wen Cao and Jiaqi Xu designed the research; Wen Cao and Jiaqi Xu conceived the experiments; Wen Cao and Jiaqi Xu conducted the experiments and analyzed the results; Wen Cao and Jiaqi Xu contributed to the drafting of the work; Wen Cao, Jiaqi Xu, Yong Zhang, Siqi Zhao, Chu Xu and Xiaofeng Wu contributed to the review and editing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Goudet, O.; Duval, B.; Hao, J.-K. Population-based gradient descent weight learning for graph coloring problems. Knowl.-Based Syst. 2021, 212, 106581. [Google Scholar] [CrossRef]
  2. Zhang, C.; Zhou, Y.; Peng, K.; Li, X.; Lian, K.; Zhang, S. Dynamic flexible job shop scheduling method based on improved gene expression programming. Meas. Control 2020, 54, 1125–1135. [Google Scholar] [CrossRef]
  3. Gunduz, M.; Aslan, M. DJAYA: A discrete Jaya algorithm for solving traveling salesman problem. Appl. Soft Comput. 2021, 105, 107275. [Google Scholar] [CrossRef]
  4. Zhang, Z.; Han, Y. Discrete sparrow search algorithm for symmetric traveling salesman problem. Appl. Soft Comput. 2022, 118, 108469. [Google Scholar] [CrossRef]
  5. Zhang, X.; Poon, S.-H.; Liu, S.; Li, M.; Lee, V.C. Consistent dynamic map labeling with fairness and importance. Comput. Aided Geom. Des. 2020, 81, 101892. [Google Scholar] [CrossRef]
  6. Christensen, J.; Marks, J.; Shieber, S. An empirical study of algorithms for point-feature label placement. ACM Trans. Graph. 1995, 14, 203–232. [Google Scholar] [CrossRef]
  7. Lhuillier, A.; van Garderen, M.; Weiskopf, D. Density-based label placement. Vis. Comput. 2019, 35, 1041–1052. [Google Scholar] [CrossRef]
  8. She, J.; Liu, J.; Li, C.; Li, J.; Wei, Q. A line-feature label placement algorithm for interactive 3D map. Comput. Graph. 2017, 67, 86–94. [Google Scholar] [CrossRef]
  9. Li, Y.; Sakamoto, M.; Shinohara, T.; Satoh, T. Automatic label placement of area-features using deep learning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 117–122. [Google Scholar] [CrossRef]
  10. Glover, F.; Greenberg, H.J. New approaches for heuristic search: A bilateral linkage with artificial intelligence. Eur. J. Oper. Res. 1989, 39, 119–130. [Google Scholar] [CrossRef]
  11. Zoraster, S. Practical Results Using Simulated Annealing for Point Feature Label Placement. Cartogr. Geogr. Inf. Syst. 1997, 24, 228–238. [Google Scholar] [CrossRef]
  12. Li, L.; Zhang, H.; Zhu, H.; Kuai, X.; Hu, W. A Labeling Model Based on the Region of Movability for Point-Feature Label Placement. ISPRS Int. J. Geo-Inf. 2016, 5, 159. [Google Scholar] [CrossRef]
  13. Yamamoto, M.; Camara, G.; Lorena, L.A.N. Tabu Search Heuristic for Point-Feature Cartographic Label Placement. GeoInformatica 2002, 6, 77–90. [Google Scholar] [CrossRef]
  14. Liang, J.; Xu, W.; Zhou, Y. A Map Point Labeling Method Based on Genetic Algorithm and Overlapping Conflict Prevention Mechanism. Geogr. Geo-Inf. Sci. 2019, 35, 6–11. [Google Scholar]
  15. Alvim, A.C.F.; Taillard, E.D. POPMUSIC for the point feature label placement problem. Eur. J. Oper. Res. 2009, 192, 396–413. [Google Scholar] [CrossRef]
  16. Rabello, R.L.; Mauri, G.R.; Ribeiro, G.M.; Lorena, L.A.N. A Clustering Search metaheuristic for the Point-Feature Cartographic Label Placement Problem. Eur. J. Oper. Res. 2014, 234, 802–808. [Google Scholar] [CrossRef]
  17. Araújo, E.J.; Chaves, A.A.; Lorena, L.A. Improving the Clustering Search heuristic: An application to cartographic labeling. Appl. Soft Comput. 2019, 77, 261–273. [Google Scholar] [CrossRef]
  18. Guerine, M.; Rosseti, I.; Plastino, A. A hybrid data mining heuristic to solve the point-feature cartographic label placement problem. Int. Trans. Oper. Res. 2020, 27, 1189–1209. [Google Scholar] [CrossRef]
  19. Cravo, G.L.; Ribeiro, G.M.; Lorena, L.A.N. A greedy randomized adaptive search procedure for the point-feature cartographic label placement. Comput. Geosci. 2008, 34, 373–386. [Google Scholar] [CrossRef]
  20. Lu, F.; Deng, J.; Li, S.; Deng, H. A Hybrid of Differential Evolution and Genetic Algorithm for the Multiple Geographical Feature Label Placement Problem. ISPRS Int. J. Geo-Inf. 2019, 8, 237. [Google Scholar] [CrossRef]
  21. Deng, J.; Guo, Z.; Lessani, M.N. Multiple Geographical Feature Label Placement Based on Multiple Candidate Positions in Two Degrees of Freedom Space. IEEE Access 2021, 9, 144085–144105. [Google Scholar] [CrossRef]
  22. Li, J.; Zhu, Q. A genetic taboo search algorithm for point-feature label placement considering the constrain of network. Bull. Surv. Map. 2019, 2, 80–85. [Google Scholar] [CrossRef]
  23. Zhou, X.; Sun, Z.; Wu, C.; Ding, Y. Automatic Label Placement of Point Feature: Using Ant Colony Algorithm Based on Group Clustering. J. Geo-Inf. Sci. 2015, 17, 902–908. [Google Scholar] [CrossRef]
  24. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57. [Google Scholar] [CrossRef]
  25. Pandiri, V.; Singh, A. An artificial bee colony algorithm with variable degree of perturbation for the generalized covering traveling salesman problem. Appl. Soft Comput. 2019, 78, 481–495. [Google Scholar] [CrossRef]
  26. Khan, I.; Maiti, M.K. A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 428–438. [Google Scholar] [CrossRef]
  27. Chen, R.; Yang, B.; Li, S.; Wang, S. A self-learning genetic algorithm based on reinforcement learning for flexible job-shop scheduling problem. Comput. Ind. Eng. 2020, 149, 106778. [Google Scholar] [CrossRef]
  28. Marín, A.; Pelegrín, M. Towards unambiguous map labeling—Integer programming approach and heuristic algorithm. Expert Syst. Appl. 2018, 98, 221–241. [Google Scholar] [CrossRef]
  29. Strijk, T.; van Kreveld, M. Practical extensions of point labeling in the slider model. Geoinformatica 2002, 6, 181–197. [Google Scholar] [CrossRef]
  30. Van Dijk, S.; Van Kreveld, M.; Strijk, T.; Wolff, A. Towards an evaluation of quality for names placement methods. Int. J. Geogr. Inf. Sci. 2010, 16, 641–661. [Google Scholar] [CrossRef]
  31. Rylov, M.A.; Reimer, A.W. Improving label placement quality by considering basemap detail with a raster-based approach. GeoInformatica 2014, 19, 463–486. [Google Scholar] [CrossRef]
  32. Cao, W.; Xu, J.; Peng, F.; Tong, X.; Wang, X.; Zhao, S.; Liu, W. A point-feature label placement algorithm based on spatial data mining. Math. Biosci. Eng. 2023, 20, 12169–12193. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Multi-level and multi-orientation label candidate model (The numbers 0–7 represent label positions).
Figure 2. Schematic diagram of the shift neighborhood transformation operator (The value on the box is the index of the point, the value in the box is the label position).
Figure 3. Schematic diagram of the conflict-shift neighborhood transformation operator (The value on the box is the index of the point, the value in the box is the label position).
Figure 4. Schematic diagram of label similarity calculation (The numbers 0–7 represent label positions).
Figure 5. The overall flowchart of the hybrid discrete ABC algorithm based on label similarity.
Figure 6. Ranking of each algorithm: (a) Ranking of each algorithm based on label number; (b) Ranking of each algorithm based on label quality evaluation function.
Figure 7. Comparison of solution generation between ABC and HDABC.
Figure 8. Comparison between the greedy acceptance strategy and the Metropolis acceptance strategy.
Table 1. Parameter setting.

Algorithm | Parameter | Parameter Definition | Value
HDABC | N | Number of employed bees and onlooker bees | 50
HDABC | L | Scout bee activation threshold | 500
HDABC | T0 | Initial temperature | 1
HDABC | α | Cooling speed | 0.95
HDABC | Tmin | Minimum temperature | 0.01
HDABC | p | Candidate table size | 4
HDABC | pb | Detection probability | 0.5
HDABC | SAmax | Annealing length | 20
GA | NP | Population size | 100
GA | pm | Elite probability | 0.1
GA | pe | Crossover probability | 0.8
GA | pv | Mutation probability | 0.1
SA | T0 | Initial temperature | 40,000
SA | α | Cooling speed | 0.95
SA | SAmax | Annealing length | 4000
SA | Tc | Termination temperature | 0.01
SA | mv | Label transformation probability | 0.001
SA | cn | Conflict count | /
SA | CL | Candidate table size | 2 + 0.2 × n
TS | TL | Tabu list size | 5 + 0.2 × cn
DDEGA | pDE | Weight of DE | 0.7
DDEGA | pGA | Weight of GA | 0.3
DDEGA | F | DE mutation probability | 0.5
DDEGA | Cr | DE crossover probability | 0.8
DDEGA | CGA | GA mutation probability | 0.1
DDEGA | NP | Population size | 100
Table 2. Comparison of the number of labels for HDABC and other algorithms under 5–20% label densities.

Instance | Algorithm | ρ = 5% (Best/Average) | ρ = 10% (Best/Average) | ρ = 15% (Best/Average) | ρ = 20% (Best/Average)
I1 | HDABC | 88/87 | 84/83 | 79/78 | 77/77
I1 | GA | 88/87 | 84/82 | 78/77 | 77/76
I1 | SA | 88/87 | 84/83 | 79/78 | 77/76
I1 | TS | 88/87 | 84/83 | 79/78 | 77/76
I1 | DDEGA | 88/86 | 83/82 | 79/77 | 77/76
I2 | HDABC | 213/211 | 189/186 | 172/170 | 157/154
I2 | GA | 210/208 | 185/180 | 168/163 | 153/147
I2 | SA | 212/210 | 190/185 | 172/169 | 157/153
I2 | TS | 212/210 | 187/185 | 169/166 | 153/150
I2 | DDEGA | 206/205 | 182/177 | 163/160 | 149/145
I3 | HDABC | 438/435 | 398/395 | 366/363 | 347/341
I3 | GA | 433/429 | 382/387 | 356/352 | 333/327
I3 | SA | 437/434 | 397/393 | 366/363 | 344/341
I3 | TS | 436/434 | 397/392 | 366/361 | 341/338
I3 | DDEGA | 425/422 | 377/381 | 351/344 | 323/317
Table 3. Comparison of the number of labels for HDABC and other algorithms under 25–40% label densities.

Instance | Algorithm | ρ = 25% (Best/Average) | ρ = 30% (Best/Average) | ρ = 35% (Best/Average) | ρ = 40% (Best/Average)
I1 | HDABC | 77/75 | 75/74 | 71/69 | 68/66
I1 | GA | 76/74 | 74/71 | 69/67 | 66/63
I1 | SA | 77/75 | 74/73 | 70/68 | 67/65
I1 | TS | 76/74 | 74/72 | 70/68 | 67/65
I1 | DDEGA | 73/71 | 69/72 | 68/65 | 64/62
I2 | HDABC | 147/144 | 138/134 | 128/125 | 123/119
I2 | GA | 141/137 | 134/127 | 121/118 | 115/110
I2 | SA | 147/144 | 139/134 | 129/125 | 120/117
I2 | TS | 144/141 | 134/130 | 123/120 | 118/114
I2 | DDEGA | 138/134 | 127/124 | 120/116 | 115/109
I3 | HDABC | 322/317 | 301/294 | 283/276 | 263/259
I3 | GA | 306/300 | 282/278 | 267/260 | 248/242
I3 | SA | 320/316 | 297/293 | 279/275 | 267/261
I3 | TS | 316/313 | 296/290 | 277/272 | 261/256
I3 | DDEGA | 297/291 | 279/272 | 261/253 | 245/238
Table 4. Comparison of the label quality evaluation functions for HDABC and other algorithms under 5–20% label densities.
Table 5. Comparison of the label quality evaluation functions for HDABC and other algorithms under 25–40% label densities.

Instance | Algorithm | ρ = 25% (Best/Average) | ρ = 30% (Best/Average) | ρ = 35% (Best/Average) | ρ = 40% (Best/Average)
I1 | HDABC | 153.6/157.6 | 163.2/169.1 | 187.1/190.4 | 201.4/206.7
I1 | GA | 154.4/161.4 | 171.1/177.1 | 195.4/200.1 | 204.7/213.9
I1 | SA | 152.1/158.7 | 167.0/172.6 | 188.6/194.7 | 199.5/205.8
I1 | TS | 161.0/167.3 | 173.2/180.2 | 192.7/196.3 | 205.7/209.1
I1 | DDEGA | 161.7/169.6 | 175.7/181.6 | 193.6/203.4 | 214.2/217.7
I2 | HDABC | 246.6/249.3 | 266.2/268.1 | 280.8/285.1 | 294.8/297.7
I2 | GA | 256.0/260.3 | 268.7/278.7 | 290.5/296.5 | 304.8/310.4
I2 | SA | 245.3/250.3 | 264.4/268.1 | 278.4/285.1 | 296.1/297.6
I2 | TS | 250.3/255.6 | 272.7/276.8 | 289.9/293.9 | 300.9/305.0
I2 | DDEGA | 258.1/263.0 | 276.8/281.7 | 293.0/298.3 | 307.0/311.8
I3 | HDABC | 217.5/221.5 | 240.3/242.1 | 256.5/259.7 | 269.9/273
I3 | GA | 231.0/235.6 | 253.6/257.2 | 269.1/273.3 | 283.2/289.3
I3 | SA | 219.4/221.9 | 241.8/244.3 | 257.1/260.3 | 269.1/272.6
I3 | TS | 222.5/225.1 | 244.5/247.1 | 260.2/263.7 | 274.6/278.2
I3 | DDEGA | 232.3/237.5 | 255.2/259.4 | 271.1/275.1 | 286.8/288.9
Table 6. Comparison of selection operation based on label similarity and roulette wheel selection.

Instance | ρ | SOLS Best | SOLS Average | SOLS Time | Roulette Best | Roulette Average | Roulette Time
I1 | 5% | 84.5 | 85.9 | 25 | 84.5 | 86.1 | 27
I1 | 10% | 108.6 | 109.8 | 29 | 108.6 | 111.5 | 27
I1 | 15% | 132.9 | 133.9 | 26 | 132.9 | 135.8 | 25
I1 | 20% | 141.5 | 143.6 | 28 | 141.5 | 143 | 29
I1 | 25% | 153.6 | 157.6 | 28 | 153.2 | 156.7 | 30
I1 | 30% | 163.2 | 169.1 | 31 | 166.6 | 170.6 | 33
I1 | 35% | 187.1 | 190.4 | 31 | 186.2 | 189.1 | 33
I1 | 40% | 201.4 | 206.7 | 27 | 203.3 | 205.5 | 30
I2 | 5% | 116.8 | 119.6 | 73 | 116.5 | 118.7 | 158
I2 | 10% | 166.5 | 169.1 | 71 | 167.7 | 170.5 | 133
I2 | 15% | 199.2 | 201.2 | 60 | 200.8 | 202.2 | 130
I2 | 20% | 228.1 | 231.7 | 75 | 230.7 | 232.8 | 124
I2 | 25% | 246.6 | 249.3 | 68 | 247 | 249.7 | 107
I2 | 30% | 266.2 | 268.1 | 56 | 264.9 | 267.7 | 115
I2 | 35% | 280.8 | 285.1 | 62 | 282.1 | 284.6 | 115
I2 | 40% | 294.8 | 297.7 | 58 | 294.9 | 297.9 | 111
I3 | 5% | 96.0 | 98.2 | 183 | 97 | 98.4 | 413
I3 | 10% | 139.3 | 141.1 | 200 | 139.2 | 142.3 | 370
I3 | 15% | 170 | 172 | 173 | 170.5 | 172.2 | 358
I3 | 20% | 196.6 | 198.4 | 222 | 195.5 | 198.6 | 351
I3 | 25% | 217.5 | 221.5 | 166 | 220.9 | 223 | 313
I3 | 30% | 240.3 | 242.1 | 204 | 243.4 | 244.8 | 299
I3 | 35% | 256.5 | 259.7 | 188 | 259.1 | 261.3 | 300
I3 | 40% | 269.9 | 273 | 172 | 271.2 | 274.5 | 287
