Article

A Novel Robot Visual Homing Method Based on SIFT Features

College of Automation, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2015, 15(10), 26063-26084; https://doi.org/10.3390/s151026063
Submission received: 2 July 2015 / Revised: 30 September 2015 / Accepted: 9 October 2015 / Published: 14 October 2015
(This article belongs to the Section Physical Sensors)

Abstract

Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method.

1. Introduction

Local navigation methods based on visual information (known as local visual homing) have attracted much attention in the mobile robot field [1,2,3]. Inspired by biological navigation, local visual homing provides the ability to guide the robot back to a goal position by relying on visual information alone [4,5,6]. It calculates a homing direction by comparing the differences between the current image and the goal image, and by moving in that direction the robot travels from the current position to the goal position [7,8,9,10]. Note that neither the current image nor the goal image directly provides the depth information of the environment. By combining with an environmental topological map, the local visual homing algorithm can divide a complex large-scale navigation problem into a series of local navigation problems, which are easier to solve [11,12]. In the topological framework, the algorithm is used to solve navigation problems between adjacent connected nodes. Most previous research on visual homing was carried out under the assumption that environments are static. However, changes of the environment (objects, illumination) often occur in a real scene. Despite recent advances, robustness against such changes remains largely unresolved [12,13]. So far, most studies of visual homing have been limited to indoor environments. If this problem can be resolved effectively, it will become easier to apply visual homing in more challenging outdoor environments [14,15].
Most local visual homing algorithms can be divided into the following four categories [16,17,18]. The first class is correspondence-based homing methods. They use feature extraction and matching to set up a set of correspondence vectors between the current image and the goal image. These vectors can be transformed into movement directions, and an overall homing vector can be obtained by combining these movement directions [19,20,21,22]. The second class is the DID (descent in image distances) methods. In these methods, homing is achieved by calculating the gradient descent direction of the image distance between the current image and the goal image [14,15,23,24]. The third class is based on the ALV (average landmark vector) model. The ALV is a unit representation vector of a certain position, and the homing direction can be acquired by subtracting the ALVs of the current position and the goal position [25,26,27]. The last class is the warping method, which is robust despite its large amount of calculation. This method supposes that the robot performs a virtual movement at the goal position according to three motion parameters, which describe the direction, distance and rotation of the robot movement from the goal position to its current position. The goal image is distorted (warped) according to the motion parameters, after which it is compared to the current image. The optimal parameter set is the one for which the differences between the two images are minimal, and the homing vector is derived from this optimal parameter set [13,28,29,30].
In the above four classes of visual homing methods, DID and ALV fail to reach the performance of the warping method [13,20]. Although the homing performance of some correspondence-based methods is superior, most of them need an external compass to adjust the current image and goal image to the same horizontal orientation in order to compute the homing vector [30]. However, warping does not need an external compass to align the orientations between the two images, as the change of orientation is one of the search parameters in warping. In conclusion, warping is an attractive visual homing method. Despite the above advantages, some problems of warping still remain to be solved. Firstly, the homing performance of warping is influenced greatly by the change of environment (objects, illumination), as it determines the homing vector according to the differences of the gray value between corresponding pixels of the horizon region in the current image and the goal image. Secondly, it is difficult to balance the homing accuracy and the amount of calculation. The homing accuracy of the warping method partly depends on the search interval of the parameter space. The smaller the search interval is, the higher the homing accuracy is. However, at the same time, the computation will also increase significantly. Although Möller et al. have improved the warping method, extending it to operate directly on two-dimensional images [30] and relaxing the equal distance assumption [13], there is no effective solution to the above two problems.
Motivated by the above problems, we propose a novel visual homing algorithm by combining SIFT (scale-invariant feature transform) features with the warping method. Compared with the warping method, the proposed homing algorithm uses SIFT features as landmarks instead of the pixels of the horizon region used in warping, and its advantages can be stated as follows: (1) by exploiting the good stability of SIFT features under translation, rotation, scaling, occlusion and illumination changes [31], the novel algorithm is more robust to the influence caused by changes of the environment and has better homing accuracy; (2) the proposed homing algorithm resolves the conflict between homing accuracy and the amount of calculation, because it obtains the homing vector by solving a system of ternary equations in place of the exhaustive search in the parameter space. In addition, unlike most visual homing algorithms, which need to unwarp the initial panoramic images, the proposed homing algorithm operates directly on the initial catadioptric panoramic images. As a result, the additional amount of calculation is reduced. Our homing algorithm computes the homing direction from the change in the angles of landmarks between the current image and the goal image, so the matching accuracy of landmarks is crucial. Because of the complex imaging relations of catadioptric panoramic images, most existing mismatching elimination algorithms cannot operate directly on the initial panoramic images [32,33,34]. For these reasons, in order to further improve the matching accuracy of landmarks, a novel mismatching elimination algorithm is proposed on the basis of the distribution characteristics of landmarks in the catadioptric panoramic image.
The remainder of the paper is organized as follows: Section 2 describes the design of the proposed homing algorithm. The selection of landmarks and the design of the proposed mismatching elimination algorithm are introduced in Section 3. Section 3 also shows the overview of the proposed homing method. The results and analysis of the experiments are mainly presented in Section 4. Section 5 draws conclusions and points out the focus of future research.

2. Homing Algorithm

For the derivation of the warping method, refer to [28]. Similar to the warping method, our homing algorithm is also based on the equal distance assumption, which assumes that all of the surrounding landmarks have identical distances from the goal position. Although this assumption is usually violated, Franz et al. have proven that the error due to this assumption decreases when the robot approaches the goal [28].
The geometric relations of the homing algorithm are shown in Figure 1. The goal position and the current position of the robot are indicated by H and C, respectively. L is a landmark in the scene. The distance from L to H is denoted by r. The initial orientation of the robot at position H is shown by OH. The robot moves away from H in direction α relative to its initial orientation OH and covers a distance d to C. After the movement, the initial orientation changes by ψ. The new orientation of the robot is shown by OC. The dashed arrow at position C indicates the old orientation. To facilitate the derivation, we assume that the landmark L corresponds to some feature in the real scene. The angle between landmark L and orientation OH is denoted by θ. As the robot moves, the angle θ changes to θ′, the angle between landmark L and the new orientation OC. Applying the law of sines to the triangle LHC, we obtain the following equation:
sin(ψ + θ′ − θ)/d = sin(π − [ψ + θ′ − α])/r    (1)
Figure 1. Derivation of the proposed homing algorithm.
Figure 1. Derivation of the proposed homing algorithm.
Sensors 15 26063 g001
Equation (1) can be rearranged as follows:
sin(ψ + θ′ − θ) = ρ sin(ψ + θ′ − α)    (2)
where ρ = d/r. There are three unknown parameters (ρ, ψ, α) in Equation (2). ψ and α are invariable for all of the landmarks in a certain scene. Suppose there are three landmarks L1, L2 and L3 in the scene. The angles between these landmarks and the orientation of the robot are denoted by θ1, θ2 and θ3 in the goal image. In the current image, the corresponding angles are denoted by θ1′, θ2′ and θ3′. Substituting (θ1, θ1′), (θ2, θ2′) and (θ3, θ3′) into Equation (2), we obtain the following equations:
sin(θ1′ − θ1 + ψ) = ρ1 sin(θ1′ − α + ψ)
sin(θ2′ − θ2 + ψ) = ρ2 sin(θ2′ − α + ψ)
sin(θ3′ − θ3 + ψ) = ρ3 sin(θ3′ − α + ψ)    (3)
Before Equation (3) can be applied for homing, two problems have to be explained:
1. The distance d is identical for all of the landmarks in the scene when the robot arrives at current position C from goal position H. According to the equal distance assumption, the distance r is also identical for all of the landmarks. Based on these two conditions, the relation ρ1 = ρ2 = ρ3 holds.
2. The angles θ1, θ2, θ3 and θ1′, θ2′, θ3′ can be worked out if the positions of landmarks L1, L2 and L3 in the panoramic image are known.
By solving Equations (3), we can get a parameter set (ρ, ψ, α). The homing direction β relative to the new orientation of the robot can be computed as follows:
β = π + α − ψ    (4)
It can be seen from Equations (3) and (4) that the homing direction can be determined by only three landmarks. As the number of landmarks is usually more than three in the real scene, a number of parameter sets, such as (ρ1, ψ1, α1), (ρ2, ψ2, α2), …, (ρn, ψn, αn), can be obtained by solving Equation (3). In order to get the optimal parameter α̂, we first give the definition of the sum of squares of deviations:
αSSD = ∑i=1…n diff(αi − ᾱ)²    (5)
where n is the number of parameter sets, ᾱ ranges from −π to π, and diff(αi − ᾱ) yields the difference between two angles, which is defined as:
diff(αi − ᾱ) = 2π − |αi − ᾱ|, if |αi − ᾱ| ≥ π
diff(αi − ᾱ) = |αi − ᾱ|, if |αi − ᾱ| < π    (6)
According to Equation (5), we acquire α̂ = ᾱSSDmin, where ᾱSSDmin denotes the value of ᾱ at which αSSD is minimal. The calculation of ψ̂ is the same as that of α̂. The final homing direction β̂ can be determined as follows:
β̂ = π + α̂ − ψ̂    (7)
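To make the derivation above concrete, the following minimal Python sketch (an illustration only; it assumes NumPy and SciPy's fsolve, and every function name is ours rather than the paper's) solves the system of Equation (3) for individual landmark triples and then extracts α̂ and ψ̂ by the grid search of Equations (5) and (6) before forming β̂ with Equation (7):

    import numpy as np
    from scipy.optimize import fsolve

    def ang_diff(a, b):
        """Wrapped angle difference in the sense of Equation (6)."""
        d = abs(a - b) % (2 * np.pi)
        return 2 * np.pi - d if d >= np.pi else d

    def solve_triple(thetas, thetas_p, guess=(0.5, 0.0, 0.0)):
        """Solve Equation (3) for one landmark triple; returns (rho, psi, alpha)."""
        def residuals(x):
            rho, psi, alpha = x
            return [np.sin(tp - t + psi) - rho * np.sin(tp - alpha + psi)
                    for t, tp in zip(thetas, thetas_p)]
        return fsolve(residuals, guess)

    def optimal_angle(samples, grid=np.linspace(-np.pi, np.pi, 721)):
        """Equation (5): the grid angle minimizing the sum of squared differences."""
        ssd = [sum(ang_diff(a, g) ** 2 for a in samples) for g in grid]
        return grid[int(np.argmin(ssd))]

    def homing_direction(triples):
        """triples: list of ((theta1, theta2, theta3), (theta1', theta2', theta3'))."""
        sols = [solve_triple(t, tp) for t, tp in triples]
        alpha_hat = optimal_angle([s[2] for s in sols])
        psi_hat = optimal_angle([s[1] for s in sols])
        return np.pi + alpha_hat - psi_hat   # Equation (7)

How the landmark triples are grouped from the full matching set is not prescribed here; any grouping that yields one parameter set per triple fits the scheme above.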

3. Landmark Optimization and Overview of the Proposed Method

For the purpose of guiding the robot to return to the goal position, all of the visual homing algorithms need to acquire accurate and reliable information from the image. It can be seen from Section 2 that the calculation precision of the proposed homing algorithm mainly depends on the landmarks extracted from the image, so both the selection of reliable landmarks and the matching accuracy are crucial to the homing performance.

3.1. Landmark Selection

In this section, we will make the qualitative analysis of the landmark selection. Gillner et al. [35] suggested the principle of landmark selection, which includes the following three points: (1) uniqueness: landmarks in the scene must be unique and can be identified clearly; (2) reliability: the reliability of landmarks refers to the stable visibility, which means that the landmarks should be always detected every time the robot travels to the same position; and (3) relevance: the relevance of landmarks is defined as their importance in determining the homing direction.
According to the above three principles, we choose SIFT features in the image as landmarks. Firstly, as SIFT features are highly distinctive and can be correctly matched with high probability against a large database of features [31], they meet the uniqueness condition. Secondly, as SIFT features are invariant to image translation, rotation and scale and are stable with respect to changes in illumination [31], the reliability condition is satisfied. Finally, as the angles between the SIFT features and the orientations of the robot can be computed from the locations of the SIFT features, the homing direction can be obtained by utilizing Equations (3)–(7), so the relevance condition is satisfied.

3.2. Mismatching Elimination

In order to obtain a larger field of view and a faster processing speed, the proposed homing algorithm directly extracts SIFT features from the initial catadioptric panoramic images as landmarks. Although the SIFT features have good performance, there still exist some mismatching points in the real scene. According to Section 2, the mismatching of the landmarks can generate the wrong corresponding angle pair (θ, θ′), which will affect the homing precision and even cause the failure of homing. To solve the above problems, we present two constraints according to the distribution characteristics of landmarks in the initial panoramic images and propose a novel mismatching elimination algorithm based on these constraints.

3.2.1. Two Distribution Constraints

The panoramic images used in this paper are all generated by the catadioptric panoramic imaging system based on the hyperbolic mirror. Before presenting the mismatching elimination algorithm, we firstly introduce two distribution constraints of the landmarks as follows:
Distribution Constraint 1: As shown in Figure 2a, in the catadioptric panoramic imaging system based on the hyperbolic mirror, a curved mirror is used to form the image of the surroundings. Under the curved mirror is located the camera, whose optic axis points to the mirror. The curved mirror works together with the camera to collect the panoramic image. Ideally, in the initial panoramic image, there exists a circle whose center is exactly at the center of the image. Landmarks located at the same horizontal level as the focus F of the curved mirror are always projected onto that circle. As shown in Figure 2b, we call this circle the horizon circle in this paper. According to the characteristics of the catadioptric panoramic imaging system, and supposing the robot moves on a plane, the landmarks in the scene can be divided into three categories: (1) landmarks located higher than focus F, whose imaging radius is always outside the horizon circle, shown as R1(R1′); (2) landmarks located lower than focus F, whose imaging radius is always within the horizon circle, shown as R2(R2′); and (3) landmarks located at the same horizontal level as focus F, whose imaging radius, as R3(R3′) shows, always lies on the horizon circle as the robot travels. In this paper, we employ γIO(L) to quantify the position relationship between a landmark L and the horizon circle, which is defined as follows:
γIO(L) = 1, if rL < rH; 0, if rL = rH; −1, if rL > rH    (8)
where rL is the imaging radius of the landmark and rH is the radius of the horizon circle.
Figure 2. Distribution Constraint 1. (a) The formation of the horizon circle; (b) the distribution constraint of the landmarks based on the horizon circle.
According to the above principle, suppose LT is a landmark to be tested in the goal image, and L̃T denotes its matching landmark in the current image. If the pair of landmarks (LT, L̃T) matches correctly, the relation γIO(LT) = γIO(L̃T) holds. In other words, if γIO(LT) ≠ γIO(L̃T), then (LT, L̃T) is a pair of mismatching landmarks.
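As a small illustration, Constraint 1 reduces to the following check (a Python sketch; the tolerance argument is our own addition to cope with floating-point radii and is not part of Equation (8)):

    import numpy as np

    def gamma_io(landmark_xy, center_xy, r_horizon, tol=1e-6):
        """Equation (8): sign of a landmark's radius relative to the horizon circle."""
        r_l = np.hypot(landmark_xy[0] - center_xy[0], landmark_xy[1] - center_xy[1])
        if abs(r_l - r_horizon) <= tol:
            return 0                     # on the horizon circle
        return 1 if r_l < r_horizon else -1

    def violates_constraint_1(goal_pt, cur_pt, center_xy, r_horizon):
        """A matched pair is discarded when its gamma_IO values differ."""
        return gamma_io(goal_pt, center_xy, r_horizon) != gamma_io(cur_pt, center_xy, r_horizon)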
Distribution Constraint 2: As shown in Figure 3, the left picture shows the goal image, and the right picture shows the current image. O is the center of the image. L1, L2 and L3 are three landmarks in the goal image. L̃1, L̃2 and L̃3 are their matching landmarks in the current image, respectively. (L1, L̃1) is a pair of landmarks to be tested. (L2, L̃2) and (L3, L̃3) are used as reference landmark pairs. The two directed lines radiate from the center O and point to L1 and L̃1, based on which both the goal image and the current image can be divided into a left half plane and a right half plane. As the robot moves from the goal position to the current position, the reference landmark pairs (L2, L̃2) and (L3, L̃3) will each appear in the same half plane of the two images on the condition that (L1, L̃1) is matched correctly. Conversely, if (L1, L̃1) is a pair of mismatching landmarks, (L2, L̃2) and (L3, L̃3) may appear at random positions in the two images.
Figure 3. Distribution Constraint 2.
Suppose the coordinates of O, L1 and L2 are (xO, yO), (xT, yT) and (xR, yR), respectively. The equation of the directed line OL1 is Ax + By + C = 0, where A = yT − yO, B = xO − xT and C = xTyO − xOyT. The positional relation of the reference landmark L2 can be determined as follows:
DR = AxR + ByR + C    (9)
In Equation (9): (1) if DR < 0, L2 is located in the left half plane; (2) if DR > 0, L2 appears in the right half plane; (3) if DR = 0, L2 is on the line OL1. We employ γLR(L) to quantify the positional relation of a reference landmark, which is defined as:
γLR(L) = −1, if DR < 0; 0, if DR = 0; 1, if DR > 0    (10)
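In code, the half-plane test of Equations (9) and (10) can be sketched as follows (illustrative Python; every point argument is an (x, y) tuple in image coordinates):

    def gamma_lr(o, l_t, l_r):
        """Equation (10): half plane of reference landmark l_r w.r.t. the line from o to l_t."""
        a = l_t[1] - o[1]                   # A = yT - yO
        b = o[0] - l_t[0]                   # B = xO - xT
        c = l_t[0] * o[1] - o[0] * l_t[1]   # C = xT*yO - xO*yT
        d_r = a * l_r[0] + b * l_r[1] + c   # Equation (9)
        if d_r < 0:
            return -1                       # left half plane
        return 1 if d_r > 0 else 0          # right half plane, or on the line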

3.2.2. Mismatching Elimination Algorithm

Suppose S1 is the initial set of matching landmarks, which are extracted from the goal image IH and the current image IC. The matching landmark pair (LT, L̃T) ∈ S1 is the one to be tested, where LT and L̃T are landmarks extracted from IH and IC, respectively. The proposed mismatching elimination algorithm mainly includes two phases, which can be presented as follows:
Figure 4. The performance of the proposed mismatching elimination algorithm. (a) The matching of landmarks before mismatching elimination; (b) the matching of landmarks after mismatching elimination; (c,d) the distribution of homing angles computed by the proposed homing algorithm before and after mismatching elimination.
The first phase: According to Constraint 1, we firstly calculate γIO(LT) and γIO(L̃T) for each pair (LT, L̃T) ∈ S1. If γIO(LT) ≠ γIO(L̃T), the corresponding landmark pair is discarded from set S1; if γIO(LT) = γIO(L̃T), the pair is retained. After this preliminary filtration of set S1, the remaining matching landmark pairs make up set S2.
The second phase: For each matching landmark pair (LT, L̃T) ∈ S2, we firstly calculate the distance between LT and the other landmarks in IH and choose the nR landmarks that are nearest to LT as reference landmarks to constitute set SRH, then find their matching landmarks in IC to constitute set SRC. The correctness of each matching landmark pair to be examined can be evaluated by the reference landmark pairs in sets SRH and SRC, and the result is registered in a vote count Vm, whose initial value is zero. According to Constraint 2, we separately calculate γLR(LR) and γLR(L̃R) for each reference landmark LR ∈ SRH and its matching landmark L̃R ∈ SRC. If γLR(LR) = γLR(L̃R), the current reference landmark pair supports the correctness of the matching pair to be examined, and the value of Vm increases by one; if γLR(LR) ≠ γLR(L̃R), the current reference landmark pair does not support the matching pair, and the value of Vm does not change. Suppose the threshold is VTH; a matching pair m = (LT, L̃T) ∈ S2 can then be judged directly by the value of its Vm. If Vm ≥ VTH, m is a correct matching pair and is retained; if Vm < VTH, m is regarded as a mismatching pair and is removed from set S2. The remaining matching pairs are used as the final landmarks and constitute set S3.
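Putting the two phases together, the elimination procedure might be sketched as below (illustrative Python that reuses the gamma_io and gamma_lr helpers sketched earlier; the pair representation and parameter names are our assumptions, with the defaults of nR = 5 and VTH = 4 taken from the experimental settings in Section 4.2):

    def eliminate_mismatches(pairs, center, r_horizon, n_r=5, v_th=4):
        """pairs: list of ((x, y) in the goal image, (x, y) in the current image)."""
        # Phase 1 (Constraint 1): keep pairs lying on the same side of the horizon circle.
        s2 = [(g, c) for g, c in pairs
              if gamma_io(g, center, r_horizon) == gamma_io(c, center, r_horizon)]
        s3 = []
        for g, c in s2:
            # Phase 2 (Constraint 2): the n_r nearest neighbours of g act as references.
            refs = sorted((p for p in s2 if p[0] != g),
                          key=lambda p: (p[0][0] - g[0]) ** 2 + (p[0][1] - g[1]) ** 2)[:n_r]
            votes = sum(1 for rg, rc in refs
                        if gamma_lr(center, g, rg) == gamma_lr(center, c, rc))
            if votes >= v_th:            # keep the pair only if enough references agree
                s3.append((g, c))
        return s3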
In order to evaluate the performance of the proposed mismatching elimination algorithm, the panoramic image databases provided by Bielefeld University [19] are adopted. The image databases will also be used in the subsequent homing experiments, and the details will be introduced in Section 4.1. Figure 4 shows the performance of the proposed mismatching elimination algorithm. In Figure 4a, the red lines indicate the obvious mismatching landmarks. It can be seen from Figure 4b that these mismatching pairs are eliminated effectively by the proposed algorithm. In Figure 4c,d, green lines denote the ideal homing angle. After eliminating the mismatching landmarks, the distribution of the homing angles calculated by the proposed homing algorithm is much closer to the ideal homing angle, as shown in Figure 4c,d. Experimental results show that the proposed algorithm in this section can effectively eliminate the mismatching landmark pairs and further improve the computational accuracy of the proposed homing algorithm.

3.3. Overview of the Proposed Visual Homing Method

As shown in Figure 5, the procedure of our homing method mainly includes three steps as follows:
Figure 5. Flow diagram of the proposed homing method.
(1) Landmark extraction: We extract SIFT features from the goal image IH and the current image IC, then match the features to form the initial landmark set S1.
(2) Landmark optimization: The primary purpose of this step is to eliminate the mismatching landmark pairs in set S1. The proposed algorithm solves this problem in a coarse-to-fine hierarchical way. First of all, according to landmark distribution Constraint 1, the matching landmark pairs in set S1 are filtered preliminarily by making full use of the distribution characteristic between the landmarks and the horizon circle. The remaining pairs in set S1 form set S2. After that, with the help of landmark distribution Constraint 2, the mismatching pairs in set S2 are further removed on the basis of the relative distribution relations of the landmarks in the initial panoramic image. The final landmark set S3 consists of the remaining landmark pairs in set S2.
(3) Homing angle calculation: According to Section 2, we can get an angle pair (θ, θ′) for each pair (L, L̃) ∈ S3. The final homing direction β̂ can be worked out based on Equations (3) to (7). An end-to-end sketch of these three steps is given below.
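For orientation, the three steps can be strung together roughly as follows. This is a sketch under several assumptions: OpenCV's SIFT detector and brute-force matcher stand in for the feature stage, the landmark angles θ and θ′ are read as bearings about the image center, and triples are formed by a simple sliding window, which the paper does not prescribe; eliminate_mismatches and homing_direction refer to the earlier sketches.

    import cv2
    import numpy as np

    def homing_step(goal_img, cur_img, center, r_horizon):
        sift = cv2.SIFT_create()
        kg, dg = sift.detectAndCompute(goal_img, None)    # landmarks in I_H
        kc, dc = sift.detectAndCompute(cur_img, None)     # landmarks in I_C
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(dg, dc)
        s1 = [(kg[m.queryIdx].pt, kc[m.trainIdx].pt) for m in matches]    # set S1
        s3 = eliminate_mismatches(s1, center, r_horizon)                  # set S3
        bearing = lambda p: np.arctan2(p[1] - center[1], p[0] - center[0])
        triples = [((bearing(s3[i][0]), bearing(s3[i + 1][0]), bearing(s3[i + 2][0])),
                    (bearing(s3[i][1]), bearing(s3[i + 1][1]), bearing(s3[i + 2][1])))
                   for i in range(len(s3) - 2)]
        return homing_direction(triples)                                  # beta_hat, Equation (7)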

4. Experiments

4.1. Image Databases and Robot Platform

The panoramic image databases used in this paper were collected by the Computer Department of Bielefeld University. These databases have been widely used to test robot visual homing algorithms. The panoramic images in the databases were all collected by a catadioptric imaging system in different scenes. The capture grid of the image databases measured 2.7 × 4.8 m, and images were collected at uniformly spaced positions 0.3 m apart. The resolution of these images is 752 × 564. The images acquired in the three scenes original, arboreal and day were used for the experiments. Three panoramic sample images are shown in Figure 6a–c, and the corresponding scene conditions are as follows: (a) original means a common room with doors and windows shut and its overhead fluorescent light bars on; (b) arboreal represents the same room with a tall plant added in the capture grid; and (c) day means images collected with the curtains open in full daylight. The databases and their further details can be accessed through [36].
Figure 6. Panoramic sample images and robot platform. (a–c) The samples of three image databases: original, arboreal and day; (d) the robot platform for experiments in the real scene.
In this paper, a tracked mobile robot was used for the experiments in the real scene, as shown in Figure 6d. The panoramic imaging system mounted on the top of the robot was composed of a hyperbolic mirror and a camera with a resolution of 1024 × 768. Although there were two sets of panoramic imaging equipment on the robot, only the lower one was used in the experiments. Image processing and robot movement were controlled by an onboard computer (Pentium (R), 2 GHz).

4.2. Parameter Settings for Experiments

The parameter settings in the experiments are shown in Table 1. In this paper, we employ the SIFT features of the image as landmarks. On the premise of accurate results, we modified several parameters recommended by Lowe in [31] to get more SIFT features. Firstly, the number of scale layers S where SIFT features are extracted is increased from 3 to 5, which can both increase the total number of extracted landmarks and maintain a reasonable time of execution. Secondly, the response threshold of extreme points TDOG in the difference of Gaussian images is decreased from 0.08/S to 0.04/S in order to get more features from areas of low contrast, as indoor environments often contain such areas.
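If an OpenCV SIFT implementation were used, these two modifications would correspond roughly to the following settings (an assumed mapping of S and TDOG onto OpenCV's parameters; the paper does not state which implementation was used):

    import cv2

    S = 5                                               # scale layers per octave
    sift = cv2.SIFT_create(nOctaveLayers=S,             # raised from the default of 3
                           contrastThreshold=0.04 / S)  # plays the role of T_DOG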
Table 1. Parameters for the experiments.
Parameter   Value                Parameter   Value
S           5                    αW          [0, 355]/72/5
TDOG        0.04/S               nR          5
ρW          [0, 0.95]/20/0.05    VTH         4
ψW          [0, 355]/72/5
In Table 1, the parameter format of ρW, ψW and αW is shown as search range/search steps/resolution. In order to let the warping method perform at its best in the experiments, we adopted the parameters recommended by Möller in [30]. The search range of ρW is [0, 0.95], with 20 search steps. The settings of ψW and αW are the same: their search range is [0, 355], with 72 search steps. In addition, according to the requirements of the proposed homing algorithm, the number of reference landmark pairs nR and the threshold VTH for eliminating mismatching landmarks were set to 5 and 4, respectively.
We separately took 100 pairs of images from the three image databases randomly. These image pairs were used as the goal image and the current image. Based on the parameter settings in Table 1, we determined the average computation time for two methods (2 GHz Pentium (R), MATLAB R2007b), as shown in Table 2. It can be seen that the average computation time of the warping method for each homing test was about 40% more than that of our method. We consider the settings in Table 1 reasonable enough for the comparison of homing performance.
Table 2. Average computation time for the two methods.
Method      Average Computation Time (s)
            original   arboreal   day
Warping     21.051     18.643     19.541
Proposed    14.559     12.638     13.854

4.3. Performance Metrics

According to the previous presentation, the goal position and the current position are denoted separately by H and C in the experiments. In this paper, three metrics known as angular error (AE), average homeward component (AHC) and return ratio (RR) are adopted to evaluate the performance of the proposed method.
For a pair of H and C, suppose the homing angle computed by the homing algorithm is indicated by βhoming. βideal represents the ideal homing angle, which directly points from C to H. The angular error can be determined as:
AE(H, C) = diff(βhoming − βideal)    (11)
where the function diff() is defined in Equation (6).
The average homeward component, frequently used in homing experiments, is an evaluation criterion that measures both the validity and the angular deviation of the computed homing angles. As long as the value of AHC stays above zero, the robot moves nearer to the goal position; the closer the value of AHC is to 1, the closer the movement direction of the robot is to the ideal homing direction. Based on the angular error, the average homeward component can be defined as follows:
AHC(n) = cos((1/n) ∑i=1…n AE(Hi, Ci))    (12)
where n indicates the number of different pairs (H, C) selected in the experiment scene. In the experiments of AHC, we separately took 100 pairs of images from the image databases randomly according to the distance between C and H, which ranges from 30 to 390 cm in steps of 30 cm.
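For reference, the two metrics amount to the following small Python sketch (ang_diff is the wrapped angle difference of Equation (6) used in the earlier sketch):

    import numpy as np

    def angular_error(beta_homing, beta_ideal):
        """Equation (11): wrapped difference between computed and ideal homing angles."""
        return ang_diff(beta_homing, beta_ideal)

    def ahc(errors):
        """Equation (12): cosine of the mean angular error over the sampled (H, C) pairs."""
        return float(np.cos(np.mean(errors)))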
The last performance metric is the return ratio [18,19], which is defined as the percentage of successful homing trials. The return ratio can be computed by carrying out simulated homing trials on the capture grid of the image databases. A dummy robot is placed at every integer grid position and allowed to move according to the homing vectors, which have been pre-computed with the two methods. The robot moves with a step of rh, which is generally determined by the ratio between the actual step length of the robot and the sampling interval of the images. The value of rh was set to 0.8 in the trials. βh(x,y) denotes the homing angle pre-computed at each position in the capture grid, and the motion direction of the robot is determined by the βh(x,y) whose position is closest to the current position. A trial is considered successful if the robot, starting from C, reaches a place within a given distance threshold of H. The threshold was set to 0.5. The result of a homing trial at position C can be evaluated as follows:
  • Step 1: The robot moves a step according to the corresponding βh(x,y).
  • Step 2: If either of the following two cases happens, jump to Step 4.
    • Case 1: The robot arrives at the goal position H.
    • Case 2: The robot travels a distance longer than half of the perimeter of the capture grid.
  • Step 3: Continue to perform Step 1.
  • Step 4: If Case 1 happens, the homing trial is successful; if Case 1 does not happen and Case 2 happens, the trial has failed.
We define λ(H,C) as a binary evaluation function with a value of 1 for successful homing and 0 for unsuccessful homing. The return ratio is determined as:
RR(H) = (1/n) ∑i=1…n λ(H, Ci)    (13)
where n indicates the number of different current positions selected in the experiment scene.
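The simulated trials behind Equation (13) can be summarized by the following sketch (illustrative Python; positions are assumed to be expressed in grid units, homing_field maps each grid node to its pre-computed βh(x, y), and the default max_dist of 25 grid units stands for half the perimeter of the capture grid):

    import numpy as np

    def return_ratio(goal, starts, homing_field, step=0.8, reach=0.5, max_dist=25.0):
        """Equation (13): fraction of simulated trials that reach the goal area."""
        success = 0
        for start in starts:
            pos, travelled = np.array(start, float), 0.0
            while travelled <= max_dist:
                if np.linalg.norm(pos - goal) <= reach:      # Case 1: goal reached
                    success += 1
                    break
                # move one step along the homing angle of the nearest grid node
                node = min(homing_field, key=lambda g: np.linalg.norm(np.array(g) - pos))
                beta = homing_field[node]
                pos = pos + step * np.array([np.cos(beta), np.sin(beta)])
                travelled += step                            # Case 2 once this exceeds max_dist
        return success / len(starts)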

4.4. Homing Experiments on Image Databases

The image databases introduced in Section 4.1 were used to perform the experiments. The experiment environment of the scene in this section was divided into two classes: (1) static environment: the surroundings of the scene remain static during the experiment; (2) dynamic environment: the objects or illumination in the scene changes during the experiment. According to different experiment environments, three different groups of experiments were conducted: (1) Group 1: the main goal is to test the homing performance of the proposed method under static conditions; both current images and goal images were selected from the database original, and this experiment condition was represented by original-original; (2) Group 2: the main goal is to test the homing performance of the proposed method when the objects in the scene change; the changing of objects was simulated by using cross-database experiments, i.e., the current images were taken from the database original, and the goal images were taken from the database arboreal; the experiment condition was represented by original-arboreal; (3) Group 3: the main goal is to test the homing performance of the proposed method when the illumination of the scene changes; similar to the second group of experiments, the current images were taken from the database original and the goal images were taken from the database day, in order to simulate the changing of illumination; the experiment condition was represented by original-day.
Figure 7 shows the homing vector fields for the warping method and the proposed method. (1, 4), (7, 13) and (5, 9) in the capture grid were selected as goal positions, and the corresponding experiment conditions were original-original, original-arboreal and original-day, respectively. In Figure 7, the current positions are marked by blue squares, and the goal positions are marked by red squares. The homing direction is denoted by the line emanating from each blue square. Figure 8 shows the AE results for the two homing methods. The experiment conditions and goal positions are the same as the settings of Figure 7. The gray scale of each grid cell indicates the AE value of the corresponding (x, y) position in the capture grid of the database; its shading from black to white represents values ranging from 0 to the maximum AE computed by the two methods. For readability, AE values of 100 or more are shown as white in the experiments. From Figure 7 and Figure 8, we can draw the following conclusions: most of the homing directions computed by our method are more accurate than those computed by the warping method, and the AE of our homing method is lower than that of the warping method on the whole.
Figure 7. Homing vector fields. (a,c,e) The homing vectors generated by the warping method; (b,d,f) the homing vectors generated by the proposed homing method.
Figure 8. Angular error (AE) results. (a,c,e) The homing angular errors generated by the warping method; (b,d,f) the homing angular errors generated by the proposed homing method.
Figure 9 shows the distribution of AHC according to the distance between C and H. The corresponding experiment conditions for Figure 9a–c were original-original, original-arboreal and original-day, respectively. It can be seen from the trends of "P" and "PN" in Figure 9 that the mismatching elimination step effectively improves the homing performance. It can also be seen from the trends of "P" and "W" that the AHC of our homing method is superior to that of the warping method on the whole. In the three groups of experiments, the AHC of both methods is always above 0, which indicates that both methods have the ability to guide the robot to approach the goal position gradually. From Figure 9a,b, although the addition of a tall plant in the experiment scene leads to a slight decrease in AHC for both methods, our homing method still performs better. As shown in Figure 9a,c, when the illumination of the experiment scene changes, the performance of the warping method drops dramatically, while the performance of our homing method drops only slightly. In conclusion, the results show that our homing method has better robustness to changes of the environment. In Figure 9, we can see an interesting phenomenon: when the distance between C and H is approximately within the range of 30 to 330 cm, the AHC of our method is closer to 1 than that of the warping method, which shows that the average AE of our homing method is smaller; when the distance between C and H is approximately within the range of 360 to 390 cm, the AHC of our homing method is smaller than that of the warping method, which indicates that the average AE of our homing method is higher. The main reasons for this are as follows: (1) when the distance between C and H is short (30 to 330 cm), there are more correctly matching landmarks due to the minor differences between the two images; consequently, the proposed mismatching elimination algorithm can effectively remove the mismatching landmarks, and our homing algorithm achieves higher calculation accuracy; (2) when the distance between C and H is large (360 to 390 cm), the number of matching landmarks is smaller, and among them there are more mismatching landmarks. For this reason, the proposed algorithm cannot effectively eliminate the mismatching landmarks, which makes the calculation accuracy of our homing algorithm decrease significantly.
Figure 10 shows the RR for the two methods under the three experiment conditions. "P" represents the proposed homing method; "PN" represents the proposed homing method without the mismatching elimination step; and "W" represents the warping method. To avoid the influence of randomness, we chose (1, 4), (1, 12), (5, 9), (8, 3) and (7, 13) in the capture grid of the image databases as goal positions, which are uniformly distributed in the experiment scene. It can be seen from the bar charts of "P" and "PN" in Figure 10 that the mismatching elimination step effectively improves the RR of our homing method. In total, 15 different goal positions were tested; compared to the warping method, our method performs better at 13 of them. From Figure 10a,b, although the RR for the two methods drops slightly with the addition of a tall plant in the experiment scene, our homing method still performs better. The performance of the two methods is greatly influenced when the illumination changes in the experiment scene, as shown in Figure 10a,c; for the warping method in particular, the homing performance declines dramatically. The results of RR agree with those of AHC: compared to the warping method, our homing method has better robustness to changes of the environment in the scene.
Figure 9. Average homeward component (AHC) results. (a–c) The distribution of AHC under the experiment conditions: original-original, original-arboreal and original-day. P, proposed homing method; PN, proposed homing method without a mismatching elimination step; W, warping method.
Figure 10. Return ratio (RR) results. (a–c) The RR for five goal positions under the experiment conditions: original-original, original-arboreal and original-day.

4.5. Homing Trials in a Real Scene

In order to further evaluate the performance of our method in practice, experiments were conducted in a real scene. We selected the intelligent robot lab of Harbin Engineering University as the experiment scene. The surroundings of the trial area are shown in Figure 11. The tracked mobile robot introduced in Section 4.1 was used in the trials. We randomly chose four goal positions, which were uniformly distributed in the trial area. For each goal position, five current positions spaced evenly throughout the area were selected for the tests. The robot took a panoramic image at its current position, compared it to the goal image stored in memory and computed the homing angles separately with the proposed method and the warping method. After that, the robot moved a step of fixed length 25 cm in the homing direction. In the trials, the above process was repeated until the robot reached a place within 30 cm of the goal position or its movement distance exceeded half of the circumference of the trial area, which corresponds to no more than 43 steps. If the robot arrived at the goal position within the prescribed number of steps, the homing was successful; otherwise, the homing failed. Because most stopping criteria based on image information are likely to lead the robot to oscillate around the goal position, the robot was manually stopped once it reached the goal area.
Figure 11. Robot trial environment in the real scene.
Figure 12, Figure 13, Figure 14 and Figure 15 show the trajectories of the robot for the two methods, with tables listing the statistics of the number of homing steps N and the average angular errors σ(°) in each trial. The goal area is indicated by the red circle. “CP” represents the current position. Red lines represent the trajectories of the proposed method. Blue lines represent those of the warping method. As shown in Figure 12, Figure 13, Figure 14 and Figure 15, most homing trajectories for the warping method are more curved than those for our method. In total, 20 homing trials were carried out, 17 of which suggest that the average angular errors for our method are smaller; the number of total homing steps for the warping method is 312, while for our method, that number is only 292. Both the trajectories and statistics indicate that the homing angles computed by our method are more accurate, and the movement distance is shorter.
Figure 12. Robot homing Trial 1. Top left: the panorama of the goal position; top right: the homing trajectories for five different current positions; the table below: the number of homing steps and the average angular error for each current position. CP, current position.

Method      CP1          CP2          CP3          CP4          CP5
            N    σ       N    σ       N    σ       N    σ       N    σ
Warping     11   24.54   13   21.12   17   15.03   20   19.14   16   18.78
Proposed    10   6.34    13   11.57   16   5.81    20   15.43   15   6.47
Figure 13. Robot homing Trial 2.

Method      CP1          CP2          CP3          CP4          CP5
            N    σ       N    σ       N    σ       N    σ       N    σ
Warping     5    9.66    16   15.23   14   11.79   17   19.09   14   20.19
Proposed    5    6.47    15   11.42   15   14.92   14   9.29    12   8.31
Figure 14. Robot homing Trial 3.

Method      CP1          CP2          CP3          CP4          CP5
            N    σ       N    σ       N    σ       N    σ       N    σ
Warping     6    8.01    17   10.65   18   14.79   21   23.82   11   13.89
Proposed    6    8.13    17   8.07    20   16.24   20   11.89   10   5.10
Figure 15. Robot homing Trial 4.

Method      CP1          CP2          CP3          CP4          CP5
            N    σ       N    σ       N    σ       N    σ       N    σ
Warping     13   15.13   14   12.93   27   31.89   23   15.99   19   12.14
Proposed    12   9.50    13   9.89    20   15.22   21   11.32   18   5.29

5. Conclusions

This paper proposes a novel method to solve the problem of local visual homing. The method is composed of a novel visual homing algorithm and a novel mismatching elimination algorithm. The former is inspired by the warping method, and the latter is based on the distribution characteristics of landmarks in the initial panoramic image. Compared to the warping method, the proposed homing method improves the homing accuracy effectively and has better robustness to the changes in the environment. Experiments on image databases and in a real scene confirm the improved performance.
For the visual homing algorithms based on landmarks, a reduction in the number of landmarks can effectively reduce the amount of computation, while the homing performance might be affected. In the future, we will focus on how to reduce the number of landmarks effectively on the premise of guaranteeing the homing precision.

Acknowledgments

The authors are very grateful to the reviewers and editors for their valuable comments and hard work. With their help, this paper has been improved significantly. This work is partially supported by the National Natural Science Foundation of China (61175089, 61203255, 51409053).

Author Contributions

The work presented in this paper was accomplished in a concerted effort by all authors. Qidan Zhu and Chuanjia Liu conceived of the study, designed the homing method and prepared the manuscript. Chuanjia Liu and Chengtao Cai performed the experiments and analyzed the data. All authors commented on the manuscript and approved the final version.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. López-Nicolás, G.; Guerrero, J.J.; Sagüés, C. Multiple homographies with omnidirectional vision for robot homing. Robot. Auton. Syst. 2010, 58, 773–783. [Google Scholar] [CrossRef]
  2. Ohnishi, N.; Imiya, A. Appearance-based navigation and homing for autonomous mobile robot. Image Vis. Comput. 2013, 31, 511–532. [Google Scholar] [CrossRef]
  3. Aranda, M.; López-Nicolás, G.; Sagüés, C. Angle-based homing from a reference image set using the 1D trifocal tensor. Auton. Robot. 2013, 34, 73–91. [Google Scholar] [CrossRef]
  4. Labrosse, F. Short and long-range visual navigation using warped panoramic images. Robot. Auton. Syst. 2007, 55, 675–684. [Google Scholar] [CrossRef] [Green Version]
  5. Baddeley, B.; Graham, P.; Philippides, A.; Husbands, P. Holistic visual encoding of ant-like routes: Navigation without waypoints. Adapt. Behav. 2011, 19, 3–15. [Google Scholar] [CrossRef]
  6. Yu, S.E.; Lee, C.; Kim, D. Analyzing the effect of landmark vectors in homing navigation. Adapt. Behav. 2012, 20, 337–359. [Google Scholar] [CrossRef]
  7. Fu, Y.; Hsiang, T.R. A fast robot homing approach using sparse image waypoints. Image Vis. Comput. 2012, 30, 109–121. [Google Scholar] [CrossRef]
  8. Liu, M.; Pradalier, C.; Pomerleau, F.; Siegwart, R. Scale-only visual homing from an omnidirectional camera. In Proceedings of the IEEE International Conference on Robotics and Automation, St Paul, MN, USA, 14–18 May 2012; pp. 3944–3949.
  9. Möller, R.; Krzykawski, M.; Gerstmayr-Hillen, L.; Horst, M.; Fleer, D.; de Jong, J. Cleaning robot navigation using panoramic views and particle clouds as landmarks. Robot. Auton. Syst. 2013, 61, 1415–1439. [Google Scholar] [CrossRef]
  10. Liu, M.; Pradalier, C.; Siegwart, R. Visual homing from scale with an uncalibrated omnidirectional camera. IEEE Trans. Robot. 2013, 29, 1353–1365. [Google Scholar] [CrossRef]
  11. Guzel, M.S.; Bicker, R. A behaviour-based architecture for mapless navigation using vision. Int. J. Adv. Robot. Syst. 2012, 9, 18:1–18:13. [Google Scholar]
  12. Zhu, Q.; Liu, X.; Cai, C. Feature optimization for long-range visual homing in changing environments. Sensors 2014, 14, 3342–3361. [Google Scholar]
  13. Möller, R.; Krzykawski, M.; Gerstmayr, L. Three 2D-warping schemes for visual robot navigation. Auton. Robot. 2010, 29, 253–291. [Google Scholar] [CrossRef]
  14. Zeil, J.; Hofmann, M.I.; Chahl, J.S. Catchment areas of panoramic snapshots in outdoor scenes. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2003, 20, 450–469. [Google Scholar] [CrossRef] [PubMed]
  15. Sturzl, W.; Zeil, J. Depth, contrast and view-based homing in outdoor scenes. Biol. Cybern. 2007, 96, 519–531. [Google Scholar] [CrossRef] [PubMed]
  16. Arena, P.; De Fiore, S.; Fortuna, L.; Nicolosi, L.; Patane, L.; Vagliasindi, G. Visual homing: experimental results on an autonomous robot. In Proceedings of the European Conference on Circuit Theory and Design, Univ Sevilla, Seville, Spain, 26–30 August 2007; pp. 304–307.
  17. Möller, R. A model of ant navigation based on visual prediction. J. Theor. Biol. 2012, 305, 118–130. [Google Scholar] [CrossRef] [PubMed]
  18. Churchill, D.; Vardy, A. An orientation invariant visual homing algorithm. J. Intell. Robot. Syst. 2013, 71, 3–29. [Google Scholar] [CrossRef]
  19. Vardy, A.; Möller, R. Biologically plausible visual homing methods based on optical flow techniques. Connect. Sci. 2005, 17, 47–89. [Google Scholar] [CrossRef]
  20. Briggs, A.J.; Detweiler, C.; Li, Y.; Mullen, P.C.; Scharstein, D. Matching scale-space features in 1D panoramas. Comput. Vis. Image Underst. 2006, 103, 184–195. [Google Scholar] [CrossRef]
  21. Loizou, S.G.; Kumar, V. Biologically inspired bearing-only navigation and tracking. In Proceedings of the IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 6121–6126.
  22. Liu, M.; Pradalier, C.; Chen, Q.J.; Siegwart, R. A bearing-only 2D/3D-homing method under a visual servoing framework. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–8 May 2010; pp. 4062–4067.
  23. Möller, R.; Vardy, A. Local visual homing by matched-filter descent in image distances. Biol. Cybern. 2006, 95, 413–430. [Google Scholar] [CrossRef] [PubMed]
  24. Möller, R.; Vardy, A.; Kreft, S.; Ruwisch, S. Visual homing in environments with anisotropic landmark distribution. Auton. Robot. 2007, 23, 231–245. [Google Scholar] [CrossRef]
  25. Lambrinos, D.; Möller, R.; Labhart, T.; Pfeifer, R.; Wehner, R. A mobile robot employing insect strategies for navigation. Robot. Auton. Syst. 2000, 30, 39–64. [Google Scholar] [CrossRef]
  26. Basten, K.; Mallot, H.A. Simulated visual homing in desert ant natural environments: Efficiency of skyline cues. Biol. Cybern. 2010, 102, 413–425. [Google Scholar] [CrossRef] [PubMed]
  27. Ramisa, A.; Goldhoom, A.; Aldavert, D.; Toledo, R.; de Mantaras, R.L. Combining invariant features and the ALV homing method for autonomous robot navigation based on panoramas. J. Intell. Robot. Syst. 2011, 64, 625–649. [Google Scholar] [CrossRef]
  28. Franz, M.O.; Schölkopf, B.; Mallot, H.A.; Bülthoff, H.H. Where did I take that snapshot? Scene-based homing by image matching. Biol. Cybern. 1998, 79, 191–202. [Google Scholar] [CrossRef]
  29. Sturzl, W.; Mallot, H.A. Efficient visual homing based on Fourier transformed panoramic images. Robot. Auton. Syst. 2006, 54, 300–313. [Google Scholar] [CrossRef]
  30. Möller, R. Local visual homing by warping of two-dimensional images. Robot. Auton. Syst. 2009, 57, 87–101. [Google Scholar] [CrossRef]
  31. Lowe, D.G. Distinctive image features from scale invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  32. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  33. Wei, Z.; Xu, W.S.; Yu, Y.L. Area harmony dominating rectification method for SIFT image matching. In Proceedings of the IEEE Conference on Electronic Measurement and Instruments, Xi'an, China, 16–18 August 2007; pp. 935–939.
  34. Wang, C.; Ma, K.K. Bipartite graph-based mismatch removal for wide-baseline image matching. J. Vis. Commun. Image Represent. 2014, 25, 1416–1424. [Google Scholar] [CrossRef]
  35. Gillner, S.; Weiß, A.M.; Mallot, H.A. Visual homing in the absence of feature-based landmark information. Cognition 2008, 109, 105–122. [Google Scholar] [CrossRef] [PubMed]
  36. Panoramic Image Database. Available online: http://www.ti.uni-bielefeld.de/html/research/avardy/index.html (accessed on 22 September 2014).
