
Article

An Efficient Algorithm for Infrared Earth Sensor with a Large Field of View

Micro-Satellite Research Center, School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2022, 22(23), 9409; https://doi.org/10.3390/s22239409
Submission received: 15 September 2022 / Revised: 21 November 2022 / Accepted: 24 November 2022 / Published: 2 December 2022
(This article belongs to the Topic Micro/Nano Satellite Technology, Systems and Components)

Abstract

Infrared Earth sensors with large-field-of-view (FOV) cameras are widely used on low-Earth-orbit satellites. To improve the accuracy and speed of Earth sensors, an algorithm based on a modified random sample consensus (RANSAC) and weighted total least squares (WTLS) is proposed. Firstly, the modified RANSAC, which adds a pre-verification step, is used to remove noisy points efficiently. Then, the Earth's oblateness is taken into consideration and the Earth horizon is projected onto a unit sphere as a three-dimensional (3D) curve. Finally, total least squares (TLS) and WTLS are used to fit the projection of the Earth horizon, greatly improving the accuracy of the Earth sensor. Simulated images and on-orbit infrared images obtained by the Tianping-2B satellite were used to assess the performance of the algorithm. The experimental results demonstrate that the method outperforms RANSAC, M-estimator sample consensus (MLESAC), and the Hough transformation in terms of speed. The accuracy of the algorithm for nadir estimation is approximately 0.04° (root-mean-square error) when the Earth is fully visible and 0.16° when the off-nadir angle is 120°, a significant improvement upon other nadir estimation algorithms.

1. Introduction

An accurate and reliable attitude determination system is required for most spacecraft missions. Attitude determination systems usually consist of multiple sensors, such as star trackers, infrared Earth sensors, sun sensors, and magnetometers [1,2]. Among them, infrared Earth sensors are widely used in spacecraft for their reliability and low cost [3].
Infrared Earth sensors can be roughly divided into two categories: scanning Earth sensors and static Earth sensors. Scanning Earth sensors rotate about a certain axis and detect infrared signals from the Earth; the roll angle of the spacecraft is then calculated from the time during which the Earth crosses the sensor's range of view. Examples include the STD-15 [4], STD-16 [5], and IERS [6]. Although this kind of Earth sensor can achieve high accuracy, it is not well suited for small satellites because of its larger size and greater power consumption [7].
On the other hand, traditional static Earth sensors use multiple thermopile detectors to detect the Earth horizon and calculate the spacecraft's attitude. These include the MAI-SES, manufactured by Maryland Aerospace [3], and the HSNS, manufactured by Solar MEMS Technologies [8]. The MAI-SES's accuracy is 0.18° (position known), while the HSNS's accuracy is better than 1° (3σ). Generally, traditional static Earth sensors are low-cost and easy to implement, but their performance is affected by the variation of radiance in the atmosphere [9].
In recent years, many static Earth sensors have used 2D image sensors instead of thermopile detectors. The basic idea is to capture infrared images of the Earth and extract horizon points with image-processing algorithms. The Earth horizon points are then fitted to a circle, ellipse, parabola, or hyperbola, and the nadir vector is obtained from the parameters of the conic [10]. For example, the accuracy of the digital Earth sensor designed by SITAEL is better than 1°, and its FOV is 37° × 44° (horizontal and vertical) [11]. Saadat used a miniature vehicle-inspection camera as an Earth sensor, achieving an accuracy of 0.69° (1σ) [12]. Kikuya et al. proposed a three-axis attitude determination algorithm using a visible-ray camera with a resolution of 3280 × 2464 pixels as an Earth sensor [13]; with the help of image identification technology, this Earth sensor's accuracy was similar to that of a coarse sun sensor. Recently, Modenini took the Earth's oblateness into account and proposed a solution to estimate the attitude from imaged ellipsoids [14]. The accuracy of the roll and pitch determination reached 0.01°; however, the performance of the prototype was not reported for off-nadir angles over 10°, and the prototype was designed with three different cameras to capture the entire Earth limb, which may introduce additional error. In [15], Christian modified Modenini's solution and presented a tutorial on inferring the relative position, attitude, or both from the observed body's horizon; this comprehensive work developed different algorithms for different optical navigation scenarios. For spacecraft in low-Earth orbit (LEO), many Earth sensors use wide-FOV cameras, such as cameras with fisheye lenses [16]. The most representative commercial sensor of this kind is the CubeSense, manufactured by CubeSpace [17]. Its accuracy is up to 0.2° (3σ) if the Earth is fully visible, and its FOV is 130° (horizontal and vertical). However, the effect of the Earth's oblateness is not considered, and the accuracy decreases as the Earth moves towards the edge of the FOV. Barf designed a horizon sensor with a fisheye camera for sounding rockets [18,19]; its error is below 0.5°. Two CubeSats, Athenoxat-1 and ISARA, also used fisheye cameras to obtain the nadir vector, but no further information about their sensors is available [20,21].
In our previous work [22], an Earth sensor composed of a panoramic annular lens (PAL) and a complementary metal-oxide-semiconductor (CMOS) infrared camera was designed. This Earth sensor has been used on multiple missions [23]. Compared with Earth sensors using pinhole cameras, it needs only one camera to capture the entire Earth horizon, and it can operate on low-Earth-orbit (LEO) satellites and estimate the nadir vector even when the off-nadir angle is over 90°. However, as with the CubeSense, its accuracy decreases as the off-nadir angle increases, and it did not take the Earth's oblateness into consideration. Additionally, the Hough transformation was used to remove outliers, which is robust but time-consuming.
To solve these problems, an algorithm based on a modified RANSAC and WTLS is proposed. The basic idea is to use the modified RANSAC to remove the disturbance points and then calculate the weights of the Earth edge points. After that, the TLS and WTLS techniques are applied to fit the Earth edge points to a three-dimensional (3D) curve instead of a conic section. Compared with the conventional Hough transformation and the MLESAC algorithm, the time consumption is dramatically reduced. Furthermore, the accuracy of the sensor is especially improved when the Earth limb is small.
The main contributions of this paper are:
  • A modified RANSAC with a pre-verification procedure is used to remove outliers. A small amount of data, instead of all measured data, is used to qualify the established models; this pre-verification procedure improves efficiency.
  • The Earth horizon points are mapped onto the unit sphere instead of the image plane, which forms a three-dimensional curve instead of a conic section. The 3D curve fitting is more robust than conic fitting for PAL images or fisheye images.
  • WTLS is introduced into the 3D curve fitting to account for the different precision of each horizon point. Consequently, the accuracy of the sensor is improved.
The rest of this paper is organized as follows: In Section 2, the proposed method is described, covering the modified RANSAC, the expression for the projection of the Earth horizon points, and the WTLS-based 3D curve fitting. In Section 3, the implementation of the proposed algorithm is described; in Section 4, its performance is presented; and in Section 5, conclusions and future work are given.

2. Algorithm Description

The proposed algorithm consists of five steps. Firstly, edge detection is performed to extract edge points in the image. Secondly, the edge points are mapped onto the unit image-sphere. Thirdly, the modified RANSAC is performed to remove the disturbance points. Fourthly, the actual horizon is fitted to a 3D curve with WTLS. Finally, the nadir vector is calculated from the coefficients of the curve. The flowchart of the entire algorithm is shown in Figure 1.

2.1. Edge Detection

Several kinds of edge detection algorithms can be applied to Earth horizon detection, such as Sobel, Prewitt, and Canny [24]. To increase the accuracy, we used the subpixel edge-localization algorithm developed by Christian [25,26]. Sobel edge detection was used to find the approximate edge location; then, the subpixel edge-localization algorithm using Zernike moments was performed to find the accurate edge points in the image. In this paper, the size of the Zernike masks is 7 × 7.
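For illustration, a minimal sketch of the coarse edge-localization stage is given below (Python/NumPy assumed). The gradient threshold grad_thresh is a hypothetical tuning parameter, and the 7 × 7 Zernike-moment subpixel refinement of [25,26] is omitted.

    import numpy as np

    def sobel_edge_candidates(img: np.ndarray, grad_thresh: float) -> np.ndarray:
        """Coarse Earth-horizon edge candidates via Sobel gradient magnitude.

        A sketch only: the paper refines these pixels to subpixel accuracy
        with 7x7 Zernike-moment masks [25,26], which is not reproduced here.
        """
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T

        def conv2(a, k):
            # Valid-mode 2D correlation; sign/flip is irrelevant for magnitude.
            out = np.zeros((a.shape[0] - 2, a.shape[1] - 2))
            for i in range(3):
                for j in range(3):
                    out += k[i, j] * a[i:i + out.shape[0], j:j + out.shape[1]]
            return out

        gx = conv2(img.astype(float), kx)
        gy = conv2(img.astype(float), ky)
        mag = np.hypot(gx, gy)
        rows, cols = np.nonzero(mag > grad_thresh)
        return np.column_stack([rows + 1, cols + 1])  # offset for the lost border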

2.2. Horizon Projection

The lens follows the principle of flat-cylinder perspective [27]. The FOV of the PAL camera we used is 360° in azimuth and 90° in zenith. The imaging formula of the PAL is given by:
r = f\theta \qquad (1)
where f is the focal length of the PAL, r is the radial distance between the image point and the image center, and θ is the angle from the optical axis. According to [15], the Earth horizon in the image plane, i.e., the z = 1 plane, must be a conic. Most algorithms project edge points onto the image plane [28,29,30]. Let the coordinates of an observed point in the camera frame be [X_C, Y_C, Z_C]; then, the image plane coordinates of the observed point, [x_z, y_z], are given by:
x_z = \frac{X_C}{Z_C}, \qquad y_z = \frac{Y_C}{Z_C} \qquad (2)
For an observed point near the edge of the FOV, Z_C is close to 0. As a result, its image plane coordinates will be large and inaccurate, which can cause the conic fitting to fail. Thus, projecting the edge points onto the image plane and fitting them to a conic is not suitable for a wide-FOV camera such as a PAL. To solve this problem, the edge points are mapped one-to-one onto the unit image-sphere, as in our previous work. The mapping is defined as follows:
x = \frac{\sin\theta_i \, u_i}{\sqrt{u_i^2 + v_i^2}}, \qquad y = \frac{\sin\theta_i \, v_i}{\sqrt{u_i^2 + v_i^2}}, \qquad z = \cos\theta_i \qquad (3)
where [x, y, z] are the coordinates of the point on the sphere and [u_i, v_i] are the corresponding image coordinates. The angle θ_i is determined from Equation (1). After the projection, the Earth horizon on the image-sphere is a 3D curve instead of a conic; the details can be found in Section 2.4.
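A minimal sketch of this mapping follows, assuming the ideal model of Equation (1); in practice, the calibrated polynomial of Section 3.1 would replace the division by f. The principal_point argument is a hypothetical calibration input used to center the pixel coordinates.

    import numpy as np

    def image_to_unit_sphere(uv: np.ndarray, f: float, principal_point) -> np.ndarray:
        """Map PAL image points (n x 2, pixels) onto the unit image-sphere, Eq. (3).

        Assumes theta = r / f (Eq. (1)), with r measured from the principal point.
        """
        d = uv - np.asarray(principal_point)       # center the pixel coordinates
        r = np.linalg.norm(d, axis=1)              # radial distance in the image
        theta = r / f                              # zenith angle from Eq. (1)
        s = np.sin(theta) / np.maximum(r, 1e-12)   # sin(theta_i) / sqrt(u^2 + v^2)
        return np.column_stack([s * d[:, 0], s * d[:, 1], np.cos(theta)])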

2.3. Modified RANSAC

The extracted edge points may contain not only the actual Earth horizon but also boundaries between areas of different infrared radiance. Many algorithms can be applied to remove such disturbance points; among them, RANSAC algorithms are widely used [31,32,33]. The classic RANSAC algorithm randomly selects sample points to establish a model, which is often erroneous owing to the disturbance points, and all other points must be checked before such a model can be rejected, which is significantly time-consuming. Although the Earth horizon on the image-sphere is a 3D curve, it is also close to a circle because the Earth's ellipticity is negligible; this information can be used to pre-verify a model. The basic flow of the modified RANSAC is as follows (a code sketch follows the list):
  • Randomly select five sample points. The coordinates of the i-th point are [x_i, y_i, z_i].
  • Calculate the normal vector to the plane determined by three of them using Equation (4):

\mathbf{p} = [x_1 - x_2,\ y_1 - y_2,\ z_1 - z_2], \quad \mathbf{q} = [x_1 - x_3,\ y_1 - y_3,\ z_1 - z_3], \quad \mathbf{n} = \mathbf{p} \times \mathbf{q} \qquad (4)

  • Calculate the angles θ_1, …, θ_5 between the normal vector and the vectors pointing from the origin to the five sample points, and let θ_m be their mean. If the mean deviation A.D. > T, go back to step 1.
  • Calculate the angle θ_i between the normal vector and the vector pointing from the origin to each remaining point; if |θ_i − θ_m| < T, the point is considered an inlier. The number of inliers is denoted N_k.
  • If N_k > N_max, then set N_max = N_k.
  • Repeat steps 1–5 k_max times. Note that if a set with 50% inliers is discovered, the loop ends early.
  • Remove all the outliers and extract the actual horizon. Furthermore, the normal vector n approximates the nadir vector, and thus the approximate off-nadir angle φ can be obtained.
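The sketch below is one possible transcription of this procedure in Python/NumPy; variable names and the numerical guards are illustrative, not from the paper. The threshold T is in radians here (the paper states it in degrees).

    import numpy as np

    def modified_ransac(pts, T, k_max, rng=np.random.default_rng()):
        """Outlier removal on unit-sphere points (n x 3) with pre-verification.

        T: angular threshold in radians. Returns an inlier mask and the
        approximate nadir direction n (normal of the best-fitting plane).
        """
        n_pts, best_mask, best_n, n_max = len(pts), None, None, 0
        for _ in range(k_max):
            s = pts[rng.choice(n_pts, 5, replace=False)]   # five sample points
            n = np.cross(s[0] - s[1], s[0] - s[2])         # Eq. (4)
            norm = np.linalg.norm(n)
            if norm < 1e-12:
                continue                                   # degenerate sample
            n /= norm
            # Pre-verification: test the model on the 5 samples only.
            ang = np.arccos(np.clip(s @ n, -1.0, 1.0))
            theta_m = ang.mean()
            if np.abs(ang - theta_m).mean() > T:           # mean deviation test
                continue                                   # reject cheaply
            # Full consensus count only for models that survive pre-verification.
            ang_all = np.arccos(np.clip(pts @ n, -1.0, 1.0))
            mask = np.abs(ang_all - theta_m) < T
            if mask.sum() > n_max:
                n_max, best_mask, best_n = mask.sum(), mask, n
            if n_max > 0.5 * n_pts:                        # early exit at 50% inliers
                break
        return best_mask, best_n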

2.4. Three-Dimensional Curve Fitting

As mentioned above, the projection of the Earth horizon on the unit sphere is a 3D curve. In this section, the representation of the curve is derived and the fitting method is described.

2.4.1. Projection of Earth Horizon on the Unit Sphere

The geometry of the Earth sensor and Earth is shown in Figure 2.
As shown in Figure 2, O_C is the origin of the camera coordinate system, while O_E is the Earth's center. S_i is the vector pointing from O_C to an Earth horizon point, and r is the vector pointing from O_E to O_C. The shape matrix of the Earth, A_p, is calculated as:
A_p = \mathrm{diag}\!\left( \frac{1}{R_a^2},\ \frac{1}{R_b^2},\ \frac{1}{R_c^2} \right) \qquad (5)
where R_a = R_b = 6378.137 km and R_c = 6356.755 km [34,35]. According to [15], the vector from the camera origin to an Earth horizon point in the camera frame, S_i, obeys the constraint:
S_i^T M S_i = 0 \qquad (6)
The normalized S_i is the projection of a horizon point on the unit sphere. Furthermore, M is given by:
M = A \mathbf{r} \mathbf{r}^T A - (\mathbf{r}^T A \mathbf{r} - 1) A, \qquad A = T_{cp} A_p T_{pc} \qquad (7)
T_{cp} denotes the rotation matrix from the Earth coordinate system to the Earth sensor's coordinate system, and T_{pc} is the transpose of T_{cp}. Since M is symmetric, it can be written as:
M = \begin{bmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & f \end{bmatrix} \qquad (8)
Moreover, Equation (6) is rewritten as
a x^2 + b x y + c y^2 + d x z + e y z + f z^2 = 0 \qquad (9)
where x, y, and z are the coordinates of the normalized S_i. This is the representation of the projection of the Earth horizon on the unit sphere, i.e., a 3D curve. Since x^2 + y^2 + z^2 = 1, Equation (9) is equivalent to:
\left(1 - \frac{a}{f}\right) x^2 - \frac{b}{f} x y + \left(1 - \frac{c}{f}\right) y^2 - \frac{d}{f} x z - \frac{e}{f} y z = 1 \qquad (10)
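For concreteness, a small sketch assembling A_p and M from Equations (5) and (7) is shown below; it assumes r and S_i are both expressed in the camera frame, consistent with the use of M in Equation (6).

    import numpy as np

    R_A, R_B, R_C = 6378.137, 6378.137, 6356.755   # km, Eq. (5)

    def shape_matrix() -> np.ndarray:
        """Earth shape matrix A_p of Eq. (5)."""
        return np.diag([1 / R_A**2, 1 / R_B**2, 1 / R_C**2])

    def horizon_matrix(r_c: np.ndarray, T_cp: np.ndarray) -> np.ndarray:
        """M of Eq. (7); r_c is the camera position w.r.t. the Earth's center,
        expressed in the camera frame (an assumption of this sketch)."""
        A = T_cp @ shape_matrix() @ T_cp.T
        return A @ np.outer(r_c, r_c) @ A - (r_c @ A @ r_c - 1.0) * A

    # Every unit horizon direction s should then satisfy s^T M s = 0, Eq. (6).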

2.4.2. Weighted Total Least Squares

Define ξ as
\boldsymbol{\xi} = [\xi_1, \xi_2, \xi_3, \xi_4, \xi_5]^T = \left[ 1 - \frac{a}{f},\ -\frac{b}{f},\ 1 - \frac{c}{f},\ -\frac{d}{f},\ -\frac{e}{f} \right]^T \qquad (11)
Let [x_i, y_i, z_i] denote the coordinates of the i-th edge point and define e_i as:
\mathbf{e}_i = [x_i^2,\ x_i y_i,\ y_i^2,\ x_i z_i,\ y_i z_i] \qquad (12)
A linear system can be built based on Equation (10):

E \boldsymbol{\xi} = \mathbf{1}_{n \times 1} \qquad (13)
where
E = \begin{bmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \vdots \\ \mathbf{e}_n \end{bmatrix} \qquad (14)
Since E is noisy, it is more appropriate to use total least squares to estimate ξ [36,37]. Moreover, the precision of each point's coordinates differs, so weighted total least squares can achieve better accuracy, at the cost of more computation time. Therefore, total least squares is used to estimate the initial value of ξ, and weighted total least squares is performed only when the off-nadir angle is large, because the coordinates of points near the edge of the FOV are less accurate.
The singular value decomposition (SVD) is used to find the solution to the total least squares problem. Define the augmented matrix E_y as:
E_y = \begin{bmatrix} E & \mathbf{1}_{n \times 1} \end{bmatrix} \qquad (15)
Then, the singular value decomposition of E_y is performed:
E_y = U \Sigma V^T \qquad (16)
The estimation of ξ is given by:
\boldsymbol{\xi} = -\frac{V(1{:}5,\,6)}{V(6,\,6)} \qquad (17)
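A compact sketch of this TLS step (Equations (12)–(17)) is given below, using NumPy's SVD convention (np.linalg.svd returns V^T with singular values in descending order).

    import numpy as np

    def tls_fit(pts: np.ndarray) -> np.ndarray:
        """Total-least-squares estimate of xi from unit-sphere points (n x 3).

        Builds E row-by-row from Eq. (12) and solves E xi = 1 via the SVD
        of the augmented matrix [E, 1], Eqs. (15)-(17).
        """
        x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
        E = np.column_stack([x * x, x * y, y * y, x * z, y * z])   # Eq. (12)
        Ey = np.column_stack([E, np.ones(len(pts))])               # Eq. (15)
        _, _, Vt = np.linalg.svd(Ey)                               # Eq. (16)
        v = Vt[-1]                   # right singular vector of smallest sigma
        return -v[:5] / v[5]         # Eq. (17)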
The weighted total least squares is performed when the approximate off-nadir angle φ is greater than a threshold φ_T. The cofactor matrices of the design matrix and the observation vector are denoted by Q_E and Q_y. Q_y is set as a zero matrix because the observation vector is a constant vector.
Substituting Equation (3) into Equation (12), the functional relationship between the elements of the design matrix and the coordinates of the edge points in the image is given by:
\mathbf{e}_i = \left[ \frac{\sin^2\theta_i \, u_i^2}{u_i^2 + v_i^2},\ \frac{\sin^2\theta_i \, u_i v_i}{u_i^2 + v_i^2},\ \frac{\sin^2\theta_i \, v_i^2}{u_i^2 + v_i^2},\ \frac{\sin\theta_i \cos\theta_i \, u_i}{\sqrt{u_i^2 + v_i^2}},\ \frac{\sin\theta_i \cos\theta_i \, v_i}{\sqrt{u_i^2 + v_i^2}} \right] \qquad (18)
Denote the function between e_i and [u_i, v_i] by F. The functional relationship between E and [u_1, v_1, …, u_n, v_n] is then given by:
\mathrm{vec}(E) = \begin{bmatrix} F_1(u_1, v_1, u_2, v_2, \ldots, u_n, v_n) \\ F_2(u_1, v_1, u_2, v_2, \ldots, u_n, v_n) \\ \vdots \\ F_{5n}(u_1, v_1, u_2, v_2, \ldots, u_n, v_n) \end{bmatrix} \qquad (19)
where vec denotes the operator that stacks each column of a matrix underneath the previous one. According to the law of propagation of cofactors [38], Q_E is calculated by:
Q_E = J_{uv} Q_{uv} J_{uv}^T \qquad (20)
where J_{uv} is the matrix of partial derivatives:
J_{uv} = \begin{bmatrix} \partial F_1/\partial u_1 & \partial F_1/\partial v_1 & \cdots & \partial F_1/\partial v_n \\ \partial F_2/\partial u_1 & \partial F_2/\partial v_1 & \cdots & \partial F_2/\partial v_n \\ \vdots & \vdots & \ddots & \vdots \\ \partial F_{5n}/\partial u_1 & \partial F_{5n}/\partial v_1 & \cdots & \partial F_{5n}/\partial v_n \end{bmatrix} \qquad (21)
Assuming that the precision of the observables u_i and v_i is the same, Q_{uv} is an identity matrix.
Once the cofactor matrix is obtained, the parameters can be estimated according to [39]. The pseudocode is listed as follows, and a NumPy transcription is sketched after it:
  • \xi_0 is estimated from TLS
  • for i = 1 to N do
    • \tilde{\xi}_{i-1} = \xi_{i-1} \otimes I_n
    • Q_1^i = \tilde{\xi}_{i-1}^T Q_E \tilde{\xi}_{i-1}
    • \mathrm{vec}(\delta E_i) = Q_E \tilde{\xi}_{i-1} (Q_1^i)^{-1} (\mathbf{1}_{n \times 1} - E \xi_{i-1})
    • \delta E_i = \mathrm{reshape}[\mathrm{vec}(\delta E_i)]
    • \hat{E}_i = E - \delta E_i
    • \xi_i = (\hat{E}_i^T (Q_1^i)^{-1} \hat{E}_i)^{-1} \hat{E}_i^T (Q_1^i)^{-1} (\mathbf{1}_{n \times 1} - \delta E_i \xi_{i-1})
  • end when \|\xi_i - \xi_{i-1}\| < \varepsilon
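The following is one possible NumPy transcription of this iteration, assuming Q_E has been computed from Equation (20) with column-stacking order for vec, and E is built as in the TLS sketch above; the tolerance eps and iteration cap N are illustrative settings, and the signs follow the reconstruction of the pseudocode.

    import numpy as np

    def wtls_fit(E, Q_E, xi0, eps=1e-10, N=50):
        """Iterative WTLS following the pseudocode above (after [39]); a sketch.

        E   : n x 5 design matrix,
        Q_E : 5n x 5n cofactor matrix of vec(E) (column-stacking order),
        xi0 : initial value from TLS.
        """
        n = E.shape[0]
        ones = np.ones(n)
        xi = np.asarray(xi0, dtype=float)
        for _ in range(N):
            Xi = np.kron(xi.reshape(-1, 1), np.eye(n))      # xi ⊗ I_n, 5n x n
            Q1 = Xi.T @ Q_E @ Xi                            # n x n
            resid = ones - E @ xi
            vec_dE = Q_E @ Xi @ np.linalg.solve(Q1, resid)  # vec(dE), 5n
            dE = vec_dE.reshape(5, n).T                     # undo column stacking
            Eh = E - dE
            W = np.linalg.inv(Q1)
            xi_new = np.linalg.solve(Eh.T @ W @ Eh,
                                     Eh.T @ W @ (ones - dE @ xi))
            if np.linalg.norm(xi_new - xi) < eps:
                return xi_new
            xi = xi_new
        return xi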
Once ξ is obtained, set f = 1; then, M is given by:
M = \begin{bmatrix} 1 - \xi_1 & -\xi_2/2 & -\xi_4/2 \\ -\xi_2/2 & 1 - \xi_3 & -\xi_5/2 \\ -\xi_4/2 & -\xi_5/2 & 1 \end{bmatrix} \qquad (22)
With M, the nadir vector can be estimated by Christian's algorithm; the details can be found in [15].
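Assembling M from the fitted coefficients is then immediate, as the sketch below shows; the nadir extraction itself follows [15] and is not reproduced here.

    import numpy as np

    def m_from_xi(xi) -> np.ndarray:
        """Build M of Eq. (22) from xi = [xi1..xi5], with f set to 1."""
        x1, x2, x3, x4, x5 = xi
        return np.array([[1 - x1, -x2 / 2, -x4 / 2],
                         [-x2 / 2, 1 - x3, -x5 / 2],
                         [-x4 / 2, -x5 / 2, 1.0]])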

3. Experiments

3.1. Calibration of PAL

Real lenses do not exactly follow the designed projection model. Additionally, the origin of the Earth sensor's coordinate system needs to be determined. Thus, it is better to calibrate the PAL before using it. The PAL projection model is given by [40]:
r(\theta) = k_1 \theta + k_2 \theta^3 + k_3 \theta^5 + \cdots \qquad (23)
The calibration devices mainly include a two-axis rotator and an infrared source. The details of the calibration method can be found in [41].
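As an illustration, the coefficients k_1, k_2, k_3 of Equation (23) could be estimated from matched rotator measurements (θ_j, r_j) by ordinary least squares; the three-term sketch below is an assumption of this write-up, not the calibration procedure of [41].

    import numpy as np

    def fit_pal_model(theta: np.ndarray, r: np.ndarray) -> np.ndarray:
        """Least-squares fit of r(theta) = k1*theta + k2*theta^3 + k3*theta^5.

        theta, r: matched calibration measurements from the rotator setup.
        Returns [k1, k2, k3].
        """
        A = np.column_stack([theta, theta**3, theta**5])
        k, *_ = np.linalg.lstsq(A, r, rcond=None)
        return k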

3.2. Simulation System

The proposed algorithm was verified and compared with other studies on a series of simulated Earth images and real images. The parameters of the optical system are shown in Table 1.
The Earth's infrared radiance is affected by latitude, season, atmospheric pressure, etc. [42]. The radiance of the Northern Hemisphere increases with increasing latitude in summer and decreases with latitude in winter. The radiance also decreases with increasing tangent height. The geometry of the tangent height is shown in Figure 3.
Ref. [43] provided curves of the Earth's infrared radiance at different latitudes as a function of the tangent height H_T. The latitudes in [43] range from 15° N to 90° N in steps of 15°, the seasons are summer and winter, and the tangent height ranges from −30 km to 70 km. We assumed that the radiance of the Northern Hemisphere in summer is the same as that of the Southern Hemisphere in winter, and that the radiance at any latitude can be obtained from the known profiles by linear interpolation. For example, the radiance at 20° N is P_15 + (P_30 − P_15) × (20 − 15)/(30 − 15), where P_15 and P_30 denote the radiance at 15° N and 30° N, respectively. The relationship between the image intensity and the radiance can be determined from on-orbit images. Given a point at the equator (0° N, 0 km tangent height) with radiance P_0, the corresponding gray value is set as the mean gray value of the Earth image, I_e. The gray value of the background noise is set as I_b. Then, the relationship between the image intensity and the radiance is given by:
I_n = \frac{I_e - I_b}{P_0} P_n + I_b \qquad (24)
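A sketch of the simulator's radiance model follows: linear interpolation between the latitude profiles, then Equation (24). The profiles array is a hypothetical stand-in for the curves of [43] at a fixed tangent height; seasonal blending is omitted.

    import numpy as np

    # Hypothetical stand-in for the profiles of [43]: radiance vs. latitude
    # at a fixed tangent height, at 15-degree steps from 15 N to 90 N.
    LATS = np.arange(15.0, 91.0, 15.0)

    def radiance_at(lat_deg: float, profiles: np.ndarray) -> float:
        """Linear interpolation between known latitude profiles, e.g.
        P(20) = P15 + (P30 - P15) * (20 - 15) / (30 - 15)."""
        return float(np.interp(abs(lat_deg), LATS, profiles))

    def intensity(P_n: float, P_0: float, I_e: float, I_b: float) -> float:
        """Gray value from radiance, Eq. (24)."""
        return (I_e - I_b) / P_0 * P_n + I_b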
The Earth images were simulated as follows:
  • Step 1: Randomly set the Earth sensor's position and attitude.
  • Step 2: For each pixel of the image sensor, set the line of sight as the vector from the camera origin to the pixel's projection on the unit image-sphere. Calculate the corresponding tangent height and latitude.
  • Step 3: Calculate each pixel's radiance from the tangent height H_T and latitude.
  • Step 4: Calculate each pixel's image intensity with Equation (24).
  • Step 5: Blur the image with a Gaussian function to simulate the effect of defocusing. Then, add Gaussian noise to the image.
  • Step 6: Add noisy points to the edge points to simulate the effect of clouds.
Figure 4a,b show a simulated Earth image without noisy points and a real Earth image obtained by the Tianping-2B satellite, respectively. Note that the real Earth image has been cropped.

4. Results

4.1. Computational Efficiency

The modified RANSAC was compared with MLESAC, conventional RANSAC, and the Hough transformation on simulated images. These algorithms were implemented in MATLAB on a PC with a 3.10 GHz Intel Core processor. The iteration number k_max can be calculated with:
k_{max} = \frac{\log(1 - p_{success})}{\log(1 - (1 - r_{out})^3)} \qquad (25)
where p_success is the probability that the actual horizon is extracted and r_out is the ratio of outliers. In our simulation, p_success was set to 90% and r_out to 80%, the assumed worst case; the iteration number k_max was then set to 286. The threshold T of the modified RANSAC was set to 1°. The ratio of noisy points was varied from 10% to 70%.
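For reference, Equation (25) can be evaluated directly:

    import math

    def ransac_iterations(p_success: float, r_out: float) -> float:
        """k_max from Eq. (25); for p_success = 0.9 and r_out = 0.8 this
        evaluates to about 286.7, which the paper rounds to 286."""
        return math.log(1 - p_success) / math.log(1 - (1 - r_out) ** 3)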
The running times of these algorithms are shown in Figure 5. As can be seen, the Hough transformation is slower than the other algorithms because it needs to search the whole parameter space for the optimal solution. The modified RANSAC is faster than MLESAC and conventional RANSAC because wrong models are eliminated using only the sample points.
The success rates of these algorithms are shown in Figure 6. As can be seen, the Hough transformation is more robust than the other algorithms, and its success rate remains over 99%. Nevertheless, the modified RANSAC still achieves a success rate of 97.2% when the ratio of outliers is 80%, and this can be improved by increasing the iteration number.

4.2. Accuracy

The proposed nadir vector estimation algorithm (TLS and WTLS) was tested on simulated images along with the circle-fitting algorithm [22] and the non-iterative nadir vector estimation algorithm [19]. The orbit altitude was set to 600 km. The off-nadir angle was set from 0° to 120° with a step of 0.1°. For each off-nadir angle, 500 images were generated, with the rotation matrix and latitude of the Earth sensor set randomly.
The nadir vector error is the angular separation between the estimated nadir vector and the true nadir vector. The root mean square (RMS) of the nadir vector error is shown in Figure 7 and Figure 8. As can be seen, when the off-nadir angle is below 90°, fitting the Earth horizon with TLS and with WTLS achieves nearly the same accuracy; their RMS errors (RMSEs) are below 0.1°. When the off-nadir angle is below 25°, the scenario in which the Earth is fully visible, their RMSEs are approximately 0.04°. As for the circle-fitting algorithm and the non-iterative algorithm, their RMSEs are over 0.1°, mainly because these algorithms do not consider the Earth's oblateness, which can cause a maximum error of 0.19°. When the off-nadir angle is over 90°, the RMSE of TLS fitting increases from 0.1° to 0.27°, while the RMSE of WTLS fitting increases from 0.1° to 0.16°. Thus, our algorithm uses WTLS to fit the Earth horizon when the off-nadir angle is over 90°. As for the circle-fitting algorithm and the non-iterative nadir vector estimation algorithm, their RMSEs are approximately 0.14° and 0.15°, respectively; however, their accuracy deteriorates when the latitude is near 45°.
Compared with the CubeSense, whose accuracy is 0.2° (3σ) when the Earth is fully visible, our algorithm achieves similar accuracy while considering the Earth's oblateness and using a lower-resolution CMOS, and it is more accurate when the off-nadir angle is large. It also outperforms another commercial sensor, the MAI-SES, whose accuracy is 0.18° (RMSE).

4.3. Performance on Real Earth Images

The Earth sensor was carried on the Tianping-2B satellite, which was developed by Zhejiang University and launched in March 2022. Several images of the Earth were taken in orbit, as shown in Figure 9 and Figure 10. In Figure 9c and Figure 10c, the red lines on the sphere are the 3D curves formed by the Earth horizon points. Because the attitude matrix of the satellite at the moments the images were taken is unknown, it is difficult to evaluate the accuracy of the proposed algorithm; its efficiency, however, can be evaluated. Table 2 shows the time taken by the three algorithms to remove outliers. As can be seen, the proposed algorithm is more efficient.

5. Conclusions

A nadir vector estimation algorithm combining a modified RANSAC with TLS and WTLS is proposed in this paper. It is intended to improve the efficiency and accuracy of Earth sensors with large-FOV cameras. The algorithm uses a modified RANSAC to remove noisy points and projects the Earth horizon onto the unit sphere as a 3D curve instead of a conic, which is more suitable for large-FOV cameras. Then, the TLS and WTLS techniques are used to fit the 3D curve and improve the accuracy. The algorithm also takes the Earth's oblateness into consideration. The experiments show that this algorithm is efficient and that its accuracy is 0.04° when the Earth is fully visible. The RMSE increases with the off-nadir angle, but the algorithm still achieves an accuracy of 0.16° when the off-nadir angle is 120°. Furthermore, the proposed algorithm processes real pictures taken in orbit much faster than the Hough transformation and MLESAC. Future work will focus on further improving the accuracy and on validating the proposed algorithm in an operational Earth sensor.

Author Contributions

Conceptualization, B.W. and H.W.; methodology, B.W.; software, B.W.; writing—original draft preparation, B.W.; writing—review and editing, H.W.; project administration, Z.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Diriker, F.K.; Frias, A.; Keum, K.H.; Lee, R.S.K. Improved Accuracy of a Single-Slit Digital Sun Sensor Design for CubeSat Application Using Sub-Pixel Interpolation. Sensors 2021, 21, 1472.
  2. Wang, X.; Wu, G.; Xing, L.; Pedrycz, W. Agile Earth Observation Satellite Scheduling Over 20 Years: Formulations, Methods, and Future Directions. IEEE Syst. J. 2020, 15, 3881–3892.
  3. Nguyen, T.; Cahoy, K.; Marinan, A. Attitude Determination for Small Satellites with Infrared Earth Horizon Sensors. J. Spacecr. Rocket. 2018, 55, 1466–1475.
  4. STD 15 Earth Sensor. Available online: https://www.satcatalog.com/component/std-15/ (accessed on 12 September 2022).
  5. STD 16 Earth Sensor. Available online: https://www.satcatalog.com/component/std-16/ (accessed on 12 September 2022).
  6. IERS. Available online: https://space.leonardo.com/en/product-catalogue (accessed on 12 September 2022).
  7. García Sáez, A.; Quero, J.M.; Angulo Jerez, M. Earth Sensor Based on Thermopile Detectors for Satellite Attitude Determination. IEEE Sens. J. 2016, 16, 2260–2271.
  8. HSNS. Available online: https://www.solar-mems.com/hsns/ (accessed on 12 September 2022).
  9. Xu, L.; Chen, H. Improvement of Infrared Horizon Detector Using Two-dimensional Infrared Temperature Distribution Model. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 5–9 January 2015.
  10. Rensburg, V. An Infrared Earth Horizon Sensor for a LEO Satellite. Master's Thesis, University of Stellenbosch, Stellenbosch, South Africa, 2008.
  11. Digital Earth Sensor. Available online: http://www.sitael-hellas.com/wp-content/uploads/2015/10/Digital-Earth-Sensor.pdf (accessed on 12 September 2022).
  12. Saadat, A. Attitude Determination with Self-Inspection Cameras Repurposed as Earth Horizon Sensors. In Proceedings of the 36th Annual Small Satellite Conference, Utah State University, Logan, UT, USA, 6–11 August 2022.
  13. Kikuya, Y.; Iwasaki, Y.; Yatsu, Y.; Matunaga, S. Attitude Determination Algorithm Using Earth Sensor Images and Image Recognition. Trans. JSASS Aerosp. Tech. Jpn. 2021, 64, 82–90.
  14. Modenini, D.; Locarini, A.; Zannoni, M. Attitude Sensor from Ellipsoid Observations: A Numerical and Experimental Validation. Sensors 2020, 20, 433.
  15. Christian, J.A. A Tutorial on Horizon-Based Optical Navigation and Attitude Determination With Space Imaging Systems. IEEE Access 2021, 9, 19819–19853.
  16. Qiu, R.; Dou, W.; Kan, J.; Yu, K. Optical design of wide-angle lens for LWIR earth sensors. In Proceedings of the Image Sensing Technologies: Materials, Devices, Systems, and Applications IV, Anaheim, CA, USA, 9–13 April 2017.
  17. CubeSense. Available online: https://www.cubespace.co.za/products/gen-1/sensors/cubesense/ (accessed on 12 September 2022).
  18. Barf, J. Development and Implementation of an Image-Processing-Based Horizon Sensor for Sounding Rockets. Master's Thesis, Luleå University of Technology, Luleå, Sweden, 2017.
  19. Braun, B.; Barf, J. Image processing based horizon sensor for estimating the orientation of sounding rockets, launch vehicles and spacecraft. CEAS Space J. 2022.
  20. Manzoni, G.; Brama, Y.L.; Zhang, M. Athenoxat-1, Night Vision Experiments in LEO. In Proceedings of the 30th Annual Small Satellite Conference, Utah State University, Logan, UT, USA, 6–11 August 2016.
  21. Pack, D.W.; Ardila, D.; Herman, E.; Rowen, D.W.; Welle, R.P.; Wiktorowicz, S.J.; Hattersley, B. Two Aerospace Corporation CubeSat Remote Sensing Imagers: CUMULOS and R3. In Proceedings of the 31st Annual Small Satellite Conference, Utah State University, Logan, UT, USA, 7–10 August 2017.
  22. Wang, H.; Wang, Z.; Wang, B.; Jin, Z.; Crassidis, J.L. Infrared Earth sensor with a large field of view for low-Earth-orbiting micro-satellites. Front. Inf. Technol. Electron. Eng. 2021, 22, 262–271.
  23. Deng, S.; Meng, T.; Wang, H.; Du, C.; Jin, Z. Flexible attitude control design and on-orbit performance of the ZDPS-2 satellite. Acta Astronaut. 2017, 130, 147–161.
  24. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using MATLAB; Publishing House of Electronics Industry: Beijing, China, 2020; pp. 511–529.
  25. Christian, J.A. Accurate Planetary Limb Localization for Image-Based Spacecraft Navigation. J. Spacecr. Rocket. 2017, 54, 708–730.
  26. Renshaw, D.T.; Christian, J.A. Subpixel Localization of Isolated Edges and Streaks in Digital Images. J. Imaging 2020, 6, 33.
  27. Niu, S.; Bai, J.; Hou, X.; Yang, G. Design of a panoramic annular lens with a long focal length. Appl. Opt. 2007, 46, 7850–7857.
  28. Hu, J.; Yang, B.; Liu, F.; Zhu, Q.; Li, S. Optimization of short-arc ellipse fitting with prior information for planetary optical navigation. Acta Astronaut. 2021, 184, 119–127.
  29. Christian, J.A.; Robinson, S.B. Noniterative Horizon-Based Optical Navigation by Cholesky Factorization. J. Guid. Control Dyn. 2016, 39, 2757–2765.
  30. Modenini, D. Attitude Determination from Ellipsoid Observations: A Modified Orthogonal Procrustes Problem. J. Guid. Control Dyn. 2018, 41, 2324–2326.
  31. Mortari, D.; De Dilectis, F.; Zanetti, R. Position Estimation Using the Image Derivative. Aerospace 2015, 2, 435–460.
  32. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156.
  33. Gioia, F.; Meoni, G.; Giuffrida, G.; Donati, M.; Fanucci, L. A Robust RANSAC-Based Planet Radius Estimation for Onboard Visual Based Navigation. Sensors 2020, 20, 4041.
  34. Moritz, H. Geodetic reference system. Bull. Geod. 1980, 54, 395–405.
  35. Wang, H.; Bai, S. A versatile method for target area coverage analysis with arbitrary satellite attitude maneuver paths. Acta Astronaut. 2022, 194, 242–254.
  36. Crassidis, J.L.; Cheng, Y. Maximum Likelihood Analysis of the Total Least Squares Problem with Correlated Errors. J. Guid. Control Dyn. 2019, 42, 1204–1217.
  37. Markovsky, I.; Van Huffel, S. Overview of total least-squares methods. Signal Process. 2007, 87, 2283–2302.
  38. Wang, B.; Li, J.; Liu, C. A robust weighted total least squares algorithm and its geodetic applications. Stud. Geophys. Geod. 2016, 60, 177–194.
  39. Jazaeri, S.; Amiri-Simkooei, A.R.; Sharifi, M.A. Iterative algorithm for weighted total least squares adjustment. Surv. Rev. 2013, 46, 19–27.
  40. Kannala, J.; Brandt, S.S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340.
  41. Zhang, G.; Wei, X.; Fan, Q.; Jiang, J. Method and Device for Calibration of Digital Sun Sensor. U.S. Patent 2009/0012731 A1, 8 January 2009.
  42. Phenneger, M.; Singhal, S.; Lee, T.; Stengle, T. Infrared Horizon Sensor Modeling for Attitude Determination and Control: Analysis and Mission Experience; NASA: Washington, DC, USA, 1985.
  43. Gontin, R.; Ward, K. Horizon sensor accuracy improvement using earth horizon profile phenomenology. In Proceedings of the Guidance, Navigation and Control Conference, Monterey, CA, USA, 17–19 August 1987.
Figure 1. Flowchart of the proposed algorithm.
Figure 2. The geometry of the Earth sensor and Earth.
Figure 3. Geometry of the tangent height.
Figure 4. (a) Simulated Earth image without noisy points; (b) real Earth image.
Figure 5. Time consumption vs. the ratio of noisy points.
Figure 6. Success probability vs. the ratio of noisy points.
Figure 7. RMS error of the nadir vector vs. off-nadir angle (0–90°).
Figure 8. RMS error of the nadir vector vs. off-nadir angle (90–120°).
Figure 9. (a) Real Earth image; (b) detected edges; (c) extracted actual horizon.
Figure 10. (a) Real Earth image; (b) detected edges; (c) extracted actual horizon.
Table 1. Specifications of the infrared camera.

    Parameter             Value
    Resolution of CMOS    288 × 384
    Size of one pixel     25 μm × 25 μm
    Focal length          2.3 mm
    FOV                   180° × 360°
    Spectral range        8–14 μm
    Dimension             Φ40 × 30
    Weight                40 g
Table 2. The time for outlier detection.

    Algorithm              Figure 9     Figure 10
    Proposed algorithm     8.89 ms      13.84 ms
    Hough transformation   35.66 ms     36.13 ms
    MLESAC                 27.94 ms     33.36 ms
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
