Abstract
Fighter aircraft recognition is important in military applications for strategic decision making. The complexity lies in correctly identifying an unknown aircraft irrespective of its orientation. The work reported here is a research initiative in this regard. The database used here was obtained using rapid prototyped physical models of four classes of fighter aircraft: P51 Mustang, G1-Fokker, MiG25-F, and Mirage 2000. The image database was divided into a training set and a test set. Two feature sets, Feature Set 1 (FS1) and Feature Set 2 (FS2), were extracted from the images: FS1 consisted of 15 general features and FS2 of 14 invariant moment features. Four multilayered feedforward backpropagation neural networks were designed and trained optimally with the normalized feature sets, and configured to classify the test aircraft images. An overall recognition accuracy of 91% and a response time of 3 s were achieved for the developed automatic fighter aircraft model image recognition system.
1 Introduction
The field of object recognition has been continuously attracting the attention of researchers, as there is still much scope in this field. It plays a pivotal role in machine vision, vehicle identification, automatic intruder detection, and also biomedical applications. The task of automatic fighter aircraft identification, in particular, is important in the defense field for strategic decision making. This research is important because it is not a trivial task to correctly recognize aircraft irrespective of their orientations and class. Aircraft of the same class may resemble one another, which aggravates the recognition problem. In real-time situations, the aircraft may be maneuvering, and visibility is affected by the background and lighting conditions. The aircraft may be only partially visible due to its orientation, restricted camera positions, or terrain masking. These aspects have to be considered in the analysis of fighter aircraft recognition problems. Based on the literature survey, much of the earlier work in this area was carried out for non-practical situations, assumed ideal conditions, did not consider response time, used heterogeneous air vehicles, or used only simulated aircraft models from computer-aided design (CAD). Though visual aircraft recognition systems are in place in combat zones, it is also important to have an automatic aircraft recognition system to augment human decisions or take them independently. This research work is motivated by the need for such applications.
In this paper, the authors report an image recognition system for fighter aircraft models. This work used physical fighter aircraft models prepared from three-dimensional (3-D) CAD files by fused deposition modeling, a rapid prototyping process. An image database was created using these models and an automated experimental facility. This is an improvement over the use of plastic models and mere CAD models by some researchers. The use of automated control for image capture is also an improvement over the manual settings used by some researchers. A specific segmentation algorithm has been developed, which works well for the complex images considered here, in contrast to the assumption of distinct foreground and background and the general segmentation techniques used by some researchers.
2 Related Work
Li et al. [9] proposed a method for solving the problem of automatic aircraft recognition from a sequence of images of an unknown aircraft. Their method used Hu’s moment invariants as the first feature vector and the partial singular values of the outline of the aircraft as the second feature vector. Karacor et al. [7] suggested an aircraft classification method using image processing techniques and a three-layered feedforward artificial neural network. They used four different types of aircraft. Liu [10] proposed an aircraft recognition system with three levels of data fusion and neural network. Aircraft image recognition systems using moment invariants and phase correlation were described by Roopa and Rama Murthy in Refs. [19] and [20], respectively. Molina et al. [15] described an identification system for aircraft in an airport. Video cameras were deployed near the runways for capturing the image of the tail of an aircraft. Then, an optical character recognition (OCR) algorithm was used to recognize the aircraft tail number. The results of OCR were used to identify the aircraft among those in the aircraft database. Rihaczek and Hershkowitz [18] proposed a system for identification of large aircraft by using features related to length and wingspan. Hsieh et al. [6] described a hierarchical classification system to recognize aircraft in satellite images.
Several researchers used invariant moment features in their research. Dudani et al. [2] described an aircraft identification system using invariant moment features. They considered six different aircraft classes and discretized the ranges of azimuth and roll angles at 5° intervals. The training set and the test set consisted of 3000 and 132 images, respectively. They used the Bayes decision rule and a distance-weighted k-nearest-neighbor rule for classification. The response time required for recognition was 30 s. Mercimek et al. [14] used invariant moment features and compared the performance of classifiers for real object recognition. Ramteke [17] used invariant moment features for recognition of handwritten Devanagari vowels.
Several researchers used neural networks in solving classification problems. Saghafi et al. [23] described aircraft type recognition by using an area-based feature extraction method and a multilayer perceptron neural network for classification. They used 3-D computer models of five different aircraft, viz. Bell 206, C-130 Hercules, AH-1 Cobra, Su-25, and Mustang, and for each model, 1080 images were generated for training the neural network. No data regarding the response time were given. Kim et al. [8] used two multilayer feedforward neural networks for aircraft identification and orientation estimation. They used three different aircraft models, viz. DC10, Phantom, and MIG 21. The total training set consisted of 216 images. The test set also consisted of 216 images with ±5° deviations to the nearest training image. The response time was not specified.
Somaie et al. [25] proposed an aircraft identification system using backpropagation neural network. They used six different types of aircraft models. A review of the methods for converting a multiclass problem into several two-class problems and their advantages was presented in Ref. [3].
This work addresses the classification of rapid prototyped fighter aircraft models, with 29 features extracted to improve classification efficiency at a reduced response time.
3 Methods
3.1 Block Diagram
Figure 1 shows the block diagram of the work carried out. Input images of fighter aircraft models were pre-processed to a standard size of 64×64 and converted to grayscale images. The pre-processed images were then segmented; a tutorial on image segmentation is available in Ref. [11]. The segmentation used here involved the following steps (a code sketch follows the list):
Block Diagram of the Work Carried Out.
From the input image, a binary labeled image was obtained using 8-connectivity.
In the labeled image, the object with maximum area was found.
The image region corresponding to this object in the original image was retained. All other pixels were set to zero in the original image.
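A minimal Matlab sketch of these steps is given below. The binarization method (Otsu thresholding via `graythresh`) and the input file name are our assumptions; the paper does not specify them.

```matlab
% Segmentation sketch: keep only the largest 8-connected object.
% Assumptions: Otsu thresholding; 'aircraft.jpg' is a hypothetical file name.
img = imresize(rgb2gray(imread('aircraft.jpg')), [64 64]); % pre-processing
bw  = im2bw(img, graythresh(img));     % binarize the pre-processed image
lbl = bwlabel(bw, 8);                  % label objects using 8-connectivity
stats = regionprops(lbl, 'Area');      % area of each labeled object
[maxArea, k] = max([stats.Area]);      % object with the maximum area
seg = img;                             % start from the original image
seg(lbl ~= k) = 0;                     % set all other pixels to zero
```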
After segmentation, the image size is retained at 64×64. The segmentation process retains the relevant information of the captured image of the physical model, which enhances feature extraction. The segmented image was further processed to extract features. Based on the earlier work of the authors [19], [20], [21], it was inferred that multiple feature sets would increase the recognition accuracy in complex image analysis such as this. Statistical parameters are good indicators of image features. As neural networks have parallel computational capability, the response time can be reduced; hence, neural networks were used for classification. In this work, 29 features were extracted for a given image and grouped into two sets of 15 (Feature Set 1, FS1) and 14 (Feature Set 2, FS2) features. The features were grouped because the values of the invariant moments are small compared to those of the other features; hence, the invariant moments were grouped together as FS2 and the remaining features as FS1. Each feature set was applied to a different neural network.
Using these two groups of features made it possible to distinguish between the image classes and enhanced the classification process.
Four classifiers (neural networks NN1, NN2, NN3, and NN4) were developed to classify the input image into four classes based on the 29 features, which were normalized to lie in the range 0 to 1. Table 1 highlights how the developed neural network classifier differs from those in previous work and indicates the identification efficiency. However, a direct comparison of the performances of the neural networks is not meaningful, as the datasets, the complexity of the images, and the aircraft types differ in each case.
Comparison of Neural Network Classifiers for Aircraft Recognition Systems.
| Title | No. of features | No. of aircraft | Aircraft type | Type of neural network | No. of neural networks | No. of images in the training set | No. of images in the test set | No. of neurons in hidden layer | % Recall |
|---|---|---|---|---|---|---|---|---|---|
| Aircraft visual identification by neural networks [23] | 20 | 5 | Heterogeneous | Multilayer (three-layered) feedforward backpropagation | 3 | Not specified | Not specified | Not specified | 87 |
| Multiclass 3-D aircraft identification and orientation estimation using multilayer feedforward neural network [8] | 144 | 3 | Heterogeneous | Multilayer (three-layered) feedforward backpropagation | 2 | 216 | 216 | 5 | 98 |
| Reported method | 29 | 4 | Fighter | Multilayer (refer to Table 2) feedforward backpropagation | 4 | 384 | 128 | Refer to Table 2 | 91 |
% Recall = 100 × (no. of test images recognized correctly / total no. of test images).
3.2 Experimental Setup
A standard database for the validation of fighter aircraft image recognition algorithms is not available in the open literature. Hence, as a practical approach, a uniform dataset of fighter aircraft images in different orientations was generated using the rapid prototyped models of fighter aircraft and an experimental setup. Each image in the database is a separate file in the standard .jpg format. Partially visible images were also included so that the algorithms are robust to partial visibility.
Rapid prototyped models for fighter aircraft have been developed by Roopa and Rama Murthy [21]. An experimental setup has been constructed to capture the aircraft model images in different orientations [21].
In the experimental setup, microcontrollers, stepper motors, and mechanical arrangements were used to hold the aircraft model and control its orientation. Three stepper motors, each of which can be rotated independently to the desired orientation, simulate the yaw, pitch, and roll movements of the aircraft model. For each stepper motor, two push-button keys were provided for forward and reverse movement. The stepper motors were driven independently by three control circuits, each consisting of an 8-bit AT89C2051 microcontroller, a relay-driver circuit, and four relays connected to the coils of the stepper motor.
A high-resolution camera was used to capture images of aircraft models in different pitch, yaw, and roll angles. The captured images of the prototyped aircraft models formed the database for the work carried out.
The sample database used in this work consisted of 384 images in the training set and 128 images in the test set. Four classes of aircraft, viz. P51 Mustang (P51), G1-Fokker (G1F), MiG25-F (MiG), and Mirage 2000 (Mirage), were used for creating the database; the details can be found in Ref. [21]. A few of the segmented training and test images are shown in Figures 2 and 3, respectively. The file names encode the orientations: for example, G1F_P0_Y45_R0 denotes a G1-Fokker image at pitch 0°, yaw 45°, and roll 0°, and G1F_P45_Y135_R135 denotes a G1-Fokker at pitch 45°, yaw 135°, and roll 135°.
Segmented Training Images.
Segmented Test Images.
3.3 Flowchart for the Software
The flowchart for the developed classifier is shown in Figure 4.
Flowchart for the Software.
3.4 Feature Extraction
FS1 consisted of 15 general image features extracted for each of the images. In Ref. [24], the statistical features extracted were the mean, standard deviation, smoothness, third moment, uniformity, and entropy. In Ref. [1], the features extracted were area, diameter, perimeter, circularity, volume, compactness, bounding box dimension, principal axis lengths, elongation, mean, variance, skewness, kurtosis, and eigenvalues. The authors of Ref. [22] extracted gray-level co-occurrence matrix features to train an artificial neural network. In this research work, however, considering the complex information needed for classification, it was necessary to select as much non-redundant information as possible. Hence, the following features [4], [12] were extracted: energy, entropy, average, variance, standard deviation, Euler number, homogeneity, area, ratio 1 = major axis length/minor axis length, ratio 2 = (perimeter)²/area, solidity, equivalent diameter, eccentricity, kurtosis [16], and skewness [16].
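As an illustration, a hedged Matlab sketch of how such features could be computed from a segmented gray image `seg` (uint8) is given below. The toolbox calls shown are our assumptions; the exact formulas used in the paper may differ.

```matlab
% Sketch of FS1-style features from a segmented gray image seg (uint8).
g   = double(seg(:));
bw  = seg > 0;                                   % foreground mask
rp  = regionprops(bw, 'Area', 'Perimeter', 'MajorAxisLength', ...
        'MinorAxisLength', 'Solidity', 'EquivDiameter', 'Eccentricity');
gp  = graycoprops(graycomatrix(seg), 'Homogeneity');
p   = imhist(seg) / numel(seg);                  % normalized histogram
fs1 = [sum(p.^2), ...                            % energy
       -sum(p(p>0) .* log2(p(p>0))), ...         % entropy
       mean(g), var(g), std(g), ...              % average, variance, std. dev.
       bweuler(bw, 8), gp.Homogeneity, ...       % Euler number, homogeneity
       rp(1).Area, ...                           % area
       rp(1).MajorAxisLength / rp(1).MinorAxisLength, ... % ratio 1
       rp(1).Perimeter^2 / rp(1).Area, ...                % ratio 2
       rp(1).Solidity, rp(1).EquivDiameter, ...  % solidity, equiv. diameter
       rp(1).Eccentricity, ...                   % eccentricity
       kurtosis(g), skewness(g)];                % kurtosis, skewness
```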
FS2 consisted of (i) seven invariant boundary moments and (ii) seven invariant region moments, i.e. invariant moments applied to the boundary as well as to the region of the image. The expressions for the invariant moments can be found in Ref. [4]. The underlying basic equations for calculating these moments are given below.
The 2-D moment of order (p + q) for f(x, y) is given by

$$m_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x^p y^q f(x, y), \tag{1}$$

where f(x, y) is a digital image of size M×N.

The central moment of order (p + q) is defined as

$$\mu_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} (x - \bar{x})^p (y - \bar{y})^q f(x, y), \tag{2}$$

where p = 0, 1, 2, …, q = 0, 1, 2, …, and $\bar{x} = m_{10}/m_{00}$, $\bar{y} = m_{01}/m_{00}$.

The normalized central moments are

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}, \tag{3}$$

where $\gamma = (p+q)/2 + 1$ for $p + q = 2, 3, \ldots$
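A compact Matlab rendering of Eqs. (1)–(3), together with the first two Hu invariants [4] as examples of the FS2 moments, might look as follows; this is a sketch, assuming `f` holds the segmented image as a double matrix.

```matlab
% Eqs. (1)-(3) for a gray image f (double matrix), plus two Hu invariants.
[M, N] = size(f);
[yy, xx] = meshgrid(0:N-1, 0:M-1);                 % pixel coordinate grids
m   = @(p, q) sum(sum(xx.^p .* yy.^q .* f));       % raw moment, Eq. (1)
xb  = m(1,0) / m(0,0);  yb = m(0,1) / m(0,0);      % centroid
mu  = @(p, q) sum(sum((xx-xb).^p .* (yy-yb).^q .* f));  % central, Eq. (2)
eta = @(p, q) mu(p, q) / mu(0,0)^((p+q)/2 + 1);         % normalized, Eq. (3)
phi1 = eta(2,0) + eta(0,2);                        % first Hu invariant
phi2 = (eta(2,0) - eta(0,2))^2 + 4*eta(1,1)^2;     % second Hu invariant
```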
The features were added based on their importance during training of the neural network, judged on a small, randomly picked test dataset. The contribution of the ith feature in a set of n features was found as the difference between the error rate of a classifier based on all n features and that of a classifier based on all but the ith feature (a sketch of this measure follows the figure captions below). The plot of the general features (FS1) for the P51, G1F, MiG, and Mirage aircraft models is shown in Figure 5; similarly, the plot of the invariant moment features (FS2) is shown in Figure 6. From Figures 5 and 6, we can infer that there is no definite demarcation among the classes. This indicates non-linearity in the input dataset, which neural networks can handle.
Plot of FS1 for the P51, G1F, MiG, and Mirage Aircraft Models.
Plot of FS2 for the P51, G1F, MiG and Mirage Aircraft Models.
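A sketch of the leave-one-feature-out contribution measure described above is given below; `trainAndTestErr` is a hypothetical helper that trains a classifier on the selected feature columns and returns its error rate on the probe set.

```matlab
% Leave-one-feature-out contribution of each feature (sketch).
% trainAndTestErr is a hypothetical helper: it trains a classifier on the
% chosen feature columns of X and returns the error rate on the probe set.
n = size(X, 2);                            % X: samples x features
errAll = trainAndTestErr(X, t, 1:n);       % error rate with all n features
contrib = zeros(1, n);
for i = 1:n
    keep = setdiff(1:n, i);                % all features except the ith
    contrib(i) = trainAndTestErr(X, t, keep) - errAll;  % its contribution
end
```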
3.5 Design of Neural Network Classifier
A three-layer network is shown in Figure 7. Here, the superscripts of variables denote the layer number; e.g. W^1 represents the weight matrix of the first layer [5], [13], and W^2 that of the second layer. There are P inputs, S^1 neurons in the first layer, S^2 neurons in the second layer, and so on.
Three-Layer Network.
The design of the neural network architecture for the current work consisted of first obtaining an optimum set of training parameters. The training error of each network for different numbers of hidden layers was found by experimentation and is plotted in Figure 8.
Training Errors for net1, net2, net3, and net4 for Hidden Layers 1 to 6.
The optimum number of hidden layers thus found for each net is given in Table 2. The optimum number of neurons in each hidden layer for all the networks was then found by experimentation and is also shown in Table 2. The transfer function used for the hidden and output layers is "tansig", the hyperbolic tangent sigmoid tansig(n) = 2/(1 + e^(−2n)) − 1, which saturates at −1 and +1. During the transition from −1 to +1, it is most nearly linear in the range −0.6 to 0.6, and the features used for classification also lie in this range, which assists the classification process. During the design of the classification algorithm, it was found that a single four-class classifier was not efficient in classifying all the aircraft classes. Hence, an alternative procedure was developed to convert the four-class classification problem into multiple two-class problems, which gave a promising overall recognition efficiency of 91.4%.
Network Parameters used in the Developed Neural Network Design.
| Sl. no. | Neural network | No. of features | Hidden layers | Total no. of layers (excluding input layer) | No. of neurons in each layer | Network function in all layers | Set target | Bias used | No. of outputs |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Net1 | 15 | 5 | 6 | 9 | Tangent sigmoid | 0.25, 0.5, 0.75, 1 | [0;1;1;1;1;0] | 1 |
| 2 | Net2 | 14 | 4 | 5 | 7 | Tangent sigmoid | 0.25, 0.5, 0.75, 1 | [0;1;1;1;0] | 1 |
| 3 | Net3 | 15 | 4 | 5 | 5 in layers 1 and 2; 4 in layers 3 and 4 | Tangent sigmoid | 0.5, 1 | [0;1;1;1;0] | 1 |
| 4 | Net4 | 14 | 2 | 3 | 3 | Tangent sigmoid | 0.5, 1 | [0;1;0] | 1 |
Following the preceding discussion, net1 (six layers) is represented by the following equation:

$$y_1 = f\!\left(W^6 f\!\left(W^5 f\!\left(W^4 f\!\left(W^3 f\!\left(W^2 f\!\left(W^1 p + b^1\right) + b^2\right) + b^3\right) + b^4\right) + b^5\right) + b^6\right), \tag{4}$$

where f is the tansig transfer function, p the input feature vector, and W^k and b^k the weight matrix and bias vector of layer k. net2 and net3 (five layers) are represented by the following equation:

$$y = f\!\left(W^5 f\!\left(W^4 f\!\left(W^3 f\!\left(W^2 f\!\left(W^1 p + b^1\right) + b^2\right) + b^3\right) + b^4\right) + b^5\right). \tag{5}$$

net4 (three layers) is represented by the following equation:

$$y_4 = f\!\left(W^3 f\!\left(W^2 f\!\left(W^1 p + b^1\right) + b^2\right) + b^3\right). \tag{6}$$
Equations (4) to (6) were modeled in Matlab, and the networks were trained using the Levenberg-Marquardt backpropagation algorithm [26] with the number of epochs set to 1000 and the learning rate to 0.4. The training achieved the set targets, and optimum weights and biases for the designed networks were obtained.
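A hedged Matlab sketch of this setup for net1, using the classic Neural Network Toolbox interface and the layer sizes of Table 2, could look as follows. The variable names `Tr_Input_1` and `Tr_Target_1` follow Figure 9; `test_features` and any options beyond those quoted in the text are our assumptions.

```matlab
% Sketch of net1 (Table 2): 5 hidden layers of 9 neurons + 1 output neuron,
% tansig in every layer, Levenberg-Marquardt training (trainlm).
net1 = newff(minmax(Tr_Input_1), [9 9 9 9 9 1], ...
             {'tansig','tansig','tansig','tansig','tansig','tansig'}, ...
             'trainlm');
net1.trainParam.epochs = 1000;               % epoch limit quoted in the text
net1.trainParam.lr     = 0.4;                % learning rate quoted in the text
net1 = train(net1, Tr_Input_1, Tr_Target_1); % supervised training
y1   = sim(net1, test_features);             % test_features is hypothetical
```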
4 Results and Discussion
The normalized inputs to the neural networks for supervised training and the outputs are shown in Figure 9. The data of P51, G1F, MiG, and Mirage corresponding to FS1 were used as the training input (Tr_Input_1) to neural network 1 (NN1), and the data corresponding to FS2 as the training input (Tr_Input_2) to neural network 2 (NN2). The data of G1F and MiG corresponding to FS1 were used as the training input (Tr_Input_3) to NN3, and those corresponding to FS2 as the training input (Tr_Input_4) to NN4. The set targets for NN1 and NN2 (Tr_Target_1) were 0.25, 0.5, 0.75, and 1 for the P51, G1F, MiG, and Mirage classes, respectively. The set targets for NN3 and NN4 (Tr_Target_2) were 0.5 and 1 for the G1F and MiG classes, respectively. The nets were trained with 384 images, consisting of 96 images in each of the four aircraft categories.
Neural Networks Training Inputs, Outputs, and Targets.
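The supervised target vectors implied by this description can be written down directly; a minimal sketch, assuming the 96 images per class are ordered P51, G1F, MiG, Mirage, is:

```matlab
% Class codes: 0.25 (P51), 0.5 (G1F), 0.75 (MiG), 1 (Mirage), 96 images each.
Tr_Target_1 = [0.25*ones(1,96), 0.50*ones(1,96), ...
               0.75*ones(1,96), 1.00*ones(1,96)];  % for NN1 and NN2
Tr_Target_2 = [0.50*ones(1,96), 1.00*ones(1,96)];  % G1F and MiG, for NN3/NN4
```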
The plots of target vs. training output, and performance plots for NN1, NN2, NN3, and NN4 are shown in Figures 10–17.
Target vs. Output for Training Data Set for net1.
Performance Plot for net1.
Target vs. Output for Training Data Set for net2.
Performance Plot for net2.
Target versus Output for Training Data Set for net3.
Performance Plot for net3.
Target vs. Output for Training Data Set for net4.
Performance Plot for net4.
As shown in Figure 10, net1 has a training error: with five hidden layers, the percentage error in training for G1F was 4.17 and that for MiG was 2.08. The following training images were learned wrongly: G1F_P0_Y315_R45.JPG, G1F_P0_Y315_R90.JPG, G1F_P45_Y225_R225.JPG, G1F_P45_Y90_R180.JPG, MiG_P0_Y45_R225.JPG, and MiG_25_F_P45_Y0_R90.JPG. The error could have been reduced by increasing the number of hidden layers to six, as shown in Figure 8; however, to limit the complexity of the network, the number of hidden layers was kept at five. Figure 12 indicates that there was no error in the training of net2, for which four hidden layers were chosen (Figure 8). The performance plots of Figures 11 and 13 indicate that the designed networks net1 and net2 require a minimum of 400 epochs to reach the desired target. From 400 to 1000 epochs, the training algorithm fine-tunes the weights and biases for proper classification. After 1000 epochs, only a very small incremental change in the mean squared error (mse) was observed; hence, training was carried out only up to 1000 epochs to reach the optimum weights and biases.
Figure 14 shows that there was no training error for net3. Figure 15 shows the performance plot for net3, which indicates that the mse still changes significantly between 950 and 1000 epochs; hence, the network was trained until 1000 epochs to obtain the optimum weights and biases.
Figure 16 shows that there was no error in training net4. net4 reached its optimum weights and biases at the end of the 50th epoch, and the mse remains constant thereafter, as shown in Figure 17.
The efficiency of training for the training data set for net1, net2, net3, and net4 is summarized in Table 3.
Efficiency of Training for Training Data Set.
| Sl. no. | Aircraft category | Net1 | Net2 | Net3 | Net4 |
|---|---|---|---|---|---|
| 1 | P51 | 100.0 | 100 | – | – |
| 2 | G1F | 95.83 | 100 | 100 | 100 |
| 3 | MiG | 97.92 | 100 | 100 | 100 |
| 4 | Mirage | 100 | 100 | – | – |

Entries are the % efficiency of training for each net.
The simulation configuration for testing aircraft images to verify the performance of the trained neural networks is shown in Figure 18; it was arrived at by experiments and analysis. The combined output of the two four-class classifiers NN1 and NN2 responded well for the classes P51 and Mirage, while the pair of two-class classifiers NN3 and NN4 performed well for classifying the G1F and MiG classes. The outputs y1 and y2 of NN1 and NN2 were combined by an OR decision logic to give y12; similarly, the outputs y3 and y4 of NN3 and NN4 were combined by an OR decision logic to give y34 (a sketch of this logic follows the figure caption below).

A known database of 128 test images, 32 in each of the four aircraft categories, was used to test the performance of the classification algorithm, which had been trained on 384 images. The training and test data were two mutually exclusive sets. The accuracy of prediction benefits from the fact that, in each aircraft class, the training images were selected at the same pitch, yaw, and roll orientations; the same holds for the test data. A Matlab-compliant USB camera was used to capture the images. An optimally trained network generalizes well and performs correctly for an unknown test input. The program was written in Matlab using the image processing and neural network toolboxes.

The test results are shown in Table 4, and the classified outputs for each input class are shown in matrix form in Figure 19. The use of different sets of features and the developed neural network classifier gave an accuracy of 91% and a response time of 3 s. This is promising, as all the aircraft under consideration belong to the same category, viz. fighter aircraft, unlike the heterogeneous aircraft used by some researchers, whose inherent dissimilarity in shape may increase the accuracy. Moreover, in real-time fighter aircraft recognition scenarios, a fast response time is as important as the recognition accuracy.
Testing by Simulation for Verification of Performance of Trained Neural Networks.
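A sketch of the assumed decision logic is given below. Thresholding the continuous network outputs into per-class decisions is our assumption (half the gap between adjacent target codes); the paper only states that the outputs are combined by OR.

```matlab
% OR-fusion of per-class decisions from a pair of networks (sketch).
% y1, y2: scalar outputs of NN1 and NN2 for one test image.
targets = [0.25 0.5 0.75 1];     % class codes: P51, G1F, MiG, Mirage
tol = 0.125;                     % assumed half-gap between adjacent codes
c1  = abs(y1 - targets) < tol;   % NN1 decision per class (logical vector)
c2  = abs(y2 - targets) < tol;   % NN2 decision per class
y12 = c1 | c2;                   % combined decision, as in Figure 18
```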
Percentage Recall for Test Class.
| Sl. no. | Neural network | Test class | % Recall for each net | % Recall by combined classifier |
|---|---|---|---|---|
| 1 | NN1 | P51 | 84.375 | 90.625 |
| 2 | NN2 | P51 | 65.625 | |
| 3 | NN3 | G1F | 90.625 | 96.875 |
| 4 | NN4 | G1F | 84.375 | |
| 5 | NN3 | MiG | 81.25 | 90.625 |
| 6 | NN4 | MiG | 75 | |
| 7 | NN1 | Mirage | 68.75 | 87.5 |
| 8 | NN2 | Mirage | 68.75 | |

Average % recall = 91.

% Recall = 100 × (no. of test images recognized correctly / total no. of test images).
Classification Matrix.
A comparison of the work presented in this paper with that of the authors in Refs. [2], [8], [23] is given in Table 5. The score, on a scale of seven, comprises the parameters (i) invariant moments (IM), (ii) FS1, (iii) hardware for database generation, (iv) specific segmentation algorithm, (v) data size, (vi) aircraft type, and (vii) rapid prototyped models. A method scores 1 for each of the parameters (i) to (iv) and (vii) that it uses; use of a very large data size scores 1; and under "aircraft type", a method scores 1 if the aircraft used for analysis belong to the same category. For example, the method of Ref. [2] uses invariant moments and a very large data size, so it scores 1 for each of these two parameters and zero for the other five, giving a total score of 2 on a scale of 7. On a similar note, the proposed method scores 6, its only unfavorable parameter being the smaller data size compared to the other methods.
Comparison of Aircraft Recognition Systems.
| Title | IM | FS1 | Hardware for database generation | Specific segmentation algorithm | Data size | Aircraft type | Rapid prototyped models | Score on a scale of 7 |
|---|---|---|---|---|---|---|---|---|
| Aircraft identification by moment invariants [2] | Used | Not used | Not used | Not used | Very large | Fighter and bomber | Not used | 2 |
| Aircraft visual identification by neural networks [23] | Not used | Used | Not used | Not used | Very large | Heterogeneous | Not used | 2 |
| Multiclass 3-D aircraft identification and orientation estimation using multilayer feedforward neural network [8] | Not used | Used | Not used | Not used | Very large | Heterogeneous | Not used | 2 |
| Current method | Used | Used | Used | Used | Large | Fighter | Used | 6 |

IM, invariant moment; FS1, Feature Set 1.
5 Conclusion
This paper described feature extraction from images of fighter aircraft models, the training of a neural network classifier using these features, and the classification of test aircraft images using the trained neural networks. Statistical descriptors were extracted from the images. Four classes of aircraft were used for analysis, viz. P51, G1F, MiG, and Mirage, and four neural networks were used for classification. The trained networks were used to classify 128 known test images. The overall recognition accuracy was 91% and the response time was 3 s.
In the method described in this work, the authors used fighter aircraft that have many mutual similarities, whereas other researchers used heterogeneous aircraft such as fighters, bombers, helicopters, and military transport aircraft. When the aircraft are heterogeneous, the recognition problem is simpler, and a method tuned to it may not give satisfactory results for similar-looking aircraft. The authors also used both invariant moments and FS1, while other authors used only one of them, and constructed dedicated hardware for database generation.
About the authors
K. Roopa received her master's degree in electronics from the Visvesvaraya Technological University, Belgaum, Karnataka, India in 2007. She is pursuing a PhD at Jawaharlal Nehru Technological University, Hyderabad, India. She is also working as an associate professor at Sir M. Visvesvaraya Institute of Technology, Bangalore, Karnataka, India. She is a life member of the Indian Society for Technical Education and a senior member of the Institute of Electrical and Electronics Engineers. Her main research interest is in the field of image processing.
T. V. Rama Murthy joined the National Aerospace Laboratories in 1973 and was actively involved with R&D activities in the area of aerospace electronics and systems. He obtained his bachelor’s degree in electrical technology with distinction from IISc, MS and PhD degrees from IIT, Madras. Since 2000, he has been in academics, teaching graduate students and guiding research scholars. He was a principal investigator of many sponsored projects and has undertaken sponsored faculty development programs. He has contributed to Visvesvaraya Technological University as a member of academic bodies. He has published 40 papers in refereed national and international journals and conferences.
P. Cyril Prasanna Raj is currently working as dean (R&D) at MS Engineering College, Bangalore. Prior to this, he was at MS Ramaiah School of Advanced Studies, Bangalore, as HOD-EEE Department. He has more than 17 years of experience in teaching and research. His areas of interest include VLSI signal processing, reconfigurable hardware, and nanoelectronics.
Acknowledgments
The authors thank the principals and managements of Sir M. Visvesvaraya Institute of Technology and MS Engineering College; Mathworks Inc. for online resources and supporting documents for the development of the software model; and Dr. P. Chandrasekhar Reddy, Professor Coordination, Jawaharlal Nehru Technological University, for the encouragement.
Bibliography
[1] S. Akram, M. Y. Javed, U. Qamar, A. Khanum and A. Hassan, Artificial neural network based classification of lungs nodule using hybrid features from computerized tomographic images, Appl. Math. Inform. Sci. 9 (2015), 183–195. doi:10.12785/amis/090124.
[2] S. A. Dudani, K. J. Breeding and R. B. McGhee, Aircraft identification by moment invariants, IEEE Trans. Comput. C-26 (1977), 39–46. doi:10.1109/TC.1977.5009272.
[3] N. García-Pedrajas and A. d. H. García, Output coding methods: review and experimental comparison, available at: http://www.intechopen.com/books/pattern_recognition_techniques_technology_and_applications/output_coding_methods__review_and_experimental_comparison, accessed July 2015.
[4] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Pearson Education, 2009.
[5] M. T. Hagan, H. B. Demuth, M. H. Beale and O. De Jesús, Neural Network Design, 2nd ed., 2014.
[6] J. W. Hsieh, J. M. Chen, C. H. Chuang and K. C. Fan, Aircraft type recognition in satellite images, IEE Proc. Vision Image Signal Process. 152 (2005), 307–315. doi:10.1049/ip-vis:20049020.
[7] A. G. Karacor, E. Torun and R. Abay, Aircraft classification using image processing techniques and artificial neural networks, Int. J. Patt. Recogn. Artif. Intell. 25 (2011), 1321–1335. doi:10.1142/S0218001411009044.
[8] D. Y. Kim, S. I. Chien and H. Son, Multiclass 3-D aircraft identification and orientation estimation using multilayer feedforward neural network, IEEE Int. Joint Conf. Neural Netw. 1 (1991), 758–764.
[9] X. d. Li, J. d. Pan and J. Dezert, Automatic aircraft recognition using DSmT and HMM, in: The 17th International Conference on Information Fusion, pp. 1–8, 2014.
[10] B. Liu, Three dimensional aircraft recognition based on neural network and the D-S evidence theory, in: 2011 International Conference on Electrical and Control Engineering (ICECE), pp. 3752–3756, 2011.
[11] MathWorks, Image segmentation tutorial, available at: http://www.mathworks.com/matlabcentral/fileexchange/25157-image-segmentation-tutorial, accessed July 2015.
[12] Matlab documentation, MathWorks, R2009a.
[13] K. Mehrotra, C. K. Mohan and S. Ranka, Elements of Artificial Neural Networks, Penram Intl. Publishing, India, 1997.
[14] M. Mercimek, K. Gulez and T. V. Mumcu, Real object recognition using moment invariants, Sadhana 30 (2005), 765–775. doi:10.1007/BF02716709.
[15] J. M. Molina, J. García, A. Berlanga, J. Besada and J. Portillo, Automatic video system for aircraft identification, ISIF (2002), 1387–1394. doi:10.1109/ICIF.2002.1020975.
[16] NIST/SEMATECH, Measures of skewness and kurtosis, in: NIST/SEMATECH e-Handbook of Statistical Methods, available at: http://itl.nist.gov/div898/handbook/eda/section3/eda35b.htm, accessed July 2015.
[17] R. J. Ramteke, Invariant moments based feature extraction for handwritten Devanagari vowels recognition, Int. J. Comput. Appl. 1 (2010), 1–5. doi:10.5120/392-585.
[18] A. W. Rihaczek and S. J. Hershkowitz, Identification of large aircraft, IEEE Trans. Aerosp. Electron. Syst. 37 (2001), 706–710. doi:10.1109/7.937482.
[19] K. Roopa and T. V. Rama Murthy, Aircraft image recognition system using phase correlation method, J. Intell. Syst. 22 (2013), 283–297. doi:10.1515/jisys-2013-0035.
[20] K. Roopa and T. V. Rama Murthy, Aircraft recognition system using image analysis, LNEE 248 (2014), 195–204. doi:10.1007/978-81-322-1157-0_21.
[21] K. Roopa and T. V. Rama Murthy, Experimental set up for generating database for recognition of aircraft images, in: IEEE International Advance Computing Conference, pp. 131–135, June 12–13, 2015. doi:10.1109/IADCC.2015.7154685.
[22] S. Saini and R. Vijay, Performance analysis of artificial neural network based breast cancer detection system, Int. J. Soft Comput. Eng. (IJSCE) 4 (2014), 70–72.
[23] F. Saghafi, S. M. Khansari Zadeh and V. E. Bakhsh, Aircraft visual identification by neural networks, JAST 5 (2008), 123–128.
[24] H. S. Sheshadri and A. Kandaswamy, Breast tissue classification using statistical feature extraction of mammograms, Med. Imaging Inform. Sci. 23 (2006), 105–107.
[25] A. A. Somaie, A. Badr and T. Salah, Aircraft image recognition using back propagation, in: Proceedings of the CIE International Conference on Radar, pp. 498–501, 2001.
[26] H. Yu and B. M. Wilamowski, Levenberg-Marquardt training, available at: http://www.eng.auburn.edu/~wilambm/pap/2011/K10149_C012.pdf, accessed July 2015.
©2018 Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.