
CN103824093A - SAR (Synthetic Aperture Radar) image target characteristic extraction and identification method based on KFDA (Kernel Fisher Discriminant Analysis) and SVM (Support Vector Machine) - Google Patents


Info

Publication number
CN103824093A
CN103824093A (application CN201410103639.2A)
Authority
CN
China
Prior art keywords
target sample
kfda
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410103639.2A
Other languages
Chinese (zh)
Other versions
CN103824093B (en)
Inventor
高飞 (Gao Fei)
梅净缘 (Mei Jingyuan)
孙进平 (Sun Jinping)
王俊 (Wang Jun)
吕文超 (Lü Wenchao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201410103639.2A priority Critical patent/CN103824093B/en
Publication of CN103824093A publication Critical patent/CN103824093A/en
Application granted granted Critical
Publication of CN103824093B publication Critical patent/CN103824093B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an SAR (Synthetic Aperture Radar) image target feature extraction and identification method based on KFDA (Kernel Fisher Discriminant Analysis) and an SVM (Support Vector Machine). The method comprises the following steps: performing amplitude data normalization on training target samples of known classes and test target samples of an unknown class; performing feature extraction on the normalized known-class training target samples and unknown-class test target samples using the KFDA criterion; training an SVM classifier with the known-class training target sample features extracted under the KFDA criterion to generate an optimal classification surface; and identifying the features of the unknown-class test target samples extracted under the KFDA criterion through the optimal classification surface. The method lowers the requirements on preprocessing, overcomes the target-aspect sensitivity of SAR images, compresses the dimensionality of the sample features, and achieves a high target recognition rate. The method generalizes well.

Description

SAR image target feature extraction and recognition method based on KFDA and SVM
Technical Field
The invention belongs to the field of SAR image processing and pattern recognition, and relates to an SAR image target feature extraction and recognition method based on KFDA (Kernel Fisher Discriminant Analysis) and SVM (Support Vector Machine).
Background
Synthetic Aperture Radar (SAR) is an active microwave sensor that can perform day-and-night, all-weather reconnaissance of a target or area of interest, and has the ability to acquire multi-view, multi-depression-angle data and to penetrate ground cover. Radar target recognition extracts target features from the radar echoes of the target and its environment, on the basis of radar detection and localization of the target, so as to judge the attribute, category, or model of the target. As SAR imaging technology matures, target recognition based on SAR images becomes increasingly important.
In SAR image-based target recognition, the two most important steps are feature extraction and recognition. Owing to the special imaging mechanism of SAR, an SAR image does not depict the complete overall shape of a target the way an ordinary optical image does; instead it shows a sparse distribution of scattering centers and is sensitive to the imaging aspect. Effective extraction of target features is therefore essential. Once the target features of the SAR image are obtained, the next main task is to identify the unknown target.
Among methods for extracting SAR image target features, Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) are most commonly used. PCA has the disadvantage that it cannot extract the nonlinear features present in the image; KPCA has the disadvantages that its extracted features lack good class discrimination capability and that the feature dimensionality is high. Among SAR image target recognition methods, the maximum-correlation classifier and the nearest-neighbor classifier are most commonly used. The maximum-correlation classifier suffers from high algorithmic complexity when the sample dimensionality is high, and the classification surface selected by the nearest-neighbor classifier is not globally optimal.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: extract target features using the KFDA criterion and realize target recognition through an SVM classifier. By combining the KFDA criterion with the SVM classifier, the method completes SAR image target feature extraction and identification well, lowers the requirements on preprocessing, overcomes the aspect sensitivity of SAR images, compresses the dimensionality of the sample features, obtains a high target recognition rate, and generalizes well.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a method for extracting and identifying SAR image target features based on KFDA and SVM comprises the following steps:
step (1) carrying out amplitude data normalization processing on training target samples of known types and test target samples of unknown types;
step (2) performing feature extraction on the normalized known-class training target samples and unknown-class test target sample data using the KFDA criterion;
step (3) training an SVM classifier with the known-class training target sample features extracted under the KFDA criterion to generate an optimal classification surface;
step (4) identifying the features of the unknown-class test target samples extracted under the KFDA criterion through the optimal classification surface.
Further, the process of performing amplitude data normalization processing on the training target samples of the known type and the test target samples of the unknown type in the step (1) specifically includes:
the normalized formula is:
$$x_{\text{Normalized}} = \frac{x}{\|x\|_2}$$

where x is the vector representation of any known-class training target sample or unknown-class test target sample (i.e., the image matrix arranged column-by-column into a vector), and x_Normalized is the corresponding vector after amplitude-data normalization.
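The normalization step can be sketched in Python (a language chosen here purely for illustration; the helper name and the toy data are not from the patent):

```python
import numpy as np

def normalize_sample(image):
    # Stack the image matrix into a vector column-by-column (Fortran order),
    # then scale it to unit L2 norm, as in the formula above.
    x = np.asarray(image, dtype=float).flatten(order="F")
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x

chip = np.array([[3.0, 0.0],
                 [4.0, 0.0]])              # toy 2x2 "amplitude image"
x_normalized = normalize_sample(chip)       # unit vector [0.6, 0.8, 0, 0]
print(np.linalg.norm(x_normalized))         # 1.0
```

Column-major stacking matches the "arranged in columns" convention of the text; row-major stacking would work equally well as long as it is applied consistently to all samples.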
Further, the process of performing feature extraction on the normalized known-class training target samples and unknown-class test target sample data using the KFDA criterion in step (2) specifically comprises: first solve the intra-class divergence matrix $K_w$ and the inter-class divergence matrix $K_b$, then solve the eigenvectors of $K_w^{-1}K_b$, and finally solve the features of the known-class training target samples and the unknown-class test target samples under the KFDA criterion. The intra-class divergence matrix $K_w$ is:

$$K_w = \frac{1}{N}\sum_{i=1}^{c} K_i \left(I - \mathbf{1}_{N_i}\right) K_i^T$$

where N is the number of known-class training target samples, c is the number of known classes, $K_i = \left(k_1(x_p, x_j^i)\right)$ is an $N \times N_i$ matrix, $x_p$ (p = 1, 2, …, N) is the normalized data of the p-th known-class training target sample, $x_j^i$ is the normalized data of the j-th training target sample in class i, $N_i$ is the number of training target samples in class i, $k_1(\cdot,\cdot)$ denotes a kernel function, I is the $N_i \times N_i$ identity matrix, and $\mathbf{1}_{N_i}$ is the $N_i \times N_i$ square matrix whose every element is $1/N_i$. If $K_w$ is a singular matrix, let $K_w \approx K_w + \kappa I$ to resolve the singularity, where I is the identity matrix of the same order as $K_w$ and κ is a small positive perturbation constant, generally taken as $\kappa \le 10^{-2}$.

The inter-class divergence matrix $K_b$ is:

$$K_b = \frac{1}{N}\sum_{i=1}^{c} N_i \left(G_i - G_0\right)\left(G_i - G_0\right)^T$$

where

$$G_i = \left(\frac{1}{N_i}\sum_{j=1}^{N_i} k_1(x_1, x_j^i),\ \frac{1}{N_i}\sum_{j=1}^{N_i} k_1(x_2, x_j^i),\ \cdots,\ \frac{1}{N_i}\sum_{j=1}^{N_i} k_1(x_N, x_j^i)\right)^T,$$

$x_j^i$ (j = 1, 2, …, $N_i$) is the normalized data of the j-th training target sample in class i, and

$$G_0 = \left(\frac{1}{N}\sum_{j=1}^{N} k_1(x_1, x_j),\ \frac{1}{N}\sum_{j=1}^{N} k_1(x_2, x_j),\ \cdots,\ \frac{1}{N}\sum_{j=1}^{N} k_1(x_N, x_j)\right)^T.$$

Then solve for the eigenvectors α corresponding to the non-zero eigenvalues of $K_w^{-1}K_b$, i.e.:

$$\lambda\alpha = K_w^{-1} K_b \alpha$$

where λ denotes an eigenvalue. For any normalized known-class training target sample or unknown-class test target sample, the feature finally extracted under the KFDA criterion is a (c−1)-dimensional vector, which can be expressed as $z = [z_1, z_2, \ldots, z_{c-1}]^T$, with each element given by:

$$z_t = \sum_{j=1}^{N} \alpha_j^t\, k_1(x_j, x)$$

where t = 1, 2, …, c−1 and $\alpha_j^t$ denotes the j-th element of the eigenvector corresponding to the t-th non-zero eigenvalue of $K_w^{-1}K_b$.
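The feature-extraction step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a Gaussian kernel is assumed for $k_1$ (the patent leaves the kernel choice open), and all function names and defaults are mine:

```python
import numpy as np

def gaussian_kernel(a, b, gamma=0.5):
    # Assumed form of k1; any positive-definite kernel could be substituted.
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def kfda_train(X, labels, kappa=1e-2, gamma=0.5):
    """Build K_w and K_b, regularize K_w, solve lambda*alpha = K_w^{-1} K_b alpha,
    and keep the c-1 leading eigenvectors (the columns of A)."""
    N = len(X)
    classes = sorted(set(labels))
    c = len(classes)
    K = np.array([[gaussian_kernel(xp, xq, gamma) for xq in X] for xp in X])
    G0 = K.mean(axis=1)                     # global kernel mean, length N
    Kw = np.zeros((N, N))
    Kb = np.zeros((N, N))
    for cls in classes:
        idx = [j for j, y in enumerate(labels) if y == cls]
        Ni = len(idx)
        Ki = K[:, idx]                      # N x Ni block of kernel values
        ones = np.full((Ni, Ni), 1.0 / Ni)  # the matrix 1_{Ni}
        Kw += Ki @ (np.eye(Ni) - ones) @ Ki.T / N
        Gi = Ki.mean(axis=1)
        Kb += Ni * np.outer(Gi - G0, Gi - G0) / N
    Kw += kappa * np.eye(N)                 # lift a (near-)singular K_w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Kw, Kb))
    order = np.argsort(-eigvals.real)[:c - 1]
    return eigvecs[:, order].real           # columns alpha^1 .. alpha^{c-1}

def kfda_features(x, X_train, A, gamma=0.5):
    # z_t = sum_j alpha_j^t k1(x_j, x): project one sample onto the KFDA axes.
    kx = np.array([gaussian_kernel(xj, x, gamma) for xj in X_train])
    return A.T @ kx
```

For c classes the projection yields a (c−1)-dimensional feature, which is what compresses the sample dimensionality relative to KPCA.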
Further, the process of training the SVM classifier with the known-class training target sample features extracted under the KFDA criterion in step (3) to generate the optimal classification surface specifically comprises:
maximizing the functional by the Lagrange multiplier method:

$$Q(a) = \sum_{i=1}^{n} a_i - \frac{1}{2}\sum_{i,j=1}^{n} a_i a_j y_i y_j k_2(z_i, z_j)$$

$$\text{s.t.}\quad \sum_{i=1}^{n} y_i a_i = 0,\qquad a_i \ge 0,\ i = 1, \ldots, n$$

where $y_i \in \{+1, -1\}$ corresponds to the two different known classes of training target samples; $z_i$ is the feature of the i-th known-class training target sample extracted under the KFDA criterion; $k_2(\cdot,\cdot)$ denotes a kernel function, completely independent of $k_1(\cdot,\cdot)$ above; and $a_i$ is the Lagrange multiplier to be solved corresponding to the i-th known-class training target sample.
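The dual problem above can be sketched with a generic constrained optimizer. This is a didactic sketch, not the patent's solver: SciPy's SLSQP method and the soft-margin cap C are assumptions added to keep a toy problem well behaved (production SVMs use specialized solvers such as SMO):

```python
import numpy as np
from scipy.optimize import minimize

def train_svm_dual(Z, y, k2, C=10.0):
    """Maximize Q(a) subject to sum_i y_i a_i = 0 and 0 <= a_i <= C.
    Minimizing -Q(a) turns it into a standard constrained minimization."""
    n = len(Z)
    y = np.asarray(y, dtype=float)
    G = np.array([[y[i] * y[j] * k2(Z[i], Z[j]) for j in range(n)]
                  for i in range(n)])
    neg_Q = lambda a: 0.5 * a @ G @ a - a.sum()
    res = minimize(neg_Q, np.full(n, 1.0 / n),
                   bounds=[(0.0, C)] * n,
                   constraints={"type": "eq", "fun": lambda a: a @ y})
    return res.x
```

On a separable toy set such as 1-D features −2, −1, +1, +2 with labels −1, −1, +1, +1 and a linear kernel, the optimizer concentrates the multipliers on the margin samples at ±1, as the theory predicts.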
Further, the process of identifying, through the optimal classification surface, the features of the unknown-class test target samples extracted under the KFDA criterion in step (4) specifically comprises:
the function of the optimal classification surface is expressed as:

$$f(x) = \operatorname{sgn}\left\{\sum_{i=1}^{n} a_i y_i k_2(z_i, z) + b^*\right\}$$

where $a_i$ is the solved Lagrange multiplier corresponding to the i-th known-class training target sample; $y_i \in \{+1, -1\}$ corresponds to the two different known classes of training target samples; $k_2(\cdot,\cdot)$ denotes a kernel function; $z_i$ is the feature of the i-th known-class training target sample extracted under the KFDA criterion; z is the feature of the unknown-class test target sample extracted under the KFDA criterion; and $b^*$ is the classification threshold determined in training. $f(x) \in \{+1, -1\}$, which determines the class of the current unknown-class test target sample.
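Evaluating the optimal classification surface is then a single kernel sum. A minimal sketch, with hand-set multipliers and a linear $k_2$ purely for illustration (these values are not from the patent):

```python
import numpy as np

def classify(z, Z_train, y, a, b_star, k2):
    # f(z) = sgn{ sum_i a_i y_i k2(z_i, z) + b* }
    s = sum(a[i] * y[i] * k2(Z_train[i], z)
            for i in range(len(Z_train))) + b_star
    return 1 if s >= 0 else -1

# Toy hand-set solution: support vectors at z = -1 and z = +1, b* = 0.
Z_train = [[-1.0], [1.0]]
y = [-1, 1]
a = [0.5, 0.5]
k2 = lambda u, v: float(np.dot(u, v))
print(classify([3.0], Z_train, y, a, 0.0, k2))   # 1
print(classify([-0.2], Z_train, y, a, 0.0, k2))  # -1
```

For multi-class recognition, as in the three-tank example later in the document, such binary decisions are typically combined one-against-one or one-against-rest; the patent does not specify which scheme it uses.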
The principle of the invention is as follows: the kernel Fisher discriminant criterion is applied to feature extraction, i.e., the nonlinear features of the SAR image are extracted in a high-dimensional space by the kernel method, yielding better class discrimination capability; the SVM seeks an optimal hyperplane that satisfies the classification requirement and is well suited to small-sample, nonlinear problems; combining the advantages of the two therefore accomplishes SAR image target feature extraction and identification well.
Compared with the prior art, the invention has the advantages that:
1. For the target feature extraction part, compared with the PCA criterion, the method can obtain the nonlinear features in the image;
2. For the target feature extraction part, compared with the KPCA criterion, the method obtains a lower feature dimensionality and better robustness;
3. For the target identification part, compared with the maximum-correlation classifier, the method avoids the dimensionality problem: its algorithmic complexity is independent of the sample dimensionality;
4. For the target identification part, compared with the nearest-neighbor classifier, the method obtains the globally optimal solution;
5. By combining the KFDA criterion with the SVM classifier, the method completes SAR image target feature extraction and recognition well, lowers the requirements on preprocessing, overcomes the aspect sensitivity of SAR images, compresses the dimensionality of the sample features, obtains a high target recognition rate, and generalizes well.
Drawings
FIG. 1 is a flow chart of target feature extraction and identification in accordance with the present invention;
FIG. 2 is a process of target feature extraction and identification for an example.
Wherein:
(a) two-dimensional feature distribution map of the three classes of tanks a, b, and c at a 17° pitch angle, used as known-class training target samples;
(b) two-dimensional feature distribution map for identifying the class-c tank at a 15° pitch angle;
(c) two-dimensional feature distribution map for identifying the degraded target c# of the class-c tank (i.e., a target of the same model but a different configuration) at a 15° pitch angle.
Detailed Description
The invention is described in detail below with reference to the figures and the detailed description.
As shown in fig. 1, the method for extracting and identifying the target features of the SAR image based on KFDA and SVM of the present invention comprises the following steps:
step (1), carrying out amplitude data normalization processing on training target samples of known types and test target samples of unknown types, wherein the normalization formula is as follows:
$$x_{\text{Normalized}} = \frac{x}{\|x\|_2}$$

where x is the vector representation of any known-class training target sample or unknown-class test target sample (i.e., the image matrix arranged column-by-column into a vector), and x_Normalized is the corresponding vector after amplitude-data normalization.
Step (2), performing feature extraction on the normalized known-class training target samples and unknown-class test target sample data x_Normalized using the KFDA criterion: first calculate the intra-class divergence matrix $K_w$ and the inter-class divergence matrix $K_b$, then solve the eigenvectors of $K_w^{-1}K_b$, and finally solve the features of the known-class training target samples and the unknown-class test target samples under the KFDA criterion. The intra-class divergence matrix $K_w$ is:

$$K_w = \frac{1}{N}\sum_{i=1}^{c} K_i \left(I - \mathbf{1}_{N_i}\right) K_i^T$$

where N is the number of known-class training target samples, c is the number of known classes, $K_i = \left(k_1(x_p, x_j^i)\right)$ is an $N \times N_i$ matrix, $x_p$ (p = 1, 2, …, N) is the normalized data of the p-th known-class training target sample, $x_j^i$ is the normalized data of the j-th training target sample in class i, $N_i$ is the number of training target samples in class i, $k_1(\cdot,\cdot)$ denotes a kernel function, I is the $N_i \times N_i$ identity matrix, and $\mathbf{1}_{N_i}$ is the $N_i \times N_i$ square matrix whose every element is $1/N_i$. If $K_w$ is a singular matrix, let $K_w \approx K_w + \kappa I$ to resolve the singularity, where I is the identity matrix of the same order as $K_w$ and κ is a small positive perturbation constant, generally taken as $\kappa \le 10^{-2}$.

The inter-class divergence matrix $K_b$ is:

$$K_b = \frac{1}{N}\sum_{i=1}^{c} N_i \left(G_i - G_0\right)\left(G_i - G_0\right)^T$$

where

$$G_i = \left(\frac{1}{N_i}\sum_{j=1}^{N_i} k_1(x_1, x_j^i),\ \frac{1}{N_i}\sum_{j=1}^{N_i} k_1(x_2, x_j^i),\ \cdots,\ \frac{1}{N_i}\sum_{j=1}^{N_i} k_1(x_N, x_j^i)\right)^T,$$

$x_j^i$ (j = 1, 2, …, $N_i$) is the normalized data of the j-th training target sample in class i, and

$$G_0 = \left(\frac{1}{N}\sum_{j=1}^{N} k_1(x_1, x_j),\ \frac{1}{N}\sum_{j=1}^{N} k_1(x_2, x_j),\ \cdots,\ \frac{1}{N}\sum_{j=1}^{N} k_1(x_N, x_j)\right)^T.$$

Then solve for the eigenvectors α corresponding to the non-zero eigenvalues of $K_w^{-1}K_b$, i.e.:

$$\lambda\alpha = K_w^{-1} K_b \alpha$$

where λ denotes an eigenvalue. For any normalized known-class training target sample or unknown-class test target sample, the feature finally extracted under the KFDA criterion is a (c−1)-dimensional vector, which can be expressed as $z = [z_1, z_2, \ldots, z_{c-1}]^T$, with each element given by:

$$z_t = \sum_{j=1}^{N} \alpha_j^t\, k_1(x_j, x)$$

where t = 1, 2, …, c−1 and $\alpha_j^t$ denotes the j-th element of the eigenvector corresponding to the t-th non-zero eigenvalue of $K_w^{-1}K_b$.
Step (3), training the SVM classifier with the known-class training target sample features extracted under the KFDA criterion to generate the optimal classification surface, i.e., maximizing the functional by the Lagrange multiplier method:

$$Q(a) = \sum_{i=1}^{n} a_i - \frac{1}{2}\sum_{i,j=1}^{n} a_i a_j y_i y_j k_2(z_i, z_j)$$

$$\text{s.t.}\quad \sum_{i=1}^{n} y_i a_i = 0,\qquad a_i \ge 0,\ i = 1, \ldots, n$$

where $y_i \in \{+1, -1\}$ corresponds to the two different known classes of training target samples; $z_i$ is the feature of the i-th known-class training target sample extracted under the KFDA criterion; $k_2(\cdot,\cdot)$ denotes a kernel function, completely independent of $k_1(\cdot,\cdot)$ above; and $a_i$ is the Lagrange multiplier to be solved corresponding to the i-th known-class training target sample.
Step (4), identifying the features of the unknown-class test target samples extracted under the KFDA criterion through the optimal classification surface, whose function is expressed as:

$$f(x) = \operatorname{sgn}\left\{\sum_{i=1}^{n} a_i y_i k_2(z_i, z) + b^*\right\}$$

where $a_i$ is the solved Lagrange multiplier corresponding to the i-th known-class training target sample; $y_i \in \{+1, -1\}$ corresponds to the two different known classes of training target samples; $k_2(\cdot,\cdot)$ denotes a kernel function; $z_i$ is the feature of the i-th known-class training target sample extracted under the KFDA criterion; z is the feature of the unknown-class test target sample extracted under the KFDA criterion; and $b^*$ is the classification threshold determined in training. $f(x) \in \{+1, -1\}$, which determines the class of the current unknown-class test target sample.
In fig. 2, 201 is the two-dimensional feature distribution map of the three classes of tanks a, b, and c at a 17° pitch angle, used as known-class training target samples; 202 is the two-dimensional feature distribution map for identifying the class-c tank at a 15° pitch angle; and 203 is the two-dimensional feature distribution map for identifying the degraded target c# of the class-c tank (i.e., a target of the same model but a different configuration) at a 15° pitch angle. It can be seen that in 202, being a target of the same configuration, the two-dimensional features of the unknown-class test samples (class-c tank) cluster well around the two-dimensional features of the known-class training samples (class-c tank); in 203, being a deformed target, the two-dimensional features of the unknown-class test samples (class-c# tank) likewise cluster well around the two-dimensional features of the known-class training samples (class-c tank). This demonstrates that the method has excellent target identification capability and realizes SAR image target feature extraction and identification.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.
Although the preferred embodiments of the present invention and the accompanying drawings have been disclosed for illustrative purposes, those skilled in the art will appreciate that: various substitutions, changes and modifications are possible without departing from the spirit and scope of the present invention and the appended claims. Therefore, the technical solution protected by the present invention should not be limited to the disclosure of the best embodiment and the accompanying drawings.

Claims (5)

1. A SAR image target feature extraction and recognition method based on KFDA (Kernel Fisher Discriminant Analysis) and SVM (Support Vector Machine), characterized by comprising the following steps:
step (1): performing amplitude data normalization processing on training target samples of known classes and test target samples of unknown classes;
step (2): performing feature extraction on the normalized known-class training target sample data and unknown-class test target sample data by using the KFDA criterion;
step (3): training an SVM classifier with the known-class training target sample features extracted by the KFDA criterion to generate an optimal classification surface;
step (4): recognizing, through the optimal classification surface, the features of the unknown-class test target samples extracted by the KFDA criterion.
2. The SAR image target feature extraction and recognition method based on KFDA and SVM according to claim 1, characterized in that: the amplitude data normalization processing on the known-class training target samples and the unknown-class test target samples in step (1) is specifically:
the normalization formula is:
$$x_{\text{Normalized}} = \frac{x}{\| x \|_2}$$
where x is the vector representation of any known-class training target sample or unknown-class test target sample (i.e., the image matrix arranged column-wise into vector form), and x_Normalized is the vector representation of the corresponding known-class training target sample or unknown-class test target sample after amplitude data normalization.
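A minimal sketch of this normalization step, assuming NumPy (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def normalize_amplitude(image):
    """Arrange the image matrix column-wise into a vector and divide by
    its L2 norm: x_Normalized = x / ||x||_2."""
    x = np.asarray(image, dtype=float).flatten(order="F")  # column-wise stacking
    return x / np.linalg.norm(x, 2)

sample = np.array([[3.0, 0.0],
                   [4.0, 0.0]])       # toy 2x2 amplitude image
x_n = normalize_amplitude(sample)     # [0.6, 0.8, 0.0, 0.0], unit L2 norm
```

After this step every sample vector has unit L2 norm, which removes overall amplitude differences between samples.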
3. The SAR image target feature extraction and recognition method based on KFDA and SVM according to claim 1, characterized in that: the feature extraction performed in step (2) on the normalized known-class training target sample data and unknown-class test target sample data using the KFDA criterion is specifically: first solving the within-class scatter matrix K_w and the between-class scatter matrix K_b, then solving K_w^{-1} K_b, and finally solving the features of the known-class training target samples and the unknown-class test target samples under the KFDA criterion; wherein the within-class scatter matrix K_w is:
$$K_w = \frac{1}{N} \sum_{i=1}^{c} K_i \big( I - \mathbf{1}_{N_i} \big) K_i^T$$
where N is the total number of known-class training target samples and c is the number of known classes; K_i = (k_1(x_p, x_j^i)) is an N×N_i matrix, in which x_p (p = 1, 2, …, N) is the normalized data of the p-th known-class training target sample and x_j^i (j = 1, 2, …, N_i) is the normalized data of the j-th training target sample in class i; N_i is the number of training target samples of known class i; k_1(·,·) denotes a kernel function; I is the N_i×N_i identity matrix; and 1_{N_i} is the N_i×N_i square matrix whose elements are all 1/N_i. If K_w is a singular matrix, let K_w ≈ K_w + κI to resolve the singularity of K_w, where I is the identity matrix of the same order as K_w and κ is a small perturbation constant greater than zero; generally κ ≤ 10^{-2} is desirable.
The between-class scatter matrix K_b is:
$$K_b = \frac{1}{N} \sum_{i=1}^{c} N_i \,(G_i - G_0)(G_i - G_0)^T$$
where
$$G_i = \left( \frac{1}{N_i} \sum_{j=1}^{N_i} k_1(x_1, x_j^i),\; \frac{1}{N_i} \sum_{j=1}^{N_i} k_1(x_2, x_j^i),\; \cdots,\; \frac{1}{N_i} \sum_{j=1}^{N_i} k_1(x_N, x_j^i) \right)^T,$$
with x_j^i (j = 1, 2, …, N_i) being the normalized data of the j-th training target sample in class i, and
$$G_0 = \left( \frac{1}{N} \sum_{j=1}^{N} k_1(x_1, x_j),\; \frac{1}{N} \sum_{j=1}^{N} k_1(x_2, x_j),\; \cdots,\; \frac{1}{N} \sum_{j=1}^{N} k_1(x_N, x_j) \right)^T;$$
then the eigenvectors α corresponding to the non-zero eigenvalues of K_w^{-1} K_b are solved, i.e.:
$$\lambda \alpha = K_w^{-1} K_b\, \alpha$$
where λ denotes an eigenvalue. For any normalized known-class training target sample or unknown-class test target sample, the feature finally extracted by the KFDA criterion is a (c−1)-dimensional vector, which can be expressed as z = [z_1, z_2, …, z_{c−1}]^T, with each element expressed as:
$$z_t = \sum_{j=1}^{N} \alpha_j^t\, k_1(x_j, x)$$
where t = 1, 2, …, c−1, and α_j^t denotes the j-th element of the eigenvector corresponding to the t-th non-zero eigenvalue of K_w^{-1} K_b.
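The scatter-matrix construction and eigenvector projection of claim 3 can be sketched numerically as follows (NumPy only; the Gaussian kernel used for k_1, the perturbation value, and all names are illustrative assumptions, since the claim leaves the kernel unspecified):

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    # illustrative choice for the kernel k1; the claim leaves it unspecified
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kfda_features(X, labels, gamma=0.5, kappa=1e-2):
    """X: (N, d) normalized samples; labels: (N,) class indices 0..c-1.
    Returns the (c-1)-dimensional KFDA features of each sample and the
    eigenvector matrix A (columns alpha^1 .. alpha^{c-1})."""
    N = len(X)
    classes = np.unique(labels)
    c = len(classes)
    K = np.array([[rbf(xi, xj, gamma) for xj in X] for xi in X])  # N x N Gram matrix
    G0 = K.mean(axis=1)                                  # global kernel mean, as in G_0
    Kw = np.zeros((N, N))
    Kb = np.zeros((N, N))
    for cls in classes:
        idx = np.where(labels == cls)[0]
        Ni = len(idx)
        Ki = K[:, idx]                                   # N x Ni block K_i
        J = np.eye(Ni) - np.full((Ni, Ni), 1.0 / Ni)     # I minus the all-(1/Ni) matrix
        Kw += Ki @ J @ Ki.T / N                          # within-class scatter K_w
        Gi = Ki.mean(axis=1)                             # class kernel mean G_i
        Kb += Ni * np.outer(Gi - G0, Gi - G0) / N        # between-class scatter K_b
    Kw += kappa * np.eye(N)                              # perturbation against singularity
    evals, evecs = np.linalg.eig(np.linalg.solve(Kw, Kb))
    order = np.argsort(-evals.real)[: c - 1]             # keep the c-1 leading eigenvectors
    A = evecs[:, order].real
    Z = K @ A                                            # z_t = sum_j alpha_j^t k1(x_j, x)
    return Z, A

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3.0, 1.0, (10, 4)), rng.normal(-3.0, 1.0, (10, 4))])
labels = np.array([0] * 10 + [1] * 10)
Z, A = kfda_features(X, labels)
```

With c = 2 classes the extracted feature is one-dimensional, matching the (c−1)-dimensional feature vector stated in the claim.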
4. The SAR image target feature extraction and recognition method based on KFDA and SVM according to claim 3, characterized in that: the training of the SVM classifier in step (3) with the known-class training target sample features extracted by the KFDA criterion to generate the optimal classification surface is specifically:
maximizing the following functional by the Lagrange multiplier method:
$$Q(a) = \sum_{i=1}^{n} a_i - \frac{1}{2} \sum_{i,j=1}^{n} a_i a_j y_i y_j\, k_2(z_i, z_j)$$
$$\text{s.t.}\quad \sum_{i=1}^{n} y_i a_i = 0,\qquad a_i \ge 0,\; i = 1, \ldots, n$$
wherein y_i ∈ {+1, −1}, corresponding respectively to the two different known classes of training target samples; z_i is the feature of the i-th known-class training target sample extracted by the KFDA criterion; k_2(·,·) denotes a kernel function, completely independent of k_1(·,·) above; a_i is the Lagrange multiplier to be solved corresponding to the i-th known-class training target sample.
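The dual maximization of claim 4 can be sketched by minimizing −Q(a) under the stated constraints with a general-purpose solver (SciPy's SLSQP here, plus a soft-margin upper bound C; both are assumptions for illustration, not prescribed by the claim):

```python
import numpy as np
from scipy.optimize import minimize

def k2(a, b, gamma=1.0):
    # illustrative Gaussian choice for k2 (independent of k1, as the claim notes)
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_svm_dual(Z, y, C=10.0, gamma=1.0):
    """Maximize Q(a) = sum_i a_i - 1/2 sum_{i,j} a_i a_j y_i y_j k2(z_i, z_j)
    subject to sum_i y_i a_i = 0 and 0 <= a_i (<= C, an assumed soft-margin box)."""
    n = len(Z)
    Km = np.array([[k2(zi, zj, gamma) for zj in Z] for zi in Z])
    H = (y[:, None] * y[None, :]) * Km
    fun = lambda a: -(a.sum() - 0.5 * a @ H @ a)         # negate Q(a) to minimize
    jac = lambda a: -(np.ones(n) - H @ a)
    cons = {"type": "eq", "fun": lambda a: a @ y, "jac": lambda a: y.astype(float)}
    res = minimize(fun, np.zeros(n), jac=jac, bounds=[(0.0, C)] * n,
                   constraints=[cons], method="SLSQP")
    a = res.x
    sv = np.argmax((a > 1e-6) & (a < C - 1e-6))          # index of a free support vector
    b = y[sv] - np.sum(a * y * Km[:, sv])                # bias b* from the KKT conditions
    return a, b

Z = np.array([[-2.0], [-1.5], [1.5], [2.0]])             # toy 1-D KFDA features
y = np.array([-1, -1, 1, 1])
a, b = train_svm_dual(Z, y)
decision = lambda z: np.sign(np.sum(a * y * np.array([k2(zi, z) for zi in Z])) + b)
```

The returned multipliers a and bias b correspond to the a_i and b* appearing in the decision function of claim 5.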
5. The SAR image target feature extraction and recognition method based on KFDA and SVM according to claim 1, characterized in that: the recognition in step (4), through the optimal classification surface, of the features of the unknown-class test target samples extracted by the KFDA criterion is specifically:
the function of the optimal classification surface is expressed as:
$$f(x) = \operatorname{sgn}\Big\{ \sum_{i=1}^{n} a_i y_i\, k_2(z_i, z) + b^* \Big\}$$
wherein a_i is the Lagrange multiplier to be solved corresponding to the i-th known-class training target sample; y_i ∈ {+1, −1}, corresponding respectively to the two different known classes of training target samples; k_2(·,·) denotes a kernel function; z_i is the feature of the i-th known-class training target sample extracted by the KFDA criterion; z is the feature of the unknown-class test target sample extracted by the KFDA criterion;
where b* is the classification threshold (bias) of the SVM classifier, determined during training.
f(x) ∈ {+1, −1}, i.e., the class of the current unknown-class test target sample is determined.
CN201410103639.2A 2014-03-19 2014-03-19 SAR image target feature extraction and recognition method based on KFDA and SVM Expired - Fee Related CN103824093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410103639.2A CN103824093B (en) 2014-03-19 2014-03-19 SAR image target feature extraction and recognition method based on KFDA and SVM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410103639.2A CN103824093B (en) 2014-03-19 2014-03-19 SAR image target feature extraction and recognition method based on KFDA and SVM

Publications (2)

Publication Number Publication Date
CN103824093A true CN103824093A (en) 2014-05-28
CN103824093B CN103824093B (en) 2017-10-13

Family

ID=50759145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410103639.2A Expired - Fee Related CN103824093B (en) 2014-03-19 2014-03-19 SAR image target feature extraction and recognition method based on KFDA and SVM

Country Status (1)

Country Link
CN (1) CN103824093B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050489A (en) * 2014-06-27 2014-09-17 电子科技大学 SAR ATR method based on multicore optimization
CN104268552A (en) * 2014-09-04 2015-01-07 电子科技大学 Fine category classification method based on component polygons
CN105629210A (en) * 2014-11-21 2016-06-01 中国航空工业集团公司雷华电子技术研究所 Airborne radar space and ground moving target classification and recognition method
CN106054189A (en) * 2016-07-17 2016-10-26 西安电子科技大学 Radar target recognition method based on dpKMMDP model
CN106845489A (en) * 2015-12-03 2017-06-13 中国航空工业集团公司雷华电子技术研究所 SAR image target feature extraction method based on improved Krawtchouk moment
TWI617175B (en) * 2016-11-18 2018-03-01 國家中山科學研究院 Image detection acceleration method
CN109753887A (en) * 2018-12-17 2019-05-14 南京师范大学 SAR image target identification method based on enhanced kernel sparse representation
CN109784356A (en) * 2018-07-18 2019-05-21 北京工业大学 Matrix variables based on Fisher discriminant analysis are limited Boltzmann machine image classification method
CN111400565A (en) * 2020-03-19 2020-07-10 北京三维天地科技股份有限公司 Visualized dragging online data processing method and system
CN116776209A (en) * 2023-08-28 2023-09-19 国网福建省电力有限公司 Method, system, equipment and medium for identifying operation state of gateway metering device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339244A (en) * 2008-08-01 2009-01-07 北京航空航天大学 On-board SAR image automatic target positioning method
JP2009048641A (en) * 2007-08-20 2009-03-05 Fujitsu Ltd Character recognition method and character recognition device
CN102567742A (en) * 2010-12-15 2012-07-11 中国科学院电子学研究所 Automatic classification method of support vector machine based on selection of self-adapting kernel function
CN103164689A (en) * 2011-12-16 2013-06-19 上海移远通信技术有限公司 Face recognition method and face recognition system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009048641A (en) * 2007-08-20 2009-03-05 Fujitsu Ltd Character recognition method and character recognition device
CN101339244A (en) * 2008-08-01 2009-01-07 北京航空航天大学 On-board SAR image automatic target positioning method
CN102567742A (en) * 2010-12-15 2012-07-11 中国科学院电子学研究所 Automatic classification method of support vector machine based on selection of self-adapting kernel function
CN103164689A (en) * 2011-12-16 2013-06-19 上海移远通信技术有限公司 Face recognition method and face recognition system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Liu Aiping, et al.: "An Effective High-Resolution SAR Target Feature Extraction and Recognition Method", Geomatics and Information Science of Wuhan University *
Huan Ruohong: "SAR Image Target Realization Based on KFD+ICA Feature Extraction", Systems Engineering and Electronics *
Zhang Yifan: "Target Recognition Based on Curvelet and Fast Sparse LSSVM", China Master's Theses Full-text Database, Information Science and Technology Series *
Li Lili, et al.: "Application of KPCA and SVM in Face Recognition", Shanxi Electronic Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050489A (en) * 2014-06-27 2014-09-17 电子科技大学 SAR ATR method based on multicore optimization
CN104050489B (en) * 2014-06-27 2017-04-19 电子科技大学 SAR ATR method based on multicore optimization
CN104268552A (en) * 2014-09-04 2015-01-07 电子科技大学 Fine category classification method based on component polygons
CN104268552B (en) * 2014-09-04 2017-06-13 电子科技大学 Fine category classification method based on component polygons
CN105629210A (en) * 2014-11-21 2016-06-01 中国航空工业集团公司雷华电子技术研究所 Airborne radar space and ground moving target classification and recognition method
CN106845489B (en) * 2015-12-03 2020-07-03 中国航空工业集团公司雷华电子技术研究所 SAR image target feature extraction method based on improved Krawtchouk moment
CN106845489A (en) * 2015-12-03 2017-06-13 中国航空工业集团公司雷华电子技术研究所 SAR image target feature extraction method based on improved Krawtchouk moment
CN106054189A (en) * 2016-07-17 2016-10-26 西安电子科技大学 Radar target recognition method based on dpKMMDP model
TWI617175B (en) * 2016-11-18 2018-03-01 國家中山科學研究院 Image detection acceleration method
CN109784356A (en) * 2018-07-18 2019-05-21 北京工业大学 Matrix variables based on Fisher discriminant analysis are limited Boltzmann machine image classification method
CN109784356B (en) * 2018-07-18 2021-01-05 北京工业大学 Matrix variable limited Boltzmann machine image classification method based on Fisher discriminant analysis
CN109753887A (en) * 2018-12-17 2019-05-14 南京师范大学 SAR image target identification method based on enhanced kernel sparse representation
CN109753887B (en) * 2018-12-17 2022-09-23 南京师范大学 SAR image target identification method based on enhanced kernel sparse representation
CN111400565A (en) * 2020-03-19 2020-07-10 北京三维天地科技股份有限公司 Visualized dragging online data processing method and system
CN116776209A (en) * 2023-08-28 2023-09-19 国网福建省电力有限公司 Method, system, equipment and medium for identifying operation state of gateway metering device

Also Published As

Publication number Publication date
CN103824093B (en) 2017-10-13

Similar Documents

Publication Publication Date Title
CN103824093B (en) SAR image target feature extraction and recognition method based on KFDA and SVM
Jia et al. Feature mining for hyperspectral image classification
Al Bashish et al. A framework for detection and classification of plant leaf and stem diseases
Rignot et al. Segmentation of polarimetric synthetic aperture radar data
CN104376330B (en) Polarimetric SAR Image Ship Target Detection method based on super-pixel scattering mechanism
CN106096506B (en) SAR target recognition method based on discriminative double dictionaries between subclasses and classes
Maghsoudi et al. Polarimetric classification of Boreal forest using nonparametric feature selection and multiple classifiers
CN101968850B (en) Method for extracting face feature by simulating biological vision mechanism
CN102629378B (en) Remote sensing image change detection method based on multi-feature fusion
Tan et al. Agricultural crop-type classification of multi-polarization SAR images using a hybrid entropy decomposition and support vector machine technique
Sun et al. Nonlinear dimensionality reduction via the ENH-LTSA method for hyperspectral image classification
CN103955701A (en) Multi-level-combined multi-look synthetic aperture radar image target recognition method
CN105160353B (en) Polarization SAR data terrain classification method based on multiple features collection
CN105405132A (en) SAR image man-made target detection method based on visual contrast and information entropy
CN104751183B (en) Polarimetric SAR image classification method based on tensor MPCA
CN104809471B (en) A kind of high spectrum image residual error integrated classification method based on spatial spectral information
CN109753887B (en) SAR image target identification method based on enhanced kernel sparse representation
CN105160351A (en) Semi-supervised hyperspectral classification method based on anchor-point sparse graph
CN106909939A (en) Polarimetric SAR terrain classification method combining rotation-domain polarization null-angle features
CN110458876A (en) Multidate POLSAR method for registering images based on SAR-SIFT feature
CN107203779A (en) Hyperspectral dimensionality reduction method based on spatial-spectral information maintenance
CN107563447B (en) Method for hierarchically identifying targets and target parts in remote sensing images
CN104102900A (en) Vehicle identification system
Shi et al. MHCFormer: Multiscale hierarchical conv-aided fourierformer for hyperspectral image classification
CN104050489B (en) SAR ATR method based on multicore optimization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171013

Termination date: 20210319

CF01 Termination of patent right due to non-payment of annual fee