CN106407986A - Synthetic aperture radar image target identification method based on depth model - Google Patents
- Publication number
- CN106407986A CN106407986A CN201610756338.9A CN201610756338A CN106407986A CN 106407986 A CN106407986 A CN 106407986A CN 201610756338 A CN201610756338 A CN 201610756338A CN 106407986 A CN106407986 A CN 106407986A
- Authority
- CN
- China
- Prior art keywords
- layer
- convolution
- depth model
- training sample
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 34
- 238000012549 training Methods 0.000 claims abstract description 49
- 230000008569 process Effects 0.000 claims abstract description 14
- 239000011159 matrix material Substances 0.000 claims description 18
- 238000013527 convolutional neural network Methods 0.000 claims description 12
- 230000006870 function Effects 0.000 claims description 12
- 238000001914 filtration Methods 0.000 claims description 11
- 239000013598 vector Substances 0.000 claims description 11
- 238000012545 processing Methods 0.000 claims description 7
- 238000004422 calculation algorithm Methods 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000012937 correction Methods 0.000 abstract description 11
- 238000000605 extraction Methods 0.000 abstract description 9
- 238000013461 design Methods 0.000 abstract description 6
- 230000004913 activation Effects 0.000 abstract description 2
- 230000002265 prevention Effects 0.000 abstract 1
- 239000000284 extract Substances 0.000 description 6
- 238000013135 deep learning Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000003384 imaging method Methods 0.000 description 2
- 230000001537 neural effect Effects 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000001427 coherent effect Effects 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000003475 lamination Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000003252 repetitive effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000008719 thickening Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000017105 transposition Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a synthetic aperture radar image target identification method based on a depth model. The method comprises the steps of image cropping; depth model design, covering hierarchical structure design, feature-extraction filter design, parameter-quantity control and overfitting prevention, activation functions and non-linearity enhancement, recognition classification, and autonomous parameter correction and update; depth model training; and target identification. The method is advantageous in that the filter parameters are updated autonomously and iteratively during training, which greatly reduces the cost of feature selection and extraction; moreover, target features at different levels can be extracted by the depth model, and because these features are obtained through highly matched training, a high-level representation of the target is achieved and the accuracy of SAR image target identification is improved.
Description
Technical field
The present invention relates to machine learning and deep learning application technology, and in particular to the application of deep learning methods in synthetic aperture radar (SAR) image target recognition.
Background technology
Synthetic aperture radar (Synthetic Aperture Radar, hereinafter SAR) can provide high-resolution images under all weather conditions, day and night. The mainstream approach of current SAR image target recognition systems is to train a classifier on features of the image target and to perform recognition with that classifier; the performance of the classifier therefore determines the recognition capability of the system.
The selection and extraction of features strongly influence both the design and the performance of the classifier. Pattern recognition is the process of assigning a concrete object to a specific class: a number of samples are first used to design a classifier according to the similarity between them, and the designed classifier then makes a classification decision for each sample to be recognized. Classification can be performed either in the original data space or after transforming the raw data and mapping it into a feature space. The latter makes the design of the decision machine easier: through a more stable feature representation it improves the performance of the decision machine, eliminates unnecessary or irrelevant information, and makes it easier to discover the intrinsic relationships between the objects under study. Features are thus the key to determining the similarity between samples and to classifier design. Once the purpose of classification has been determined, finding suitable features is the central problem of recognition.
The SAR imaging mechanism is very particular. Compared with optical imagery, a SAR image appears as a sparse distribution of scattering centers, is highly sensitive to the imaging azimuth, suffers from complex background noise, and is prone to varying degrees of distortion. Explicit feature extraction for SAR images is therefore difficult and not always reliable in some applications. Meanwhile, the number of features affects the complexity of the recognition system. A large number of features does not guarantee good recognition performance, because features are not mutually independent but correlated, and poor features can significantly degrade the system. Using fewer features reduces computation time, which is extremely important for real-time applications, but may leave the feature-based classifier insufficiently trained and immature, severely reducing recognition performance. For features that can only be selected through a training stage, selecting too many features means fitting the training samples with a more complex model; since the training samples may contain various kinds of noise, a complex model may overfit them, become sensitive to noise, lose its ability to abstract, and fail to recognize targets well. For all of these reasons, feature selection and extraction has become the hardest problem in SAR target recognition systems.
Content of the invention
The object of the present invention is to overcome the shortcomings of the prior art in SAR image target recognition and to achieve higher recognition accuracy, by providing a SAR image target recognition method based on a depth model.

A depth model is a kind of multilayer perceptron with multiple hierarchical levels. The features selected and extracted at each level are obtained by the model's own matching and training, so the extracted features can be regarded as a high-level characterization of the target. This high expressive power rests on the model's strong learning ability: image samples are fed in directly as input data, so the cost of feature selection and extraction during image preprocessing is greatly reduced. At the same time, the depth model has good parallel processing and learning capability, can handle the complex environmental information of SAR images, and tolerates considerable translation, stretching, and rotation of the samples.
Step 1: Input SAR images as training samples (let the number of training samples be M), where the training samples should include recognition targets of different classes; let T denote the number of classes of the training samples.

The training-sample SAR images may be obtained by cropping the original SAR images, centered on the target, so that each cropped image contains a complete recognition target. Ensuring diversity of the target azimuth and pitch angle in the cropped SAR images helps obtain a more mature depth model in the training stage.
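As an illustrative sketch (not the patent's implementation, which crops around the detected target position), a center crop of a 128x128 MSTAR-style image to a fixed-size training sample might look like:

```python
import numpy as np

def center_crop(image, size):
    """Crop a square window of side `size` from the image center.

    Hypothetical simplification: the target is assumed to lie at the
    image center; the patent crops centered on the actual target.
    """
    h, w = image.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]

# A 128x128 SAR chip cropped to a 64x64 training sample.
sample = center_crop(np.zeros((128, 128)), 64)
assert sample.shape == (64, 64)
```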
Step 2: Build the depth model:

Step 201: Build the convolutional neural network module:

Build a convolutional neural network module as a cascade of a convolution filter and a pooling filter, wherein the convolution filter performs sliding-window convolution on the input data to obtain the convolution output, and the pooling filter reduces the dimensionality of its input (the convolution output) to obtain the convolutional-layer output of the convolutional neural network module. The dimensionality reduction is maximum filtering of the convolution output: each filter region is replaced as a whole by its local maximum.

Step 202: Set up a depth model with H layers. Layers 1 to H-1 of the depth model are H-1 cascaded convolutional neural network modules: the input of layer 1 is the training sample, the input of layers 2 to H-1 is the convolutional-layer output of the previous layer, and the size of the convolution filters of layers 1 to H-1 gradually decreases. The size of the pooling filter is a preset value and can be set according to the practical application; the pooling filter sizes of the layers may be identical or may differ.

Layer H of the depth model comprises a convolution filter for (non-sliding-window) convolution filtering of the input data. The input of the layer-H convolution filter is the convolutional-layer output of layer H-1, and the size of the convolution filter equals the size of the output feature map of the layer-(H-1) convolutional neural network module.
The depth model hierarchy of the present invention can extract image target features at different depths, chiefly by convolving the image with convolution filters of size ω and taking the extracted features of the input image as output.

The convolution operation can be formulated as S'_{i'j'} = Σ_{n=1..ω} Σ_{m=1..ω} w_nm · S_{i+n-1, j+m-1}: convolution is applied to the input S_ij (i, j being image coordinates) in a sliding-window manner with a preset stride s1 (in this formula s1 = 1) to obtain the output S'_{i'j'} at the corresponding position, where w_nm is the parameter at row n, column m of the convolution filter. Adjusting ω controls the size of the convolution filter. As the depth structure deepens, the feature maps gradually shrink, the size ω is reduced accordingly, and the number of convolution filters is reasonably increased.
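The sliding-window convolution above can be sketched as follows; this is a minimal NumPy illustration of the formula under the stated notation (filter size ω, stride s1), not the patent's implementation:

```python
import numpy as np

def convolve2d(S, w, s1=1):
    """Valid sliding-window convolution with stride s1:
    out[i', j'] = sum over n, m of w[n, m] * S[i + n, j + m]."""
    hi = S.shape[0]
    omega = w.shape[0]
    ho = (hi - omega) // s1 + 1          # output size h_o1 = (h_i - w)/s1 + 1
    out = np.zeros((ho, ho))
    for i in range(ho):
        for j in range(ho):
            window = S[i * s1:i * s1 + omega, j * s1:j * s1 + omega]
            out[i, j] = np.sum(w * window)
    return out

S = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 input
w = np.ones((2, 2))                            # 2x2 filter of ones sums each window
out = convolve2d(S, w)
assert out.shape == (3, 3)
assert out[0, 0] == 0 + 1 + 4 + 5              # top-left window sum
```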
The convolution filters act directly on the image to extract image features. Because the depth model hierarchy is capable of autonomous learning and updating, the output of the system feeds back into the network parameters (the convolution filter parameters) of each level. Under this feedback, convolution filters initialized with Gaussian random values autonomously revise their parameters and eventually become feature-extraction filters capable of extracting highly characteristic target features. This feature selection and extraction process is completed autonomously by the system, eliminating the image feature-extraction preprocessing of traditional target recognition and greatly reducing the cost of target recognition. Moreover, all feature-extraction filters are obtained through matched training, and the resulting deep features are advantageous for target recognition by the system.
For each convolutional neural network module, the size of the feature map output by its convolutional layer (convolution filter) is h_o1 = (h_i - ω)/s1 + 1, where h_o1, h_i, and ω denote the output feature map size, the input feature map size, and the convolution filter size, respectively. The stride s1 effectively reduces convolution over repeated regions and improves the operating speed of the depth model. As the depth structure deepens, the feature maps gradually shrink, and the number of convolution filters is reasonably increased to ensure the diversity of the features extracted by the hierarchy.
In order to extract deep target features, the depth model has a complex hierarchy and a fairly large number of convolution filters; it therefore produces a large number of parameters and increases the burden of recognition processing. Moreover, different convolution filters are sensitive only to specific features, which means the feature maps after the convolutional layer contain a large amount of redundant information after feature extraction, forcing subsequent levels to spend considerable resources processing it. To control the number of parameters of the depth model and to reject redundant information, each convolutional neural network module applies pooling to the convolution output of its convolution filter in a sliding-window manner (with stride s2), replacing the current filter region (current window) as a whole by its local maximum as output: e_o = max over the window of e_{i+n, j+m}, where e_ij denotes the pixel value at row i, column j of the image, e_{i+n, j+m} has the same meaning as e_ij, and e_o is the output pixel value.

The size of the feature map output by the pooling filter is h_o2 = (h_i - ω_d)/s2 + 1, where ω_d denotes the pooling filter size and the stride s2 sets the interval between adjacent pooling windows. For each convolutional neural network module, the size of its output feature map is the pooled feature map size h_o2.
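The maximum pooling described above, with window size ω_d, stride s2, and output size h_o2 = (h_i - ω_d)/s2 + 1, can be sketched as follows (an illustrative NumPy sketch, not the patent's implementation):

```python
import numpy as np

def max_pool(E, wd=2, s2=2):
    """Replace each wd x wd window (stride s2) by its local maximum.
    Output size: h_o2 = (h_i - wd)/s2 + 1."""
    hi = E.shape[0]
    ho = (hi - wd) // s2 + 1
    out = np.zeros((ho, ho))
    for i in range(ho):
        for j in range(ho):
            out[i, j] = E[i * s2:i * s2 + wd, j * s2:j * s2 + wd].max()
    return out

E = np.array([[1.,  2.,  3.,  4.],
              [5.,  6.,  7.,  8.],
              [9., 10., 11., 12.],
              [13., 14., 15., 16.]])
P = max_pool(E)              # 4x4 map pooled down to 2x2
assert P.shape == (2, 2)
assert P[0, 0] == 6.0        # max of the top-left 2x2 window
```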
One explicit goal of deep learning is to extract the key factors from the raw data. Raw data is usually entangled with densely packed, interrelated feature vectors, and a key factor may be wrapped in several or even a large number of them. To further reject redundant information while retaining the data's features to the greatest degree of approximation, a sparse matrix in which most elements are 0 can be used, for example via a correction function such as the Sigmoid activation function f(x) = (1 + e^{-x})^{-1}, the hyperbolic tangent functions f(x) = tanh(x) and f(x) = |tanh(x)|, or the rectified linear unit (Rectified Linear Unit) f(x) = max(0, x), where x denotes an individual element of the convolution output.

The present invention prefers the correction function f(x) = max(0, x): for each element of the convolution output, the larger of that element and 0 is taken as the correction result. A rectified linear unit is thus introduced at every layer of the depth model to correct the output of the convolution filter. In each convolutional neural network module, correction is performed first and pooling filtering afterwards; at layer H of the depth model, the corrected convolution output is taken as the final output of the depth model.
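The preferred element-wise correction f(x) = max(0, x) can be illustrated with a minimal NumPy sketch (toy values, not the patent's code); note how it zeroes negative responses and sparsifies the map:

```python
import numpy as np

def relu(x):
    """Rectified linear correction f(x) = max(0, x), element-wise."""
    return np.maximum(0.0, x)

conv_out = np.array([[-1.5, 0.3],
                     [ 2.0, -0.7]])
corrected = relu(conv_out)
# Negative responses become 0, yielding a sparse (mostly-zero) map.
assert (corrected == np.array([[0.0, 0.3], [2.0, 0.0]])).all()
```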
The bottom layer (layer H) of the depth model produces a fully connected output: the elements of each feature map (of size L × W) output by layer H-1 are summed with weights to obtain a scalar value x_i = Σ_{n=1..L} Σ_{m=1..W} k_nm · e_nm, where the subscript of x_i identifies the different feature maps of the same training sample (also corresponding to the different convolution filters of layer H), k_nm is the parameter at row n, column m of the layer-H convolution filter, and e_nm is the element at row n, column m of the feature map. The values x_i of all feature maps of the same training sample obtained at layer H are combined to form the output feature matrix, i.e. the feature vector matrix of the training sample X = [x_1 x_2 x_3 ... x_p]^T, where p denotes the number of feature maps of each training sample at layer H.
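The fully connected output of layer H, one weighted sum x_i per feature map stacked into X = [x_1 ... x_p]^T, can be sketched as follows (illustrative code; the kernel values and map sizes are made up for the example):

```python
import numpy as np

def full_connect(feature_maps, kernels):
    """x_i = sum over n, m of k[n, m] * e[n, m] for each L x W feature map,
    stacked into the feature vector X = [x_1 ... x_p]^T."""
    return np.array([np.sum(k * e) for k, e in zip(kernels, feature_maps)])

p, L, W = 3, 2, 2                              # p feature maps of size L x W
maps = [np.full((L, W), i + 1.0) for i in range(p)]
kernels = [np.ones((L, W)) for _ in range(p)]  # all-ones kernels for clarity
X = full_connect(maps, kernels)
assert X.shape == (p,)
assert list(X) == [4.0, 8.0, 12.0]             # each x_i = (i + 1) * L * W
```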
Step 3: Train the depth model:

Step 301: Initialize the iteration count d = 0 and initialize the learning rate α to a preset value;

Step 302: Randomly select N images from the training sample set as a sub-training sample set and input them to layer 1 of the depth model; obtain the feature vector matrix X of each training sample from the layer-H output of the depth model.

Compute the error value δ of each layer's convolution filter layer by layer: the error value of the layer-H convolution filter is F - X, the desired output F being a preset value; the error value of the convolution filters of layers 1 to H-1 is obtained as the product of the error value of the layer above and the convolution filter parameters w_nm, with subscripts n = 1, 2, ..., ω and m = 1, 2, ..., ω, where ω denotes the size of the convolution filter. Update the parameters w_nm according to the error value of each layer's convolution filter: w_nm = w_nm - Δw_nm, where the correction Δw_nm is computed from the layer's error value and the learning rate α.
If the recognition processing uses the Softmax regression model to compute the probabilities that the feature vector matrix of an image to be recognized belongs to each of the T target classes, then step 302 must additionally perform iterative update learning of the Softmax regression model parameters θ_j (j = 1, 2, ..., T) based on the feature vector matrix X:

Based on the Softmax regression model, the class probability matrix h_θ(X) of each feature vector matrix X is obtained as

h_θ(X) = (1 / Σ_{j=1..T} e^{θ_j^T X}) · [e^{θ_1^T X}, e^{θ_2^T X}, ..., e^{θ_T^T X}]^T

where the vector θ = (θ_1, θ_2, ..., θ_T) is randomly initialized, y denotes the class recognition result, e denotes the natural base, and θ_j^T denotes the transpose of θ_j.

The N training samples of the current iteration are written (X^(1), y^(1)), (X^(2), y^(2)), (X^(3), y^(3)), ..., (X^(N), y^(N)), where X^(i) denotes the feature vector matrix of the i-th training sample (obtained from the final output of the depth model) and y^(i) denotes the class identifier of X^(i), i.e. y^(i) ∈ {1, 2, ..., T}. From the N pairs (X^(i), y^(i)) the likelihood function and log-likelihood function are computed:

Likelihood function: L(θ) = Π_{i=1..N} P(y^(i) | X^(i), θ), where P(y^(i) | X^(i), θ) denotes the probability that X^(i) is classified as y^(i);

Log-likelihood function: l(θ) = Σ_{i=1..N} Σ_{j=1..T} I{y^(i) = j} · log( e^{θ_j^T X^(i)} / Σ_{t=1..T} e^{θ_t^T X^(i)} ), where I{·} is the indicator function: if the condition is true, I{·} = 1; if false, I{·} = 0;

Cost function of the log-likelihood function l(θ): J(θ) = -(1/N) · l(θ).

The minimization of J(θ) is realized by the gradient descent algorithm: the product of the gradient of J(θ) with respect to θ_j and the learning rate α is taken as the correction of the regression model parameters, θ_j := θ_j - α · ∇_{θ_j} J(θ); at the next iteration, the last correction is used as the regression model parameters of the current iteration.

Finally, the iteration count is updated: d = d + 1.
Step 303: Judge whether the iteration count has reached the termination threshold; if so, execute step 304. Otherwise, judge whether the iteration count has reached the adjustment threshold: if so, reduce the learning rate α and execute step 302 with the updated parameters w_nm of each layer's convolution filter; if not, execute step 302 directly with the updated convolution filter parameters w_nm.

Step 304: Obtain the trained depth model from the current parameters w_nm of each layer's convolution filter, and take the correction of the last iteration as the final regression model parameters for the recognition processing of images to be recognized.
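The Softmax regression update of step 302, with class probabilities h_θ(X), cost J(θ), and gradient descent with learning rate α, can be sketched as follows. This is a generic softmax-regression illustration under stated assumptions (0-based class labels, made-up toy data), not the patent's code:

```python
import numpy as np

def softmax_probs(theta, X):
    """h_theta(X): probability of each of the T classes for feature vector X."""
    z = theta @ X                  # theta is T x p, X has length p
    z = z - z.max()                # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def train_step(theta, samples, alpha):
    """One gradient-descent update of J(theta) over N pairs (X, y), y in 0..T-1."""
    grad = np.zeros_like(theta)
    N = len(samples)
    for X, y in samples:
        p = softmax_probs(theta, X)
        ind = np.zeros(len(p))
        ind[y] = 1.0               # indicator I{y = j}
        grad += np.outer(p - ind, X) / N   # gradient of -(1/N) l(theta)
    return theta - alpha * grad

rng = np.random.default_rng(0)
T_classes, p_dim = 3, 4
theta = rng.standard_normal((T_classes, p_dim)) * 0.01  # random initialization
samples = [(np.array([1.0, 0.0, 0.0, 0.0]), 0),
           (np.array([0.0, 1.0, 0.0, 0.0]), 1)]
for _ in range(200):
    theta = train_step(theta, samples, alpha=0.5)
# After training, each sample's own class has the highest probability.
assert softmax_probs(theta, samples[0][0]).argmax() == 0
assert softmax_probs(theta, samples[1][0]).argmax() == 1
```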
Step 4: Input the SAR image to be recognized and crop it, centered on the target to be recognized, to obtain an image of the same size as the training samples. Input the image to be recognized into the trained depth model and output its feature vector matrix.

Step 5: Compute the probabilities that the feature vector matrix of the image to be recognized belongs to each of the T target classes, and take the class with the highest probability as the target recognition result. According to the Softmax regression model, these probabilities are computed from the final regression model parameters obtained in step 3, and the class corresponding to the highest probability is taken as the target recognition result.
In summary, the beneficial effects of the invention are: SAR images can be processed for target recognition directly, the workload of the feature selection and extraction preprocessing in the target recognition task is effectively reduced, deep features matched to the recognition target are extracted, and the accuracy of target recognition is improved.
Brief description of the drawings

Fig. 1 is a schematic diagram of the depth model structure.

Fig. 2 is a schematic diagram of convolution filtering.

Fig. 3 is a schematic diagram of maximum pooling.

Fig. 4 is an original MSTAR tank image.

Fig. 5 is a schematic diagram of the filters and per-layer output feature maps of the depth model.
Specific embodiment
To make the object, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.

The present invention is realized with a 4-layer depth model structure as shown in Fig. 1, in which layers 1 to 3 each comprise a convolution filter, a rectified linear unit, and a pooling filter, and layer 4 is a fully connected layer comprising a convolution filter and a rectified linear unit. The input of the depth model is the training samples and test samples. The convolution filters of layers 1 to 3 perform convolution in a sliding-window manner with a preset stride s = 1 on the input data (as shown in Fig. 2a) to obtain the convolution output, as shown in Fig. 2b; the pooling filter reduces the dimensionality of the convolution output by replacing each filter region as a whole with its local maximum, as shown in Fig. 3.
In this embodiment, the training samples are drawn from the MSTAR image data (72 samples with different viewing angles and orientations can be provided for each vehicle). Fig. 4 shows a SAR image of size 128*128 comprising 3 regions: tank, shadow, and background; the image contains fairly severe coherent speckle noise.

Based on the collected training sample set and the depth model shown in Fig. 1, the filters of each layer of the depth model and the feature maps output by each layer during one model training iteration can be obtained, as shown in Fig. 5: 5-a shows the layer-1 convolution filters of the depth model; 5-b shows the feature maps output after the original SAR image passes through layer-1 convolution filtering, correction, and dimensionality reduction, defined as the layer-1 feature maps; Fig. 5-c shows the feature maps output after the layer-1 feature maps pass through layer-2 convolution filtering, correction, and dimensionality reduction, defined as the layer-2 feature maps; and Fig. 5-d shows the feature maps output after the layer-2 pooling output passes through layer-3 convolution filtering and correction.
Based on the trained depth model, target recognition is performed on the different test samples (see Table 1). In this embodiment, the Softmax regression model is used to compute the probabilities that the feature vector matrix of each image to be recognized, output by the depth model, belongs to each of the T target classes, and the class with the highest probability is taken as the target recognition result. For the ten vehicle target classes of the MSTAR data set (2S1, BMP-2, BRDM_2, BTR60, BTR70, D7, T62, T72, ZIL_131, and ZSU_23_4), a recognition rate of 93.99% is achieved.
Table 1
Table 2 compares the average accuracy of the inventive method with existing methods: IGT (iterative graph thickening on discriminative graphical models), EMACH (extended maximum average correlation height filter), SVM (support vector machine), Cond Gauss (conditional Gaussian model), and AdaBoost (feature fusion via boosting on individual neural net classifiers):
Table 2
The existing methods IGT, EMACH, SVM, Cond Gauss, and AdaBoost use pose correction algorithms to improve performance. Without pose correction, the accuracies of these methods drop to 88.60%, SVM to 83.90%, Cond Gauss to 86.10%, and AdaBoost to 87.20%. Although the convolutional neural network of the present invention is trained without any image preprocessing, it still performs outstandingly compared with most of the pose-corrected methods: only the pose-corrected IGT method exceeds the recognition accuracy of the present invention (which requires no pose correction), by 1%. Compared with the other methods, the present invention saves a large amount of the resources and time spent on image preprocessing, at low working cost.
The above are only specific embodiments of the present invention. Any feature disclosed in this specification may, unless specifically stated otherwise, be replaced by equivalent or similarly purposed alternative features; and all of the disclosed features, or all of the steps of a disclosed method or process, may be combined in any manner, except for mutually exclusive features and/or steps.
Claims (3)
1. A synthetic aperture radar image target identification method based on a depth model, characterized by comprising the following steps:

Step 1: Collect training samples:

Step 101: Input original SAR images of recognition targets of different classes, the number of classes being T;

Step 102: Crop the original SAR images, centered on the target, to obtain a training sample set of images of identical size, and set a class identifier for each training sample;
Step 2: Build the depth model:

Step 201: Build a convolutional neural network module as a cascade of a convolution filter and a pooling filter, wherein the convolution filter performs sliding-window convolution on the input data to obtain the convolution output, and the pooling filter performs sliding-window dimensionality reduction on the input data to obtain the convolutional-layer output of the convolutional neural network module, said dimensionality reduction being maximum filtering of the convolution output, taking the local maximum of the current window as the filtered output of the current window;

Step 202: Set up a depth model with H layers, wherein layers 1 to H-1 of the depth model are H-1 cascaded convolutional neural network modules, the input of layer 1 is the training sample, the input of layers 2 to H-1 is the convolutional-layer output of the previous layer, and the size of the convolution filters of layers 1 to H-1 gradually decreases;

Layer H of the depth model comprises a convolution filter for convolution filtering of the input data; the input of the layer-H convolution filter is the convolutional-layer output of layer H-1, and the size of the convolution filter equals the size of the output feature map of the layer-(H-1) convolutional neural network module;
Step 3: Train the depth model:
Step 301: Initialize the iteration count d = 0 and the learning rate α to preset values;
Step 302: Randomly select N images from the training sample set as a sub training sample set and input them to layer 1 of the depth model; obtain the feature vector matrix X of each training sample from the layer-H output of the depth model.
Compute the error value δ of each layer's convolution filter in turn: the error value of the layer-H convolution filter is F - X, where the desired output F is a preset value; the error value of the convolution filter of each of layers 1 to H-1 is obtained as the product of the error value of the layer above and the convolution filter parameters w_nm, with subscripts n = 1, 2, ..., ω and m = 1, 2, ..., ω, where ω denotes the size of the convolution filter.
Update the parameters w_nm of each layer's convolution filter according to its error value: w_nm = w_nm - Δw_nm, where the increment Δw_nm is computed from the learning rate α and the error value δ.
Update the iteration count: d = d + 1;
Step 303: Judge whether the iteration count has reached the end threshold; if so, execute Step 304. Otherwise, judge whether the iteration count has reached the adjustment threshold: if so, reduce the learning rate α and execute Step 302 based on the updated convolution filter parameters w_nm; if not, execute Step 302 directly based on the updated convolution filter parameters w_nm;
Step 304: Obtain the trained depth model from the current convolution filter parameters w_nm of all layers;
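Steps 301 to 304 amount to the mini-batch loop sketched below: sample N images, update the filter parameters, and lower α once the adjustment threshold is passed. The concrete values (thresholds, decay factor, batch size) and the `update_step` callback are assumptions for illustration; the claim leaves them as preset values:

```python
import random

def train_loop(samples, update_step, alpha, end_thresh, adjust_thresh,
               decay=0.1, n=32):
    """Iteration skeleton of Step 3: mini-batch updates until the end
    threshold, reducing the learning rate at each adjustment threshold."""
    d = 0
    while d < end_thresh:                                     # Step 303: end threshold
        batch = random.sample(samples, min(n, len(samples)))  # Step 302: random N images
        update_step(batch, alpha)                             # w_nm <- w_nm - dw_nm
        d += 1                                                # d = d + 1
        if d < end_thresh and d % adjust_thresh == 0:
            alpha *= decay                                    # reduce learning rate
    return alpha, d                                           # Step 304: training complete
```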
Step 4: Input the SAR image to be identified and cut it, centered on the target to be identified, to obtain an image to be identified of the same size as the training samples.
Input the image to be identified into the trained depth model to output its feature vector matrix;
Step 5: Compute the probabilities that the feature vector matrix of the image to be identified belongs to each of the T target categories, and take the category with the maximum probability as the target identification result.
2. The method of claim 1, characterized in that Step 302 further comprises iteratively updating the Softmax regression model parameter vectors θ_j, where each θ_j is randomly initialized and j = 1, 2, ..., T:

Compute the cost function J(θ) according to the formula

J(θ) = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{T} I{y^(i) = j} · log( e^(θ_j^T X^(i)) / Σ_{l=1}^{T} e^(θ_l^T X^(i)) ),

where I{·} is the indicator function: I{·} = 1 if {·} is true and I{·} = 0 if {·} is false; the expression y^(i) = j means that the recognition result y^(i) of the i-th training sample in the sub training sample set is category j; e denotes the natural base; θ_j^T denotes the transpose of θ_j; and X^(i) denotes the feature vector of the i-th training sample in the sub training sample set.

Minimize the cost function J(θ) by the gradient descent algorithm to obtain the gradient ∇_{θ_j} J(θ), and obtain the updated regression model parameter vector θ_j according to the formula θ_j = θ_j - α ∇_{θ_j} J(θ).

In Step 5, the probabilities that the feature vector of the image to be identified belongs to each of the T target categories are computed based on the current regression model parameter vectors θ_j.
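A minimal NumPy sketch of the Softmax regression of claim 2: the cost J(θ), its gradient for the update θ_j = θ_j - α∇J, and the class probabilities used in Step 5. The function names and shapes (θ as a T x D matrix of stacked θ_j, X as N x D feature rows) are illustrative assumptions:

```python
import numpy as np

def class_probs(theta, X):
    """P(class j | x) = e^(theta_j^T x) / sum_l e^(theta_l^T x), per row of X."""
    z = X @ theta.T
    z -= z.max(axis=1, keepdims=True)   # shift scores for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cost_and_grad(theta, X, y, T):
    """Cost J(theta) of claim 2 and its gradient, one gradient row per theta_j."""
    N = X.shape[0]
    P = class_probs(theta, X)
    Y = np.eye(T)[y]                    # indicator I{y^(i) = j}
    J = -np.sum(Y * np.log(P)) / N
    grad = -(Y - P).T @ X / N
    return J, grad
```

A gradient step `theta -= alpha * grad` lowers J, and Step 5 reduces to `class_probs(theta, X_new).argmax(axis=1)`.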
3. The method of claim 1 or 2, characterized in that each layer of the depth model further comprises a correction unit for correcting the convolution output of the convolution filter according to a correction processing function, the correction processing function being f(x) = (1 + e^(-x))^(-1), f(x) = tanh(x), f(x) = |tanh(x)|, or f(x) = max(0, x), where x denotes an individual element of the convolution output.
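The four correction processing function choices of claim 3, written element-wise in NumPy (the Python names are illustrative; the last choice is the rectified linear form):

```python
import numpy as np

# The four correction functions of claim 3, applied element-wise
# to each element x of the convolution output.
def sigmoid(x):  return 1.0 / (1.0 + np.exp(-x))  # f(x) = (1 + e^(-x))^(-1)
def tanh_f(x):   return np.tanh(x)                # f(x) = tanh(x)
def abs_tanh(x): return np.abs(np.tanh(x))        # f(x) = |tanh(x)|
def relu(x):     return np.maximum(0.0, x)        # f(x) = max(0, x)
```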
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610756338.9A CN106407986B (en) | 2016-08-29 | 2016-08-29 | Synthetic aperture radar image target identification method based on depth model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106407986A true CN106407986A (en) | 2017-02-15 |
CN106407986B CN106407986B (en) | 2019-07-19 |
Family
ID=58002606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610756338.9A Expired - Fee Related CN106407986B (en) | 2016-08-29 | 2016-08-29 | Synthetic aperture radar image target identification method based on depth model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106407986B (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107256396A (en) * | 2017-06-12 | 2017-10-17 | 电子科技大学 | Ship target ISAR characteristics of image learning methods based on convolutional neural networks |
CN107341488A (en) * | 2017-06-16 | 2017-11-10 | 电子科技大学 | A kind of SAR image target detection identifies integral method |
CN107392122A (en) * | 2017-07-07 | 2017-11-24 | 西安电子科技大学 | Polarization SAR silhouette target detection method based on multipolarization feature and FCN CRF UNEs |
CN107516317A (en) * | 2017-08-18 | 2017-12-26 | 上海海洋大学 | A kind of SAR image sea ice sorting techniques based on depth convolutional neural networks |
CN107563422A (en) * | 2017-08-23 | 2018-01-09 | 西安电子科技大学 | A kind of polarization SAR sorting technique based on semi-supervised convolutional neural networks |
CN107728143A (en) * | 2017-09-18 | 2018-02-23 | 西安电子科技大学 | Radar High Range Resolution target identification method based on one-dimensional convolutional neural networks |
CN107886123A (en) * | 2017-11-08 | 2018-04-06 | 电子科技大学 | A kind of synthetic aperture radar target identification method based on auxiliary judgement renewal learning |
CN107977683A (en) * | 2017-12-20 | 2018-05-01 | 南京大学 | Joint SAR target identification methods based on convolution feature extraction and machine learning |
CN108038445A (en) * | 2017-12-11 | 2018-05-15 | 电子科技大学 | A kind of SAR automatic target recognition methods based on various visual angles deep learning frame |
CN108169745A (en) * | 2017-12-18 | 2018-06-15 | 电子科技大学 | A kind of borehole radar target identification method based on convolutional neural networks |
CN108280490A (en) * | 2018-02-28 | 2018-07-13 | 北京邮电大学 | A kind of fine granularity model recognizing method based on convolutional neural networks |
CN108364000A (en) * | 2018-03-26 | 2018-08-03 | 南京大学 | A kind of similarity preparation method based on the extraction of neural network face characteristic |
CN108564098A (en) * | 2017-11-24 | 2018-09-21 | 西安电子科技大学 | Based on the polarization SAR sorting technique for scattering full convolution model |
CN108681999A (en) * | 2018-05-22 | 2018-10-19 | 浙江理工大学 | SAR image target shape generation method based on depth convolutional neural networks model |
CN108846047A (en) * | 2018-05-30 | 2018-11-20 | 百卓网络科技有限公司 | A kind of picture retrieval method and system based on convolution feature |
CN109087337A (en) * | 2018-11-07 | 2018-12-25 | 山东大学 | Long-time method for tracking target and system based on layering convolution feature |
CN109934237A (en) * | 2019-02-18 | 2019-06-25 | 杭州电子科技大学 | SAR image feature extracting method based on convolutional neural networks |
CN110113277A (en) * | 2019-03-28 | 2019-08-09 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | The intelligence communication signal modulation mode identification method of CNN joint L1 regularization |
WO2020082263A1 (en) * | 2018-10-24 | 2020-04-30 | Alibaba Group Holding Limited | Fast computation of convolutional neural network |
CN111712830A (en) * | 2018-02-21 | 2020-09-25 | 罗伯特·博世有限公司 | Real-time object detection using depth sensors |
CN111797774A (en) * | 2020-07-07 | 2020-10-20 | 金陵科技学院 | Road surface target identification method based on radar image and similarity weight |
CN112819742A (en) * | 2021-02-05 | 2021-05-18 | 武汉大学 | Event field synthetic aperture imaging method based on convolutional neural network |
CN113865682A (en) * | 2021-09-29 | 2021-12-31 | 深圳市汉德网络科技有限公司 | Truck tire load determining method and device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105139028A (en) * | 2015-08-13 | 2015-12-09 | 西安电子科技大学 | SAR image classification method based on hierarchical sparse filtering convolutional neural network |
CN105139395A (en) * | 2015-08-19 | 2015-12-09 | 西安电子科技大学 | SAR image segmentation method based on wavelet pooling convolutional neural networks |
CN105184309A (en) * | 2015-08-12 | 2015-12-23 | 西安电子科技大学 | Polarization SAR image classification based on CNN and SVM |
CN105718957A (en) * | 2016-01-26 | 2016-06-29 | 西安电子科技大学 | Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network |
CN105868793A (en) * | 2016-04-18 | 2016-08-17 | 西安电子科技大学 | Polarization SAR image classification method based on multi-scale depth filter |
Also Published As
Publication number | Publication date |
---|---|
CN106407986B (en) | 2019-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106407986B (en) | Synthetic aperture radar image target identification method based on depth model | |
CN110443143B (en) | Multi-branch convolutional neural network fused remote sensing image scene classification method | |
CN113378632B (en) | Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method | |
CN109977918B (en) | Target detection positioning optimization method based on unsupervised domain adaptation | |
CN109919108B (en) | Remote sensing image rapid target detection method based on deep hash auxiliary network | |
Lin et al. | Hyperspectral image denoising via matrix factorization and deep prior regularization | |
CN105205448B (en) | Text region model training method and recognition methods based on deep learning | |
CN105184312B (en) | A kind of character detecting method and device based on deep learning | |
US20190228268A1 (en) | Method and system for cell image segmentation using multi-stage convolutional neural networks | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
CN111178432A (en) | Weak supervision fine-grained image classification method of multi-branch neural network model | |
CN111639719B (en) | Footprint image retrieval method based on space-time motion and feature fusion | |
Yu et al. | Context-based hierarchical unequal merging for SAR image segmentation | |
CN108388896A (en) | A kind of licence plate recognition method based on dynamic time sequence convolutional neural networks | |
CN108648191A (en) | Pest image-recognizing method based on Bayes's width residual error neural network | |
CN110569725B (en) | Gait recognition system and method for deep learning based on self-attention mechanism | |
CN111401145B (en) | Visible light iris recognition method based on deep learning and DS evidence theory | |
CN110827260B (en) | Cloth defect classification method based on LBP characteristics and convolutional neural network | |
CN106611423B (en) | SAR image segmentation method based on ridge ripple filter and deconvolution structural model | |
CN105512681A (en) | Method and system for acquiring target category picture | |
CN104732244A (en) | Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method | |
CN108664994A (en) | A kind of remote sensing image processing model construction system and method | |
CN113378706B (en) | Drawing system for assisting children in observing plants and learning biological diversity | |
CN113705655A (en) | Full-automatic classification method for three-dimensional point cloud and deep neural network model | |
Lin et al. | Determination of the varieties of rice kernels based on machine vision and deep learning technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190719 |