CN113505710A - Image selection method and system based on deep learning SAR image classification - Google Patents
- Publication number
- CN113505710A (application CN202110802002.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- image selection
- images
- deep learning
- sar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
            - G06F18/211—Selection of the most significant subset of features
          - G06F18/24—Classification techniques
          - G06F18/25—Fusion techniques
          - G06F18/29—Graphical models, e.g. Bayesian networks
            - G06F18/295—Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/08—Learning methods
Abstract
The invention discloses an image selection method based on deep learning SAR image classification, in which the relationship between corresponding pixels of different images is modelled as a Markov model. Using the belief propagation algorithm avoids computing the joint statistical distribution of the sensors and simplifies the computation of the image selection algorithm; as a message-passing mechanism, belief propagation reduces the message update rule to a linear fusion model. The feature correlation between the selected image and the other images is determined, and the sensor-fusion capacity is determined to complete the image selection process. The invention therefore concerns an image selection algorithm with feature-layer fusion: the relationship between corresponding pixels of different images is modelled as a Markov model, and belief propagation is used to avoid computing the joint statistical distribution of the sensors and to simplify the computation. Mutual information, as an information-theoretic metric, is used to describe the correlation of the estimates accurately.
Description
Technical Field
The invention relates to the technical field of image analysis, in particular to an image selection method and system based on deep learning SAR image classification.
Background
According to the stage of the recognition process at which fusion takes place, there are three levels of fusion: the feature layer, the matching layer and the decision layer. Feature-layer fusion extracts corresponding feature vectors from biometric data of different modalities and "fuses" them in a unified space into a new, higher-dimensional feature vector used for identification.
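A minimal sketch of feature-layer fusion by concatenation; the modalities, sample count and feature dimensions below are assumed purely for illustration:

```python
import numpy as np

# Assumed per-sample features from two modalities (e.g., 64-D and 32-D).
feat_a = np.random.randn(1000, 64)   # modality A: 1000 samples x 64 features
feat_b = np.random.randn(1000, 32)   # modality B: 1000 samples x 32 features

# Feature-layer fusion: concatenate into one higher-dimensional vector per
# sample, which is then fed to a single classifier.
fused = np.concatenate([feat_a, feat_b], axis=1)   # shape (1000, 96)
```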
Deep-learning-based SAR image classification is widely applied in terrain surface classification, sea ice classification, ocean monitoring and related fields. However, more data does not necessarily maximize the useful information, because of sensor non-idealities, mismatches and estimation defects; Longbotham et al. confirmed this conclusion. A well-designed image selection algorithm can therefore both maximize the extracted information and save computation.
In the prior art, Chlaily et al. propose an image selection algorithm based on decision-layer fusion of heterogeneous sensors, and Guerriro et al. propose a SAR image selection algorithm based on fused detection of SAR images and radar sequences. Fusing data at the feature level is more effective because the feature set contains more information than the decision set; however, no existing image selection algorithm fuses at the feature level.
The key to designing an image selection algorithm is to establish a relationship model between corresponding pixels of different images. Hedhli et al. establish the relationship between corresponding pixels as a multi-layered Markov model when fusing SAR images with optical images; Tuia et al. model the relationship as a conditional random field; Kang et al. establish the relationship using a graph network.
Disclosure of Invention
The invention mainly relates to an image selection algorithm with feature-layer fusion, in which the relationship between corresponding pixels of different images is modelled as a Markov model, and the belief propagation algorithm is used so that computing the joint statistical distribution of the sensors is avoided and the computation of the image selection algorithm is simplified. Mutual information, as an information-theoretic metric, is used to describe the correlation of the estimates accurately. To the inventors' knowledge, no other work addresses the image selection problem with feature-layer fusion.
The present invention is directed to solving at least one of the problems of the prior art. To this end, the invention discloses an image selection method based on deep learning SAR image classification, which comprises the following steps:
initializing an image to be analyzed;
introducing a factor graph to describe the relationship between corresponding pixels in different images, and modelling the relationship between the corresponding pixels of the different images as a Markov model;
using the belief propagation algorithm, so that computing the joint statistical distribution of the sensors is avoided and the computation of the image selection algorithm is simplified, wherein the belief propagation algorithm, as a message-passing mechanism, reduces the message update rule to a linear fusion model;
determining the feature correlation between the selected image and the other images, and determining the sensor-fusion capacity to complete the image selection process.
Still further, the method further comprises: feature-layer fusion can be expressed as a maximum a posteriori probability (MAP) estimate,
where S denotes the labels of the corresponding pixels of the N images, and Y = [y_1, …, y_N] contains the features of the corresponding pixels of the different images.
Further, a factor graph is introduced to describe the relationship between corresponding pixels in different images, and the posterior distribution p(S|Y) is decomposed into univariate and pairwise terms,
where each univariate term Φ_n(S_n) models the contribution of S_n to the joint distribution, and each pairwise term Ψ_ij(S_i, S_j) represents the interdependence of S_i and S_j along the edge connecting them in the graph.
Still further, the method further comprises: the belief propagation algorithm, as a message-passing mechanism, simplifies the message update rule into a linear fusion model, with L_n and θ defined as follows:
for each pixel l,
θ = A_l L_n, (5)
where A_l is a 1 × N vector whose elements a_i represent the relationship between one image and the other images, and L_n is an N × 1 vector.
Further, the sensor-fusion capacity is defined by the mutual information between θ and L_n:
where l is the pixel index.
Further, the capacity of a multiple-input single-output (MISO) system with a per-pixel power constraint is obtained,
where M is the number of selected images and does not exceed N, σ² is the power of the additive white Gaussian noise, a_i is the i-th element of A_l, and p_i is the power per pixel.
Further, since p_i is set to be constant, equation (7) becomes
where a_i represents the feature correlation between image n and the other images, computed using canonical correlation analysis (CCA), and the remaining quantity is the average signal-to-noise ratio of each image.
The invention further discloses a system for image selection based on deep learning SAR image classification, comprising a processor and a machine-readable storage medium connected to the processor. The machine-readable storage medium stores programs, instructions or code, and the processor executes the programs, instructions or code in the machine-readable storage medium to implement the above image selection method based on deep learning SAR image classification.
Compared with the prior art, the invention has the beneficial effect that the adopted algorithm greatly reduces the amount of computation while achieving optimal image classification performance.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. In the drawings, like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a logic flow diagram of the present invention.
Fig. 2 is a process of an image selection method according to an embodiment of the invention.
Detailed Description
Example one
As shown in fig. 1, the present embodiment discloses an image selection method based on deep learning SAR image classification, which includes the following steps:
step 1, initializing an image to be analyzed;
step 2, introducing a factor graph to describe the relationship between corresponding pixels in different images, and modelling the relationship between the corresponding pixels of the different images as a Markov model (a minimal sketch of such a per-pixel model is given after this list);
step 3, using the belief propagation algorithm, so that computing the joint statistical distribution of the sensors is avoided and the computation of the image selection algorithm is simplified, wherein the belief propagation algorithm, as a message-passing mechanism, reduces the message update rule to a linear fusion model;
step 4, determining the feature correlation between the selected image and the other images, and determining the sensor-fusion capacity.
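For one pixel location, the Markov model of step 2 links the N corresponding pixels (one per image) into a small fully connected graph: each node carries a unary potential from its own image and each pair of nodes carries a pairwise potential. A minimal sketch of such a per-pixel model, with assumed potential forms (the Potts-style table below is an illustration, not the patent's choice):

```python
import itertools
import numpy as np

def build_pixel_graph(unary_probs):
    """unary_probs: (N, K) array; row n holds the class probabilities that the
    classifier of image n assigns to this pixel location. Returns unary and
    pairwise potentials of a fully connected N-node Markov model over the
    corresponding pixels (assumed forms, for illustration only)."""
    N, K = unary_probs.shape
    # Unary potentials Phi_n(S_n): the per-image class likelihoods.
    phi = {n: unary_probs[n] for n in range(N)}
    # Pairwise potentials Psi_ij(S_i, S_j): an assumed Potts-style table that
    # favours corresponding pixels taking the same label.
    same, diff = 1.0, 0.2
    psi_table = np.full((K, K), diff) + np.eye(K) * (same - diff)
    psi = {(i, j): psi_table for i, j in itertools.combinations(range(N), 2)}
    return phi, psi
```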
Still further, the step 2 further comprises: feature-layer fusion can be expressed as a maximum a posteriori probability (MAP) estimate,
where S denotes the labels of the corresponding pixels of the N images, and Y = [y_1, …, y_N] contains the features of the corresponding pixels of the different images.
Further, a factor graph is introduced to describe the relationship between corresponding pixels in different images, and the posterior distribution p(S|Y) is decomposed into univariate and pairwise terms,
where each univariate term Φ_n(S_n) models the contribution of S_n to the joint distribution, and each pairwise term Ψ_ij(S_i, S_j) represents the interdependence of S_i and S_j along the edge connecting them in the graph.
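Under this factorization, the unnormalized log-posterior of a label assignment S = (S_1, …, S_N) for one pixel location is the sum of the unary log-potentials and the pairwise log-potentials. A short sketch, reusing the hypothetical potentials built above:

```python
import numpy as np

def log_posterior(labels, phi, psi):
    """labels: length-N sequence of class indices (S_1, ..., S_N) for the
    corresponding pixels; phi, psi: unary and pairwise potentials as built in
    the sketch above (hypothetical interface). Returns log p(S|Y) up to an
    additive normalizing constant."""
    lp = sum(np.log(phi[n][labels[n]]) for n in phi)            # unary terms
    lp += sum(np.log(psi[i, j][labels[i], labels[j]])           # pairwise terms
              for (i, j) in psi)
    return lp
```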
Still further, the step 3 further comprises: the belief propagation algorithm, as a message-passing mechanism, simplifies the message update rule into a linear fusion model, with L_n and θ defined as follows:
for each pixel l,
θ = A_l L_n, (5)
where A_l is a 1 × N vector whose elements a_i represent the relationship between one image and the other images, and L_n is an N × 1 vector.
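The linear-fusion form of equation (5) is a single dot product per pixel: given the per-image statistics L_n (an N × 1 vector for pixel l) and the weight vector A_l (1 × N), the fused statistic θ follows directly. The numbers below are made-up illustrations:

```python
import numpy as np

N = 4                                  # number of images
L_n = np.array([0.8, 1.1, -0.3, 0.5])  # assumed per-image statistics for one pixel (N x 1)
A_l = np.array([0.9, 0.7, 0.2, 0.6])   # assumed weights a_i, one per image (1 x N)

theta = A_l @ L_n                      # equation (5): theta = A_l * L_n
```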
Further, the sensor-fusion capacity is defined by the mutual information between θ and L_n:
where l is the pixel index.
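If θ and L_n are modelled as jointly Gaussian — an assumption made here only to obtain a closed form, not stated in the text — the mutual information I(θ; L_n) can be estimated from sample covariances:

```python
import numpy as np

def gaussian_mutual_information(theta, L):
    """theta: (T,) samples of the fused statistic for one pixel position;
    L: (T, N) samples of the per-image statistics L_n. Returns I(theta; L_n)
    in bits under a jointly-Gaussian assumption."""
    joint = np.column_stack([theta, L])          # (T, N + 1) joint samples
    cov = np.cov(joint, rowvar=False)
    var_theta = cov[0, 0]                        # variance of theta
    det_L = np.linalg.det(cov[1:, 1:])           # determinant of Cov(L_n)
    return 0.5 * np.log2(var_theta * det_L / np.linalg.det(cov))
```

When θ is an exact deterministic function of L_n the joint covariance is singular and the estimate diverges; in practice θ carries additive noise, which keeps the determinant positive.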
Further, the capacity of a multiple-input single-output (MISO) system with a per-pixel power constraint is obtained,
where M is the number of selected images and does not exceed N, σ² is the power of the additive white Gaussian noise, a_i is the i-th element of A_l, and p_i is the power per pixel.
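Equation (7) itself is not reproduced in this text; one common closed form for such a capacity, used below purely as an assumption, is C = log2(1 + Σ a_i² p_i / σ²) over the M selected images:

```python
import numpy as np

def miso_capacity(a, p, sigma2):
    """a: correlation coefficients a_i of the M selected images; p: per-pixel
    powers p_i; sigma2: power of the additive white Gaussian noise. Assumed
    standard form C = log2(1 + sum(a_i^2 * p_i) / sigma2); this is a stand-in
    for the unreproduced equation (7), not the patent's own formula."""
    a, p = np.asarray(a, dtype=float), np.asarray(p, dtype=float)
    return np.log2(1.0 + np.sum(a**2 * p) / sigma2)
```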
Further, since p_i is set to be constant, equation (7) becomes
where a_i represents the feature correlation between image n and the other images, computed using canonical correlation analysis (CCA), and the remaining quantity is the average signal-to-noise ratio of each image.
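The correlation coefficient a_i can be obtained, for example, from the first canonical pair of a CCA between the per-pixel deep features of image n and those of another image. A sketch with scikit-learn; the feature shapes are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def feature_correlation(feat_n, feat_m):
    """feat_n, feat_m: (P, D) arrays of per-pixel deep features from two
    images of the same scene (P pixels, D feature channels). Returns the
    first canonical correlation, used here as the coefficient a_i (sketch)."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(feat_n, feat_m)        # first canonical variates
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]
```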
The invention further discloses a system for image selection based on deep learning SAR image classification, comprising a processor and a machine-readable storage medium connected to the processor. The machine-readable storage medium stores programs, instructions or code, and the processor executes the programs, instructions or code in the machine-readable storage medium to implement the above image selection method based on deep learning SAR image classification.
Example two
The process of the image selection method of the present invention is illustrated in fig. 2.
First, let image 1 be the reference image. If M = 2, the CCA between image 1 and image 2 is computed to obtain a_i. If M = 3, the CCA between image 1 and the other images is computed to obtain a_i. Then, let image 2 be the reference image. If M = 2, the CCA between image 2 and image 3 is computed to obtain a_i.
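Written as a loop, the procedure of Fig. 2 fixes a reference image, computes the CCA correlation a_i against each remaining candidate, evaluates the resulting capacity, and keeps the subset with the largest capacity. The sketch below reuses the hypothetical feature_correlation and miso_capacity helpers from above; the per-pixel power, noise power and reference weight are assumptions:

```python
from itertools import combinations

def select_subset(features, M, p=1.0, sigma2=1.0):
    """features: list of N per-image feature arrays of shape (P, D) for the
    same scene. Returns the indices of the M images whose assumed fusion
    capacity is largest (sketch; not the patented implementation)."""
    N = len(features)
    best, best_cap = None, float("-inf")
    for ref in range(N):                                   # each image in turn as reference
        for others in combinations([i for i in range(N) if i != ref], M - 1):
            a = [feature_correlation(features[ref], features[j]) for j in others]
            # The reference image's own weight is assumed to be 1.
            cap = miso_capacity([1.0] + a, [p] * M, sigma2)
            if cap > best_cap:
                best, best_cap = (ref,) + others, cap
    return best
```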
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications may be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (8)
1. A method for image selection based on deep learning SAR image classification, the method comprising:
initializing an image to be analyzed;
introducing a factor graph to describe the relationship between corresponding pixels in different images, and modelling the relationship between the corresponding pixels of the different images as a Markov model;
using the belief propagation algorithm, so that computing the joint statistical distribution of the sensors is avoided and the computation of the image selection algorithm is simplified, wherein the belief propagation algorithm, as a message-passing mechanism, reduces the message update rule to a linear fusion model;
determining the feature correlation between the selected image and the other images, and determining the sensor-fusion capacity to complete the image selection process.
2. The method for image selection based on deep learning SAR image classification as claimed in claim 1, wherein the method further comprises: feature-layer fusion can be expressed as a maximum a posteriori probability (MAP) estimate,
where S denotes the labels of the corresponding pixels of the N images, and Y = [y_1, …, y_N] contains the features of the corresponding pixels of the different images.
3. The method of image selection based on deep learning SAR image classification as claimed in claim 2, characterized in that a factor graph is introduced to describe the relationship between corresponding pixels in different images, and the posterior distribution p(S|Y) is decomposed into univariate and pairwise terms,
where each univariate term Φ_n(S_n) models the contribution of S_n to the joint distribution, and each pairwise term Ψ_ij(S_i, S_j) represents the interdependence of S_i and S_j along the edge connecting them in the graph.
4. The method for image selection based on deep learning SAR image classification as claimed in claim 1, wherein the method further comprises: the belief propagation algorithm, as a message-passing mechanism, simplifies the message update rule into a linear fusion model, with L_n and θ defined as follows:
for each pixel l,
θ = A_l L_n, (5)
where A_l is a 1 × N vector whose elements a_i represent the relationship between one image and the other images, and L_n is an N × 1 vector.
6. The method of claim 5, wherein the capacity of a multiple-input single-output (MISO) system with a per-pixel power constraint is obtained,
where M is the number of selected images and does not exceed N, σ² is the power of the additive white Gaussian noise, a_i is the i-th element of A_l, and p_i is the power per pixel.
7. The method for image selection based on deep learning SAR image classification as claimed in claim 6, characterized in that, since p_i is set to be constant, equation (7) becomes
8. A system for image selection based on deep learning SAR image classification, comprising a processor, a machine readable storage medium connected with the processor, the machine readable storage medium for storing a program, instructions or code, the processor for executing the program, instructions or code in the machine readable storage medium to implement the method for image selection based on deep learning SAR image classification of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110802002.2A CN113505710B (en) | 2021-07-15 | 2021-07-15 | Image selection method and system based on deep learning SAR image classification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110802002.2A CN113505710B (en) | 2021-07-15 | 2021-07-15 | Image selection method and system based on deep learning SAR image classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113505710A true CN113505710A (en) | 2021-10-15 |
CN113505710B CN113505710B (en) | 2022-06-03 |
Family
ID=78013009
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110802002.2A Active CN113505710B (en) | 2021-07-15 | 2021-07-15 | Image selection method and system based on deep learning SAR image classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113505710B (en) |
- 2021-07-15: CN application CN202110802002.2A (patent CN113505710B), status: Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10042048B1 (en) * | 2014-02-20 | 2018-08-07 | National Technology & Engineering Solutions Of Sandia, Llc | Superpixels for improved structure and terrain classification using multiple synthetic aperture radar image products |
CN104217436A (en) * | 2014-09-16 | 2014-12-17 | 西安电子科技大学 | SAR image segmentation method based on multiple feature united sparse graph |
WO2016145379A1 (en) * | 2015-03-12 | 2016-09-15 | William Marsh Rice University | Automated Compilation of Probabilistic Task Description into Executable Neural Network Specification |
CN105956622A (en) * | 2016-04-29 | 2016-09-21 | 武汉大学 | Polarized SAR image classification method based on multi-characteristic combined modeling |
CN106295714A (en) * | 2016-08-22 | 2017-01-04 | 中国科学院电子学研究所 | A kind of multi-source Remote-sensing Image Fusion based on degree of depth study |
CN107145860A (en) * | 2017-05-05 | 2017-09-08 | 西安电子科技大学 | Classification of Polarimetric SAR Image method based on spatial information and deep learning |
CN107728111A (en) * | 2017-09-22 | 2018-02-23 | 合肥工业大学 | SAR image joint CFAR detection methods based on spatial correlation characteristic |
CN108537102A (en) * | 2018-01-25 | 2018-09-14 | 西安电子科技大学 | High Resolution SAR image classification method based on sparse features and condition random field |
CN109344880A (en) * | 2018-09-11 | 2019-02-15 | 天津理工大学 | SAR image classification method based on multiple features and complex nucleus |
CN110781830A (en) * | 2019-10-28 | 2020-02-11 | 西安电子科技大学 | SAR sequence image classification method based on space-time joint convolution |
Non-Patent Citations (2)
Title |
---|
J. KARVONEN: "Independent component analysis for sea ice SAR image classification", 《SCANNING THE PRESENT AND RESOLVING THE FUTURE》 *
WANG HAIJIANG: "Research on Polarimetric SAR Image Classification Methods", 《China Doctoral Dissertations Full-text Database, Information Science and Technology Series》 *
Also Published As
Publication number | Publication date |
---|---|
CN113505710B (en) | 2022-06-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |