CN106897681A - Remote sensing image comparative analysis method and system - Google Patents
Remote sensing image comparative analysis method and system
- Publication number
- CN106897681A CN106897681A CN201710080906.2A CN201710080906A CN106897681A CN 106897681 A CN106897681 A CN 106897681A CN 201710080906 A CN201710080906 A CN 201710080906A CN 106897681 A CN106897681 A CN 106897681A
- Authority
- CN
- China
- Prior art keywords
- remote sensing images
- segmentation
- comparative analysis
- convolutional layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Probability & Statistics with Applications (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a remote sensing image comparative analysis method and system. The method includes: S1: identifying and segmenting, by means of a fully convolutional network, the ground objects in two remote sensing images of the same area captured at different times, to obtain segmented images of all the ground objects in the two remote sensing images, where the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers; S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result. The beneficial effects of the invention are that the technical scheme tolerates disturbing factors such as atmosphere and season well and achieves a high discrimination rate for dense ground objects.
Description
Technical field
The present invention relates to the technical field of remote sensing image comparative analysis, and in particular to a remote sensing image comparative analysis method and system.
Background technology
Comparative analysis of remote sensing images acquired at different times, also called change detection, is a key technology of geographic information systems and plays an important role in land reallocation, disaster prevention and control, unmanned aerial vehicles, satellites, unmanned boats and resource monitoring. Traditional pixel-based comparison algorithms cannot effectively exclude the interference present in remote sensing images and cannot compare remote sensing images by ground object category. Existing image comparison methods compare the two images directly, and the comparison results are coarse and inaccurate.
Summary of the invention
The technical problem to be solved by the present invention is that traditional pixel-based comparison algorithms cannot effectively exclude the interference present in remote sensing images, cannot compare remote sensing images by ground object category, and produce coarse and inaccurate comparison results.
The technical scheme adopted by the present invention to solve the above technical problem is as follows:
A remote sensing image comparative analysis method, including:
S1: identifying and segmenting, by means of a fully convolutional network, the ground objects in two remote sensing images of the same area captured at different times, to obtain segmented images of all the ground objects in the two remote sensing images, where the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers;
S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result.
The beneficial effects of the invention are as follows: the parameters trained by the convolutional network are used to extract features from the images to be compared and to classify them pixel by pixel; the classification result is an image in which different ground objects are filled with different pixel values, so that different types of ground objects are separated while the boundaries between them are marked. The remote sensing images of the same area at different times are then analyzed on the basis of these classification results, and the comparison shows whether a location has changed between the two acquisition times. The technical scheme tolerates disturbing factors such as atmosphere and season well and achieves a high discrimination rate for dense ground objects.
On the basis of the above technical scheme, the present invention can be further improved as follows.
Preferably, step S1 includes:
S11: putting the two remote sensing images into the fully convolutional network respectively;
S12: for each of the two remote sensing images, repeatedly fusing the coordinate-point-labeled image output by at least one of the convolutional layer groups with the coordinate-point-labeled image output after all of the convolutional layer groups and at least one of the deconvolutional layers, to obtain a fused image;
S13: for each of the two remote sensing images, repeatedly fusing the remote sensing image with the coordinate-point-labeled image output by passing the fused image through at least one of the deconvolutional layers, to obtain a ground object classification probability map;
S14: segmenting the ground objects in the two ground object classification probability maps respectively by means of a CRF probability model, to obtain the segmented images of all the ground objects in the two remote sensing images.
The beneficial effect of the above further scheme is that the fully convolutional network replaces the fully connected layers of a traditional network with convolutions and adds deconvolutional layers, and the outputs of the first several layers of the network are fused with the final output of the network, so that more image information is retained; the CRF probability model separates ground object targets from the background and yields a segmented image of each ground object for further comparative analysis.
Preferably, in step S2, the segmented images of the same ground object in the two remote sensing images are analyzed one by one through a contrast neural network to obtain the comparative analysis result.
Preferably, the contrast neural network is a 2-channel network or a Siamese network.
The beneficial effect of the above further scheme is that, for the comparative analysis of two remote sensing images of the same area at different times, the 2-channel network puts the two segmented result images directly into the neural network as two channels for comparison, while the Siamese network consists of two sub-networks with shared parameters, the two result images serve as the inputs of the two sub-networks respectively, and the comparison result is obtained after feature extraction.
A remote sensing image comparative analysis system, including:
a segmentation module, configured to identify and segment, by means of a fully convolutional network, the ground objects in two remote sensing images of the same area captured at different times, to obtain segmented images of all the ground objects in the two remote sensing images, where the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers;
a contrast module, configured to analyze, one by one, the segmented images of the same ground object in the two remote sensing images, to obtain a comparative analysis result.
Preferably, the segmentation module includes:
a putting-in submodule, configured to put the two remote sensing images into the fully convolutional network respectively;
a first fusion submodule, configured to, for each of the two remote sensing images, repeatedly fuse the coordinate-point-labeled image output by at least one of the convolutional layer groups with the coordinate-point-labeled image output after all of the convolutional layer groups and at least one of the deconvolutional layers, to obtain a fused image;
a second fusion submodule, configured to, for each of the two remote sensing images, repeatedly fuse the remote sensing image with the coordinate-point-labeled image output by passing the fused image through at least one of the deconvolutional layers, to obtain a ground object classification probability map;
a segmentation submodule, configured to segment the ground objects in the two ground object classification probability maps respectively by means of a CRF probability model, to obtain the segmented images of all the ground objects in the two remote sensing images.
Preferably, the contrast module is specifically configured to analyze, one by one, the segmented images of the same ground object in the two remote sensing images through a contrast neural network, to obtain the comparative analysis result.
Preferably, the contrast neural network is a 2-channel network or a Siamese network.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a remote sensing image comparative analysis method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a remote sensing image comparative analysis method provided by another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a remote sensing image comparative analysis system provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a remote sensing image comparative analysis system provided by another embodiment of the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples are given only to explain the present invention and are not intended to limit the scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides a remote sensing image comparative analysis method, including:
S1: identifying and segmenting, by means of a fully convolutional network, the ground objects in two remote sensing images of the same area captured at different times, to obtain segmented images of all the ground objects in the two remote sensing images, where the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers;
S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result.
Specifically, in this embodiment, the parameters trained by the convolutional network are used to extract features from the images to be compared and to classify them pixel by pixel; the classification result is an image in which different ground objects are filled with different pixel values, so that different types of ground objects are separated while the boundaries between them are marked. The remote sensing images of the same area at different times are then analyzed on the basis of these classification results, and the comparison shows whether a location has changed between the two acquisition times. This method tolerates disturbing factors such as atmosphere and season well, achieves a high discrimination rate for dense ground objects, and can adapt to remote sensing images of different scales. A comparison of two per-pixel classification results is sketched below.
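The following NumPy sketch illustrates, under stated assumptions, how two co-registered per-pixel classification results could be compared to locate change; it is not the patented method itself. The function name `change_map` and the toy 4x4 label values are hypothetical, and real use would require the two segmentation outputs to be aligned to the same grid.

```python
import numpy as np

def change_map(labels_t1: np.ndarray, labels_t2: np.ndarray) -> np.ndarray:
    """Compare two co-registered per-pixel classification results.

    Each input is an H x W array in which every ground-object class is
    filled with its own integer value (the classification result described
    above).  Returns a boolean H x W mask that is True wherever the class
    of a pixel differs between the two acquisition times.
    """
    assert labels_t1.shape == labels_t2.shape, "images must be co-registered"
    return labels_t1 != labels_t2

# Hypothetical 4x4 label maps: 1 = building, 2 = vegetation.
t1 = np.array([[1, 1, 2, 2],
               [1, 1, 2, 2],
               [2, 2, 2, 2],
               [2, 2, 2, 2]])
t2 = np.array([[1, 1, 2, 2],
               [1, 1, 1, 2],   # one vegetation pixel became building
               [2, 2, 1, 2],   # and another one below it
               [2, 2, 2, 2]])
print(change_map(t1, t2).sum())  # -> 2 changed pixels
```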
In the above embodiment, various data augmentation methods are used during the training of the convolutional network, so that a high training accuracy is reached with relatively little labeled data. The augmentation methods used include rotation and mirroring of the data: mirroring or rotating the images effectively enlarges the data set, improves the quality of network training and helps prevent overfitting. A sketch of such augmentation follows.
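A minimal augmentation sketch in NumPy, assuming each training sample is an image/label pair of arrays; the eightfold rotation-plus-mirror scheme shown here is an assumption, since the description above names only rotation and mirroring without fixing the exact variants.

```python
import numpy as np

def augment(image: np.ndarray, label: np.ndarray):
    """Yield rotated and mirrored copies of an image/label pair.

    Rotating by 0/90/180/270 degrees and additionally mirroring each
    rotation turns one annotated remote sensing tile into eight training
    samples, enlarging the data set without new annotation work.
    """
    for k in range(4):                                   # four rotations
        rot_img, rot_lbl = np.rot90(image, k), np.rot90(label, k)
        yield rot_img, rot_lbl
        yield np.fliplr(rot_img), np.fliplr(rot_lbl)     # mirrored copy

# Example: eight samples from a single hypothetical 256x256 RGB tile.
tile = np.zeros((256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
print(len(list(augment(tile, mask))))  # -> 8
```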
As shown in Fig. 2, in another embodiment, step S1 in Fig. 1 includes:
S11: putting the two remote sensing images into the fully convolutional network respectively;
S12: for each of the two remote sensing images, repeatedly fusing the coordinate-point-labeled image output by at least one convolutional layer group with the coordinate-point-labeled image output after all convolutional layer groups and at least one deconvolutional layer, to obtain a fused image;
S13: for each of the two remote sensing images, repeatedly fusing the remote sensing image with the coordinate-point-labeled image output by passing the fused image through at least one deconvolutional layer, to obtain a ground object classification probability map;
S14: segmenting the ground objects in the two ground object classification probability maps respectively by means of a CRF probability model, to obtain the segmented images of all the ground objects in the two remote sensing images.
Specifically, in this embodiment, the fully convolutional network replaces the fully connected layers of a traditional network with convolutions and adds deconvolutional layers, and the outputs of the first several layers of the network are fused with the final output of the network, so that more image information is retained. The CRF probability model then separates ground object targets from the background and yields a segmented image of each ground object for further comparative analysis. The CRF (conditional random field) combines the characteristics of the maximum entropy model and the hidden Markov model; it is an undirected graphical model that has achieved good results in recent years in sequence labeling tasks such as word segmentation, part-of-speech tagging and named entity recognition. The CRF is a typical discriminative model. A sketch of such a network is given below.
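The following PyTorch sketch illustrates the kind of network described above under stated assumptions: "dilated convolutional layer" stands in for the patent's alternating layer inside each convolutional layer group, element-wise addition is used as the fusion operation, and the number of groups, channel counts and class count are illustrative. It is not the patented architecture itself, and the CRF refinement step is omitted.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Sketch of the segmentation backbone described above.

    Each "convolutional layer group" alternates an ordinary convolution with
    a dilated convolution; a transposed convolution upsamples the coarse
    output, which is then fused (here by element-wise addition) with the
    feature map of an earlier group, FCN-style.
    """
    def __init__(self, in_ch=3, num_classes=5):
        super().__init__()
        self.group1 = nn.Sequential(            # ordinary conv + dilated conv
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2))
        self.group2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2))
        self.score_low = nn.Conv2d(32, num_classes, 1)   # skip from group1
        self.score_high = nn.Conv2d(64, num_classes, 1)  # from group2
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up_final = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)

    def forward(self, x):
        f1 = self.group1(x)                  # 1/2 resolution features
        f2 = self.group2(f1)                 # 1/4 resolution features
        fused = self.up2(self.score_high(f2)) + self.score_low(f1)  # fuse skip
        logits = self.up_final(fused)        # back to input resolution
        return torch.softmax(logits, dim=1)  # per-pixel class probability map

# probs = TinyFCN()(torch.randn(1, 3, 256, 256))  # -> shape (1, 5, 256, 256)
```

In this sketch the softmax output plays the role of the ground object classification probability map, which the CRF described above would then refine into the final segmented images.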
In step S2, the segmented images of the same ground object in the two remote sensing images are compared and analyzed one by one through a contrast neural network, and the comparative analysis result is obtained.
The contrast neural network is a 2-channel network or a Siamese network.
Specifically, in this embodiment, for the comparative analysis of two remote sensing images of the same area at different times, the 2-channel network puts the two segmented result images directly into the neural network as two channels for comparison, while the Siamese network consists of two sub-networks with shared parameters, the two result images serve as the inputs of the two sub-networks respectively, and the comparison result is obtained after feature extraction. A sketch of the Siamese variant follows.
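A minimal PyTorch sketch of the Siamese variant under stated assumptions: the segmented result images are treated as single-channel float tensors, the output is a scalar change score, and the layer sizes are illustrative; none of these details are fixed by the description above. The 2-channel variant would instead stack the two inputs along the channel axis and pass them through a single network.

```python
import torch
import torch.nn as nn

class SiameseCompare(nn.Module):
    """Sketch of a Siamese comparison network for two segmented images.

    One feature extractor with shared parameters embeds each segmentation
    result; a small head maps the concatenated pair of embeddings to a
    change score between 0 and 1.
    """
    def __init__(self, in_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(            # shared-weight branch
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(inplace=True),
                                  nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, seg_t1, seg_t2):
        f1 = self.encoder(seg_t1)                # same weights for both inputs
        f2 = self.encoder(seg_t2)
        return self.head(torch.cat([f1, f2], dim=1))  # probability of change

# score = SiameseCompare()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```

Training such a network would require pairs of segmented images labeled as changed or unchanged, which the description above does not specify; the sketch only shows the forward pass.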
As shown in Fig. 3, an embodiment of the present invention also provides a remote sensing image comparative analysis system, including:
a segmentation module 1, configured to identify and segment, by means of a fully convolutional network, the ground objects in two remote sensing images of the same area captured at different times, to obtain segmented images of all the ground objects in the two remote sensing images, where the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers;
a contrast module 2, configured to analyze the segmented images of the same ground object in the two remote sensing images, to obtain a comparative analysis result.
As shown in Fig. 4, in another embodiment, the segmentation module 1 in Fig. 3 includes:
a putting-in submodule 11, configured to put the two remote sensing images into the fully convolutional network respectively;
a first fusion submodule 12, configured to, for each of the two remote sensing images, repeatedly fuse the coordinate-point-labeled image output by at least one convolutional layer group with the coordinate-point-labeled image output after all convolutional layer groups and at least one deconvolutional layer, to obtain a fused image;
a second fusion submodule 13, configured to, for each of the two remote sensing images, repeatedly fuse the remote sensing image with the coordinate-point-labeled image output by passing the fused image through at least one deconvolutional layer, to obtain a ground object classification probability map;
a segmentation submodule 14, configured to identify and segment the ground objects in the two ground object classification probability maps by means of a CRF probability model, to obtain the segmented images of all the ground objects in the two remote sensing images.
The contrast module 2 is specifically configured to analyze, one by one, the segmented images of the same ground object in the two remote sensing images through a contrast neural network, to obtain the comparative analysis result.
The contrast neural network is a 2-channel network or a Siamese network.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (8)
1. A remote sensing image comparative analysis method, characterized by including:
S1: identifying and segmenting, by means of a fully convolutional network, the ground objects in two remote sensing images of the same area captured at different times, to obtain segmented images of all the ground objects in the two remote sensing images, wherein the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers;
S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result.
2. The remote sensing image comparative analysis method according to claim 1, characterized in that step S1 includes:
S11: putting the two remote sensing images into the fully convolutional network respectively;
S12: for each of the two remote sensing images, repeatedly fusing the coordinate-point-labeled image output by at least one of the convolutional layer groups with the coordinate-point-labeled image output after all of the convolutional layer groups and at least one of the deconvolutional layers, to obtain a fused image;
S13: for each of the two remote sensing images, repeatedly fusing the remote sensing image with the coordinate-point-labeled image output by passing the fused image through at least one of the deconvolutional layers, to obtain a ground object classification probability map;
S14: segmenting the ground objects in the two ground object classification probability maps respectively by means of a CRF probability model, to obtain the segmented images of all the ground objects in the two remote sensing images.
3. The remote sensing image comparative analysis method according to claim 1 or 2, characterized in that, in step S2, the segmented images of the same ground object in the two remote sensing images are analyzed one by one through a contrast neural network to obtain the comparative analysis result.
4. The remote sensing image comparative analysis method according to claim 3, characterized in that the contrast neural network is a 2-channel network or a Siamese network.
5. A remote sensing image comparative analysis system, characterized by including:
a segmentation module (1), configured to identify and segment, by means of a fully convolutional network, all the ground objects in two remote sensing images of the same area captured at different times, to obtain a segmented image of each ground object in the two remote sensing images, wherein the fully convolutional network includes multiple convolutional layer groups and multiple deconvolutional layers, and each convolutional layer group includes alternately arranged convolutional layers and dilated convolutional layers;
a contrast module (2), configured to analyze, one by one, the segmented images of the same ground object in the two remote sensing images, to obtain a comparative analysis result.
6. The remote sensing image comparative analysis system according to claim 5, characterized in that the segmentation module (1) includes:
a putting-in submodule (11), configured to put the two remote sensing images into the fully convolutional network respectively;
a first fusion submodule (12), configured to, for each of the two remote sensing images, repeatedly fuse the coordinate-point-labeled image output by at least one of the convolutional layer groups with the coordinate-point-labeled image output after all of the convolutional layer groups and at least one of the deconvolutional layers, to obtain a fused image;
a second fusion submodule (13), configured to, for each of the two remote sensing images, repeatedly fuse the remote sensing image with the coordinate-point-labeled image output by passing the fused image through at least one of the deconvolutional layers, to obtain a ground object classification probability map;
a segmentation submodule (14), configured to identify and segment the ground objects in the two ground object classification probability maps by means of a CRF probability model, to obtain the segmented images of all the ground objects in the two remote sensing images.
7. The remote sensing image comparative analysis system according to claim 5 or 6, characterized in that the contrast module (2) is specifically configured to analyze the segmented images of the same ground object in the two remote sensing images through a contrast neural network, to obtain the comparative analysis result.
8. The remote sensing image comparative analysis system according to claim 7, characterized in that the contrast neural network is a 2-channel network or a Siamese network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710080906.2A CN106897681B (en) | 2017-02-15 | 2017-02-15 | Remote sensing image contrast analysis method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710080906.2A CN106897681B (en) | 2017-02-15 | 2017-02-15 | Remote sensing image contrast analysis method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106897681A true CN106897681A (en) | 2017-06-27 |
CN106897681B CN106897681B (en) | 2020-11-10 |
Family
ID=59198665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710080906.2A Active CN106897681B (en) | 2017-02-15 | 2017-02-15 | Remote sensing image contrast analysis method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106897681B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108171122A (en) * | 2017-12-11 | 2018-06-15 | 南京理工大学 | The sorting technique of high-spectrum remote sensing based on full convolutional network |
CN108537824A (en) * | 2018-03-15 | 2018-09-14 | 上海交通大学 | Topological expansion method based on the enhancing of the alternately characteristic pattern of deconvolution and convolution |
CN108776805A (en) * | 2018-05-03 | 2018-11-09 | 北斗导航位置服务(北京)有限公司 | It is a kind of establish image classification model, characteristics of image classification method and device |
CN108961236A (en) * | 2018-06-29 | 2018-12-07 | 国信优易数据有限公司 | Training method and device, the detection method and device of circuit board defect detection model |
CN109409263A (en) * | 2018-10-12 | 2019-03-01 | 武汉大学 | A kind of remote sensing image city feature variation detection method based on Siamese convolutional network |
CN109711311A (en) * | 2018-12-20 | 2019-05-03 | 北京以萨技术股份有限公司 | One kind being based on dynamic human face optimal frames choosing method |
CN110570397A (en) * | 2019-08-13 | 2019-12-13 | 创新奇智(重庆)科技有限公司 | Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm |
CN111274938A (en) * | 2020-01-19 | 2020-06-12 | 四川省自然资源科学研究院 | Web-oriented dynamic monitoring method and system for high-resolution remote sensing river water quality |
WO2021077947A1 (en) * | 2019-10-22 | 2021-04-29 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and device, and storage medium |
CN112816000A (en) * | 2021-02-26 | 2021-05-18 | 华南理工大学 | Comprehensive index evaluation method and system for indoor and outdoor wind environment quality of green building group |
CN116977747A (en) * | 2023-08-28 | 2023-10-31 | 中国地质大学(北京) | Small sample hyperspectral classification method based on multipath multi-scale feature twin network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102147920A (en) * | 2011-03-02 | 2011-08-10 | 上海大学 | Shadow detection method for high-resolution remote sensing image |
CN102855759A (en) * | 2012-07-05 | 2013-01-02 | 中国科学院遥感应用研究所 | Automatic collecting method of high-resolution satellite remote sensing traffic flow information |
CN105809693A (en) * | 2016-03-10 | 2016-07-27 | 西安电子科技大学 | SAR image registration method based on deep neural networks |
- 2017
- 2017-02-15: CN application CN201710080906.2A (patent CN106897681B), status: Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102147920A (en) * | 2011-03-02 | 2011-08-10 | 上海大学 | Shadow detection method for high-resolution remote sensing image |
CN102855759A (en) * | 2012-07-05 | 2013-01-02 | 中国科学院遥感应用研究所 | Automatic collecting method of high-resolution satellite remote sensing traffic flow information |
CN105809693A (en) * | 2016-03-10 | 2016-07-27 | 西安电子科技大学 | SAR image registration method based on deep neural networks |
Non-Patent Citations (2)
Title |
---|
Zhang Fengyu, "Research on Change Detection Methods for Remote Sensing Images", China Master's Theses Full-text Database, Information Science and Technology Section *
Tang Hao, He Chu, "Fully convolutional network combined with improved conditional random field-recurrent neural network for SAR image scene classification", Journal of Computer Applications *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108171122A (en) * | 2017-12-11 | 2018-06-15 | 南京理工大学 | The sorting technique of high-spectrum remote sensing based on full convolutional network |
CN108537824A (en) * | 2018-03-15 | 2018-09-14 | 上海交通大学 | Topological expansion method based on the enhancing of the alternately characteristic pattern of deconvolution and convolution |
CN108537824B (en) * | 2018-03-15 | 2021-07-16 | 上海交通大学 | Feature map enhanced network structure optimization method based on alternating deconvolution and convolution |
CN108776805A (en) * | 2018-05-03 | 2018-11-09 | 北斗导航位置服务(北京)有限公司 | It is a kind of establish image classification model, characteristics of image classification method and device |
CN108961236B (en) * | 2018-06-29 | 2021-02-26 | 国信优易数据股份有限公司 | Circuit board defect detection method and device |
CN108961236A (en) * | 2018-06-29 | 2018-12-07 | 国信优易数据有限公司 | Training method and device, the detection method and device of circuit board defect detection model |
CN109409263A (en) * | 2018-10-12 | 2019-03-01 | 武汉大学 | A kind of remote sensing image city feature variation detection method based on Siamese convolutional network |
CN109409263B (en) * | 2018-10-12 | 2021-05-04 | 武汉大学 | Method for detecting urban ground feature change of remote sensing image based on Siamese convolutional network |
CN109711311A (en) * | 2018-12-20 | 2019-05-03 | 北京以萨技术股份有限公司 | One kind being based on dynamic human face optimal frames choosing method |
CN110570397A (en) * | 2019-08-13 | 2019-12-13 | 创新奇智(重庆)科技有限公司 | Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm |
WO2021077947A1 (en) * | 2019-10-22 | 2021-04-29 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and device, and storage medium |
JP2022509030A (en) * | 2019-10-22 | 2022-01-20 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Image processing methods, devices, equipment and storage media |
CN111274938A (en) * | 2020-01-19 | 2020-06-12 | 四川省自然资源科学研究院 | Web-oriented dynamic monitoring method and system for high-resolution remote sensing river water quality |
CN112816000A (en) * | 2021-02-26 | 2021-05-18 | 华南理工大学 | Comprehensive index evaluation method and system for indoor and outdoor wind environment quality of green building group |
CN116977747A (en) * | 2023-08-28 | 2023-10-31 | 中国地质大学(北京) | Small sample hyperspectral classification method based on multipath multi-scale feature twin network |
CN116977747B (en) * | 2023-08-28 | 2024-01-23 | 中国地质大学(北京) | Small sample hyperspectral classification method based on multipath multi-scale feature twin network |
Also Published As
Publication number | Publication date |
---|---|
CN106897681B (en) | 2020-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106897681A (en) | A kind of remote sensing images comparative analysis method and system | |
CN110443143B (en) | Multi-branch convolutional neural network fused remote sensing image scene classification method | |
CN106469299B (en) | A kind of vehicle search method and device | |
CN109829398B (en) | Target detection method in video based on three-dimensional convolution network | |
CN110363134B (en) | Human face shielding area positioning method based on semantic segmentation | |
Zhai et al. | Detecting vanishing points using global image context in a non-manhattan world | |
CN113160062B (en) | Infrared image target detection method, device, equipment and storage medium | |
CN111178120B (en) | Pest image detection method based on crop identification cascading technology | |
CN109447169A (en) | The training method of image processing method and its model, device and electronic system | |
CN107862261A (en) | Image people counting method based on multiple dimensioned convolutional neural networks | |
CN106909886B (en) | A kind of high-precision method for traffic sign detection and system based on deep learning | |
CN108960404B (en) | Image-based crowd counting method and device | |
CN106529499A (en) | Fourier descriptor and gait energy image fusion feature-based gait identification method | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN109165682A (en) | A kind of remote sensing images scene classification method merging depth characteristic and significant characteristics | |
CN110827312B (en) | Learning method based on cooperative visual attention neural network | |
CN112347933A (en) | Traffic scene understanding method and device based on video stream | |
CN110175615A (en) | The adaptive visual position recognition methods in model training method, domain and device | |
CN104657980A (en) | Improved multi-channel image partitioning algorithm based on Meanshift | |
CN114565675B (en) | Method for removing dynamic feature points at front end of visual SLAM | |
CN108921850B (en) | Image local feature extraction method based on image segmentation technology | |
CN107944437B (en) | A kind of Face detection method based on neural network and integral image | |
CN109961013A (en) | Recognition methods, device, equipment and the computer readable storage medium of lane line | |
CN104699781B (en) | SAR image search method based on double-deck anchor figure hash | |
CN104732534B (en) | Well-marked target takes method and system in a kind of image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |