
CN115223054A - Remote sensing image change detection method based on partition clustering and convolution - Google Patents

Remote sensing image change detection method based on partition clustering and convolution

Info

Publication number
CN115223054A
CN115223054A (application CN202210836780.8A)
Authority
CN
China
Prior art keywords
remote sensing
image
sensing image
stage
spots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210836780.8A
Other languages
Chinese (zh)
Inventor
罗春林
周红斌
李华
张治军
张成程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Survey And Planning Institute Of State Forestry And Grassland Administration
Original Assignee
Southwest Survey And Planning Institute Of State Forestry And Grassland Administration
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Survey And Planning Institute Of State Forestry And Grassland Administration filed Critical Southwest Survey And Planning Institute Of State Forestry And Grassland Administration
Priority to CN202210836780.8A priority Critical patent/CN115223054A/en
Publication of CN115223054A publication Critical patent/CN115223054A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/16Image acquisition using multiple overlapping images; Image stitching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a remote sensing image change detection method based on partition clustering and convolution, which comprises the following steps: acquiring optical satellite remote sensing images of two periods, preprocessing and inspecting them, automatically partitioning, asynchronously clustering and marking each partitioned image, cropping the clustered images to produce training samples, training the model, inputting the clustered partitioned images into the model and performing semantic segmentation, converting the semantic segmentation results into vector patches, screening the vector patches by category, reversely splicing the two-period vector patches, breaking the multipart patches of the reversely spliced two-period vector patches apart, extracting change patches of the corresponding type, smoothing the boundaries of the change patches, splitting them along the management boundaries of the corresponding type, and removing fine fragmented patches. The remote sensing image change detection method based on partition clustering and convolution provided by the invention can rapidly detect changes over large areas of remote sensing imagery, with fast pixel partition clustering and a high degree of automation.

Description

Remote sensing image change detection method based on partition clustering and convolution
Technical Field
The invention relates to the technical field of optical satellite remote sensing, in particular to a remote sensing image change detection method based on partition clustering and convolution.
Background
The optical satellite remote sensing image is an important medium for acquiring surface information. Because satellite imagery carries a large amount of information, covers wide areas and has a short revisit period, comparative analysis of remote sensing images from two periods has become an important means of obtaining surface change information. For small areas, changed regions can be obtained by manual visual interpretation. However, because remote sensing images often cover large areas, obtaining change information over a wide range by visual interpretation involves an excessive workload: interpreting the forest resource changes of a single county from remote sensing imagery typically takes one interpreter 2 to 5 working days, and acquiring the forest resource changes of a larger prefecture-level city can take a single interpreter 10 to 20 working days. Moreover, the precision and accuracy of visual interpretation often differ greatly with the proficiency and experience of the interpreters: some interpreters regard a region as changed while others regard the same region as a pseudo-change caused by cloud, rain, snow, fog and the like, so the interpretation results ultimately deviate.
With the development of computer technology, researchers have gradually proposed various computer-aided change detection methods. Early remote sensing image change detection methods performed change analysis by clustering and by comparing pixel values. In practice, however, the images of the two periods are often misaligned, interference factors such as cloud, rain, snow and fog often cause a large number of pseudo-changes, and the operating parameters usually need to be adjusted for different image types and models, so these methods are difficult to apply to production activities at scale. Nevertheless, traditional remote sensing image change detection methods have the advantages of fast execution and of not requiring model training samples.
In recent years, with the development of deep learning technology, remote sensing image change detection has advanced greatly and has gradually moved in the direction of deep learning, and some classical convolutional neural network models have been applied to the field. Existing deep learning models improve the precision and accuracy of part of the change detection process, but they depend on model training samples. In actual production activities, the sources of remote sensing imagery are not uniform, and optical imagery is easily covered by cloud, snow, fog and the like; including such covered areas when training an existing deep learning model reduces the training accuracy and may even prevent the model from converging. Model training also places very high demands on technicians, and ordinary personnel often find it difficult to judge the appropriate scale of training. In addition, existing deep learning models have too many layers, place high requirements on computer hardware and training samples, and consume considerable training time and effort, and the increased depth makes gradients prone to vanishing or exploding during training. It is therefore necessary to design a remote sensing image change detection method based on partition clustering and convolution.
Disclosure of Invention
The invention aims to provide a remote sensing image change detection method based on partition clustering and convolution that can rapidly detect changes over large areas of remote sensing imagery, with fast pixel partition clustering, a high degree of automation and low hardware requirements.
In order to achieve the purpose, the invention provides the following scheme:
a remote sensing image change detection method based on partition clustering and convolution comprises the following steps:
Step 1: acquiring optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months, and preprocessing and quality-inspecting the optical satellite remote sensing images;
Step 2: obtaining the valid common extent of the two preprocessed and inspected optical satellite remote sensing images, and calculating the number of required partitions and the number of sub-processes that can run simultaneously;
Step 3: according to the number of sub-processes that can run simultaneously, starting multiple processes to asynchronously cluster each partitioned image of the two-period optical satellite remote sensing images, and during the asynchronous clustering applying a deep neural network to detect and mark areas covered by cloud, snow and rain as well as invalid pixel data;
Step 4: building a pyramid scene parsing network model, automatically cropping the clustered images to produce training samples and validation data, and training the pyramid scene parsing network model with the training samples and validation data;
Step 5: processing all clustered partitioned images, automatically removing the marked invalid data, inputting the remaining data into the trained model, performing semantic segmentation with the trained model to obtain a semantic segmentation result, and converting the result into vector patches;
Step 6: screening the vector patches by category, reversely splicing the two-period vector patches, breaking the multipart patches of the reversely spliced two-period vector patches apart, removing unqualified patches, extracting change patches of the corresponding type, smoothing the boundaries of the change patches, splitting the smoothed patches along the management boundaries of the corresponding type, and removing fine fragmented patches to obtain the final change patches.
Optionally, in step 1, the optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months are acquired and are preprocessed and quality-inspected, specifically:
acquiring optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months, where the optical satellite remote sensing image acquired in the earlier period is called the earlier remote sensing image X1 and the optical satellite remote sensing image acquired in the later period is called the later remote sensing image X2;
performing geometric correction, registration, fusion and enhancement on the acquired earlier remote sensing image X1 and later remote sensing image X2, converting the pixel data type of X1 and X2 into 8-bit integers in the three RGB bands, and, after conversion, transforming X1 and X2 into a Gauss projection coordinate system in the same zone and storing them on a storage device in TIF format;
reading the earlier remote sensing image X1 and later remote sensing image X2 from the storage device and inspecting them for image offset and for cloud, snow and fog coverage: f points (f ≥ 150) are sampled from each of X1 and X2, and the number of points covered by dense cloud, snow or rain is denoted f1; if f1/f ≥ 0.45 the inspection fails and the imagery is re-acquired; within the common range of X1 and X2, k ground objects (k ≥ 100) are sampled uniformly, and for the same ground object the number whose offset between X1 and X2 exceeds 3 pixels is denoted k3, the number exceeding 5 pixels k5, and the number exceeding 8 pixels k8; if k5/k ≥ 0.01, or k3/k ≥ 0.1, or k8 > 0, the inspection fails and X1 and X2 are preprocessed again.
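A minimal Python sketch of this acceptance test is given below, assuming the sampled counts f, f1, k, k3, k5 and k8 have already been collected by the inspector; the function name and interface are illustrative and not part of the patent.

```python
def passes_quality_inspection(f, f1, k, k3, k5, k8):
    """Return True when the two-period imagery passes the step-1 inspection.

    f  : sampled points per image (f >= 150)
    f1 : sampled points covered by dense cloud, snow or rain
    k  : uniformly sampled ground objects (k >= 100)
    k3 : objects whose offset between the two periods exceeds 3 pixels
    k5 : objects whose offset exceeds 5 pixels
    k8 : objects whose offset exceeds 8 pixels
    """
    if f < 150 or k < 100:
        raise ValueError("sample sizes are below the minimums stated in step 1")
    if f1 / f >= 0.45:                                   # too much cloud/snow/rain: re-acquire the imagery
        return False
    if k5 / k >= 0.01 or k3 / k >= 0.1 or k8 > 0:        # registration offsets too large: re-preprocess
        return False
    return True
```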
Optionally, in step 2, the valid common extent of the two preprocessed and inspected optical satellite remote sensing images is obtained, the number of required partitions and the number of sub-processes that can run simultaneously are calculated, and the images are partitioned, specifically:
obtaining the extent of the earlier remote sensing image X1 as m1 rows and n1 columns and the extent of the later remote sensing image X2 as m2 rows and n2 columns; traversing the m1 rows and n1 columns of X1, with the RGB pixel values at row i, column j denoted R_ij, G_ij, B_ij: if (R_ij < 5 and G_ij < 5 and B_ij < 5) or (R_ij > 251 and G_ij > 251 and B_ij > 251), the pixel is judged invalid and R_ij, G_ij, B_ij are all assigned (0, 0, 0); otherwise it is a valid pixel; the outer boundary of all valid pixels of X1 is defined as E1, and X2 is processed in the same way to form the outer boundary E2 of all its valid pixels; the intersection of E1 and E2 is defined as the common range E, which is used to clip X1 and X2, and the clipped results replace X1 and X2 as the final earlier remote sensing image X1 and the final later remote sensing image X2 on which change detection is performed, where the final X1 and the final X2 have equal height H and equal width W;
obtaining the running memory T of the computer and, according to the height H and width W of the final earlier remote sensing image X1 and the final later remote sensing image X2, dividing the final X1 and X2 into several partitioned images of identical size, with the following parameters:
L = (partition size derived from the running memory T; the formula is given only as an image in the original filing, and the description below takes L as an integer power of 2, typically 2048)
M = ⌈H / L⌉
N = ⌈W / L⌉
C = M × N
where L is the width and height of each partitioned image, M is the number of partition rows, N is the number of partition columns, and C is the number of partitioned images; the upper-left corner coordinates (x, y), the image resolution p and the coordinate system of each partitioned image are recorded;
obtaining the number of physical cores Q of the running computer and setting the number of clustering processes to start as n; after verification, the number of sub-processes that can run simultaneously is n = max{Q − 2, 2}, and when the processes are started the recorded partition start coordinates (x, y) and the partition row and column numbers M and N are passed to each sub-process.
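As an illustration, a small Python sketch of the partition-grid and worker-count calculation follows. The formula for L appears only as an image in the original filing, so L is treated here as a chosen power of two (the detailed description below takes L = 2048), and os.cpu_count() is used as a stand-in for the physical core count Q.

```python
import math
import os

def partition_grid(H, W, L=2048):
    """Partition an H x W image into L x L logical tiles (L assumed a power of two)."""
    M = math.ceil(H / L)      # number of partition rows covering the height
    N = math.ceil(W / L)      # number of partition columns covering the width
    C = M * N                 # total number of logical partitions
    return M, N, C

def subprocess_count():
    Q = os.cpu_count() or 2   # core count of the running computer (logical cores here)
    return max(Q - 2, 2)      # n = max{Q - 2, 2} concurrently running sub-processes
```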
Optionally, in step 3, according to the number of sub-processes that can run simultaneously, multiple processes are started to asynchronously cluster each partitioned image of the two-period optical satellite remote sensing images, and during the asynchronous clustering a deep neural network is applied to detect and mark areas covered by cloud, snow and rain as well as invalid pixel data, specifically:
starting n processes according to the sub-process count n to cluster the final earlier remote sensing image X1 and the final later remote sensing image X2: obtain the start coordinates (x_k, y_k) of the k-th partition, its partition row and column numbers M_k, N_k, and the width W_k and height H_k of each partition, and obtain the image resolution p, height H and width W of the final X1 and X2; read from the final X1 and X2 the remote sensing image data of M_k rows starting at x_k and N_k columns starting at y_k and convert it into a two-dimensional array, where, if logical partition k extends beyond the image range, the RGB color components of the out-of-range area of the array are assigned (0, 0, 0); continue by marking the block's image data, with the RGB pixel values at row i, column j denoted R_ij, G_ij, B_ij: if (R_ij < 5 and G_ij < 5 and B_ij < 5) or (R_ij > 251 and G_ij > 251 and B_ij > 251), mark the pixel as an invalid pixel value and set R_ij, G_ij, B_ij to (0, 0, 0); divide the k-th partitioned image into small blocks of 16 × 16 pixels, detect the areas covered by cloud, snow and fog in the partitioned image with a Resnet16 neural network model and mark the pixel values of those areas as (0, 0, 0); then, starting from (x_k, y_k), traverse all pixels in the partition, ignoring any pixel whose R_ij, G_ij and B_ij are all less than 3; classify the remaining pixels by the size of their RGB color components, marking pixels whose largest component is R as category 1, those whose largest component is G as category 2 and those whose largest component is B as category 3; gather adjacent pixels of the same category into regions from small to large, replace regions formed by fewer than 10 pixels with the mean of the neighboring pixels, and keep the pixel values of regions with 10 or more pixels unchanged; write the modified pixel data into a file to form a partitioned image file stored in tif format, and repeat until all partitions have been processed, completing the clustering.
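A minimal numpy/scipy sketch of this per-partition clustering is shown below, assuming the partition has already been read into an (H, W, 3) uint8 array. The Resnet16-based cloud/snow/fog detection and the neighbor-mean replacement of small regions are omitted or simplified, and ties between color components are broken arbitrarily.

```python
import numpy as np
from scipy import ndimage

def cluster_partition(rgb):
    """rgb: (H, W, 3) uint8 array of one partition; returns (category labels, cleaned rgb)."""
    rgb = rgb.copy()
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    invalid = ((r < 5) & (g < 5) & (b < 5)) | ((r > 251) & (g > 251) & (b > 251))
    rgb[invalid] = 0                                   # invalid pixels are set to (0, 0, 0)

    labels = np.argmax(rgb, axis=-1) + 1               # 1: R largest, 2: G largest, 3: B largest
    labels[(rgb < 3).all(axis=-1)] = 0                 # ignore pixels whose R, G and B are all below 3

    for c in (1, 2, 3):                                # regions of fewer than 10 pixels are dropped here;
        comp, n = ndimage.label(labels == c)           # the patent replaces them with the neighbor mean
        if n == 0:
            continue
        sizes = ndimage.sum(labels == c, comp, index=range(1, n + 1))
        small_ids = np.where(np.asarray(sizes) < 10)[0] + 1
        labels[np.isin(comp, small_ids)] = 0
    return labels, rgb
```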
Optionally, in step 4, a pyramid scene parsing network model is built, the clustered images are automatically cropped to produce training samples and validation data, and the pyramid scene parsing network model is trained with the training samples and validation data, specifically:
building a pyramid scene parsing network model as the semantic segmentation model, cropping the partitioned image files along the land-type boundaries of the Third National Land Survey, and producing training samples and validation data, where the raw data of a training sample is a 512 × 512 pixel png image cropped along those boundaries and the label is a 512 × 512 pixel png image generated from the Third National Land Survey land-type patches at the same position; the training samples contain 9 categories and 1 background value, the 9 categories being arbor and bamboo forest land, shrub forest land, other forest land, wetland, residential land, transportation land, water area, grassland and cultivated land; the extent of invalid areas is removed by erasing and places marked as invalid areas are labeled with the background value; 90% of the samples are randomly drawn as training samples and the remaining 10% are used as the validation data set; the pixel values of the training samples and validation data are normalized, compressing the RGB components of the pixels participating in training to the range 0-1, and the labels are converted into one-hot codes; the training samples are input into the pyramid scene parsing network model for training, each batch containing 16 images of 512 × 512 pixels, with the maximum number of training rounds set to 2000; after every 20 rounds of training the validation data are used for verification, and when the loss on the validation data set falls below 0.055 and the accuracy exceeds 0.85 training is stopped, the parameters of the neural network model are saved, and training is complete, yielding the neural network model.
Optionally, in step 5, all clustered partitioned images are processed, the marked invalid data are automatically removed, the remaining data are input into the trained model, semantic segmentation is performed with the trained model to obtain a semantic segmentation result, and the result is converted into vector patch data, specifically:
normalizing the pixel values of the clustered partitioned images, cutting each picture into png picture files of 512 × 512 pixels and storing the start coordinates (x_i, y_i) of each file; inputting all partition pictures into the neural network model, executing semantic segmentation, and saving the semantic segmentation results as new png picture files; after segmentation is finished, creating a file geodatabase for each png picture file, creating a layer of polygon geometry type in the file geodatabase and adding a category field to the layer; converting each category of the png picture file into vector patches by pixel clustering, applying the start coordinates (x_i, y_i) to the vector patches during conversion, writing the category value from the semantic segmentation result into the category field, and, once the category value has been written, storing the vector patches in the file geodatabase.
Optionally, in step 6, the vector patches are screened by category, the two-period vector patches are reversely spliced, the multipart patches of the reversely spliced two-period vector patches are broken apart, unqualified patches are removed, change patches of the corresponding type are extracted, the boundaries of the change patches are smoothed, the smoothed patches are split along the management boundaries of the corresponding type, and fine fragmented patches are removed to obtain the final change patches, specifically:
screening the vector patches of the earlier remote sensing image and the vector patches of the later remote sensing image by category and exporting the earlier image segmentation result and the later image segmentation result respectively; merging the multiple layers of the earlier image segmentation result into one complete earlier-result layer and the multiple layers of the later image segmentation result into one complete later-result layer; dissolving the patches of the complete earlier-result layer by category value, calculating patch areas and removing patches smaller than 100 square meters, and doing the same for the complete later-result layer, which yields the reversely spliced earlier-result layer and later-result layer; breaking the multipart patches of the reversely spliced earlier-result and later-result layers apart and removing patches smaller than 100 square meters to obtain the exploded earlier-result and later-result layers; selecting the patches of the corresponding category in the exploded later-result layer and the patches of non-corresponding categories in the exploded earlier-result layer and intersecting them to obtain the forward change patches, from non-corresponding categories to the corresponding category; selecting the patches of non-corresponding categories in the exploded later-result layer and the patches of the corresponding category in the exploded earlier-result layer and intersecting them to obtain the reverse change patches, from the corresponding category to non-corresponding categories; smoothing the boundaries of the forward and reverse change patches and removing patches of non-corresponding categories from the reverse change patches to obtain the forward change layer and the reverse change layer; splitting the forward and reverse change layers along the management boundaries of the corresponding category, recalculating the areas, removing patches smaller than 100 square meters, and saving the resulting patches to a file in Shape format.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects. The invention provides a remote sensing image change detection method based on partition clustering and convolution, which acquires optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months, preprocesses and quality-inspects them, obtains the valid common extent of the two preprocessed and inspected images, calculates the number of required partitions and the number of sub-processes that can run simultaneously, starts multiple processes to asynchronously cluster each partitioned image of the two-period imagery according to the number of sub-processes that can run simultaneously while applying a deep neural network to detect and mark areas covered by cloud, snow and rain as well as invalid pixel data, builds a pyramid scene parsing network model, automatically crops the clustered images to produce training samples and validation data and trains the model with them, processes all clustered partitioned images, automatically removes the marked invalid data, inputs the remaining data into the trained model, performs semantic segmentation to obtain a semantic segmentation result and converts it into vector patch data, then screens the vector patches by category, reversely splices the two-period vector patches, breaks the multipart patches of the reversely spliced patches apart, removes unqualified patches, extracts change patches of the corresponding type, smooths the boundaries of the change patches, splits the smoothed patches along the management boundaries of the corresponding type, and removes fine fragmented patches to obtain the final change patches. The method exploits the advantages of traditional change detection, which does not depend on training samples and clusters pixels quickly, to make a preliminary extraction from the remote sensing images and feeds the preliminary results to a neural network model; the preliminary extraction simplifies model training and inference, effectively reduces gradient vanishing and gradient explosion during training while preserving the accuracy and performance of the model, keeps the model depth in line with the requirements of actual production and application, enables rapid change detection over large areas of remote sensing imagery, and effectively reduces the number of pseudo-changes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a remote sensing image change detection method based on partition clustering and convolution according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a remote sensing image change detection method based on partition clustering and convolution that can rapidly detect changes over large areas of remote sensing imagery, with fast pixel partition clustering, a high degree of automation and low hardware requirements.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the method for detecting a change in a remote sensing image based on partition clustering and convolution according to an embodiment of the present invention includes the following steps:
Step 1: acquiring optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months, and preprocessing and quality-inspecting the optical satellite remote sensing images;
Step 2: obtaining the valid common extent of the two preprocessed and inspected optical satellite remote sensing images, and calculating the number of required partitions and the number of sub-processes that can run simultaneously;
Step 3: according to the number of sub-processes that can run simultaneously, starting multiple processes to asynchronously cluster each partitioned image of the two-period optical satellite remote sensing images, and during the asynchronous clustering applying a deep neural network to detect and mark areas covered by cloud, snow and rain as well as invalid pixel data;
Step 4: building a pyramid scene parsing network model, automatically cropping the clustered images to produce training samples and validation data, and training the pyramid scene parsing network model with the training samples and validation data;
Step 5: processing all clustered partitioned images, automatically removing the marked invalid data, inputting the remaining data into the trained model, performing semantic segmentation with the trained model to obtain a semantic segmentation result, and converting the result into vector patches;
Step 6: screening the vector patches by category, reversely splicing the two-period vector patches, breaking the multipart patches of the reversely spliced two-period vector patches apart, removing unqualified patches, extracting change patches of the corresponding type, smoothing the boundaries of the change patches, splitting the smoothed patches along the management boundaries of the corresponding type, and removing fine fragmented patches to obtain the final change patches.
In step 1, the optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months are acquired and are preprocessed and quality-inspected, specifically:
acquiring optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months, where the optical satellite remote sensing image acquired in the earlier period is called the earlier remote sensing image X1 and the optical satellite remote sensing image acquired in the later period is called the later remote sensing image X2;
performing geometric correction, registration, fusion and enhancement on the acquired earlier remote sensing image X1 and later remote sensing image X2, converting the pixel data type of X1 and X2 into 8-bit integers in the three RGB bands, and, after conversion, transforming X1 and X2 into a Gauss projection coordinate system in the same zone and storing them on a storage device in TIF format;
reading the earlier remote sensing image X1 and later remote sensing image X2 from the storage device and inspecting them for image offset and for cloud, snow and fog coverage: f points (f ≥ 150) are sampled from each of X1 and X2, and the number of points covered by dense cloud, snow or rain is denoted f1; if f1/f ≥ 0.45 the inspection fails and the imagery is re-acquired; within the common range of X1 and X2, k ground objects (k ≥ 100) are sampled uniformly, and for the same ground object the number whose offset between X1 and X2 exceeds 3 pixels is denoted k3, the number exceeding 5 pixels k5, and the number exceeding 8 pixels k8; if k5/k ≥ 0.01, or k3/k ≥ 0.1, or k8 > 0, the inspection fails and X1 and X2 are preprocessed again.
In step 2, the valid common extent of the two preprocessed and inspected optical satellite remote sensing images is obtained, the number of required partitions and the number of sub-processes that can run simultaneously are calculated, and the images are partitioned, specifically:
obtaining the extent of the earlier remote sensing image X1 as m1 rows and n1 columns and the extent of the later remote sensing image X2 as m2 rows and n2 columns; traversing the m1 rows and n1 columns of X1, with the RGB pixel values at row i, column j denoted R_ij, G_ij, B_ij: if (R_ij < 5 and G_ij < 5 and B_ij < 5) or (R_ij > 251 and G_ij > 251 and B_ij > 251), the pixel is judged invalid and R_ij, G_ij, B_ij are all assigned (0, 0, 0); otherwise it is a valid pixel; the outer boundary of all valid pixels of X1 is defined as E1, and X2 is processed in the same way to form the outer boundary E2 of all its valid pixels; the intersection of E1 and E2 is defined as the common range E, which is used to clip X1 and X2, and the clipped results replace X1 and X2 as the final earlier remote sensing image X1 and the final later remote sensing image X2 on which change detection is performed, where the final X1 and the final X2 have equal height H and equal width W;
obtaining the running memory T of the computer (read directly from the computer's running memory) and, according to the height H and width W of the final earlier remote sensing image X1 and the final later remote sensing image X2, dividing the final X1 and X2 into several partitioned images of identical size, with the following parameters:
L = (partition size derived from the running memory T; the formula is given only as an image in the original filing, and the description below takes L as an integer power of 2, typically 2048)
M = ⌈H / L⌉
N = ⌈W / L⌉
C = M × N
where L is the width and height of each partitioned image, M is the number of partition rows, N is the number of partition columns, and C is the number of partitioned images; the upper-left corner coordinates (x, y), the image resolution p and the coordinate system of each partitioned image are recorded;
testing shows that it is most suitable for L to be an integer power of 2, generally L = 2048, which both makes full use of the memory and simplifies the data processing of the subsequent steps; each partition is only a logical division and no image is actually cut; during partitioning, the start coordinates (x, y) of each partition, the partition row and column numbers M and N, and the width W_k and height H_k of each partition must be recorded; because the images X1 and X2 are not necessarily square, cases with (x + W_k) > W or (y + H_k) > H may occur, and these regions are specially marked in the subsequent steps;
obtaining the number of physical cores Q of the running computer and setting the number of clustering processes to start as n; verification shows that the algorithm runs close to its optimal state when the number of processes participating in the clustering operation is n = max{Q − 2, 2}, so the number of sub-processes that can run simultaneously is determined to be n = max{Q − 2, 2}; when the processes are started, the recorded partition start coordinates (x, y) and the partition row and column numbers M and N are passed to each sub-process.
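A hedged numpy sketch of the valid-common-extent computation described at the start of step 2 follows. The outer boundaries E1 and E2 are approximated here by axis-aligned bounding boxes of the valid pixels; the patent does not fix the exact boundary representation.

```python
import numpy as np

def valid_extent(rgb):
    """Bounding box (row_min, row_max, col_min, col_max) of the valid pixels of one image."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    invalid = ((r < 5) & (g < 5) & (b < 5)) | ((r > 251) & (g > 251) & (b > 251))
    rows, cols = np.where(~invalid)
    return rows.min(), rows.max(), cols.min(), cols.max()

def common_extent(rgb1, rgb2):
    """Intersection E of the valid extents E1 and E2 of the two-period images."""
    r0a, r1a, c0a, c1a = valid_extent(rgb1)
    r0b, r1b, c0b, c1b = valid_extent(rgb2)
    return max(r0a, r0b), min(r1a, r1b), max(c0a, c0b), min(c1a, c1b)
```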
In step 3, according to the number of sub-processes that can run simultaneously, multiple processes are started to asynchronously cluster each partitioned image of the two-period optical satellite remote sensing images, and during the asynchronous clustering a deep neural network is applied to detect and mark areas covered by cloud, snow and rain as well as invalid pixel data, specifically:
starting n processes according to the sub-process count n to cluster the final earlier remote sensing image X1 and the final later remote sensing image X2: obtain the start coordinates (x_k, y_k) of the k-th partition, its partition row and column numbers M_k, N_k, and the width W_k and height H_k of each partition, and obtain the image resolution p, height H and width W of the final X1 and X2; read from the final X1 and X2 the remote sensing image data of M_k rows starting at x_k and N_k columns starting at y_k and convert it into a two-dimensional array, where, if logical partition k extends beyond the image range, the RGB color components of the out-of-range area of the array are assigned (0, 0, 0); continue by marking the block's image data, with the RGB pixel values at row i, column j denoted R_ij, G_ij, B_ij: if (R_ij < 5 and G_ij < 5 and B_ij < 5) or (R_ij > 251 and G_ij > 251 and B_ij > 251), mark the pixel as an invalid pixel value and set R_ij, G_ij, B_ij to (0, 0, 0); divide the k-th partitioned image into small blocks of 16 × 16 pixels, detect the areas covered by cloud, snow and fog in the partitioned image with a Resnet16 neural network model and mark the pixel values of those areas as (0, 0, 0); then, starting from (x_k, y_k), traverse all pixels in the partition, ignoring any pixel whose R_ij, G_ij and B_ij are all less than 3; classify the remaining pixels by the size of their RGB color components, marking pixels whose largest component is R as category 1, those whose largest component is G as category 2 and those whose largest component is B as category 3; gather adjacent pixels of the same category into regions from small to large, replace regions formed by fewer than 10 pixels with the mean of the neighboring pixels, and keep the pixel values of regions with 10 or more pixels unchanged; write the modified pixel data into a file to form a partitioned image file stored in tif format, and repeat until all partitions have been processed; step 3 removes most abnormal points and most areas covered by cloud, rain and snow, which avoids pseudo-changes and gradient explosion in the subsequent steps to the greatest extent, and the clustering is complete.
In step 4, a pyramid scene parsing network model is built, the clustered images are automatically cropped to produce training samples and validation data, and the pyramid scene parsing network model is trained with the training samples and validation data, specifically:
building a pyramid scene parsing network model as the semantic segmentation model, cropping the partitioned image files along the land-type boundaries of the Third National Land Survey, and producing training samples and validation data, where the raw data of a training sample is a 512 × 512 pixel png picture file cropped along those boundaries and the label is a 512 × 512 pixel png picture file generated from the Third National Land Survey land-type patches at the same position; the training samples contain 9 categories and 1 background value, the 9 categories being 1 - arbor and bamboo forest land, 2 - shrub forest land, 3 - other forest land, 4 - house land, 5 - residential land, 6 - transportation land, 7 - water area, 8 - grassland and 9 - cultivated land, and areas outside these 9 categories are uniformly treated as 0 - background value; when producing the training samples, the extent of invalid areas is removed by erasing and the parts marked as invalid areas are labeled with the background value; 90% of the samples are randomly drawn as training samples and the remaining 10% are used as the validation data set; the training samples must be cropped from both the earlier image and the later image, and verification shows that the effect is better when the proportions of the two kinds of samples differ by no more than 30%;
normalizing the pixel values of the training samples and validation data, compressing the RGB components of the pixels participating in training to the range 0-1, and converting the labels into one-hot codes; the training samples are input into the pyramid scene parsing network model for training, with the maximum number of training rounds set to 2000; after every 20 rounds of training the validation data are used for verification, and when the loss on the validation data set falls below 0.055 and the accuracy exceeds 0.85 training is stopped, the parameters of the neural network model are saved to a file with a ckpt-format suffix, and training is complete, yielding the neural network model.
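A hedged training-loop sketch for step 4 is given below, using TensorFlow/Keras as one possible framework. build_pspnet stands for a pyramid scene parsing network constructor and is not defined by the patent; the batch size of 16, the cap of 2000 rounds, validation every 20 rounds and the stopping thresholds (loss below 0.055, accuracy above 0.85) follow the text above.

```python
import tensorflow as tf

def train(model, x_train, y_train, x_val, y_val):
    """x_*: uint8 images (N, 512, 512, 3); y_*: one-hot labels (N, 512, 512, 10)."""
    x_train = x_train.astype("float32") / 255.0        # compress RGB components to [0, 1]
    x_val = x_val.astype("float32") / 255.0
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    for _ in range(0, 2000, 20):                        # at most 2000 training rounds
        model.fit(x_train, y_train, batch_size=16, epochs=20, verbose=0)
        loss, acc = model.evaluate(x_val, y_val, verbose=0)
        if loss < 0.055 and acc > 0.85:                 # early-exit criterion from the text
            break
    model.save_weights("pspnet.ckpt")                   # parameters saved with a ckpt suffix
    return model

# model = build_pspnet(input_shape=(512, 512, 3), num_classes=10)  # hypothetical constructor
# train(model, x_train, y_train, x_val, y_val)
```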
In step 5, all clustered partitioned images are processed, the marked invalid data are automatically removed, the remaining data are input into the trained model, semantic segmentation is performed with the trained model to obtain a semantic segmentation result, and the result is converted into vector patch data, specifically:
normalizing the pixel values of the clustered partitioned images, cutting each picture into png picture files of 512 × 512 pixels and storing the start coordinates (x_i, y_i) of each file; inputting all partition pictures into the neural network model, executing semantic segmentation, and saving the semantic segmentation results as new png picture files; after segmentation is finished, creating a file geodatabase for each png picture file, creating a layer of polygon geometry type in the file geodatabase and adding a category field to the layer; converting each category of the png picture file into vector patches by pixel clustering, applying the start coordinates (x_i, y_i) to the vector patches during conversion, writing the category value from the semantic segmentation result into the category field, and, once the category value has been written, storing the vector patches in the file geodatabase.
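An inference sketch for step 5 under stated assumptions: each clustered partition is normalized, cut into 512 × 512 tiles whose origin coordinates are kept, run through the trained model, and the per-tile class maps are polygonized. The use of rasterio.features.shapes and shapely for polygonization, and the skipping of ragged edge tiles, are implementation choices rather than something the patent prescribes.

```python
import numpy as np
from rasterio import features
from shapely.geometry import shape

def tiles(image, size=512):
    """Yield ((x, y), tile) pairs covering one clustered partition image."""
    h, w = image.shape[:2]
    for y in range(0, h, size):
        for x in range(0, w, size):
            yield (x, y), image[y:y + size, x:x + size]

def segment_partition(model, image):
    """Run semantic segmentation tile by tile and return (polygon, class, origin) triples."""
    patches = []
    for (x, y), tile in tiles(image):
        if tile.shape[0] < 512 or tile.shape[1] < 512:        # ragged edge tiles skipped in this sketch
            continue
        tile = tile.astype("float32") / 255.0                 # normalize pixel values
        pred = model.predict(tile[np.newaxis], verbose=0)[0]
        class_map = np.argmax(pred, axis=-1).astype("int32")
        for geom, value in features.shapes(class_map):        # pixel clusters -> vector patches
            if value > 0:                                      # drop the background value
                patches.append((shape(geom), int(value), (x, y)))
    return patches
```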
In step 6, the vector patches are screened by category, the two-period vector patches are reversely spliced, the multipart patches of the reversely spliced two-period vector patches are broken apart, unqualified patches are removed, change patches of the corresponding type are extracted, the boundaries of the change patches are smoothed, the smoothed patches are split along the management boundaries of the corresponding type, and fine fragmented patches are removed to obtain the final change patches, specifically:
screening the vector patches of the earlier remote sensing image and the vector patches of the later remote sensing image by category and exporting the earlier image segmentation result and the later image segmentation result respectively; merging the multiple layers of the earlier image segmentation result into one complete earlier-result layer and the multiple layers of the later image segmentation result into one complete later-result layer; dissolving the patches of each complete layer by category value, calculating patch areas and removing patches smaller than 100 square meters, which yields the reversely spliced earlier-result layer and later-result layer; breaking the multipart patches of the reversely spliced earlier-result and later-result layers apart and removing patches smaller than 100 square meters to obtain the exploded earlier-result and later-result layers; selecting the patches of the corresponding category in the exploded later-result layer and the patches of non-corresponding categories in the exploded earlier-result layer and intersecting them to obtain the forward change patches, from non-corresponding categories to the corresponding category; selecting the patches of non-corresponding categories in the exploded later-result layer and the patches of the corresponding category in the exploded earlier-result layer and intersecting them to obtain the reverse change patches, from the corresponding category to non-corresponding categories; smoothing the boundaries of the forward and reverse change patches and removing patches of non-corresponding categories from the reverse change patches to obtain the forward change layer and the reverse change layer; splitting the forward and reverse change layers along the management boundaries of the corresponding category, recalculating the areas, removing patches smaller than 100 square meters, and saving the resulting patches to a file in Shape format;
For example, in one embodiment of the invention: the patches whose category is forest land are selected in the exploded later-result layer, the patches whose category is non-forest land are selected in the exploded earlier-result layer, and the two are intersected to obtain the forward change patches from non-forest land to forest land; the patches whose category is non-forest land are selected in the exploded later-result layer, the patches whose category is forest land are selected in the exploded earlier-result layer, and the two are intersected to obtain the reverse change patches from forest land to non-forest land; the boundaries of the forward and reverse change patches are smoothed, and the non-forest-land patches are removed from the reverse change patches to obtain the forward change layer and the reverse change layer; the forward and reverse change layers are split along forestry management boundaries such as forest farms and compartments, the areas are recalculated, patches smaller than 100 square meters are removed, and the resulting patches are saved to a file; the final result is the forward change patches whose earlier-image feature is non-forest land and whose current-image feature is forest land.
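A hedged shapely sketch of the forward/reverse change extraction described above, with forest land as the target class in the spirit of this example. Layer merging, boundary smoothing, splitting along management boundaries and coordinate-system handling are omitted, and patch areas are assumed to be in square meters (a projected coordinate system).

```python
from shapely.ops import unary_union

def change_patches(early, late, target_class, min_area=100.0):
    """early/late: lists of (polygon, class_value); returns (forward, reverse) patch lists."""
    early_target     = unary_union([g for g, c in early if c == target_class])
    early_non_target = unary_union([g for g, c in early if c != target_class])
    late_target      = unary_union([g for g, c in late if c == target_class])
    late_non_target  = unary_union([g for g, c in late if c != target_class])

    forward = early_non_target.intersection(late_target)      # e.g. non-forest land -> forest land
    reverse = early_target.intersection(late_non_target)      # e.g. forest land -> non-forest land

    def explode(geom):                                         # break multipart results apart
        parts = list(geom.geoms) if hasattr(geom, "geoms") else [geom]
        return [p for p in parts if p.area >= min_area]        # drop patches under 100 square meters

    return explode(forward), explode(reverse)
```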
The invention provides a remote sensing image change detection method based on partition clustering and convolution, which acquires optical satellite remote sensing images of the same region from two periods separated by more than three months and less than twelve months, preprocesses and quality-inspects them, obtains the valid common extent of the two preprocessed and inspected images, calculates the number of required partitions and the number of sub-processes that can run simultaneously, starts multiple processes to asynchronously cluster each partitioned image of the two-period imagery according to the number of sub-processes that can run simultaneously while applying a deep neural network to detect and mark areas covered by cloud, snow and rain as well as invalid pixel data, builds a pyramid scene parsing network model, automatically crops the clustered images to produce training samples and validation data and trains the model with them, processes all clustered partitioned images, automatically removes the marked invalid data, inputs the remaining data into the trained model, performs semantic segmentation to obtain a semantic segmentation result and converts it into vector patch data, then screens the vector patches by category, reversely splices the two-period vector patches, breaks the multipart patches of the reversely spliced patches apart, removes unqualified patches, extracts change patches of the corresponding type, smooths the boundaries of the change patches, splits the smoothed patches along the management boundaries of the corresponding type, and removes fine fragmented patches to obtain the final change patches. The method exploits the advantages of traditional change detection, which does not depend on training samples and clusters pixels quickly, to make a preliminary extraction from the remote sensing images and feeds the preliminary results to a neural network model; the preliminary extraction simplifies model training and inference, effectively reduces gradient vanishing and gradient explosion during training while preserving the accuracy and performance of the model, keeps the model depth in line with the requirements of actual production and application, enables rapid change detection over large areas of remote sensing imagery, and effectively reduces the number of pseudo-changes.
The principles and embodiments of the present invention have been described herein by means of specific examples; the above description of the embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, for a person skilled in the art, there may be changes to the specific embodiments and the scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (7)

1. A remote sensing image change detection method based on partition clustering and convolution is characterized by comprising the following steps:
step 1: acquiring optical satellite remote sensing images of the same area at two dates separated by more than three months and no more than twelve months, and performing preprocessing and quality inspection on the images;
step 2: obtaining the effective common extent of the early-stage and later-stage optical satellite remote sensing images after preprocessing and inspection are finished, and calculating the number of required partitions and the number of sub-processes that can run simultaneously;
step 3: according to the number of sub-processes that can run simultaneously, starting several processes to perform asynchronous clustering on each partition image of the two-date optical satellite remote sensing images, and, during the asynchronous clustering, using a deep neural network to detect and mark the areas covered by cloud, snow, and fog as well as invalid pixel data;
step 4: building a pyramid scene parsing network model, automatically cropping the clustered images to produce training samples and validation data, and training the pyramid scene parsing network model with the training samples and validation data;
step 5: processing all clustered partition images, automatically removing the marked invalid data, feeding the remaining data into the trained model, performing semantic segmentation with the trained model to obtain a semantic segmentation result, and converting the semantic segmentation result into vector spots;
step 6: screening the vector spots by category, reverse-splicing the two-date vector spots, breaking up multipart spots in the reverse-spliced two-date vector spots, removing unqualified spots, extracting the change spots of the corresponding categories, smoothing the boundary lines of the change spots, splitting the smoothed spots along the operation-management boundary lines of the corresponding category, and removing fine fragments to obtain the final change spots.
2. The remote sensing image change detection method based on partition clustering and convolution according to claim 1, wherein step 1, acquiring optical satellite remote sensing images of the same area at two dates separated by more than three months and no more than twelve months and performing preprocessing and quality inspection on the images, specifically comprises:
acquiring optical satellite remote sensing images of the same area at two dates separated by more than three months and no more than twelve months, wherein the image acquired at the earlier date is called the early-stage remote sensing image X1 and the image acquired at the later date is called the later-stage remote sensing image X2;
performing geometric correction, registration, fusion, and enhancement on the acquired early-stage remote sensing image X1 and later-stage remote sensing image X2, and converting the pixel data type of X1 and X2 to 8-bit integer RGB three-band data; once finished, converting X1 and X2 uniformly to the Gauss projection coordinate system of the same zone and storing them on a storage device in TIF format;
reading the early-stage remote sensing image X1 and the later-stage remote sensing image X2 from the storage device and checking the image offset and the degree of cloud, snow, and fog coverage of X1 and X2, wherein: f points (f ≥ 150) are sampled from each of X1 and X2 and the number of points covered by dense cloud, snow, or fog is recorded as f1; if f1/f ≥ 0.45, the inspection fails and the images are re-acquired; within the extent of X1 and X2, k ground objects (k ≥ 100) are sampled uniformly, and for each ground object the offset between X1 and X2 is measured; the number of objects whose offset exceeds 3 pixels is recorded as k3, the number exceeding 5 pixels as k5, and the number exceeding 8 pixels as k8; if k5/k ≥ 0.01, or k3/k ≥ 0.1, or k8 > 0, the inspection fails and X1 and X2 are preprocessed again.
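A minimal sketch of the two acceptance checks in this claim, assuming the point and ground-object sampling (and the offset measurement) has been done beforehand and only the resulting counts are passed in:

```python
def passes_cloud_check(f, f1):
    """f >= 150 sampled points; f1 of them covered by dense cloud/snow/fog.
    The check fails (images must be re-acquired) when f1/f >= 0.45."""
    assert f >= 150, "the claim requires at least 150 sampled points"
    return (f1 / f) < 0.45

def passes_registration_check(k, k3, k5, k8):
    """k >= 100 ground objects sampled in both images; k3/k5/k8 count objects
    whose offset between the two dates exceeds 3/5/8 pixels.  Any of the three
    conditions below sends the image pair back to preprocessing."""
    assert k >= 100, "the claim requires at least 100 ground objects"
    return not ((k5 / k) >= 0.01 or (k3 / k) >= 0.1 or k8 > 0)
```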
3. The remote sensing image change detection method based on partition clustering and convolution according to claim 2, wherein step 2, obtaining the effective common extent of the two preprocessed and checked optical satellite remote sensing images, calculating the number of required partitions and the number of sub-processes that can run simultaneously, and dividing the partitions, specifically comprises:
obtaining the extent of the early-stage remote sensing image X1 as m1 rows by n1 columns and the extent of the later-stage remote sensing image X2 as m2 rows by n2 columns; traversing the m1 rows and n1 columns of X1 and denoting the RGB pixel values at row i, column j as R_ij, G_ij, B_ij; if R_ij, G_ij, B_ij satisfy (R_ij < 5 and G_ij < 5 and B_ij < 5) or (R_ij > 251 and G_ij > 251 and B_ij > 251), the pixel is judged invalid and all of its colour components are set to (0, 0, 0); otherwise it is a valid pixel; the outer boundary of all valid pixels of X1 is defined as E1; the later-stage remote sensing image X2 is processed in the same way to form the outer boundary E2 of all its valid pixels; the intersection of E1 and E2 is defined as the common extent E; E is used to clip X1 and X2, and the clipped results replace X1 and X2, giving the final early-stage remote sensing image X1 and the final later-stage remote sensing image X2 on which change detection is performed, where the final X1 and the final X2 have the same height H and width W;
obtaining the running memory T of the computer and, according to the height H and width W of the final early-stage remote sensing image X1 and the final later-stage remote sensing image X2, dividing the final X1 and X2 into a number of partition images of equal size, with the parameters:
(The formulas for L, M, and N appear in the original filing only as images FDA0003748772120000031–033; from the definitions below, M and N are presumably ⌈H/L⌉ and ⌈W/L⌉, with L derived from the running memory T.)
C = M × N
in these formulas, L is the width and height of each partition image after division, M is the number of rows of partition images, N is the number of columns of partition images, and C is the total number of partition images; the upper-left coordinates (x, y), the image resolution p, and the coordinate system of each partition image are recorded;
obtaining the number Q of physical cores of the computer and setting the number of clustering sub-processes to be started, where the number of sub-processes that can run simultaneously, obtained through verification, is n = max{Q − 2, 2}; when the processes are started, the recorded partition start coordinates (x, y) and the partition row and column numbers M and N are passed to each sub-process.
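A small sketch of the partition planning and sub-process count. It assumes the tile edge length L has already been derived from the running memory T (that formula is only available as an image in the filing) and that M = ⌈H/L⌉ and N = ⌈W/L⌉ follow from the definitions above; `os.cpu_count()` reports logical rather than physical cores, so it only approximates Q.

```python
import math
import os

def plan_partitions(height, width, tile_size):
    """Given the common extent (H x W pixels) and a tile edge length L, return
    the partition grid (M rows, N columns) and the partition count C = M * N."""
    m = math.ceil(height / tile_size)   # rows of partition images
    n = math.ceil(width / tile_size)    # columns of partition images
    return m, n, m * n

def worker_count():
    """Number of clustering sub-processes: n = max(Q - 2, 2), Q = core count."""
    q = os.cpu_count() or 2
    return max(q - 2, 2)
```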
4. The remote sensing image change detection method based on partition clustering and convolution according to claim 3, wherein step 3, starting several processes according to the number of sub-processes that can run simultaneously to perform asynchronous clustering on each partition image of the two-date optical satellite remote sensing images, and using a deep neural network during the asynchronous clustering to detect and mark the areas covered by cloud, snow, and fog as well as invalid pixel data, specifically comprises:
starting n processes, according to the number n of sub-processes, to cluster the final early-stage remote sensing image X1 and the final later-stage remote sensing image X2: obtaining the start coordinates (x_k, y_k) of the k-th partition image, the partition row and column counts M_k, N_k, and the width W_k and height H_k of each partition; obtaining the image resolution p, height H, and width W of the final X1 and X2; reading into a two-dimensional array the remote sensing image data of the final X1 and X2 covering M_k rows starting from x_k and N_k columns starting from y_k, where, if logical partition k extends beyond the image extent, the RGB colour components of the out-of-range area of the two-dimensional array are all set to (0, 0, 0); continuing by marking this block of remote sensing image data: the RGB pixel values at row i, column j are recorded as R_ij, G_ij, B_ij, and if they satisfy (R_ij < 5 and G_ij < 5 and B_ij < 5) or (R_ij > 251 and G_ij > 251 and B_ij > 251), the pixel is marked as an invalid pixel value and R_ij, G_ij, B_ij are set to (0, 0, 0); the k-th partition image is divided into small blocks of 16 × 16 pixels, the areas covered by cloud, snow, and fog in the partition image are detected with a Resnet16 neural network model, and the pixel values of those areas are marked as (0, 0, 0); then, starting from (x_k, y_k), all pixels in the partition are traversed; if R_ij, G_ij, and B_ij are all less than 3 the pixel is ignored, and the remaining pixels are classified by the size of their RGB colour components: pixels whose R component is largest are marked as category 1, those whose G component is largest as category 2, and those whose B component is largest as category 3; adjacent pixels of the same category are gathered into regions from small to large; regions formed by fewer than 10 pixels are replaced by the mean of the adjacent pixels, while the pixel values of regions of 10 or more pixels are kept unchanged; the modified pixel data are written to a file to form a partition image file stored in tif format; the operation is repeated until all partitions have been processed and the clustering is complete.
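The per-partition clustering could look roughly like the sketch below. It assumes the ResNet-based cloud/snow/fog masking has already zeroed the affected pixels, and it uses SciPy connected-component labelling as a stand-in for "gathering adjacent pixels of the same category into regions"; where the claim replaces small regions by the mean of neighbouring pixel values, this sketch simply drops them from the label map.

```python
import numpy as np
from scipy import ndimage

def cluster_partition(rgb):
    """Sketch of the per-partition clustering in this claim.

    rgb: uint8 array of shape (H, W, 3).  Returns a label image where
    0 = invalid/ignored, 1/2/3 = dominant R/G/B component, with connected
    regions smaller than 10 pixels dissolved out of the label map.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)

    invalid = ((r < 5) & (g < 5) & (b < 5)) | ((r > 251) & (g > 251) & (b > 251))
    ignored = (r < 3) & (g < 3) & (b < 3)          # "all components below 3" pixels

    labels = np.argmax(rgb, axis=-1) + 1           # 1 = R dominant, 2 = G, 3 = B
    labels[invalid | ignored] = 0

    # Remove connected regions of fewer than 10 pixels for each category.
    for cls in (1, 2, 3):
        comp, n = ndimage.label(labels == cls)
        sizes = ndimage.sum(labels == cls, comp, index=np.arange(1, n + 1))
        small = np.isin(comp, np.flatnonzero(sizes < 10) + 1)
        labels[small] = 0
    return labels
```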
5. The remote sensing image change detection method based on partition clustering and convolution according to claim 4, wherein step 4, building a pyramid scene parsing network model, automatically cropping the clustered images to produce training samples and validation data, and training the pyramid scene parsing network model with the training samples and validation data, specifically comprises:
building a pyramid scene parsing network model as the semantic segmentation model; cropping the partition image files along the boundary lines of the Third National Land Survey to produce training samples and validation data, where the raw data of a training sample is a 512 × 512-pixel png image cropped along a Third National Land Survey boundary line and its label is a 512 × 512-pixel png image generated from the survey spots at the same position; the training samples comprise 9 categories and 1 background value, the 9 categories being arbor and bamboo forest land, shrub forest land, other forest land, wetland, residential land, transportation land, water area, grassland, and cultivated land; the extent of invalid areas is removed by erasing, and locations marked as invalid are assigned the background value; 90% of the samples are randomly drawn as training samples and the remaining 10% are used as the validation data set; the pixel values of the training samples and validation data are normalised so that the RGB components of the pixels participating in training are compressed to the range 0–1, and the labels are converted to one-hot encoding; the training samples are fed into the pyramid scene parsing network model for training, with each batch consisting of 16 pictures of 512 × 512 pixels and the maximum number of training epochs set to 2000; after every 20 epochs, the validation data are used for verification, and when the loss on the validation data set falls below 0.055 and the accuracy exceeds 0.85, training stops, the parameters of the neural network model are saved, training is finished, and the trained neural network model is obtained.
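A hedged sketch of this training loop, assuming a PSPNet implementation such as `segmentation_models_pytorch.PSPNet` and PyTorch data loaders that yield 512 × 512 tiles in batches of 16 with integer class labels (the one-hot labels of the claim converted back to class indices for `CrossEntropyLoss`); the encoder choice and learning rate are illustrative only.

```python
import torch
import segmentation_models_pytorch as smp  # assumed available; any PSPNet works

def train_pspnet(train_loader, val_loader, num_classes=10, device="cuda"):
    """Train with at most 2000 epochs, validating every 20 epochs, and stop
    when validation loss < 0.055 and pixel accuracy > 0.85."""
    model = smp.PSPNet(encoder_name="resnet50", classes=num_classes).to(device)
    loss_fn = torch.nn.CrossEntropyLoss()
    optim = torch.optim.Adam(model.parameters(), lr=1e-4)

    for epoch in range(1, 2001):
        model.train()
        for x, y in train_loader:                 # x: (16, 3, 512, 512), y: class indices
            optim.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            optim.step()

        if epoch % 20 == 0:                       # validate every 20 epochs
            model.eval()
            val_loss, correct, total = 0.0, 0, 0
            with torch.no_grad():
                for x, y in val_loader:
                    out = model(x.to(device))
                    val_loss += loss_fn(out, y.to(device)).item() * x.size(0)
                    correct += (out.argmax(1).cpu() == y).sum().item()
                    total += y.numel()
            val_loss /= len(val_loader.dataset)
            if val_loss < 0.055 and correct / total > 0.85:
                torch.save(model.state_dict(), "pspnet_change.pt")
                break                             # early-stopping rule from the claim
    return model
```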
6. The remote sensing image change detection method based on partition clustering and convolution according to claim 5, wherein step 5, processing all clustered partition images, automatically removing the marked invalid data, feeding the remaining data into the trained model, performing semantic segmentation with the trained model to obtain a semantic segmentation result, and converting the semantic segmentation result into vector spot data, specifically comprises:
normalising the pixel values of the clustered partition images, dividing each picture into png picture files of 512 × 512 pixels, and storing the start coordinates (x_i, y_i) of each file; feeding all partition pictures into the neural network model, performing semantic segmentation, and saving the semantic segmentation results as new png picture files; after semantic segmentation is finished, creating a file geodatabase for each png picture file, creating a layer of polygon geometry type in the file geodatabase, and adding a category field to the layer; converting each category of the png picture file into vector spots by pixel clustering, applying the start coordinates (x_i, y_i) to the vector spots during conversion, writing the category value of the semantic segmentation result into the category field, and, once the category values have been written, storing the vector spots in the file geodatabase.
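A rough sketch of the tiling, inference, and vectorisation in this claim. It assumes `model` accepts a (1, 3, 512, 512) float array and returns a (num_classes, 512, 512) score map, uses `rasterio.features.shapes` in place of the pixel-clustering vectorisation, skips ragged border tiles for brevity, and writes the spots with a `category` field to an ordinary vector file rather than into a file geodatabase.

```python
import numpy as np
import rasterio
from rasterio import features
import geopandas as gpd
from shapely.geometry import shape

def segment_and_vectorize(tif_path, model, out_path, tile=512):
    """Normalise a clustered partition image, run the trained model tile by
    tile, and polygonise the resulting class raster into vector spots."""
    with rasterio.open(tif_path) as src:
        img = src.read().astype("float32") / 255.0        # (3, H, W), values in 0-1
        transform, crs = src.transform, src.crs

    _, h, w = img.shape
    classes = np.zeros((h, w), dtype="int32")
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            patch = img[:, i:i + tile, j:j + tile]
            if patch.shape[1:] != (tile, tile):            # skip ragged border tiles
                continue
            scores = model(patch[None])                    # hypothetical inference call
            classes[i:i + tile, j:j + tile] = np.asarray(scores).argmax(0)

    # Convert each class region of the segmentation result into polygons.
    records = [{"geometry": shape(geom), "category": int(val)}
               for geom, val in features.shapes(classes, transform=transform)
               if val != 0]                                # 0 = background / invalid
    gpd.GeoDataFrame(records, geometry="geometry", crs=crs).to_file(out_path)
```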
7. The remote sensing image change detection method based on partition clustering and convolution according to claim 6, wherein step 6, screening the vector spots by category, reverse-splicing the two-date vector spots, breaking up multipart spots in the reverse-spliced two-date vector spots, removing unqualified spots, extracting the change spots of the corresponding categories, smoothing the boundary lines of the change spots, splitting the smoothed spots along the operation-management boundary lines of the corresponding category, and removing fine fragments to obtain the final change spots, specifically comprises:
screening the vector spots of the early-stage remote sensing image and the vector spots of the later-stage remote sensing image by category, and exporting, respectively, an early-stage image segmentation result and a later-stage image segmentation result; splicing the layers of the early-stage segmentation result into a complete early-stage segmentation result layer and the layers of the later-stage segmentation result into a complete later-stage segmentation result layer; fusing the spots of the complete early-stage layer by category value, calculating the spot areas, and removing spots smaller than 100 square metres; fusing the spots of the complete later-stage layer by category value, calculating the spot areas, and removing spots smaller than 100 square metres, thereby obtaining the reverse-spliced early-stage and later-stage segmentation result layers; breaking up the multipart spots of the reverse-spliced early-stage and later-stage layers and removing spots smaller than 100 square metres, giving the broken-up, multipart early-stage and later-stage segmentation result layers; screening the spots of the corresponding category in the broken-up later-stage layer and the spots of non-corresponding categories in the broken-up early-stage layer, and intersecting them to obtain the forward change spots from non-corresponding categories to the corresponding category; screening the spots of non-corresponding categories in the broken-up later-stage layer and the spots of the corresponding category in the broken-up early-stage layer, and intersecting them to obtain the reverse change spots from the corresponding category to non-corresponding categories; smoothing the boundary lines of the forward and reverse change spots and removing the spots of non-corresponding categories from the reverse change spots to obtain a forward change layer and a reverse change layer; splitting the forward and reverse change layers along the operation-management boundaries of the corresponding category, recalculating the areas, removing spots smaller than 100 square metres, and storing the finally obtained spots in a Shape-format file.
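For illustration, the closing operations of this claim (boundary smoothing, splitting along operation-management boundaries, area filtering, Shapefile output) might be sketched as follows; the layer paths, the simplification tolerance, and the use of an intersection overlay for the split are assumptions rather than the claimed implementation.

```python
import geopandas as gpd

def finalize_change_spots(change_fp, boundary_fp, out_fp, min_area=100.0):
    """Smooth the change-spot boundaries, split them with the operation-
    management boundary layer (e.g. forest farm / compartment lines),
    recompute areas, drop spots under 100 m2, and write a Shapefile."""
    spots = gpd.read_file(change_fp)
    units = gpd.read_file(boundary_fp)

    # Boundary-line smoothing (Douglas-Peucker simplification as a stand-in
    # for whatever smoothing operator the implementation actually uses).
    spots["geometry"] = spots.geometry.simplify(1.0)

    # Splitting by management units expressed as an intersection overlay,
    # which cuts every change spot along the unit boundaries.
    split = gpd.overlay(spots, units, how="intersection")

    split = split[split.area >= min_area]   # recompute areas, drop < 100 m2
    split.to_file(out_fp)                   # default driver is ESRI Shapefile
```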