CN110852393A - Remote sensing image segmentation method and system - Google Patents
Remote sensing image segmentation method and system
- Publication number
- CN110852393A (application CN201911111277.0A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- training
- sensing image
- neural network
- classification model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T7/11 — Region-based segmentation
- G06V20/188 — Vegetation
- G06T2207/10032 — Satellite or aerial image; Remote sensing
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30181 — Earth observation
- G06T2207/30188 — Vegetation; Agriculture
Abstract
The invention discloses a method and a system for segmenting a remote sensing image. The segmentation method comprises the following steps: acquiring remote sensing image samples input by a user and the ground object label data corresponding to each sample; training a recognition classification model with a neural network by using the remote sensing image samples and the ground object label data to obtain a neural network classification model; acquiring a remote sensing image to be segmented; and classifying the remote sensing image to be segmented with the neural network classification model to obtain a segmentation result marked with ground object labels. The method realizes automatic segmentation of remote sensing images, reduces manpower and material resources, lowers cost, and improves segmentation efficiency.
Description
Technical Field
The invention relates to the field of image processing, in particular to a method and a system for segmenting a remote sensing image.
Background
At present, ground object identification in agricultural remote sensing mainly depends on collecting field sample points manually on site, bringing them back to the office, and entering the manually sampled points into a computer. For a scene image, sampling points are first marked in the image, ground objects are then classified according to those points using a traditional statistical learning method packaged in software, and minor post-processing is finally applied to obtain a result.
Although this traditional processing of remote sensing images gives accurate results, it consumes a large amount of manpower and material resources, the cost is high, and the process is slow.
Disclosure of Invention
The invention aims to provide a method and a system for segmenting a remote sensing image, which are used for realizing automatic segmentation of the remote sensing image, reducing manpower and material resources, reducing cost and improving segmentation efficiency of the remote sensing image.
In order to achieve the purpose, the invention provides the following scheme:
a segmentation method of a remote sensing image comprises the following steps:
acquiring remote sensing image samples input by a user and ground object label data corresponding to each sample;
training a recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain a neural network classification model;
acquiring a remote sensing image to be segmented;
and classifying the remote sensing image to be segmented by adopting the neural network classification model to obtain a segmentation result marked with the ground object label.
Optionally, the training of the recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain the neural network classification model specifically includes:
acquiring a plurality of hyper-parameters input by a user; the hyper-parameters comprise a learning rate, a batch processing size and an iteration number;
obtaining the optimizer type, the evaluation function and the loss function selected by the user;
training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function;
judging whether the training state after the training is finished meets the expected condition input by the user or not to obtain a first judgment result;
when the first judgment result shows that the training state after the training is finished meets the expected condition input by the user, finishing the training, and determining the recognition classification model after the training is finished as the neural network classification model;
when the first judgment result shows that the training state after this training does not accord with the expected condition input by the user, adjusting the training parameters and updating the iteration times; and returning to the step of training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function.
Optionally, the training of the recognition classification model by adopting the encapsulated neural network in combination with the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function, using the remote sensing image sample and the ground feature label data, specifically includes:
performing linear stretching and normalization operation on the remote sensing image sample to obtain a preprocessed remote sensing image sample;
and training the recognition classification model by adopting the packaged neural network in combination with the preprocessed remote sensing image sample.
Optionally, the neural network is a U-net deep neural network.
Optionally, the classifying the remote sensing image to be segmented by using the neural network classification model to obtain the segmentation result labeled with the ground object label specifically includes:
acquiring an input image of a user; the input image comprises a plurality of remote sensing images to be segmented;
classifying the input image by adopting the neural network classification model to obtain a classification result of each block of region of the input image;
and splicing the classification results of each block of area to obtain the segmentation result marked with the ground object label.
The invention also provides a segmentation system of the remote sensing image, which comprises the following steps:
the input data acquisition module is used for acquiring remote sensing image samples input by a user and ground feature label data corresponding to each sample;
the training module is used for training a recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain a neural network classification model;
the image to be segmented acquisition module is used for acquiring a remote sensing image to be segmented;
and the classification module is used for classifying the remote sensing image to be segmented by adopting the neural network classification model to obtain the segmentation result marked with the ground object label.
Optionally, the training module specifically includes:
the user input unit is used for acquiring a plurality of hyper-parameters input by a user; the hyper-parameters comprise a learning rate, a batch processing size and an iteration number;
the user selection unit is used for acquiring the optimizer type, the evaluation function and the loss function selected by the user;
the training unit is used for training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function;
the first judgment unit is used for judging whether the training state after the training is finished meets the expected condition input by the user or not to obtain a first judgment result;
the neural network classification model determining unit is used for finishing training and determining the recognition classification model after finishing the training as the neural network classification model when the first judgment result shows that the training state after finishing the training conforms to the expected condition input by the user;
the iteration unit is used for adjusting the training parameters and updating the iteration times when the first judgment result shows that the training state after the training is finished does not accord with the expected condition input by the user; and returning to the step of training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function.
Optionally, the training unit specifically includes:
the preprocessing subunit is used for performing linear stretching and normalization operation on the remote sensing image sample to obtain a preprocessed remote sensing image sample;
and the training subunit is used for training the recognition classification model by adopting the packaged neural network in combination with the preprocessed remote sensing image sample.
Optionally, the neural network is a U-net deep neural network.
Optionally, the classification module specifically includes:
an input image acquisition unit for acquiring an input image of a user; the input image comprises a plurality of remote sensing images to be segmented;
the classification unit is used for classifying the input image by adopting the neural network classification model to obtain a classification result of each block of region of the input image;
and the splicing unit is used for splicing the classification results of each block of area to obtain the segmentation result marked with the ground object label.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
(1) the advanced deep learning method is combined with the identification of the ground features of the agricultural remote sensing image, and the effect of segmenting the ground features in the remote sensing image is achieved. Compared with the traditional method, the deep learning method has higher efficiency and accuracy and wider applicable scenes;
(2) the method allows a user to freely adjust key parameters such as learning rate, iteration number, batch size, optimization function, loss function, evaluation function, normalization and the like in a certain range, so that the system operates according to the direction of user requirements, the interactivity and operability of the system are improved, and the system has better use experience;
(3) by applying the U-net deep neural network, the input data and the output data of the U-net network are images, and due to the end-to-end particularity, the application process of the network is convenient and fast, and the result is more intuitive and understandable;
(4) the agricultural remote sensing image classification method allows the agricultural remote sensing image to be classified in a batch mode, namely, a plurality of images can be classified through one-time operation, and the use efficiency and the operation cost are greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow chart of a method for segmenting a remote sensing image according to the present invention;
FIG. 2 is a schematic structural diagram of a remote sensing image segmentation system according to the present invention;
FIG. 3 is an input image during a training phase in accordance with an embodiment of the present invention;
FIG. 4 is a training parameter setting interface according to an embodiment of the present invention;
FIG. 5 is a training process interface for an embodiment of the present invention;
FIG. 6 is a graph comparing linear stretching according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a U-net neural network structure according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a convolution operation according to an embodiment of the present invention;
FIG. 9 is a schematic illustration of pooling operation according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a deconvolution operation in accordance with an embodiment of the present invention;
FIG. 11 is a remote sensing image to be segmented in an embodiment of the present invention;
FIG. 12 is a graph showing the classification results in the embodiment of the present invention;
FIG. 13 is a comparison diagram of the detail of the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
FIG. 1 is a schematic flow chart of the method for segmenting the remote sensing image according to the invention. As shown in fig. 1, the segmentation method includes the following steps:
step 100: and acquiring remote sensing image samples input by a user and ground object label data corresponding to each sample. The feature tag data are typical feature tags such as crop tag data, road tag data, residential feature tag data and farmland tag data, and the specific data of the feature tag data are different according to different application fields. For example, when applied to the field of crop identification, the ground object label is a crop label; when the method is applied to the fields of road identification and the like, the ground object label is a road label. At this stage, the user needs to input relevant data for training, the data is divided into necessary data and optional data, the necessary data is the data which needs to be input, and the optional data is the data which can be input selectively. The necessary data comprises a remote sensing image sample and ground object label data corresponding to the image; the optional data comprises verification data and pre-training weights, the verification data is used for verifying the training accuracy, and the verification data is not superposed with the training data and does not participate in training, so that the accuracy of the training model can be verified more accurately; the pre-training weights are used for pre-loading the designated weights before training, and can accelerate the training speed of the network to a certain extent, so that the model can be converged better and faster.
Step 200: and training the recognition classification model by using the remote sensing image sample and the ground feature label data and utilizing the neural network to obtain the neural network classification model. At this stage, the user can freely adjust multiple hyper-parameters in the model training within a certain range, so that the model is trained according to the direction required by the user. The specific process of training is as follows:
step 1: acquiring a plurality of hyper-parameters input by a user; the hyper-parameters include learning rate, batch size, and iteration number.
Step 2: and acquiring the optimizer type, the evaluation function and the loss function selected by the user.
Step 3: and training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function. When training is carried out, firstly, linear stretching and normalization operations are carried out on the remote sensing image sample to obtain a preprocessed remote sensing image sample. And then, training the recognition classification model by adopting the packaged neural network in combination with the preprocessed remote sensing image sample.
Step 4: and judging whether the training state after the training is finished meets the expected condition input by the user. If so, finishing training, and determining the recognition classification model after the training is finished as the neural network classification model; if not, adjusting the training parameters, updating the iteration times, and returning to Step 3 to continue training.
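A minimal sketch of how Steps 1 to 4 could map onto code is given below, assuming a Keras/TensorFlow stack; the invention does not name a framework, and all identifiers here are illustrative:

```python
# Hedged sketch of Steps 1-4, assuming Keras/TensorFlow; the patent names no framework.
import tensorflow as tf

def train_classifier(model, images, labels, learning_rate=1e-4, batch_size=8,
                     epochs=50, val_data=None, pretrained_weights=None):
    if pretrained_weights:                       # optional pre-training weights (Step 100)
        model.load_weights(pretrained_weights)
    # Steps 1-2: user-supplied hyper-parameters, optimizer, loss and evaluation function.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # Step 3: train; Step 4's expectation check maps onto early stopping here.
    stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                            restore_best_weights=True)
    return model.fit(images, labels, batch_size=batch_size, epochs=epochs,
                     validation_data=val_data,
                     callbacks=[stop] if val_data else [])
```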
Step 300: and acquiring a remote sensing image to be segmented. After the neural network classification model is trained, the remote sensing image segmentation process can be carried out, a user only needs to input the remote sensing image to be segmented, and the user can input a plurality of remote sensing images at one time.
Step 400: and classifying the remote sensing image to be segmented by adopting a neural network classification model to obtain a segmentation result labeled with the ground object label. The step can adopt batch processing operation to process a plurality of remote sensing images without sequentially processing a single remote sensing image. The specific process is as follows:
firstly, an input image of a user is obtained, wherein the input image comprises a plurality of remote sensing images to be segmented.
Secondly, classifying the input image by adopting the neural network classification model to obtain a classification result of each block of the input image.
And finally, splicing the classification results of each block of area to obtain the segmentation result marked with the ground object label. The segmentation result corresponds to the ground object label of step 100: for example, if the ground object label in step 100 is a crop label, a segmentation result labeled with the crop label is obtained here; if the ground object label in step 100 is a road label, a segmentation result labeled with the road label is obtained here.
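The block-wise classification and splicing just described might be sketched as follows (assuming a Keras-style model with a predict method; the tile size of 256 is an illustrative choice):

```python
# Sketch of the tile-classify-stitch flow (numpy; tile size is illustrative).
import numpy as np

def segment_large_image(model, image, tile=256):
    h, w = image.shape[:2]
    # Pad so the image divides evenly into tiles.
    ph, pw = (-h) % tile, (-w) % tile
    padded = np.pad(image, ((0, ph), (0, pw), (0, 0)), mode='reflect')
    out = np.zeros(padded.shape[:2], dtype=np.uint8)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            block = padded[y:y+tile, x:x+tile]
            # Classify one region; argmax over class probabilities gives the label map.
            probs = model.predict(block[None, ...], verbose=0)[0]
            out[y:y+tile, x:x+tile] = np.argmax(probs, axis=-1)
    return out[:h, :w]  # stitched result, cropped back to the original extent
```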
Corresponding to the segmentation method shown in fig. 1, the invention further provides a segmentation system of the remote sensing image, and fig. 2 is a schematic structural diagram of the segmentation system of the remote sensing image. As shown in fig. 2, the segmentation system includes the following structure:
the input data acquisition module 201 is configured to acquire remote sensing image samples input by a user and ground feature label data corresponding to each sample.
The training module 202 is used for training a recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain a neural network classification model;
the to-be-segmented image acquisition module 203 is used for acquiring a to-be-segmented remote sensing image;
and the classification module 204 is configured to classify the remote sensing image to be segmented by using the neural network classification model to obtain a segmentation result labeled with a ground object label.
As another embodiment, the training module 202 in the segmentation system of the present invention specifically includes:
the user input unit is used for acquiring a plurality of hyper-parameters input by a user; the hyper-parameters comprise a learning rate, a batch processing size and an iteration number;
the user selection unit is used for acquiring the optimizer type, the evaluation function and the loss function selected by the user;
the training unit is used for training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function;
the first judgment unit is used for judging whether the training state after the training is finished meets the expected condition input by the user or not to obtain a first judgment result;
the neural network classification model determining unit is used for finishing training and determining the recognition classification model after finishing the training as the neural network classification model when the first judgment result shows that the training state after finishing the training conforms to the expected condition input by the user;
the iteration unit is used for adjusting the training parameters and updating the iteration times when the first judgment result shows that the training state after the training is finished does not accord with the expected condition input by the user; and returning to the step of training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function.
As another embodiment, the training unit in the segmentation system of the present invention specifically includes:
the preprocessing subunit is used for performing linear stretching and normalization operation on the remote sensing image sample to obtain a preprocessed remote sensing image sample;
and the training subunit is used for training the recognition classification model by adopting the packaged neural network in combination with the preprocessed remote sensing image sample.
As another embodiment, the classification module 204 in the segmentation system of the present invention specifically includes:
an input image acquisition unit for acquiring an input image of a user; the input image comprises a plurality of remote sensing images to be segmented;
the classification unit is used for classifying the input image by adopting the neural network classification model to obtain a classification result of each block of region of the input image;
and the splicing unit is used for splicing the classification results of each block of area to obtain the segmentation result marked with the ground object label.
The following provides a detailed description of the embodiments of the invention.
The specific embodiment applies an artificial intelligence method to simplify the identification of ground objects in remote sensing images, reduces manual participation, and establishes an automatic method that obtains the required result directly from the input remote sensing image without manually adding information such as sampling points. The specific implementation case comprises two parts: a trainer and a classifier.
And the training part is used for training the recognition classification model by utilizing a pre-packaged neural network according to the paired remote sensing image samples and label data which meet the format requirements. After training is finished, a user obtains a neural network classification model, and the model marks corresponding labels for the input remote sensing images under similar scenes. The software allows the user to freely adjust a range of parameters over a training session. The specific process is as follows:
(1) inputting data
At this stage, the user needs to input relevant data for training, and the data is divided into necessary data and optional data. The necessary data comprises remote sensing images and the label data corresponding to the images; the optional data includes validation data and pre-training weights. As shown in FIG. 3, FIG. 3 is an input image of the training phase of the embodiment of the present invention, in which general farmland data is used as the input. In FIG. 3, the four parts of the first row are original remote sensing image samples, and the four parts of the second row are the corresponding label images: the white areas are manual annotations of the cultivated-land areas of the first row, and the black areas are non-cultivated areas, such as wasteland and residential areas.
(2) Setting training parameters
At this stage, the user can freely adjust multiple hyper-parameters in the model training within a certain range, so that the model is trained according to the direction required by the user. The hyper-parameters that can be adjusted include basic parameters of learning rate (learning rate), batch size (batch size), and iteration number (epoch); parameters that may be selected include the normalization (normalization) mode, the optimizer (optimizer) type for network convergence, the evaluation function (metrics), and the loss function (loss function). The selection of the normalization mode has certain influence on the processing of subsequent data. As shown in fig. 4, fig. 4 is a training parameter setting interface according to an embodiment of the present invention.
With respect to the meaning of the above parameters: a hyper-parameter is a parameter whose value is set before the learning process begins, rather than parameter data obtained by training; in general, hyper-parameters need to be optimized, and a group of optimal hyper-parameters is selected for the learner to improve learning performance and effect. The learning rate controls the extent to which the parameters change along the gradient in each iteration; the lower the learning rate, the slower the loss function changes. An iteration (epoch) is completed when the model has trained on all training samples once, and the iteration number is the maximum number of such iterations in model training. The optimizer updates and computes the network parameters that influence model training and model output so that they approach or reach an optimal value, thereby minimizing (or maximizing) the loss function. The evaluation function is a function that evaluates the model. Normalization processes the data through a specific algorithm and then limits it to a certain range; normalization makes data processing more convenient and accelerates convergence of the model. The loss function measures the degree of deviation between the model's predicted value and the actual value.
(3) Deep learning training and adjusting training parameters
At this stage, the user does not need to operate, the system automatically trains and displays the training information in real time, as shown in fig. 5, fig. 5 is a training process interface of the embodiment of the present invention.
First, the back end of the trainer performs linear stretching and normalization operations on the remote sensing sample images. Linear stretching is a data enhancement operation whose aim is to highlight objects or gray-scale regions of interest while relatively suppressing uninteresting gray-scale regions, thereby improving both the visual effect and the normalization behavior of an image. Linear stretching may be direct stretching, clipping stretching, or piecewise stretching; a reasonable stretching mode is selected according to actual requirements. In this embodiment, clipping stretching is selected: pixel values below the 2% percentile and above the 98% percentile are removed, because abnormal values are most likely to occur outside this range and would affect the visual representation and data normalization of the image.
The data is first processed in a linear-stretch normalization mode. The linear stretching formula is g(x, y) = [(d − c)/(b − a)]·[f(x, y) − a] + c, where f(x, y) is the pixel value of the original remote sensing image, a and b are the gray levels at the 2% and 98% percentiles of the original image respectively (the 2% linear stretch), and [c, d] is the target gray-level range into which the original image is mapped.
Normalization is the data processing operation of uniformly mapping data to the [0, 1] interval, which makes it convenient to compare and weight indexes of different units or magnitudes. The normalization formula is x* = (x − min)/(max − min), where max is the maximum value of the sample data, min is the minimum value of the sample data, x is the sample data to be normalized, and x* is the normalized result.
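Taken together, the clipping stretch and the normalization might be implemented as in the following sketch (numpy; the target range [c, d] = [0, 255] is an illustrative choice):

```python
# Preprocessing sketch matching the two formulas above: 2% percentile clip-stretch,
# then min-max normalization to [0, 1]. The [0, 255] target range is illustrative.
import numpy as np

def preprocess(band):
    a, b = np.percentile(band, (2, 98))   # 2% and 98% percentile gray levels
    stretched = np.clip(band, a, b)
    c, d = 0.0, 255.0                     # target gray-level range [c, d]
    stretched = (d - c) / (b - a) * (stretched - a) + c  # g = [(d-c)/(b-a)](f-a) + c
    x_min, x_max = stretched.min(), stretched.max()
    return (stretched - x_min) / (x_max - x_min)         # x* = (x-min)/(max-min)
```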
As shown in fig. 6, fig. 6 is a comparison graph of linear stretching according to an embodiment of the present invention, where the left part of fig. 6 is the original before stretching, and the right part of fig. 6 is the image after linear stretching by 2%, and it can be seen that the visual effect and the contrast effect of the image are significantly improved by linear stretching.
And then, the trainer trains the remote sensing image recognition classification model by utilizing the pre-packaged neural network. The neural network packaged by the trainer is a U-net neural network taking inceptionv3 as backbone. After one training iteration is finished, if the training state does not meet the expectation, the model adjusts the training parameters and continues to carry out the next training iteration; and if the training state is in accordance with the expectation, finishing the training and generating a classification model after the training is finished. The following is the training process of the neural network, the relevant parameter calculation and the backbone of the encapsulation network.
a. Neural network training process
The U-net neural network is an improvement on the fully convolutional network (FCN). The U-net neural network modifies and extends the FCN's network structure so that accurate segmentation results can be obtained when training with only a small amount of data; the specific structure is shown in fig. 7, a schematic diagram of the U-net neural network structure of the specific embodiment of the present invention.
The remote sensing image data is input into the U-net network after linear stretching and normalization, and is first converted into a 568x568x64 feature map through convolution operations. As shown in fig. 8, a schematic view of the convolution operation, the image 5x5 part represents the image being convolved, the filter 3x3 part represents the convolution kernel performing the convolution, and the feature map 3x3 represents the result of convolving the image. Number each pixel of the sample image so that x_{i,j} denotes the element in row i, column j of the image; number each weight of the filter so that w_{m,n} denotes the weight in row m, column n, and let w_b denote the bias term of the filter; number each element of the feature map so that a_{i,j} denotes the element in row i, column j of the feature map. Denoting the activation function by f and taking the relu activation function as an example, a convolution is calculated by the formula a_{i,j} = f( Σ_{m=0}^{2} Σ_{n=0}^{2} w_{m,n} · x_{i+m, j+n} + w_b ). The window amplitude of one move of the convolution kernel (filter) is the convolution stride; the stride of U-net is 1, and the padding strategy is valid (i.e., windows that cannot be fully convolved are discarded), so the size of the feature map is reduced by 2 after each convolution.
The data is then converted into a 284x284x64 feature map by max pooling. Pooling divides the entire picture into several non-overlapping small blocks of the same size (the pooling size). Only the maximum number is kept in each small block; the other nodes are discarded, and the original planar structure is retained in the result. Fig. 9 is a schematic diagram of the pooling operation of an embodiment of the present invention. Pooling performs down-sampling (subsampling) to remove redundant information; the pooling window size of U-net is 2x2, and the padding strategy is likewise valid.
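These shape transitions can be checked with a few lines, assuming Keras; the 572x572 input is the classic U-net size implied by the 568x568x64 feature map:

```python
# Encoder shape check (Keras assumed): valid convolutions shrink by 2, pooling halves.
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(572, 572, 1))
x = layers.Conv2D(64, 3, padding='valid', activation='relu')(inp)  # -> 570x570x64
x = layers.Conv2D(64, 3, padding='valid', activation='relu')(x)    # -> 568x568x64
x = layers.MaxPooling2D(pool_size=2)(x)                            # -> 284x284x64
print(x.shape)  # (None, 284, 284, 64)
```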
Then, the pooled 284x284x64 feature map undergoes several further convolution and pooling operations and is converted into a 26x26x1024 feature map, which completes the feature-extraction part of the neural network. Next comes the up-sampling part of the neural network. The 26x26x1024 feature map produced by the convolution and pooling operations is deconvolved (transposed convolution), and the deconvolved image is merged and fused with the intermediate image of corresponding size obtained in the earlier feature-extraction part; before merging, the intermediate image must be cropped so that its size is consistent with the image obtained by deconvolution, since the two can only be fused once their sizes are aligned. One fusion is performed after each deconvolution operation, and the fused image serves as the starting image for the next deconvolution and fusion. Such repeated operations introduce the original features of the image into the deconvolution process, eventually producing a 100x100x256 feature map. Deconvolution is the inverse operation of convolution, as shown in fig. 10, a schematic diagram of the deconvolution operation according to the embodiment of the present invention. The deconvolution kernel size of U-net is 2x2, i.e., each deconvolution doubles the feature-map size.
Finally, after several rounds of deconvolution, convolution and pooling, a 388x388x64 feature map is obtained; the final classification result picture, namely the road identification picture, is produced directly from it through a convolution operation with a 1x1 convolution kernel.
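One up-sampling step of the kind described above might be sketched as follows (Keras assumed; sizes are static and illustrative):

```python
# One up-sampling step: deconvolution doubles the map, the encoder's intermediate
# map is centre-cropped to match, and the two are concatenated (fused). Keras assumed.
from tensorflow.keras import layers

def up_block(x, skip, filters):
    x = layers.Conv2DTranspose(filters, 2, strides=2)(x)   # 2x2 deconvolution: size doubles
    crop = (skip.shape[1] - x.shape[1]) // 2               # cut the intermediate image to size
    skip = layers.Cropping2D(crop)(skip)
    x = layers.Concatenate()([x, skip])                    # reintroduce original features
    x = layers.Conv2D(filters, 3, padding='valid', activation='relu')(x)
    x = layers.Conv2D(filters, 3, padding='valid', activation='relu')(x)
    return x

# The final 388x388x64 map becomes the classification picture via a 1x1 convolution:
# outputs = layers.Conv2D(num_classes, 1, activation='softmax')(x)
```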
b. Calculation of correlation parameters
U-net computes a pixel-wise softmax over the final feature map and combines it with the cross-entropy loss function as the energy function of the whole network. The output layer of the softmax activation function is calculated as p_k(x) = exp(a_k(x)) / Σ_{k'=1}^{K} exp(a_{k'}(x)),
where a_k(x) represents the activation in feature channel k at the pixel at position x in the feature map, with x ∈ Ω and Ω ⊂ Z², i.e., x belongs to the space Ω, a subset of the set of integer pairs Z². K is the total number of pixel classes, and p_k(x) is the approximated maximum function. This is the definition of softmax.
The loss function is the cross-entropy loss function, which may also be called the negative log-likelihood, as shown below: E = −Σ_{x∈Ω} w(x) · log(p_{ℓ(x)}(x)),
where ℓ(x) is the true label of the pixel at position x, so that log p_{ℓ(x)}(x) penalizes the deviation of the network's computed result at pixel x from the value on the actual label, and w(x) is a per-pixel weight introduced below.
Here, a weighted loss function is used, with its own weight for each pixel. The U-net network obtains the weight of each pixel in the loss function by pre-computing a weight map; this compensates for the different frequencies of each class of pixels in the training data and makes the network pay more attention to learning the edges between touching cells. The segmentation boundary is computed using morphological operations, and the weight map is calculated as w(x) = w_c(x) + w_0 · exp(−(d_1(x) + d_2(x))² / (2σ²)),
where w_c(x) is a weight map for balancing the class frequencies, d_1(x) is the distance from the pixel to the nearest cell boundary, and d_2(x) is the distance from the pixel to the second-nearest cell boundary.
c. Backbone of packaging network
The feature-extraction part of the neural network can be replaced by another neural network; the replacement network is called a backbone. The backbone of the network encapsulated by this method is an Inception-V3 network. The Inception network is a deep learning network for image classification, and the greatest characteristic of the Inception architecture is its higher efficiency in using the computational resources inside the network: through a series of architecture designs, its researchers ensured that the computational load grows only modestly as the depth and width of the network increase. The Inception network applies the Hebbian principle and multi-scale processing in its design and construction, and these two points improve the efficiency and results of the Inception network. The construction of Inception v2 borrows from the VGG network and improves on it: a large convolution is replaced by two small convolutions, which reduces the parameter scale, increases the nonlinear transformations, and improves the network's ability to learn features. Inception v3 inherits the improvements of Inception v2 and, on that basis, also uses the root-mean-square propagation (RMSProp) optimization algorithm, factorized convolutions, label-smoothing regularization and the like, and adds batch normalization to the auxiliary classifier. Furthermore, Inception v3 applies a key innovative concept, Factorization: in constructing the Inception v3 network, larger convolutions are decomposed into smaller ones, further saving parameters, increasing operation efficiency and speed, and reducing overfitting.
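The invention does not name a library for this encapsulation; as an assumption, one open-source option that provides a Keras U-net with a replaceable backbone is the segmentation_models library:

```python
# Assumed illustration only: the open-source `segmentation_models` Keras library
# offers a U-net with a swappable backbone; the patent does not name any library.
import segmentation_models as sm

model = sm.Unet(backbone_name='inceptionv3',  # Inception-V3 as feature-extraction backbone
                encoder_weights='imagenet',   # optional pre-trained encoder weights
                classes=2,
                activation='softmax')
```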
(4) Artificial intelligence model storage
At this stage, the artificial intelligent remote sensing image classification model generated at the previous stage is stored in a designated directory, and mainly stores the weight data of each node in the network.
And the classifier is used for performing road classification and identification on the input remote sensing image by utilizing an image classification and identification model appointed by a user, storing a classification result in a preset directory, and outputting a calculation result of the model, namely a road identification result picture required by the user without extra operation processing. If the user does not have a qualified model, the classification model needs to be trained in advance by using a trainer, and the part is equivalent to a black box process. The specific process is as follows:
(1) inputting remote sensing image
At this stage, the user needs to input the remote sensing images to be classified, expecting to obtain automatically labeled data. To facilitate user processing, this part of the classifier allows batch operations, i.e., several remote sensing images can be input in a single classification process. An example of the input is shown in fig. 11, a remote sensing image to be segmented in the embodiment of the present invention; the figure includes two partial images, a left part and a right part.
(2) Loading of artificial intelligence models
In this stage, the classification model trained by the trainer is used: the data is preprocessed and converted in format before being sent to the model for calculation. This processing is the same as the preprocessing performed before model training in the trainer and is therefore not described again. At the back end, a network is established and the weight file stored by the trainer is loaded.
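This load-and-run stage might be sketched as follows (Keras assumed; the path and the network-builder function are illustrative placeholders):

```python
# Sketch of the classifier's back end (Keras assumed; path and builder are illustrative).
def load_classifier(build_network, weights_path='models/unet_weights.h5'):
    model = build_network()           # re-establish the same network as the trainer
    model.load_weights(weights_path)  # load the weight file stored by the trainer
    return model
```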
(3) The result of recognition
At this stage, after the classifier performs black box calculation on the image data, a classification result for each block of region of the input image can be obtained, and the rear end can automatically piece together the results of each part, so that labeled data can be obtained. As shown in fig. 12, fig. 12 is a diagram of the classification result in the embodiment of the present invention, which corresponds to the two partial images shown in fig. 11.
After the classification result is obtained, a comparison observation can be further performed on the local detail, as shown in fig. 13, fig. 13 is a comparison graph of the local detail in the specific embodiment of the present invention, the left part of fig. 13 is the local detail in the right part of fig. 11, and the right part of fig. 13 is the comparison graph of the local classification result in the right part of fig. 12.
The specific implementation case has the following advantages:
the complex and difficult bottom layer codes are packaged completely, so that the user can still use the bottom layer codes normally without knowing a specific implementation principle. In addition, the system also provides a complete visual interface, so that the use experience of a user is further improved, and the practicability and convenience are improved;
the advanced deep learning method is combined with the agricultural remote sensing image ground feature identification, and the method is essentially different from the traditional classification method, and is an innovation in the method. Compared with the traditional method, the deep learning method has higher efficiency and accuracy and wider applicable scenes;
the U-net deep neural network is applied, and the neural network has excellent effect on image classification and segmentation and can obtain satisfactory effect. The input data and the output data of the U-net network are images, and due to the end-to-end particularity, the application process of the network is convenient and fast, and the result is more visual and understandable;
the method allows a user to freely adjust key parameters such as learning rate, iteration number, batch size, optimization function, loss function, evaluation function, normalization and the like in a certain range, so that the system operates according to the direction of user requirements, the interactivity and operability of the system are improved, and the system has better use experience;
data enhancement, verification of data and pre-training weights are added. Data enhancement is used for enhancing the classification effect of the image; the verification data is used for evaluating the result; the pre-training weights are used to speed up the convergence of the model. These options enrich the utility and integrity of the system, further meeting user needs and enhancing the use experience.
The agricultural remote sensing images or the road remote sensing images are allowed to be classified in a batch mode, namely, a plurality of images can be classified through one-time operation, and the use efficiency and the operation cost are greatly improved;
the idea of artificial intelligence is combined with the remote sensing image crop identification classification or road identification classification, which is an innovation on the idea. The traditional thought depends on the experience of an operator, the accuracy is not guaranteed, and the efficiency is low. The artificial intelligence idea obviously improves the accuracy, efficiency and stability, and greatly optimizes the use experience. The idea also conforms to the current technological trend and the overall trend, and has high advancement and innovation.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (10)
1. A method for segmenting a remote sensing image is characterized by comprising the following steps:
acquiring remote sensing image samples input by a user and ground object label data corresponding to each sample;
training a recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain a neural network classification model;
acquiring a remote sensing image to be segmented;
and classifying the remote sensing image to be segmented by adopting the neural network classification model to obtain a segmentation result marked with the ground object label.
2. The method for segmenting the remote sensing image according to claim 1, wherein the training of the recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain the neural network classification model specifically comprises the following steps:
acquiring a plurality of hyper-parameters input by a user; the hyper-parameters comprise a learning rate, a batch processing size and an iteration number;
obtaining the optimizer type, the evaluation function and the loss function selected by the user;
training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function;
judging whether the training state after the training is finished meets the expected condition input by the user or not to obtain a first judgment result;
when the first judgment result shows that the training state after the training is finished meets the expected condition input by the user, finishing the training, and determining the recognition classification model after the training is finished as the neural network classification model;
when the first judgment result shows that the training state after this training does not accord with the expected condition input by the user, adjusting the training parameters and updating the iteration times; and returning to the step of training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function.
3. The method for segmenting remote sensing images according to claim 2, wherein the training of the recognition classification model by adopting the encapsulated neural network in combination with the plurality of hyper-parameters, the optimizer type, the evaluation function and the loss function, using the remote sensing image samples and the ground feature label data, specifically comprises:
performing linear stretching and normalization operation on the remote sensing image sample to obtain a preprocessed remote sensing image sample;
and training the recognition classification model by adopting the packaged neural network in combination with the preprocessed remote sensing image sample.
4. The method for segmenting a remote sensing image according to claim 1, characterized in that the neural network is a U-net deep neural network.
5. The remote sensing image segmentation method according to claim 1, wherein the classifying the remote sensing image to be segmented by using the neural network classification model to obtain the segmentation result labeled with the ground object label specifically comprises:
acquiring an input image of a user; the input image comprises a plurality of remote sensing images to be segmented;
classifying the input image by adopting the neural network classification model to obtain a classification result of each block of region of the input image;
and splicing the classification results of each block of area to obtain the segmentation result marked with the ground object label.
6. A system for segmenting a remotely sensed image, comprising:
the input data acquisition module is used for acquiring remote sensing image samples input by a user and ground feature label data corresponding to each sample;
the training module is used for training a recognition classification model with a neural network by using the remote sensing image sample and the ground feature label data to obtain a neural network classification model;
the image to be segmented acquisition module is used for acquiring a remote sensing image to be segmented;
and the classification module is used for classifying the remote sensing image to be segmented by adopting the neural network classification model to obtain the segmentation result marked with the ground object label.
7. The remote sensing image segmentation system of claim 6, wherein the training module specifically comprises:
the user input unit is used for acquiring a plurality of hyper-parameters input by a user; the hyper-parameters comprise a learning rate, a batch processing size and an iteration number;
the user selection unit is used for acquiring the optimizer type, the evaluation function and the loss function selected by the user;
the training unit is used for training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function;
the first judgment unit is used for judging whether the training state after the training is finished meets the expected condition input by the user or not to obtain a first judgment result;
the neural network classification model determining unit is used for finishing training and determining the recognition classification model after finishing the training as the neural network classification model when the first judgment result shows that the training state after finishing the training conforms to the expected condition input by the user;
the iteration unit is used for adjusting the training parameters and updating the iteration times when the first judgment result shows that the training state after the training is finished does not accord with the expected condition input by the user; and returning to the step of training the recognition classification model by using the remote sensing image sample and the ground feature label data by adopting a packaged neural network and combining the multiple hyper-parameters, the optimizer type, the evaluation function and the loss function.
8. The remote sensing image segmentation system according to claim 7, wherein the training unit specifically includes:
the preprocessing subunit is used for performing linear stretching and normalization operation on the remote sensing image sample to obtain a preprocessed remote sensing image sample;
and the training subunit is used for training the recognition classification model by adopting the packaged neural network in combination with the preprocessed remote sensing image sample.
9. The remote sensing image segmentation system according to claim 6, wherein the neural network is a U-net deep neural network.
10. The remote sensing image segmentation system according to claim 6, wherein the classification module specifically includes:
the input image acquisition unit is used for acquiring an input image from the user, the input image comprising a plurality of remote sensing images to be segmented;
the classification unit is used for classifying the input image by adopting the neural network classification model to obtain a classification result for each block region of the input image;
and the splicing unit is used for splicing the classification results of the block regions to obtain the segmentation result marked with the ground object label.
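Claim 10's block-wise classification and splicing reduces to tiling, per-tile prediction and reassembly. The sketch assumes non-overlapping tiles whose size divides the image dimensions; production systems usually add overlap and edge padding, which are omitted here.

```python
import numpy as np

def classify_and_stitch(model, image, tile=256):
    """Split the image into tiles, classify each tile with the trained model,
    and splice the per-tile label maps back into a full segmentation map."""
    h, w, _ = image.shape
    result = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = image[y:y + tile, x:x + tile][np.newaxis]  # add batch axis
            probs = model.predict(block, verbose=0)[0]         # (tile, tile, classes)
            result[y:y + tile, x:x + tile] = probs.argmax(-1)  # ground object labels
    return result
```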
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911111277.0A CN110852393A (en) | 2019-11-14 | 2019-11-14 | Remote sensing image segmentation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110852393A (en) | 2020-02-28 |
Family
ID=69601697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911111277.0A CN110852393A (en) | 2019-11-14 | 2019-11-14 | Remote sensing image segmentation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110852393A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389051A (en) * | 2018-09-20 | 2019-02-26 | South China Agricultural University | Building remote sensing image recognition method based on convolutional neural networks |
Non-Patent Citations (5)
Title |
---|
SCHROFF, F., et al.: "FaceNet: A unified embedding for face recognition and clustering", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
ZHOU Chunguang et al.: "Computational Intelligence: Artificial Neural Networks, Fuzzy Systems and Evolutionary Computation", 31 July 2005, Jilin University Press *
FANG Weihua et al.: "Perception, Fusion and Prediction of the Safety State of River-Crossing and River-Blocking Structures", 31 December 2018, Hohai University Press *
YANG Xianwei: "College Physics, Volume II", 31 January 2017, Beijing University of Posts and Telecommunications Press *
XU Dianyuan et al.: "Remote Sensing Image Information Processing", 31 December 1990, Beijing: Astronautic Publishing House *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111414962A (en) * | 2020-03-19 | 2020-07-14 | AInnovation (Chongqing) Technology Co., Ltd. | Image classification method introducing object relationship |
CN111476199A (en) * | 2020-04-26 | 2020-07-31 | State Grid Hunan Electric Power Co., Ltd. | Power transmission and transformation project common grave ground identification method based on high-definition aerial survey image |
CN111582176A (en) * | 2020-05-09 | 2020-08-25 | Hubei Tongcheng General Aviation Co., Ltd. | Visible light remote sensing image withered and dead wood recognition software system and recognition method |
CN112001293A (en) * | 2020-08-19 | 2020-11-27 | Sichuang Technology Co., Ltd. | Remote sensing image ground object classification method combining multi-scale information and coding and decoding network |
CN112329852A (en) * | 2020-11-05 | 2021-02-05 | Xi'an Zetta Cloud Technology Co., Ltd. | Classification method and device for earth surface coverage images and electronic equipment |
CN112435274B (en) * | 2020-11-09 | 2024-05-07 | Guojiao Spatial Information Technology (Beijing) Co., Ltd. | Remote sensing image planar ground object extraction method based on object-oriented segmentation |
CN112435274A (en) * | 2020-11-09 | 2021-03-02 | Guojiao Spatial Information Technology (Beijing) Co., Ltd. | Remote sensing image planar ground object extraction method based on object-oriented segmentation |
CN112634284A (en) * | 2020-12-22 | 2021-04-09 | Shanghai Voxel Information Technology Co., Ltd. | Weight map loss-based staged neural network CT organ segmentation method and system |
CN112906537B (en) * | 2021-02-08 | 2023-12-01 | Beijing Aiersi Times Technology Co., Ltd. | Crop identification method and system based on convolutional neural network |
CN112906537A (en) * | 2021-02-08 | 2021-06-04 | Beijing Aiersi Times Technology Co., Ltd. | Crop identification method and system based on convolutional neural network |
CN113344871A (en) * | 2021-05-27 | 2021-09-03 | China Agricultural University | Agricultural remote sensing image analysis method and system |
CN113344871B (en) * | 2021-05-27 | 2024-08-20 | China Agricultural University | Agricultural remote sensing image analysis method and system |
CN113627288A (en) * | 2021-07-27 | 2021-11-09 | Wuhan University | Intelligent information label acquisition method for massive images |
CN113627288B (en) * | 2021-07-27 | 2023-08-18 | Wuhan University | Intelligent information label acquisition method for massive images |
CN113792764A (en) * | 2021-08-24 | 2021-12-14 | Beijing Institute of Remote Sensing Equipment | Sample expansion method, system, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110852393A (en) | Remote sensing image segmentation method and system | |
CN110111335B (en) | Urban traffic scene semantic segmentation method and system for adaptive countermeasure learning | |
CN110443818B (en) | Graffiti-based weak supervision semantic segmentation method and system | |
Gui et al. | Joint learning of visual and spatial features for edit propagation from a single image | |
CN107833183B (en) | Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network | |
CN113160062B (en) | Infrared image target detection method, device, equipment and storage medium | |
CN108197669B (en) | Feature training method and device of convolutional neural network | |
CN113378812A (en) | Digital dial plate identification method based on Mask R-CNN and CRNN | |
CN117115640A | Improved YOLOv8-based pest and disease target detection method, device and equipment |
CN112950780A (en) | Intelligent network map generation method and system based on remote sensing image | |
Mohmmad et al. | A survey machine learning based object detections in an image | |
CN113989261A | Unmanned aerial vehicle view infrared image photovoltaic panel boundary segmentation method based on improved Unet |
CN115018039A (en) | Neural network distillation method, target detection method and device | |
CN113205103A (en) | Lightweight tattoo detection method | |
CN116580184A | YOLOv7-based lightweight model |
CN114120359A (en) | Method for measuring body size of group-fed pigs based on stacked hourglass network | |
CN114898359B | Litchi pest and disease detection method based on improved EfficientDet |
CN114550014B (en) | Road segmentation method and computer device | |
CN116740362A (en) | Attention-based lightweight asymmetric scene semantic segmentation method and system | |
CN115690129A (en) | Image segmentation paraphrasing method based on multi-expert mixing, electronic equipment and storage medium | |
CN114187506B (en) | Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network | |
CN113496148A (en) | Multi-source data fusion method and system | |
CN111079807A (en) | Ground object classification method and device | |
CN117274750B (en) | Knowledge distillation semi-automatic visual labeling method and system | |
CN117876383A (en) | Road surface strip crack detection method based on yolov5l |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200228 |