
CN113256651B - Model training method and device, and image segmentation method and device - Google Patents


Info

Publication number
CN113256651B
CN113256651B (application CN202110560183.2A)
Authority
CN
China
Prior art keywords
segmentation
data
image
model
segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110560183.2A
Other languages
Chinese (zh)
Other versions
CN113256651A (en)
Inventor
于朋鑫
王少康
陈宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202110560183.2A priority Critical patent/CN113256651B/en
Publication of CN113256651A publication Critical patent/CN113256651A/en
Application granted granted Critical
Publication of CN113256651B publication Critical patent/CN113256651B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a model training method and device, and an image segmentation method and device, for training an initial segmentation model that includes a feedback propagation module to generate an image segmentation model that includes the feedback propagation module. The model training method includes the following steps: determining an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented; segmenting the image sample to be segmented with the initial segmentation model to determine a plurality of intermediate segmentation prediction data, where each intermediate segmentation prediction datum is determined based on the output result of one loop-iteration calculation of the feedback propagation module; determining segmentation uncertainty data corresponding to the image sample to be segmented based on the intermediate segmentation prediction data; and training the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data sample to generate the image segmentation model.

Description

Model training method and device, and image segmentation method and device
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a model training method, an image segmentation method, a model training apparatus, an image segmentation apparatus, an electronic device, and a computer-readable storage medium.
Background
In medical image analysis, the segmentation of a Region of Interest (ROI) plays an important role in guiding subsequent analysis. However, manual segmentation is both uneven in quality and hugely labor-intensive, so training a deep-learning-based image segmentation model to automatically segment the ROI in medical images has become a research hotspot.
When an image segmentation model is trained with existing model training methods, the optimization direction of the segmentation model deviates because one-way feature extraction under-utilizes the image information, the annotations are noisy, and so on. When such a model is then used for segmentation, over-confident misjudgments can occur, so the segmentation accuracy of the image segmentation model is low.
Disclosure of Invention
In view of this, embodiments of the present application provide a model training method, an image segmentation method, a model training device, an image segmentation device, an electronic device, and a computer-readable storage medium, so as to solve the prior-art technical problem that deviation of the optimization direction during model training results in low model segmentation accuracy.
According to an aspect of the present application, an embodiment of the present application provides a model training method for training an initial segmentation model including a feedback propagation module to generate an image segmentation model including the feedback propagation module, the method including: determining an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented; segmenting the image sample to be segmented by utilizing the initial segmentation model to determine a plurality of intermediate segmentation prediction data, wherein the intermediate segmentation prediction data are segmentation prediction data determined based on an output result of the loop iteration calculation of the feedback propagation module; determining segmentation uncertainty data corresponding to the image sample to be segmented based on the intermediate segmentation prediction data; training the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model.
In one embodiment, the number of loop-iteration calculations is N, where N is a positive integer greater than or equal to 2, and the training of the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model comprises: determining N-1 intermediate loss functions based on the intermediate segmentation prediction data corresponding to the 1st to (N-1)th loop iterations and the segmentation annotation data samples; determining a loss function combined with uncertainty based on the segmentation uncertainty data, the intermediate segmentation prediction data corresponding to the Nth loop iteration, and the segmentation annotation data samples; and training the initial segmentation model based on the N-1 intermediate loss functions and the loss function combined with uncertainty to generate the image segmentation model.
In one embodiment, determining the loss function combined with uncertainty based on the segmentation uncertainty data, the intermediate segmentation prediction data corresponding to the Nth loop iteration, and the segmentation annotation data samples includes: performing a loss calculation on the intermediate segmentation prediction data corresponding to the Nth loop iteration and the segmentation annotation data samples to determine a first initial loss value; taking the segmentation uncertainty data as weight values and performing a weighting operation on the first initial loss value to obtain a weighted loss value; and obtaining the loss function combined with uncertainty based on the cross-entropy function and the weighted loss value.
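This weighting step might be sketched as follows. All names here are illustrative, and the direction of the weighting is an assumption (the patent only says the segmentation uncertainty data serve as weight values); the sketch down-weights the per-pixel cross-entropy loss where the model is uncertain:

```python
import numpy as np

def pixel_bce(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy (the 'first initial loss value')."""
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def uncertainty_weighted_loss(pred_n, target, uncertainty):
    """Weight the Nth iteration's per-pixel loss so that pixels the model
    is uncertain about contribute less (this direction is our assumption)."""
    weighted = (1.0 - uncertainty) * pixel_bce(pred_n, target)
    return float(weighted.mean())
```

With zero uncertainty the weighted loss reduces to the plain cross-entropy mean; with full uncertainty everywhere the loss vanishes, giving maximal tolerance to ambiguous pixels.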
In one embodiment, determining the N-1 intermediate loss functions based on the intermediate segmentation prediction data corresponding to the 1st to (N-1)th loop iterations and the segmentation annotation data samples includes: for each intermediate segmentation prediction datum of the 1st to (N-1)th loop iterations, performing a loss calculation on that datum and the segmentation annotation data samples to determine a corresponding second initial loss value; and determining the N-1 intermediate loss functions based on the cross-entropy function and the N-1 second initial loss values.
In one embodiment, the training the initial segmentation model based on the N-1 intermediate loss functions and the loss function of the joint uncertainty to generate the image segmentation model comprises: superposing the N-1 intermediate loss functions and the loss function of the combination uncertainty to obtain a loss function after superposition operation; training the initial segmentation model based on the post-superposition loss function to generate the image segmentation model.
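The superposition described above can be outlined in a hypothetical NumPy sketch (not the patent's actual implementation; all names are illustrative): the N-1 intermediate cross-entropy losses are summed with the uncertainty-weighted loss of the Nth iteration.

```python
import numpy as np

def pixel_bce(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy."""
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def superposed_loss(intermediate_preds, target, uncertainty):
    """Superpose the N-1 intermediate losses with the uncertainty-weighted
    loss of the final (Nth) loop iteration."""
    *early, final = intermediate_preds
    intermediate = sum(float(pixel_bce(p, target).mean()) for p in early)
    final_term = float(((1.0 - uncertainty) * pixel_bce(final, target)).mean())
    return intermediate + final_term
```

Every loop iteration thus contributes a term to the optimized objective, rather than only the last stage as in a unidirectional model.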
In one embodiment, the determining segmentation uncertainty data corresponding to the image sample to be segmented based on the plurality of intermediate segmentation prediction data includes: for each intermediate segmentation prediction data in the plurality of intermediate segmentation prediction data, determining a prediction value of each pixel in the intermediate segmentation prediction data, wherein the prediction value is used for representing the probability that the pixel belongs to the region of interest; performing confidence calculation on the predicted values of a plurality of pixels located at the same pixel coordinate in the plurality of intermediate segmentation prediction data based on a confidence calculation formula to obtain a confidence value corresponding to each pixel coordinate; and forming a matrix by using the confidence value corresponding to each pixel coordinate to acquire the segmentation uncertainty data.
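The patent does not give the confidence calculation formula; as an illustrative stand-in, the per-pixel disagreement across the N intermediate probability maps can be measured with the variance at each pixel coordinate:

```python
import numpy as np

def segmentation_uncertainty(intermediate_preds):
    """Stack the per-iteration probability maps and measure, per pixel
    coordinate, how much the N predictions disagree. Variance is used
    here as a stand-in for the patent's unspecified confidence formula."""
    stack = np.stack(intermediate_preds, axis=0)  # shape (N, H, W)
    return stack.var(axis=0)                      # shape (H, W) uncertainty matrix
```

Identical predictions across iterations yield zero uncertainty; conflicting predictions yield a positive value at the disputed coordinates.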
In one embodiment, the segmenting of the image sample to be segmented by using the initial segmentation model to determine a plurality of intermediate segmentation prediction data includes: determining global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented; determining the feature data corresponding to the (M+1)th loop iteration based on the global shared feature data and the feature data corresponding to the Mth loop iteration of the feedback propagation module, where M is a positive integer smaller than N and N is a positive integer greater than or equal to 2; and determining the plurality of intermediate segmentation prediction data from the feature data corresponding to the 1st to Nth loop iterations.
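A toy sketch of this loop (hypothetical weights and activations; the real feedback propagation module is a trained neural network): each iteration combines the globally shared features with the features produced by the previous iteration, and each iteration's output yields one intermediate prediction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedback_propagation(shared, n_iters, w_s=1.0, w_f=0.5):
    """Toy feedback loop: iteration M+1 sees both the globally shared
    feature data and the feature data produced by iteration M."""
    feat = np.zeros_like(shared)
    preds = []
    for _ in range(n_iters):
        feat = np.tanh(w_s * shared + w_f * feat)  # feed previous features back
        preds.append(sigmoid(feat))                # one intermediate prediction
    return preds
```

Because the previous iteration's features re-enter the computation, later iterations can refine earlier predictions rather than discarding them, which is the mechanism the feedback propagation module relies on.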
In one embodiment, the initial segmentation model includes a low-level feature extraction module, wherein the determining global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented includes: and inputting the image sample to be segmented into the low-level feature extraction module to obtain the global shared feature data.
According to another aspect of the present application, an embodiment of the present application provides an image segmentation method, including: acquiring an image to be segmented; inputting the image to be segmented into an image segmentation model to determine segmentation prediction data corresponding to the image to be segmented, wherein the image segmentation model is obtained by training based on the model training method of any embodiment.
In one embodiment, the image segmentation model is used for generating segmentation prediction data and segmentation uncertainty data corresponding to the image to be segmented based on the image to be segmented, and the method further includes: and generating a segmentation uncertainty thermodynamic diagram corresponding to the image to be segmented based on the segmentation uncertainty data.
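One plausible way to turn the segmentation uncertainty matrix into a thermodynamic (heat) map, assuming a simple min-max normalization before color-mapping (the rendering step itself is omitted, and the function name is illustrative):

```python
import numpy as np

def uncertainty_heatmap(uncertainty, eps=1e-12):
    """Min-max normalize the uncertainty matrix to [0, 1] so a color map
    can be overlaid on the image to be segmented."""
    lo, hi = float(uncertainty.min()), float(uncertainty.max())
    return (uncertainty - lo) / max(hi - lo, eps)
```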
According to another aspect of the present application, an embodiment of the present application provides a model training apparatus for training an initial segmentation model including a feedback propagation module to generate an image segmentation model including the feedback propagation module, the apparatus including: the image segmentation method comprises a first determination module, a second determination module and a third determination module, wherein the first determination module is configured to determine an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented; a second determination module configured to segment the image sample to be segmented by using the initial segmentation model to determine a plurality of intermediate segmentation prediction data, wherein the plurality of intermediate segmentation prediction data are segmentation prediction data determined based on an output result of the loop iteration calculation of the feedback propagation module; a third determining module configured to determine segmentation uncertainty data corresponding to the image sample to be segmented based on the plurality of intermediate segmentation prediction data; a generation module configured to train the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model.
According to another aspect of the present application, an embodiment of the present application provides an image segmentation apparatus, including: the acquisition module is configured to acquire an image to be segmented; an image segmentation module, configured to input the image to be segmented into an image segmentation model to determine segmentation prediction data corresponding to the image to be segmented, where the image segmentation model is obtained by training based on the model training method in any of the embodiments.
According to yet another aspect of the present application, an embodiment of the present application provides an electronic device, including: a processor; a memory; and computer program instructions stored in the memory, which when executed by the processor, cause the processor to perform a method as in any one of the embodiments described above.
According to yet another aspect of the present application, an embodiment of the present application provides a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the method according to any one of the above embodiments.
The model training method provided by the embodiment of the application trains an initial segmentation model including a feedback propagation module to generate an image segmentation model including the feedback propagation module. Specifically, a plurality of intermediate segmentation prediction data are determined based on the output results of the loop-iteration calculations of the feedback propagation module; segmentation uncertainty data corresponding to the image sample to be segmented are determined based on the plurality of intermediate segmentation prediction data; and the initial segmentation model is trained based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model.
The feedback propagation module enables information output by the previous loop iterative computation to be fed back to the current loop iterative computation, so that each loop iterative computation can more fully utilize the image information, and the segmentation accuracy of the image segmentation model is improved. The segmentation uncertainty data of the model is determined by utilizing a plurality of intermediate prediction segmentation data in the model training process, and the uncertainty data is combined with the optimization loss function, so that the model optimization is optimized in the direction of reducing the segmentation uncertainty, and the segmentation accuracy of the image segmentation model is further improved.
Drawings
Fig. 1 is a schematic flow chart of a model training method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a model training method according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a model training method according to an embodiment of the present application.
Fig. 4 is a schematic flowchart illustrating a model training method according to an embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a model training method according to an embodiment of the present application.
Fig. 6 is a schematic flowchart illustrating a model training method according to an embodiment of the present application.
Fig. 7a is a schematic flowchart illustrating a model training method according to an embodiment of the present application.
Fig. 7b is a schematic flowchart of the process of determining a plurality of intermediate segmentation prediction data corresponding to the image sample to be segmented based on the image sample to be segmented in the embodiment shown in Fig. 7a.
Fig. 8 is a flowchart illustrating an image segmentation method according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In medical image analysis, segmentation of a Region of Interest (ROI), such as a lesion region, has an important guiding role in subsequent analysis. At present, manual labeling is mostly adopted clinically. However, manual segmentation is very labor-intensive because medical images come in many types and are mostly high-dimensional data, and the quality of manual annotation is uneven owing to factors such as physicians' subjective experience. Therefore, training an image segmentation model based on deep learning to automatically segment the ROI in medical images has become a research hotspot.
When an image segmentation model is trained, the existing model training method often only performs one-way feature extraction, and image information cannot be fully utilized. In addition, because fuzzy regions which are difficult to define exist in the medical image, labeling noise exists in the ROI segmentation process. When the model is used for segmentation, over-confidence misjudgment can occur, so that the segmentation accuracy of the image segmentation model is low.
One-way feature extraction in the segmentation process means one-way propagation during feature extraction: the original image is input into the first stage, the output of each stage serves as the input of the next stage until the last stage, and the output of the last stage is taken as the segmentation prediction data.
Exemplary model training method
Fig. 1 is a schematic flow chart of a model training method according to an embodiment of the present application. The model training method is used for training an initial segmentation model comprising a feedback propagation module to generate an image segmentation model comprising the feedback propagation module. The model training methods provided in all embodiments of the present application are all used to train an initial segmentation model including a feedback propagation module to generate an image segmentation model including the feedback propagation module, and specific application scenarios of the model training methods provided in the present application are not described in detail in subsequent embodiments.
The feedback propagation module is a neural network module formed based on a feedback propagation mechanism, and the neural network module is a loop structure and comprises a plurality of loop iterative computations. The output result of the last loop iteration calculation is combined into the current loop iteration calculation, so that the information extracted by the last loop iteration calculation is fed back to the current loop iteration calculation, and the image information is more fully utilized.
As shown in fig. 1, the model training method includes the following steps.
Step 101: determine an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented.
Specifically, the image sample to be segmented may be a Computed Tomography (CT) medical image in which the region of interest has been segmented and marked; accordingly, the segmentation annotation data sample corresponding to the image sample to be segmented is the marking on that CT medical image. For example, the image sample to be segmented may be a CT lung image in which a region of interest such as a lung tumor has been marked.
Step 102: the image sample to be segmented is segmented using an initial segmentation model to determine a plurality of intermediate segmentation prediction data.
Illustratively, the plurality of intermediate segmentation prediction data are segmentation prediction data determined based on the output results of the loop-iteration calculations of the feedback propagation module.
Specifically, since the initial segmentation model includes the feedback propagation module, the feedback propagation module includes a plurality of loop iteration calculations, each loop iteration calculation has an output result, and one intermediate segmentation prediction data can be determined based on the output result, so that a plurality of intermediate segmentation prediction data can be determined by segmenting the image sample to be segmented by using the initial segmentation model. In addition, because the information output by the previous loop iterative computation is fed back to the current loop iterative computation, each loop iterative computation can more fully utilize the image information, so that the trained image segmentation model can more fully utilize the image information during segmentation to perform accurate segmentation.
Step 103: determine segmentation uncertainty data corresponding to the image sample to be segmented based on the intermediate segmentation prediction data.
It should be noted that the segmentation uncertainty data is used to characterize the certainty of the segmentation prediction data obtained by the segmentation model.
Specifically, since the initial segmentation model includes a feedback propagation module including a plurality of loop iteration calculations, and one intermediate segmentation prediction data is determined based on an output result of each loop iteration calculation, a plurality of intermediate segmentation prediction data can be determined. Based on the intermediate segmentation prediction data, segmentation uncertainty data corresponding to the image sample to be segmented can be determined, so that the segmentation uncertainty data can be determined in the model training process.
In the prior art, a model is generally used to segment a plurality of different image samples to be segmented to determine uncertainty data of the model, or a plurality of models are used to segment the same image sample to be segmented to determine uncertainty data of the model, and the uncertainty data in the prior art are obtained after the model is trained, and are used to represent the segmentation determination degree of the trained model. In the embodiment of the application, the segmentation uncertainty data can be determined in the model training process, and meanwhile, the segmentation uncertainty data is used for optimizing the segmentation model.
Step 104: an initial segmentation model is trained based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate an image segmentation model.
Specifically, in the prior art, when an initial segmentation model is trained, a unidirectional-flow method is adopted: the image to be detected is input into the first stage, the output of each stage serves as the input of the next stage until the last stage, and the output of the last stage is taken as the segmentation prediction data. Because only the output result of the last stage is available, the loss function can be computed only from the last stage's segmentation prediction data and the segmentation annotation data samples, and the prediction difference of each loop iteration cannot be applied to training the initial segmentation model.
In the embodiment of the present application, because the feedback propagation module is provided and includes a plurality of loop-iteration calculations, a plurality of intermediate segmentation prediction data can be obtained, and the segmentation uncertainty data can be determined from them. Taking the segmentation annotation data samples as reference values and combining the segmentation uncertainty data, a loss function between the intermediate segmentation prediction data and the reference values is calculated to train the initial segmentation model. In this way, the prediction difference of every loop iteration contributes to training the initial segmentation model. Moreover, because the segmentation uncertainty data characterize the degree of certainty of the segmentation result, training with them steers the optimization direction of the initial segmentation model toward more certain segmentations, improving ROI segmentation accuracy.
In the embodiment of the application, the feedback propagation module feeds back the information output by the previous loop iterative computation to the current loop iterative computation, so that each loop iterative computation can more fully utilize the image information, and the segmentation accuracy of the image segmentation model is improved. The segmentation uncertainty data of the model is determined by utilizing a plurality of intermediate prediction segmentation data in the model training process, and the uncertainty data is combined with the optimization loss function, so that the model optimization is optimized in the direction of reducing the segmentation uncertainty, and the segmentation accuracy of the image segmentation model is further improved.
Fig. 2 is a schematic flow chart of a model training method according to an embodiment of the present application. In this embodiment, the number of loop-iteration calculations is N, where N is a positive integer greater than or equal to 2, and training the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model includes the following steps, as shown in Fig. 2.
Step 201: based on the intermediate segmentation prediction data corresponding to the 1st to (N-1)th loop iterations and the segmentation annotation data samples, determine N-1 intermediate loss functions.
Specifically, with the segmentation annotation data samples as reference values, the loss functions between the intermediate segmentation prediction data corresponding to the 1st to (N-1)th loop iterations and the segmentation annotation data samples are calculated respectively, and N-1 intermediate loss functions are determined.
Step 202: determining a loss function combined with uncertainty based on the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation and the segmentation annotation data samples, in combination with the segmentation uncertainty data.
Specifically, the segmentation uncertainty data can be determined from the intermediate segmentation prediction data corresponding to the 1st to (N-1)-th loop iteration calculations. The segmentation uncertainty data is then incorporated into the loss calculation between the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation and the segmentation annotation data samples, yielding a loss function combined with uncertainty.
Step 203: an initial segmentation model is trained based on the N-1 intermediate loss functions and the loss function combined with uncertainty to generate an image segmentation model.
Specifically, training the initial segmentation model based on the N-1 intermediate loss functions and the loss function combined with uncertainty has the following advantages. Firstly, the prediction difference in each loop iteration calculation can be used to adjust the loss function, so that every loop iteration calculation contributes to training the initial segmentation model, and the optimization is directed toward accurately segmenting the ROI. Secondly, optimizing the model with the loss function combined with uncertainty makes the segmentations the model is certain about more certain, grants a degree of tolerance to the segmentations the model is uncertain about, reduces the probability of over-confident optimization, and thus improves the segmentation accuracy of the segmentation model.
It should be noted that the value of N is greater than or equal to 2, which may be determined according to a specific application scenario of the feedback propagation module, and in the embodiment of the present application, the upper limit of the value of N is not specifically limited.
In the embodiment of the application, N-1 intermediate loss functions are determined based on the intermediate segmentation prediction data corresponding to the 1st to (N-1)-th loop iteration calculations and the segmentation annotation data samples; a loss function combined with uncertainty is determined based on the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation and the segmentation annotation data samples, in combination with the segmentation uncertainty data; and the initial segmentation model is trained based on the N-1 intermediate loss functions and the loss function combined with uncertainty. In this way, each loop iteration calculation contributes to training the initial segmentation model, the optimization is directed toward accurately segmenting the ROI, and optimizing the model with the loss function combined with uncertainty reduces the probability of over-confident optimization, thereby improving the segmentation accuracy of the segmentation model.
Fig. 3 is a schematic flow chart of a model training method according to an embodiment of the present application. As shown in fig. 3, the step of determining a loss function combined with uncertainty, based on the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation and the segmentation annotation data samples in combination with the segmentation uncertainty data, includes the following steps.
Step 301: and performing loss calculation on the intermediate segmentation prediction data and the segmentation marking data samples corresponding to the Nth loop iteration calculation to determine a first initial loss value.
Specifically, the segmentation label data sample is used as a reference value, and loss calculation is performed on the intermediate segmentation prediction data and the segmentation label data sample corresponding to the nth loop iteration calculation to determine a first initial loss value.
Step 302: and taking the segmentation uncertain data as a weight value, and carrying out weighting operation on the first initial loss value to obtain a loss value after the weighting operation.
Specifically, the segmentation uncertainty data is used as a weight value, a weighting operation is performed on the first initial loss value, and the weighted loss value is retained for subsequent calculation.
Step 303: and acquiring a loss function combined with uncertainty based on the cross entropy function and the loss value after the weighting operation.
Specifically, the segmentation uncertainty data is introduced into the calculation of the loss function, and the initial segmentation model is trained with the loss function combined with uncertainty, forcing the segmentation prediction model to produce more high-confidence predictions.
In the embodiment of the application, the segmentation uncertainty data is used as a weight value, a weighting operation is performed on the first initial loss value, and the weighted loss value is used in the loss function. By introducing the segmentation uncertainty data into the calculation of the loss function and training the initial segmentation model with the loss function combined with uncertainty, the segmentation prediction model is forced to produce more high-confidence predictions.
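The weighting operation of steps 301 to 303 may be sketched as follows. This is an illustrative example only: the function names are hypothetical, and per-pixel binary cross-entropy is assumed as the underlying loss on the N-th iteration's prediction.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    # per-pixel cross-entropy; eps guards against log(0)
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def uncertainty_weighted_loss(final_pred, target, confidence):
    # use the segmentation uncertainty data (the confidence map Conf(x, y))
    # as per-pixel weights on the first initial loss value: uncertain pixels
    # are given tolerance, while confident errors are penalised in full
    return float(np.mean(confidence * binary_cross_entropy(final_pred, target)))
```

With the confidence fixed at 1 everywhere this reduces to plain cross-entropy; lowering the confidence of a pixel proportionally lowers that pixel's contribution to the loss.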
Fig. 4 is a schematic flowchart illustrating a model training method according to an embodiment of the present application. As shown in fig. 4, the step of determining N-1 intermediate loss functions based on the 1 st to N-1 st loop iterations to calculate corresponding intermediate segmentation prediction data and segmentation annotation data samples comprises the following steps.
Step 401: and aiming at each intermediate segmentation prediction data in the intermediate segmentation prediction data corresponding to the 1 st to the N-1 st loop iteration calculation, performing loss calculation on the intermediate segmentation prediction data and the segmentation marking data samples to determine a second initial loss value corresponding to the intermediate segmentation prediction data.
Specifically, in order to enable the prediction difference in each loop iteration calculation to be used for adjusting the loss function, loss calculation is performed on each intermediate segmentation prediction data and the segmentation annotation data samples, among the intermediate segmentation prediction data corresponding to the 1st to (N-1)-th loop iteration calculations, so as to determine N-1 second initial loss values corresponding to the intermediate segmentation prediction data of the 1st to (N-1)-th loop iteration calculations.
Step 402: and calculating N-1 second initial loss values corresponding to the intermediate segmentation prediction data based on the cross entropy function and the 1 st to N-1 st loop iteration, and determining N-1 intermediate loss functions.
Specifically, a cross-entropy function is employed to compute the intermediate loss function.
In the embodiment of the application, each loop iterative computation can act on a training initial segmentation model, and N-1 intermediate loss functions are obtained by adopting the loss function computation method.
Fig. 5 is a schematic flowchart illustrating a model training method according to an embodiment of the present application. As shown in fig. 5, the step of training the initial segmentation model based on the N-1 intermediate loss functions and the loss function combined with uncertainty to generate the image segmentation model comprises the following steps.
Step 501: and superposing the N-1 intermediate loss functions and the loss function combined with uncertainty to obtain the loss function after superposition operation.
Specifically, in order to enable prediction differences in each loop iteration calculation to be used for adjusting the loss function, N-1 intermediate loss functions and the loss function combining uncertainty are superposed to obtain the loss function after superposition operation.
Step 502: and training an initial segmentation model based on the loss function after the superposition operation to generate an image segmentation model.
Specifically, the initial segmentation model is trained by using feedback superposition of prediction differences in all loop iteration calculations, so that the generated image segmentation model has higher segmentation accuracy.
In the embodiment of the application, N-1 intermediate loss functions and the loss function combined with uncertainty are superposed, so that the prediction difference in each loop iteration calculation can be used for adjusting the loss function, and therefore an initial segmentation model is trained, and the generated image segmentation model has higher segmentation accuracy.
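The superposition of steps 501 and 502 may be sketched as follows; this is an illustrative example under stated assumptions (binary cross-entropy as the per-iteration loss, hypothetical function names), not the actual implementation.

```python
import numpy as np

def total_training_loss(intermediate_preds, final_pred, target, confidence, eps=1e-7):
    # superposition of N-1 plain cross-entropy losses (iterations 1..N-1)
    # and the confidence-weighted loss on the N-th iteration's prediction
    def bce(p):
        p = np.clip(p, eps, 1.0 - eps)
        return -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

    loss = sum(float(np.mean(bce(p))) for p in intermediate_preds)  # N-1 intermediate terms
    loss += float(np.mean(confidence * bce(final_pred)))            # term combined with uncertainty
    return loss
```

Each term is differentiable with respect to the corresponding prediction, so every loop iteration contributes a gradient when training the initial segmentation model.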
Fig. 6 is a schematic flowchart illustrating a model training method according to an embodiment of the present application. As shown in fig. 6, the step of determining segmentation uncertainty data corresponding to the image sample to be segmented based on the intermediate segmentation prediction data includes the following steps.
Step 601: for each of the plurality of intermediate split prediction data, a prediction value for each pixel in the intermediate split prediction data is determined.
Illustratively, the prediction value is used to characterize the probability that the pixel belongs to the region of interest.
Specifically, for example, the N intermediate prediction segmentation data are respectively denoted as Seg_1, Seg_2, …, Seg_N, where N is the number of loop iterations. For each of the N intermediate segmentation prediction data, the prediction value of each pixel in that intermediate segmentation prediction data is determined and denoted as Seg_n(x, y), where n is the loop iteration index, x and y are the pixel coordinates of the pixel in the corresponding coordinate system, and Seg_n(x, y) characterizes the probability that the pixel belongs to the region of interest, with a value range of 0 to 1.
Step 602: and performing confidence calculation on the predicted values of a plurality of pixels positioned at the same pixel coordinate in the plurality of intermediate segmentation prediction data based on a confidence calculation formula to obtain a confidence value corresponding to each pixel coordinate.
Specifically, based on a confidence calculation formula, performing confidence calculation on predicted values of a plurality of pixels located at the same pixel coordinate in the plurality of intermediate segmented prediction data to obtain a confidence value corresponding to each pixel coordinate, and recording the confidence value as Conf (x, y), where x and y are pixel coordinates in a corresponding coordinate system, respectively, a value range of Conf (x, y) is 0-1, and a value of Conf (x, y) is closer to 1, indicating that the segmented prediction data is more definite, and a value of Conf (x, y) is closer to 0, indicating that the segmented prediction data is more uncertain.
Optionally, the calculation formula is:
Conf(x, y) = 1 - Var[Seg_1(x, y), Seg_2(x, y), …, Seg_N(x, y)]/E.
where Var denotes the variance calculation and E denotes a hyper-parameter, which is used to control the value of Conf (x, y) between 0 and 1.
Step 603: and forming a matrix by using the confidence value corresponding to each pixel coordinate to acquire segmentation uncertainty data.
Specifically, after the confidence calculation, the confidence values corresponding to the pixel coordinates form a matrix, and the segmentation uncertainty data is acquired. That is, the segmentation uncertainty data is a matrix in which a numerical value at each pixel coordinate is obtained by performing confidence calculation based on predicted values of a plurality of pixels located at the same pixel coordinate among the N intermediate segmentation prediction data.
In the embodiment of the application, pixel-by-pixel confidence calculation is performed on the predicted values of a plurality of pixels located at the same pixel coordinate in a plurality of intermediate segmentation prediction data, a matrix is obtained, and segmentation uncertainty data is obtained.
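The pixel-wise confidence calculation of steps 601 to 603 may be sketched as follows. The default value of the hyper-parameter E is an assumption for illustration, chosen as 0.25 because that is the maximum possible variance of values bounded in [0, 1]; the actual value is tuned in practice.

```python
import numpy as np

def segmentation_confidence(seg_stack, E=0.25):
    # seg_stack: array of shape (N, H, W) holding Seg_1..Seg_N, each pixel
    # value in [0, 1]; returns the matrix Conf(x, y) = 1 - Var[...] / E,
    # clipped so that the confidence stays within [0, 1]
    var = np.var(seg_stack, axis=0)          # pixel-wise variance over the N iterations
    return np.clip(1.0 - var / E, 0.0, 1.0)  # 1 -> certain, 0 -> uncertain
```

Identical predictions across iterations give zero variance and a confidence of 1; maximal disagreement (half the iterations predicting 0, half predicting 1) gives a confidence of 0.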
Fig. 7a is a schematic flowchart illustrating a model training method according to an embodiment of the present application. As shown in fig. 7a, the step of segmenting the image sample to be segmented by using the initial segmentation model to determine a plurality of intermediate segmentation prediction data comprises the following steps.
Step 701: and determining global shared characteristic data corresponding to the image sample to be segmented based on the image sample to be segmented.
Optionally, the initial segmentation model includes a low-level feature extraction module, and determines global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented, including: and inputting the image sample to be segmented into a low-level feature extraction module to obtain global shared feature data.
The global shared feature data is a general feature extracted from the image sample to be segmented, such as an edge or an intersection.
The low-level feature extraction module can adopt neural network models such as RNN, CNN and Transformer.
Step 702: and calculating corresponding characteristic data based on the global shared characteristic data and the Mth cycle iteration of the feedback propagation module, and determining the M +1 th cycle iteration to calculate the corresponding characteristic data.
Illustratively, M is a positive integer less than N, N being a positive integer greater than or equal to 2.
Specifically, by analogy with the propagation of human visual signals, the high-level information extracted from low-level information may be fed back to the low level. The global shared feature data is combined with the feature data corresponding to the M-th loop iteration calculation for further feature extraction, so that the high-level information of the M-th loop iteration calculation is fed back into the (M+1)-th loop iteration calculation to obtain features with stronger expressive capability, thereby making fuller use of the information in the image to be segmented.
Optionally, the feedback propagation module is a feature extractor comprising a plurality of loop iteration calculations.
Step 703: determining a plurality of intermediate segmentation prediction data based on the feature data corresponding to the 1st to N-th loop iteration calculations.
Optionally, the step of obtaining the plurality of intermediate segmentation prediction data based on the feature data corresponding to the 1st to N-th loop iteration calculations includes inputting the feature data corresponding to the 1st to N-th loop iteration calculations into the segmentation prediction module to obtain the plurality of intermediate segmentation data.
Fig. 7b is a schematic flowchart of the process of determining a plurality of intermediate segmentation prediction data corresponding to the image sample to be segmented based on the image sample to be segmented in the embodiment shown in fig. 7 a.
As shown in fig. 7b, determining the plurality of intermediate segmentation prediction data corresponding to the image sample to be segmented based on the image sample to be segmented includes the following steps. The image sample to be segmented is input into the low-level feature extraction module for feature extraction, and the global shared feature data is acquired. In loop iteration 1 of the feedback propagation module, the global shared feature data is input for further feature extraction to obtain the feature data corresponding to loop iteration 1. In loop iterations 2 to N, the global shared feature data and the feature data corresponding to the M-th loop iteration calculation are input, feature extraction is performed on both, and the feature data corresponding to the (M+1)-th loop iteration calculation is output. By combining the high-level feature data obtained in the M-th loop iteration calculation with the globally shared low-level features, features with stronger expressive capability are obtained in the (M+1)-th loop iteration calculation, and hence in every loop iteration calculation. Finally, the feature data corresponding to the 1st to N-th loop iteration calculations are input into the segmentation prediction module to obtain the N intermediate prediction segmentation data.
In the embodiment of the application, the feature data corresponding to the M-th cycle iterative computation is combined with the globally shared low-level features, the features with stronger expression capability are obtained in the M + 1-th cycle iterative computation, a plurality of intermediate segmentation prediction data are obtained based on the corresponding feature data in the 1 st to Nth cycle iterative computations, and the information of the image to be detected is fully utilized, so that the segmentation prediction data are more accurate.
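The feedback loop described above may be sketched as follows. The three callables stand in for the low-level feature extraction module, the feedback propagation module's per-iteration feature extractor, and the segmentation prediction module; they are hypothetical placeholders rather than the actual learned networks.

```python
import numpy as np

def feedback_forward(image, extract_shared, extract_step, seg_head, n_iters):
    # iteration 1 sees only the globally shared low-level features;
    # iterations 2..N also receive the previous iteration's high-level
    # features fed back as an additional input
    shared = extract_shared(image)                # global shared feature data
    feats, preds = None, []
    for _ in range(n_iters):
        if feats is None:
            feats = extract_step(shared, np.zeros_like(shared))  # iteration 1: no feedback yet
        else:
            feats = extract_step(shared, feats)   # feed back the previous iteration's features
        preds.append(seg_head(feats))             # intermediate segmentation prediction
    return preds                                  # N intermediate segmentation maps
```

In a real model the three callables would be trained networks sharing weights across iterations; here simple functions suffice to show the data flow.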
Exemplary image segmentation method
Fig. 8 is a flowchart illustrating an image segmentation method according to an embodiment of the present application. As shown in fig. 8, the image segmentation method includes the following steps.
Step 801: acquiring an image to be segmented.
Specifically, the image to be segmented is an image in which a lesion region needs to be segmented, and may be a lung CT image, a brain CT image, or the like.
Step 802: and inputting the image to be segmented into the image segmentation model to determine segmentation prediction data corresponding to the image to be segmented.
Illustratively, the image segmentation model is obtained by training based on the model training method described in any of the above embodiments.
In the model training method according to any of the above embodiments, a feedback propagation mechanism feeds the information output by the previous loop iteration calculation back into the current loop iteration calculation, so that each loop iteration calculation can make fuller use of the image information; the segmentation uncertainty data of the model is determined from the plurality of intermediate prediction segmentation data during training, and the loss function is optimized in combination with the uncertainty data, so that the model is optimized in the direction of reducing segmentation uncertainty. In the embodiment of the application, the image to be segmented is input into the image segmentation model trained with the model training method described in any of the above embodiments, so that accurate segmentation prediction data can be obtained.
In one embodiment, the image segmentation method further includes: inputting the image to be segmented into the image segmentation model to determine segmentation uncertainty data corresponding to the image to be segmented, and determining a segmentation uncertainty thermodynamic diagram corresponding to the image to be segmented based on the segmentation uncertainty data. The image to be segmented is input into the image segmentation model trained by any of the above model training methods, so that accurate ROI segmentation data can be obtained. Moreover, the plurality of intermediate prediction segmentation data determined from the output results of the loop iteration calculations can be output directly, the uncertainty data can be obtained from the intermediate prediction segmentation data, and the segmentation uncertainty thermodynamic diagram corresponding to the image to be segmented can be obtained from the uncertainty data, which more intuitively assists a doctor in judging the segmentation result.
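A minimal sketch of rendering the segmentation uncertainty data as a heatmap is shown below; mapping uncertainty onto a single red channel is one simple illustrative choice, not the specific colour mapping used in the embodiment.

```python
import numpy as np

def uncertainty_heatmap(confidence):
    # map uncertainty (1 - Conf) onto a red-channel overlay so a reader
    # can visually inspect where the model is least certain
    uncertainty = 1.0 - np.clip(confidence, 0.0, 1.0)
    heat = np.zeros(uncertainty.shape + (3,), dtype=np.uint8)
    heat[..., 0] = (uncertainty * 255).astype(np.uint8)  # red = uncertain
    return heat
```

The resulting array can be alpha-blended over the image to be segmented with any standard imaging library.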
Exemplary model training device
Fig. 9 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application. The model training device is used for training an initial segmentation model comprising a feedback propagation module to generate an image segmentation model comprising the feedback propagation module. As shown in fig. 9, the model training apparatus 100 includes: a first determining module 101 configured to determine an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented; a second determining module 102 configured to segment the image sample to be segmented by using the initial segmentation model to determine a plurality of intermediate segmentation prediction data, wherein the plurality of intermediate segmentation prediction data are segmentation prediction data determined based on an output result of the loop iteration calculation of the feedback propagation module; a third determining module 103 configured to determine segmentation uncertainty data corresponding to the image sample to be segmented based on the plurality of intermediate segmentation prediction data; a generation module 104 configured to train an initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate an image segmentation model.
In the embodiment of the application, the second determining module 102 feeds back information output by previous loop iterative computation to the current loop iterative computation through the feedback propagation module, so that each loop iterative computation can more fully utilize image information, thereby obtaining a plurality of intermediate prediction segmentation data, the third determining module 103 determines segmentation uncertainty data of the model in a model training process by utilizing the plurality of intermediate prediction segmentation data, and the generating module 104 optimizes a loss function by combining the uncertainty data, so that model optimization is optimized in a direction of reducing segmentation uncertainty, thereby further improving segmentation accuracy of the generated image segmentation model.
Fig. 10 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application. As shown in fig. 10, the generation module 104 includes: an intermediate loss function determining unit 1041 configured to calculate corresponding first intermediate segmentation prediction data and segmentation annotation data samples based on the 1st to (N-1)-th loop iterations, and determine N-1 intermediate loss functions; an uncertainty loss function determining unit 1042 configured to calculate corresponding second intermediate segmentation prediction data and segmentation annotation data samples based on the N-th loop iteration in combination with the segmentation uncertainty data, and determine a loss function combined with uncertainty; and a generating unit 1043 configured to train the initial segmentation model based on the N-1 intermediate loss functions and the loss function combined with uncertainty to generate the image segmentation model.
In one embodiment, the uncertainty loss function determining unit 1042 further comprises: a first initial loss value determining subunit 10421, configured to perform loss calculation on the intermediate segmentation prediction data and the segmentation annotation data sample corresponding to the nth loop iteration calculation to determine a first initial loss value; a weighting operation subunit 10422 configured to perform a weighting operation on the first initial loss value by using the segmentation uncertain data as a weight value, and obtain a loss value after the weighting operation; an uncertainty loss function subunit 10423 configured to obtain a loss function incorporating uncertainty based on the cross entropy function and the weighted post-computation loss values.
In one embodiment, the intermediate loss function determination unit 1041 further includes: a second initial loss value determining subunit 10411, configured to iteratively calculate, for each of the intermediate segmented prediction data corresponding to the loop from the 1 st time to the (N-1) th time, loss calculation is performed on the intermediate segmented prediction data and the segmentation label data samples to determine a second initial loss value corresponding to the intermediate segmented prediction data; the intermediate loss function determining subunit 10412 calculates, based on the cross entropy function and the 1 st to N-1 st loop iterations, N-1 second initial loss values corresponding to the intermediate partition prediction data, and determines N-1 intermediate loss functions.
In one embodiment, the generating unit 1043 further comprises: a superposition subunit 10431 configured to superpose N-1 intermediate loss functions and a loss function combining uncertainty, and obtain a loss function after superposition operation; a generating subunit 10432 configured to train an initial segmentation model based on the loss function after the superposition operation to generate an image segmentation model.
In one embodiment, the third determination module 103 further comprises: a pixel prediction value determination unit 1031 configured to determine, for each of the plurality of intermediate divided prediction data, a prediction value of each pixel in the intermediate divided prediction data, the prediction value being used to characterize a probability that the pixel belongs to the region of interest; a confidence value obtaining unit 1032 configured to perform confidence calculation on the prediction values of a plurality of pixels located at the same pixel coordinate in the plurality of intermediate segmentation prediction data based on a confidence calculation formula to obtain a confidence value corresponding to each pixel coordinate; a segmentation uncertainty data determination unit 1033 configured to form a matrix with the confidence values corresponding to each pixel coordinate, and acquire segmentation uncertainty data.
In one embodiment, the second determination module 102 further comprises: a first feature determination unit 1021 configured to determine global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented; a second feature determining unit 1022 configured to determine the feature data corresponding to the (M+1)-th loop iteration calculation based on the global shared feature data and the feature data corresponding to the M-th loop iteration calculation of the feedback propagation module, wherein M is a positive integer less than N, and N is a positive integer greater than or equal to 2; and an intermediate segmentation prediction data determination unit 1023 configured to determine a plurality of intermediate segmentation prediction data based on the feature data corresponding to the 1st to N-th loop iteration calculations.
In one embodiment, the initial segmentation model comprises a low-level feature extraction module, and the first feature determination unit 1021 is further configured to input the image sample to be segmented into the low-level feature extraction module to obtain the global shared feature data.
Exemplary image segmentation apparatus
Fig. 11 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application. As shown in fig. 11, the image segmentation apparatus 200 includes: an obtaining module 201 configured to obtain an image to be segmented; an image segmentation module 202 configured to input an image to be segmented into an image segmentation model to determine segmentation prediction data corresponding to the image to be segmented, wherein the image segmentation model is trained based on any one of the above-mentioned model training methods.
In a further embodiment, the image segmentation apparatus 200 further comprises a segmentation uncertainty data determination module configured to input the image to be segmented into the image segmentation model to determine segmentation uncertainty data corresponding to the image to be segmented; and a first output module configured to determine a segmentation uncertainty thermodynamic diagram corresponding to the image to be segmented based on the segmentation uncertainty data.
Exemplary electronic device
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 12, electronic device 300 includes one or more processors 310 and memory 320.
The processor 310 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 300 to perform desired functions.
Memory 320 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 310 to implement the model training method and the image segmentation method of the various embodiments of the present application described above and/or other desired functions.
In one example, the electronic device 300 may further include: an input device 330 and an output device 340, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 330 may be a CT instrument. The output device 340 may include, for example, a display, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 300 relevant to the present application are shown in fig. 12, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 300 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the model training method according to various embodiments of the present application described in the "exemplary model training method" section above or the steps in the image segmentation method according to various embodiments of the present application described in the "exemplary image segmentation method" section above.
The computer program product may write program code for carrying out operations for embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the model training method according to various embodiments of the present application described in the "exemplary model training method" section above or the steps in the image segmentation method according to various embodiments of the present application described in the "exemplary image segmentation method" section above.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. However, it should be noted that the advantages and effects mentioned in the present application are merely examples, not limitations, and should not be considered essential to the various embodiments of the present application.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A model training method for training an initial segmentation model including a feedback propagation module to generate an image segmentation model including the feedback propagation module, wherein the feedback propagation module is a feature extractor including a plurality of loop iteration calculations, the method comprising:
determining an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented;
segmenting the image sample to be segmented by utilizing the initial segmentation model to determine a plurality of intermediate segmentation prediction data, wherein the intermediate segmentation prediction data are segmentation prediction data determined based on an output result of the loop iteration calculation of the feedback propagation module;
determining segmentation uncertainty data corresponding to the image sample to be segmented based on the plurality of intermediate segmentation prediction data, wherein the segmentation uncertainty data is used to characterize the degree of certainty of the segmentation prediction data obtained by a segmentation model;
training the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model;
the number of times of the loop iteration calculation is N, and N is a positive integer greater than or equal to 2;
wherein the segmenting the image sample to be segmented by using the initial segmentation model to determine a plurality of intermediate segmentation prediction data comprises:
determining global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented;
determining the feature data corresponding to the (M+1)-th loop iteration calculation based on the global shared feature data and the feature data corresponding to the M-th loop iteration calculation of the feedback propagation module, wherein M is a positive integer smaller than N;
determining the plurality of intermediate segmentation prediction data based on the feature data respectively corresponding to the 1st to N-th loop iteration calculations;
wherein training the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model comprises:
determining N-1 intermediate loss functions based on the intermediate segmentation prediction data respectively corresponding to the 1st to (N-1)-th loop iteration calculations and the segmentation annotation data samples;
determining a loss function combining uncertainty based on the segmentation uncertainty data, the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation, and the segmentation annotation data samples;
training the initial segmentation model based on the N-1 intermediate loss functions and the loss function combining uncertainty to generate the image segmentation model.
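To make the forward pass of claim 1 concrete, the following is a minimal, illustrative sketch: the global shared features are computed once, then a recurrent feedback-propagation step is iterated N times, emitting one intermediate segmentation prediction per iteration. NumPy scalar weights (`w_shared`, `w_loop`, `w_head`) stand in for the convolutional layers of a real model; all names here are invented for illustration and are not the patent's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_with_feedback(image, w_shared, w_loop, w_head, n_iters=3):
    """Run the shared feature extractor once, then iterate the
    feedback-propagation module n_iters (= N) times, emitting one
    intermediate segmentation prediction per loop iteration."""
    # Global shared feature data: computed once from the input image.
    shared = np.tanh(image * w_shared)
    feat = np.zeros_like(shared)  # feature state before the 1st iteration
    intermediate_preds = []
    for _ in range(n_iters):
        # Features for iteration M+1 depend on the global shared features
        # and the features from iteration M (the feedback loop).
        feat = np.tanh(w_loop * feat + shared)
        # A prediction head maps features to per-pixel foreground probability.
        intermediate_preds.append(sigmoid(w_head * feat))
    return intermediate_preds

image = np.random.default_rng(0).random((4, 4))
preds = forward_with_feedback(image, w_shared=0.5, w_loop=0.8, w_head=1.2)
# N intermediate predictions, one per loop iteration, same shape as the input.
```

Because the feature state is carried across iterations, successive intermediate predictions differ, which is what the uncertainty estimate in claim 5 exploits.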
2. The model training method of claim 1, wherein the determining a loss function combining uncertainty based on the segmentation uncertainty data, the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation, and the segmentation annotation data samples comprises:
performing loss calculation on the intermediate segmentation prediction data corresponding to the N-th loop iteration calculation and the segmentation annotation data samples to determine a first initial loss value;
performing a weighting operation on the first initial loss value with the segmentation uncertainty data as the weight, to obtain a weighted loss value;
and obtaining the loss function combining uncertainty based on the cross-entropy function and the weighted loss value.
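One plausible NumPy sketch of claim 2: a per-pixel binary cross-entropy between the final (N-th) iteration's prediction and the labels is the "first initial loss value", and the segmentation uncertainty data is applied as a per-pixel weight. The claim does not fix whether the weight is the uncertainty itself or its complement; the complement (down-weighting unreliable pixels) is used here as one plausible reading, and all function names are invented.

```python
import numpy as np

def loss_combining_uncertainty(final_pred, label, uncertainty, eps=1e-7):
    """Uncertainty-weighted cross-entropy loss on the N-th iteration's
    prediction (a sketch, not the patent's actual formula)."""
    p = np.clip(final_pred, eps, 1.0 - eps)
    # First initial loss value: per-pixel binary cross-entropy.
    first_initial_loss = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    # Weighting operation: pixels the model is uncertain about count less.
    weighted = (1.0 - uncertainty) * first_initial_loss
    return weighted.mean()

pred = np.array([[0.9, 0.2], [0.6, 0.8]])
label = np.array([[1.0, 0.0], [1.0, 1.0]])
unc = np.array([[0.0, 0.0], [0.9, 0.0]])  # one highly uncertain pixel
loss = loss_combining_uncertainty(pred, label, unc)
```

With this reading, raising the uncertainty at a pixel strictly reduces that pixel's contribution to the loss.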
3. The model training method of claim 1, wherein the determining N-1 intermediate loss functions based on the intermediate segmentation prediction data respectively corresponding to the 1st to (N-1)-th loop iteration calculations and the segmentation annotation data samples comprises:
for each intermediate segmentation prediction data among the 1st to (N-1)-th loop iteration calculations, performing loss calculation on the intermediate segmentation prediction data and the segmentation annotation data samples to determine a second initial loss value corresponding to the intermediate segmentation prediction data;
and determining the N-1 intermediate loss functions based on the cross-entropy function and the N-1 second initial loss values corresponding to the intermediate segmentation prediction data of the 1st to (N-1)-th loop iteration calculations.
4. The model training method of any one of claims 1 to 3, wherein the training the initial segmentation model based on the N-1 intermediate loss functions and the loss function combining uncertainty to generate the image segmentation model comprises:
superposing the N-1 intermediate loss functions and the loss function combining uncertainty to obtain a superposed loss function;
training the initial segmentation model based on the superposed loss function to generate the image segmentation model.
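Claims 3 and 4 together describe deep supervision: plain cross-entropy losses on iterations 1..N-1, superposed (summed) with the uncertainty-combined loss of the final iteration. A minimal sketch, assuming an unweighted sum as the superposition (the claims do not specify per-term coefficients) and invented function names:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Mean binary cross-entropy between a prediction map and labels."""
    p = np.clip(p, eps, 1.0 - eps)
    return float((-(y * np.log(p) + (1 - y) * np.log(1 - p))).mean())

def total_training_loss(intermediate_preds, label, uncertainty_loss):
    """Superpose the N-1 intermediate cross-entropy losses (iterations
    1..N-1) with the uncertainty-combined loss of iteration N."""
    intermediate_losses = [bce(p, label) for p in intermediate_preds[:-1]]
    return sum(intermediate_losses) + uncertainty_loss

label = np.array([[1.0, 0.0]])
preds = [np.array([[0.6, 0.4]]),   # iteration 1
         np.array([[0.7, 0.3]]),   # iteration 2
         np.array([[0.9, 0.1]])]   # iteration N (handled by its own loss)
total = total_training_loss(preds, label, uncertainty_loss=0.2)
```

The gradient of this superposed loss would then drive a single optimization step over the whole model, so every loop iteration receives a direct training signal.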
5. The model training method according to any one of claims 1 to 3, wherein the determining segmentation uncertainty data corresponding to the image sample to be segmented based on the plurality of intermediate segmentation prediction data comprises:
for each intermediate segmentation prediction data in the plurality of intermediate segmentation prediction data, determining a prediction value of each pixel in the intermediate segmentation prediction data, wherein the prediction value is used for representing the probability that the pixel belongs to the region of interest;
performing confidence calculation on the predicted values of a plurality of pixels located at the same pixel coordinate in the plurality of intermediate segmentation prediction data based on a confidence calculation formula to obtain a confidence value corresponding to each pixel coordinate;
and forming the confidence values corresponding to the pixel coordinates into a matrix to obtain the segmentation uncertainty data.
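Claim 5 scores each pixel coordinate by a confidence calculation over the N intermediate predicted values at that coordinate. The patent leaves the confidence formula open; variance across iterations (0 when all iterations agree) is one plausible instance, sketched here in NumPy with invented names:

```python
import numpy as np

def segmentation_uncertainty(intermediate_preds):
    """Form the segmentation uncertainty matrix: stack the N per-iteration
    foreground-probability maps and score each pixel coordinate by how
    much the iterations disagree (variance is an assumption; the claim
    only requires some confidence calculation per pixel coordinate)."""
    stack = np.stack(intermediate_preds, axis=0)  # shape (N, H, W)
    return stack.var(axis=0)                      # one value per pixel coordinate

preds = [np.array([[0.9, 0.5]]),
         np.array([[0.9, 0.1]]),
         np.array([[0.9, 0.9]])]
unc = segmentation_uncertainty(preds)
# Pixel 0: all iterations agree -> uncertainty ~0; pixel 1: they disagree -> large.
```

The resulting matrix has the same spatial shape as the input image, which is what lets it act as a per-pixel weight in claim 2 and as the source of the heat map in claim 8.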
6. The model training method of claim 1, wherein the initial segmentation model comprises a low-level feature extraction module, and the determining global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented comprises:
inputting the image sample to be segmented into the low-level feature extraction module to obtain the global shared feature data.
7. An image segmentation method, comprising:
acquiring an image to be segmented;
inputting the image to be segmented into an image segmentation model to determine segmentation prediction data corresponding to the image to be segmented, wherein the image segmentation model is obtained by training based on the model training method of any one of claims 1 to 6.
8. The image segmentation method according to claim 7, wherein the image segmentation model is configured to generate segmentation prediction data and segmentation uncertainty data corresponding to the image to be segmented based on the image to be segmented, and the method further comprises:
generating a segmentation uncertainty heat map corresponding to the image to be segmented based on the segmentation uncertainty data.
9. A model training apparatus for training an initial segmentation model including a feedback propagation module to generate an image segmentation model including the feedback propagation module, wherein the feedback propagation module is a feature extractor including a plurality of loop iteration calculations, the apparatus comprising:
the image segmentation method comprises a first determination module, a second determination module and a third determination module, wherein the first determination module is configured to determine an image sample to be segmented and a segmentation annotation data sample corresponding to the image sample to be segmented;
a second determination module configured to segment the image sample to be segmented by using the initial segmentation model to determine a plurality of intermediate segmentation prediction data, wherein the plurality of intermediate segmentation prediction data are segmentation prediction data determined based on an output result of the loop iteration calculation of the feedback propagation module;
a third determining module, configured to determine segmentation uncertainty data corresponding to the image sample to be segmented based on the plurality of intermediate segmentation prediction data, wherein the segmentation uncertainty data is used to characterize the degree of certainty of the segmentation prediction data obtained by a segmentation model;
a generation module configured to train the initial segmentation model based on the segmentation uncertainty data, the plurality of intermediate segmentation prediction data, and the segmentation annotation data samples to generate the image segmentation model;
the number of times of the loop iteration calculation is N, and N is a positive integer greater than or equal to 2;
wherein the second determining module further comprises:
the first feature determination unit is configured to determine global shared feature data corresponding to the image sample to be segmented based on the image sample to be segmented;
a second feature determination unit, configured to determine feature data corresponding to the M +1 th iteration calculation based on the global shared feature data and the mth iteration calculation of the feedback propagation module, where M is a positive integer smaller than N, and N is a positive integer greater than or equal to 2;
an intermediate division prediction data determination unit configured to determine the plurality of intermediate division prediction data based on the feature data respectively corresponding to the 1 st to nth loop iterations;
wherein the generation module further comprises:
the intermediate loss function determining unit is configured to calculate corresponding intermediate segmentation prediction data and segmentation marking data samples based on the 1 st to the N-1 st loop iteration, and determine N-1 intermediate loss functions;
an uncertainty loss function determination unit configured to combine the segmentation uncertainty data, calculate corresponding intermediate segmentation prediction data and the segmentation annotation data samples based on the nth loop iteration, and determine a loss function combining uncertainty;
a generating unit configured to train the initial segmentation model based on the N-1 intermediate loss functions and the loss function of the joint uncertainty to generate the image segmentation model.
10. An image segmentation apparatus, comprising:
the acquisition module is configured to acquire an image to be segmented;
an image segmentation module configured to input the image to be segmented into an image segmentation model to determine segmentation prediction data corresponding to the image to be segmented, wherein the image segmentation model is trained based on the model training method according to any one of claims 1 to 6.
11. An electronic device, comprising:
a processor; and
memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the method of any of claims 1 to 8.
12. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1 to 8.
CN202110560183.2A 2021-05-21 2021-05-21 Model training method and device, and image segmentation method and device Active CN113256651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110560183.2A CN113256651B (en) 2021-05-21 2021-05-21 Model training method and device, and image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110560183.2A CN113256651B (en) 2021-05-21 2021-05-21 Model training method and device, and image segmentation method and device

Publications (2)

Publication Number Publication Date
CN113256651A CN113256651A (en) 2021-08-13
CN113256651B true CN113256651B (en) 2022-03-29

Family

ID=77183767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110560183.2A Active CN113256651B (en) 2021-05-21 2021-05-21 Model training method and device, and image segmentation method and device

Country Status (1)

Country Link
CN (1) CN113256651B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399640B (en) * 2022-03-24 2022-07-15 之江实验室 Road segmentation method and device for uncertain region discovery and model improvement
CN114998588A (en) * 2022-06-06 2022-09-02 平安科技(深圳)有限公司 Image segmentation method, device and equipment based on artificial intelligence and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3942465B1 (en) * 2019-03-19 2023-03-08 Bühler AG Industrialized system for rice grain recognition and method thereof
CN111310624B (en) * 2020-02-05 2023-11-21 腾讯科技(深圳)有限公司 Occlusion recognition method, occlusion recognition device, computer equipment and storage medium
CN111311613B (en) * 2020-03-03 2021-09-07 推想医疗科技股份有限公司 Image segmentation model training method, image segmentation method and device
CN111681224A (en) * 2020-06-09 2020-09-18 上海联影医疗科技有限公司 Method and device for acquiring blood vessel parameters

Also Published As

Publication number Publication date
CN113256651A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
KR102014385B1 (en) Method and apparatus for learning surgical image and recognizing surgical action based on learning
EP3382642B1 (en) Highly integrated annotation and segmentation system for medical imaging
US9984772B2 (en) Image analytics question answering
CN107665736B (en) Method and apparatus for generating information
US10258304B1 (en) Method and system for accurate boundary delineation of tubular structures in medical images using infinitely recurrent neural networks
CN109087306A (en) Arteries iconic model training method, dividing method, device and electronic equipment
Selvan et al. Uncertainty quantification in medical image segmentation with normalizing flows
CN112884060B (en) Image labeling method, device, electronic equipment and storage medium
CN113256651B (en) Model training method and device, and image segmentation method and device
CN109949300B (en) Method, system and computer readable medium for anatomical tree structure analysis
JP2021144675A (en) Method and program
CN110472049B (en) Disease screening text classification method, computer device and readable storage medium
KR20210036840A (en) Training method for specializing artificial intelligence model in deployed institution, and apparatus for training the artificial intelligence model
Chykeyuk et al. Class-specific regression random forest for accurate extraction of standard planes from 3D echocardiography
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN111724371A (en) Data processing method and device and electronic equipment
CN116936116A (en) Intelligent medical data analysis method and system
Liao et al. Transformer-based annotation bias-aware medical image segmentation
CN114266896A (en) Image labeling method, model training method and device, electronic equipment and medium
CN115240028A (en) Small intestinal stromal tumor target self-training detection method utilizing CAM and SAM in parallel
CN110120266B (en) Bone age assessment method
CN111863206A (en) Image preprocessing method, device, equipment and storage medium
CN113222989B (en) Image grading method and device, storage medium and electronic equipment
US20230103262A1 (en) Image processing method and device
CN111612770B (en) Active screening-based focus detection system of semi-supervised focus detection network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant