
CN116168052A - Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid - Google Patents

Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid

Info

Publication number
CN116168052A
Authority
CN
China
Prior art keywords
gastric cancer
image
layer
model
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310097139.1A
Other languages
Chinese (zh)
Inventor
夏靖雯
丁勇
赵梦恋
阮世健
王亦凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310097139.1A
Publication of CN116168052A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid, which automatically segments gastric cancer pathological images and can be used for clinical computer-aided diagnosis of gastric cancer. First, gastric cancer pathology images and the physicians' labeling results are acquired, and the data are expanded by image enhancement. The pathological images are then fed into a semantic segmentation network combining adaptive attention and a feature pyramid, which accurately locates the cancerous region and finely segments the lesion edge. Similarity loss and cross-entropy loss are adopted to alleviate sample imbalance in the dataset, and a repeated training strategy lets the model converge adaptively to its optimum. The invention effectively reduces misjudgment of cancerous regions, accurately captures cancerous-region edge information, achieves smooth and accurate segmentation of lesion edges, and provides reliable support for the subsequent treatment of patients.

Description

Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid
Technical Field
The invention relates to the fields of medical image processing and computer vision, and in particular to a gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid.
Background
Gastric cancer is one of the malignant tumors with the highest mortality rate in China. Its diagnosis must be differentiated from gastric lesions such as pseudolymphoma and gastric mucosal prolapse, and early diagnosis of gastric cancer is crucial for improving patient prognosis.
Diagnosing gastric cancer requires collecting pathological sections of stomach tissue and judging, from the physician's experience, whether canceration has occurred. Cancerous regions have varied pathological manifestations, physicians' judgments of them are subjective, and manual delineation of lesions is time-consuming and labor-intensive. Clinical treatment of gastric cancer is usually surgical resection, so accurately determining the extent of the cancerous region can significantly reduce patient suffering.
At present, semantic segmentation networks for gastric cancer pathological image segmentation are an important research direction. Through data-driven training, a semantic segmentation network learns network parameters that fit the physician's slide-reading decision process, giving the network analysis and learning abilities comparable to a human's. Among semantic segmentation networks, the pyramid-based PSPNet and the dilated-convolution-based DeepLab perform well on natural image segmentation and street-scene segmentation, but they are far less widely used in medical image segmentation than UNet. First, medical image data are harder to obtain and cannot easily support training a complex network; second, the semantic information of medical images is simpler, so deep stacks of downsampling structures for building high-dimensional semantics are unnecessary, and the network instead needs to focus on understanding low-dimensional texture information. UNet's simple structure leaves ample room for task-specific modification and avoids the loss of low-dimensional information caused by multi-layer downsampling and dilated convolution. The gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid can accurately locate the cancerous region and precisely segment its edge, and thus has clinical value.
Disclosure of Invention
In view of the above, the present invention aims to provide a gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid. A gastric cancer dataset is first constructed and expanded by image enhancement; in the training stage, the pathological images are fed into a semantic segmentation network combining adaptive attention and a feature pyramid to segment the cancerous region. During training, similarity loss and cross-entropy loss are combined to alleviate sample imbalance in the dataset, and a repeated training strategy reloads the model parameters after each round of training and trains again, letting the model converge adaptively to its optimum. The invention effectively reduces misjudgment of cancerous regions, accurately captures cancerous-region edge information, achieves smooth and accurate segmentation of lesion edges, provides reliable support for the subsequent treatment of patients, and has practical clinical value.
The invention is realized by adopting the following scheme:
A gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid comprises the following steps:
step S1: obtaining gastric cancer pathological images and physicians' labeling of the cancerous regions, and constructing a gastric cancer dataset;
step S2: expanding the gastric cancer dataset using an image enhancement method;
step S3: training the segmentation model combining adaptive attention and the feature pyramid on the expanded gastric cancer dataset, the segmentation model segmenting the cancerous regions in the dataset to obtain preliminarily trained model parameters and a cancerous-region segmentation result for each gastric cancer pathological image;
step S4: repeating step S3 under a repeated training strategy to obtain the final parameters of the segmentation model;
step S5: processing the gastric cancer pathological image to be segmented with the trained segmentation model to obtain its cancerous-region segmentation result.
Further, the step S1 specifically includes the following steps:
step S11: acquiring gastric cancer pathology images;
step S12: having an expert judge and delineate the cancerous regions in the pathology images according to experience;
step S13: generating a cancerous-region labeling image from the expert's delineation, the mask image containing only white and black, where white represents the cancerous region and black represents normal tissue;
step S14: forming an image-label pair from each gastric cancer pathology image and its corresponding cancerous-region labeling image, all image-label pairs together forming the gastric cancer dataset.
Further, the step S2 specifically includes the following steps:
step S21: horizontally flipping some image-label pairs in the dataset;
step S22: vertically flipping some image-label pairs in the dataset;
step S23: randomly adding Gaussian noise to all images in the dataset and applying random hue transformation;
step S24: randomly cropping all image-label pairs in the dataset.
Further, the segmentation model combining adaptive attention and the feature pyramid in step S3 comprises:
a model encoder for characterizing the features of the cancerous region in the pathological image to obtain a pathology image feature map;
a model decoder for reconstructing a cancerous-region prediction mask image from the pathology image feature map obtained by the model encoder;
and a connection module for connecting corresponding layers of the model encoder and the model decoder to realize information interaction between them.
Further, the model encoder comprises multiple encoding layers, each consisting of an adaptive attention network and a residual network, together with a feature pyramid network; the input of the first encoding layer is the gastric cancer pathological image, and the input of each subsequent encoding layer is the output of the previous one; the output of the last encoding layer, after passing through the feature pyramid network, serves as the input of the model decoder.
Further, the input of the adaptive attention network in each encoding layer is the input of that encoding layer, the input of the residual network in each encoding layer is the output of that layer's adaptive attention network, and the output of the residual network is the output of the encoding layer.
Further, the model decoder comprises multiple decoding layers, each consisting of an upsampling network and a feature fusion network, except that the first decoding layer contains no feature fusion network; the decoder has the same number of layers as the encoder; the input of the first decoding layer is the output of the feature pyramid network in the model encoder, the input of each subsequent decoding layer is the output of the previous decoding layer together with the output of the corresponding encoding layer, and the output of the last decoding layer is the output of the model decoder.
Further, the input of the feature fusion network in each decoding layer is the output of the previous decoding layer and the output of the corresponding encoding layer; the input of the upsampling network in the second and subsequent decoding layers is the output of the feature fusion network in the same decoding layer; and the output of the upsampling network in each decoding layer is the output of that decoding layer.
Further, the step S4 specifically includes the following steps:
step S41: reloading the model parameters obtained from the previous round of training and training again;
step S42: repeating the training until the model accuracy no longer improves, thereby obtaining the final model parameters.
Further, similarity loss and cross-entropy loss are adopted when training the segmentation model combining adaptive attention and the feature pyramid.
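For concreteness, one common instantiation of this loss pair (an assumption; the invention does not fix the exact form of the similarity loss) is the Dice similarity loss added to per-pixel cross-entropy:

L_total = L_CE + L_Dice, where L_Dice = 1 - (2|P ∩ G| + ε) / (|P| + |G| + ε),

with P the set of pixels predicted cancerous, G the set labeled cancerous, and ε a small smoothing constant. The Dice term is insensitive to the large number of normal-tissue pixels, which is what alleviates the sample imbalance.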
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a gastric cancer pathological image segmentation method combining self-adaptive attention and a feature pyramid, which can automatically segment gastric cancer pathological images and solve the problem that a manual segmentation method is time-consuming and labor-consuming. According to the method, through constructing a gastric cancer data set and then inputting pathological images into a semantic segmentation network combining self-adaptive attention and a characteristic pyramid, the network adaptively learns to accurately position a cancerous region, the characteristic that naked eyes cannot observe in medical images can be captured, and subjectivity of doctors in judging cancerous is eliminated. The method adopts the strategies of similarity loss, cross entropy loss and repeated training, so that the model is adaptively converged to the optimal, the accurate segmentation of the cancerous region is realized, the operation can be accurately guided, the pain of a patient is effectively relieved, the method can be used for clinical auxiliary diagnosis of gastric cancer, and reliable support is provided for subsequent treatment of the patient.
Drawings
Fig. 1 is a flow chart of a gastric cancer pathological image segmentation method according to an embodiment of the invention.
Fig. 2 is a schematic diagram of an image enhancement process according to an embodiment of the invention.
Fig. 3 is a schematic diagram of a segmentation model structure combining adaptive attention and feature pyramids for gastric cancer pathological image segmentation according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a repeated training strategy for gastric cancer pathological image segmentation according to an embodiment of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort fall within the scope of the invention.
Referring to Fig. 1, which shows a flow chart of an embodiment of the gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid according to the present invention, the method specifically includes the following steps:
step S101: a gastric cancer dataset is constructed.
A clinician first obtains stomach tissue from the patient to be segmented; after staining, a digital slide scanner generates the gastric pathology image. Because the segmentation network is meant to imitate a physician reading the slide to find cancerous regions, an expert physician delineates the cancerous regions on the pathology image according to experience, and a cancerous-region labeling image is generated from the delineation. The labeling image contains only white and black: white represents the cancerous region and black the normal tissue region. Each pathology image and its corresponding labeling image form an image-label pair, and all pairs form the gastric cancer dataset.
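For concreteness, a minimal sketch of such an image-label pair dataset, assuming (an assumption, not specified by the invention) that images and masks are stored as identically named files in parallel directories:

```python
import os
from PIL import Image
import numpy as np
import torch
from torch.utils.data import Dataset

class GastricCancerDataset(Dataset):
    """Image-label pairs: a pathology image and its binary cancerous-region mask."""
    def __init__(self, image_dir, mask_dir):
        self.image_dir, self.mask_dir = image_dir, mask_dir
        self.names = sorted(os.listdir(image_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        image = np.array(Image.open(os.path.join(self.image_dir, self.names[i])).convert("RGB"))
        mask = np.array(Image.open(os.path.join(self.mask_dir, self.names[i])).convert("L"))
        image = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0  # HWC -> CHW, [0, 1]
        label = torch.from_numpy((mask > 127).astype(np.int64))           # white=1 (cancerous), black=0 (normal)
        return image, label
```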
Step S102: the dataset is augmented using image enhancement methods.
Image enhancement allows a limited set of images to yield many more without substantially increasing data collection. A network should judge correctly when the same target is shot from different angles, under different lighting intensities, with partial occlusion, positional shifts, or different shooting distances, and image enhancement simulates these conditions. Image enhancement reduces the model's sensitivity to the raw image, forces the model to learn more complex semantic information in the image, and improves the model's invariance (for example by partially occluding the image, adjusting its brightness, or adding noise or local blur), effectively preventing the model's judgment from being disturbed by information irrelevant to the target. Image enhancement can also mitigate sample imbalance to some extent: expanding the class with fewer samples by enhancement reduces the imbalance ratio, which benefits network training.
As shown in Fig. 2, the image enhancement process in this embodiment first flips the input image horizontally with probability 0.5, then vertically with probability 0.5, and then applies Gaussian noise, random hue variation, and random cropping. The flip and crop operations must be applied to the image and its label simultaneously, whereas noise addition and hue transformation apply only to the image. These enhancements expand the dataset roughly 20-fold, effectively alleviating model overfitting and improving model generalization and robustness.
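A sketch of this enhancement pipeline under stated assumptions: tensors in [0, 1] as produced by the dataset above, torchvision available, and illustrative noise level, hue range, and crop size (only the 0.5 flip probabilities come from the embodiment):

```python
import random
import torch
import torchvision.transforms.functional as TF

def augment(image, label, noise_std=0.05, crop=(384, 384)):
    # Horizontal then vertical flip, each with probability 0.5, applied to image AND label.
    if random.random() < 0.5:
        image, label = TF.hflip(image), TF.hflip(label)
    if random.random() < 0.5:
        image, label = TF.vflip(image), TF.vflip(label)
    # Gaussian noise and a random hue shift touch only the image; the label stays clean.
    image = (image + noise_std * torch.randn_like(image)).clamp(0.0, 1.0)
    image = TF.adjust_hue(image, hue_factor=random.uniform(-0.05, 0.05))
    # Random crop at the same location in image and label.
    top = random.randint(0, image.shape[-2] - crop[0])
    left = random.randint(0, image.shape[-1] - crop[1])
    image = TF.crop(image, top, left, *crop)
    label = TF.crop(label, top, left, *crop)
    return image, label
```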
Step S103: training is performed using a segmentation model that combines adaptive attention with feature pyramids.
In one embodiment, the structure of the segmentation model combining adaptive attention and the feature pyramid is shown in Fig. 3, comprising:
and the model encoder is used for characterizing the characteristics of the cancerous region in the pathological image, measuring the importance degree of the features in the space dimension and the channel dimension, and adjusting the weights of the features according to the importance degree. The features of the cancerous region tend to be higher in importance degree, the feature weights are larger, and the attention of the model to the features is higher; the features of the non-cancerous region tend to be of lower insect bite, feature weights are lower, and the model selectively ignores the features. The characteristic mode of the image characteristic is adjusted in a self-adaptive mode by the encoder, the characteristic similarity of the cancerous region is improved, the characteristics of the cancerous region and the non-cancerous region are pulled as far as possible, the characteristic interference model judgment of the non-cancerous region is avoided, and the coded pathological image characteristic diagram is obtained.
A model decoder, which adaptively establishes the relation between the feature map produced by the encoder and the target output, bringing the network prediction as close as possible to the label. The decoder's input is the encoder's output; the decoder consists of multiple upsampling layers, each of which resamples and reconstructs the previous layer's output image and enlarges it, and the last upsampling layer outputs a prediction mask image of the same size as the original pathology image. The prediction mask contains only black and white; each pixel corresponds to the pixel at the same position in the original pathology image, white meaning the pixel belongs to a cancerous region and black meaning it belongs to normal tissue.
A connection module between the encoder and decoder for information interaction between them. The encoder and decoder are hierarchical structures with the same number of layers, and the output of the encoder's last layer passes through the feature pyramid module before entering the decoder. The encoder's last feature map models the overall semantics of the image at a high level but loses texture detail; the information lost during decoding impairs the decoder's judgment of texture detail, making the edges of cancerous regions hard to distinguish accurately. To solve this, the connection module links encoding and decoding layers of corresponding levels, injecting texture information directly into the corresponding decoding layer and effectively improving model accuracy.
In this step, the gastric cancer pathological image is first fed into the model encoder combining adaptive attention and the feature pyramid, and the first- through fifth-layer feature maps are obtained in sequence from five layers of adaptive attention modules and residual modules. Specifically, the input of the first-layer adaptive attention module is the pathological image, the input of the first-layer residual module is the output of the first-layer adaptive attention module, and the output of the first-layer residual module is the first-layer feature map. The input of each subsequent adaptive attention module is the output of the previous layer's residual module, the input of each layer's residual module is the output of that layer's adaptive attention module, and each layer's residual module outputs a feature map.
The fifth-layer feature map output by the encoder is fed into the feature pyramid module, whose output enters the decoder; the first- through fifth-layer intermediate results are obtained in sequence from five upsampling layers. Specifically, the input of the first upsampling layer is the output of the feature pyramid module, and its output is the fifth-layer intermediate result. The input of the second upsampling layer is the fusion of the fifth-layer intermediate result and the fourth-layer feature map, performed by the feature fusion module, and its output is the fourth-layer intermediate result. Continuing in this manner, the fusion of the first-layer feature map and the second-layer intermediate result passes through the fifth upsampling layer to yield the prediction mask image of the gastric cancer pathological image.
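The wiring just described can be summarized in a short structural sketch. This is a hedged outline rather than the patent's reference implementation: the adaptive attention, residual, feature pyramid, upsampling, feature fusion, and output-head submodules are assumed to be supplied by the caller (one possible adaptive attention block is sketched after the module list below):

```python
import torch.nn as nn

class SegmentationModel(nn.Module):
    """Encoder: five layers of (adaptive attention -> residual block).
    Bridge: feature pyramid on the fifth-layer feature map.
    Decoder: five upsampling layers; layers 2-5 first fuse the previous
    intermediate result with the corresponding encoder feature map."""

    def __init__(self, attn, res, pyramid, up, fuse, head):
        super().__init__()
        self.attn = nn.ModuleList(attn)   # 5 adaptive attention modules
        self.res = nn.ModuleList(res)     # 5 residual modules
        self.pyramid = pyramid            # feature pyramid bridge
        self.up = nn.ModuleList(up)       # 5 upsampling modules
        self.fuse = nn.ModuleList(fuse)   # 4 feature fusion modules
        self.head = head                  # maps the last result to per-pixel class logits

    def forward(self, x):
        feats = []
        for attn, res in zip(self.attn, self.res):      # encoding layers 1..5
            x = res(attn(x))
            feats.append(x)
        x = self.up[0](self.pyramid(feats[4]))          # bridge, then 5th intermediate result
        for k in range(1, 5):                           # decoding layers 2..5
            x = self.up[k](self.fuse[k - 1](x, feats[4 - k]))
        return self.head(x)                             # prediction mask logits
```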
In one implementation of the present invention, the model encoder architecture of the segmentation model combining adaptive attention and the feature pyramid is shown in Fig. 3, comprising:
an adaptive attention module for enhancing the network's attention to important features in both the channel dimension and the spatial dimension. The self-adaptive attention module firstly reduces the model size through downsampling, extracts the features through the convolution layer, and then sequentially passes through the channel attention module and the space attention mechanism module to respectively and adaptively allocate weights to different channel features and different space position features so as to realize channel and space attention. In this embodiment, the channel attention module may use an existing Squeeze and Excitation module (SENet), and the spatial attention mechanism module may use an existing spatial transformation neural network (STN).
A residual module for mapping shallow information directly to deeper layers of the network, which effectively breaks the network's symmetry, increases the information carried by high-dimensional feature vectors, improves the network's representational capacity, increases its nonlinearity, and alleviates the information attenuation caused by deepening the network. In this embodiment, the residual module may use existing residual connections (ResNet).
A feature pyramid module for obtaining multi-scale image features, fusing shallow and deep features, effectively integrating features from different receptive fields and their context, improving the model's expressive capacity, and strengthening the guidance of deep semantic information for segmentation. In this embodiment, the feature pyramid module may employ an existing feature pyramid network (FPN); a usage sketch follows below.
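A minimal sketch of one such adaptive attention block, under stated assumptions: the downsampling, channel counts, and kernel sizes are illustrative; the channel gate follows the squeeze-and-excitation pattern; and a lightweight convolutional spatial gate stands in for a full spatial transformer network as a simplification. A block like this could serve as one of the `attn` modules in the wiring sketch above:

```python
import torch
import torch.nn as nn

class AdaptiveAttention(nn.Module):
    def __init__(self, in_ch, out_ch, reduction=16):
        super().__init__()
        self.down = nn.Sequential(                # downsample, then extract features
            nn.MaxPool2d(2),
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.channel = nn.Sequential(             # squeeze-and-excitation channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(             # per-position spatial gate
            nn.Conv2d(out_ch, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.down(x)
        x = x * self.channel(x)                   # reweight channels adaptively
        return x * self.spatial(x)                # reweight spatial positions adaptively
```

And a usage sketch of torchvision's ready-made FPN, shown here in its general multi-level form (in the embodiment above the module sits on the encoder's fifth-layer feature map); the channel counts and input size are illustrative:

```python
from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

fpn = FeaturePyramidNetwork(in_channels_list=[64, 128, 256, 512, 1024], out_channels=256)

# Five encoder feature maps at successively halved resolutions (batch of 1, 256x256 input).
feats = OrderedDict(
    (f"layer{i + 1}", torch.randn(1, c, 128 // 2 ** i, 128 // 2 ** i))
    for i, c in enumerate([64, 128, 256, 512, 1024])
)
outs = fpn(feats)  # same keys, each map now 256 channels with top-down context fused in
print([(k, tuple(v.shape)) for k, v in outs.items()])
```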
Step S104: obtaining model parameters and cancerous-region segmentation results.
In this embodiment, the obtained model parameters and cancerous-region segmentation results are the parameters and prediction mask images produced by the first round of training. Under the repeated training strategy, the model parameters are reloaded and retrained; similarity loss and cross-entropy loss are computed from the labels and prediction mask images to alleviate sample imbalance in the dataset, and training repeats until the model's accuracy no longer improves. The trained segmentation model can then process gastric cancer pathological images to be segmented and output cancerous-region segmentation results.
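A sketch of the loss combination and the repeated training strategy described above, assuming the model, dataset, and augmentation sketched earlier; the Dice form of the similarity loss, the Adam optimizer, the learning rate, and the checkpoint path are illustrative assumptions rather than values fixed by the patent:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, labels, eps=1.0):
    """Similarity (Dice) loss on the cancerous-class probability map."""
    prob = logits.softmax(dim=1)[:, 1]                   # P(pixel is cancerous)
    target = labels.float()
    inter = (prob * target).sum(dim=(1, 2))
    union = prob.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def train_round(model, loader, optimizer, device):
    model.train()
    for image, label in loader:
        image, label = image.to(device), label.to(device)
        logits = model(image)
        # Cross-entropy handles per-pixel classification; the Dice term counters imbalance.
        loss = F.cross_entropy(logits, label) + dice_loss(logits, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def repeated_training(model, loader, evaluate, device, ckpt="model.pt", max_rounds=10):
    best = -1.0
    for _ in range(max_rounds):
        if best >= 0:                                    # reload parameters from the last round
            model.load_state_dict(torch.load(ckpt))
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        train_round(model, loader, optimizer, device)
        acc = evaluate(model)                            # e.g. Dice score on a validation set
        if acc <= best:
            break                                        # accuracy no longer improves: stop
        best = acc
        torch.save(model.state_dict(), ckpt)
    return best
```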
Fig. 4 shows the segmentation results obtained in this embodiment, where (a) is a schematic gastric cancer pathological image in which the gray closed curve is the boundary of the cancerous region outlined by the physician and the region enclosed by the curve is cancerous; (b) is the final segmentation mask produced by the gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid, with white as the cancerous region and black as the normal tissue region; and (c) is an enlarged view of local detail in the final segmentation mask.
The foregoing is only a list of specific embodiments of the invention. Obviously, the invention is not limited to the above embodiments, and many variations are possible. All modifications that a person skilled in the art can derive or infer directly from this disclosure should be considered within the scope of the invention.

Claims (10)

1. A gastric cancer pathological image segmentation method combining adaptive attention and a feature pyramid, characterized by comprising the following steps:
step S1: obtaining gastric cancer pathological images and physicians' labeling of the cancerous regions, and constructing a gastric cancer dataset;
step S2: expanding the gastric cancer dataset using an image enhancement method;
step S3: training the segmentation model combining adaptive attention and the feature pyramid on the expanded gastric cancer dataset, the segmentation model segmenting the cancerous regions in the dataset to obtain preliminarily trained model parameters and a cancerous-region segmentation result for each gastric cancer pathological image;
step S4: repeating step S3 under a repeated training strategy to obtain the final parameters of the segmentation model;
step S5: processing the gastric cancer pathological image to be segmented with the trained segmentation model to obtain its cancerous-region segmentation result.
2. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid of claim 1, wherein step S1 specifically comprises the steps of:
step S11: acquiring a gastric cancer pathology image;
step S12: having an expert judge and delineate the cancerous regions in the pathology images according to experience;
step S13: generating a cancerous-region labeling image from the expert's delineation, the mask image containing only white and black, where white represents the cancerous region and black represents normal tissue;
step S14: forming an image-label pair from each gastric cancer pathology image and its corresponding cancerous-region labeling image, all image-label pairs together forming the gastric cancer dataset.
3. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid of claim 2, wherein step S2 specifically comprises the steps of:
step S21: horizontally flipping some image-label pairs in the dataset;
step S22: vertically flipping some image-label pairs in the dataset;
step S23: randomly adding Gaussian noise to all images in the dataset and applying random hue transformation;
step S24: randomly cropping all image-label pairs in the dataset.
4. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid of claim 1, wherein the segmentation model combining adaptive attention and feature pyramid of step S3 comprises:
a model encoder for characterizing the features of the cancerous region in the pathological image to obtain a pathology image feature map;
a model decoder for reconstructing a cancerous-region prediction mask image from the pathology image feature map obtained by the model encoder;
and a connection module for connecting corresponding layers of the model encoder and the model decoder to realize information interaction between them.
5. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramids according to claim 4, wherein the model encoder comprises multiple encoding layers, each consisting of an adaptive attention network and a residual network, together with a feature pyramid network; the input of the first encoding layer is the gastric cancer pathology image, and the input of each subsequent encoding layer is the output of the previous one; the output of the last encoding layer, after passing through the feature pyramid network, serves as the input of the model decoder.
6. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid of claim 5, wherein the input of the adaptive attention network in each encoding layer is the input of the current encoding layer, the input of the residual network in each encoding layer is the output of the adaptive attention network in the current encoding layer, and the output of the residual network in each encoding layer is the output of the current encoding layer.
7. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid of claim 5, wherein the model decoder comprises multiple decoding layers, each consisting of an upsampling network and a feature fusion network, except that the first decoding layer contains no feature fusion network; the decoder has the same number of layers as the encoder; the input of the first decoding layer is the output of the feature pyramid network in the model encoder, the input of each subsequent decoding layer is the output of the previous decoding layer together with the output of the corresponding encoding layer, and the output of the last decoding layer is the output of the model decoder.
8. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid of claim 7, wherein the input of the feature fusion network in each decoding layer is the output of the previous decoding layer and the output of the corresponding encoding layer, the input of the upsampling network in the second and subsequent decoding layers is the output of the feature fusion network in the same decoding layer, and the output of the upsampling network in each decoding layer is the output of that decoding layer.
9. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramid according to claim 1, wherein the step S4 specifically comprises the following steps:
step S41: reloading the model parameters obtained from the previous round of training and training again;
step S42: repeating the training until the model accuracy no longer improves, thereby obtaining the final model parameters.
10. The gastric cancer pathology image segmentation method combining adaptive attention and feature pyramids according to claim 1, wherein similarity loss and cross entropy loss are used when training the segmentation model combining adaptive attention and feature pyramids.
CN202310097139.1A 2023-02-10 2023-02-10 Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid Pending CN116168052A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310097139.1A CN116168052A (en) 2023-02-10 2023-02-10 Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310097139.1A CN116168052A (en) 2023-02-10 2023-02-10 Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid

Publications (1)

Publication Number Publication Date
CN116168052A true CN116168052A (en) 2023-05-26

Family

ID=86414257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310097139.1A Pending CN116168052A (en) 2023-02-10 2023-02-10 Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid

Country Status (1)

Country Link
CN (1) CN116168052A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541797A (en) * 2023-12-21 2024-02-09 浙江飞图影像科技有限公司 Interactive three-dimensional bronchus segmentation system and method for chest CT (computed tomography) flat scanning
CN117541797B (en) * 2023-12-21 2024-05-31 浙江飞图影像科技有限公司 Interactive three-dimensional bronchus segmentation system and method for chest CT (computed tomography) flat scanning
CN118038450A (en) * 2024-03-01 2024-05-14 山东省农业科学院 Corn pest detection method based on remote sensing image
CN118470003A (en) * 2024-07-09 2024-08-09 吉林大学 Cervical cancer pathological image analysis system and method based on AI

Similar Documents

Publication Publication Date Title
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN116168052A (en) Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN113012172B (en) AS-UNet-based medical image segmentation method and system
CN113256641B (en) Skin lesion image segmentation method based on deep learning
CN112949838B (en) Convolutional neural network based on four-branch attention mechanism and image segmentation method
CN116309648A (en) Medical image segmentation model construction method based on multi-attention fusion
CN115661144A (en) Self-adaptive medical image segmentation method based on deformable U-Net
CN111179275A (en) Medical ultrasonic image segmentation method
CN114841320A (en) Organ automatic segmentation method based on laryngoscope medical image
CN114511502A (en) Gastrointestinal endoscope image polyp detection system based on artificial intelligence, terminal and storage medium
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN117350979A (en) Arbitrary focus segmentation and tracking system based on medical ultrasonic image
CN115965630A (en) Intestinal polyp segmentation method and device based on depth fusion of endoscope image
CN115471470A (en) Esophageal cancer CT image segmentation method
CN112884788A (en) Cup optic disk segmentation method and imaging method based on rich context network
CN112869704B (en) Diabetic retinopathy area automatic segmentation method based on circulation self-adaptive multi-target weighting network
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN117876690A (en) Ultrasonic image multi-tissue segmentation method and system based on heterogeneous UNet
CN117523204A (en) Liver tumor image segmentation method and device oriented to medical scene and readable storage medium
CN116797828A (en) Method and device for processing oral full-view film and readable storage medium
CN112967295B (en) Image processing method and system based on residual network and attention mechanism
CN116664587A (en) Pseudo-color enhancement-based mixed attention UNet ultrasonic image segmentation method and device
CN115410032A (en) OCTA image classification structure training method based on self-supervision learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination