WO2021196632A1 - Intelligent analysis system and method for panoramic digital pathological image - Google Patents
Intelligent analysis system and method for panoramic digital pathological image
- Publication number
- WO2021196632A1 (PCT/CN2020/129187)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pathological image
- features
- pathological
- input
- feature
- Prior art date
Links
- 230000001575 pathological effect Effects 0.000 title claims abstract description 105
- 238000004458 analytical method Methods 0.000 title claims abstract description 38
- 238000000034 method Methods 0.000 title claims abstract description 31
- 206010028980 Neoplasm Diseases 0.000 claims abstract description 29
- 201000011510 cancer Diseases 0.000 claims abstract description 27
- 230000007246 mechanism Effects 0.000 claims abstract description 26
- 230000004927 fusion Effects 0.000 claims abstract description 16
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 9
- 238000012549 training Methods 0.000 claims description 20
- 238000003860 storage Methods 0.000 claims description 16
- 238000000605 extraction Methods 0.000 claims description 11
- 230000008569 process Effects 0.000 claims description 8
- 238000004590 computer program Methods 0.000 claims description 6
- 206010061218 Inflammation Diseases 0.000 claims description 5
- 230000004054 inflammatory process Effects 0.000 claims description 5
- 238000004364 calculation method Methods 0.000 claims description 4
- 238000010191 image analysis Methods 0.000 abstract description 3
- 239000000463 material Substances 0.000 abstract description 2
- 230000006870 function Effects 0.000 description 13
- 238000010586 diagram Methods 0.000 description 12
- 238000012545 processing Methods 0.000 description 11
- 230000007170 pathology Effects 0.000 description 10
- 210000001519 tissue Anatomy 0.000 description 9
- 238000013145 classification model Methods 0.000 description 8
- 238000003745 diagnosis Methods 0.000 description 6
- 210000002751 lymph Anatomy 0.000 description 6
- 230000005540 biological transmission Effects 0.000 description 5
- 238000004195 computer-aided diagnosis Methods 0.000 description 4
- 238000013135 deep learning Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 238000007781 pre-processing Methods 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000010801 machine learning Methods 0.000 description 3
- 206010039083 rhinitis Diseases 0.000 description 3
- 238000007405 data analysis Methods 0.000 description 2
- 230000014509 gene expression Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 210000004940 nucleus Anatomy 0.000 description 2
- 230000001902 propagating effect Effects 0.000 description 2
- 238000007619 statistical method Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- RYGMFSIKBFXOCR-UHFFFAOYSA-N Copper Chemical compound [Cu] RYGMFSIKBFXOCR-UHFFFAOYSA-N 0.000 description 1
- 208000002454 Nasopharyngeal Carcinoma Diseases 0.000 description 1
- 206010061306 Nasopharyngeal cancer Diseases 0.000 description 1
- 238000009825 accumulation Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 210000004204 blood vessel Anatomy 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000003759 clinical diagnosis Methods 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 229910052802 copper Inorganic materials 0.000 description 1
- 239000010949 copper Substances 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- PCHJSUWPFVWCPO-UHFFFAOYSA-N gold Chemical compound [Au] PCHJSUWPFVWCPO-UHFFFAOYSA-N 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 201000011216 nasopharynx carcinoma Diseases 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000010827 pathological analysis Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 238000010186 staining Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 210000004881 tumor cell Anatomy 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
- 238000011179 visual inspection Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Definitions
- the present invention relates to the technical field of medical image processing, and more specifically, to an intelligent analysis system and method for panoramic digital pathological images.
- in the prior art, tumor pathological diagnosis and subsequent statistical analysis rely on the work experience and accumulated knowledge of pathologists.
- the evaluation results are easily affected by subjectivity.
- manual statistical analysis of H&E tumor cell nuclei is prone to errors.
- the statistical error in assessing the percentage of nuclei is as high as 45%, and the analysis results vary greatly from pathologist to pathologist.
- inter-observer variability, with a dynamic range of 10%-95%, can easily lead to false-negative diagnoses.
- Pathological examination is the current gold standard for clinical cancer diagnosis.
- the pathologist's cancer diagnosis mainly relies on visual inspection of tissue sample images captured by a microscope.
- pathologists need to combine their long-accumulated clinical experience to judge whether cancer is present in a nasopharyngeal carcinoma pathological section. This method is not only time-consuming, but also demands a very high level of professional knowledge from the doctor.
- CAD: Computer-Aided Diagnosis.
- the main methods of CAD in the diagnosis of pathological images include traditional machine learning and the more popular deep learning in recent years.
- Traditional machine learning requires manually extracting image features and then classifying them through a classifier.
- the analysis effect of this method mainly depends on the effect of manual feature extraction in the early stage.
- deep learning does not require manual feature extraction, can automatically dig deep features of pathological images, and directly perform end-to-end optimization.
- although CAD technology has achieved considerable success in the field of pathological images, actual algorithms still analyze features at a single scale and ignore features at different scales.
- diagnosis by pathologists takes a long time and requires high professional ability; traditional machine learning methods for analyzing pathological images mainly depend on the quality of feature extraction, which requires considerable professional knowledge from researchers.
- the existing feature extraction based on deep convolutional networks is mainly for a single feature, and less consideration is given to features under different magnifications.
- in actual clinical diagnosis, however, the image is analyzed at different scales.
- the purpose of the present invention is to overcome the above shortcomings of the prior art and to provide a panoramic digital pathological image intelligent analysis system and method, which classify digital panoramic pathological images with a deep multi-scale feature convolutional network based on the attention mechanism, so as to solve the problem that, because panoramic pathological images are large in scale and complex in shape, the feature information is not fully represented, and finally to realize automatic intelligent analysis of digital pathological images.
- a method for intelligent analysis of panoramic digital pathological images includes the following steps:
- the fusion feature is input to a trained pathological image classifier to obtain a classification result of whether the pathological image slice contains cancer.
- the fusion feature is obtained according to the following steps:
- the pathological image slices to be analyzed are sequentially input into each feature layer of the attention-mechanism-based convolutional neural network for feature extraction at the corresponding scale, and the features of each layer are processed to have the same resolution and number of channels;
- the fusion features are passed through a fully connected layer and then input to the trained pathological image classifier to obtain various probabilities.
- the pathological image slice to be analyzed is obtained according to the following steps:
- the sliced pathological image block is input to the deep convolution network, and after a convolution operation, three-dimensional features are obtained.
- the feature calculation model is expressed as:
- N out is the output feature vector
- N in(k) is the feature vector of the k-th channel of the previous layer
- C in is the total number of channels of the previous layer
- bias is the bias value.
- the loss function for training the pathological image classifier is expressed as:
- P (i, j) represents the classification result of the pixels of the input image
- P class (i, j) represents the classification label of the input image pixel
- weight (class) represents the specified weight of the category
- C in represents the total number of channels
- k represents the k-th channel
- weight(w k ,k) represents the weight of the attention mechanism obtained for each channel
- (i,j) represents the position of each pixel
- W and H are the length and width of the image, respectively.
- the classification result includes cancer, benign tissue, and inflammation.
- the pathological image slice to be analyzed is a noise picture with black, white, and red pixels randomly added according to the pixel size of the picture.
- the classification result is various types of probability information obtained by softmax.
- an intelligent analysis system for panoramic digital pathological images includes:
- Feature extraction unit: used to input the pathological image slices to be analyzed into the attention-mechanism-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain the fused feature;
- Classification unit: used to input the fused feature into the trained pathological image classifier to obtain the classification result of whether the pathological image slice contains cancer.
- the advantage of the present invention is that the proposed pathological image analysis with a deep multi-scale feature convolutional network based on the attention mechanism is a fully automatic analysis method: it requires no manual feature extraction from pathological images and avoids the analysis results being over-reliant on knowledge of pathological image features; features of different scales are considered and weights are assigned to them automatically, which effectively saves analysis time and improves analysis efficiency.
- Fig. 1 is a flowchart of a method for intelligent analysis of panoramic digital pathological images according to an embodiment of the present invention
- Fig. 2 is a schematic diagram of a deep neural network model according to an embodiment of the present invention.
- Fig. 3 is a flowchart of training a deep neural network according to an embodiment of the present invention.
- Fig. 4 is a schematic diagram of experimental results according to an embodiment of the present invention.
- the intelligent analysis of panoramic digital pathological images described herein is a multi-scale pathological image recognition method based on the attention mechanism.
- the method uses public pathology image data sets for training.
- the designed training network is a deep multi-scale feature convolutional network based on the attention mechanism.
- features of the corresponding scales are weighted and fused to obtain a richer feature representation of pathological images, so as to achieve accurate classification of pathological images.
- the analysis of clinical pathological image samples can be realized according to the obtained training model.
- the method for intelligent analysis of panoramic digital pathological images includes the following steps:
- step S110 a training set is constructed based on the published pathological images of cancer tissues and clinical cancer data.
- Specifically, collect and label public cancer histopathological images and clinical cancer data sets containing different subtypes, for example cancer, benign tissue (such as lymph or surrounding tissue), and inflammation, to form the training set.
- Step S120 preprocessing the pathological image.
- pathological images can be preprocessed, for example by adding noise patches sized according to the original pixel size of the image, to avoid the impact on diagnosis of slice quality (such as uneven staining) or irrelevant tissue such as blood vessels.
- the noise patches are mainly random single colors: black (0,0,0), white (255,255,255), or red (255,0,0), as in the sketch below.
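- a minimal sketch of this noise-adding step, assuming NumPy uint8 RGB arrays; the fraction of pixels replaced is an illustrative choice and not a value given in the text:

```python
import numpy as np

# Single-colour noise values named in the description:
# black (0,0,0), white (255,255,255), red (255,0,0).
NOISE_COLORS = np.array([[0, 0, 0], [255, 255, 255], [255, 0, 0]], dtype=np.uint8)

def add_color_noise(image, fraction=0.01, seed=0):
    """Randomly overwrite a fraction of pixels with black/white/red noise.

    `image` is an H x W x 3 uint8 RGB array; the amount of noise scales with
    the pixel size of the picture, as the description requires.
    """
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    h, w, _ = noisy.shape
    n_pixels = int(fraction * h * w)                       # noise amount tied to image size
    ys = rng.integers(0, h, n_pixels)
    xs = rng.integers(0, w, n_pixels)
    noisy[ys, xs] = NOISE_COLORS[rng.integers(0, len(NOISE_COLORS), n_pixels)]
    return noisy
```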
- step S130 a multi-scale pathological image classification model based on the attention mechanism is constructed, and the constructed training set is used for training to obtain a pathological image classifier.
- in step S130, a multi-scale feature convolution pathological image classification model based on the attention mechanism is established, and the pathological images obtained in step S120 are used to perform deep learning training on the pathological image classification model.
- the steps of the model training process include:
- Step S131 Establish a multi-scale feature fusion pathological image classification model based on the attention mechanism
- ADMCNN: Attention-based Deep Multiple-scale Convolutional Neural Network.
- the ADMCNN includes multiple feature layers, denoted by their output sizes: ADMCNN 56×56×64, ADMCNN 28×28×128, ADMCNN 14×14×256, and ADMCNN 7×7×512.
- Step S132: cut the obtained pathological image into patches with a size of 224×224.
- Step S133: input the cut pathological image patches into the deep convolutional network; after the first convolution operation, a feature map with a size of 112×112×64 is obtained (see the sketch below).
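- a sketch of steps S132-S133, assuming PyTorch; the 7×7, stride-2 stem convolution is an assumption chosen so that a 224×224×3 patch yields a 112×112×64 feature map, matching the sizes stated above, and is not necessarily the exact layer used in the patent:

```python
import torch
import torch.nn as nn

def cut_into_patches(slide, size=224):
    """Cut a C x H x W whole-slide tensor into non-overlapping size x size patches."""
    c, h, w = slide.shape
    slide = slide[:, : h // size * size, : w // size * size]      # drop the ragged border
    patches = slide.unfold(1, size, size).unfold(2, size, size)   # C x nH x nW x size x size
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, size, size)

# Stem convolution: a 224 x 224 x 3 patch -> 112 x 112 x 64 feature map (stride 2 halves the resolution).
stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

patches = cut_into_patches(torch.rand(3, 2048, 2048))   # toy slide tensor
features = stem(patches[:8])                            # -> shape (8, 64, 112, 112)
```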
- Step S134 Input the features obtained in step S133 into the established multi-scale pathological image classification model based on the attention mechanism, and extract the features through multiple convolutional layers.
- the parameters of the convolutional layer are used as the parameters of the attention mechanism.
- the extracted features are input to the next convolutional layer, so that the features of each layer have the same resolution and channel for subsequent processing.
- the corresponding feature calculation model is expressed as:
- N out is the output feature vector
- N in(k) is the feature vector of the k-th channel of the previous layer
- C in is the total number of channels of the previous layer
- bias is the bias value
- input is the input
- weight is the weight.
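- the formula itself appears only as an image in the original filing; a plausible LaTeX rendering, reconstructed from the variable definitions above and from claim 4 (where "*" is stated to be the two-dimensional convolution operation and w k is the weight of the k-th channel), is:

```latex
N_{\mathrm{out}} \;=\; \sum_{k=1}^{C_{\mathrm{in}}} \mathrm{weight}(w_k,\,k)\,*\,\mathrm{input}\!\left(N_{\mathrm{in}(k)}\right) \;+\; \mathrm{bias}
```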
- an attention mechanism is introduced to obtain a weight map for the corresponding scale, which is used to represent the importance of each scale and of each pixel.
- step S135 these features are then cascaded to merge them into a feature with a size of 28 ⁇ 28 ⁇ 128.
- step S136 the merged features are passed through a fully connected layer.
- step S137 the features after the fully connected layer are input to the pathological image classifier, and various probabilities are obtained through softmax.
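- a minimal PyTorch sketch of steps S134 to S137, assuming the four ADMCNN feature layers listed in step S131; the use of depthwise 1×1 convolutions as per-channel attention weights, 1×1 projections plus bilinear resizing to bring every scale to 28×28×128, and a single fully connected layer are assumptions chosen to match the sizes described here, not the exact layers of the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionHead(nn.Module):
    """Fuse multi-scale features (56x56x64, 28x28x128, 14x14x256, 7x7x512)
    into one 28x28x128 map, then output class probabilities via softmax."""

    def __init__(self, in_channels=(64, 128, 256, 512), fused_channels=128, num_classes=3):
        super().__init__()
        # Depthwise 1x1 convs: one learned attention weight per input channel.
        self.attend = nn.ModuleList([nn.Conv2d(c, c, 1, groups=c) for c in in_channels])
        # 1x1 projections to a common channel count.
        self.project = nn.ModuleList([nn.Conv2d(c, fused_channels, 1) for c in in_channels])
        self.merge = nn.Conv2d(fused_channels * len(in_channels), fused_channels, 1)
        self.fc = nn.Linear(fused_channels * 28 * 28, num_classes)

    def forward(self, feats):                     # feats: list of the four scale tensors
        resized = []
        for f, att, proj in zip(feats, self.attend, self.project):
            f = torch.sigmoid(att(f)) * f         # weight each channel by its attention score
            f = proj(f)                           # -> fused_channels
            resized.append(F.interpolate(f, size=(28, 28), mode="bilinear", align_corners=False))
        fused = self.merge(torch.cat(resized, dim=1))   # cascade, then merge to 28x28x128
        logits = self.fc(fused.flatten(1))              # fully connected layer (step S136)
        return F.softmax(logits, dim=1)                 # class probabilities (step S137)

head = AttentionFusionHead()
feats = [torch.rand(1, 64, 56, 56), torch.rand(1, 128, 28, 28),
         torch.rand(1, 256, 14, 14), torch.rand(1, 512, 7, 7)]
probs = head(feats)                                     # -> tensor of shape (1, 3)
```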
- the loss function model combined with the attention mechanism is designed as follows:
- P (i, j) represents the classification result of the pixels of the input image
- P class (i, j) represents the classification label of the input image pixel
- weight (class) represents the specified weight of the category
- C in represents the total number of channels
- k represents the k-th channel
- weight(w k ,k) represents the weight of the attention mechanism of each channel calculated after step S134
- (i,j) represents the position of each pixel
- W and H are the length and width of the image, respectively.
- softmax is used to normalize, and the relative importance of each position pixel in each scale is obtained.
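- the loss formula is likewise reproduced only as an image in the original; one reconstruction consistent with the variable list above (a per-pixel cross-entropy weighted both by the class weight and by the per-channel attention weights, averaged over the W×H image) would be:

```latex
\mathrm{Loss} \;=\; -\,\frac{1}{W\,H}\sum_{i=1}^{W}\sum_{j=1}^{H}\mathrm{weight}(\mathrm{class})
\sum_{k=1}^{C_{\mathrm{in}}}\mathrm{weight}(w_k,\,k)\;P_{\mathrm{class}}(i,j)\,\log P_{(i,j)}
```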
- Step S140 using the trained pathological image classifier to analyze the pathological image to be detected to obtain a classification result.
- the trained pathological image classifier can then analyze the target pathological image and obtain the classification result. Similar to the training process, the full clinical pathology slice to be analyzed is preprocessed, the preprocessed pathological image is input into the deep convolutional network for feature extraction, and the extracted features are input into the pathological classification network classifier to obtain the analysis result.
- model training, analysis and prediction process includes:
- Step S310: collecting public nasopharyngeal carcinoma pathological images containing inflammation, lymph, and cancer;
- Step S320 preprocessing the pathological image
- Step S330 establishing a multi-scale feature convolution pathology image classification model, and using the pathology image obtained after preprocessing to perform deep learning training on the pathology image classification model to obtain a reference pathology image classifier;
- Step S340 acquiring digital pathological slice data of clinical cancer
- Step S350: preprocessing the pathological image, similar to step S320;
- Step S360 input the preprocessed pathological image into the deep convolutional network for feature extraction
- Step S370 Input the extracted features into the pathological image classifier, and obtain the analysis result.
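- a minimal sketch of how the prediction stage (steps S340 to S370) could be wired together, reusing the hypothetical helpers from the earlier sketches (add_color_noise, cut_into_patches, stem, AttentionFusionHead); admcnn_backbone is a placeholder for the trained ADMCNN feature layers and is assumed to return the four multi-scale feature maps:

```python
import torch

CLASS_NAMES = ("cancer", "lymph", "inflammation")   # the three categories used in the experiments

@torch.no_grad()
def analyse_slide(slide_rgb, admcnn_backbone, fusion_head):
    """Classify every 224x224 patch of a whole-slide RGB image (H x W x 3 uint8 array)."""
    noisy = add_color_noise(slide_rgb)                               # step S350: preprocessing
    slide = torch.from_numpy(noisy).permute(2, 0, 1).float() / 255   # HWC uint8 -> CHW float
    patches = cut_into_patches(slide)                                # tile into 224x224 patches
    results = []
    for batch in patches.split(16):                                  # process in small batches
        feats = admcnn_backbone(stem(batch))                         # step S360: multi-scale features
        probs = fusion_head(feats)                                   # step S370: classification
        results.extend(dict(zip(CLASS_NAMES, p.tolist())) for p in probs)
    return results
```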
- the present invention also provides a panoramic digital pathological image intelligent analysis system, which is used to implement one or more aspects of the above method.
- the system includes: a feature extraction unit, which is used to input the pathological image slices to be analyzed into the attention-mechanism-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain the fused feature; and a classification unit, which is used to input the fused feature into a trained pathological image classifier to obtain a classification result of whether the pathological image slice contains cancer.
- the advantages of the embodiments of the present invention include: training on public data sets, fine-tuning on clinical data for further learning, and finally testing on clinical data, which overcomes the limitation of scarce labeled clinical data; using the attention-mechanism-based multi-scale feature convolutional network for feature fusion to obtain richer feature representations, which improves the analysis; and treating lymph as a separate category for classification, reducing the rate at which lymph tissue is misdiagnosed as cancer.
- verification is performed on nasopharyngeal carcinoma images, and a data set established by collecting pathological images from 20× digital pathological slices is used as the test data set.
- the average time for analyzing a digital pathology full-slice image (about 50,000×50,000 pixels at 20×) is taken as the computational complexity index. The results are shown in Figure 4, where the ordinate is the true positive rate and the abscissa is the false positive rate; the AUCs for cancer (cancerous tissue), lymph (lymph tissue), and inflammation (benign tissue) are 0.862268, 0.868492, and 0.855835, respectively. It can be seen that the present invention obtains accurate classification results for digital pathological images.
- the present invention realizes fully automatic cancer analysis by constructing a pathological image classifier based on a deep multi-scale feature convolutional network based on the attention mechanism.
- the invention combines the actual clinical analysis steps from a technical point of view, considers features of different scales and assigns weights to them, effectively saves the cost of manual data analysis and classification, avoids over-dependence on the doctor's technical level, and greatly reduces the cost in doctors' manpower and material resources, so that more cancer patients can be diagnosed and treated in a timely manner.
- the present invention may be a system, a method and/or a computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present invention.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices with instructions stored thereon, and any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be interpreted as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in the computer-readable storage medium in each computing/processing device.
- the computer program instructions used to perform the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, executed as a stand-alone software package, partly on the user's computer and partly executed on a remote computer, or entirely on the remote computer or server implement.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present invention.
- these computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction that contains one or more executable instructions for realizing the specified logical function. The functions may also occur in a different order from the order marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation through hardware, implementation through software, and implementation through a combination of software and hardware are all equivalent.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Probability & Statistics with Applications (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (10)
- 1. An intelligent analysis method for panoramic digital pathological images, comprising the following steps: inputting the pathological image slices to be analyzed into a convolutional neural network model based on the attention mechanism, extracting features of the pathological image at different scales, and fusing the extracted features to obtain a fused feature; and inputting the fused feature into a trained pathological image classifier to obtain a classification result of whether the pathological image slice contains cancer.
- 2. The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the fused feature is obtained according to the following steps: sequentially inputting the pathological image slices to be analyzed into each feature layer of the attention-mechanism-based convolutional neural network for feature extraction at the corresponding scale, and processing the features so that the features of each layer have the same resolution and number of channels; cascading and fusing the extracted features to obtain the fused feature; and passing the fused feature through a fully connected layer and then inputting it into the trained pathological image classifier to obtain the probability of each class.
- 3. The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the pathological image slices to be analyzed are obtained according to the following steps: cutting the pathological image into two-dimensional slices of preset dimensions; and inputting the cut pathological image patches into a deep convolutional network and, after one convolution operation, obtaining pathological image slices with three-dimensional features.
- 4. The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein, in the process of extracting features at different scales, the feature calculation model is expressed by a formula in which "*" is the two-dimensional convolution operation, N out is the output feature vector, N in(k) is the feature vector of the k-th channel of the previous layer, C in is the total number of channels of the previous layer, w k is the weight of the k-th channel, and bias is the bias value.
- 5. The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the loss function for training the pathological image classifier is expressed by a formula in which P (i,j) represents the classification result of a pixel of the input image, P class(i,j) represents the classification label of the input image pixel, weight(class) represents the weight assigned to the category, C in represents the total number of channels, k represents the k-th channel, weight(w k,k) represents the obtained attention-mechanism weight of each channel, (i,j) represents the position of each pixel, and W and H are the length and width of the image, respectively.
- 6. The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the classification result includes cancer, benign tissue, and inflammation.
- 7. The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the pathological image slice to be analyzed is a noise picture to which black, white, and red pixels have been randomly added according to the pixel size of the picture.
- 8. The intelligent analysis method for panoramic digital pathological images according to claim 2, wherein the classification result is the probability information of each class obtained by softmax.
- 9. An intelligent analysis system for panoramic digital pathological images, comprising: a feature extraction unit, configured to input the pathological image slices to be analyzed into a convolutional neural network model based on the attention mechanism, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fused feature; and a classification unit, configured to input the fused feature into a trained pathological image classifier to obtain a classification result of whether the pathological image slice contains cancer.
- 10. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the steps of the method according to any one of claims 1 to 8 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010237666.4A CN111488921B (en) | 2020-03-30 | 2020-03-30 | Intelligent analysis system and method for panoramic digital pathological image |
CN202010237666.4 | 2020-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021196632A1 true WO2021196632A1 (en) | 2021-10-07 |
Family
ID=71794502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/129187 WO2021196632A1 (en) | 2020-03-30 | 2020-11-16 | Intelligent analysis system and method for panoramic digital pathological image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111488921B (en) |
WO (1) | WO2021196632A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114118258A (en) * | 2021-11-19 | 2022-03-01 | 武汉大学 | Pathological section feature fusion method based on background guidance attention mechanism |
CN114202510A (en) * | 2021-11-11 | 2022-03-18 | 西北大学 | Intelligent analysis system for pathological section images under microscope |
CN114240836A (en) * | 2021-11-12 | 2022-03-25 | 杭州迪英加科技有限公司 | Nasal polyp pathological section analysis method and system and readable storage medium |
CN114359666A (en) * | 2021-12-28 | 2022-04-15 | 清华珠三角研究院 | Multi-mode fusion lung cancer patient curative effect prediction method, system, device and medium |
CN114529554A (en) * | 2021-12-28 | 2022-05-24 | 福州大学 | Intelligent auxiliary interpretation method for gastric cancer HER2 digital pathological section |
CN115063592A (en) * | 2022-08-16 | 2022-09-16 | 之江实验室 | Multi-scale-based full-scanning pathological feature fusion extraction method and system |
CN116740041A (en) * | 2023-06-27 | 2023-09-12 | 新疆生产建设兵团医院 | CTA scanning image analysis system and method based on machine vision |
CN117036811A (en) * | 2023-08-14 | 2023-11-10 | 桂林电子科技大学 | Intelligent pathological image classification system and method based on double-branch fusion network |
CN117392428A (en) * | 2023-09-04 | 2024-01-12 | 深圳市第二人民医院(深圳市转化医学研究院) | Skin disease image classification method based on three-branch feature fusion network |
CN117764994A (en) * | 2024-02-22 | 2024-03-26 | 浙江首鼎视介科技有限公司 | biliary pancreas imaging system and method based on artificial intelligence |
CN118115787A (en) * | 2024-02-23 | 2024-05-31 | 齐鲁工业大学(山东省科学院) | Full-slice pathological image classification method based on graph neural network |
CN118299022A (en) * | 2024-05-28 | 2024-07-05 | 吉林大学 | Informationized management system and method for surgical equipment |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111488921B (en) * | 2020-03-30 | 2023-06-16 | 中国科学院深圳先进技术研究院 | Intelligent analysis system and method for panoramic digital pathological image |
CN112116559A (en) * | 2020-08-17 | 2020-12-22 | 您好人工智能技术研发昆山有限公司 | Digital pathological image intelligent analysis method based on deep learning |
CN113222933B (en) * | 2021-05-13 | 2023-08-04 | 西安交通大学 | Image recognition system applied to renal cell carcinoma full-chain diagnosis |
CN115082743B (en) * | 2022-08-16 | 2022-12-06 | 之江实验室 | Full-field digital pathological image classification system considering tumor microenvironment and construction method |
CN115482221A (en) * | 2022-09-22 | 2022-12-16 | 深圳先进技术研究院 | End-to-end weak supervision semantic segmentation labeling method for pathological image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190102878A1 (en) * | 2017-09-30 | 2019-04-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for analyzing medical image |
CN109886346A (en) * | 2019-02-26 | 2019-06-14 | 四川大学华西医院 | A kind of cardiac muscle MRI image categorizing system |
CN110717905A (en) * | 2019-09-30 | 2020-01-21 | 上海联影智能医疗科技有限公司 | Brain image detection method, computer device, and storage medium |
CN110766643A (en) * | 2019-10-28 | 2020-02-07 | 电子科技大学 | Microaneurysm detection method facing fundus images |
CN111488921A (en) * | 2020-03-30 | 2020-08-04 | 中国科学院深圳先进技术研究院 | Panoramic digital pathological image intelligent analysis system and method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10565708B2 (en) * | 2017-09-06 | 2020-02-18 | International Business Machines Corporation | Disease detection algorithms trainable with small number of positive samples |
CN108596882B (en) * | 2018-04-10 | 2019-04-02 | 中山大学肿瘤防治中心 | The recognition methods of pathological picture and device |
CN109165697B (en) * | 2018-10-12 | 2021-11-30 | 福州大学 | Natural scene character detection method based on attention mechanism convolutional neural network |
CN109784347B (en) * | 2018-12-17 | 2022-04-26 | 西北工业大学 | Image classification method based on multi-scale dense convolution neural network and spectral attention mechanism |
CN110570953A (en) * | 2019-09-09 | 2019-12-13 | 杭州憶盛医疗科技有限公司 | Automatic analysis method and system for digital pathology panoramic slice image |
-
2020
- 2020-03-30 CN CN202010237666.4A patent/CN111488921B/en active Active
- 2020-11-16 WO PCT/CN2020/129187 patent/WO2021196632A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190102878A1 (en) * | 2017-09-30 | 2019-04-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for analyzing medical image |
CN109886346A (en) * | 2019-02-26 | 2019-06-14 | 四川大学华西医院 | A kind of cardiac muscle MRI image categorizing system |
CN110717905A (en) * | 2019-09-30 | 2020-01-21 | 上海联影智能医疗科技有限公司 | Brain image detection method, computer device, and storage medium |
CN110766643A (en) * | 2019-10-28 | 2020-02-07 | 电子科技大学 | Microaneurysm detection method facing fundus images |
CN111488921A (en) * | 2020-03-30 | 2020-08-04 | 中国科学院深圳先进技术研究院 | Panoramic digital pathological image intelligent analysis system and method |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114202510A (en) * | 2021-11-11 | 2022-03-18 | 西北大学 | Intelligent analysis system for pathological section images under microscope |
CN114202510B (en) * | 2021-11-11 | 2024-01-19 | 西北大学 | Intelligent analysis system for pathological section image under microscope |
CN114240836A (en) * | 2021-11-12 | 2022-03-25 | 杭州迪英加科技有限公司 | Nasal polyp pathological section analysis method and system and readable storage medium |
CN114118258A (en) * | 2021-11-19 | 2022-03-01 | 武汉大学 | Pathological section feature fusion method based on background guidance attention mechanism |
CN114359666A (en) * | 2021-12-28 | 2022-04-15 | 清华珠三角研究院 | Multi-mode fusion lung cancer patient curative effect prediction method, system, device and medium |
CN114529554A (en) * | 2021-12-28 | 2022-05-24 | 福州大学 | Intelligent auxiliary interpretation method for gastric cancer HER2 digital pathological section |
CN114359666B (en) * | 2021-12-28 | 2024-10-15 | 清华珠三角研究院 | Multi-mode fused lung cancer patient curative effect prediction method, system, device and medium |
CN115063592A (en) * | 2022-08-16 | 2022-09-16 | 之江实验室 | Multi-scale-based full-scanning pathological feature fusion extraction method and system |
CN116740041B (en) * | 2023-06-27 | 2024-04-26 | 新疆生产建设兵团医院 | CTA scanning image analysis system and method based on machine vision |
CN116740041A (en) * | 2023-06-27 | 2023-09-12 | 新疆生产建设兵团医院 | CTA scanning image analysis system and method based on machine vision |
CN117036811A (en) * | 2023-08-14 | 2023-11-10 | 桂林电子科技大学 | Intelligent pathological image classification system and method based on double-branch fusion network |
CN117392428A (en) * | 2023-09-04 | 2024-01-12 | 深圳市第二人民医院(深圳市转化医学研究院) | Skin disease image classification method based on three-branch feature fusion network |
CN117764994B (en) * | 2024-02-22 | 2024-05-10 | 浙江首鼎视介科技有限公司 | Biliary pancreas imaging system and method based on artificial intelligence |
CN117764994A (en) * | 2024-02-22 | 2024-03-26 | 浙江首鼎视介科技有限公司 | biliary pancreas imaging system and method based on artificial intelligence |
CN118115787A (en) * | 2024-02-23 | 2024-05-31 | 齐鲁工业大学(山东省科学院) | Full-slice pathological image classification method based on graph neural network |
CN118299022A (en) * | 2024-05-28 | 2024-07-05 | 吉林大学 | Informationized management system and method for surgical equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111488921A (en) | 2020-08-04 |
CN111488921B (en) | 2023-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021196632A1 (en) | Intelligent analysis system and method for panoramic digital pathological image | |
US11922626B2 (en) | Systems and methods for automatic detection and quantification of pathology using dynamic feature classification | |
WO2020253773A1 (en) | Medical image classification method, model training method, computing device and storage medium | |
US20220343623A1 (en) | Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method | |
CN109376636B (en) | Capsule network-based eye fundus retina image classification method | |
CN108305249B (en) | Rapid diagnosis and scoring method of full-scale pathological section based on deep learning | |
CN108464840B (en) | Automatic detection method and system for breast lumps | |
Pan et al. | Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review | |
WO2024060416A1 (en) | End-to-end weakly supervised semantic segmentation and labeling method for pathological image | |
WO2022167005A1 (en) | Deep neural network-based method for detecting living cell morphology, and related product | |
CN109670489B (en) | Weak supervision type early senile macular degeneration classification method based on multi-instance learning | |
WO2019184851A1 (en) | Image processing method and apparatus, and training method for neural network model | |
US11721023B1 (en) | Distinguishing a disease state from a non-disease state in an image | |
Dhawan et al. | Cervix image classification for prognosis of cervical cancer using deep neural network with transfer learning | |
Costa et al. | Eyequal: Accurate, explainable, retinal image quality assessment | |
CN112396605A (en) | Network training method and device, image recognition method and electronic equipment | |
CN113705595A (en) | Method, device and storage medium for predicting degree of abnormal cell metastasis | |
Yang et al. | The devil is in the details: a small-lesion sensitive weakly supervised learning framework for prostate cancer detection and grading | |
CN114140437A (en) | Fundus hard exudate segmentation method based on deep learning | |
CN111445456A (en) | Classification model, network model training method and device, and identification method and device | |
CN116645326A (en) | Glandular cell detection method, glandular cell detection system, electronic equipment and storage medium | |
Mathina Kani et al. | Classification of skin lesion images using modified Inception V3 model with transfer learning and augmentation techniques | |
CN114742119A (en) | Cross-supervised model training method, image segmentation method and related equipment | |
Liu et al. | A gastric cancer recognition algorithm on gastric pathological sections based on multistage attention‐DenseNet | |
CN112086174A (en) | Three-dimensional knowledge diagnosis model construction method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20928830 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20928830 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20928830 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/07/2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20928830 Country of ref document: EP Kind code of ref document: A1 |