
WO2021196632A1 - Intelligent analysis system and method for panoramic digital pathological image - Google Patents

Intelligent analysis system and method for panoramic digital pathological image

Info

Publication number
WO2021196632A1
Authority
WO
WIPO (PCT)
Prior art keywords
pathological image
features
pathological
input
feature
Application number
PCT/CN2020/129187
Other languages
French (fr)
Chinese (zh)
Inventor
刁颂辉
秦文健
侯嘉馨
田引黎
谢耀钦
熊璟
Original Assignee
中国科学院深圳先进技术研究院
Application filed by 中国科学院深圳先进技术研究院
Publication of WO2021196632A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present invention relates to the technical field of medical image processing, and more specifically, to an intelligent analysis system and method for panoramic digital pathological images.
  • at present, tumor pathological diagnosis and subsequent statistical analysis rely on the accumulated working experience and knowledge of pathologists.
  • the evaluation results are easily affected by subjectivity.
  • manual statistical analysis of H&E tumor cell nuclei is prone to errors.
  • the error in assessing the percentage of nuclei can be as high as 45%, and the analysis results vary greatly from one pathologist to another.
  • for the same tumor, inter-observer variability spans a dynamic range of 10%-95%, which can easily lead to false-negative diagnoses.
  • Pathological examination is the current gold standard for clinical cancer diagnosis.
  • the pathologist's cancer diagnosis mainly relies on visual inspection of tissue sample images captured by a microscope.
  • pathologists must draw on long-accumulated clinical experience to judge whether, for example, a nasopharyngeal carcinoma section contains cancerous tissue; this approach is not only time-consuming but also demands extensive professional expertise from the doctor.
  • CAD Computer Aided Diagnosis
  • the main methods of CAD in the diagnosis of pathological images include traditional machine learning and the more popular deep learning in recent years.
  • Traditional machine learning requires manually extracting image features and then classifying them through a classifier.
  • the analysis effect of this method mainly depends on the effect of manual feature extraction in the early stage.
  • deep learning does not require manual feature extraction; it can automatically mine deep features of pathological images and perform end-to-end optimization directly.
  • although CAD technology has achieved considerable success in the field of pathological images, practical algorithms still analyze features at a single scale and ignore features at other scales.
  • a pathologist's diagnosis takes a long time and requires a high level of professional skill; traditional machine learning methods for analyzing pathological images depend mainly on the quality of the extracted features, which demands substantial expert knowledge from researchers.
  • the existing feature extraction based on deep convolutional networks is mainly for a single feature, and less consideration is given to features under different magnifications.
  • the actual clinical diagnosis is all analyzed under different scales of the image.
  • the purpose of the present invention is to overcome the above shortcomings of the prior art and provide an intelligent analysis system and method for panoramic digital pathological images, which uses an attention-based deep multi-scale feature convolutional network to classify digital panoramic pathological images, thereby solving the problem that feature information is insufficiently represented because panoramic pathological images are large in scale and complex in shape, and finally realizing fully automatic intelligent analysis of digital pathological images.
  • a method for intelligent analysis of panoramic digital pathological images includes the following steps:
  • the fusion feature is input to a trained pathological image classifier to obtain a classification result of whether the pathological image slice contains cancer.
  • the fusion feature is obtained according to the following steps:
  • the pathological image slices to be analyzed are sequentially input into each feature layer of the attention mechanism-based convolutional neural network for feature extraction of the corresponding scale, and the features of each layer are processed to have the same resolution and channel;
  • the fusion features are passed through a fully connected layer and then input to the trained pathological image classifier to obtain various probabilities.
  • the pathological image slice to be analyzed is obtained according to the following steps:
  • the sliced pathological image block is input to the deep convolution network, and after a convolution operation, three-dimensional features are obtained.
  • the feature calculation model is expressed as:
  • N out is the output feature vector
  • N in(k) is the feature vector of the k-th channel of the previous layer
  • C in is the total number of channels of the previous layer
  • bias is the bias value.
  • the loss function for training the pathological image classifier is expressed as:
  • P (i, j) represents the classification result of the pixels of the input image
  • P class (i, j) represents the classification label of the input image pixel
  • weight (class) represents the specified weight of the category
  • C in represents the total number of channels
  • k denotes the k-th channel
  • weight(w k ,k) represents the weight of the attention mechanism obtained for each channel
  • (i,j) represents the position of each pixel
  • W and H are the length and width of the image, respectively.
  • the classification result includes cancer, benign tissue, and inflammation.
  • the pathological image slice to be analyzed is a picture to which black, white, and red noise pixels have been randomly added according to the pixel size of the picture.
  • the classification result is various types of probability information obtained by softmax.
  • an intelligent analysis system for panoramic digital pathological images includes:
  • Feature extraction unit: used to input the pathological image slice to be analyzed into an attention-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fusion feature;
  • Classification unit: used to input the fusion feature into a trained pathological image classifier to obtain a classification result indicating whether the pathological image slice contains cancerous tissue.
  • compared with the prior art, the advantage of the present invention is that the proposed attention-based deep multi-scale feature convolutional network for pathological image analysis is a fully automatic analysis method: it requires no manual feature extraction from pathological images, avoids over-reliance of the analysis results on expert knowledge of pathological image features, and considers features at different scales while assigning them weights automatically, which effectively saves analysis time and improves analysis efficiency.
  • Fig. 1 is a flowchart of a method for intelligent analysis of panoramic digital pathological images according to an embodiment of the present invention
  • Fig. 2 is a schematic diagram of a deep neural network model according to an embodiment of the present invention.
  • Fig. 3 is a flowchart of training a deep neural network according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of experimental results according to an embodiment of the present invention.
  • the intelligent analysis of panoramic digital pathological images is a multi-scale pathological image recognition method based on the attention mechanism.
  • the method uses public pathology image data sets for training.
  • the designed training network is a deep multi-scale feature convolutional network based on the attention mechanism.
  • by extracting features at different scales, combining them with the attention mechanism, and learning the corresponding scale weights through the network framework, the features are fused to obtain a richer feature expression of the pathological image, so as to achieve accurate classification of pathological images.
  • the analysis of clinical pathological image samples can be realized according to the obtained training model.
  • the method for intelligent analysis of panoramic digital images includes the following steps:
  • step S110 a training set is constructed based on the published pathological images of cancer tissues and clinical cancer data.
  • Collect and label public cancer histopathology images and clinical cancer data sets containing different subtypes (for example, cancerous tissue, benign tissue such as lymph or surrounding tissue, and inflammation), and construct a training set of cancer tissue pathological images covering these subtype categories.
  • Step S120 preprocessing the pathological image.
  • pathological images can be pre-processed, for example, noise images are added according to the original pixel size of the image to avoid the impact of slice quality (such as uneven staining) or irrelevant tissues such as blood vessels on the diagnosis.
  • Noise pictures are mainly random single colors (black (0,0,0), white (255,255,255), red (255,0,0)).
  • step S130 a multi-scale pathological image classification model based on the attention mechanism is constructed, and the constructed training set is used for training to obtain a pathological image classifier.
  • in this step, a multi-scale feature convolution pathological image classification model based on the attention mechanism is established, and the pathological images obtained in step S120 are used to perform deep learning training on the different pathological image classification models.
  • the steps of the model training process include:
  • Step S131 Establish a multi-scale feature fusion pathological image classification model based on the attention mechanism
  • ADMCNN Attention-based Deep Multiple-scale Convolutional Neural Network
  • ADMCNN includes multiple feature layers, denoted as ADMCNN 56×56×64, ADMCNN 28×28×128, ADMCNN 14×14×256 and ADMCNN 7×7×512.
  • Step S132 cut the obtained pathological image into patches with a size of 224×224.
  • Step S133 Input the cut pathological image blocks to the deep convolution network, and after the first convolution operation, a feature with a size of 112×112×64 is obtained.
  • Step S134 Input the features obtained in step S133 into the established multi-scale pathological image classification model based on the attention mechanism, and extract the features through multiple convolutional layers.
  • the parameters of the convolutional layer are used as the parameters of the attention mechanism.
  • the extracted features are input to the next convolutional layer, so that the features of each layer have the same resolution and channel for subsequent processing.
  • the corresponding feature calculation model is expressed as:
  • N out is the output feature vector
  • N in(k) is the feature vector of the k-th channel of the previous layer
  • C in is the total number of channels of the previous layer
  • bias is the bias value
  • input is the input
  • weight is the weight.
  • an attention mechanism is introduced to obtain a weight map corresponding to each scale, which is used to represent the importance of each scale and of each pixel.
  • step S135 these features are then cascaded to merge them into a feature with a size of 28×28×128.
  • step S136 the merged features are passed through a fully connected layer.
  • step S137 the features after the fully connected layer are input to the pathological image classifier, and various probabilities are obtained through softmax.
  • the loss function model combined with the attention mechanism is designed as follows:
  • P (i, j) represents the classification result of the pixels of the input image
  • P class (i, j) represents the classification label of the input image pixel
  • weight (class) represents the specified weight of the category
  • C in represents the total number of channels
  • k denotes the k-th channel
  • weight(w k ,k) represents the weight of the attention mechanism of each channel calculated after step S134
  • (i,j) represents the position of each pixel
  • W and H are the length and width of the image, respectively.
  • softmax is used to normalize, and the relative importance of each position pixel in each scale is obtained.
  • Step S140 using the trained pathological image classifier to analyze the pathological image to be detected to obtain a classification result.
  • the trained pathological image classifier can analyze the target pathological image to be analyzed and obtain the classification result. Similar to the training process, the full clinical pathology slice to be analyzed is preprocessed, and the preprocessed pathological image is input into the deep convolutional network for feature extraction. Further, input the extracted features into the pathology classification network classifier to obtain the analysis result.
  • model training, analysis and prediction process includes:
  • Step S310 collecting public nasopharyngeal carcinoma pathological images containing inflammation, lymph and cancer
  • Step S320 preprocessing the pathological image
  • Step S330 establishing a multi-scale feature convolution pathology image classification model, and using the pathology image obtained after preprocessing to perform deep learning training on the pathology image classification model to obtain a reference pathology image classifier;
  • Step S340 acquiring digital pathological slice data of clinical cancer
  • Step S350 the pathological image preprocessing similar to step S320;
  • Step S360 input the preprocessed pathological image into the deep convolutional network for feature extraction
  • Step S370 Input the extracted features into the pathological image classifier, and obtain the analysis result.
  • the present invention also provides a panoramic digital pathological image intelligent analysis system, which is used to implement one or more aspects of the above method.
  • the system includes: a feature extraction unit, which is used to input the pathological image slice to be analyzed into an attention-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fusion feature; and a classification unit, which is used to input the fusion feature into a trained pathological image classifier to obtain a classification result indicating whether the pathological image slice contains cancerous tissue.
  • the advantages of the embodiments of the present invention include: training on public data sets, fine-tuning on clinical data for further learning, and finally testing on clinical data, which overcomes the limitation of scarce annotated clinical data; using the attention-based multi-scale feature convolutional network for feature fusion to obtain richer feature representations and better analysis results; and treating lymph as a separate category to be classified, which reduces the misdiagnosis rate caused by misclassifying lymph as cancer.
  • to further verify the effect of the present invention, verification was performed on nasopharyngeal carcinoma images; a data set built by collecting pathological images from 20× digital pathological slides was used as the test data set.
  • taking the ROC curve for the classification question "does the image contain a cancerous region" as the metric, and the average time for analyzing one digital pathology whole-slide image (about 50,000×50,000 pixels at 20×) as the computational complexity index, the results are shown in Figure 4, where the ordinate is the true positive rate and the abscissa is the false positive rate; the AUCs for cancer (cancerous tissue), lymph (lymph tissue), and inflame (benign tissue) are 0.862268, 0.868492, and 0.855835, respectively, showing that the present invention can obtain accurate classification results for digital pathological images.
  • the present invention realizes fully automatic cancer analysis by constructing a pathological image classifier based on a deep multi-scale feature convolutional network based on the attention mechanism.
  • the invention combines the actual clinical analysis steps from a technical point of view, considers features at different scales and assigns them weights, effectively saving the cost of manually analyzing and classifying data, avoiding excessive dependence on the doctor's technical level, and greatly reducing the manpower and material costs of doctors, so that more cancer patients can be diagnosed and treated in a timely manner.
  • the present invention may be a system, a method and/or a computer program product.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present invention.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanical encoding devices such as punched cards or raised structures in grooves with instructions stored thereon, as well as any suitable combination of the foregoing.
  • RAM random access memory
  • ROM read-only memory
  • EPROM erasable programmable read-only memory
  • flash memory flash memory
  • SRAM static random access memory
  • CD-ROM compact disk read-only memory
  • DVD digital versatile disk
  • memory stick; floppy disk
  • mechanical encoding device, such as a punched card with instructions stored thereon
  • the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
  • the computer program instructions used to perform the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++ and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • LAN local area network
  • WAN wide area network
  • in some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by using the state information of the computer-readable program instructions.
  • the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present invention.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, so that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for realizing the specified logical functions.
  • the functions marked in the blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation through hardware, through software, and through a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

An intelligent analysis method and system for panoramic digital pathological images. The method comprises: inputting a pathological image slice to be analyzed into an attention-based convolutional neural network model, extracting features of the pathological image at different scales, and fusing the extracted features to obtain fusion features; and inputting the fusion features into a trained pathological image classifier to obtain a classification result indicating whether the pathological image slice contains cancerous tissue. The method realizes intelligent analysis of panoramic digital pathological images; by considering features at different scales and assigning them weights, it effectively saves the cost of manually analyzing and classifying data, avoids excessive dependence on the technical level of doctors, greatly reduces doctors' manpower and material costs, and allows more cancer patients to be diagnosed and treated in time.

Description

Intelligent analysis system and method for panoramic digital pathological images

[Technical Field]

The present invention relates to the technical field of medical image processing, and more specifically, to an intelligent analysis system and method for panoramic digital pathological images.

[Background]

At present, tumor pathological diagnosis and subsequent statistical analysis rely on the accumulated working experience and knowledge of pathologists, and the evaluation results are easily affected by subjectivity. Cancer has many subtype classifications, some subtypes share similar characteristics, and manual analysis of large volumes of pathological data is not only time-consuming but also prone to fatigue, which easily affects the conclusions. According to the latest international clinical research, manual statistical analysis of H&E tumor cell nuclei is prone to errors: the error in assessing the percentage of nuclei can be as high as 45%, and the results vary greatly from one pathologist to another. For the same tumor, inter-observer variability spans a dynamic range of 10%-95%, which can easily lead to false-negative diagnoses. Inaccurate analysis results directly affect the patient's treatment plan and pose a great risk to the patient's life. Pathological examination is the current gold standard for clinical cancer diagnosis. A pathologist's cancer diagnosis relies mainly on visual inspection of tissue sample images captured under a microscope. Pathologists therefore need to draw on long-accumulated clinical experience to judge whether, for example, a nasopharyngeal carcinoma section contains cancerous tissue; this approach is not only time-consuming but also demands extensive professional expertise.

In recent years, with the rapid development of artificial intelligence technology, Computer Aided Diagnosis (CAD) has achieved great success in the medical field. The main CAD approaches for diagnosing pathological images are traditional machine learning and, more recently, deep learning. Traditional machine learning requires manually extracting image features and then classifying them with a classifier, so its performance depends mainly on the quality of the manually extracted features. By contrast, deep learning requires no manual feature extraction; it can automatically mine deep features of pathological images and perform end-to-end optimization directly. Although CAD technology has achieved considerable success in the field of pathological images, practical algorithms still analyze features at a single scale and ignore features at other scales.

In short, the main problems of current pathological image analysis are: a pathologist's diagnosis takes a long time and demands a high level of professional skill; traditional machine learning methods for analyzing pathological images depend mainly on the quality of the extracted features, which requires substantial expert knowledge from researchers; and existing feature extraction based on deep convolutional networks is mainly aimed at a single scale, with little consideration of features under different magnifications, whereas actual clinical diagnosis is performed at different scales of the image.
[Summary of the Invention]

The purpose of the present invention is to overcome the above shortcomings of the prior art and to provide an intelligent analysis system and method for panoramic digital pathological images, which uses an attention-based deep multi-scale feature convolutional network to classify digital panoramic pathological images, thereby solving the problem that feature information is insufficiently represented because panoramic pathological images are large in scale and complex in shape, and finally realizing fully automatic intelligent analysis of digital pathological images.

According to the first aspect of the present invention, a method for intelligent analysis of panoramic digital pathological images is provided. The method includes the following steps:

inputting the pathological image slices to be analyzed into an attention-based convolutional neural network model, extracting features of the pathological image at different scales, and fusing the extracted features to obtain a fusion feature;

inputting the fusion feature into a trained pathological image classifier to obtain a classification result indicating whether the pathological image slice contains cancerous tissue.

In one embodiment, the fusion feature is obtained according to the following steps:

the pathological image slices to be analyzed are input in turn into each feature layer of the attention-based convolutional neural network for feature extraction at the corresponding scale, and the features of each layer are processed so that they have the same resolution and number of channels;

the extracted features are cascaded and fused to obtain the fusion feature;

the fusion feature is passed through a fully connected layer and then input into the trained pathological image classifier, and the probabilities of the various classes are obtained.
In one embodiment, the pathological image slices to be analyzed are obtained according to the following steps:

the pathological image is cut into two-dimensional slices of preset dimensions;

the cut pathological image blocks are input into the deep convolutional network, and three-dimensional features are obtained after one convolution operation.

In one embodiment, in the process of extracting features at different scales, the feature calculation model is expressed as:

[equation image PCTCN2020129187-appb-000001]

where "*" is the two-dimensional convolution operation, N out is the output feature vector, N in(k) is the feature vector of the k-th channel of the previous layer, C in is the total number of channels of the previous layer, w k is the weight of the k-th channel, and bias is the bias value.
In one embodiment, the loss function for training the pathological image classifier is expressed as:

[equation image PCTCN2020129187-appb-000002]

where P (i,j) denotes the classification result of a pixel of the input image, P class(i,j) denotes the classification label of that pixel, weight(class) denotes the weight assigned to the class, C in denotes the total number of channels, k denotes the k-th channel, weight(w k ,k) denotes the attention weight obtained for each channel, (i,j) denotes the position of each pixel, and W and H are the length and width of the image, respectively.

In one embodiment, the classification result includes cancerous tissue, benign tissue, and inflammation.

In one embodiment, the pathological image slice to be analyzed is a picture to which black, white, and red noise pixels have been randomly added according to the pixel size of the picture.

In one embodiment, the classification result is the probability information for each class obtained by softmax.
According to the second aspect of the present invention, an intelligent analysis system for panoramic digital pathological images is provided. The system includes:

a feature extraction unit, used to input the pathological image slices to be analyzed into an attention-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fusion feature;

a classification unit, used to input the fusion feature into a trained pathological image classifier to obtain a classification result indicating whether the pathological image slice contains cancerous tissue.

Compared with the prior art, the advantage of the present invention is that the proposed attention-based deep multi-scale feature convolutional network for pathological image analysis is a fully automatic analysis method: it requires no manual feature extraction from pathological images, avoids over-reliance of the analysis results on expert knowledge of pathological image features, and considers features at different scales while assigning them weights automatically, which effectively saves analysis time and improves analysis efficiency.

Other features and advantages of the present invention will become clear from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
[Description of the Drawings]

The drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.

Fig. 1 is a flowchart of a method for intelligent analysis of panoramic digital pathological images according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of a deep neural network model according to an embodiment of the present invention;

Fig. 3 is a flowchart of training a deep neural network according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of experimental results according to an embodiment of the present invention.
[Detailed Description]

Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention.

The following description of at least one exemplary embodiment is merely illustrative and in no way serves as any limitation on the present invention or its application or use.

Technologies, methods, and equipment known to those of ordinary skill in the relevant fields may not be discussed in detail, but where appropriate, such technologies, methods, and equipment should be regarded as part of the specification.

In all examples shown and discussed herein, any specific value should be interpreted as merely exemplary rather than limiting. Other examples of the exemplary embodiments may therefore have different values.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
The intelligent analysis of panoramic digital pathological images provided by the present invention is a multi-scale pathological image recognition method based on the attention mechanism. In short, the method is trained on public pathological image data sets; the training network is an attention-based deep multi-scale feature convolutional network that extracts features at different scales, learns the corresponding scale weights through the network framework in combination with the attention mechanism, and fuses them to obtain a richer feature expression of the pathological image, thereby achieving accurate classification of pathological images. Further, clinical pathological image samples can be analyzed with the resulting trained model.

Specifically, referring to Fig. 1, the method for intelligent analysis of panoramic digital pathological images provided by an embodiment of the present invention includes the following steps:
Step S110: construct a training set based on published pathological images of cancer tissue and clinical cancer data.

Collect and label public cancer histopathology images and clinical cancer data sets containing different subtypes (for example, cancerous tissue, benign tissue such as lymph or surrounding tissue, and inflammation), and construct a training set of cancer tissue pathological images covering these subtype categories.
Step S120: preprocess the pathological images.

To improve the accuracy of subsequent training, the pathological images can be preprocessed; for example, noise pictures are added according to the original pixel size of the image, to avoid the impact on diagnosis of slice quality (such as uneven staining) or irrelevant tissue such as blood vessels. The noise pictures are mainly random single colors (black (0,0,0), white (255,255,255), red (255,0,0)).
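As a concrete illustration of this preprocessing step, the following Python sketch generates single-color noise pictures of a given pixel size and randomly sprinkles black/white/red pixels into a patch. It is only one plausible reading of the description above: the patent does not specify exactly how the noise pictures are combined with the data, and the function names and the noise_fraction parameter are illustrative assumptions.

    # Hedged preprocessing sketch (assumed interpretation of step S120).
    import numpy as np

    NOISE_COLORS = [(0, 0, 0), (255, 255, 255), (255, 0, 0)]  # black, white, red

    def make_noise_picture(height: int, width: int, rng: np.random.Generator) -> np.ndarray:
        """Return an H x W x 3 image filled with one randomly chosen noise color."""
        color = NOISE_COLORS[rng.integers(len(NOISE_COLORS))]
        return np.full((height, width, 3), color, dtype=np.uint8)

    def add_noise_pixels(patch: np.ndarray, noise_fraction: float, rng: np.random.Generator) -> np.ndarray:
        """Randomly replace a fraction of the patch's pixels with noise colors."""
        noisy = patch.copy()
        h, w, _ = noisy.shape
        n = int(noise_fraction * h * w)
        ys = rng.integers(0, h, size=n)
        xs = rng.integers(0, w, size=n)
        colors = np.array(NOISE_COLORS, dtype=np.uint8)
        noisy[ys, xs] = colors[rng.integers(0, len(colors), size=n)]
        return noisy

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)  # stand-in for a real patch
    noise_img = make_noise_picture(224, 224, rng)
    augmented = add_noise_pixels(patch, noise_fraction=0.05, rng=rng)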
Step S130: construct an attention-based multi-scale pathological image classification model and train it with the constructed training set to obtain a pathological image classifier.

In this step, an attention-based multi-scale feature convolution pathological image classification model is established, and the pathological images obtained in step S120 are used to perform deep learning training on the different pathological image classification models.

In one embodiment, as shown in Fig. 2 and Fig. 3, the model training process includes the following steps:
Step S131: establish an attention-based multi-scale feature fusion pathological image classification model.

For example, as shown in Fig. 2, the established Attention-based Deep Multiple-scale Convolutional Neural Network (ADMCNN) includes multiple feature layers, denoted ADMCNN 56×56×64, ADMCNN 28×28×128, ADMCNN 14×14×256, and ADMCNN 7×7×512.
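The sketch below is a rough PyTorch rendering of the ADMCNN skeleton as it can be inferred from the sizes listed above and from the fusion, fully connected, and softmax steps described in the following paragraphs. The backbone blocks, the way each scale is projected and resized to a common 28×28 map, and the way the per-channel attention weights are produced are assumptions filled in for illustration; only the named feature-map sizes, the 28×28×128 fused feature, the fully connected layer, and the softmax output come from the text.

    # Hedged PyTorch sketch of one plausible ADMCNN layout (not the patent's definitive design).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ScaleAttention(nn.Module):
        """Project one scale to a common 28x28x32 map and weight its channels."""
        def __init__(self, in_ch: int, out_ch: int = 32, size: int = 28):
            super().__init__()
            self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.attn = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # conv parameters reused as attention parameters
            self.size = size

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feat = F.interpolate(self.proj(x), size=(self.size, self.size), mode="bilinear", align_corners=False)
            logits = F.adaptive_avg_pool2d(self.attn(x), 1)       # one logit per output channel
            weights = torch.softmax(logits, dim=1)                # softmax-normalized channel weights
            return feat * weights

    def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                             nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    class ADMCNN(nn.Module):
        def __init__(self, num_classes: int = 3):
            super().__init__()
            self.stem = conv_block(3, 64)        # 224x224x3 -> 112x112x64
            self.stage1 = conv_block(64, 64)     # -> 56x56x64
            self.stage2 = conv_block(64, 128)    # -> 28x28x128
            self.stage3 = conv_block(128, 256)   # -> 14x14x256
            self.stage4 = conv_block(256, 512)   # -> 7x7x512
            self.attn = nn.ModuleList([ScaleAttention(c) for c in (64, 128, 256, 512)])
            self.fuse = nn.Conv2d(4 * 32, 128, kernel_size=1)     # cascade -> 28x28x128 fused feature
            self.fc = nn.Linear(128, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.stem(x)
            feats = []
            for stage, attn in zip((self.stage1, self.stage2, self.stage3, self.stage4), self.attn):
                h = stage(h)
                feats.append(attn(h))
            fused = self.fuse(torch.cat(feats, dim=1))
            pooled = F.adaptive_avg_pool2d(fused, 1).flatten(1)
            return self.fc(pooled)               # logits; softmax is applied by the classifier/loss

    probs = torch.softmax(ADMCNN()(torch.randn(1, 3, 224, 224)), dim=1)  # per-class probabilities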
Step S132: cut the obtained pathological image into patches with a size of 224×224.
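A minimal tiling sketch for this step, assuming the slide region has already been loaded into an RGB array and that patches are taken without overlap (the patent only fixes the 224×224 patch size; the stride and handling of border remainders are assumptions):

    import numpy as np

    def tile_patches(slide: np.ndarray, patch: int = 224):
        """Cut an H x W x 3 slide array into non-overlapping patch x patch tiles."""
        h, w, _ = slide.shape
        patches, coords = [], []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                patches.append(slide[y:y + patch, x:x + patch])
                coords.append((y, x))
        return np.stack(patches), coords

    slide = np.zeros((1120, 1120, 3), dtype=np.uint8)   # stand-in for a whole-slide region
    patches, coords = tile_patches(slide)               # 25 patches of 224x224x3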
Step S133: input the cut pathological image blocks into the deep convolutional network; after the first convolution operation, a feature with a size of 112×112×64 is obtained.

Step S134: input the features obtained in step S133 into the established attention-based multi-scale pathological image classification model, and extract features through multiple convolutional layers.

In the process of extracting features, the parameters of each convolutional layer are used as the parameters of the attention mechanism. The extracted features are input into the next convolutional layer, so that the features of each layer have the same resolution and number of channels for subsequent processing. The corresponding feature calculation model is expressed as:
[equation image PCTCN2020129187-appb-000003]

where "*" is the two-dimensional convolution operation, N out is the output feature vector, N in(k) is the feature vector of the k-th channel of the previous layer, C in is the total number of channels of the previous layer, w k is the weight of the k-th channel, bias is the bias value, input denotes the input, and weight denotes the weight.
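The feature-calculation formula itself is only available as an image (PCTCN2020129187-appb-000003). Based on the variable definitions above, one plausible reconstruction (an assumption, not a verbatim copy of the patent's equation) is the per-channel weighted convolution sum:

    N_{\mathrm{out}} \;=\; \sum_{k=1}^{C_{\mathrm{in}}} \bigl( N_{\mathrm{in}}(k) * w_{k} \bigr) + \mathrm{bias}

where * denotes the two-dimensional convolution over the k-th input channel.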
In this embodiment of the present invention, to reflect the clinical practice of considering multiple features of an image, each given a different weight, an attention mechanism is introduced to obtain a weight map corresponding to each scale, which represents the importance of each scale and of each pixel.
Step S135: these features are then cascaded and fused into a single feature with a size of 28×28×128.

Step S136: the fused feature is passed through a fully connected layer.

Step S137: the features output by the fully connected layer are input into the pathological image classifier, and the probabilities of the various classes are obtained through softmax.
Based on the above steps, the loss function model combined with the attention mechanism is designed as follows:

[equation image PCTCN2020129187-appb-000004]

where P (i,j) denotes the classification result of a pixel of the input image, P class(i,j) denotes the classification label of that pixel, weight(class) denotes the weight assigned to the class, C in denotes the total number of channels, k denotes the k-th channel, weight(w k ,k) denotes the attention weight of each channel calculated after step S134, (i,j) denotes the position of each pixel, and W and H are the length and width of the image, respectively.
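The loss function is likewise only reproduced as an image (PCTCN2020129187-appb-000004). Reading the variable definitions above as a class-weighted, channel-attention-weighted cross-entropy averaged over all W×H pixel positions, a plausible reconstruction (again an assumption rather than the patent's exact formula) is:

    \mathcal{L} \;=\; -\frac{1}{W H} \sum_{i=1}^{W} \sum_{j=1}^{H} \sum_{k=1}^{C_{\mathrm{in}}} \mathrm{weight}(\mathrm{class}) \cdot \mathrm{weight}(w_{k}, k) \cdot P_{\mathrm{class}}(i,j) \cdot \log P(i,j)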
In this step, softmax is used for normalization, yielding the relative importance of the pixel at each position in each scale.

The above process is repeated to train the model until convergence, and the trained pathological image classifier is obtained.
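A compact training-loop sketch for this stage is given below. The optimizer, learning rate, and stopping rule are not specified in the patent; torch's class-weighted CrossEntropyLoss is used here as a simple stand-in for the attention-weighted loss sketched above, and train_loader is assumed to yield (patch, label) batches with labels 0 = cancer, 1 = lymph, 2 = inflammation/benign.

    import torch
    import torch.nn as nn

    def train(model, train_loader, class_weights=(1.0, 1.0, 1.0), epochs=20, lr=1e-4, device="cuda"):
        model = model.to(device)
        criterion = nn.CrossEntropyLoss(weight=torch.tensor(class_weights, device=device))
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for epoch in range(epochs):
            model.train()
            running = 0.0
            for patches, labels in train_loader:
                patches, labels = patches.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(patches), labels)   # logits from ADMCNN; softmax is inside the loss
                loss.backward()
                optimizer.step()
                running += loss.item()
            print(f"epoch {epoch}: mean loss {running / max(len(train_loader), 1):.4f}")
        return model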
Step S140: use the trained pathological image classifier to analyze the pathological image to be detected and obtain the classification result.

The trained pathological image classifier can analyze the target pathological image to be analyzed and obtain the classification result. As in the training process, the clinical pathological whole slide to be analyzed is preprocessed, and the preprocessed pathological image is input into the deep convolutional network for feature extraction. The extracted features are then input into the pathological classification network classifier to obtain the analysis result.
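One possible shape of this analysis stage in Python, reusing the tiling helper sketched earlier: each 224×224 patch is classified and the per-class softmax probabilities are collected. Averaging them into a slide-level score is an illustrative choice, since the patent does not state how patch-level results are aggregated.

    import torch
    import numpy as np

    @torch.no_grad()
    def analyze_slide(model, slide_rgb: np.ndarray, device="cuda", batch_size=64):
        model.eval().to(device)
        patches, coords = tile_patches(slide_rgb)                 # from the tiling sketch above
        tensor = torch.from_numpy(patches).permute(0, 3, 1, 2).float() / 255.0
        probs = []
        for start in range(0, len(tensor), batch_size):
            batch = tensor[start:start + batch_size].to(device)
            probs.append(torch.softmax(model(batch), dim=1).cpu())
        probs = torch.cat(probs).numpy()                          # (n_patches, 3) class probabilities
        slide_score = probs.mean(axis=0)                          # illustrative slide-level aggregation
        return probs, coords, slide_score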
For further understanding, still with reference to Fig. 3 and taking nasopharyngeal carcinoma pathological images as an example, the model training, analysis, and prediction process includes:

Step S310: collect public nasopharyngeal carcinoma pathological images containing inflammation, lymph, and cancer;

Step S320: preprocess the pathological images;

Step S330: establish a multi-scale feature convolution pathological image classification model, and use the preprocessed pathological images to perform deep learning training on the classification model to obtain a baseline pathological image classifier;

Step S340: acquire digital pathological slide data of clinical cancer;

Step S350: perform pathological image preprocessing similar to step S320;

Step S360: input the preprocessed pathological images into the deep convolutional network for feature extraction;

Step S370: input the extracted features into the pathological image classifier and obtain the analysis result.
It should be noted that the above embodiments are only illustrative; those skilled in the art may make appropriate modifications or changes without departing from the spirit of the present invention, for example: replacing the softmax function with another multi-class function from the prior art; adding more convolutional layers to the attention-based multi-scale pathological image classification model to extract features at more scales; using other or modified loss functions to measure the accuracy of model training; or using transformed samples obtained by cutting, compressing, or stretching the original training samples as the training set.
Correspondingly, the present invention also provides an intelligent analysis system for panoramic digital pathological images, which is used to implement one or more aspects of the above method. For example, the system includes: a feature extraction unit, used to input the pathological image slices to be analyzed into an attention-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fusion feature; and a classification unit, used to input the fusion feature into a trained pathological image classifier to obtain a classification result indicating whether the pathological image slice contains cancerous tissue.

In summary, compared with the prior art, the advantages of the embodiments of the present invention include: training on public data sets, fine-tuning on clinical data for further learning, and finally testing on clinical data, which overcomes the limitation of scarce annotated clinical data; using the attention-based multi-scale feature convolutional network for feature fusion to obtain richer feature representations and better analysis results; and treating lymph as a separate category to be classified, which reduces the misdiagnosis rate caused by misclassifying lymph as cancer.
To further verify the effect of the present invention, verification was performed on nasopharyngeal carcinoma images, using a data set built from pathological images collected from 20× digital pathological slides as the test set. Taking the ROC curve (receiver operating characteristic curve) for the classification question "does the image contain a cancerous region" as the metric, and the average time to analyze one digital pathology whole-slide image (about 50,000×50,000 pixels at 20×) as the computational complexity index, the results are shown in Fig. 4, where the ordinate is the true positive rate and the abscissa is the false positive rate. The AUCs for cancer (cancerous tissue), lymph (lymph tissue), and inflame (benign tissue) are 0.862268, 0.868492, and 0.855835, respectively, showing that the present invention can obtain accurate classification results for digital pathological images.
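The per-class AUCs reported here can be computed, for example, with a one-vs-rest ROC analysis; the scikit-learn sketch below assumes arrays of true labels and predicted class probabilities are available and is not part of the patent itself.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    def per_class_auc(y_true: np.ndarray, y_prob: np.ndarray, class_names=("cancer", "lymph", "inflame")):
        """One-vs-rest ROC curve and AUC per class; y_prob has one column per class."""
        results = {}
        for idx, name in enumerate(class_names):
            y_bin = (y_true == idx).astype(int)          # one-vs-rest binarization
            fpr, tpr, _ = roc_curve(y_bin, y_prob[:, idx])
            results[name] = {"fpr": fpr, "tpr": tpr, "auc": roc_auc_score(y_bin, y_prob[:, idx])}
        return results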
In summary, the present invention realizes fully automatic cancer analysis by constructing a pathological image classifier based on an attention-based deep multi-scale feature convolutional network. The invention incorporates the actual clinical analysis steps from a technical point of view, considers features at different scales and assigns them weights, effectively saves the cost of manually analyzing and classifying data, avoids excessive dependence on the doctor's technical level, and greatly reduces doctors' manpower and material costs, so that more cancer patients can be diagnosed and treated in a timely manner.
本发明可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本发明的各个方面的计算机可读程序指令。The present invention may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present invention.
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、 便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。The computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (non-exhaustive list) of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) Or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanical encoding device, such as a printer with instructions stored thereon The protruding structure in the hole card or the groove, and any suitable combination of the above. The computer-readable storage medium used here is not interpreted as the instantaneous signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber optic cables), or through wires Transmission of electrical signals.
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
用于执行本发明操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码，所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等，以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机，或者，可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中，通过利用计算机可读程序指令的状态信息来个性化定制电子电路，例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA)，该电子电路可以执行计算机可读程序指令，从而实现本发明的各个方面。The computer program instructions used to carry out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions so as to implement various aspects of the present invention.
这里参照根据本发明实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本发明的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Here, various aspects of the present invention are described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器，从而生产出一种机器，使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时，产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中，这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作，从而，存储有指令的计算机可读介质则包括一个制造品，其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, so that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上，使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤，以产生计算机实现的过程，从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
附图中的流程图和框图显示了根据本发明的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分，所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个连续的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。也要注意的是，框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合，可以用执行规定的功能或动作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。对于本领域技术人员来说公知的是，通过硬件方式实现、通过软件方式实现以及通过软件和硬件结合的方式实现都是等价的。The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of instructions, which contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
以上已经描述了本发明的各实施例，上述说明是示例性的，并非穷尽性的，并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下，对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择，旨在最好地解释各实施例的原理、实际应用或对市场中的技术改进，或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。本发明的范围由所附权利要求来限定。The embodiments of the present invention have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical applications, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

  1. 一种全景数字病理图像智能分析方法,包括以下步骤:An intelligent analysis method for panoramic digital pathological images, including the following steps:
    将待分析的病理图像切片输入到基于注意力机制的卷积神经网络模型，提取病理图像不同尺度的特征，并将所提取的特征进行融合，获得融合特征；Input the pathological image slices to be analyzed into the attention-mechanism-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fused feature;
    将所述融合特征输入到经训练的病理图像分类器,获得病理图像切片是否包含癌变的分类结果。The fusion feature is input to a trained pathological image classifier to obtain a classification result of whether the pathological image slice contains cancer.
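A minimal PyTorch-style sketch of the flow in claim 1 is given below, assuming a backbone that returns one feature map per scale, learnable attention weights over the scales, and a three-class output; the backbone interface, feature dimensions, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionClassifier(nn.Module):
    """Sketch of claim 1: multi-scale feature extraction, attention-weighted fusion, classification."""

    def __init__(self, backbone, feature_dims=(64, 128, 256), num_classes=3):
        super().__init__()
        self.backbone = backbone                                    # assumed to return one feature map per scale
        self.scale_attention = nn.Parameter(torch.ones(len(feature_dims)))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(sum(feature_dims), num_classes)

    def forward(self, x):
        feats = self.backbone(x)                                    # e.g. [B×64×h1×w1, B×128×h2×w2, B×256×h3×w3]
        weights = torch.softmax(self.scale_attention, dim=0)        # attention weights over the scales
        pooled = [w * self.pool(f).flatten(1) for w, f in zip(weights, feats)]
        fused = torch.cat(pooled, dim=1)                            # fused multi-scale feature
        return torch.softmax(self.fc(fused), dim=1)                 # per-class probabilities for the patch
```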
  2. 根据权利要求1所述的全景数字病理图像智能分析方法,其中,根据以下步骤获得所述融合特征:The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the fusion feature is obtained according to the following steps:
    将待分析的病理图像切片依次输入所述基于注意力机制的卷积神经网络的每个特征层进行对应尺度的特征提取，并经处理使得各层特征具有相同的分辨率和通道；The pathological image slices to be analyzed are sequentially input into each feature layer of the attention-mechanism-based convolutional neural network for feature extraction at the corresponding scale, and the features of each layer are processed so that they have the same resolution and number of channels;
    将所提取的特征进行级联和融合,获得融合特征;Cascade and merge the extracted features to obtain fusion features;
    将融合特征通过一层全连接层后输入经训练的病理图像分类器，得出各类概率。The fused features are passed through one fully connected layer and then input to the trained pathological image classifier to obtain the probability of each class.
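A minimal sketch of the alignment-and-fusion step in claim 2 follows; the channel counts, target resolution, and use of 1×1 convolutions with bilinear resampling are assumptions made for the example rather than details taken from the claim.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignAndFuse(nn.Module):
    """Project each layer's features to a common channel count, resample them to a common
    resolution, and concatenate (cascade) them into one fusion feature (sketch of claim 2)."""

    def __init__(self, in_channels=(64, 128, 256), out_channels=64, out_size=(32, 32)):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])
        self.out_size = out_size

    def forward(self, features):
        aligned = [
            F.interpolate(p(f), size=self.out_size, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, features)
        ]
        return torch.cat(aligned, dim=1)   # then flatten, apply one fully connected layer, and softmax
```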
  3. 根据权利要求1所述的全景数字病理图像智能分析方法,其中,根据以下步骤获得待分析的病理图像切片:The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the pathological image slices to be analyzed are obtained according to the following steps:
    将病理图像切成预设维度的二维切片;Cut the pathological image into two-dimensional slices with preset dimensions;
    将切好的病理图像块输入至深度卷积网络,经过一次卷积操作,获得三维特征的病理图像切片。The sliced pathological image block is input to the deep convolution network, and after a convolution operation, a pathological image slice with three-dimensional characteristics is obtained.
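The slicing step of claim 3 can be pictured with the short sketch below; the tile size, non-overlapping stride, and RGB layout are assumed values chosen for illustration.

```python
import numpy as np

def tile_slide(slide_array, tile=512):
    """Cut a whole-slide RGB array of shape (H, W, 3) into non-overlapping tile×tile patches
    (sketch of claim 3; the tile size is an assumed value)."""
    h, w, _ = slide_array.shape
    patches = []
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            patches.append(slide_array[top:top + tile, left:left + tile])
    # Each patch would then pass through one convolution layer of the deep network,
    # which maps it to a three-dimensional feature volume.
    return np.stack(patches)
```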
  4. 根据权利要求1所述的全景数字病理图像智能分析方法,其中,在所述提取不同尺度的特征的过程中,特征计算模型表示为:The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein, in the process of extracting features of different scales, the feature calculation model is expressed as:
    N_out = Σ_{k=1}^{C_in} ( w_k * N_in(k) ) + bias
    其中，“*”为二维卷积运算，N_out为输出的特征向量，N_in(k)为前一层的第k个通道的特征向量，C_in为前一层的通道总数，w_k为第k个通道的权重，bias为偏置值。where "*" denotes the two-dimensional convolution operation, N_out is the output feature vector, N_in(k) is the feature vector of the k-th channel of the previous layer, C_in is the total number of channels of the previous layer, w_k is the weight of the k-th channel, and bias is the bias value.
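Written out directly, the feature calculation of claim 4 convolves each input channel with its own kernel, sums over the channels, and adds a bias. The sketch below is one literal reading of that formula; the tensor shapes and the use of F.conv2d are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def weighted_channel_conv(n_in, w, bias=0.0):
    """N_out = sum over k of ( w_k * N_in(k) ) + bias, where '*' is a 2-D convolution (sketch of claim 4).

    n_in: float tensor (C_in, H, W), feature maps of the previous layer
    w:    float tensor (C_in, kh, kw), one 2-D kernel per input channel
    """
    c_in = n_in.shape[0]
    out = None
    for k in range(c_in):
        # Convolve the k-th channel with its kernel w_k and accumulate the result.
        conv_k = F.conv2d(n_in[k][None, None], w[k][None, None], padding="same")
        out = conv_k if out is None else out + conv_k
    return out.squeeze() + bias
```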
  5. 根据权利要求1所述的全景数字病理图像智能分析方法,其中训练病理图像分类器的损失函数表示为:The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the loss function of training the pathological image classifier is expressed as:
    loss = −Σ_{i=1}^{W} Σ_{j=1}^{H} Σ_{k=1}^{C_in} weight(class)·weight(w_k,k)·P_class(i,j)·log( P(i,j) )
    其中，P(i,j)表示输入图像的像素的分类结果，P_class(i,j)表示输入图像像素的分类标签，weight(class)表示该类别指定权重，C_in表示通道总数，k表示第k个通道，weight(w_k,k)表示得到的各通道的注意力机制权值，(i,j)代表每一个像素的位置，W和H分别为图像的长和宽。where P(i,j) denotes the classification result of a pixel of the input image, P_class(i,j) denotes the classification label of that pixel, weight(class) denotes the weight assigned to the class, C_in denotes the total number of channels, k denotes the k-th channel, weight(w_k,k) denotes the obtained attention weight of each channel, (i,j) denotes the position of each pixel, and W and H are the length and width of the image, respectively.
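A minimal sketch of one plausible reading of claim 5 is given below: a class-weighted, attention-weighted cross-entropy summed over all pixel positions. It is an interpretation built only from the symbols defined above, not the exact claimed expression.

```python
import torch

def weighted_ce_loss(p, p_class, class_weight, attn_weight, eps=1e-8):
    """Class- and attention-weighted cross-entropy over all pixels (an interpretation of claim 5).

    p:            (W, H) predicted probability of the labelled class at each pixel position (i, j)
    p_class:      (W, H) ground-truth indicator for that class (1 where the class is present)
    class_weight: scalar weight assigned to the class
    attn_weight:  (C_in,) attention weights of the channels
    """
    per_pixel = -(p_class * torch.log(p + eps))       # cross-entropy term at each (i, j)
    return class_weight * attn_weight.sum() * per_pixel.sum()
```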
  6. 根据权利要求1所述的全景数字病理图像智能分析方法,其中,所述分类结果包括癌变、良性组织和炎症。The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the classification result includes cancer, benign tissue, and inflammation.
  7. 根据权利要求1所述的全景数字病理图像智能分析方法,其中,所述待分析的病理图像切片是根据图片像素尺寸随机添加有黑、白、红像素的噪声图片。The intelligent analysis method for panoramic digital pathological images according to claim 1, wherein the pathological image slice to be analyzed is a noise picture with black, white, and red pixels randomly added according to the pixel size of the picture.
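One way the noise-augmented patches of claim 7 could be generated is sketched below; the fraction of corrupted pixels and the uint8 RGB layout are assumptions made for the example.

```python
import numpy as np

def add_pixel_noise(image, fraction=0.01, rng=None):
    """Randomly overwrite a fraction of pixels with black, white, or red values,
    scaled to the pixel size of the picture (sketch of claim 7; the fraction is assumed)."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = image.copy()
    h, w, _ = noisy.shape
    n = int(fraction * h * w)                                  # number of noise pixels follows the image size
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    colors = np.array([[0, 0, 0], [255, 255, 255], [255, 0, 0]], dtype=noisy.dtype)  # black, white, red
    noisy[ys, xs] = colors[rng.integers(0, 3, n)]
    return noisy
```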
  8. 根据权利要求2所述的全景数字病理图像智能分析方法，其中，所述分类结果是利用softmax得出的各类的概率信息。The intelligent analysis method for panoramic digital pathological images according to claim 2, wherein the classification result is the probability information of each class obtained by softmax.
  9. 一种全景数字病理图像智能分析系统,包括:A panoramic digital pathological image intelligent analysis system, including:
    特征提取单元：用于将待分析的病理图像切片输入到基于注意力机制的卷积神经网络模型，提取病理图像不同尺度的特征，并将所提取的特征进行融合，获得融合特征；Feature extraction unit: configured to input the pathological image slices to be analyzed into the attention-mechanism-based convolutional neural network model, extract features of the pathological image at different scales, and fuse the extracted features to obtain a fused feature;
    分类单元:用于将所述融合特征输入到经训练的病理图像分类器,获得病理图像切片是否包含癌变的分类结果。Classification unit: used to input the fusion feature into the trained pathological image classifier to obtain the classification result of whether the pathological image slice contains cancer.
  10. 一种计算机可读存储介质,其上存储有计算机程序,其中,该程序被处理器执行时实现权利要求1至8任一项所述方法的步骤。A computer-readable storage medium having a computer program stored thereon, wherein the program is executed by a processor to realize the steps of the method according to any one of claims 1 to 8.
PCT/CN2020/129187 2020-03-30 2020-11-16 Intelligent analysis system and method for panoramic digital pathological image WO2021196632A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010237666.4A CN111488921B (en) 2020-03-30 2020-03-30 Intelligent analysis system and method for panoramic digital pathological image
CN202010237666.4 2020-03-30

Publications (1)

Publication Number Publication Date
WO2021196632A1 true WO2021196632A1 (en) 2021-10-07

Family

ID=71794502

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129187 WO2021196632A1 (en) 2020-03-30 2020-11-16 Intelligent analysis system and method for panoramic digital pathological image

Country Status (2)

Country Link
CN (1) CN111488921B (en)
WO (1) WO2021196632A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114118258A (en) * 2021-11-19 2022-03-01 武汉大学 Pathological section feature fusion method based on background guidance attention mechanism
CN114202510A (en) * 2021-11-11 2022-03-18 西北大学 Intelligent analysis system for pathological section images under microscope
CN114240836A (en) * 2021-11-12 2022-03-25 杭州迪英加科技有限公司 Nasal polyp pathological section analysis method and system and readable storage medium
CN114359666A (en) * 2021-12-28 2022-04-15 清华珠三角研究院 Multi-mode fusion lung cancer patient curative effect prediction method, system, device and medium
CN114529554A (en) * 2021-12-28 2022-05-24 福州大学 Intelligent auxiliary interpretation method for gastric cancer HER2 digital pathological section
CN115063592A (en) * 2022-08-16 2022-09-16 之江实验室 Multi-scale-based full-scanning pathological feature fusion extraction method and system
CN116740041A (en) * 2023-06-27 2023-09-12 新疆生产建设兵团医院 CTA scanning image analysis system and method based on machine vision
CN117036811A (en) * 2023-08-14 2023-11-10 桂林电子科技大学 Intelligent pathological image classification system and method based on double-branch fusion network
CN117392428A (en) * 2023-09-04 2024-01-12 深圳市第二人民医院(深圳市转化医学研究院) Skin disease image classification method based on three-branch feature fusion network
CN117764994A (en) * 2024-02-22 2024-03-26 浙江首鼎视介科技有限公司 biliary pancreas imaging system and method based on artificial intelligence
CN118115787A (en) * 2024-02-23 2024-05-31 齐鲁工业大学(山东省科学院) Full-slice pathological image classification method based on graph neural network
CN118299022A (en) * 2024-05-28 2024-07-05 吉林大学 Informationized management system and method for surgical equipment

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488921B (en) * 2020-03-30 2023-06-16 中国科学院深圳先进技术研究院 Intelligent analysis system and method for panoramic digital pathological image
CN112116559A (en) * 2020-08-17 2020-12-22 您好人工智能技术研发昆山有限公司 Digital pathological image intelligent analysis method based on deep learning
CN113222933B (en) * 2021-05-13 2023-08-04 西安交通大学 Image recognition system applied to renal cell carcinoma full-chain diagnosis
CN115082743B (en) * 2022-08-16 2022-12-06 之江实验室 Full-field digital pathological image classification system considering tumor microenvironment and construction method
CN115482221A (en) * 2022-09-22 2022-12-16 深圳先进技术研究院 End-to-end weak supervision semantic segmentation labeling method for pathological image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190102878A1 (en) * 2017-09-30 2019-04-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for analyzing medical image
CN109886346A (en) * 2019-02-26 2019-06-14 四川大学华西医院 A kind of cardiac muscle MRI image categorizing system
CN110717905A (en) * 2019-09-30 2020-01-21 上海联影智能医疗科技有限公司 Brain image detection method, computer device, and storage medium
CN110766643A (en) * 2019-10-28 2020-02-07 电子科技大学 Microaneurysm detection method facing fundus images
CN111488921A (en) * 2020-03-30 2020-08-04 中国科学院深圳先进技术研究院 Panoramic digital pathological image intelligent analysis system and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565708B2 (en) * 2017-09-06 2020-02-18 International Business Machines Corporation Disease detection algorithms trainable with small number of positive samples
CN108596882B (en) * 2018-04-10 2019-04-02 中山大学肿瘤防治中心 The recognition methods of pathological picture and device
CN109165697B (en) * 2018-10-12 2021-11-30 福州大学 Natural scene character detection method based on attention mechanism convolutional neural network
CN109784347B (en) * 2018-12-17 2022-04-26 西北工业大学 Image classification method based on multi-scale dense convolution neural network and spectral attention mechanism
CN110570953A (en) * 2019-09-09 2019-12-13 杭州憶盛医疗科技有限公司 Automatic analysis method and system for digital pathology panoramic slice image

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202510A (en) * 2021-11-11 2022-03-18 西北大学 Intelligent analysis system for pathological section images under microscope
CN114202510B (en) * 2021-11-11 2024-01-19 西北大学 Intelligent analysis system for pathological section image under microscope
CN114240836A (en) * 2021-11-12 2022-03-25 杭州迪英加科技有限公司 Nasal polyp pathological section analysis method and system and readable storage medium
CN114118258A (en) * 2021-11-19 2022-03-01 武汉大学 Pathological section feature fusion method based on background guidance attention mechanism
CN114359666A (en) * 2021-12-28 2022-04-15 清华珠三角研究院 Multi-mode fusion lung cancer patient curative effect prediction method, system, device and medium
CN114529554A (en) * 2021-12-28 2022-05-24 福州大学 Intelligent auxiliary interpretation method for gastric cancer HER2 digital pathological section
CN114359666B (en) * 2021-12-28 2024-10-15 清华珠三角研究院 Multi-mode fused lung cancer patient curative effect prediction method, system, device and medium
CN115063592A (en) * 2022-08-16 2022-09-16 之江实验室 Multi-scale-based full-scanning pathological feature fusion extraction method and system
CN116740041B (en) * 2023-06-27 2024-04-26 新疆生产建设兵团医院 CTA scanning image analysis system and method based on machine vision
CN116740041A (en) * 2023-06-27 2023-09-12 新疆生产建设兵团医院 CTA scanning image analysis system and method based on machine vision
CN117036811A (en) * 2023-08-14 2023-11-10 桂林电子科技大学 Intelligent pathological image classification system and method based on double-branch fusion network
CN117392428A (en) * 2023-09-04 2024-01-12 深圳市第二人民医院(深圳市转化医学研究院) Skin disease image classification method based on three-branch feature fusion network
CN117764994B (en) * 2024-02-22 2024-05-10 浙江首鼎视介科技有限公司 Biliary pancreas imaging system and method based on artificial intelligence
CN117764994A (en) * 2024-02-22 2024-03-26 浙江首鼎视介科技有限公司 biliary pancreas imaging system and method based on artificial intelligence
CN118115787A (en) * 2024-02-23 2024-05-31 齐鲁工业大学(山东省科学院) Full-slice pathological image classification method based on graph neural network
CN118299022A (en) * 2024-05-28 2024-07-05 吉林大学 Informationized management system and method for surgical equipment

Also Published As

Publication number Publication date
CN111488921A (en) 2020-08-04
CN111488921B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
WO2021196632A1 (en) Intelligent analysis system and method for panoramic digital pathological image
US11922626B2 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
WO2020253773A1 (en) Medical image classification method, model training method, computing device and storage medium
US20220343623A1 (en) Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method
CN109376636B (en) Capsule network-based eye fundus retina image classification method
CN108305249B (en) Rapid diagnosis and scoring method of full-scale pathological section based on deep learning
CN108464840B (en) Automatic detection method and system for breast lumps
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
WO2024060416A1 (en) End-to-end weakly supervised semantic segmentation and labeling method for pathological image
WO2022167005A1 (en) Deep neural network-based method for detecting living cell morphology, and related product
CN109670489B (en) Weak supervision type early senile macular degeneration classification method based on multi-instance learning
WO2019184851A1 (en) Image processing method and apparatus, and training method for neural network model
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
Dhawan et al. Cervix image classification for prognosis of cervical cancer using deep neural network with transfer learning
Costa et al. Eyequal: Accurate, explainable, retinal image quality assessment
CN112396605A (en) Network training method and device, image recognition method and electronic equipment
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
Yang et al. The devil is in the details: a small-lesion sensitive weakly supervised learning framework for prostate cancer detection and grading
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning
CN111445456A (en) Classification model, network model training method and device, and identification method and device
CN116645326A (en) Glandular cell detection method, glandular cell detection system, electronic equipment and storage medium
Mathina Kani et al. Classification of skin lesion images using modified Inception V3 model with transfer learning and augmentation techniques
CN114742119A (en) Cross-supervised model training method, image segmentation method and related equipment
Liu et al. A gastric cancer recognition algorithm on gastric pathological sections based on multistage attention‐DenseNet
CN112086174A (en) Three-dimensional knowledge diagnosis model construction method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20928830

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20928830

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20928830

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/07/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20928830

Country of ref document: EP

Kind code of ref document: A1