US20200034651A1 - Identifying and Excluding Blurred Areas of Images of Stained Tissue To Improve Cancer Scoring - Google Patents
- Publication number
- US20200034651A1 (U.S. patent application Ser. No. 16/593,968)
- Authority
- US
- United States
- Prior art keywords
- pixel
- pixels
- blurred
- digital image
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G06K9/628—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G06K9/0014—
-
- G06K9/036—
-
- G06K9/6202—
-
- G06K9/6256—
-
- G06K9/6277—
-
- G06K9/66—
-
- G06T5/002—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30068—Mammography; Breast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present invention relates generally to image analysis of stained tissue, and more specifically to identifying blurred areas in digital images of tissue slices.
- Cancer is typically diagnosed by analyzing stained samples of tissue from cancer patients and then correlating target patterns in the tissue samples with grading and scoring methods for different kinds of cancers.
- the Gleason grading system indicates the malignancy of prostate cancer based on the architectural pattern of the glands of a stained prostate tumor.
- the Fuhrman nuclear grading system indicates the severity of renal cell carcinoma (RCC) based on the morphology of the nuclei of kidney cells.
- Breast cancer can be diagnosed by grading stained breast tissue using the Allred score, the Elston-Ellis score, the HercepTest® score or the Ki-67 test score.
- the Allred score indicates the severity of cancer based on the percentage of cells that have been stained to a certain intensity by the estrogen receptor (ER) antibody.
- the Elston-Ellis score indicates the severity of cancer based on the proportion of tubules in the tissue sample, the similarity of nucleus sizes and the number of dividing cells per high power field of 40× magnification.
- the HercepTest score indicates the severity of cancer based on the level of HER2 protein overexpression as indicated by the degree of membrane staining.
- the Ki-67 test measures the proliferation rate, which is the percentage of cancer cells in the breast tissue that are actively dividing.
- the Ki-67 labeling index is a measure of the percentage of cancer cells whose nuclei contain the Ki-67 protein that has been immunohistochemically stained. A level of greater than twenty percent indicates a high-risk, aggressive tumor.
- the pathologist manually marks the blurred areas of the image of each tissue slice that are to be avoided when performing the object and pattern recognition that is the basis for the diagnostic cancer scoring.
- the pathologist can only mark large blurred areas, such as a scanning stripe along the entire slide that is out of focus, as opposed to the thousands of smaller blurred areas in a high resolution image that can result from the differing light refraction caused by microdroplets on the tissue.
- a method is sought to identify and mark the many small blurred areas in digital images of tissue slices so as to improve the accuracy of cancer scoring by using image analysis results from only unblurred areas.
- a method for identifying blurred areas in digital images of stained tissue involves artificially blurring a learning tile and then training a pixel classifier to correctly classify each pixel as belonging either to the learning tile or to the blurred learning tile.
- a learning tile is selected from the digital image of a slice of tissue of a cancer patient that has been stained using a biomarker. A portion of the pixels exhibits the color stained using the biomarker.
- the learning tile is duplicated to create a copied learning region.
- the copied learning region is distorted by applying a filter to the pixel values of each pixel of the copied learning region so as artificially to blur the copied learning region.
- a pixel classifier is trained by analyzing the pixel values of each pixel of the learning region and the pixel values of a corresponding pixel in the copied learning region.
- the pixel classifier is trained to correctly classify each pixel as belonging either to the learning tile or to the copied learning tile.
- Each pixel of the digital image is classified as most likely resembling either the learning tile or the copied learning tile using the pixel classifier.
- the digital image is then segmented into blurred areas and unblurred areas based on the classifying of each pixel as belonging either to the learning tile or to the copied learning tile.
- the blurred areas and the unblurred areas of the digital image are identified on a graphical user interface.
- the method for identifying blurred areas in digital images of stained tissue involves training a pixel classifier comprised of pixelwise descriptors on both unblurred and artificially blurred regions.
- a digital image of a slice of tissue from a cancer patient that has been stained using a biomarker is divided into tiles. The color of each pixel is defined by pixel values, and the magnitude of the color imparted by the biomarker stain is derived from those pixel values.
- a learning region is selected as the tile whose pixel values represent the mean magnitude of the color stained using the biomarker.
- the learning region includes first and second subregions. The second subregion is distorted by applying a filter to the pixel values of each pixel of the second subregion so as artificially to blur the second subregion. The first subregion remains unblurred.
- a pixelwise descriptor of the pixel classifier is generated by analyzing and comparing the pixel values of each pixel of the learning region with the pixel values of neighboring pixels at predetermined offsets from each analyzed pixel.
- the pixelwise descriptor is trained to indicate, based on the comparing with neighboring pixels, that each pixel of the learning region most likely belongs either to an unblurred class of pixels such as those in the first subregion or to a blurred class of pixels such as those in the second subregion.
- Each pixel of the digital image is characterized as most likely belonging either to the unblurred class of pixels or to the blurred class of pixels using the pixelwise descriptor by classifying each characterized pixel based on the pixel values of neighboring pixels at predetermined offsets from each characterized pixel.
- the blurred areas of the digital image are identified based on the classifying of pixels as belonging to the blurred class of pixels.
- Image objects are generated by segmenting the digital image except in the identified blurred areas. Using the image objects, a score is determined that indicates a level of cancer malignancy of the slice of tissue from the cancer patient.
- FIG. 1 is a diagram of a system for analyzing digital images that uses pixel-oriented analysis to identify blurred areas in digital images of tissue slices.
- FIG. 2 illustrates a data network generated by the system of FIG. 1 in which data objects of a hierarchical network are linked to selected pixels of an image of a stained tissue.
- FIG. 3 is a flowchart of steps by which the system of FIG. 1 identifies blurred areas of digital images of stained tissue slices before recognizing patterns in the images using object-oriented analysis.
- FIG. 4 shows a high-resolution digital image of breast tissue upon which immunohistochemical (IHC) Ki-67 staining has been performed.
- FIG. 5 is a screenshot of the graphical user interface of the system of FIG. 1 in which the image of FIG. 4 is displayed in tiled sections.
- FIG. 6 shows 43 tiles of the digital image that have been selected to be used to identify a learning tile that exhibits the most representative IHC Ki-67 staining.
- FIG. 7 illustrates the staining of the digital image by hematoxylin through the color transformation H.
- FIG. 8 illustrates the staining of the digital image by DAB using the biomarker Ki-67 through the color transformation K.
- FIG. 9 is a scatter plot of points representing the mean color transformation values H and K for each of the 43 selected tiles shown in FIG. 6 .
- FIG. 10 is a diagram illustrating the step of applying a filter in order to artificially blur the copied learning tile.
- FIG. 11 shows a more detailed view of the selected learning tile as well as the blurred, copied learning tile.
- FIG. 12 is a schematic diagram of a decision tree with pixelwise descriptors used to determine the probability that a characterized pixel belongs to a blurred pixel class or an unblurred pixel class.
- FIG. 13 shows a matrix of pixels including a characterized pixel and a larger box of pixels whose lower left corner is offset from the characterized pixel by two pixels in the y dimension.
- FIG. 14 is a screenshot of the graphical user interface of the system of FIG. 1 showing two tiles of the image of stained tissue and the associated heat maps in which pixels are assigned the colors associated with the pixel class to which each pixel most probably belongs.
- FIG. 15 is a detailed view of tile #4 of the digital image of stained breast tissue of FIG. 4 .
- FIG. 16 is a heat map in which each pixel of tile #4 of FIG. 15 has the color associated with either the blurred pixel class or the unblurred pixel class.
- FIG. 17 is a segmented version of tile #4 of FIG. 15 identifying the blurred areas as black image objects.
- FIG. 18 is a flowchart of steps of another embodiment of a method for identifying blurred areas in digital images of stained tissue.
- FIG. 1 shows a system 10 for analyzing digital images that uses pixel-oriented analysis to identify blurred areas in digital images of tissue slices stained using biomarkers so that object-oriented analysis can be performed only on the unblurred areas in order to obtain a more accurate prognostic cancer score.
- System 10 is used to analyze images of tissue slices stained using various biomarkers, such as tissue stained with hematoxylin or with a dye attached to a protein-specific antibody using immunohistochemistry (IHC), such as a Ki-67 antibody stain.
- Digital images 11 of the stained tissue slices are acquired at high magnification.
- a typical digital image of a tissue slice has a resolution of 100,000×200,000 pixels, or 20 billion pixels.
- the acquired digital images 11 are stored in a database 12 of digital images.
- Image analysis software executing on a data analysis server 13 then performs intelligent image processing and automated classification and quantification.
- the image analysis software is a computer program product tangibly embodied on a computer-readable storage medium in server 13 and comprises computer readable and executable program instructions that when executed by a processor on server 13 provide a visual display on a graphical user interface 14 of an interconnected display device 15 , such as a personal computer.
- the image analysis program analyzes, grades, scores and displays the digital images 11 of tissue slices that have been stained with various biomarkers.
- the image analysis program first identifies blurred areas in digital images 11 and then segments and classifies objects in the unblurred areas.
- the blurred areas are identified using statistical pixel-oriented analysis, whereas the grading is performed using object-oriented analysis.
- the image analysis software links pixels to objects such that the unlinked input data in the form of pixels is transformed into a hierarchical semantic network of image objects.
- the image analysis program prepares links between some objects and thereby generates higher hierarchically ranked objects.
- the image analysis program assigns the higher hierarchically ranked objects with properties, classifies them, and then links those objects again at a still higher level to other objects.
- the higher hierarchically ranked objects are used to find target patterns in the images, which are used to obtain a prognostic cancer score. More easily detected starting image objects are first found and then used to identify harder-to-find image objects in the hierarchical data structure.
- FIG. 2 illustrates an exemplary hierarchical network 16 that is generated by image analysis system 10 .
- System 10 generates first objects 17 from a digital image 18 based on the stained tissue.
- the image analysis program of system 10 uses object-oriented image analysis to generate data objects of hierarchical semantic network 16 by linking selected pixels 19 to image objects according to a classification network and according to a process hierarchy of steps and algorithms.
- Each digital image comprises pixel values associated with the locations of each of the pixels 19 .
- the image analysis program operates on the digital pixel values and links the pixels to form image objects.
- Each object is linked to a set of pixel locations based on the associated pixel values. For example, an object is generated by linking to the object those pixels having similar characteristics, such as hue, saturation and brightness as defined by the pixel values.
- the pixel values can be expressed in a 3-value color space. For example, in the RGB color space, three numbers in the range from zero to 255 define the color. The three numbers represent the amounts of red, green and blue in the represented color.
- red is represented as 255-0-0
- dark green is represented as 0-100-0
- royal blue is designated as 65-105-225
- white is represented as 255-255-255
- black is represented as 0-0-0.
- Smaller numbers represent darker colors, so 100-100-100 is a darker gray than 200-200-200, and 0-0-128 is a darker blue (navy) than straight blue 0-0-255.
- Alternatively, the pixel values can be expressed in another color space, such as CMYK (cyan, magenta, yellow, black).
- Thresholds of brightness at pixel locations that are grouped together can be obtained from a histogram of the pixel values in the digital image.
- the pixels form the lowest hierarchical level of hierarchical network 16 .
- pixels having the color and intensity imparted by the stain of a biomarker are identified and linked to first objects 17 .
- the first objects 17 form the second hierarchical level of hierarchical network 16 .
- data objects are linked together into classes according to membership functions of the classes defined in the class network.
- objects representing nuclei are linked together to form objects 20 - 21 in a third hierarchical level of hierarchical network 16 .
- some of the first objects 17 correspond to stained pixels of a nucleus corresponding to object 20 .
- another of the first objects 17 corresponds to stained pixels of a separate nucleus represented by object 21 .
- An additional object 22 is generated in a fourth hierarchical level of hierarchical network 16 and is linked to all of the objects that represent stained nuclei.
- the objects 20 - 21 corresponding to stained nuclei are linked to object 22 .
- the knowledge and the program flow of the image analysis program are separated in the software structure.
- the parameters by which the image analysis is performed can be changed without having to revise the process hierarchy of software steps.
- the image analysis software displays both the original digital images 11 as well as the corresponding processed images and heat maps on the graphical user interface 14 . Pixels corresponding to classified and segmented objects in the digital images are colored, marked or highlighted to correspond to their object classification. For example, the pixels of objects that are members of the same object class are depicted in the same color. In addition, heat maps are displayed in which pixels of the same pixel class have the same color.
- FIG. 3 is a flowchart of steps 25 - 35 of a method 24 by which analysis system 10 identifies blurred areas of digital images of stained tissue slices before recognizing patterns in the images using object-oriented analysis.
- In a first step 25, a high-resolution digital image is acquired of a tissue slice that has been stained using one or more biomarkers.
- FIG. 4 shows an exemplary digital image 36 of breast tissue upon which immunohistochemical (IHC) Ki-67 staining has been performed.
- both hematoxylin and the dye diaminobenzidine (DAB) are used in the staining.
- the positive cell nuclei containing the Ki-67 protein are stained by DAB and appear as brown, whereas the negative cell nuclei that are not stained by DAB have the blue color of the counter stain hematoxylin.
- a slice of the stained breast tissue was placed on a slide before the digital image 36 was scanned.
- In step 26, high-resolution digital image 36 is divided into tiles 37.
- FIG. 5 shows how digital image 36 is displayed in tiled sections 37 on graphical user interface 14 of system 10 after step 26 is performed.
- the length of the sides of each square tile in this example is eight hundred microns (800 μm), and the side length of each pixel at the resolution of image 36 is 0.5 μm.
- each tile is 1600×1600 pixels.
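- As an illustration only, the tiling step can be sketched in Python roughly as follows; the 1600-pixel tile size follows from the 800 μm tiles and 0.5 μm pixels described above, while the handling of edge remainders is an assumption not specified in the text.
```python
def tile_image(image_rgb, tile_px=1600):
    """Split a whole-slide RGB array into square tiles of tile_px pixels per side.
    1600 px corresponds to 800 um tiles at 0.5 um per pixel, as in the example above;
    remainders at the right and bottom edges are returned as smaller tiles (an assumption)."""
    tiles = []
    height, width = image_rgb.shape[:2]
    for top in range(0, height, tile_px):
        for left in range(0, width, tile_px):
            tiles.append(((top, left), image_rgb[top:top + tile_px, left:left + tile_px]))
    return tiles
```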
- In step 27, system 10 selects the tiles that contain mostly tissue, from which a learning tile is later chosen. Tiles that contain mostly image background and non-tissue artifacts are not used in the selection of the learning tile.
- FIG. 6 shows the forty-three tiles on digital image 36 that have been selected by system 10 to be used to identify the learning tile that exhibits the most representative staining by hematoxylin and DAB. The tiles are numbered 1-43 for identification.
- In step 28, system 10 selects a learning region of digital image 36 on which to train a pixel-based machine learning model to recognize blurred areas.
- the learning region is a tile.
- the learning tile is chosen from among the forty-three selected tiles as the region of image 36 that exhibits colors closest to both the median brown of the DAB stain and the median blue of the hematoxylin stain. In the RGB color space, the color of each pixel is defined by three values in the range from zero to 255 that represent the amounts of red, green and blue in the pixel color.
- the amount of hematoxylin blue in each pixel i is quantified by a color transformation H_i, and the amount of DAB brown by the transformation
- K_i = (R_i^(1/2) / B_i) / (R_i + G_i + B_i)^(1/2),
- where R_i, G_i and B_i are the red, green and blue values of each pixel i.
- the values of H_i and K_i range from zero to 255 and take higher values (lighter shades in the transformed images) in the presence of more hematoxylin stain and more DAB stain of the Ki-67 protein, respectively.
- lower resolution tiles can be used to speed the calculation. In one implementation, the tiles are downsampled to achieve pixels whose sides have a length of 8 μm.
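- A minimal sketch of the K transformation as recovered above is shown below; the formula for H_i is not reproduced in this excerpt, so only K_i is implemented, and the small epsilon that guards against division by zero is an addition not present in the text.
```python
import numpy as np

def dab_transformation(rgb):
    """Per-pixel DAB 'brown' magnitude K_i = (R_i^(1/2) / B_i) / (R_i + G_i + B_i)^(1/2),
    as recovered from the text. `rgb` is an (H, W, 3) array of red, green and blue values;
    the epsilon avoids division by zero and is an assumption, not part of the source."""
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    eps = 1e-6
    return (np.sqrt(r) / (b + eps)) / np.sqrt(r + g + b + eps)

def tile_mean_k(tile_rgb):
    """Mean K value over one (optionally downsampled) tile, used when picking the learning tile."""
    return float(dab_transformation(tile_rgb).mean())
```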
- FIG. 7 illustrates the staining by hematoxylin 38 in image 36 through the transformation H_i.
- the inverse brightness (255 - H_i) is shown in FIG. 7 so that darker shades of gray represent more staining by the hematoxylin 38.
- FIG. 8 shows the staining by DAB 39 in image 36 through the transformation K_i.
- the inverse brightness (255 - K_i) is shown in FIG. 8 so that darker shades of gray represent more staining by DAB and presence of the Ki-67 protein.
- the mean values of H_i and K_i of all the pixels in each tile are calculated. Then the median value H_MED is chosen from among the mean H values of all of the tiles, and the median value K_MED is chosen from among the mean K values of all of the tiles.
- the two median values H_MED and K_MED are the medians of the mean values of the pixel colors of each tile. In this example, the median H_MED of the mean H values for the forty-three tiles is 41.52, and the median K_MED of the mean K values for the forty-three tiles is 16.03.
- the median value K_MED is closer to zero than to 255 because even if all cells were cancerous, only the nuclei would be stained, and the pixels representing the nuclei make up a small proportion of the pixels of each tile.
- the learning tile is chosen as the tile whose means (averages) of the H_i and K_i values have the smallest Euclidean distance to the median values H_MED and K_MED for the forty-three tiles. For each tile j, the Euclidean distance is calculated as
- d_j = sqrt((H_j - H_MED)^2 + (K_j - K_MED)^2),
- where H_j and K_j are the averages of the hematoxylin blue values and the DAB brown values for each tile j.
- FIG. 9 is a scatter plot of points representing the mean H_i and K_i values of each of the forty-three selected tiles shown in FIG. 6, where the mean hematoxylin blue value is the abscissa plotted on the horizontal axis and the mean DAB brown value is the ordinate plotted on the vertical axis.
- the scatter plot has forty-three points corresponding to the forty-three tiles.
- tile #14 has the mean H_i and K_i values with the smallest Euclidean distance to the median values H_MED and K_MED for all of the tiles.
- the mean H_i value is 41.46
- the mean K_i value is 16.05.
- Tile #14 has the smallest Euclidean distance of 0.06325, which is calculated as sqrt((41.46 - 41.52)^2 + (16.05 - 16.03)^2) = sqrt(0.0036 + 0.0004) ≈ 0.06325.
- The result of step 28 is to select tile #14 as the learning tile 40 that will be used to train a pixel-based machine learning model to recognize blurred areas of image 36.
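- The tile-selection rule can be sketched as follows, assuming the per-tile mean H and K values have already been computed (for example with a function like tile_mean_k above); the function names are illustrative.
```python
import numpy as np

def select_learning_tile(tile_means):
    """Choose the learning tile: the tile whose (mean H, mean K) pair lies at the smallest
    Euclidean distance from the medians of those means over all candidate tiles.
    `tile_means` is a sequence of (H_j, K_j) pairs, one per tissue-containing tile."""
    means = np.asarray(tile_means, dtype=np.float64)
    h_med = np.median(means[:, 0])
    k_med = np.median(means[:, 1])
    dists = np.sqrt((means[:, 0] - h_med) ** 2 + (means[:, 1] - k_med) ** 2)
    best = int(np.argmin(dists))
    return best, float(dists[best])

# With the example values, a tile with means (41.46, 16.05) against medians
# (41.52, 16.03) gives a distance of about 0.063, matching tile #14 above.
```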
- In step 29, the learning region 40 of tile #14 is duplicated to create a copied learning region 41.
- Step 29 is performed on a full resolution version of tile 40 in which the length of each side of each pixel is 0.5 μm.
- Both the learning tile 40 and the copied learning tile 41 are squares of 1600×1600 pixels. System 10 then operates on both the learning tile 40 and the copied learning tile 41.
- the copied learning region 41 is distorted by applying a filter to the pixel values of each pixel of the copied learning region so as artificially to blur the copied learning region.
- the filter applied to each pixel of the copied learning region 41 is a Gaussian filter that modifies the value of each pixel based on the values of neighboring pixels.
- the blurred image of the copied learning tile most closely resembled an image of stained tissue blurred by natural causes when the filter was applied at a radius of twenty pixels corresponding to ten microns (10 μm).
- the 20-pixel radius is applied by modifying the pixel values of a center pixel in a 41×41 pixel box based on the pixel values of the other pixels in the box.
- Each of the R, G and B pixel values is modified separately based on the R, G and B pixel values of the neighboring pixels.
- FIG. 10 illustrates the step of applying a filter in order to artificially blur the copied learning region 41 .
- the filtering step 30 is now described in more detail using a smaller 2-pixel radius.
- FIG. 10 shows a 100-pixel portion of copied learning tile 41 .
- the pixel 42 is being filtered by applying a Gaussian filter to a 5×5 pixel box 43 centered on pixel 42.
- Each of the R, G and B pixel values of pixel 42 is filtered separately.
- filtered pixel 42 has a brown color represented by the R, G and B values 200, 125 and 75, respectively. The modification of just the red pixel value 200 is described here.
- the red pixel value of each of the twenty-five pixels in box 43 is multiplied by the factor listed for that pixel in FIG. 10 .
- the red pixel value 200 of filtered pixel 42 is multiplied by the factor 41.
- the twenty-five products of the factors times the red pixel values are summed.
- the sum is divided by the total of all of the factors, which equals 273.
- the red pixel value 200 makes only a 15% contribution (41/273) to the magnitude of the filtered red pixel value.
- the filtered red pixel value is influenced by the red pixel values of the neighboring pixels, with more weighting allocated to closer pixels, as the weighting factors in FIG. 10 demonstrate.
- the effect of the filtering is to modify the red pixel value of filtered pixel 42 to more closely resemble the red pixel values of the neighboring pixels and to reduce the color contrast.
- the green and blue pixel values of filtered pixel 42 are modified in the same way as the red pixel value. Locally filtering the red, green and blue pixel values reduces the color contrast and artificially blurs the copied learning region 41 .
- if the pixels of digital image 36 indicate color on a gray scale, the filtering step 30 would modify just the gray-scale pixel value for each pixel of the copied learning region 41.
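- The artificial blurring can be sketched with a 5×5 kernel consistent with the worked example (center weight 41, weights summing to 273); the remaining weights and the use of scipy are assumptions, and the embodiment described above actually uses a 20-pixel radius (a 41×41 box).
```python
import numpy as np
from scipy.ndimage import convolve

# Integer Gaussian approximation consistent with the worked example:
# centre weight 41, total weight 273; the other entries are assumed.
KERNEL_5X5 = np.array([[1,  4,  7,  4, 1],
                       [4, 16, 26, 16, 4],
                       [7, 26, 41, 26, 7],
                       [4, 16, 26, 16, 4],
                       [1,  4,  7,  4, 1]], dtype=np.float64)

def blur_copied_tile(tile_rgb, kernel=KERNEL_5X5):
    """Artificially blur a copied learning tile by filtering the R, G and B channels separately."""
    weights = kernel / kernel.sum()                      # normalise so the weights sum to 1
    blurred = np.empty_like(tile_rgb)
    for c in range(3):                                   # each colour channel on its own
        blurred[..., c] = convolve(tile_rgb[..., c].astype(np.float64),
                                   weights, mode='reflect').round().astype(tile_rgb.dtype)
    return blurred
```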
- FIG. 11 shows a more detailed view of learning tile 40 of FIG. 6 .
- FIG. 11 shows a blurred learning tile 44 generated by artificially blurring the copied learning tile 41 by applying a filter to the pixel values of copied learning tile 41 .
- In step 31, a pixel classifier is trained on learning tile 40 and on blurred, copied learning tile 44 to classify each pixel as belonging either to the learning region or to the copied learning region.
- the pixel classifier is a binary classifier that is trained using supervised learning because system 10 knows that each pixel of learning tile 40 belongs to an unblurred class of pixels and that each pixel of the blurred, copied learning tile 44 belongs to a blurred class of pixels.
- Various kinds of pixel classifiers can be used, such as a random forest classifier, a convolutional neural network, a decision tree classifier, a support vector machine classifier or a Bayes classifier.
- the pixel classifier is a set of random forest pixelwise descriptors.
- Each pixelwise descriptor is generated by comparing learning pixels of the learning region 40 and the blurred learning region 44 to neighboring pixels at predetermined offsets from each of the learning pixels. Based on the comparing of learning pixels to their neighboring pixels, each pixelwise descriptor is trained to indicate that each of the learning pixels most likely belongs either to the unblurred class of pixels such as those in learning tile 40 or to the blurred class of pixels such as those in the blurred learning tile 44 .
- the pixelwise descriptors indicate the most likely class associated with each pixel without referencing any image objects that would be generated using object-based image analysis. Purely pixel-based image analysis is performed using the descriptors.
- the pixelwise descriptors indicate the probability that a characterized pixel belongs to a class based on a characteristic of a second pixel or group of pixels at a predetermined offset from the characterized pixel.
- the pixelwise descriptors are used in random forest decision trees to indicate the probability that each pixel belongs to a particular class.
- the class probability of each pixel is calculated using multiple decision trees of pixelwise descriptors. Then the average of the probabilities is taken as the result.
- the various decision trees are trained with random different neighboring pixels from the learning tiles 40 , 44 so that the average probability of belonging to a particular class in the execution mode is obtained from a random forest of decision trees in which overfitting to particular training pixels is avoided. Each decision tree is trained on a different random set of neighboring pixels.
- the average result from multiple random forest decision trees provides a more accurate classification result on the pixels outside of learning tile 40 and blurred learning tile 44 .
- an average probability of a pixel belonging to the blurred or unblurred class is calculated using twenty random forest decision trees.
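- As a rough illustration, the training step can be approximated with scikit-learn's RandomForestClassifier standing in for the hand-built decision trees of pixelwise descriptors described in the surrounding text; the offset-box mean features, the specific offsets and the sample count are simplifications and assumptions, not the patent's actual descriptors.
```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def offset_box_features(tile_rgb, offsets=((0, 2), (2, 1), (1, 4)), box=5):
    """Simplified stand-in for pixelwise descriptors: for every pixel, the mean of each
    colour channel inside a small box shifted by a fixed (dx, dy) offset. The descriptors
    described in the text also use standard deviations, box differences and gradients."""
    feats = []
    for c in range(3):
        channel = tile_rgb[..., c].astype(np.float64)
        box_mean = uniform_filter(channel, size=box, mode='reflect')
        for dx, dy in offsets:
            feats.append(np.roll(box_mean, shift=(-dy, -dx), axis=(0, 1)))
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_blur_classifier(learning_tile, blurred_copy, n_samples=20000, seed=0):
    """Train a 20-tree random forest to separate pixels of the unblurred learning tile
    (class 0) from pixels of its artificially blurred copy (class 1)."""
    x = np.vstack([offset_box_features(learning_tile), offset_box_features(blurred_copy)])
    y = np.concatenate([np.zeros(learning_tile.shape[0] * learning_tile.shape[1]),
                        np.ones(blurred_copy.shape[0] * blurred_copy.shape[1])])
    idx = np.random.default_rng(seed).choice(len(y), size=min(n_samples, len(y)), replace=False)
    clf = RandomForestClassifier(n_estimators=20, random_state=seed)   # 20 trees, as in the text
    return clf.fit(x[idx], y[idx])
```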
- FIG. 12 is a schematic diagram illustrating how exemplary pixelwise descriptors 45 - 51 are applied in one of the random forest decision trees to determine the probability that a pixel belongs to one of three classes: blurred (bl), unblurred (ub) and background (bg).
- the pixelwise descriptors classify each pixel into just two classes: blurred (bl) and unblurred (ub).
- System 10 trains on random pixels from the learning tiles 40 , 44 in order to match the correct class by choosing the appropriate pixelwise descriptors and coefficients of those descriptors.
- System 10 matches each pixel to the correct class by choosing the type of pixelwise descriptors, the order in which those descriptors are applied in the decision trees, the location of the pixels that are being compared and the comparison threshold used to make each decision.
- the type of pixelwise descriptor is characterized by the type of operation performed on the pixel values of the offset neighboring pixels. For example, the operation may calculate the mean of the pixel values, the standard deviation of the pixel values or the difference of the means or deviations for pixels in separate offset boxes.
- each pixel is first analyzed by pixelwise descriptor 45 .
- Descriptor 45 determines the average red value of the pixels in a 6×13 box of pixels that is offset from the characterized pixel by two pixels in the y dimension (0,2).
- FIG. 13 illustrates the characterized pixel 52 and the box 53 of pixels whose lower left corner is offset from characterized pixel 52 by zero pixels in the x dimension and two pixels in the y dimension.
- Pixel 52 belongs to a nucleus 54 containing the Ki-67 protein that has been stained with DAB dye connected to the Ki-67 antibody that attaches to the Ki-67 protein.
- the average red value of the pixels in box 53 is less than the threshold value of 142.9 used by the pixelwise descriptor 45 . Therefore, the analysis proceeds along the branch of the decision tree to pixelwise descriptor 46 .
- Descriptor 46 determines the average blue value of the pixels in a 2×1 box 55 of pixels that is offset from characterized pixel 52 by two pixels in the x dimension and one pixel in the y dimension.
- FIG. 13 shows the box 55 that is used for the determination of the blue value of the pixels.
- the average blue value of the pixels in box 55 is less than the threshold value of 119.1 used by the pixelwise descriptor 46 , so the analysis proceeds along the branch of the decision tree to pixelwise descriptor 48 .
- Descriptor 48 determines the average green value of the pixels in a 1×4 box 56 of pixels that is offset from characterized pixel 52 by one pixel in the x dimension and four pixels in the y dimension.
- the average green value of the pixels in box 56 is greater than the threshold value of 39.1 used by the pixelwise descriptor 48 , so the decision tree of pixelwise descriptors indicates that characterized pixel 52 most probably belongs to the unblurred class of pixels.
- the decision tree has been trained to correctly classify each pixel as belonging either to the unblurred class (ub) of pixels in the learning region 40 or to the blurred class (bl) of pixels in the blurred, copied learning region 44 .
- the decision tree of pixelwise descriptors outputs the posterior probabilities that each pixel belongs to one of the selected classes, in this example blurred pixels (bl), unblurred pixels (ub) and background pixels (bg).
- the class probabilities are divided between only blurred pixels (bl) and unblurred pixels (ub).
- the output probabilities are normalized so that the sum of the probabilities of belonging to a class within the selected classes is 100%.
- the decision tree indicates that the probability P(ub) that characterized pixel 52 belongs to the unblurred pixel class is 60%.
- the decision tree predicts that characterized pixel 52 has a 38% probability P(bl) of belonging to the blurred pixel class and a 2% probability P(bg) of belonging to the class of background pixels.
- nineteen other decision trees of pixelwise descriptors are also trained to predict that other random training pixels in the learning tiles 40 , 44 have the greatest probability of belonging to the selected pixel classes.
- Each random forest decision tree of pixelwise descriptors is trained so that, for all of the training pixels of the learning tiles, the same order of descriptors with the same offsets, boxes, thresholds and other coefficients output a highest probability class that matches the tile in which each training pixel is located.
- the parameters of each decision tree are modified during the training mode until each randomly selected training pixel is correctly classified as belonging either to the learning region 40 or to the blurred, copied learning region 44 . The best match is achieved when the highest probability class for all of the selected training pixels is correct, and those indicated probabilities are closest to 100%.
- the parameters that are modified to achieve the best match are (i) the comparison threshold at each pixelwise descriptor, (ii) the offset of the pixels being compared, (iii) the size and shape of the box of pixels being compared, (iv) the quality of the pixels that is being compared (e.g., mean color value), and (v) the order in which the pixelwise descriptors are placed in each decision tree.
- pixelwise descriptors can be more complex than merely comparing an average color value to a threshold.
- pixelwise descriptor 50 calculates the difference of the average (mean) color values in two offset boxes and then compares the difference to a threshold.
- Yet other pixelwise descriptors compare a threshold to other pixel values, such as (i) the color value of a second pixel at a predetermined offset, (ii) the difference between the color value of the characterized pixel and the color value of a second pixel at a predetermined offset, (iii) the standard deviation among the color values of pixels in a box of predetermined size at a predetermined offset from the characterized pixel, (iv) the difference between the standard deviations of the pixels in two boxes, (v) the sum of the gradient magnitude of the color values of pixels in a box of predetermined size at a predetermined offset from the characterized pixel and at a predetermined orientation, and (vi) the orientation of the gradient edge of the color values of pixels in a box of predetermined size at a predetermined offset from the characterized pixel.
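- The single branch of FIG. 12 worked through above can be written out as straight-line code for illustration; the offsets, box sizes and thresholds are the ones quoted in the text, while the image orientation (rows as y, columns as x) and the handling of the untaken branches are assumptions.
```python
def box_mean(img, channel, x0, y0, width, height):
    """Mean of one colour channel in a width x height box whose corner is at (x0, y0).
    Rows are treated as y and columns as x (an assumed orientation)."""
    return float(img[y0:y0 + height, x0:x0 + width, channel].mean())

def classify_pixel_example(img, x, y):
    """Walk the example branch of FIG. 12 through descriptors 45, 46 and 48.
    A trained forest would learn these thresholds and average over twenty such trees."""
    red, green, blue = 0, 1, 2
    if box_mean(img, red, x + 0, y + 2, 6, 13) >= 142.9:    # descriptor 45: red mean, offset (0, 2)
        return "other branch"                               # not detailed in the text
    if box_mean(img, blue, x + 2, y + 1, 2, 1) >= 119.1:    # descriptor 46: blue mean, offset (2, 1)
        return "other branch"
    if box_mean(img, green, x + 1, y + 4, 1, 4) > 39.1:     # descriptor 48: green mean, offset (1, 4)
        return "unblurred"    # the text gives P(ub) = 60%, P(bl) = 38%, P(bg) = 2% here
    return "other branch"
```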
- In step 32, system 10 classifies each pixel of digital image 36 as most likely resembling either the learning region or the copied learning region using the pixel classifier trained in step 31.
- the image analysis program applies the pixel-oriented image analysis of the decision trees of pixelwise descriptors to each of the pixels of the original digital image 36 of stained tissue, including the pixels of learning tile 40 (tile #14).
- system 10 classifies each pixel as belonging to the blurred pixel class corresponding to the blurred, copied learning region 44 if each decision tree of pixelwise descriptors indicates a probability P(bl) greater than 55% of belonging to the blurred pixel class.
- the pixel classifier applies a probability threshold of 0.55 to classify pixels as being blurred.
- Areas of digital image 36 that contain pixels in the blurred pixel class may be blurred for various reasons. For example, in order to acquire a high resolution digital image of a tissue slice, the tissue is typically scanned in multiple strips or stripes in order to cover all of the tissue. If the focal length is not optimally adjusted on a scanning pass, then an entire scanning stripe may be out of focus and blurred. Local areas may also be blurred if the areas of tissue are lifted from the glass slide so that the focal length is shorter than for the remainder of the tissue. Microdroplets are another possible cause of blurred areas on a digital image of stained tissue. If the stained tissue is scanned while small areas of moisture are present on the tissue surface, the light used to acquire the digital image may be refracted differently by the moisture and may create small blurred areas. There are also other causes of blurring other than scanning stripes, raised areas and microdroplets.
- each pixel that has greater than a 55% probability of belonging to the blurred class of pixels is assigned the color black (0, 0, 0), and all other pixels are assigned the color white (255, 255, 255).
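- A sketch of this classification pass, assuming a scikit-learn style classifier with predict_proba and the same per-pixel feature builder used in training (both assumptions), might look like the following.
```python
def blur_heat_map(clf, image_rgb, feature_fn, threshold=0.55):
    """Build a binary heat map from the trained pixel classifier: True where the predicted
    probability of the blurred class exceeds 0.55, False elsewhere. Column 1 of
    predict_proba is assumed to be the blurred class (label 1 in the training sketch)."""
    height, width = image_rgb.shape[:2]
    p_blurred = clf.predict_proba(feature_fn(image_rgb))[:, 1]
    return (p_blurred > threshold).reshape(height, width)
```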
- FIG. 14 is a screenshot of graphical user interface 14 displaying tile #5 of digital image 36 in an upper left frame 61.
- Using the classifying performed in step 32, system 10 displays a heat map 62 of tile #5 in the lower left frame that was generated by applying the pixelwise descriptors to the original image of stained tissue.
- the pixels of heat map 62 are assigned the color black for the blurred class of pixels and the color white for the nonblurred class of pixels and the background class of pixels.
- Tile #4 is also displayed on graphical user interface 14 in a frame 63 to the right of frame 61 .
- a heat map 64 of tile #4 is displayed below frame 63 and to the right of heat map 62 .
- the pixels of the blurred pixel class are also assigned the color black, and the pixels of the unblurred pixel class and the background class are assigned the color white.
- In step 33, digital image 36 is segmented into image objects corresponding to blurred areas and unblurred areas based on the classifying of each pixel in step 32 as belonging either to the learning region 40 or to the blurred, copied learning region 44.
- System 10 segments digital image 36 into blurred areas and unblurred areas based on each pixel being classified as belonging to the unblurred class of pixels or the blurred class of pixels.
- System 10 performs the object-based segmentation using a process hierarchy 65 of process steps and a classification network 66 of class membership functions. For example, the membership function of the class of blurred objects ignores individual pixels of the blurred pixel class that do not belong to the pixel class of the surrounding pixels. Only larger clumps of blurred pixels are segmented into image objects belonging to the blurred object class. Thus, the membership function of the class of blurred objects has a minimum area.
- FIG. 14 shows the parameters of the process hierarchy 65 and the classification network 66 being displayed on the graphical user interface 14 to the right of the frame 63 .
- the process hierarchy 65 lists the steps of the object-oriented analysis used in the segmentation.
- the class network 66 lists the membership functions as well as the colors assigned to the classes of objects.
- In step 34, the blurred areas and the unblurred areas of digital image 36 are identified on the graphical user interface 14.
- FIGS. 15-17 illustrate how the blurred areas are identified.
- FIG. 15 shows an image 67 of original tile #4 from FIG. 6 .
- FIG. 16 shows the heat map 64 that was generated from image 67 in which blurred pixels are black, and unblurred pixels are white.
- FIG. 17 is a segmented version 68 of image 67 (tile #4) in which blurred areas are identified as black image objects 69. Only those black pixels of heat map 64 that are contiguous with a critical mass of other black pixels are segmented into the image objects 69 that represent blurred areas.
- the minimum area of blurred image objects can be defined by the image analysis program, and the entire area is defined as the image object, including pixels within the area that belong to the unblurred pixel class.
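- The object-level step, keeping only large clumps of blurred pixels and treating the whole enclosed area as one blurred image object, can be sketched with scipy's connected-component tools; the minimum-area value is illustrative, since the text does not give a number.
```python
import numpy as np
from scipy import ndimage

def blurred_area_objects(blur_mask, min_area_px=5000):
    """Turn the pixel-level blur heat map into blurred-area image objects: keep only
    connected clumps of blurred pixels at least min_area_px in size, then fill enclosed
    unblurred pixels so each object covers its entire area."""
    labels, n = ndimage.label(blur_mask)
    sizes = ndimage.sum(blur_mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_area_px))
    return ndimage.binary_fill_holes(keep)
```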
- the entire high-resolution digital image 36 can be classified into blurred areas and unblurred areas in a computationally efficient manner, and the accuracy of the object-oriented segmentation can be improved.
- Method 24, which involves both artificially blurring a learning tile and training a pixel classifier for each digital image, identifies blurred regions more accurately than applying the same blur detection algorithm with the same thresholds and parameters to all images of tissue slices.
- a “Difference of Gaussians” algorithm could be used for blur detection on all images by blurring each image using the same two parameters for blurring radii, and then subtracting the pixel values obtained using the two blurring radii from one another to obtain blur information.
- Such a blur detection algorithm would not as consistently identify blurred areas on images of different kinds of tissue as does method 24 , which trains a pixel classifier for each image of a tissue slice.
- In step 35, system 10 segments image objects in only the areas of digital image 36 that have not been identified as being blurred.
- System 10 performs object-oriented image analysis on the unblurred areas of digital image 36 in order to obtain a prognostic cancer score for the stained tissue.
- the results of automated scoring of the Ki-67 test are improved by preventing the count of Ki-67 positive and negative nuclei from being performed on blurred areas of the image of stained tissue.
- the Ki-67 test counts the number of cancer cells whose nuclei have been stained using the Ki-67 marker compared to the overall number of cancer cells.
- the accuracy with which automated image analysis can recognize and count the stained cancer cells and the total number of cancer cells is drastically reduced when the image analysis is performed on blurred areas with low color contrast, and the Ki-67 score becomes less reliable when blurred regions are included in the scoring region. Consequently, the accuracy of the Ki-67 score is improved when blurred regions are excluded from the scoring region.
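- For illustration, the restriction of Ki-67 scoring to unblurred areas could look like the sketch below; the (row, col, is_positive) nucleus interface is hypothetical and stands in for the output of the object-oriented segmentation.
```python
def ki67_score(nuclei, blurred_mask):
    """Ki-67 labelling index restricted to unblurred areas: the fraction of counted nuclei
    that are Ki-67 positive, skipping any nucleus whose centroid lies in a blurred object.
    `nuclei` is an iterable of (row, col, is_positive) tuples (a hypothetical interface)."""
    positive = total = 0
    for row, col, is_positive in nuclei:
        if blurred_mask[row, col]:          # nucleus lies in a blurred area: excluded
            continue
        total += 1
        positive += int(is_positive)
    return positive / total if total else float("nan")

# A labelling index above 0.20 (twenty percent) indicates a high-risk, aggressive tumor.
```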
- method 24 is used to identify blurred areas of digital images of tissue stained using other biomarkers in order to improve the accuracy of other cancer grading systems that rely on the other biomarkers.
- method 24 can be used to detect blurred areas in breast tissue stained using the estrogen receptor (ER) antibody.
- a more accurate Allred score indicating the severity of breast cancer is then obtained by determining the percentage of cells stained using ER only in the unblurred areas of the image.
- a more accurate HercepTest score can be obtained by determining the degree of membrane staining of the Human Epidermal growth factor Receptor 2 (Her2) protein only in unblurred areas of the image.
- method 24 can be used to improve the cancer grading performed on images of tissue stained using biomarkers such as progesterone receptor (PR), Her2/neu cytoplasmic staining, cytokeratin 18 (CK18), transcription factor p63, Mib, SishChr17, SishHer2, cluster of differentiation 44 (CD44) antibody staining, CD23 antibody staining, and hematoxylin and eosin (H&E).
- method 24 is used to rate the image quality of each digital image of stained tissue.
- cancer scoring may be based on the image analysis of multiple slides of stained tissue, and low quality slide images may be excluded from the scoring.
- system 10 displays an indicator on graphical user interface 14 indicating the overall quality of each digital image of stained tissue.
- the indicator may specify the image quality as a percentage of blurred area, a list of the numbers of tiles that are mostly blurred or simply as a warning, such as a red exclamation mark or traffic hazard sign.
- a stop sign could be a warning indicator that the digital image exhibits insufficient quality for scoring.
- System 10 may also list metrics of image quality, such as the relative area of unblurred regions to the total tissue area, the absolute area of unblurred regions in square microns or square millimeters, or the number of tumor cells within the unblurred regions. If one of these measurements is lower than a predetermined threshold, then the image is not eligible for scoring, and the warning indicator is displayed to the user.
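- Such a quality gate might be sketched as follows; the metric names and the 50% cut-off are illustrative assumptions, since the text only says that some predetermined threshold is applied.
```python
def image_quality_report(blurred_mask, tissue_mask, pixel_side_um=0.5,
                         min_unblurred_fraction=0.5):
    """Summarise slide quality from the blur segmentation: fraction and absolute area of
    unblurred tissue plus an eligibility flag for scoring. The 50% cut-off is illustrative."""
    tissue_px = int(tissue_mask.sum())
    unblurred_px = int((tissue_mask & ~blurred_mask).sum())
    unblurred_fraction = unblurred_px / tissue_px if tissue_px else 0.0
    return {
        "unblurred_fraction": unblurred_fraction,
        "unblurred_area_mm2": unblurred_px * (pixel_side_um / 1000.0) ** 2,
        "eligible_for_scoring": unblurred_fraction >= min_unblurred_fraction,
    }
```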
- Method 24 may also be used to automatically rate the image quality of large batches of images of stained tissue. For example, detailed manual inspection of excessive blur on thousands of tissue slides would not be economically feasible. Yet a pre-scoring exclusion of excessively blurred images could be performed with little additional effort because the quality control could use the same steps and results of method 24 that allow cancer scoring to be performed only in unblurred areas.
- FIG. 18 is a flowchart of steps 71 - 77 of another method 70 in which pixelwise descriptors are trained to indicate the probability that individual pixels in a learning region of a digital image belong to a blurred class of pixels or to an unblurred class of pixels.
- In method 70, the pixelwise descriptors are not trained on a blurred copy of a learning tile. Instead, the pixelwise descriptors of method 70 are trained on a blurred subregion of a learning region as well as on an unblurred subregion of the same learning region.
- a learning region is selected on a digital image of a slice of tissue from a cancer patient that has been stained using a biomarker.
- a biomarker For example, breast tissue of the patient is stained with a dye attached to the estrogen receptor (ER) antibody that marks the corresponding protein.
- Each pixel of the digital image has a color defined by pixel values, and a portion of the pixels exhibits the color of the dye stained using the biomarker.
- a subregion of the learning region is distorted by applying a filter to the pixel values of each pixel of the subregion so as artificially to blur the subregion.
- one or more pixelwise descriptors are generated by analyzing the pixel values of each pixel of the learning region and by comparing the pixel values of each analyzed pixel with the pixel values of neighboring pixels at predetermined offsets from each analyzed pixel.
- Each pixelwise descriptor is trained to indicate, based on the comparing with neighboring pixels, that each pixel of the learning region most likely belongs either to a blurred class of pixels such as those in the subregion or to an unblurred class of pixels such as those in the remainder of the learning region.
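- Preparing the training data for method 70 can be sketched as below; the half-and-half split of the learning region into subregions is an illustrative choice, and blur_fn stands for any blurring filter such as the Gaussian sketch shown earlier.
```python
import numpy as np

def split_and_blur_learning_region(learning_region, blur_fn):
    """Method 70 training data: one subregion of the learning region stays unblurred
    (class 0) while the other is artificially blurred in place (class 1).
    Here the region is simply split down the middle, an illustrative choice."""
    region = learning_region.copy()
    mid = region.shape[1] // 2
    region[:, mid:] = blur_fn(region[:, mid:])           # distort only the second subregion
    labels = np.zeros(region.shape[:2], dtype=np.uint8)
    labels[:, mid:] = 1                                   # 1 = blurred class of pixels
    return region, labels
```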
- each pixel of the digital image is characterized as most likely belonging either to the blurred class of pixels or to the unblurred class of pixels using the one or more pixelwise descriptors by classifying each characterized pixel based on the pixel values of neighboring pixels at predetermined offsets from each characterized pixel.
- step 75 blurred areas of the digital image are identified based on the classifying of pixels as belonging to the blurred class of pixels.
- image objects are generated by segmenting the digital image except in the identified blurred areas.
- the image objects represent cells of the stained breast tissue.
- system 10 determines a cancer score using the image objects.
- the score is indicative of a level of cancer malignancy of the slice of tissue from the cancer patient.
- the score is an Allred score that indicates the severity of breast cancer based on the percentage of cells in the unblurred areas of the digital image that have been stained to a threshold intensity by the estrogen receptor (ER) antibody.
- ER estrogen receptor
- Data analysis server 13 includes a computer-readable storage medium having program instructions thereon for performing method 24 and method 70 .
- a computer-readable storage medium includes instructions of the image analysis program for generating decision trees of pixelwise descriptors that indicate the probability that a pixel belongs to a pixel class based on characteristics of neighboring pixels.
- the computer-readable storage medium also includes instructions for generating image objects of a data network corresponding to patterns in digital images that have been stained by a particular biomarker.
- the present invention has been described in connection with certain specific embodiments for instructional purposes, the present invention is not limited thereto.
- methods 24 and 70 have been described as ways of identifying blurred pixels using pixel-oriented image analysis and then segmenting image objects using object-oriented image analysis
- the novel method can also be used to identify other qualities of pixels in stained tissue that reduce the accuracy of object-oriented image analysis performed subsequently.
- the novel method can use pixel classifiers to identify folds and stretch distortions in stained tissue so that object-oriented segmentation can be performed only on undistorted or unfolded areas of the tissue.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
Description
- This application is a continuation of, and claims priority under 35 U.S.C. § 120 from, nonprovisional U.S. patent application Ser. No. 15/391,088 entitled “Identifying and Excluding Blurred Areas of Images of Stained Tissue To Improve Cancer Scoring,” now U.S. Pat. No. 10,438,096, filed on Dec. 27, 2016, the subject matter of which is incorporated herein by reference.
- The present invention relates generally to image analysis of stained tissue, and more specifically to identifying blurred areas in digital images of tissue slices.
- Cancer is typically diagnosed by analyzing stained samples of tissue from cancer patients and then correlating target patterns in the tissue samples with grading and scoring methods for different kinds of cancers. For example, the Gleason grading system indicates the malignancy of prostate cancer based on the architectural pattern of the glands of a stained prostate tumor. The Fuhrman nuclear grading system indicates the severity of renal cell carcinoma (RCC) based on the morphology of the nuclei of kidney cells. Breast cancer can be diagnosed by grading stained breast tissue using the Allred score, the Elston-Ellis score, the HercepTest® score or the Ki-67 test score. The Allred score indicates the severity of cancer based on the percentage of cells that have been stained to a certain intensity by the estrogen receptor (ER) antibody. The Elston-Ellis score indicates the severity of cancer based on the proportion of tubules in the tissue sample, the similarity of nucleus sizes and the number of dividing cells per high power field of 40× magnification. The HercepTest score indicates the severity of cancer based on the level of HER2 protein overexpression as indicated by the degree of membrane staining. The Ki-67 test measures the proliferation rate, which is the percentage of cancer cells in the breast tissue that are actively dividing. The Ki-67 labeling index is a measure of the percentage of cancer cells whose nuclei contain the Ki-67 protein that has been immunohistochemically stained. A level of greater than twenty percent indicates a high-risk, aggressive tumor.
- The accuracy of these scoring and grading systems depends, however, on the accuracy of the image analysis of the stained tissue. Human error is one cause of inconsistent scoring that results when a human operator, such as a pathologist, misjudges the target patterns and structures in the stained tissue due to fatigue or loss of concentration. Computer-assisted image analysis systems have been developed to support pathologists in the tedious task of grading and scoring digital images of stained tissue samples. But even the accuracy of computer-assisted scoring methods is limited by the quality of the digital images of the stained tissue. One cause of inaccuracy in scoring occurs when image analysis is performed on blurred areas of digital images of tissue slices. Conventionally, the pathologist manually marks the blurred areas of the image of each tissue slice that are to be avoided when performing the object and pattern recognition that is the basis for the diagnostic cancer scoring. However, the pathologist can only mark large blurred areas, such as a scanning stripe along the entire slide that is out of focus, as opposed to the thousands of smaller blurred areas in a high resolution image that can result from the differing light refraction caused by microdroplets on the tissue.
- A method is sought to identify and mark the many small blurred areas in digital images of tissue slices so as to improve the accuracy of cancer scoring by using image analysis results from only unblurred areas.
- A method for identifying blurred areas in digital images of stained tissue involves artificially blurring a learning tile and then training a pixel classifier to correctly classify each pixel as belonging either to the learning tile or to the blurred learning tile. A learning tile is selected from the digital image of a slice of tissue of a cancer patient that has been stained using a biomarker. A portion of the pixels exhibits the color stained using the biomarker. The learning tile is duplicated to create a copied learning region. The copied learning region is distorted by applying a filter to the pixel values of each pixel of the copied learning region so as artificially to blur the copied learning region. A pixel classifier is trained by analyzing the pixel values of each pixel of the learning region and the pixel values of a corresponding pixel in the copied learning region. The pixel classifier is trained to correctly classify each pixel as belonging either to the learning tile or to the copied learning tile. Each pixel of the digital image is classified as most likely resembling either the learning tile or the copied learning tile using the pixel classifier. The digital image is then segmented into blurred areas and unblurred areas based on the classifying of each pixel as belonging either to the learning tile or to the copied learning tile. The blurred areas and the unblurred areas of the digital image are identified on a graphical user interface.
- In another embodiment, the method for identifying blurred areas in digital images of stained tissue involves training a pixel classifier comprised of pixelwise descriptors on both unblurred and artificially blurred regions. A digital image of a slice of tissue from a cancer patient that has been stained using a biomarker is divided into tiles. For each pixel of the image, the color stained using the biomarker, which is defined by pixel values, has a magnitude derived from the pixel values. A learning region is selected as the tile whose pixel values represent the mean magnitude of the color stained using the biomarker. The learning region includes first and second subregions. The second subregion is distorted by applying a filter to the pixel values of each pixel of the second subregion so as artificially to blur the second subregion. The first subregion remains unblurred.
- A pixelwise descriptor of the pixel classifier is generated by analyzing and comparing the pixel values of each pixel of the learning region with the pixel values of neighboring pixels at predetermined offsets from each analyzed pixel. The pixelwise descriptor is trained to indicate, based on the comparing with neighboring pixels, that each pixel of the learning region most likely belongs either to an unblurred class of pixels such as those in the first subregion or to a blurred class of pixels such as those in the second subregion.
- Each pixel of the digital image is characterized as most likely belonging either to the unblurred class of pixels or to the blurred class of pixels using the pixelwise descriptor by classifying each characterized pixel based on the pixel values of neighboring pixels at predetermined offsets from each characterized pixel. The blurred areas of the digital image are identified based on the classifying of pixels as belonging to the blurred class of pixels. Image objects are generated by segmenting the digital image except in the identified blurred areas. Using the image objects, a score is determined that indicates a level of cancer malignancy of the slice of tissue from the cancer patient.
- Other embodiments and advantages are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.
- The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.
- FIG. 1 is a diagram of a system for analyzing digital images that uses pixel-oriented analysis to identify blurred areas in digital images of tissue slices.
- FIG. 2 illustrates a data network generated by the system of FIG. 1 in which data objects of a hierarchical network are linked to selected pixels of an image of a stained tissue.
- FIG. 3 is a flowchart of steps by which the system of FIG. 1 identifies blurred areas of digital images of stained tissue slices before recognizing patterns in the images using object-oriented analysis.
- FIG. 4 shows a high-resolution digital image of breast tissue upon which immunohistochemical (IHC) Ki-67 staining has been performed.
- FIG. 5 is a screenshot of the graphical user interface of the system of FIG. 1 in which the image of FIG. 4 is displayed in tiled sections.
- FIG. 6 shows 43 tiles of the digital image that have been selected to be used to identify a learning tile that exhibits the most representative IHC Ki-67 staining.
- FIG. 7 illustrates the staining of the digital image by hematoxylin through the color transformation H.
- FIG. 8 illustrates the staining of the digital image by DAB using the biomarker Ki-67 through the color transformation K.
- FIG. 9 is a scatter plot of points representing the mean color transformation values H and K for each of the 43 selected tiles shown in FIG. 6.
- FIG. 10 is a diagram illustrating the step of applying a filter in order to artificially blur the copied learning tile.
- FIG. 11 shows a more detailed view of the selected learning tile as well as the blurred, copied learning tile.
- FIG. 12 is a schematic diagram of a decision tree with pixelwise descriptors used to determine the probability that a characterized pixel belongs to a blurred pixel class or an unblurred pixel class.
- FIG. 13 shows a matrix of pixels including a characterized pixel and a larger box of pixels whose lower left corner is offset from the characterized pixel by two pixels in the y dimension.
- FIG. 14 is a screenshot of the graphical user interface of the system of FIG. 1 showing two tiles of the image of stained tissue and the associated heat maps in which pixels are assigned the colors associated with the pixel class to which each pixel most probably belongs.
- FIG. 15 is a detailed view of tile #4 of the digital image of stained breast tissue of FIG. 4.
- FIG. 16 is a heat map in which each pixel of tile #4 of FIG. 15 has the color associated with either the blurred pixel class or the unblurred pixel class.
- FIG. 17 is a segmented version of tile #4 of FIG. 15 identifying the blurred areas as black image objects.
- FIG. 18 is a flowchart of steps of another embodiment of a method for identifying blurred areas in digital images of stained tissue.
- Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings.
- FIG. 1 shows a system 10 for analyzing digital images that uses pixel-oriented analysis to identify blurred areas in digital images of tissue slices stained using biomarkers so that object-oriented analysis can be performed only on the unblurred areas in order to obtain a more accurate prognostic cancer score. System 10 is used to analyze images of tissue slices stained using various biomarkers, such as tissue stained with hematoxylin or with a dye attached to a protein-specific antibody using immunohistochemistry (IHC), such as a Ki-67 antibody stain.
- Digital images 11 of the stained tissue slices are acquired at high magnification. A typical digital image of a tissue slice has a resolution of 100,000×200,000 pixels, or 20 billion pixels. The acquired digital images 11 are stored in a database 12 of digital images. Image analysis software executing on a data analysis server 13 then performs intelligent image processing and automated classification and quantification. The image analysis software is a computer program product tangibly embodied on a computer-readable storage medium in server 13 and comprises computer readable and executable program instructions that, when executed by a processor on server 13, provide a visual display on a graphical user interface 14 of an interconnected display device 15, such as a personal computer.
- System 10 analyzes, grades, scores and displays the digital images 11 of tissue slices that have been stained with various biomarkers. The image analysis program first identifies blurred areas in digital images 11 and then segments and classifies objects in the unblurred areas. The blurred areas are identified using statistical pixel-oriented analysis, whereas the grading is performed using object-oriented analysis. When performing object-oriented analysis, the image analysis software links pixels to objects such that the unlinked input data in the form of pixels is transformed into a hierarchical semantic network of image objects. The image analysis program prepares links between some objects and thereby generates higher hierarchically ranked objects. The image analysis program assigns properties to the higher hierarchically ranked objects, classifies them, and then links those objects again at a still higher level to other objects. The higher hierarchically ranked objects are used to find target patterns in the images, which are used to obtain a prognostic cancer score. More easily detected starting image objects are first found and then used to identify harder-to-find image objects in the hierarchical data structure.
- FIG. 2 illustrates an exemplary hierarchical network 16 that is generated by image analysis system 10. System 10 generates first objects 17 from a digital image 18 based on the stained tissue. The image analysis program of system 10 uses object-oriented image analysis to generate data objects of hierarchical semantic network 16 by linking selected pixels 19 to image objects according to a classification network and according to a process hierarchy of steps and algorithms. For a more detailed description of generating a data network using a process hierarchy and a class network, see U.S. Pat. No. 8,319,793, the contents of which are incorporated herein by reference.
- Each digital image comprises pixel values associated with the locations of each of the pixels 19. The image analysis program operates on the digital pixel values and links the pixels to form image objects. Each object is linked to a set of pixel locations based on the associated pixel values. For example, an object is generated by linking to the object those pixels having similar characteristics, such as hue, saturation and brightness as defined by the pixel values. Alternatively, the pixel values can be expressed in a 3-value color space. For example, in the RGB color space, three 3-digit numbers in the range from zero to 255 define the color. The three numbers represent the amounts of red, green and blue in the represented color. For example, red is represented as 255-0-0, dark green is represented as 0-100-0, royal blue is designated as 65-105-225, white is represented as 255-255-255, and black is represented as 0-0-0. Smaller numbers represent darker colors, so 100-100-100 is a darker gray than 200-200-200, and 0-0-128 is a darker blue (navy) than straight blue 0-0-255. Although the operation of system 10 is described herein in relation to the RGB color space, other color spaces and representations may also be used, such as the CMYK (cyan, magenta, yellow, black) color model, the CIE 1931 color space, the 1964 xyz color space or the HSV and HSL representations of the RGB color space. Thresholds of brightness at pixel locations that are grouped together can be obtained from a histogram of the pixel values in the digital image. The pixels form the lowest hierarchical level of hierarchical network 16.
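- As an illustration of the histogram-based thresholding mentioned above, the following sketch derives a brightness threshold from the pixel values of an RGB image. The use of Otsu's method and the simple mean-of-channels brightness are assumptions made for illustration; the patent does not specify how the threshold is computed.

```python
import numpy as np

def brightness_threshold(rgb):
    """Derive a brightness threshold from the histogram of an (H, W, 3) uint8 image.
    Otsu's method is used here as one plausible choice (assumption)."""
    gray = rgb.mean(axis=2).astype(np.uint8)              # simple brightness channel
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):                               # exhaustive search over thresholds
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0            # mean brightness below t
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1       # mean brightness at or above t
        var_between = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```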
- In one example, pixels having the color and intensity imparted by the stain of a biomarker are identified and linked to first objects 17. The first objects 17 form the second hierarchical level of hierarchical network 16. Then data objects are linked together into classes according to membership functions of the classes defined in the class network. For example, objects representing nuclei are linked together to form objects 20-21 in a third hierarchical level of hierarchical network 16. In FIG. 2, some of the first objects 17 correspond to stained pixels of a nucleus corresponding to object 20. In addition, another of the first objects 17 corresponds to stained pixels of a separate nucleus represented by object 21. An additional object 22 is generated in a fourth hierarchical level of hierarchical network 16 and is linked to all of the objects that represent stained nuclei. Thus, the objects 20-21 corresponding to stained nuclei are linked to object 22.
- The knowledge and the program flow of the image analysis program are separated in the software structure. The parameters by which the image analysis is performed, for example thresholds of size or brightness, can be changed without having to revise the process hierarchy of software steps. The image analysis software displays both the original digital images 11 as well as the corresponding processed images and heat maps on the graphical user interface 14. Pixels corresponding to classified and segmented objects in the digital images are colored, marked or highlighted to correspond to their object classification. For example, the pixels of objects that are members of the same object class are depicted in the same color. In addition, heat maps are displayed in which pixels of the same pixel class have the same color.
- FIG. 3 is a flowchart of steps 25-35 of a method 24 by which analysis system 10 identifies blurred areas of digital images of stained tissue slices before recognizing patterns in the images using object-oriented analysis. In a first step 25, a high-resolution digital image is acquired of a tissue slice that has been stained using one or more biomarkers.
- FIG. 4 shows an exemplary digital image 36 of breast tissue upon which immunohistochemical (IHC) Ki-67 staining has been performed. In the embodiment of FIG. 4, both hematoxylin and the dye diaminobenzidine (DAB) are used in the staining. The positive cell nuclei containing the Ki-67 protein are stained by DAB and appear as brown, whereas the negative cell nuclei that are not stained by DAB have the blue color of the counterstain hematoxylin. A slice of the stained breast tissue was placed on a slide before the digital image 36 was scanned.
- In step 26, high-resolution digital image 36 is divided into tiles 37. By splitting image 36 into smaller areas, less processing memory is required for the computations performed on the pixel data of each tile. FIG. 5 shows how digital image 36 is displayed in tiled sections 37 on graphical user interface 14 of system 10 after step 26 is performed. The length of the sides of each square tile in this example is eight hundred microns (800 μm), and the side length of each pixel at the resolution of image 36 is 0.5 μm. Thus, each tile is 1600×1600 pixels.
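- A minimal sketch of the tiling of step 26, assuming the scanned image is available as a NumPy array; the 1600-pixel tile size follows the 800 μm / 0.5 μm example above.

```python
import numpy as np

def split_into_tiles(image, tile_px=1600):
    """Yield (row_index, col_index, tile) squares of tile_px x tile_px pixels.
    1600 px corresponds to 800 um tiles at 0.5 um per pixel, as in the example."""
    h, w = image.shape[:2]
    for r in range(0, h, tile_px):
        for c in range(0, w, tile_px):
            yield r // tile_px, c // tile_px, image[r:r + tile_px, c:c + tile_px]
```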
- In step 27, system 10 selects the tiles that contain mostly tissue, from which a learning tile is later chosen. Tiles that contain mostly image background and non-tissue artifacts are not used in the selection of the learning tile. FIG. 6 shows the forty-three tiles on digital image 36 that have been selected by system 10 to be used to identify the learning tile that exhibits the most representative staining by hematoxylin and DAB. The tiles are numbered 1-43 for identification.
- In step 28, system 10 selects a learning region of digital image 36 on which to train a pixel-based machine learning model to recognize blurred areas. In this embodiment, the learning region is a tile. The learning tile is chosen from among the forty-three selected tiles as the region of the image 36 that exhibits colors closest to both the median brown of the DAB stain and the median blue of the hematoxylin stain. In the RGB color space, each pixel is defined by three 3-digit numbers in the range from zero to 255 that represent the amounts of red, green and blue in the pixel color. The amount of hematoxylin blue in each pixel i is defined by the transformation
- H_i = (2B_i/R_i)/(R_i + G_i + B_i)^(1/2),
- and the amount of DAB brown in each pixel i is defined by the transformation
- K_i = (R_i^(1/2)/B_i)/(R_i + G_i + B_i)^(1/2),
- where R_i, G_i and B_i are the 3-digit values of the red, green and blue values of each pixel i. The values of H_i and K_i range from zero to 255 and will have a lighter color and a higher value in the presence of more hematoxylin stain and DAB stain of the Ki-67 protein, respectively. For purposes of calculating the hematoxylin blue H_i in each pixel i and the DAB brown K_i in each pixel i, lower resolution tiles can be used to speed the calculation. In one implementation, the tiles are downsampled to achieve pixels whose sides have a length of 8 μm.
- FIG. 7 illustrates the staining by hematoxylin 38 in image 36 through the transformation H_i. For ease of illustration, the inverse brightness (255−H_i) is shown in FIG. 7 so that darker shades of gray represent more staining by the hematoxylin 38. FIG. 8 shows the staining by DAB 39 in image 36 through the transformation K_i. For ease of illustration, the inverse brightness (255−K_i) is shown in FIG. 8 so that darker shades of gray represent more staining by DAB and presence of the Ki-67 protein.
- In order to identify the tile that most closely matches the median DAB brown and the median hematoxylin blue of all of the tiles, the mean values of H_i and K_i of all the pixels in each tile are calculated. Then the median value H_MED from among the means of the H_i values of all of the tiles is chosen, and the median value K_MED from among the means of the K_i values of all of the tiles is chosen. The two median values H_MED and K_MED are the medians of the mean values of the pixel colors of each tile. In this example, the median H_MED of the mean values H_i for the forty-three tiles is 41.52, and the median K_MED of the mean values K_i for the forty-three tiles is 16.03. The median value K_MED is closer to zero than to 255 because even if all cells were cancerous, only the nuclei would be stained, and the pixels representing the nuclei make up a small proportion of the pixels of each tile. The learning tile is chosen as the tile whose means (averages) of the H_i and K_i values have the smallest Euclidean distance to the median values H_MED and K_MED for the forty-three tiles. For each tile j, the Euclidean distance is calculated as
- D_j = ((H_j − H_MED)^2 + (K_j − K_MED)^2)^(1/2),
- where H_j and K_j are the averages of the hematoxylin blue values and the DAB brown values for each tile j.
- FIG. 9 is a scatter plot of points representing the mean H_i and K_i values of each of the forty-three selected tiles shown in FIG. 6, where the mean hematoxylin blue value is the abscissa plotted on the horizontal axis and the mean DAB brown value is the ordinate plotted on the vertical axis. Thus, the scatter plot has forty-three points corresponding to the forty-three tiles. In this example, tile #14 has the mean H_i and K_i values with the smallest Euclidean distance to the median values H_MED and K_MED for all of the tiles. For tile #14, the mean H_i value is 41.46, and the mean K_i value is 16.05. Tile #14 has the smallest Euclidean distance of 0.06325, which is calculated as
- 0.06325 = ((41.46 − 41.52)^2 + (16.05 − 16.03)^2)^(1/2).
- Thus, the result of step 28 is to select tile #14 as the learning tile 40 that will be used to train a pixel-based machine learning model to recognize blurred areas of image 36.
- In step 29, the learning region 40 of tile #14 is duplicated to create a copied learning region 41. Step 29 is performed on a full-resolution version of tile 40 in which the length of each side of each pixel is 0.5 μm. Both the learning tile 40 and the copied learning tile 41 are squares of 1600×1600 pixels. System 10 then operates on both the learning tile 40 and the copied learning tile 41.
- In step 30, the copied learning region 41 is distorted by applying a filter to the pixel values of each pixel of the copied learning region so as artificially to blur the copied learning region. In one implementation, the filter applied to each pixel of the copied learning region 41 is a Gaussian filter that modifies the value of each pixel based on the values of neighboring pixels. The blurred image of the copied learning tile most closely resembled an image of stained tissue blurred by natural causes when the filter was applied at a radius of twenty pixels corresponding to ten microns (10 μm). The 20-pixel radius is applied by modifying the pixel values of a center pixel in a 41×41 pixel box based on the pixel values of the other pixels in the box. Each of the R, G and B pixel values is modified separately based on the R, G and B pixel values of the neighboring pixels.
- FIG. 10 illustrates the step of applying a filter in order to artificially blur the copied learning region 41. Although the best results were achieved by filtering with a 20-pixel radius, the filtering step 30 is now described in more detail using a smaller 2-pixel radius. FIG. 10 shows a 100-pixel portion of copied learning tile 41. The pixel 42 is being filtered by applying a Gaussian filter to a 5×5 pixel box 43 centered on pixel 42. Each of the R, G and B pixel values of pixel 42 is filtered separately. In one example, filtered pixel 42 has a brown color represented by the R, G and B values 200, 125 and 75, respectively. The modification of just the red pixel value 200 is described here. The red pixel value of each of the twenty-five pixels in box 43 is multiplied by the factor listed for that pixel in FIG. 10. For example, the red pixel value 200 of filtered pixel 42 is multiplied by the factor 41. Then the twenty-five products of the factors times the red pixel values are summed. Finally, the sum is divided by the total of all of the factors, which equals 273. Thus, the red pixel value 200 makes only a 15% contribution (41/273) to the magnitude of the filtered red pixel value. The filtered red pixel value is influenced by the red pixel values of the neighboring pixels, with more weighting allocated to closer pixels, as the weighting factors in FIG. 10 demonstrate. The effect of the filtering is to modify the red pixel value of filtered pixel 42 to more closely resemble the red pixel values of the neighboring pixels and to reduce the color contrast. The green and blue pixel values of filtered pixel 42 are modified in the same way as the red pixel value. Locally filtering the red, green and blue pixel values reduces the color contrast and artificially blurs the copied learning region 41.
- In an embodiment in which the pixels of digital image 36 indicate color as a gray scale, there would be only a single gray-scale channel. The filtering step 30 would then modify just the gray-scale pixel value for each pixel of the copied learning region 41.
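- A hedged sketch of the artificial blurring of step 30 using SciPy's Gaussian filter. The standard deviation is an assumption chosen so that the effective kernel radius is about twenty pixels; the patent specifies the 20-pixel radius but not a standard deviation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_copy(tile_rgb, sigma=5.0):
    """Artificially blur a copy of the learning tile.
    sigma=5.0 is an assumption: with SciPy's default truncate=4.0 the kernel
    radius is truncate * sigma = 20 pixels, matching the 20-pixel (10 um) radius."""
    blurred = np.empty_like(tile_rgb, dtype=float)
    for ch in range(3):                                   # filter R, G and B separately
        blurred[..., ch] = gaussian_filter(tile_rgb[..., ch].astype(float), sigma)
    return np.clip(blurred, 0, 255).astype(np.uint8)
```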
- FIG. 11 shows a more detailed view of learning tile 40 of FIG. 6. In addition, FIG. 11 shows a blurred learning tile 44 generated by artificially blurring the copied learning tile 41 by applying a filter to the pixel values of copied learning tile 41.
- In step 31, a pixel classifier is trained on learning tile 40 and on blurred, copied learning tile 44 to classify each pixel as belonging either to the learning region or to the copied learning region. The pixel classifier is a binary classifier that is trained using supervised learning because system 10 knows that each pixel of learning tile 40 belongs to an unblurred class of pixels and that each pixel of the blurred, copied learning tile 44 belongs to a blurred class of pixels. Various kinds of pixel classifiers can be used, such as a random forest classifier, a convolutional neural network, a decision tree classifier, a support vector machine classifier or a Bayes classifier.
- In this embodiment, the pixel classifier is a set of random forest pixelwise descriptors. Each pixelwise descriptor is generated by comparing learning pixels of the learning region 40 and the blurred learning region 44 to neighboring pixels at predetermined offsets from each of the learning pixels. Based on the comparing of learning pixels to their neighboring pixels, each pixelwise descriptor is trained to indicate that each of the learning pixels most likely belongs either to the unblurred class of pixels such as those in learning tile 40 or to the blurred class of pixels such as those in the blurred learning tile 44. The pixelwise descriptors indicate the most likely class associated with each pixel without referencing any image objects that would be generated using object-based image analysis. Purely pixel-based image analysis is performed using the descriptors. The pixelwise descriptors indicate the probability that a characterized pixel belongs to a class based on a characteristic of a second pixel or group of pixels at a predetermined offset from the characterized pixel. The pixelwise descriptors are used in random forest decision trees to indicate the probability that each pixel belongs to a particular class.
- The class probability of each pixel is calculated using multiple decision trees of pixelwise descriptors. Then the average of the probabilities is taken as the result. The various decision trees are trained with random different neighboring pixels from the learning tiles 40 and 44. In one implementation, an average probability of a pixel belonging to the blurred or unblurred class is calculated using twenty random forest decision trees.
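- The supervised training of step 31 could be sketched as follows, assuming scikit-learn as the random forest implementation (the patent does not name a library). The offsets, box sizes and neighborhood statistics below are placeholders for the trained pixelwise descriptors, not the descriptors actually chosen by system 10.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# (dy, dx, box_height, box_width) neighborhoods; placeholder values, not the trained offsets.
OFFSETS = [(2, 0, 6, 13), (1, 2, 2, 1), (4, 1, 1, 4)]

def box_mean(channel, y, x, dy, dx, bh, bw):
    patch = channel[y + dy:y + dy + bh, x + dx:x + dx + bw]
    return float(patch.mean()) if patch.size else 0.0

def pixel_features(rgb, y, x):
    """Average color values in offset boxes around pixel (y, x), per channel."""
    return [box_mean(rgb[..., ch], y, x, *off) for ch in range(3) for off in OFFSETS]

def train_pixel_classifier(unblurred_tile, blurred_tile, n_samples=5000, seed=0):
    """Fit twenty random forest decision trees on pixels sampled from both tiles."""
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for label, tile in ((0, unblurred_tile), (1, blurred_tile)):   # 0 = unblurred, 1 = blurred
        ys = rng.integers(0, tile.shape[0] - 20, n_samples)
        xs = rng.integers(0, tile.shape[1] - 20, n_samples)
        for y, x in zip(ys, xs):
            feats.append(pixel_features(tile, y, x))
            labels.append(label)
    return RandomForestClassifier(n_estimators=20).fit(feats, labels)
```

- At prediction time, predict_proba averages the per-tree class probabilities, which corresponds to the averaging over twenty decision trees described above.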
- FIG. 12 is a schematic diagram illustrating how exemplary pixelwise descriptors 45-51 are applied in one of the random forest decision trees to determine the probability that a pixel belongs to one of three classes: blurred (bl), unblurred (ub) and background (bg). In other examples, the pixelwise descriptors classify each pixel into just two classes: blurred (bl) and unblurred (ub). System 10 trains on random pixels from the learning tiles 40 and 44. System 10 matches each pixel to the correct class by choosing the type of pixelwise descriptors, the order in which those descriptors are applied in the decision trees, the location of the pixels that are being compared and the comparison threshold used to make each decision. The type of pixelwise descriptor is characterized by the type of operation performed on the pixel values of the offset neighboring pixels. For example, the operation may calculate the mean of the pixel values, the standard deviation of the pixel values or the difference of the means or deviations for pixels in separate offset boxes.
- In a hypothetical training of the pixelwise descriptors 45-51 on the pixels of learning tiles 40 and 44, a random pixel is first analyzed by pixelwise descriptor 45. Descriptor 45 determines the average red value of the pixels in a 6×13 box of pixels that is offset from the characterized pixel by two pixels in the y dimension (0, 2). FIG. 13 illustrates the characterized pixel 52 and the box 53 of pixels whose lower left corner is offset from characterized pixel 52 by zero pixels in the x dimension and two pixels in the y dimension. Pixel 52 belongs to a nucleus 54 containing the Ki-67 protein that has been stained with DAB dye connected to the Ki-67 antibody that attaches to the Ki-67 protein. In this hypothetical implementation, the average red value of the pixels in box 53 is less than the threshold value of 142.9 used by the pixelwise descriptor 45. Therefore, the analysis proceeds along the branch of the decision tree to pixelwise descriptor 46.
- Descriptor 46 determines the average blue value of the pixels in a 2×1 box 55 of pixels that is offset from characterized pixel 52 by two pixels in the x dimension and one pixel in the y dimension. FIG. 13 shows the box 55 that is used for the determination of the blue value of the pixels. In this example, the average blue value of the pixels in box 55 is less than the threshold value of 119.1 used by the pixelwise descriptor 46, so the analysis proceeds along the branch of the decision tree to pixelwise descriptor 48. Descriptor 48 determines the average green value of the pixels in a 1×4 box 56 of pixels that is offset from characterized pixel 52 by one pixel in the x dimension and four pixels in the y dimension. In this case, the average green value of the pixels in box 56 is greater than the threshold value of 39.1 used by the pixelwise descriptor 48, so the decision tree of pixelwise descriptors indicates that characterized pixel 52 most probably belongs to the unblurred class of pixels. Thus, the decision tree has been trained to correctly classify each pixel as belonging either to the unblurred class (ub) of pixels in the learning region 40 or to the blurred class (bl) of pixels in the blurred, copied learning region 44.
- The decision tree of pixelwise descriptors outputs the posterior probabilities that each pixel belongs to one of the selected classes, in this example blurred pixels (bl), unblurred pixels (ub) and background pixels (bg). In other implementations, the class probabilities are divided between only blurred pixels (bl) and unblurred pixels (ub). The output probabilities are normalized so that the sum of the probabilities of belonging to a class within the selected classes is 100%. The decision tree indicates that the probability P(ub) that characterized pixel 52 belongs to the unblurred pixel class is 60%. The decision tree predicts that characterized pixel 52 has a 38% probability P(bl) of belonging to the blurred pixel class and a 2% probability P(bg) of belonging to the class of background pixels.
- In this embodiment, nineteen other decision trees of pixelwise descriptors are also trained to predict that other random training pixels in the learning tiles most probably belong to the correct class, i.e., either to the unblurred learning region 40 or to the blurred, copied learning region 44. The best match is achieved when the highest probability class for all of the selected training pixels is correct, and those indicated probabilities are closest to 100%. The parameters that are modified to achieve the best match are (i) the comparison threshold at each pixelwise descriptor, (ii) the offset of the pixels being compared, (iii) the size and shape of the box of pixels being compared, (iv) the quality of the pixels that is being compared (e.g., mean color value), and (v) the order in which the pixelwise descriptors are placed in each decision tree.
- The pixelwise descriptors can be more complex than merely comparing an average color value to a threshold. For example, pixelwise descriptor 50 calculates the difference of the average (mean) color values in two offset boxes and then compares the difference to a threshold. Yet other pixelwise descriptors compare a threshold to other pixel values, such as (i) the color value of a second pixel at a predetermined offset, (ii) the difference between the color value of the characterized pixel and the color value of a second pixel at a predetermined offset, (iii) the standard deviation among the color values of pixels in a box of predetermined size at a predetermined offset from the characterized pixel, (iv) the difference between the standard deviations of the pixels in two boxes, (v) the sum of the gradient magnitude of the color values of pixels in a box of predetermined size at a predetermined offset from the characterized pixel and at a predetermined orientation, and (vi) the orientation of the gradient edge of the color values of pixels in a box of predetermined size at a predetermined offset from the characterized pixel.
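- For illustration, a few of the descriptor operations listed above can be written as simple functions of an offset box of pixels. The offsets and box sizes are parameters that the training procedure would select; the values passed in here are assumptions.

```python
import numpy as np

def offset_box(channel, y, x, dy, dx, bh, bw):
    """Pixels of a bh x bw box whose corner is offset (dy, dx) from pixel (y, x)."""
    return channel[y + dy:y + dy + bh, x + dx:x + dx + bw].astype(float)

def mean_descriptor(channel, y, x, off):
    return offset_box(channel, y, x, *off).mean()           # average color value in one box

def std_descriptor(channel, y, x, off):
    return offset_box(channel, y, x, *off).std()             # standard deviation in one box

def mean_difference_descriptor(channel, y, x, off_a, off_b):
    """Difference of the mean color values of two offset boxes, in the manner of descriptor 50."""
    return offset_box(channel, y, x, *off_a).mean() - offset_box(channel, y, x, *off_b).mean()
```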
- In step 32, system 10 classifies each pixel of digital image 36 as most likely resembling either the learning region or the copied learning region using the pixel classifier trained in step 31. The image analysis program applies the pixel-oriented image analysis of the decision trees of pixelwise descriptors to each of the pixels of the original digital image 36 of stained tissue, including the pixels of learning tile 40 (tile #14). In one implementation, system 10 classifies each pixel as belonging to the blurred pixel class corresponding to the blurred, copied learning region 44 if each decision tree of pixelwise descriptors indicates a probability P(bl) greater than 55% of belonging to the blurred pixel class. Thus, the pixel classifier applies a probability threshold of 0.55 to classify pixels as being blurred.
- Areas of digital image 36 that contain pixels in the blurred pixel class may be blurred for various reasons. For example, in order to acquire a high resolution digital image of a tissue slice, the tissue is typically scanned in multiple strips or stripes in order to cover all of the tissue. If the focal length is not optimally adjusted on a scanning pass, then an entire scanning stripe may be out of focus and blurred. Local areas may also be blurred if the areas of tissue are lifted from the glass slide so that the focal length is shorter than for the remainder of the tissue. Microdroplets are another possible cause of blurred areas on a digital image of stained tissue. If the stained tissue is scanned while small areas of moisture are present on the tissue surface, the light used to acquire the digital image may be refracted differently by the moisture and may create small blurred areas. There are also other causes of blurring besides scanning stripes, raised areas and microdroplets.
- In one embodiment, each pixel that has greater than a 55% probability of belonging to the blurred class of pixels is assigned the color white (255, 255, 255), and all other pixels are assigned the color black (0, 0, 0). FIG. 14 is a screenshot of graphical user interface 14 displaying tile #5 of digital image 36 in an upper left frame 61. Using the classifying performed in step 32, system 10 displays a heat map 62 of tile #5 in the lower left frame that was generated by applying pixelwise descriptors to the original image of stained tissue. The pixels of heat map 62 are assigned the color black for the blurred class of pixels and the color white for the nonblurred class of pixels and the background class of pixels. By outputting posterior probabilities of belonging to only the selected three pixel classes, extraneous information is removed from heat map 62, and a clearer presentation is provided to the pathologist to indicate the blurred regions that should not be used in grading and scoring the tissue sample. Tile #4 is also displayed on graphical user interface 14 in a frame 63 to the right of frame 61. A heat map 64 of tile #4 is displayed below frame 63 and to the right of heat map 62. For heat map 64, using the classifying performed in step 32, the pixels of the blurred pixel class are also assigned the color black, and the pixels of the unblurred pixel class and the background class are assigned the color white.
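- A small sketch of the 0.55 probability threshold and the black-and-white heat map rendering described above; whether blurred pixels are drawn black or white is a display choice, and the probability map is assumed to come from the trained classifier.

```python
import numpy as np

def blur_heat_map(prob_blurred, threshold=0.55):
    """Return a black-and-white heat map and the blurred-pixel mask.
    prob_blurred is the per-pixel probability of the blurred class from the classifier."""
    mask = prob_blurred > threshold                        # True where classified as blurred
    heat = np.where(mask[..., None], 0, 255).astype(np.uint8)   # blurred -> black here
    return np.repeat(heat, 3, axis=2), mask
```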
- In step 33, digital image 36 is segmented into image objects corresponding to blurred areas and unblurred areas based on the classifying of each pixel in step 32 as belonging either to the learning region 40 or to the blurred, copied learning region 44. System 10 segments digital image 36 into blurred areas and unblurred areas based on each pixel being classified as belonging to the unblurred class of pixels or the blurred class of pixels. System 10 performs the object-based segmentation using a process hierarchy 65 of process steps and a classification network 66 of class membership functions. For example, the membership function of the class of blurred objects ignores individual pixels of the blurred pixel class that do not belong to the pixel class of the surrounding pixels. Only larger clumps of blurred pixels are segmented into image objects belonging to the blurred object class. Thus, the membership function of the class of blurred objects has a minimum area.
- FIG. 14 shows the parameters of the process hierarchy 65 and the classification network 66 being displayed on the graphical user interface 14 to the right of the frame 63. The process hierarchy 65 lists the steps of the object-oriented analysis used in the segmentation. The class network 66 lists the membership functions as well as the colors assigned to the classes of objects.
- In step 34, the blurred areas and the unblurred areas of digital image 36 are identified on the graphical user interface 14. FIGS. 15-17 illustrate how the blurred areas are identified. FIG. 15 shows an image 67 of original tile #4 from FIG. 6. FIG. 16 shows the heat map 64 that was generated from image 67, in which blurred pixels are black and unblurred pixels are white. FIG. 17 is a segmented version 68 of image 67 (tile #4) in which blurred areas are identified as black image objects 69. Only those black pixels of heat map 64 that are contiguous with a critical mass of other black pixels are segmented into the image objects 69 that represent blurred areas. The minimum area of blurred image objects can be defined by the image analysis program, and the entire area is defined as the image object, including pixels within the area that belong to the unblurred pixel class. By assigning classes to pixels before segmenting those pixels into objects, the entire high-resolution digital image 36 can be classified into blurred areas and unblurred areas in a computationally efficient manner, and the accuracy of the object-oriented segmentation can be improved.
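- The segmentation of blurred areas into image objects with a minimum area, as described for step 33 and step 34, can be approximated with connected-component labeling. The minimum-area value is an assumption; the patent states only that the blurred object class has a minimum area.

```python
import numpy as np
from scipy import ndimage

def blurred_area_objects(blur_mask, min_area_px=500):
    """Label clumps of blurred pixels larger than a minimum area as blurred image objects.
    min_area_px=500 is an assumed value for the minimum-area membership function."""
    labels, n = ndimage.label(blur_mask)                   # connected components of blurred pixels
    areas = ndimage.sum(blur_mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(areas >= min_area_px) + 1)
    return ndimage.label(keep)[0]                          # relabeled blurred objects (0 = unblurred)
```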
- Method 24, involving both artificially blurring and training a pixel classifier for each digital image, more accurately identifies blurred regions than applying the same blur detection algorithm and associated thresholds and parameters to all of the images of tissue slices. For example, a "Difference of Gaussians" algorithm could be used for blur detection on all images by blurring each image using the same two parameters for blurring radii, and then subtracting the pixel values obtained using the two blurring radii from one another to obtain blur information. Such a blur detection algorithm would not identify blurred areas on images of different kinds of tissue as consistently as does method 24, which trains a pixel classifier for each image of a tissue slice.
- In step 35, system 10 segments image objects in only the areas of digital image 36 that have not been identified as being blurred. System 10 performs object-oriented image analysis on the unblurred areas of digital image 36 in order to obtain a prognostic cancer score for the stained tissue. In one application of method 24, the results of automated scoring of the Ki-67 test are improved by preventing the count of Ki-67 positive and negative nuclei from being performed on blurred areas of the image of stained tissue. The Ki-67 test counts the number of cancer cells whose nuclei have been stained using the Ki-67 marker compared to the overall number of cancer cells. However, the accuracy with which automated image analysis can recognize and count the stained cancer cells and the total number of cancer cells is drastically reduced when the image analysis is performed on blurred areas with low color contrast, and the Ki-67 score becomes less reliable when blurred regions are included in the scoring region. Consequently, the accuracy of the Ki-67 score is improved when blurred regions are excluded from the scoring region.
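- A sketch of a Ki-67 labeling index restricted to unblurred areas, assuming the object-oriented analysis has already produced lists of positive and negative nucleus locations; those inputs are hypothetical helpers, not structures defined in the patent.

```python
def ki67_labeling_index(positive_nuclei, negative_nuclei, blurred_objects):
    """Percentage of Ki-67 positive nuclei, counting only nuclei outside blurred objects.
    positive_nuclei and negative_nuclei are lists of (row, col) nucleus centers
    (hypothetical outputs of the object-oriented segmentation)."""
    def outside_blur(nuclei):
        return [p for p in nuclei if blurred_objects[p] == 0]
    pos = outside_blur(positive_nuclei)
    neg = outside_blur(negative_nuclei)
    total = len(pos) + len(neg)
    return 100.0 * len(pos) / total if total else float("nan")
```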
- In other embodiments, method 24 is used to identify blurred areas of digital images of tissue stained using other biomarkers in order to improve the accuracy of other cancer grading systems that rely on those biomarkers. For example, method 24 can be used to detect blurred areas in breast tissue stained using the estrogen receptor (ER) antibody. A more accurate Allred score indicating the severity of breast cancer is then obtained by determining the percentage of cells stained using ER only in the unblurred areas of the image. Similarly, a more accurate HercepTest score can be obtained by determining the degree of membrane staining of the Human Epidermal growth factor Receptor 2 (Her2) protein only in unblurred areas of the image. In addition, method 24 can be used to improve the cancer grading performed on images of tissue stained using biomarkers such as progesterone receptor (PR), Her2/neu cytoplasmic staining, cytokeratin 18 (CK18), transcription factor p63, Mib, SishChr17, SishHer2, cluster of differentiation 44 (CD44) antibody staining, CD23 antibody staining, and hematoxylin and eosin (H&E).
- Using method 24 to exclude blurred areas from being considered in various cancer scoring and grading systems is a considerable improvement over the conventional method in which a pathologist manually marks areas of the images of stained tissue that appear to be blurred. First, identifying blurred areas by visually inspecting tissue slides is tedious and time-consuming. Thus, even an experienced pathologist may misjudge or overlook areas that are blurred due to fatigue and loss of concentration. Second, visual inspection can identify only relatively large blurred areas. Each tissue slide can have millions of pixels, and hundreds of small blurred areas on the slide can be caused by microdroplets that refract the light used to create the digital image. Visual inspection cannot identify blurred areas that include only a few hundred pixels, such as the objects 69 representing small blurred areas shown in FIG. 17. And even if visual inspection could identify the hundreds of small blurred areas in digital image 36, it would not be feasible to manually mark each of the regions so that the blurred areas can be excluded from the cancer scoring.
- In yet another embodiment, method 24 is used to rate the image quality of each digital image of stained tissue. For example, cancer scoring may be based on the image analysis of multiple slides of stained tissue, and low quality slide images may be excluded from the scoring. After step 34, system 10 displays an indicator on graphical user interface 14 indicating the overall quality of each digital image of stained tissue. The indicator may specify the image quality as a percentage of blurred area, a list of the numbers of tiles that are mostly blurred or simply as a warning, such as a red exclamation mark or traffic hazard sign. For example, a stop sign could be a warning indicator that the digital image exhibits insufficient quality for scoring. System 10 may also list metrics of image quality, such as the relative area of unblurred regions to the total tissue area, the absolute area of unblurred regions in square microns or square millimeters, or the number of tumor cells within the unblurred regions. If one of these measurements is lower than a predetermined threshold, then the image is not eligible for scoring, and the warning indicator is displayed to the user. Method 24 may also be used to automatically rate the image quality of large batches of images of stained tissue. For example, detailed manual inspection of excessive blur on thousands of tissue slides would not be economically feasible. Yet a pre-scoring exclusion of excessively blurred images could be performed with little additional effort because the quality control could use the same steps and results of method 24 that allow cancer scoring to be performed only in unblurred areas.
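- The image-quality metrics and the eligibility check described above might be summarized as follows. The 50% unblurred-area threshold and the pixel area are illustrative assumptions; the patent leaves the predetermined thresholds unspecified.

```python
def image_quality_report(blur_mask, tissue_mask, pixel_area_um2=0.25,
                         min_unblurred_fraction=0.5):
    """Summarize image quality and decide scoring eligibility from boolean pixel masks.
    The 0.5 eligibility threshold and 0.25 um^2 pixel area (0.5 um sides) are assumptions."""
    tissue_px = int(tissue_mask.sum())
    unblurred_px = int((tissue_mask & ~blur_mask).sum())
    fraction_unblurred = unblurred_px / tissue_px if tissue_px else 0.0
    return {
        "blurred_fraction": 1.0 - fraction_unblurred,
        "unblurred_area_mm2": unblurred_px * pixel_area_um2 / 1e6,
        "eligible_for_scoring": fraction_unblurred >= min_unblurred_fraction,
    }
```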
- FIG. 18 is a flowchart of steps 71-77 of another method 70 in which pixelwise descriptors are trained to indicate the probability that individual pixels in a learning region of a digital image belong to a blurred class of pixels or to an unblurred class of pixels. Unlike method 24, the pixelwise descriptors are not trained on a blurred copy of a learning tile. Instead, the pixelwise descriptors of method 70 are trained on a blurred subregion of a learning region as well as on an unblurred subregion of the learning region.
- In step 71, a learning region is selected on a digital image of a slice of tissue from a cancer patient that has been stained using a biomarker. For example, breast tissue of the patient is stained with a dye attached to the estrogen receptor (ER) antibody that marks the corresponding protein. Each pixel of the digital image has a color defined by pixel values, and a portion of the pixels exhibits the color of the dye stained using the biomarker.
- In step 72, a subregion of the learning region is distorted by applying a filter to the pixel values of each pixel of the subregion so as artificially to blur the subregion.
- In step 73, one or more pixelwise descriptors are generated by analyzing the pixel values of each pixel of the learning region and by comparing the pixel values of each analyzed pixel with the pixel values of neighboring pixels at predetermined offsets from each analyzed pixel. Each pixelwise descriptor is trained to indicate, based on the comparing with neighboring pixels, that each pixel of the learning region most likely belongs either to a blurred class of pixels such as those in the subregion or to an unblurred class of pixels such as those in the remainder of the learning region.
- In step 74, each pixel of the digital image is characterized as most likely belonging either to the blurred class of pixels or to the unblurred class of pixels using the one or more pixelwise descriptors by classifying each characterized pixel based on the pixel values of neighboring pixels at predetermined offsets from each characterized pixel.
- In step 75, blurred areas of the digital image are identified based on the classifying of pixels as belonging to the blurred class of pixels.
- In step 76, image objects are generated by segmenting the digital image except in the identified blurred areas. For example, the image objects represent cells of the stained breast tissue.
- In step 77, system 10 determines a cancer score using the image objects. The score is indicative of a level of cancer malignancy of the slice of tissue from the cancer patient. For example, the score is an Allred score that indicates the severity of breast cancer based on the percentage of cells in the unblurred areas of the digital image that have been stained to a threshold intensity by the estrogen receptor (ER) antibody.
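- For step 77, an Allred-style score could be computed from the unblurred areas as sketched below. The proportion and intensity bins follow the standard Allred scheme and are stated here as an assumption, since the patent describes the score only qualitatively.

```python
def allred_score(percent_positive, intensity_score):
    """Allred score = proportion score (0-5) + intensity score (0-3).
    percent_positive is the percentage of ER-stained cells in the unblurred areas;
    intensity_score (0 = none ... 3 = strong) is assumed to be supplied already quantized."""
    bins = [(0, 0), (1, 1), (10, 2), (33, 3), (66, 4), (100, 5)]   # (upper percent, score)
    proportion_score = next(score for limit, score in bins if percent_positive <= limit)
    return proportion_score + int(intensity_score)
```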
- Data analysis server 13 includes a computer-readable storage medium having program instructions thereon for performing method 24 and method 70. Such a computer-readable storage medium includes instructions of the image analysis program for generating decision trees of pixelwise descriptors that indicate the probability that a pixel belongs to a pixel class based on characteristics of neighboring pixels. The computer-readable storage medium also includes instructions for generating image objects of a data network corresponding to patterns in digital images that have been stained by a particular biomarker.
- Although the present invention has been described in connection with certain specific embodiments for instructional purposes, the present invention is not limited thereto. Although methods 24 and 70 have been described as ways of identifying blurred pixels using pixel-oriented image analysis and then segmenting image objects using object-oriented image analysis, the novel method can also be used to identify other qualities of pixels in stained tissue that reduce the accuracy of object-oriented image analysis performed subsequently. For example, the novel method can use pixel classifiers to identify folds and stretch distortions in stained tissue so that object-oriented segmentation can be performed only on undistorted or unfolded areas of the tissue.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/593,968 US10565479B1 (en) | 2016-12-27 | 2019-10-04 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/391,088 US10438096B2 (en) | 2016-12-27 | 2016-12-27 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
US16/593,968 US10565479B1 (en) | 2016-12-27 | 2019-10-04 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/391,088 Continuation US10438096B2 (en) | 2016-12-27 | 2016-12-27 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200034651A1 true US20200034651A1 (en) | 2020-01-30 |
US10565479B1 US10565479B1 (en) | 2020-02-18 |
Family
ID=60923259
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/391,088 Active 2037-07-04 US10438096B2 (en) | 2016-12-27 | 2016-12-27 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
US16/593,968 Active US10565479B1 (en) | 2016-12-27 | 2019-10-04 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/391,088 Active 2037-07-04 US10438096B2 (en) | 2016-12-27 | 2016-12-27 | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
Country Status (2)
Country | Link |
---|---|
US (2) | US10438096B2 (en) |
EP (1) | EP3343440A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11756190B2 (en) * | 2017-03-30 | 2023-09-12 | Fujifilm Corporation | Cell image evaluation device, method, and program |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10410398B2 (en) * | 2015-02-20 | 2019-09-10 | Qualcomm Incorporated | Systems and methods for reducing memory bandwidth using low quality tiles |
WO2016203282A1 (en) | 2015-06-18 | 2016-12-22 | The Nielsen Company (Us), Llc | Methods and apparatus to capture photographs using mobile devices |
US10706535B2 (en) * | 2017-09-08 | 2020-07-07 | International Business Machines Corporation | Tissue staining quality determination |
GB2569103A (en) * | 2017-11-16 | 2019-06-12 | Univ Oslo Hf | Histological image analysis |
EP4235594A3 (en) * | 2018-02-15 | 2023-10-25 | Denka Company Limited | System, program, and method for determining hypermutated tumor |
US10861156B2 (en) | 2018-02-28 | 2020-12-08 | Case Western Reserve University | Quality control for digital pathology slides |
JP2019195304A (en) | 2018-05-10 | 2019-11-14 | 学校法人順天堂 | Image analysis method, device, computer program, and generation method of deep learning algorithm |
CN109145965A (en) * | 2018-08-02 | 2019-01-04 | 深圳辉煌耀强科技有限公司 | Cell recognition method and device based on random forest disaggregated model |
US11633146B2 (en) | 2019-01-04 | 2023-04-25 | Regents Of The University Of Minnesota | Automated co-registration of prostate MRI data |
US11631171B2 (en) * | 2019-01-10 | 2023-04-18 | Regents Of The University Of Minnesota | Automated detection and annotation of prostate cancer on histopathology slides |
JP7381003B2 (en) * | 2019-04-26 | 2023-11-15 | Juntendo Educational Foundation | Methods, apparatus and computer programs to assist disease analysis and methods, apparatus and programs for training computer algorithm
US11903650B2 (en) | 2019-09-11 | 2024-02-20 | Ardeshir Rastinehad | Method for providing clinical support for surgical guidance during robotic surgery |
US11977723B2 (en) * | 2019-12-17 | 2024-05-07 | Palantir Technologies Inc. | Image tiling and distributive modification |
CN111242242B (en) * | 2020-02-27 | 2022-04-12 | Wuhan University | Cervical tissue pathology whole-slide image automatic classification method based on confidence degree selection
CN111462075B (en) * | 2020-03-31 | 2023-12-15 | Hunan Guoke Zhitong Technology Co., Ltd. | Rapid refocusing method and system for blurred regions of whole-slide digital pathology images
CN115997129A (en) * | 2020-09-09 | 2023-04-21 | Agilent Technologies, Inc. | Immunohistochemical (IHC) protocols and methods for diagnosis and treatment of cancer
CN112241766B (en) * | 2020-10-27 | 2023-04-18 | Xidian University | Liver CT image multi-lesion classification method based on sample generation and transfer learning
CN112990339B (en) * | 2021-03-31 | 2024-07-12 | Ping An Technology (Shenzhen) Co., Ltd. | Gastric pathological section image classification method, device and storage medium
US12106550B2 (en) * | 2021-04-05 | 2024-10-01 | Nec Corporation | Cell nuclei classification with artifact area avoidance |
CN117355871A (en) * | 2021-04-29 | 2024-01-05 | Mobileye Vision Technologies Ltd. | Multi-frame image segmentation
AU2022309204A1 (en) * | 2021-07-06 | 2024-02-01 | PAIGE.AI, Inc. | Systems and methods to process electronic images to provide blur robustness |
CN116703742B (en) * | 2022-11-04 | 2024-05-17 | Honor Device Co., Ltd. | Method for identifying blurred image and electronic device
CN115713501B (en) * | 2022-11-10 | 2023-06-16 | Shenzhen Tange Intelligent Technology Co., Ltd. | Detection processing method and system for blurred camera images
WO2024210829A1 (en) * | 2023-04-06 | 2024-10-10 | Grabtaxi Holdings Pte. Ltd. | Automated image evaluation |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6024463A (en) | 1983-07-20 | 1985-02-07 | Toshiba Corp | Nuclear magnetic resonance imaging method |
JPH11102414A (en) | 1997-07-25 | 1999-04-13 | Kuraritec Corp | Method and device for correcting optical character recognition by using bitmap selection, and computer-readable record medium recorded with a series of instructions to correct OCR output errors
WO2001093745A2 (en) | 2000-06-06 | 2001-12-13 | The Research Foundation Of State University Of New York | Computer aided visualization, fusion and treatment planning |
US6956373B1 (en) | 2002-01-02 | 2005-10-18 | Hugh Keith Brown | Opposed orthogonal fusion system and method for generating color segmented MRI voxel matrices |
US7801361B2 (en) | 2002-10-15 | 2010-09-21 | Definiens Ag | Analyzing pixel data using image, thematic and object layers of a computer-implemented network structure |
US7711409B2 (en) | 2006-10-04 | 2010-05-04 | Hampton University | Opposed view and dual head detector apparatus for diagnosis and biopsy with image processing methods |
US8229194B2 (en) | 2006-11-16 | 2012-07-24 | Visiopharm A/S | Feature-based registration of sectional images |
US20080144013A1 (en) | 2006-12-01 | 2008-06-19 | Institute For Technology Development | System and method for co-registered hyperspectral imaging |
US8160364B2 (en) | 2007-02-16 | 2012-04-17 | Raytheon Company | System and method for image registration based on variable region of interest |
WO2008107905A2 (en) | 2007-03-08 | 2008-09-12 | Sync-Rx, Ltd. | Imaging and tools for use with moving organs |
US7995864B2 (en) | 2007-07-03 | 2011-08-09 | General Electric Company | Method and system for performing image registration |
US8139831B2 (en) | 2007-12-06 | 2012-03-20 | Siemens Aktiengesellschaft | System and method for unsupervised detection and Gleason grading of prostate cancer whole mounts using NIR fluorescence
US8311344B2 (en) | 2008-02-15 | 2012-11-13 | Digitalsmiths, Inc. | Systems and methods for semantically classifying shots in video |
US8165425B2 (en) | 2008-07-24 | 2012-04-24 | Siemens Medical Solutions Usa, Inc. | Interactive manual deformable registration of images |
US8319793B2 (en) | 2009-04-17 | 2012-11-27 | Definiens Ag | Analyzing pixel data by imprinting objects of a computer-implemented network structure into other objects |
US20130170726A1 (en) | 2010-09-24 | 2013-07-04 | The Research Foundation Of State University Of New York | Registration of scanned objects obtained from different orientations |
US8351676B2 (en) | 2010-10-12 | 2013-01-08 | Sony Corporation | Digital image analysis using multi-step analysis |
US9779283B2 (en) | 2011-01-05 | 2017-10-03 | The Board Of Trustees Of The University Of Illinois | Automated prostate tissue referencing for cancer detection and diagnosis |
US8699769B2 (en) * | 2011-07-12 | 2014-04-15 | Definiens Ag | Generating artificial hyperspectral images using correlated analysis of co-registered images |
WO2013116735A1 (en) | 2012-02-01 | 2013-08-08 | 20/20 Gene Systems, Inc. | Methods for predicting tumor response to targeted therapies |
US9519868B2 (en) | 2012-06-21 | 2016-12-13 | Microsoft Technology Licensing, Llc | Semi-supervised random decision forests for machine learning using mahalanobis distance to identify geodesic paths |
US20140073907A1 (en) | 2012-09-12 | 2014-03-13 | Convergent Life Sciences, Inc. | System and method for image guided medical procedures |
CN104956338A (en) * | 2012-12-04 | 2015-09-30 | Hewlett-Packard Development Company, L.P. | Displaying information technology conditions with heat maps
US9060672B2 (en) | 2013-02-11 | 2015-06-23 | Definiens Ag | Coregistering images of needle biopsies using multiple weighted landmarks |
EP2973397B1 (en) * | 2013-03-15 | 2017-08-02 | Ventana Medical Systems, Inc. | Tissue object-based machine learning system for automated scoring of digital whole slides |
SE538435C2 (en) * | 2014-05-14 | 2016-06-28 | Cellavision Ab | Method, device and computer program product for determining colour transforms between images comprising a plurality of image elements |
US9740957B2 (en) * | 2014-08-29 | 2017-08-22 | Definiens Ag | Learning pixel visual context from object characteristics to generate rich semantic images |
US9805248B2 (en) * | 2014-08-29 | 2017-10-31 | Definiens Ag | Applying pixelwise descriptors to a target image that are generated by segmenting objects in other images |
- 2016
  - 2016-12-27 US US15/391,088 patent/US10438096B2/en active Active
- 2017
  - 2017-12-18 EP EP17207980.8A patent/EP3343440A1/en not_active Withdrawn
- 2019
  - 2019-10-04 US US16/593,968 patent/US10565479B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
EP3343440A1 (en) | 2018-07-04 |
US10438096B2 (en) | 2019-10-08 |
US20180182099A1 (en) | 2018-06-28 |
US10565479B1 (en) | 2020-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10565479B1 (en) | | Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring |
US10474874B2 (en) | | Applying pixelwise descriptors to a target image that are generated by segmenting objects in other images |
US10445557B2 (en) | | Learning pixel visual context from object characteristics to generate rich semantic images |
US11669971B2 (en) | | Colony contrast gathering |
CN110533684B (en) | | Chromosome karyotype image cutting method |
CA3010836C (en) | | Systems and methods for segmentation and processing of tissue images and feature extraction from same for treating, diagnosing, or predicting medical conditions |
EP2681715B1 (en) | | Method and software for analysing microbial growth |
US8335374B2 (en) | | Image segmentation |
US11348231B2 (en) | | Deep learning method for predicting patient response to a therapy |
US20130342694A1 (en) | | Method and system for use of intrinsic images in an automotive driver-vehicle-assistance device |
EP3140778B1 (en) | | Method and apparatus for image scoring and analysis |
CN107730499A (en) | | Leucocyte classification method based on nu-SVMs |
CN109509188A (en) | | Transmission line typical defect recognition method based on HOG features |
CN115294377A (en) | | System and method for identifying road cracks |
US8913829B2 (en) | | Automatic processing scale estimation for use in an image process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: DEFINIENS AG, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LESNIAK, JAN MARTIN;REEL/FRAME:050633/0337 Effective date: 20161223 |
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| AS | Assignment | Owner name: DEFINIENS GMBH, GERMANY Free format text: CHANGE OF NAME;ASSIGNOR:DEFINIENS AG;REEL/FRAME:051554/0001 Effective date: 20190829 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |