US11763461B2 - Specimen container characterization using a single deep neural network in an end-to-end training fashion - Google Patents
- Publication number: US11763461B2 (application US 17/251,756)
- Authority: United States (US)
- Prior art keywords: neural network, convolutional neural, specimen, classification, serum
- Prior art date
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/12: Edge-based segmentation
- G01N 33/49: Blood (physical analysis of liquid biological material)
- G01N 35/04: Details of the conveyor system (automatic analysis using a plurality of sample containers moved by a conveyor system)
- G06F 18/2431: Classification techniques, multiple classes
- G06T 7/174: Segmentation; edge detection involving the use of two or more images
- G06T 7/70: Determining position or orientation of objects or cameras
- G16H 10/40: ICT specially adapted for data related to laboratory analysis, e.g., patient specimen analysis
- G01N 2035/0406: Sample carriers; individual bottles or tubes
- G01N 2035/0491: Position sensing, encoding; closed-loop control
- G01N 2035/0493: Locating samples; identifying different tube sizes
- G06T 2207/20081: Training; Learning
- G06T 2207/20084: Artificial neural networks [ANN]
Description
- This disclosure relates to methods and apparatus for characterizing a specimen container (and specimen therein) in an automated diagnostic analysis system.
- Automated diagnostic analysis systems may conduct assays or clinical analyses using one or more reagents to identify an analyte or other constituent in a specimen such as urine, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, and the like.
- the testing reactions generate various changes that may be read and/or manipulated to determine a concentration of an analyte or other constituent in the specimen.
- Laboratory Automation Systems (LASs) may also automatically transport a specimen in a specimen container to a number of specimen processing stations so various operations (e.g., pre-analytical or analytical testing) can be performed thereon.
- LASs may handle a number of different specimens contained in standard, barcode-labeled specimen containers, which may be of different sizes (e.g., diameters and heights).
- the barcode label may contain an accession number that may contain or be correlated to patient information and other information that may have been entered into a hospital's Laboratory Information System (LIS) along with test orders.
- An operator may place the labeled specimen containers onto the LAS system, which may automatically route the specimen containers for pre-analytical operations such as centrifugation, de-capping, and/or aliquot preparation before the specimen is subjected to clinical analysis or assaying by one or more analyzers (e.g., clinical chemistry or assaying instruments) that may also be part of the LAS.
- a biological liquid such as a serum or plasma portion (obtained from whole blood by centrifugation) may be analyzed.
- a gel separator may be added to the specimen container to aid in the separation of a settled blood portion from the serum or plasma portion.
- the specimen container may be transported to an appropriate analyzer that may extract a portion of the biological fluid (e.g., serum or plasma portion) from the specimen container and combine the fluid with one or more reagents and possibly other materials in a reaction vessel (e.g., a cuvette).
- Analytical measurements may then be performed via photometric or fluorometric absorption readings by using a beam of interrogating radiation or the like. The measurements allow determination of end-point rate or other values, from which an amount of an analyte or other constituent in the biological fluid is determined using well-known techniques.
- The presence of any interferent (e.g., hemolysis, icterus, and/or lipemia) may adversely affect test results of the analyte or constituent measurement obtained from the one or more analyzers.
- the presence of hemolysis (H) in the specimen which may be unrelated to a patient's disease state, may cause a different interpretation of the disease condition of the patient.
- the presence of icterus (I) and/or lipemia (L) in the specimen may also cause a different interpretation of the disease condition of the patient.
- a skilled laboratory technician may visually inspect and rate the integrity of the serum or plasma portion of the specimen as either normal (N) or as having a degree of H, I, and/or L (e.g., by assigning an index). This may involve a review of the color of the serum or plasma portion against known standards.
- pre-screening involves automated detection of an interferent, such as H, I, and/or L, in a serum or plasma portion obtained from whole blood by fractionation (e.g., by centrifugation).
- one or more of the above-described barcode-labels may be affixed directly on the specimen container. Such labels may partially occlude and obscure certain lateral viewpoints of the specimen, so that there may be some orientations that do not provide a clear opportunity to visually observe the serum or plasma portion.
- automation of such pre-screening has included, for example, rotationally orienting the specimen in such a way that allows for automated pre-screening for H, I, and/or L or N (see e.g., U.S. Pat. No. 9,322,761).
- the specimen container and specimen are imaged from multiple viewpoints and processed with model-based systems so that rotation of the specimen container is not needed (see, e.g., WO 2016/133,900).
- In some instances, only a small portion of the serum or plasma portion may be visible, so that any H, I, and/or L, or N reading taken on the serum or plasma portion may not involve a high level of confidence.
- such systems may be complicated and processing of the image data may be computationally burdensome.
- a method of characterizing a specimen container includes capturing multiple images of the specimen container from multiple viewpoints wherein the specimen container includes a serum or plasma portion of a specimen therein; inputting image data from the multiple images to a segmentation convolutional neural network and processing the image data with the segmentation convolutional neural network to simultaneously output multiple label maps; inputting the multiple label maps to a classification convolutional neural network and processing the multiple label maps with the classification convolutional neural network; and outputting from the classification convolutional neural network a classification of the serum or plasma portion as being one or more of hemolytic, icteric, lipemic, or normal.
- a quality check module includes a plurality of image capture devices configured to capture multiple images from multiple viewpoints of a specimen container containing a serum or plasma portion of a specimen therein, and a computer coupled to the plurality of image capture devices.
- the computer is configured and operative to: input image data from the multiple images to a segmentation convolutional neural network and process the image data with the segmentation convolutional neural network to simultaneously output multiple label maps, input the multiple label maps to a classification convolutional neural network and process the multiple label maps with the classification convolutional neural network, and output from the classification convolutional neural network a classification of the serum or plasma portion as being one or more of hemolytic, icteric, lipemic, or normal.
- a specimen testing apparatus in a further aspect, includes a track, a carrier moveable on the track and configured to contain a specimen container containing a serum or plasma portion of a specimen therein, a plurality of image capture devices arranged around the track and configured to capture multiple images from multiple viewpoints of the specimen container and the serum or plasma portion of the specimen, and a computer coupled to the plurality of image capture devices.
- the computer is configured and operative to: input image data from the multiple images to a segmentation convolutional neural network and process the image data with the segmentation convolutional neural network to simultaneously output multiple label maps, input the multiple label maps to a classification convolutional neural network and process the multiple label maps with the classification convolutional neural network; and output from the classification convolutional neural network a classification of the serum or plasma portion as being one or more of hemolytic, icteric, lipemic, or normal.
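The two-stage data flow described above (multi-view images into a segmentation network, whose label maps feed a classification network) can be sketched with stand-in functions. This is an illustrative sketch only, not the patented implementation; the shapes, class counts, and helper names are assumptions:

```python
import numpy as np

# Assumed shapes: 3 viewpoints, 4 spectral/exposure channels, 64x64 pixels.
N_VIEWS, N_CHANNELS, H, W = 3, 4, 64, 64
N_REGIONS = 7            # e.g., serum/plasma, settled blood, gel, air, label, tube, background
HILN_CLASSES = ["H", "I", "L", "N"]

def segment(image_data: np.ndarray) -> np.ndarray:
    """Stand-in for the segmentation CNN: emits one label map per viewpoint."""
    rng = np.random.default_rng(0)
    logits = rng.random((N_VIEWS, N_REGIONS, H, W))
    return logits.argmax(axis=1)          # (views, H, W) per-pixel region labels

def classify(label_maps: np.ndarray) -> str:
    """Stand-in for the classification CNN: consumes all label maps at once."""
    # A trained CCNN would learn this mapping; here we just pick deterministically.
    score = label_maps.sum() % len(HILN_CLASSES)
    return HILN_CLASSES[int(score)]

images = np.zeros((N_VIEWS, N_CHANNELS, H, W))   # consolidated multi-spectral input
label_maps = segment(images)
hiln = classify(label_maps)
assert label_maps.shape == (N_VIEWS, H, W)
assert hiln in HILN_CLASSES
```

The point of the sketch is the coupling: the classification stage never sees raw pixels, only the label maps produced simultaneously by the segmentation stage.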
- FIG. 1 illustrates a top schematic view of a specimen testing apparatus including one or more quality check modules configured to carry out HILN detection methods according to one or more embodiments.
- FIG. 2 illustrates a side view of a specimen container including a separated specimen with a serum or plasma portion containing an interferent, and wherein the specimen container includes a label thereon.
- FIG. 3 A illustrates a side view of a specimen container including a label, a separated specimen including a serum or plasma portion containing an interferent, and a gel separator therein.
- FIG. 3 B illustrates a side view of the specimen container of FIG. 3 A held in an upright orientation in a holder.
- FIG. 4 A illustrates a schematic top view of a quality check module (with top removed) including multiple viewpoints and configured to capture and analyze multiple backlit images to enable a determination of a presence of an interferent according to one or more embodiments.
- FIG. 4 B illustrates a schematic side view of the quality check module (with front enclosure wall removed) of FIG. 4 A taken along section line 4 B- 4 B of FIG. 4 A according to one or more embodiments.
- FIG. 5 illustrates a block diagram of functional components of a quality check module including a single deep neural network (SDNN) configured to determine a presence of H, I, and/or L or N in a specimen according to one or more embodiments.
- FIG. 6 illustrates a block diagram of an architecture of the segmentation convolutional neural network (SCNN) of FIG. 5 according to one or more embodiments.
- FIG. 7 illustrates a block diagram of an architecture of the classification convolutional neural network (CCNN) of FIG. 5 according to one or more embodiments.
- FIG. 8 is a flowchart of a method of determining H, I, and/or L, or N in a specimen according to one or more embodiments.
- the serum or plasma portion may be the liquid component of blood and may be found above the settled blood portion after fractionation (e.g., by centrifugation).
- the settled blood portion may be a packed semi-solid made up of blood cells such as white blood cells (leukocytes), red blood cells (erythrocytes), and platelets (thrombocytes).
- Plasma and serum may differ from each other in the content of coagulating components, primarily fibrinogen. Plasma may be the un-clotted liquid, whereas serum may refer to blood plasma that has been allowed to clot either under the influence of endogenous enzymes or exogenous components.
- As used herein, H refers to hemolysis, I to icterus, L to lipemia, and N to normal; HILN collectively refers to a determination of H, I, and/or L, or N.
- Hemolysis may be defined as a condition in the serum or plasma portion wherein red blood cells are destroyed during processing, which leads to the release of hemoglobin from the red blood cells into the serum or plasma portion such that the serum or plasma portion takes on a reddish hue.
- the degree of hemolysis may be quantified by assigning a Hemolytic Index.
- Icterus may be defined as a condition of the blood where the serum or plasma portion is discolored dark yellow, caused by an accumulation of bile pigment (bilirubin).
- the degree of icterus may be quantified by assigning an Icteric Index.
- Lipemia may be defined as a presence in the blood of an abnormally high concentration of emulsified fat, such that the serum or plasma portion has a whitish or milky appearance. The degree of lipemia may be quantified by assigning a Lipemic Index.
- The method in accordance with embodiments may determine just HILN, or may determine n-class H (e.g., H1, H2, H3, or more), n-class I (e.g., I1, I2, I3, or more), and/or n-class L (e.g., L1, L2, L3, or more), or N.
- the method may classify (or “segment”) various regions of the specimen container and specimen, such as serum or plasma portion, settled blood portion, gel separator (if used), air, label, type of specimen container (indicating, e.g., height and width/diameter), and/or type and/or color of a specimen container cap.
- a specimen container holder or background may also be classified.
- Differentiation of the serum and plasma portion from the region comprising one or more labels on the specimen container is a particularly vexing problem, because the one or more labels may wrap around the specimen container to various degrees. Thus, the one or more labels may obscure one or more views, such that a clear view of the serum or plasma portion may be difficult to obtain.
- classification of the serum or plasma portion may be challenging due to interference from the one or more labels, whose placement may vary substantially from one specimen container to the next.
- the obstruction caused by the one or more labels may heavily influence the spectral responses, such as from various viewpoints, given that the one or more labels may appear on a back side and thus may affect light transmission received at a front side.
- embodiments of this disclosure provide methods and apparatus configured to determine the presence of HILN using a single semantic segmentation convolutional neural network (SCNN) whose output is coupled as input to a classification convolutional neural network (CCNN), which are collectively referred to herein as a single deep neural network (SDNN).
- the SDNN may include a large number of operational layers (e.g., 50-100; other numbers of operational layers are possible), described further below.
- the input to the SCNN may be multi-spectral, multi-exposure image data, which may be consolidated and normalized, and obtained from a plurality of image capture devices.
- An image capture device may be any device capable of capturing a pixelated image (e.g., digital image) for analysis, such as a digital camera, a CCD (charge-coupled device), one or more CMOS (complementary metal-oxide semiconductor) sensors, an array of sensors, or the like.
- the plurality of image capture devices may be arranged and configured to capture images from multiple viewpoints (e.g., three viewpoints; other numbers of viewpoints are possible).
- the methods described herein may use high dynamic range (HDR) image processing of the specimen container and serum or plasma portion as an input to the SCNN.
- HDR imaging may involve capturing multiple exposures while using multiple spectral illuminations.
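One plausible way to consolidate a multi-exposure stack into a single normalized image is to keep, per pixel, the longest exposure that is neither saturated nor underexposed, then divide by that exposure's relative time. This scheme, including the thresholds and the doubling-exposure assumption, is an illustrative assumption rather than the consolidation specified by this patent:

```python
import numpy as np

def consolidate_exposures(stack, low=16, high=240):
    """
    stack: (n_exposures, H, W) images of one spectral channel, shortest
    exposure first; exposure time assumed to double at each step.
    For each pixel, keep the longest exposure whose value is below 'high'
    (not saturated) and above 'low' (not underexposed), then normalize by
    relative exposure time so intensities are comparable across pixels.
    """
    stack = np.asarray(stack, dtype=float)
    out = stack[0].copy()
    chosen = np.zeros(stack.shape[1:], dtype=int)
    for i in range(1, stack.shape[0]):
        ok = (stack[i] < high) & (stack[i] > low)
        out[ok] = stack[i][ok]
        chosen[ok] = i
    return out / (2.0 ** chosen)
```

For example, a pixel reading 10 in the short exposure and 100 in the next would be taken from the longer exposure and normalized to 50; a pixel saturated at 250 in the longer exposure would fall back to its short-exposure reading.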
- the SCNN is trained to recognize regions occluded by one or more labels on the specimen container so that the SCNN can better account for the presence of labels on the back side of the specimen container from any viewpoint in characterizing HILN.
- the specimen may be collected in a specimen container, such as a blood collection tube and may include a settled blood portion and a serum and plasma portion after fractionation (e.g., separation by centrifugation).
- a gel separator may be used, which positions itself between the settled blood portion and the serum or plasma portion during centrifugation.
- the gel separator serves as a physical barrier between the two portions (liquid and semi-solid, settled blood cells), and may minimize remixing thereof.
- the specimen containers may be of different sizes and thus may be supplied for pre-screening and to the analyzers in a number of different configurations.
- the specimen containers may have sizes such as 13 mm ⁇ 75 mm, 13 mm ⁇ 100 mm, 16 mm ⁇ 100 mm, and 16 mm ⁇ 125 mm. Other suitable sizes may be used.
- The characterization method may be carried out by quality check modules and by specimen testing systems, each including the SDNN.
- the SDNN may include operational layers including, e.g., BatchNorm, ReLU activation, convolution (e.g., 2D), dropout, and deconvolution (e.g., 2D) layers to extract features, such as simple edges, texture, and parts of the serum or plasma portion and label-containing regions.
- Top layers, such as fully convolutional layers, may be used to provide correlation between parts.
- the output of the layer may be fed to a SoftMax layer, which produces an output on a per pixel (or per patch—including n ⁇ n pixels) basis concerning whether each pixel or patch includes HILN.
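The per-pixel SoftMax step can be illustrated in a few lines of NumPy; the class count and image shape below are assumptions, and the argmax over the resulting probabilities is one common way to turn them into a label map:

```python
import numpy as np

def pixel_softmax(logits):
    """logits: (n_classes, H, W). Returns per-pixel class probabilities."""
    z = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def label_map(logits):
    """Per-pixel argmax over the softmax probabilities -> segmentation labels."""
    return pixel_softmax(logits).argmax(axis=0)

logits = np.zeros((3, 2, 2))
logits[1, 0, 0] = 5.0        # class 1 dominates pixel (0, 0)
lm = label_map(logits)
assert lm[0, 0] == 1
```

The same operation applies unchanged on a per-patch basis if each spatial cell represents an n x n pixel patch rather than a single pixel.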
- the output of the CCNN may be fine-grained HILN, such as H1, H2, H3, I1, I2, I3, L1, L2, L3, or N, so that for each interferent present an estimate of the level (index) of the interferent is also obtained.
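Decoding such a fine-grained output into an interferent type plus an index level is straightforward string handling; the class-name strings below are illustrative, not the patent's internal encoding:

```python
# Fine-grained HILN classes: interferent letter plus index level, or N.
FINE_CLASSES = ["H1", "H2", "H3", "I1", "I2", "I3", "L1", "L2", "L3", "N"]

def decode(class_name: str):
    """Split a fine-grained HILN class into (interferent, level)."""
    if class_name == "N":
        return ("N", 0)
    return (class_name[0], int(class_name[1:]))

assert decode("H2") == ("H", 2)
assert decode("N") == ("N", 0)
```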
- The ability to pre-screen for HILN may advantageously (a) minimize time wasted analyzing specimens that are not of the proper quality for analysis, (b) avoid or minimize erroneous test results, (c) minimize patient test result delay, and/or (d) avoid wasting of patient specimen.
- combinations of segmentation output and HILN output may be provided.
- the outputs may result from multiple branches of the SDNN.
- the branches may include separate convolutional layers and deconvolution and SoftMax layers, wherein one branch may be dedicated to segmentation and the other to HILN detection.
- Multi-branch embodiments including HILN, segmentation, specimen container type detection, and/or cap type detection may also be provided.
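The multi-branch arrangement, one shared feature trunk feeding both a segmentation head and a HILN classification head, can be sketched with random stand-in weights. Shapes, class counts, and the linear heads are assumptions made only to show the branching structure:

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.random((16, 8, 8))     # shared trunk output (channels, H, W)

# Branch 1: segmentation head -> per-pixel label map over 7 region classes.
seg_w = rng.random((7, 16))
seg_logits = np.einsum("ck,khw->chw", seg_w, features)
seg_map = seg_logits.argmax(axis=0)

# Branch 2: HILN head -> one classification over 4 classes for the whole image.
pooled = features.mean(axis=(1, 2))   # global average pooling
cls_w = rng.random((4, 16))
hiln_idx = int((cls_w @ pooled).argmax())

assert seg_map.shape == (8, 8)
assert 0 <= hiln_idx < 4
```

Both heads consume the same trunk features, which is what lets one network report segmentation, HILN, and (in further branches) container or cap type together.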
- Inventive characterization methods, quality check modules configured to carry out the characterization methods, and specimen testing apparatus including one or more quality check modules will be further described with reference to FIGS. 1 - 8 herein.
- FIG. 1 illustrates a specimen testing apparatus 100 capable of automatically processing multiple specimen containers 102 containing specimens 212 (see, e.g., FIGS. 2 - 3 B ).
- The specimen containers 102 may be provided in one or more racks 104 at a loading area 105 prior to transportation to, and analysis by, one or more analyzers (e.g., first, second, and third analyzer 106 , 108 , and/or 110 , respectively) arranged about the specimen testing apparatus 100 . More or fewer analyzers can be used.
- the analyzers may be any combination of clinical chemistry analyzers and/or assaying instruments, or the like.
- the specimen containers 102 may be any suitably transparent or translucent container, such as a blood collection tube, test tube, sample cup, cuvette, or other clear or opaque glass or plastic container capable of containing and allowing imaging of the specimen 212 contained therein.
- the specimen containers 102 may be varied in size.
- Specimens 212 may be provided to the specimen testing apparatus 100 in the specimen containers 102 , which may be capped with a cap 214 .
- the caps 214 may be of different types and/or colors (e.g., red, royal blue, light blue, green, grey, tan, yellow, or color combinations), which may have meaning in terms of what test the specimen container 102 is used for, the type of additive included therein, whether the container includes a gel separator, or the like. Other colors may be used.
- the cap type may be determined by the characterization method described herein.
- Each of the specimen containers 102 may be provided with a label 218 , which may include identification information 218 i (i.e., indicia) thereon, such as a barcode, alphabetic, numeric, or combination thereof.
- the identification information 218 i may be machine readable at various locations about the specimen testing apparatus 100 .
- the machine readable information may be darker (e.g., black) than the label material (e.g., white paper) so that it can be readily imaged.
- the identification information 218 i may indicate, or may otherwise be correlated, via a Laboratory Information System (LIS) 147 , to a patient's identification as well as tests to be performed on the specimen 212 .
- the identification information 218 i may indicate other or additional information.
- Such identification information 218 i may be provided on the label 218 , which may be adhered to or otherwise provided on an outside surface of the tube 215 . As shown in FIG. 2 , the label 218 may not extend all the way around the specimen container 102 or all along a length of the specimen container 102 such that from the particular front viewpoint shown, a large part of the serum or plasma portion 212 SP is viewable (the part shown dotted) and unobstructed by the label 218 .
- multiple labels 218 may have been provided (such as from multiple facilities that have handled the specimen container 102 ), and they may overlap each other to some extent.
- While the label(s) 218 may occlude some portion of the specimen 212 (an occluded portion), some portion of the specimen 212 and serum or plasma portion 212 SP may still be viewable from at least one viewpoint (an un-occluded portion).
- embodiments of the SDNN configured to carry out the characterization method can be trained to recognize the occluded and un-occluded portions, such that improved HILN detection may be provided.
- the specimen 212 may include the serum or plasma portion 212 SP and a settled blood portion 212 SB contained within the tube 215 .
- Air 216 may be provided above the serum or plasma portion 212 SP, and the line of demarcation between them is defined as the liquid-air interface (LA).
- the line of demarcation between the serum or plasma portion 212 SP and the settled blood portion 212 SB is defined as a serum-blood interface (SB).
- An interface between the air 216 and cap 214 is defined as a tube-cap interface (TC).
- the height of the tube (HT) is defined as a height from a bottom-most part of the tube 215 to a bottom of the cap 214 , and may be used for determining tube size.
- a height of the serum or plasma portion 212 SP is (HSP) and is defined as a height from a top of the serum or plasma portion 212 SP to a top of the settled blood portion 212 SB.
- a height of the settled blood portion 212 SB is (HSB) and is defined as a height from the bottom of the settled blood portion 212 SB to a top of the settled blood portion 212 SB at SB.
- HTOT is a total height of the specimen 212 and equals HSP plus HSB.
- Where a gel separator 313 is used ( FIG. 3 A ), the height of the serum or plasma portion 212 SP (HSP) is defined as a height from the top of the serum or plasma portion 212 SP at LA to the top of the gel separator 313 at SG, wherein SG is an interface between the serum or plasma portion 212 SP and the gel separator 313 .
- a height of the settled blood portion 212 SB is (HSB) and is defined as a height from the bottom of the settled blood portion 212 SB to the bottom of the gel separator 313 at BG, wherein BG is an interface between the settled blood portion 212 SB and the gel separator 313 .
- HTOT is the total height of the specimen 212 and equals HSP plus HSB plus height of the gel separator 313 .
- Tw is a wall thickness
- W is an outer width, which may also be used for determining the size of the specimen container 102
- Wi is an inner width of the specimen container 102 .
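The height relationships above reduce to simple arithmetic once the interfaces (LA, SB or SG, BG) have been located by segmentation. A minimal helper covering both the gel-separator and no-gel cases:

```python
def total_specimen_height(hsp: float, hsb: float, gel_height: float = 0.0) -> float:
    """
    HTOT = HSP + HSB when no gel separator is present;
    HTOT = HSP + HSB + gel separator height when one is used.
    All heights must share the same units (e.g., mm, or pixels
    measured from the segmentation label maps).
    """
    return hsp + hsb + gel_height

assert total_specimen_height(30.0, 40.0) == 70.0        # no gel separator
assert total_specimen_height(30.0, 35.0, 5.0) == 70.0   # with gel separator
```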
- specimen testing apparatus 100 may include a base 120 ( FIG. 1 ) (e.g., a frame, floor, or other structure) upon which a track 121 may be mounted.
- the track 121 may be a railed track (e.g., a mono rail or a multiple rail), a collection of conveyor belts, conveyor chains, moveable platforms, or any other suitable type of conveyance mechanism.
- Track 121 may be circular or any other suitable shape and may be a closed track (e.g., endless track) in some embodiments.
- Track 121 may, in operation, transport individual ones of the specimen containers 102 to various locations spaced about the track 121 in carriers 122 .
- Carriers 122 may be passive, non-motored pucks that may be configured to carry a single specimen container 102 on the track 121 , or optionally, an automated carrier including an onboard drive motor, such as a linear motor that is programmed to move about the track 121 and stop at pre-programmed locations. Other configurations of carrier 122 may be used. Carriers 122 may each include a holder 122 H ( FIG. 3 B ) configured to hold the specimen container 102 in a defined upright position and orientation. The holder 122 H may include a plurality of fingers or leaf springs that secure the specimen container 102 on the carrier 122 , but some may be moveable or flexible to accommodate different sizes of the specimen containers 102 .
- carriers 122 may leave from the loading area 105 after being offloaded from the one or more racks 104 .
- the loading area 105 may serve a dual function of also allowing reloading of the specimen containers 102 from the carriers 122 to the loading area 105 after pre-screening and/or analysis is completed.
- a robot 124 may be provided at the loading area 105 and may be configured to grasp the specimen containers 102 from the one or more racks 104 and load the specimen containers 102 onto the carriers 122 , such as onto an input lane of the track 121 .
- Robot 124 may also be configured to reload specimen containers 102 from the carriers 122 to the one or more racks 104 .
- the robot 124 may include one or more (e.g., at least two) robot arms or components capable of X (lateral) and Z (vertical—out of the paper, as shown), Y and Z, X, Y, and Z, or r (radial) and theta (rotational) motion.
- Robot 124 may be a gantry robot, an articulated robot, an R-theta robot, or other suitable robot wherein the robot 124 may be equipped with robotic gripper fingers oriented, sized, and configured to pick up and place the specimen containers 102 .
- the specimen containers 102 carried by carriers 122 may progress to a first pre-processing station 125 .
- the first pre-processing station 125 may be an automated centrifuge configured to carry out fractionation of the specimen 212 .
- Carriers 122 carrying specimen containers 102 may be diverted to the first pre-processing station 125 by an inflow lane or other suitable robot. After being centrifuged, the specimen containers 102 may exit on an outflow lane, or otherwise be removed by a robot, and continue along the track 121 .
- the specimen container 102 in carrier 122 may next be transported to a quality check module 130 to carry out pre-screening, as will be further described herein with reference to FIGS. 4 A- 8 herein.
- the quality check module 130 is configured to pre-screen and carry out the characterization methods described herein, and is configured to automatically determine a presence of, and possibly an extent of H, I, and/or L contained in a specimen 212 or whether the specimen is normal (N). If found to contain effectively-low amounts of H, I and/or L, so as to be considered normal (N), the specimen 212 may continue on the track 121 and then may be analyzed by the one or more analyzers (e.g., first, second, and/or third analyzers 106 , 108 , and/or 110 ). Thereafter, the specimen container 102 may be returned to the loading area 105 for reloading to the one or more racks 104 .
- segmentation of the specimen container 102 and specimen 212 may take place. From the segmentation data, post processing may be used for quantification of the specimen 212 (i.e., determination of HSP, HSB, HTOT, and determination of location of SB or SG, and LA). In some embodiments, characterization of the physical attributes (e.g., size—height and width/diameter) of the specimen container 102 may take place at the quality check module 130 . Such characterization may include determining HT and W, and possibly TC, and/or Wi. From this characterization, the size of the specimen container 102 may be extracted. Moreover, in some embodiments, the quality check module 130 may also determine cap type, which may be used as a safety check and may catch whether a wrong tube type has been used for the test ordered.
- a remote station 132 may be provided on the specimen testing apparatus 100 that is not directly linked to the track 121 .
- an independent robot 133 may carry specimen containers 102 containing specimens 212 to the remote station 132 and return them after testing/pre-processing.
- the specimen containers 102 may be manually removed and returned.
- Remote station 132 may be used to test for certain constituents, such as a hemolysis level, or may be used for further processing, such as to lower a lipemia level through one or more additions and/or through additional processing, or to remove a clot, bubble or foam, for example. Other pre-screening using the HILN detection methods described herein may be accomplished at remote station 132 .
- Additional station(s) may be provided at one or more locations on or along the track 121 .
- the additional station(s) may include a de-capping station, aliquoting station, one or more additional quality check modules 130 , and the like.
- the specimen testing apparatus 100 may include a number of sensors 116 at one or more locations around the track 121 . Sensors 116 may be used to detect a location of specimen containers 102 on the track 121 by means of reading the identification information 218 i , or like information (not shown) provided on each carrier 122 . Any suitable means for tracking the location may be used, such as proximity sensors. All of the sensors 116 may interface with the computer 143 , so that the location of each specimen container 102 may be known at all times.
- the pre-processing stations and the analyzers 106 , 108 , 110 may be equipped with robotic mechanisms and/or inflow lanes configured to remove carriers 122 from the track 121 , and with robotic mechanisms and/or outflow lanes configured to reenter carriers 122 to the track 121 .
- Specimen testing apparatus 100 may be controlled by the computer 143 , which may be a microprocessor-based central processing unit CPU, having a suitable memory and suitable conditioning electronics and drivers for operating the various system components.
- Computer 143 may be housed as part of, or separate from, the base 120 of the specimen testing apparatus 100 .
- the computer 143 may operate to control movement of the carriers 122 to and from the loading area 105 , motion about the track 121 , motion to and from the first pre-processing station 125 as well as operation of the first pre-processing station 125 (e.g., centrifuge), motion to and from the quality check module 130 as well as operation of the quality check module 130 , and motion to and from each analyzer 106 , 108 , 110 as well as operation of each analyzer 106 , 108 , 110 for carrying out the various types of testing (e.g., assay or clinical chemistry).
- the computer 143 may control the specimen testing apparatus 100 according to software, firmware, and/or hardware commands or circuits such as those used on the Dimension® clinical chemistry analyzer sold by Siemens Healthcare Diagnostics Inc. of Tarrytown, N.Y. Such control is conventional to those skilled in the art of computer-based electromechanical control programming and will not be further described herein.
- Other suitable systems for controlling the specimen testing apparatus 100 may be used.
- the control of the quality check module 130 may also be provided by the computer 143 , but in accordance with the inventive characterization methods described in detail herein.
- the computer 143 used for image processing and to carry out the characterization methods described herein may include a CPU or GPU, sufficient processing capability and RAM, and suitable storage.
- the computer 143 may be a multi-processor-equipped PC with one or more GPUs, 8 GB RAM or more, and a terabyte or more of storage.
- the computer 143 may be a GPU-equipped PC, or optionally a CPU-equipped PC operated in a parallelized mode (e.g., using MKL), with 8 GB RAM or more and suitable storage.
- Embodiments of the disclosure may be implemented using a computer interface module (CIM) 145 that allows a user to easily and quickly access a variety of control and status display screens. These control and status display screens may display and enable control of some or all aspects of a plurality of interrelated automated devices used for preparation and analysis of specimens 212 .
- the CIM 145 may be employed to provide information about the operational status of a plurality of interrelated automated devices as well as information describing the location of any specimen 212 , as well as a status of tests to be performed on, or being performed on, the specimen 212 .
- the CIM 145 is thus adapted to facilitate interactions between an operator and the specimen testing apparatus 100 .
- the CIM 145 may include a display screen operative to display a menu including icons, scroll bars, boxes, and buttons through which the operator may interface with the specimen testing apparatus 100 .
- the menu may comprise a number of function elements programmed to display and/or operate functional aspects of the specimen testing apparatus 100 .
- FIGS. 4 A and 4 B show a first embodiment of a quality check module 130 configured to carry out the characterization methods as shown and described herein.
- Quality check module 130 may be configured to pre-screen for presence of an interferent (e.g., H, I, and/or L) in a specimen 212 (e.g., in a serum or plasma portion 212 SP thereof) prior to analysis by the one or more analyzers 106 , 108 , 110 . Pre-screening in this manner allows for additional processing, additional quantification or characterization, and/or discarding and/or redrawing of a specimen 212 without wasting valuable analyzer resources or possibly having the presence of an interferent affect the veracity of the test results.
- a method may be carried out at the quality check module 130 to provide segmentation as an output from the SDNN.
- the segmentation data may be used in a post processing step to quantify the specimen 212 , i.e., determine certain physical dimensional characteristics of the specimen 212 (e.g., LA and SB, and/or determination of HSP, HSB, and/or HTOT). Quantification may also involve estimating, e.g., a volume of the serum or plasma portion (VSP) and/or a volume of the settled blood portion (VSB). Other quantifiable geometrical features may also be determined.
- the quality check module 130 may be used to quantify geometry of the specimen container 102 , i.e., quantify certain physical dimensional characteristics of the specimen container 102 , such as the location of TC, HT, and/or W or Wi of the specimen container 102 .
- the quality check module 130 may include multiple image capture devices 440 A- 440 C.
- Three image capture devices 440 A- 440 C are shown and are preferred, but optionally two or four or more can be used.
- Image capture devices 440 A- 440 C may be any suitable devices for capturing well-defined digital images, such as conventional digital cameras capable of capturing a pixelated image, charge-coupled devices (CCD), an array of photodetectors, one or more CMOS sensors, or the like.
- the three image capture devices 440 A, 440 B, 440 C are illustrated in FIG. 4 A and are configured to capture images from three different lateral viewpoints (viewpoints labeled 1 , 2 , and 3 ).
- the captured image size may be, e.g., about 2560×694 pixels.
- the image capture devices 440 A, 440 B, 440 C may capture an image size that may be about 1280×387 pixels, for example. Other image sizes and pixel densities may be used.
- Each of the image capture devices 440 A, 440 B, 440 C may be configured and operable to capture lateral images of at least a portion of the specimen container 102 , and at least a portion of the specimen 212 .
- the image capture devices 440 A- 440 C may capture a part of the label 218 and part or all of the serum or plasma portion 212 SP.
- part of a viewpoint 1-3 may be partially occluded by label 218 .
- one or more of the viewpoints 1-3 may be fully occluded, i.e., no clear view of the serum or plasma portion 212 SP may be possible.
- the characterization method may still be able to distinguish the boundaries of the serum or plasma portion 212 SP through the one or more occluding labels 218 .
- the plurality of image capture devices 440 A, 440 B, 440 C are configured to capture lateral images of the specimen container 102 and specimen 212 at an imaging location 432 from the multiple viewpoints 1-3.
- the viewpoints 1-3 may be arranged so that they are approximately equally spaced from one another, such as about 120° from one another, as shown.
- the image capture devices 440 A, 440 B, 440 C may be arranged around the track 121 .
- Other arrangements of the plurality of image capture devices 440 A, 440 B, 440 C may be used. In this way, the images of the specimen 212 in the specimen container 102 may be taken while the specimen container 102 is residing in the carrier 122 at the imaging location 432 .
- the field of view of the multiple images obtained by the image capture devices 440 A, 440 B, 440 C may overlap slightly in a circumferential extent.
- the carriers 122 may be stopped at a pre-determined location in the quality check module 130 , such as at the imaging location 432 , i.e., at a point where normal vectors from each of the image capture devices 440 A, 440 B, 440 C intersect each other.
- a gate or the linear motor of the carrier 122 may be provided to stop the carriers 122 at the imaging location 432 , so that multiple quality images may be captured thereat.
- one or more sensors may be used to determine the presence of a carrier 122 at the quality check module 130 .
- the image capture devices 440 A, 440 B, 440 C may be provided in close proximity to and trained or focused to capture an image window at the imaging location 432 , wherein the image window is an area including an expected location of the specimen container 102 .
- the specimen container 102 may be stopped so that it is approximately located in a center of the view window in some embodiments.
- one or more reference datums may be present within the images captured.
- each image may be triggered and captured responsive to a triggering signal provided in communication lines 443 A, 443 B, 443 C that may be sent by the computer 143 .
- Each of the captured images may be processed by the computer 143 according to one or more embodiments.
- high dynamic range (HDR) processing may be used to capture and process the image data from the captured images.
- multiple images are captured of the specimen 212 at the quality check module 130 at multiple different exposures (e.g., at different exposure times), while being sequentially illuminated at one or more different spectra.
- each image capture device 440 A, 440 B, 440 C may take 4-8 images of the specimen container 102 including the serum or plasma portion 212 SP at different exposure times at each of multiple spectra.
- 4-8 images may be taken by image capture device 440 A at viewpoint 1 while the specimen 212 is backlit illuminated with light source 444 A that has a red spectrum. Additional like images may be taken sequentially at viewpoints 2 and 3.
- the multiple spectral images may be accomplished using different light sources 444 A- 444 C emitting different spectral illumination.
- the light sources 444 A- 444 C may back light the specimen container 102 (as shown).
- a light diffuser may be used in conjunction with the light sources 444 A- 444 C in some embodiments.
- the multiple different spectra light sources 444 A- 444 C may be RGB light sources, such as LEDs emitting nominal wavelengths of 634 nm +/- 35 nm (Red), 537 nm +/- 35 nm (Green), and 455 nm +/- 35 nm (Blue).
- the light sources 444 A- 444 C may be white light sources.
- the light sources 444 A- 444 C may emit one or more spectra having a nominal wavelength between about 700 nm and about 1200 nm.
- three red light sources 444 A- 444 C may be used to sequentially illuminate the specimen 212 from three lateral locations.
- the red illumination by the light sources 444 A- 444 C may occur as the multiple images (e.g., 4-8 images or more) at different exposure times are captured by each image capture device 440 A- 440 C from each viewpoint 1-3.
- the exposure times may be between about 0.1 ms and 256 ms. Other exposure times may be used.
- each of the respective images for each image capture device 440 A- 440 C may be taken sequentially, for example.
- a group of images is sequentially obtained that has red spectral backlit illumination and multiple exposures (e.g., 4-8 exposures at different exposure times).
- the images may be taken in a round robin fashion, for example, where all images from viewpoint 1 are taken followed sequentially by viewpoints 2 and 3.
- the quality check module 130 may include a housing 446 that may at least partially surround or cover the track 121 to minimize outside lighting influences.
- the specimen container 102 may be located inside the housing 446 during the image-taking sequences.
- Housing 446 may include one or more doors 446 D to allow the carriers 122 to enter into and/or exit from the housing 446 .
- the ceiling may include an opening 446 O to allow a specimen container 102 to be loaded into the carrier 122 by a robot including moveable robot fingers from above.
- green spectral light sources 444 A- 444 C may be turned on (nominal wavelength of about 537 nm with a bandwidth of about +/- 35 nm), and multiple images (e.g., 4-8 or more images) at different exposure times may be sequentially captured by each image capture device 440 A, 440 B, 440 C. This may be repeated with blue spectral light sources 444 A- 444 C (nominal wavelength of about 455 nm with a bandwidth of about +/- 35 nm) for each image capture device 440 A, 440 B, 440 C.
- the different nominal wavelength spectral light sources 444 A- 444 C may be accomplished by light panels including banks of different desired spectral light sources (e.g., R, G, B, W, IR, and/or NIR) that can be selectively turned on and off, for example. Other means for backlighting may be used.
- the multiple images taken at multiple exposures (e.g., exposure times) for each respective wavelength spectra may be obtained in rapid succession, such that the entire collection of backlit images for the specimen container 102 and specimen 212 from multiple viewpoints 1-3 may be obtained in less than a few seconds, for example.
- the processing of the image data may involve a preprocessing step including, for example, selection of optimally-exposed pixels from the multiple captured images at the different exposure times at each wavelength spectrum and for each image capture device 440 A- 440 C, so as to generate optimally-exposed image data for each spectrum and for each viewpoint 1-3.
- This is referred to as “image consolidation” herein.
- optimal image intensity may be defined as pixels (or patches) that fall within a predetermined range of intensities (e.g., between 180-254 on a scale of 0-255), for example. In another embodiment, the optimal image intensity may be between 16-254 on a scale of 0-255, for example. If more than one pixel (or patch) in the corresponding pixel (or patch) locations of two exposure images is determined to be optimally exposed, the higher of the two is selected.
- the selected pixels (or patches) exhibiting optimal image intensity may be normalized by their respective exposure times.
- the result is a plurality of normalized and consolidated spectral image data sets for the illumination spectra (e.g., R, G, B, white light, IR, and/or NIR—depending on the combination used) and for each image capture device 440 A- 440 C where all of the pixels (or patches) are optimally exposed (e.g., one image data set per spectrum) and normalized.
- the data pre-processing carried out by the computer 143 results in a plurality of optimally-exposed and normalized image data sets, one for each illumination spectrum employed.
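The image consolidation and normalization steps described above can be sketched as follows. This is an illustrative NumPy implementation; the function name, argument shapes, and the fallback behavior when no exposure qualifies are assumptions, not specifics from the patent:

```python
import numpy as np

def consolidate_exposures(images, exposure_times_ms, lo=180, hi=254):
    """Illustrative sketch of "image consolidation": for each pixel location,
    keep the highest optimally-exposed intensity among the multiple exposures,
    then normalize it by the selected pixel's exposure time."""
    images = np.asarray(images, dtype=np.float32)   # (E, H, W): E exposures
    times = np.asarray(exposure_times_ms, dtype=np.float32)

    # Mark pixels that fall within the optimal intensity range (e.g., 180-254).
    ok = (images >= lo) & (images <= hi)

    # Among qualifying exposures keep the highest intensity; if none qualify
    # at a location, argmax falls back to the first exposure.
    candidates = np.where(ok, images, -1.0)
    best = candidates.argmax(axis=0)                # (H, W) exposure indices
    rows, cols = np.indices(best.shape)
    selected = images[best, rows, cols]

    # Normalize each selected pixel by its exposure time.
    return selected / times[best]
```

Running this once per illumination spectrum and per image capture device yields one optimally-exposed, normalized image data set per spectrum per viewpoint, as described above.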
- FIG. 5 shows apparatus 500 that includes functional components configured to carry out the HILN characterization method described herein.
- Apparatus 500 may be embodied as a quality check module 130 controlled by the computer 143 .
- the specimen container 102 may be provided at the imaging location 432 ( FIGS. 4 A and 4 B ) of the quality check module 130 in functional block 502 .
- the multi-view images are captured in functional block 504 by the plurality of image capture devices 440 A- 440 C.
- the image data for each of the multi-view, multi-spectral, multi-exposure images may be pre-processed in functional block 506 as discussed above to provide a plurality of optimally-exposed and normalized image data sets (hereinafter “image data sets”).
- respective image data from each of the plurality of image capture devices 440 A- 440 C may be stacked, or correlated, as a single input with additional channels corresponding to the number of image capture devices (e.g., three times more channels corresponding to the three image capture devices 440 A- 440 C).
- images from the three image capture devices 440 A- 440 C may be stacked along the channel dimension, wherein each image capture device may generate a hyper-spectrum image having a dimension of 1272×360×6.
- the resulting stacked image input may then have a dimension of 1272×360×18, wherein the first 6 channels belong to the image capture device 440 A, the second 6 channels belong to the image capture device 440 B, and the third 6 channels belong to the image capture device 440 C.
- This stacked image input may be provided to a single deep convolutional neural network (SDNN) 535 .
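The channel-wise stacking described above can be illustrated with NumPy; dummy zero arrays stand in here for the consolidated image data sets:

```python
import numpy as np

# Hypothetical per-camera hyperspectral images: height 1272, width 360,
# 6 channels each (e.g., consolidated R, G, B, white, IR, NIR data sets).
img_a = np.zeros((1272, 360, 6), dtype=np.float32)  # device 440A
img_b = np.zeros((1272, 360, 6), dtype=np.float32)  # device 440B
img_c = np.zeros((1272, 360, 6), dtype=np.float32)  # device 440C

# Stack along the channel dimension: channels 0-5 come from 440A,
# 6-11 from 440B, and 12-17 from 440C.
stacked = np.concatenate([img_a, img_b, img_c], axis=-1)
print(stacked.shape)  # → (1272, 360, 18)
```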
- the SDNN 535 is advantageous over known techniques that separately process images from each image capture device via a respective convolutional neural network. That is, if three image capture devices were used, known techniques included three separate convolutional neural networks and three separate statistical analyses to determine HILN. Such techniques are not memory efficient or computationally efficient. In contrast, by stacking the images from the plurality of image capture devices 440 A- 440 C into a single stacked image input and by processing the stacked image input with the SDNN 535 as described herein, higher memory and computational efficiency is achieved.
- the SDNN 535 may include a segmentation convolutional neural network (SCNN) 536 that receives the stacked image data and simultaneously outputs multiple pixel label maps 537 , wherein the number of pixel label maps 537 corresponds to the number of image capture devices (e.g., three, corresponding to the three image capture devices 440 A- 440 C).
- the SDNN 535 may also include a classification convolutional neural network (CCNN) 538 that receives the multiple pixel label maps 537 as input and outputs a determination of HILN 540 H, 540 I, 540 L, 540 N.
- the SCNN 536 may output serum segmentation information 542 and/or specimen container/cap type information 544 .
- Prior to receiving image data from image capture devices 440 A- 440 C for determining HILN (and/or optionally segmentation and/or cap type information), the SDNN 535 may have been previously trained to recognize HILN and optionally serum segmentation and/or specimen container/cap type.
- the SCNN 536 may first be trained without the CCNN 538 . Multiple sets of training examples may be used to train the SCNN 536 .
- the SCNN 536 may be trained by imaging, with the quality check module 130 , a multitude of samples of specimen containers 102 containing specimens 212 , and by graphically outlining various regions of a multitude of examples of specimens 212 having various specimen HILN conditions, outlining the various regions of occlusion by label 218 , levels of serum or plasma portion 212 SP, and the like.
- class characterization information for each area may be provided. As many as 500 or more, 1,000 or more, 2,000 or more, or even 5,000 or more images may be used for training the SCNN 536 . Each training image may have at least the serum or plasma portion 212 SP, its H, I, L, or N condition identified, various index levels (if output), and the label 218 outlined manually to identify and teach the SCNN 536 the areas that belong to each class that will be a possible output. The SCNN 536 may be tested intermittently with a sample specimen container to see if the SCNN 536 is operating at a sufficiently high level of confidence.
- CCNN 538 may be added to SCNN 536 and both networks may be additionally trained end-to-end, wherein any segmentation and classification losses can be combined at the end. That is, the loss from the CCNN 538 can be back-propagated to the SCNN 536 .
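The end-to-end training scheme described above, in which segmentation and classification losses are combined and the classification loss is back-propagated through the segmentation network, can be sketched in PyTorch. The layer shapes below are toy placeholders, not the architectures of FIGS. 6 and 7:

```python
import torch
import torch.nn as nn

class SDNN(nn.Module):
    """Minimal stand-in for the SDNN: a segmentation branch (SCNN) followed
    by a classification branch (CCNN), trained end-to-end."""
    def __init__(self, in_ch=18, n_seg_classes=7, n_hiln_classes=4):
        super().__init__()
        self.scnn = nn.Sequential(            # toy segmentation branch
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_seg_classes, 1),
        )
        self.ccnn = nn.Sequential(            # toy classification branch
            nn.Conv2d(n_seg_classes, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_hiln_classes),
        )

    def forward(self, x):
        seg_logits = self.scnn(x)             # per-pixel label maps
        cls_logits = self.ccnn(seg_logits)    # HILN class from the maps
        return seg_logits, cls_logits

model = SDNN()
seg_loss_fn = nn.CrossEntropyLoss()
cls_loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())

x = torch.randn(2, 18, 64, 32)                 # dummy stacked input
seg_target = torch.randint(0, 7, (2, 64, 32))  # per-pixel class labels
cls_target = torch.randint(0, 4, (2,))         # HILN labels

seg_logits, cls_logits = model(x)
# Combine the segmentation and classification losses at the end; the
# classification loss then back-propagates through the SCNN as well.
loss = seg_loss_fn(seg_logits, seg_target) + cls_loss_fn(cls_logits, cls_target)
opt.zero_grad()
loss.backward()
opt.step()
```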
- the output of the SDNN 535 may be N-class hemolytic 540 H, N-class icteric 540 I, N-class lipemic 540 L, or normal 540 N, wherein N-class is the number (N) of class options in that interferent class.
- stacked multi-view, multi-spectral, multi-exposure consolidated and normalized image data sets may be input into the SDNN 535 and the image data sets may be operated upon and processed by the SCNN 536 and CCNN 538 .
- the output of the processing by the SDNN 535 may be multiple output possibilities (N-classes) for each of H, I, and L, and/or for each viewpoint.
- N may equal three, wherein the outputs may include H1, H2, and H3 at 540 H; I1, I2, and I3 at 540 I; and L1, L2, L3 at 540 L.
- FIG. 6 illustrates an architecture 636 of SCNN 536 in accordance with one or more embodiments.
- SCNN 536 may be coded using any suitable scientific computing framework, program, or toolbox, such as, for example, Caffe, available from the Berkeley Vision and Learning Center (BVLC); Theano, a Python framework for fast computation of mathematical expressions; TensorFlow; Torch; and the like.
- Architecture 636 may include the following operational layers: two convolutional layers (CONV1 and CONV2) 602 and 630 ; five dense block layers (DB1-DB5) 604 , 610 , 616 , 622 , and 628 ; four concatenation layers (C1-C4) 606 , 612 , 620 , and 626 ; two transition down layers (TD1 and TD2) 608 and 614 ; and two transition up layers (TU1 and TU2) 618 and 624 , arranged as shown in FIG. 6 wherein multiple pixel label maps 637 are output. Note that the input to each dense block layer 604 and 610 is concatenated (at C1 and C2, respectively) with its output, which may result in a linear growth of the number of pixel label maps.
- a first layer receives an input and outputs a number of pixel label maps, which are concatenated to the input.
- a second layer then receives the concatenated output as its input and outputs a number of pixel label maps, which are again concatenated to the previous pixel label maps. This is repeated for each layer in the dense label block.
- Each transition up layer 618 and 624 may include a 3×3 transposed convolutional layer with stride 2.
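The dense-block behavior described above, in which each layer's output feature maps are concatenated with its input so the channel count grows linearly, can be sketched in PyTorch. The channel counts and growth rate here are illustrative, not those of architecture 636:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer's output is concatenated with its input, so the channel
    count grows linearly with depth (by the growth rate per layer)."""
    def __init__(self, in_ch, growth_rate=4, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth_rate, 3, padding=1), nn.ReLU()))
            ch += growth_rate
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # concatenate input + output
        return x

block = DenseBlock(in_ch=6, growth_rate=4, n_layers=3)
y = block(torch.randn(1, 6, 32, 32))
print(y.shape)  # 6 input channels + 3 layers × 4 maps = 18 channels
```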
- FIG. 7 illustrates an architecture 738 of CCNN 538 in accordance with one or more embodiments.
- CCNN 538 may be coded using any suitable scientific computing framework, program, or toolbox, such as, for example, Caffe, available from the Berkeley Vision and Learning Center (BVLC); Theano, a Python framework for fast computation of mathematical expressions; TensorFlow; Torch; and the like.
- Architecture 738 may include the following operational layers: five sets of convolutional layers (CONV1-CONV5) 702 , 706 , 710 , 714 , and 718 and max pooling layers (POOL1-POOL5) 704 , 708 , 712 , 716 , and 720 , followed by two fully-connected layers (FC1 and FC2) 722 and 724 , followed by a softmax layer 726 .
- Convolutional layer 702 receives as input multiple pixel label maps 737 , which may be multiple pixel label maps 537 or 637 ( FIGS. 5 and 6 , respectively).
- Convolutional layer 702 may be one or more 3×3 convolutional layers of depth of 64 (i.e., 64 filters).
- Convolutional layer 706 may be one or more 3×3 convolutional layers of depth of 128 (i.e., 128 filters).
- Convolutional layer 710 may be one or more 3×3 convolutional layers of depth of 256 (i.e., 256 filters).
- Convolutional layer 714 may be one or more 3×3 convolutional layers of depth of 512 (i.e., 512 filters).
- Convolutional layer 718 may be one or more 3×3 convolutional layers also of depth of 512 (i.e., 512 filters).
- Fully-connected layers 722 and 724 may each be of size 4096, while the softmax layer 726 may be of size 1000.
- a convolution layer is a processing step that may apply a filter (also referred to as a kernel) to input image data (e.g., pixel intensity values) to output an activation map that may indicate detection of some specific type of feature (e.g., from a simple curve after application of a first convolution layer to somewhat more complex features after application of several convolution layers) at some spatial position in the input image data.
- a max pooling layer is a processing step that may apply a filter to generate output activation maps having maximum pixel values appearing in the one or more activation maps received from a convolutional layer.
- a ReLU (rectified linear unit) layer is a processing step that may apply a nonlinear function to all values in a received activation map resulting in, e.g., all negative activation values being assigned a value of zero.
- a fully connected layer is a processing step that aggregates previous activation maps (each of which may indicate detection of lower level features) to indicate detection of higher-level features.
- a softmax layer is typically a final processing step that outputs a probability distribution highlighting or identifying the most likely feature of one or more images from a class of image features.
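The softmax operation described above can be shown with a small numeric example (an illustrative NumPy sketch with three hypothetical class scores):

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability, then normalize to sum to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Raw scores for three hypothetical classes; softmax converts them into a
# probability distribution that highlights the most likely class.
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs.round(3))  # → [0.659 0.242 0.099]
```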
- the single deep convolutional neural network design has higher memory and computational efficiency as well as increased system performance as compared to other known convolutional neural networks.
- FIG. 8 illustrates a flowchart of a characterization method 800 according to embodiments of the disclosure.
- the characterization method 800 may be carried out by quality check module 130 as described herein.
- the characterization method 800 may determine a presence of an interferent in a specimen 212 according to one or more embodiments.
- the characterization method 800 includes, in process block 802 , capturing multiple images of a specimen container (e.g., specimen container 102 ) including a serum or plasma portion (e.g., serum or plasma portion 212 SP) of a specimen (e.g., specimen 212 ) from multiple viewpoints (e.g., viewpoints 1, 2, and 3).
- the specimen container 102 may include one or more labels (e.g., label 218 ) thereon.
- the one or more images may be digital, pixelated images captured using one or more image capture devices (e.g., image capture devices 440 A- 440 C).
- the characterization method 800 further includes, in process block 804 , inputting image data (e.g., stacked, consolidated, and normalized image data sets) from the multiple images to a segmentation convolutional neural network (e.g., SCNN 536 ) and processing the image data with the SCNN to simultaneously output multiple label maps.
- the characterization method 800 includes inputting the multiple label maps to a classification convolutional neural network (e.g., CCNN 538 ) and processing the multiple label maps with the CCNN.
- the processing may be accomplished by the computer 143 described herein after suitable training of the SCNN 536 and the CCNN 538 .
- the characterization method 800 further includes, in process block 808 , outputting from the classification convolutional neural network (e.g., CCNN 538 ) a classification of the serum or plasma portion as being one or more of hemolytic, icteric, lipemic, and normal (i.e., H, I, L, H and I, H and L, I and L, H, I, and L, or N).
- the multiple images from the multiple viewpoints may be captured at different exposure times and/or at a different spectral illumination (e.g., R, G, B, white light, IR, and/or near IR). For example, there may be 4-8 different exposures or more taken at different exposure times for each viewpoint under the different spectral illumination conditions.
- a segmentation of the image data sets may be obtained.
- the method 800 may, in process block 810 , output from the SCNN (e.g., SCNN 536 ) a segmentation of the specimen container 102 and specimen 212 .
- the image data may be segmented into N′-classes (e.g., 7 classes), such as (1) Tube, (2) Gel Separator, (3) Cap, (4) Air, (5) Label, (6) Settled Blood Portion, and/or (7) Serum or Plasma Portion. Other numbers of classes may be used.
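The N′-class segmentation above can be illustrated with a simple index-to-class mapping and a per-class pixel tally. The index order and variable names are illustrative assumptions, not the patent's:

```python
import numpy as np

# Hypothetical index-to-class mapping for the 7-class segmentation output.
SEG_CLASSES = {
    0: "Tube",
    1: "Gel Separator",
    2: "Cap",
    3: "Air",
    4: "Label",
    5: "Settled Blood Portion",
    6: "Serum or Plasma Portion",
}

# Given a per-pixel label map, the pixel area of each class can be tallied,
# e.g., for later quantification of the serum or plasma portion.
rng = np.random.default_rng(0)
label_map = rng.integers(0, 7, size=(1272, 360))   # dummy SCNN output
areas = {name: int((label_map == idx).sum())
         for idx, name in SEG_CLASSES.items()}
```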
- the characterization method 800 may also optionally include, in process block 812 , outputting from the SCNN (e.g., SCNN 536 ) a cap type ( 544 ), which may be a specific cap shape or cap color that was pre-trained into the SCNN 536 and the CCNN 538 .
- The result is an improved characterization method 800 that better characterizes the serum or plasma portion 212 SP by accounting for labels that may occlude one or more of the viewpoints.
- the improved characterization may be used to provide a rapid and robust characterization of a presence of HILN in the specimen 212 , and in some embodiments, an interferent level (H1, H2, H3, I1, I2, I3, L1, L2, L3) may be assessed and output from the CCNN 538 .
- a quality check module comprising a plurality of image capture devices (e.g., image capture devices) 440 A- 440 C arranged around an imaging location (e.g., imaging location 432 ), and configured to capture multiple images from multiple viewpoints (e.g., multiple viewpoints 1-3) of a specimen container 102 including one or more labels 218 and containing a serum or plasma portion 212 SP of a specimen 212 , and a computer (e.g., computer 143 ) coupled to the plurality of image capture devices and configured to process image data of the multiple images.
- the computer may be configured and capable of being operated to process and stack the multiple images from the multiple viewpoints (e.g., viewpoints 1-3) to provide HILN determination or HILN determination in combination with segmentation for each of the multiple viewpoints.
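The stacking of per-viewpoint images into a single network input can be sketched as follows; the dict-based interface and channel-first (C, H, W) layout are illustrative assumptions, not the patent's specified data format:

```python
import numpy as np

def stack_views(images_by_view):
    """Stack each viewpoint's image set along the channel axis so a
    single network forward pass sees all exposures/spectral channels
    of that viewpoint at once.

    images_by_view: dict mapping viewpoint id -> list of (H, W) arrays.
    Returns: dict mapping viewpoint id -> (C, H, W) float32 tensor.
    """
    return {view: np.stack(imgs).astype(np.float32)
            for view, imgs in images_by_view.items()}
```

With, say, 4 spectral channels times 3 exposures, each viewpoint yields a 12-channel tensor ready for a per-viewpoint HILN (or HILN-plus-segmentation) pass.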
- the characterization method 800 may be carried out on a specimen testing apparatus 100 that includes the quality check module 130 .
- the specimen testing apparatus 100 may include a track 121 , and a carrier 122 moveable on the track 121 .
- the carrier 122 may be configured to contain and support the specimen container 102 including the one or more labels 218 and containing a serum or plasma portion 212 SP of a specimen 212 and to carry the specimen container 102 to the quality check module 130 to accomplish the characterization and the pre-screening for the presence of an interferent.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Theoretical Computer Science (AREA)
- Chemical & Material Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Hematology (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- Pathology (AREA)
- Immunology (AREA)
- Ecology (AREA)
- Biophysics (AREA)
- Medicinal Chemistry (AREA)
- Food Science & Technology (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Urology & Nephrology (AREA)
- Epidemiology (AREA)
- Public Health (AREA)
- Primary Health Care (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Automatic Analysis And Handling Materials Therefor (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
Description
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/251,756 US11763461B2 (en) | 2018-06-15 | 2019-06-10 | Specimen container characterization using a single deep neural network in an end-to-end training fashion |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862685344P | 2018-06-15 | 2018-06-15 | |
PCT/US2019/036351 WO2019241134A1 (en) | 2018-06-15 | 2019-06-10 | Specimen container characterization using a single deep neural network in an end-to-end training fashion |
US17/251,756 US11763461B2 (en) | 2018-06-15 | 2019-06-10 | Specimen container characterization using a single deep neural network in an end-to-end training fashion |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210164965A1 US20210164965A1 (en) | 2021-06-03 |
US11763461B2 true US11763461B2 (en) | 2023-09-19 |
Family
ID=68843531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/251,756 Active 2040-06-20 US11763461B2 (en) | 2018-06-15 | 2019-06-10 | Specimen container characterization using a single deep neural network in an end-to-end training fashion |
Country Status (5)
Country | Link |
---|---|
US (1) | US11763461B2 (en) |
EP (1) | EP3807650A4 (en) |
JP (1) | JP7089071B2 (en) |
CN (1) | CN112639482B (en) |
WO (1) | WO2019241134A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3659065A4 (en) * | 2017-07-28 | 2020-08-19 | Siemens Healthcare Diagnostics Inc. | Deep learning volume quantifying methods and apparatus |
EP3853615B1 (en) | 2018-09-20 | 2024-01-17 | Siemens Healthcare Diagnostics, Inc. | Methods and apparatus for hiln determination with a deep adaptation network for both serum and plasma samples |
CN113592842B (en) * | 2021-08-09 | 2024-05-24 | 南方医科大学南方医院 | Sample serum quality identification method and identification equipment based on deep learning |
EP4216170A1 (en) | 2022-01-19 | 2023-07-26 | Stratec SE | Instrument parameter determination based on sample tube identification |
LU102902B1 (en) * | 2022-01-19 | 2023-07-19 | Stratec Se | Instrument parameter determination based on sample tube identification |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9322761B2 (en) | 2009-08-13 | 2016-04-26 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for ascertaining interferents and physical dimensions in liquid samples and containers to be analyzed by a clinical analyzer |
CN105825509A (en) | 2016-03-17 | 2016-08-03 | 电子科技大学 | Cerebral vessel segmentation method based on 3D convolutional neural network |
WO2016133900A1 (en) | 2015-02-17 | 2016-08-25 | Siemens Healthcare Diagnostics Inc. | Model-based methods and apparatus for classifying an interferent in specimens |
CN106372390A (en) | 2016-08-25 | 2017-02-01 | 姹ゅ钩 | Deep convolutional neural network-based lung cancer preventing self-service health cloud service system |
CN106408562A (en) | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
WO2017106645A1 (en) | 2015-12-18 | 2017-06-22 | The Regents Of The University Of California | Interpretation and quantification of emergency features on head computed tomography |
US20170364771A1 (en) | 2016-06-17 | 2017-12-21 | Pedro Henrique Oliveira Pinheiro | Generating Object Proposals Using Deep-Learning Models |
WO2018022280A1 (en) | 2016-07-25 | 2018-02-01 | Siemens Healthcare Diagnostics Inc. | Systems, methods and apparatus for identifying a specimen container cap |
WO2018039380A1 (en) | 2016-08-26 | 2018-03-01 | Elekta, Inc. | Systems and methods for image segmentation using convolutional neural network |
WO2018089938A1 (en) | 2016-11-14 | 2018-05-17 | Siemens Healthcare Diagnostics Inc. | Methods, apparatus, and quality check modules for detecting hemolysis, icterus, lipemia, or normality of a specimen |
WO2018105062A1 (en) | 2016-12-07 | 2018-06-14 | オリンパス株式会社 | Image processing device and image processing method |
CN108596166A (en) | 2018-04-13 | 2018-09-28 | 华南师范大学 | A kind of container number identification method based on convolutional neural networks classification |
WO2018191287A1 (en) | 2017-04-13 | 2018-10-18 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for hiln characterization using convolutional neural network |
WO2018188023A1 (en) | 2017-04-13 | 2018-10-18 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for determining label count during specimen characterization |
US10198832B2 (en) | 2017-06-28 | 2019-02-05 | Deepmind Technologies Limited | Generalizable medical image analysis using segmentation and classification neural networks |
WO2019241128A1 (en) | 2018-06-15 | 2019-12-19 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for fine-grained hil index determination with advanced semantic segmentation and adversarial training |
US20210064927A1 (en) * | 2018-01-10 | 2021-03-04 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for bio-fluid specimen characterization using neural network having reduced training |
US20210334972A1 (en) * | 2018-09-20 | 2021-10-28 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for hiln determination with a deep adaptation network for both serum and plasma samples |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108603998B (en) * | 2015-12-16 | 2021-12-31 | 文塔纳医疗系统公司 | Method, system and product for obtaining focused images of a specimen on a slide by determining an optimal scan trajectory |
US10746753B2 (en) * | 2016-01-28 | 2020-08-18 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for multi-view characterization |
2019
- 2019-06-10 US US17/251,756 patent/US11763461B2/en active Active
- 2019-06-10 JP JP2020569857A patent/JP7089071B2/en active Active
- 2019-06-10 EP EP19819602.4A patent/EP3807650A4/en active Pending
- 2019-06-10 CN CN201980039811.3A patent/CN112639482B/en active Active
- 2019-06-10 WO PCT/US2019/036351 patent/WO2019241134A1/en unknown
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9322761B2 (en) | 2009-08-13 | 2016-04-26 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for ascertaining interferents and physical dimensions in liquid samples and containers to be analyzed by a clinical analyzer |
WO2016133900A1 (en) | 2015-02-17 | 2016-08-25 | Siemens Healthcare Diagnostics Inc. | Model-based methods and apparatus for classifying an interferent in specimens |
US20180045654A1 (en) | 2015-02-17 | 2018-02-15 | Siemens Healthcare Diagnostics Inc. | Model-based methods and apparatus for classifying an interferent in specimens |
JP2019500110A (en) | 2015-12-18 | 2019-01-10 | ザ リージェンツ オブ ザ ユニバーシティ オブ カリフォルニア | Interpretation and quantification of urgency features in head computed tomography |
WO2017106645A1 (en) | 2015-12-18 | 2017-06-22 | The Regents Of The University Of California | Interpretation and quantification of emergency features on head computed tomography |
CN105825509A (en) | 2016-03-17 | 2016-08-03 | 电子科技大学 | Cerebral vessel segmentation method based on 3D convolutional neural network |
US20170364771A1 (en) | 2016-06-17 | 2017-12-21 | Pedro Henrique Oliveira Pinheiro | Generating Object Proposals Using Deep-Learning Models |
WO2018022280A1 (en) | 2016-07-25 | 2018-02-01 | Siemens Healthcare Diagnostics Inc. | Systems, methods and apparatus for identifying a specimen container cap |
JP2019531463A (en) | 2016-07-25 | 2019-10-31 | シーメンス・ヘルスケア・ダイアグノスティックス・インコーポレーテッドSiemens Healthcare Diagnostics Inc. | System, method and apparatus for identifying a sample container cap |
CN106372390A (en) | 2016-08-25 | 2017-02-01 | 姹ゅ钩 | Deep convolutional neural network-based lung cancer preventing self-service health cloud service system |
JP2019531783A (en) | 2016-08-26 | 2019-11-07 | エレクタ、インク.Elekta, Inc. | System and method for image segmentation using convolutional neural networks |
WO2018039380A1 (en) | 2016-08-26 | 2018-03-01 | Elekta, Inc. | Systems and methods for image segmentation using convolutional neural network |
CN106408562A (en) | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
JP2019537011A (en) | 2016-11-14 | 2019-12-19 | シーメンス・ヘルスケア・ダイアグノスティックス・インコーポレーテッドSiemens Healthcare Diagnostics Inc. | Method, apparatus and quality check module for detecting hemolysis, jaundice, lipemia or normality of a sample |
WO2018089938A1 (en) | 2016-11-14 | 2018-05-17 | Siemens Healthcare Diagnostics Inc. | Methods, apparatus, and quality check modules for detecting hemolysis, icterus, lipemia, or normality of a specimen |
WO2018105062A1 (en) | 2016-12-07 | 2018-06-14 | オリンパス株式会社 | Image processing device and image processing method |
WO2018188023A1 (en) | 2017-04-13 | 2018-10-18 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for determining label count during specimen characterization |
WO2018191287A1 (en) | 2017-04-13 | 2018-10-18 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for hiln characterization using convolutional neural network |
US20200158745A1 (en) * | 2017-04-13 | 2020-05-21 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for determining label count during specimen characterization |
US10198832B2 (en) | 2017-06-28 | 2019-02-05 | Deepmind Technologies Limited | Generalizable medical image analysis using segmentation and classification neural networks |
US20210064927A1 (en) * | 2018-01-10 | 2021-03-04 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for bio-fluid specimen characterization using neural network having reduced training |
JP2021510201A (en) | 2018-01-10 | 2021-04-15 | シーメンス・ヘルスケア・ダイアグノスティックス・インコーポレイテッド | Methods and equipment for characterization of biofluid specimens using less trained neural networks |
CN108596166A (en) | 2018-04-13 | 2018-09-28 | 华南师范大学 | A kind of container number identification method based on convolutional neural networks classification |
WO2019241128A1 (en) | 2018-06-15 | 2019-12-19 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for fine-grained hil index determination with advanced semantic segmentation and adversarial training |
US20210133971A1 (en) * | 2018-06-15 | 2021-05-06 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for fine-grained hil index determination with advanced semantic segmentation and adversarial training |
US20210334972A1 (en) * | 2018-09-20 | 2021-10-28 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for hiln determination with a deep adaptation network for both serum and plasma samples |
Non-Patent Citations (12)
Title |
---|
"Convolutional neural network", https://ml4a.github.io/ml4a/jp/convnets/. |
Deshpande, Adit, "A Beginner's Guide to Understanding Convolutional Neural Networks Part 1", 13 pages (Jul. 20, 2016). |
Deshpande, Adit, "A Beginner's Guide to Understanding Convolutional Neural Networks Part 2", 6 pages (Jul. 29, 2016). |
Extended EP Search Report dated Oct. 19, 2021 of corresponding European Application No. 19819602.4, 5 Pages. |
For understanding of basis of convolutional neural network, Jul. 25, 2018, https://www.imagazine.co.jp/%E7%95%B3%E3%81%BF%E8%BE%BC%E3%81%BF%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF%E3%81%AE%E3%80%8C%E5%9F%BA%E7%A4%8E%E3%81%AE%E5%9F%BA%E7%A4%8E%3%80%8D%E3%82%92%E7%90%86%E8%A7%A3%E3%81%99/. |
For understanding of basis of convolutional neural network, Jun. 7, 2021, https://leadinge.co.jp/rd/2021/06/07/863/. |
Aso, Hideki, "Deep Representation Learning by Multi-Layer Neural Networks"; The Japanese Society for Artificial Intelligence; Jul. 2013, vol. 28, No. 4, pp. 649-659. |
Jegou, Simon et al., "The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation"; arXiv:1611.09326v3 [cs.CV] Oct. 31, 2017, 9 pages. |
Lecun, Yann et al., Gradient-based learning applied to document recognition, Proceedings of the IEEE, Nov. 1998, vol. 86, Issue No. 11. |
PCT International Search Report and Written Opinion dated Aug. 29, 2019 (7 Pages). |
Ren, Shaoqing et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks"; Computer Vision and Pattern Recognition, arXiv:1506.01497, Jan. 6, 2016. |
Simonyan, Karen, et al., "Very Deep Convolutional Networks for Large-Scale Image Recognition"; Published as a conference paper at ICLR 2015; arXiv:1409.1556v6 [cs.CV] Apr. 10, 2015, pp. 1-14. |
Also Published As
Publication number | Publication date |
---|---|
CN112639482A (en) | 2021-04-09 |
EP3807650A1 (en) | 2021-04-21 |
CN112639482B (en) | 2024-08-13 |
JP2021527219A (en) | 2021-10-11 |
US20210164965A1 (en) | 2021-06-03 |
JP7089071B2 (en) | 2022-06-21 |
EP3807650A4 (en) | 2021-11-17 |
WO2019241134A1 (en) | 2019-12-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11238318B2 (en) | Methods and apparatus for HILN characterization using convolutional neural network | |
US11313869B2 (en) | Methods and apparatus for determining label count during specimen characterization | |
CN108603817B (en) | Method and apparatus adapted to identify sample containers from multiple side views | |
US11386291B2 (en) | Methods and apparatus for bio-fluid specimen characterization using neural network having reduced training | |
US11657593B2 (en) | Deep learning volume quantifying methods and apparatus | |
US11763461B2 (en) | Specimen container characterization using a single deep neural network in an end-to-end training fashion | |
US11927736B2 (en) | Methods and apparatus for fine-grained HIL index determination with advanced semantic segmentation and adversarial training | |
EP3610270B1 (en) | Methods and apparatus for label compensation during specimen characterization | |
US11852642B2 (en) | Methods and apparatus for HILN determination with a deep adaptation network for both serum and plasma samples | |
CN112689763B (en) | Hypothesis and verification network and method for sample classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS | Assignment |
Owner name: SIEMENS HEALTHCARE DIAGNOSTICS INC., NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:POLLACK, BENJAMIN S.;REEL/FRAME:063582/0817
Effective date: 20180713
Owner name: SIEMENS HEALTHCARE DIAGNOSTICS INC., NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS MEDICAL SOLUTIONS USA, INC.;REEL/FRAME:063582/0695
Effective date: 20180802
Owner name: SIEMENS MEDICAL SOLUTIONS USA, INC., PENNSYLVANIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MA, KAI;CHANG, YAO-JEN;CHEN, TERRENCE;SIGNING DATES FROM 20170717 TO 20180713;REEL/FRAME:063582/0568

STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF | Information on status: patent grant |
Free format text: PATENTED CASE