CN114947912A - Medical image display device, medical image display method, and storage medium - Google Patents
- Publication number: CN114947912A
- Application number: CN202111441480.1A
- Authority: CN (China)
- Prior art keywords: medical image, display, eyeball, display mode, image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B6/032: Transmission computed tomography [CT]
- A61B5/0033: Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
- A61B5/0035: adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
- A61B5/004: adapted for image acquisition of a particular organ or body part
- A61B5/0042: adapted for image acquisition of the brain
- A61B5/055: Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
- A61B6/50: Apparatus or devices for radiation diagnosis specially adapted for specific body parts or specific clinical applications
- A61B6/501: for diagnosis of the head, e.g. neuroimaging or craniography
- A61B6/504: for diagnosis of blood vessels, e.g. by angiography
- A61B6/52: Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5205: involving processing of raw data to produce diagnostic data
- A61B6/5211: involving processing of medical diagnostic data
- A61B6/5235: combining image data of a patient, e.g. combining a functional image with an anatomical image; combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
- A61B6/5247: combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
- G06T7/0012: Biomedical image inspection
- G06T2207/10081: Computed x-ray tomography [CT]
- G06T2207/30041: Eye; Retina; Ophthalmic
Abstract
An object is to provide a medical image display apparatus, a medical image display method, and a storage medium that improve the visibility of the line-of-sight direction in a medical image. The medical image display device according to the embodiment includes an acquisition unit, a line-of-sight detection unit, and a display control unit. The acquisition unit acquires a medical image including at least one eyeball of a subject. The line-of-sight detection unit detects a line-of-sight direction of the eyeball included in the medical image. The display control unit determines a display mode of the medical image based on the line-of-sight direction of the eyeball, and displays the medical image on a display unit in the determined display mode.
Description
Related application:
This application claims priority from U.S. patent application 17/180098, filed on February 19, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments described herein relate generally to a medical image display apparatus and a medical image display method for displaying a medical image, such as an image of a patient suspected of having a stroke, and to a computer-readable non-volatile storage medium storing a program.
Background
Stroke is an example of a serious, life-threatening medical condition that sometimes requires urgent medical treatment. Typically, a non-contrast CT (NCCT) scan is performed first in stroke diagnosis. NCCT results are sometimes used to exclude hemorrhagic stroke as the cause. NCCT results are sometimes used to rule out conditions that mimic stroke symptoms, such as seizures and brain tumors. NCCT results are also sometimes used to identify, for example, high-density (dense) vessels indicative of thrombus and/or to detect ischemia.
A subsequent CT angiography (CTA) scan, which combines a CT scan with the injection of a contrast agent, is sometimes used to confirm the initial diagnosis and/or to obtain further information useful for determining a therapy.
One cause of ischemic stroke is, for example, the presence of a large vessel occlusion (LVO), which is an acute occlusion of a vessel of the anterior or posterior circulation.
If the patient has a large vessel occlusion, thrombus removal may be an appropriate treatment. In mechanical thrombus removal, a thrombus obstructing blood flow is removed using a thrombus removal device delivered via an intravascular catheter, thereby restoring blood flow. To receive thrombus removal treatment, a patient sometimes has to be transported to a hospital where thrombus removal can be performed. It is therefore sometimes necessary, in an emergency, to determine whether a large vessel occlusion is present and whether thrombus removal is indicated.
Deciding which patients have a large vessel occlusion and are eligible for potentially life-saving thrombus removal therapy is an important part of the clinical workflow for stroke. Growing evidence of the benefit of intravascular thrombus removal reinforces the need to rapidly identify patients who are likely to be suitable.
According to the American Heart Association (AHA) guidelines, determining whether a patient has a large vessel occlusion typically requires an intracranial vascular imaging study. Several problems are associated with this. The expert knowledge needed to interpret a vascular assessment is sometimes unavailable. The timing of the contrast agent is sometimes problematic, resulting in a non-diagnostic study. Some patients have contraindications to the iodine contrast agents used in vascular assessment. Furthermore, intracranial vascular assessment is not routinely performed in all hospitals.
In a non-contrast evaluation, the presence of certain imaging features sometimes indicates the presence of an occlusion. Such signs of occlusion tend to have high specificity but low sensitivity. One such imaging feature is the hyperdense artery sign (HAS) in non-contrast computed tomography (NCCT). Another such imaging feature is the susceptibility vessel sign (SVS) in a T2 gradient echo (GRE) magnetic resonance imaging (MRI) study.
Clinical triage criteria are sometimes used to indicate whether a large vessel occlusion is present. Examples of clinical criteria that may be used for LVO triage include the Rapid Arterial oCclusion Evaluation (RACE) criteria and the Cincinnati Prehospital Stroke criteria.
Studies have shown that gaze deviation (conjugate eye deviation) can identify patients with a high likelihood of large vessel occlusion. Most clinical standards used in LVO triage (e.g., RACE or Cincinnati) include gaze deviation as an element.
In some cases, however, the sensitivity and specificity of non-contrast imaging features or clinical triage criteria are inadequate for identifying all patients who are likely to be suitable thrombus removal candidates.
Several currently available LVO triage methods evaluate a CTA assessment for the presence of a potential LVO. In such currently available LVO triage methods, the LVO is indicated directly.
Disclosure of Invention
Problems to be solved by the invention:
one of the problems to be solved by the embodiments disclosed in the present specification and the like is to improve visibility in the line of sight direction in a medical image. However, the problems to be solved by the embodiments disclosed in the present specification and the drawings are not limited to the above problems. Problems corresponding to the respective configurations shown in the embodiments described later can be also positioned as other problems.
Means for solving the problems:
a medical image display device according to an embodiment includes an acquisition unit, a line-of-sight detection unit, and a display control unit. The acquisition unit acquires a medical image including an eyeball of at least one of subjects. The visual line detection unit detects a visual line direction of the eyeball included in the medical image. The display control unit determines a display mode of the medical image based on a visual line direction of the eyeball, and displays the medical image on the display unit in the determined display mode.
Drawings
The embodiments are described herein by way of example and not by way of limitation, and are shown in the following drawings.
Fig. 1 is a schematic diagram of an apparatus according to an embodiment.
Fig. 2 is a flowchart schematically showing a method according to the embodiment.
Fig. 3 is a schematic diagram of notification according to the embodiment.
Fig. 4 is a schematic diagram of a user interface according to an embodiment.
Fig. 5 is a schematic diagram of a user interface according to an embodiment.
Fig. 6 is a schematic diagram of a user interface according to an embodiment with persistent eye deviation.
Fig. 7 is a schematic diagram of a user interface according to an embodiment in a case where a high-density vessel is detected.
Fig. 8 is a diagram of an eye of a patient with a line-of-sight direction highlighted according to the embodiment.
Fig. 9 is a diagram of an eye of a patient with a line-of-sight direction highlighted according to the embodiment.
Fig. 10 is a diagram of an eye of a patient with a line-of-sight direction highlighted according to the embodiment.
Detailed Description
The medical image display device described in the following embodiment includes an acquisition unit, a line-of-sight detection unit, and a display control unit. The acquisition unit acquires a medical image including at least one eyeball of a subject. The line-of-sight detection unit detects a line-of-sight direction of the eyeball included in the medical image. The display control unit determines a display mode of the medical image based on the line-of-sight direction of the eyeball, and displays the medical image on a display unit in the determined display mode.
One embodiment provides an image display device (medical image display device) including a processing circuit. The processing circuit is configured to receive medical image data including a representation of at least one eye of a subject, process the medical image data to determine a line-of-sight direction of the at least one eye of the subject, and select a display mode for displaying the medical image data according to the determined line-of-sight direction.
An embodiment provides a method comprising: receiving medical image data including a representation of at least one eye of a subject, processing the medical image data to determine a line-of-sight direction of the at least one eye of the subject, and selecting a display mode for displaying the medical image data according to the determined line-of-sight direction.
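As a non-limiting illustration of this flow, the following Python sketch wires the three steps together. The names (process, select_display_mode, DisplayMode, the "cta_views"/"ncct_slice" identifiers) are hypothetical and not part of the disclosed embodiment; the actual gaze detector is passed in as a callable.

```python
from dataclasses import dataclass
from typing import Callable, Literal

GazeDirection = Literal["left", "right", "neither_or_unknown"]

@dataclass
class DisplayMode:
    main_views: tuple   # image identifiers shown large
    thumbnails: tuple   # image identifiers shown small

def select_display_mode(gaze: GazeDirection) -> DisplayMode:
    """Choose the display mode according to the determined gaze direction."""
    if gaze in ("left", "right"):
        # Gaze deviation detected: give the CTA stroke views priority.
        return DisplayMode(main_views=("cta_views",), thumbnails=("ncct_slice",))
    # No deviation (or unknown): the NCCT slice remains the main view.
    return DisplayMode(main_views=("ncct_slice",), thumbnails=("cta_views",))

def process(image_data, detect_gaze: Callable[[object], GazeDirection]):
    """Receive medical image data, determine the gaze direction of at least
    one eye, and select a display mode for displaying the image data."""
    gaze = detect_gaze(image_data)
    return gaze, select_display_mode(gaze)

# Example with a trivial stand-in detector:
if __name__ == "__main__":
    gaze, mode = process(image_data=None, detect_gaze=lambda _: "left")
    print(gaze, mode)
```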
Fig. 1 schematically illustrates a medical image processing apparatus 10 according to an embodiment. The medical image processing apparatus 10 is configured to process and display medical images of a patient or other subject. The medical image processing apparatus 10 is also sometimes referred to as an image display apparatus or a medical image display apparatus.
The medical image processing apparatus 10 includes a computing apparatus 12, which is a Personal Computer (PC) or a workstation in this example. The computing device 12 is connected to a display device 16, such as a screen, and 1 or more input devices 18, such as a computer keyboard, mouse, etc. In some embodiments, the display device 16 is a touch screen that also functions as an input device 18. The computing device 12 is connected to a data storage unit 20.
The medical image processing apparatus 10 is connected to a CT scanner 14 configured to perform non-contrast CT scanning (NCCT) and CT angiography (CTA) scanning on a patient or another subject in order to obtain volume medical imaging data. In the present embodiment, each scan includes a scan of the brain. In other embodiments, any suitable body part may be scanned.
In alternative embodiments, the data may be obtained using any suitable medical device and/or acquisition method. The CT scanner 14 may be replaced or supplemented by 1 or more scanners configured to obtain 2-dimensional or 3-dimensional imaging data by any suitable diagnostic imaging method. Examples of such scanners include a CT scanner, a cone-beam CT scanner, a Magnetic Resonance Imaging (MRI) scanner, an X-ray scanner, an ultrasound scanner, a Positron Emission Tomography (PET) scanner, and a Single Photon Emission Computed Tomography (SPECT) scanner.
Data obtained using the CT scanner 14 is stored in the data storage unit 20 and supplied to the computing device 12. In other embodiments, the computing device 12 may also obtain this data directly from the CT scanner 14. In an alternative embodiment, the medical image processing apparatus 10 receives medical imaging data and/or medical images from one or more other data storage units (not shown) in addition to, or instead of, the data storage unit 20. For example, the medical image processing apparatus 10 may receive medical imaging data from one or more remote data storage units, which may form part of a Picture Archiving and Communication System (PACS) or another information system such as a laboratory data archive, an Electronic Medical Record (EMR) system, or an Admission, Discharge and Transfer (ADT) system. Here, the medical image processing apparatus 10 that realizes the function of receiving medical imaging data and/or medical images is an example of the acquisition unit.
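A minimal sketch of how such medical imaging data might be acquired from a local store, assuming the data arrive as a DICOM series and that the third-party pydicom library is available. The helper name load_ct_volume and the directory layout are illustrative, not part of the embodiment.

```python
import numpy as np
import pydicom                      # third-party DICOM library
from pathlib import Path

def load_ct_volume(series_dir: str) -> np.ndarray:
    """Load one CT series from a directory of DICOM files into a 3D array
    of Hounsfield units (slices ordered along the patient z-axis)."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort by the z component of ImagePositionPatient so slices are in order.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored values to Hounsfield units using the DICOM rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept
```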
The computing device 12 includes a Central Processing Unit (CPU) 22. The computing device 12 automatically or semi-automatically provides processing resources for processing the data set. In the present embodiment, the data set includes medical imaging data.
The computing device 12 includes, for example: a feature detection circuit 23 configured to process the imaging data to identify one or more imaging features, such as a region of a high-density vessel (thrombus); a line-of-sight detection circuit 24 configured to determine a gaze direction; a notification circuit 25 configured to flag potential thrombus removal candidates; a rendering circuit 26 configured to render images based on the imaging data; and a display circuit 28 configured to select and position the rendered views on the display screen 16 or any suitable display. Here, the feature detection circuit 23 is an example of a feature detection unit. The line-of-sight detection circuit 24 is an example of a line-of-sight detection unit. The notification circuit 25 is an example of a notification unit. The rendering circuit 26 is an example of an image processing unit. The display circuit 28 is an example of a display control unit. The display device 16 (display screen 16) is an example of a display unit.
In the present embodiment, each of the circuits 23, 24, 25, 26, 28 is implemented in the computing device 12 by means of a computer program having computer-readable instructions that are executable to perform the method of the embodiments. However, in other embodiments, the various circuits may be implemented as one or more Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs). In this embodiment, the circuits 23, 24, 25, 26, 28 are implemented as part of the CPU 22. In alternative embodiments, the circuits 23, 24, 25, 26, 28 may be implemented separately, or may form part of two or more CPUs. In other embodiments, at least a portion of the method may also be performed on one or more Graphics Processing Units (GPUs).
The computing device 12 also includes a hard disk drive and other PC components, including RAM, ROM, a data bus, an operating system including various device drivers, and hardware devices including a graphics card. For clarity, such components are not shown in Fig. 1.
The system of fig. 1 is configured to perform a series of stages schematically illustrated by the flow chart of fig. 2.
In stage 30, the CT scanner acquires a non-contrast CT (NCCT) scan of the brain of a patient suspected of having a stroke. In the case of a suspected stroke, a non-contrast CT scan of the brain is common clinical practice.
The volumetric data obtained by the NCCT scan is supplied to the data storage unit 20, and from the data storage unit 20 to the feature detection circuit 23 and the line-of-sight detection circuit 24. In another embodiment, the feature detection circuit 23 and the line-of-sight detection circuit 24 may obtain the volumetric NCCT data set from any suitable data storage unit. The NCCT data may be obtained by the feature detection circuit 23 and the line-of-sight detection circuit 24 at any appropriate time after the NCCT scan is performed. In these cases, the feature detection circuit 23 and the line-of-sight detection circuit 24 can be regarded as examples of the acquisition unit.
In stage 32, the feature detection circuit 23 performs high-density vessel detection processing. This processing includes processing the volumetric NCCT data received in stage 30 in order to determine whether a hyperdense artery sign (HAS) is present within the NCCT scan. The HAS is also sometimes referred to as a high-density vessel sign.
In the embodiment of Fig. 2, the high-density vessel detection processing is performed using a convolutional neural network, for example by the method described in: Lisowska A., Beveridge E., Muir K. and Poole I., Thrombus Detection in CT Brain Scans using a Convolutional Neural Network, DOI: 10.5220/0006114600240033, in Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2017), pages 24-33.
In other embodiments, any suitable high density blood vessel detection method may be used. For example, any suitable image analysis method may be used to automatically detect high-density blood vessels. The associated portion of the patient's anatomy may also be displayed to the user.
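As one illustration of a simple image analysis method (not the convolutional neural network approach referenced above), the following sketch flags connected regions whose attenuation falls in a band often associated with clotted blood. The Hounsfield thresholds and minimum region size are arbitrary placeholders, not clinically validated values.

```python
import numpy as np
from scipy import ndimage

def detect_hyperdense_candidates(hu_volume: np.ndarray,
                                 low_hu: float = 50.0,
                                 high_hu: float = 100.0,
                                 min_voxels: int = 20):
    """Return labelled connected components whose attenuation lies in a band
    brighter than normal brain and vessels but darker than bone.  Thresholds
    are illustrative only."""
    candidate_mask = (hu_volume >= low_hu) & (hu_volume <= high_hu)
    labels, n = ndimage.label(candidate_mask)
    regions = []
    for label_id in range(1, n + 1):
        component = labels == label_id
        if component.sum() >= min_voxels:          # discard tiny speckle
            centroid = ndimage.center_of_mass(component)
            regions.append({"label": label_id,
                            "voxels": int(component.sum()),
                            "centroid": tuple(float(c) for c in centroid)})
    return labels, regions
```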
In other embodiments, the feature detection circuit 23 may be configured to detect one or more non-contrast imaging features associated with an occlusion. For example, in several embodiments, Magnetic Resonance Imaging (MRI) data is obtained in stage 30 instead of CT data. The MRI data includes, for example, a T2 gradient echo (GRE) magnetic resonance study. In such embodiments, the detection process includes, for example, processing the MRI data to determine whether a susceptibility vessel sign (SVS) is present in the MRI data.
The presence of imaging features such as the HAS and SVS in a non-contrast assessment is known to indicate the presence of an occlusion with high specificity but low sensitivity.
In the present embodiment, the feature detection circuit 23 determines that a region of high-density blood vessels is present when HAS is detected in NCCT data. The feature detection circuit 23 may infer the location of a region of high density blood vessels.
In stage 34, in response to detection of a high-density vessel region, the notification circuit 25 designates the patient as a potential thrombus removal candidate. Designating a patient as a potential thrombus removal candidate can also be interpreted as marking the patient as a potential LVO patient or as a potential thrombus removal patient. The notification circuit 25 can also add the patient to a worklist. The notification circuit 25 may also issue a mobile notification for the LVO candidate, as described later with reference to Fig. 3.
The rendering circuit 26 renders, based on the NCCT data, a rendered image 36 representing a slice aligned with the anterior circulation. The rendered image 36 is aligned with the NCCT slice that is most likely to contain the occlusion imaging feature.
The display circuit 28 displays the rendered image 36 on the display screen 16 or any suitable display. In some embodiments, the region of the detected high-density vessel is highlighted on the rendered image 36. For example, the detected region may be rendered in a different color from the rest of the rendered image 36, or it may be outlined (framed) in the rendered image 36.
The rendered image 36 can be used for evaluation by a clinician, as described below with reference to stage 60.
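A minimal sketch of the highlighting described above, assuming the rendered slice and the detected-region mask are available as NumPy arrays and that Matplotlib is used for display; the window settings and colormap are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_slice_with_highlight(slice_hu: np.ndarray, region_mask: np.ndarray):
    """Display an axial NCCT slice with the detected high-density region
    shown in a different colour (a semi-transparent overlay)."""
    fig, ax = plt.subplots()
    # Brain-like window for the background image.
    ax.imshow(slice_hu, cmap="gray", vmin=0, vmax=80)
    # Masked array so only the detected voxels are coloured.
    overlay = np.ma.masked_where(~region_mask, region_mask)
    ax.imshow(overlay, cmap="autumn", alpha=0.5, interpolation="none")
    ax.set_axis_off()
    plt.show()
```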
If the feature detection circuit 23 does not detect a region of high density blood vessels in stage 32, stage 34 and/or stage 36 may be omitted in some embodiments. In other embodiments, slices aligned with the anterior circulation are rendered and displayed even if no high density blood vessels are detected. In some embodiments, an indication that no high density blood vessels are detected may also be displayed to the user.
In stage 38, the line-of-sight detection circuit 24 performs line-of-sight detection processing. The line-of-sight detection processing includes processing the NCCT scan data to derive an estimate of the gaze direction of at least one eye of the patient. In the present embodiment, line-of-sight detection is performed regardless of the result of the high-density vessel detection processing. In other embodiments, the line-of-sight detection may be omitted, for example when a high-density vessel has already been detected. The line-of-sight detection may be performed after the high-density vessel detection processing of stage 32, before it, or simultaneously with it.
References to the eye hereinafter refer to the eyeball. Where both eyes are considered, it can be assumed that both eyes look in the same direction.
The output of the line-of-sight detection process includes a classification of the line-of-sight direction in the NCCT data into one of three classes. In class 1, the gaze is deviated to the right. In class 2, the gaze is deviated to the left. In class 3, the gaze is deviated neither to the left nor to the right, or the gaze direction is unknown. In the following, these classes are referred to as right, left, and neither/unknown.
In clinical practice, conjugate eye deviation is well documented as a symptom of stroke and is known as Prevost's sign. Gaze deviation is defined as a persistent, equal deviation of both eyes away from the midline position towards the same side. In some cases, the eyes deviate towards the side of the brain hemisphere damaged by the stroke. The injured hemisphere is on the side opposite to the side of the body exhibiting symptoms such as paralysis or facial droop.
The gaze angle of the eye is specified, for example, relative to the midline plane of the patient's skull. The gaze angle can be determined, for example, as described in: Kobayashi, M., Horizontal gaze deviation on computed tomography: the visual characteristics and diagnostic characteristics in ischemic stroke, Acta Neurol Belg (2018) 118:581, https://doi.org/10.1007/s13760-018-0949-1; or Spokoyny, Ilana et al., Visual determination of conjugate eye deviation on computed tomography scans of stroke codes, Journal of Stroke and Cerebrovascular Diseases, Volume 25, Issue 12, 2809-2813.
In several embodiments, the line-of-sight detection circuit 24 can, for example, return a gaze angle value. The gaze angle may be returned in addition to, or instead of, the determination of whether the gaze direction is left, right, or neither/unknown.
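A sketch of how a gaze angle value could be derived geometrically, assuming an earlier segmentation step has provided the eyeball centre and lens centre in patient coordinates; the coordinate convention and the default anterior axis are assumptions, not details taken from the embodiment.

```python
import numpy as np

def gaze_angle_deg(eyeball_centre: np.ndarray,
                   lens_centre: np.ndarray,
                   anterior_axis: np.ndarray = np.array([0.0, 1.0, 0.0])) -> float:
    """Signed horizontal gaze angle in degrees for one eye.

    The gaze vector runs from the eyeball centre to the lens centre; its
    angle is measured in the axial plane relative to the straight-ahead
    (anteroposterior) axis.  Positive values mean gaze toward the +x side.
    Coordinates are assumed to be (x=lateral, y=anterior, z=superior)."""
    gaze = lens_centre - eyeball_centre
    # Project onto the axial (x, y) plane to measure horizontal deviation only.
    gx, gy = float(gaze[0]), float(gaze[1])
    ax, ay = float(anterior_axis[0]), float(anterior_axis[1])
    # Signed angle between the projected gaze vector and the anterior axis.
    return float(np.degrees(np.arctan2(gx * ay - gy * ax, gx * ax + gy * ay)))

# Example: lens displaced laterally relative to the eyeball centre.
print(gaze_angle_deg(np.array([0.0, 0.0, 0.0]), np.array([3.0, 10.0, 0.0])))
```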
In the present embodiment, a trained model is used to determine whether the gaze is deviated to the right, to the left, or neither/unknown. The trained model is, for example, a deep learning classifier. For example, an R-CNN (Regions with Convolutional Neural Network features) method similar to that described in R. Girshick, J. Donahue, T. Darrell, J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014, may be used. In other embodiments, any suitable method may be used to determine whether the gaze direction is left, right, or neither/unknown. For example, any suitable image analysis method may be used to classify the gaze direction. An associated portion of the patient's anatomy may also be determined and displayed to the user.
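For illustration only, a compact three-class classifier in PyTorch of the kind that could stand in for such a trained model. This is not the R-CNN method cited above, and the architecture, input crop size, and class ordering are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class GazeDirectionNet(nn.Module):
    """Small 3D CNN that classifies a cropped eye-region volume into three
    classes: 0 = left, 1 = right, 2 = neither/unknown."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width) eye-region crop in HU.
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Inference on a dummy 32x64x64 eye-region crop (untrained weights):
model = GazeDirectionNet().eval()
with torch.no_grad():
    logits = model(torch.zeros(1, 1, 32, 64, 64))
    predicted = ["left", "right", "neither_or_unknown"][int(logits.argmax(dim=1))]
```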
In several situations, the brain may be scanned in such a way that the patient's eyes are excluded from the anatomical region covered by the CT scanner. At least in such a case, the line of sight can be estimated based on anatomical structures other than the eyeball. For example, contraction of the extraocular muscles may indicate the gaze direction.
The output of stage 38 is a determination of whether the line-of-sight direction is left, right, or neither/unknown.
In stage 40 of Fig. 2, the CT scanner 14 acquires a contrast-enhanced CT angiography (CTA) scan of the patient's brain.
The volume data from the CTA scan is supplied to the data storage unit 20, and is supplied from the data storage unit 20 to the line-of-sight detection circuit 24. In other embodiments, the line-of-sight detection circuit 24 may obtain the set of volume CTA data from any suitable data storage unit. The line-of-sight detection circuit 24 may obtain CTA data at any appropriate timing after CTA scanning.
In stage 42, the line-of-sight detection circuit 24 performs line-of-sight detection processing on the CTA scan data. This processing includes processing the CTA scan data to derive an estimate of the gaze direction of at least one eye of the patient. The line-of-sight detection circuit 24 outputs a classification of whether the gaze direction is left, right, or neither/unknown.
In other embodiments, the line-of-sight detection circuit 24 may also return a gaze angle value. The gaze angle may be returned in addition to, or instead of, the determination of whether the gaze direction is left, right, or neither/unknown.
In the present embodiment, a trained model is used to determine whether the gaze is deviated to the right, to the left, or neither/unknown. Because this determination is performed on contrast data rather than non-contrast data, the trained model may, in some embodiments, differ from the trained model used in stage 38. In other embodiments, the same trained model may be used in stages 38 and 42.
The output of stage 42 is a determination of whether the line-of-sight direction in the CTA data is left, right, or neither/unknown.
In stage 44, the line-of-sight detection circuit 24 compares the gaze direction determined in stage 38 with the gaze direction determined in stage 42. If the gaze direction determined in stage 42 is the same as the gaze direction determined in stage 38, the line-of-sight detection circuit 24 determines that the gaze deviation is persistent. In response to a determination of persistent gaze deviation, the method of Fig. 2 proceeds to stages 46 and 50. If it is determined that the gaze deviation is not persistent, stage 50 may be omitted. In several embodiments, some or all of stage 46 may also be omitted.
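The persistence check itself reduces to a small comparison; a sketch, assuming the two per-scan classifications use the labels introduced above:

```python
def is_persistent_deviation(ncct_gaze: str, cta_gaze: str) -> bool:
    """Persistent gaze deviation: the gaze is classified as deviated to the
    same side (left or right) in both the NCCT scan and the later CTA scan.
    A neither/unknown result in either scan does not count."""
    return ncct_gaze == cta_gaze and ncct_gaze in ("left", "right")

# Example: deviated left in both scans -> proceed to stages 46 and 50.
if is_persistent_deviation("left", "left"):
    print("persistent gaze deviation: flag patient, use CTA-first display mode")
```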
In stage 46, the notification circuit 25 designates the patient as a potential thrombus removal candidate based on the gaze being deviated in the same direction in both the CTA data and the NCCT data. If the patient has already been designated as a potential thrombus removal candidate in stage 34, the notification circuit 25 leaves that designation unchanged.
Persistent eye deviation present on both NCCT and CTA has shown good sensitivity and specificity for large vessel occlusion (see, for example, The Sustained DeyeCOM Sign as a Predictor of Large Vessel Occlusion and Stroke Mimics, J. Stroke Cerebrovasc Dis. 2018 June; 27(6): 1466-). Persistent eye deviation is used in the method of Fig. 2 as one indicator that the patient is a potential thrombus removal candidate.
The rendering circuit 26 receives, from the line-of-sight detection circuit 24, indications of the eyeball region within the volumetric NCCT data and/or within the CTA data. For example, the eyeball region may be represented by a bounding box obtained by segmentation. The rendering circuit 26 renders at least one image 48 representing the eyeball region based on the volumetric NCCT data and/or the CTA data. In the present embodiment, the rendered image 48 represents a portion of an axial slice of the head passing through the lenses of the eyes.
The display circuit 28 displays the at least one rendered image 48 of the eye region on the display screen 16. The at least one rendered image 48 may give a quick view of the line of sight. The at least one rendered image 48 may include an outline of the eyes to indicate the side suspected of containing the occlusion.
By displaying the outline of the eyes, the clinician can be guided to the brain hemisphere corresponding to the gaze direction. The clinician can also use the left/right indication (the side of the brain) to locate an occlusion.
In other embodiments, derived measurements associated with the gaze deviation may also be displayed. For example, the gaze angle may be displayed. In still other embodiments, any other clinical information may be displayed.
Various ways of presenting the line of sight are described further below with reference to Figs. 8(a) to 10(c). Derived measurements or other clinical information may also be displayed along with any of the line-of-sight views described below.
In stage 50, the rendering circuit 26 renders a set of CTA views 52, 54, 56, 58. The display circuit 28 displays the CTA views 52, 54, 56, 58 on the display screen 16, for example in accordance with display rules such as a hanging protocol. The display of the CTA views may be optimized to enable a person to identify an occlusion.
The CTA views 52, 54, 56, 58 are stroke views based on landmarks within the CTA. A stroke view is a view for evaluating specific anatomical and vascular regions of the brain associated with stroke. The first view 56 is a view of the anterior circulation, the second and third views 54, 58 are two views of the posterior circulation, and the fourth view 52 is a view of the collateral circulation. The CTA views 52, 54, 56, 58 are displayed according to selected display parameters. The images of the CTA views 52, 54, 56, 58 are aligned relative to the anatomy. Each image is slabbed at a selected thickness, with the window level set to a selected value. The selected thickness and window level may be regarded as optimal values for the anatomy and/or pathology that the view is intended to show.
In the present embodiment, the CTA views 52, 54, 56, 58 are displayed only when persistent eye deviation is detected. If persistent eye deviation is detected, a first display mode in which the CTA views 52, 54, 56, 58 are displayed is used. If persistent eye deviation is not detected, a second display mode in which the CTA views 52, 54, 56, 58 are not displayed is used. The display circuit 28 is thus configured to display the scan that is most appropriate for a given finding or criterion.
In another embodiment, the display circuit 28 displays both the rendered image 36 of the slice aligned with the anterior circulation and the CTA views 52, 54, 56, 58, as described later with reference to Figs. 6 and 7, but the manner of display differs depending on the determined gaze deviation. When persistent gaze deviation is detected, a first display mode is used: the CTA views 52, 54, 56, 58 are displayed as large main images, and the rendered image 36 containing the slice aligned with the anterior circulation is displayed as a smaller image, for example a thumbnail. When persistent gaze deviation is not detected, a second display mode is used: the rendered image 36 containing the slice aligned with the anterior circulation is displayed as a large main image, while the CTA views 52, 54, 56, 58 are displayed as smaller images, for example thumbnails. To examine a small image in detail, the clinician may, for example, click on it to select it.
The display mode may also be selected based on the times at which the scans were acquired or on the interval between the scan acquisitions, for example using the elapsed time between scans.
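A sketch combining the two selection signals described above (persistent deviation and the elapsed time between scans); the six-hour threshold and the dictionary keys are invented for illustration.

```python
from datetime import datetime, timedelta

def choose_display_mode(persistent_deviation: bool,
                        ncct_time: datetime,
                        cta_time: datetime,
                        max_gap: timedelta = timedelta(hours=6)):
    """Return which image set is displayed as the large main view.

    If the two scans are too far apart in time, the comparison of gaze
    directions is treated as unreliable and the default (NCCT-first)
    layout is used.  The threshold is an arbitrary placeholder."""
    if abs(cta_time - ncct_time) > max_gap:
        persistent_deviation = False
    if persistent_deviation:
        return {"main": "cta_views", "thumbnails": "ncct_anterior_slice"}
    return {"main": "ncct_anterior_slice", "thumbnails": "cta_views"}
```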
If a high-density vessel feature is detected in stage 32, the clinician evaluates, in stage 60, the rendered image 36 containing the slice aligned with the anterior circulation. If persistent eye deviation is detected in stage 44, the clinician evaluates the CTA views 52, 54, 56, 58. The images 36, 52, 54, 56, 58 may be the images most relevant to present to the clinician in the initial assessment. The clinician then considers thrombus removal for the patient; for example, the clinician may determine that the patient is suitable for thrombus removal, or may arrange a transfer of the patient.
Criteria set in advance for thrombus removal and/or transfer may be used. A panel 62 of thrombus removal and/or transfer criteria may also be displayed to the clinician. The thrombus removal and/or transfer criteria, and their display on the panel 62, are described later with reference to Figs. 4 to 7.
Fig. 3 illustrates a notification process 70 that may be performed as part of stage 36 or stage 46 of Fig. 2. In the present embodiment, the notification process 70 includes a worklist notification for LVO candidates or a mobile notification.
The mobile notification includes a message sent to a mobile device, such as the smartphone 72. The message indicates that an urgent evaluation of potential thrombus removal candidates is required.
The worklist notification includes an indicator displayed on the worklist 74. The worklist contains a list of patients whose data are to be evaluated. An indicator is added next to one of the patients, prompting an urgent evaluation of that patient. In some embodiments, the worklist may also be reordered so that patients flagged for urgent evaluation are moved up the patient list.
The mobile notification or worklist notification may also contain summary information about the patient's status.
For example, if a mobile notification or a worklist notification is received, the clinician may decide to prioritize the evaluation of the patient for whom the notification was issued. This can reduce the time until the patient is evaluated, or the time between scan acquisition and treatment, such as thrombus removal.
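A minimal sketch of the worklist flagging and mobile-notification behaviour described above; the class and function names are hypothetical, and the notification function is a stand-in rather than a real messaging API.

```python
from dataclasses import dataclass, field

@dataclass
class WorklistEntry:
    patient_id: str
    urgent: bool = False           # True once the patient is flagged

@dataclass
class Worklist:
    entries: list = field(default_factory=list)

    def flag_urgent(self, patient_id: str) -> None:
        """Mark the patient for urgent evaluation and move flagged patients
        to the top of the list, preserving the order of the others."""
        for entry in self.entries:
            if entry.patient_id == patient_id:
                entry.urgent = True
        self.entries.sort(key=lambda e: not e.urgent)   # stable sort

def send_mobile_notification(patient_id: str) -> str:
    """Stand-in for a push/SMS gateway call; returns the message text."""
    return ("Urgent evaluation required: potential thrombus removal "
            f"candidate ({patient_id})")
```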
The methods of Figs. 2 and 3 may provide a way of presenting clinically relevant information for determining the presence of a large vessel occlusion and for identifying thrombus removal candidates among acute stroke patients. Line-of-sight detection is performed on successive images; in the method of Fig. 2, the successive images comprise NCCT data and CTA data, respectively. The method of Fig. 2 also detects a non-contrast imaging feature associated with occlusion. The case is flagged based on persistent eye deviation or the presence of an occlusion imaging feature, which in the present embodiment is a high-density vessel feature. If the patient is found to meet the criteria for a potential thrombus removal candidate, the clinician is notified.
The display mode may also be selected so as to expedite the evaluation of potential thrombus removal candidates. The most relevant information may be presented to the clinician first, and the clinician can use it in assessing whether an occlusion is present.
The medical image processing apparatus 10 provides the clinician with the relevant information without performing direct detection of the LVO.
In some embodiments, additional predetermined inclusion or exclusion criteria may be combined with the methods described above with reference to Figs. 2 and 3. For example, ASPECTS or ICH may also be used. Such an embodiment is described below with reference to Figs. 4 to 7.
FIG. 4 shows elements of a user interface that displays information to a user, such as a clinician. The user interface may be displayed, for example, on any suitable screen such as display screen 16. Fig. 4 also shows a smart phone 72 displaying notification messages.
The user interface includes a panel 62 on which two views 82, 84 of the patient's eye region are displayed. The first view 82 is obtained by rendering image data from a first scan obtained at a first time. In the embodiment shown in Fig. 4, the first scan is an NCCT scan, and the patient's gaze is deviated to the left in the first scan. To indicate the eye deviation in the first view 82, a plus sign (+) is displayed beside it. The plus sign may also be displayed when the gaze is deviated to the right. A minus sign (-) may be displayed when there is no eye deviation.
In other embodiments, any suitable indicator or indicators of eye deviation may be used. Any suitable visual effect, for example one of those described below with reference to Figs. 8(a) to 10(c), may be used to highlight or emphasize the eye deviation.
The second view 84 is obtained by rendering image data from a second scan obtained at a second time. In the embodiment shown in Fig. 4, the second scan is a CTA scan. The patient's gaze is deviated in the second scan in the same way as in the first scan. Because the gaze deviation is consistent across both scans, two plus signs are displayed alongside the second view 84. In other embodiments, any suitable indicator or indicators may be used to represent persistent deviation, and any suitable visual effect may be used to highlight it.
In other embodiments, one or both of the scans may be MRI scans. In several embodiments, the scans comprise two MRI sequences in which the eyes can be visually identified. In that case, the high-density vessel feature is replaced by the SVS in GRE data.
In other embodiments, the scans may also be a 3D scout scan obtained at the first time and an NCCT scan obtained at the second time. When performing a CT scan of a region of a patient's body, a three-dimensional (3D) scout scan is typically performed first. A 3D scout scan may have a large field of view covering the area where the local scan is to be performed. The 3D scout scan may comprise a low-resolution scan of a large area of the patient's body, for example the entire body. By using a 3D scout scan followed by an NCCT scan, it is also possible to provide an NCCT-only solution, for example for referring centres.
In several embodiments, the first scan is an optical image obtained by an optical camera within the scanner. The eye deviation is determined from the optical image and from the subsequent scan. In some embodiments, a camera, such as an AI camera, is used during imaging, and the eye deviation in the video obtained by the camera is compared with the eye deviation in the subsequent scan.
By always presenting the first and second views 82, 84 of the eyeball region, the user can be alerted that an abnormality may be present. Explicit results need not be output to the user; for example, rather than an explicit indication that the patient may have an LVO, only the side with the abnormality may be indicated. In several situations, the requirements for obtaining approval from a regulatory authority may differ between systems that display a diagnosis and systems that do not present a diagnosis. Giving the user the associated information without giving a diagnosis may be important for gaining acceptance by regulatory authorities. Because the tool avoids Computer-Aided Detection (CADe), it may not require the same level of regulatory approval.
In FIG. 4, the panel 62 also contains information relating to criteria based on factors other than eye deviation. In the embodiment of FIG. 4, the criteria relate to intracerebral hemorrhage (ICH), occlusion, the Alberta Stroke Program Early CT Score (ASPECTS), and collaterals. A 1st display element 90 represents an ICH score. A 2nd display element 92 represents occlusion information. A 3rd display element 94 represents an ASPECTS score. A 4th display element 96 represents collaterals. The 1st to 4th display elements 90, 92, 94, 96 and the views 82, 84 may together be considered a set of clinically relevant information for determining the presence of an LVO and thrombus removal candidacy in an acute stroke patient.
To give a full LVO triage solution, the other criteria (ICH, occlusion, ASPECTS, collaterals) are considered in combination with the eye deviation shown in the 1st and 2nd views 82, 84. In some embodiments, the hospital can select which criterion-related information is displayed and configure the display. Which criterion-related information is displayed may also be selected based on the imaging available at the hospital or for the individual patient.
In the embodiment of FIG. 4, each of the display elements 90, 92, 94, 96 is marked in green when its associated result satisfies the thrombus removal or transfer criteria. For example, when the ICH score satisfies a predetermined thrombus removal or transfer criterion, the display element 90 turns green. FIG. 4 is black and white, so the green color is not shown.
In other embodiments, any suitable method may be used to indicate that the results of the display elements 90, 92, 94, 96 satisfy the thrombus removal or transfer criteria. For example, any suitable color, line, shape or shading, or any suitable visual or other effect may be used.
When the results for all of the factors represented by the display elements 90, 92, 94, 96 satisfy the thrombus removal or transfer criteria, a notification 100 is displayed on the panel 62. In the example of FIG. 4, the notification text is "Urgent evaluation, thrombus removal candidate". The display of the notification 100 may prompt the clinician to urgently evaluate the patient's imaging to determine whether the patient is a thrombus removal candidate.
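The triggering of the notification 100 can be summarized as a simple all-criteria check. The sketch below assumes each criterion result has already been reduced to a status string; the status names and the dictionary layout are hypothetical.

```python
CRITERIA = ("ICH", "Occlusion", "ASPECTS", "Collaterals")

def should_notify(results: dict[str, str]) -> bool:
    """Show the notification only when every criterion satisfies the
    thrombus removal or transfer criteria."""
    return all(results.get(name) == "satisfied" for name in CRITERIA)

results = {
    "ICH": "satisfied",
    "Occlusion": "satisfied",
    "ASPECTS": "satisfied",
    "Collaterals": "satisfied",
}

if should_notify(results):
    print("Urgent evaluation, thrombus removal candidate")  # notification 100 text
```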
FIG. 5 shows an example in which some results are contraindicated for, or missing from, the thrombus removal or transfer criteria. The panel 62 includes the same 1st and 2nd views 82, 84 and display elements 90, 92, 94, 96 as in FIG. 4. As in FIG. 4, persistent eye deviation is indicated by the plus sign beside the 1st view 82 and the two plus signs beside the 2nd view 84.
In the example shown in FIG. 5, both the ICH result and the occlusion result meet the criteria for thrombus removal or transfer, and the display elements 90, 92 are marked in green (green is not shown in FIG. 5). The green marking indicates that the corresponding criterion is satisfied; for the occlusion criterion, it indicates that an occlusion is present at a given location.
The result obtained from the algorithm that determines the ASPECTS score is a contraindication to the thrombus removal or transfer criteria. The display element 94 is therefore marked in red (red is not shown in FIG. 5, and is instead represented by a bold outline).
In the embodiment of FIG. 5, when the result associated with any of the display elements 90, 92, 94, 96 is a contraindication to the thrombus removal or transfer criteria, that display element may be marked in red.
A contraindication may occur even when other criteria are satisfied. For example, although an occlusion may be present, the patient may be excluded from treatment based on other imaging characteristics, such as weak collaterals, or on other clinical information.
In other embodiments, any suitable method may be used to indicate that the results associated with the display elements 90, 92, 94, 96 are contraindicated for the thrombus removal or transfer criteria. For example, any suitable color, line, shape, or shading, or any suitable visual or other effect may be used.
In the embodiment shown in FIG. 5, no suitable scan or result is available to provide information about the collaterals. To show that the result is unavailable, the display element 96 is grayed out. In other embodiments, if the factor associated with one of the display elements 90, 92, 94, 96 does not have a suitable result available, that display element may be grayed out. In further embodiments, any suitable method may be used to indicate that the factor associated with a display element 90, 92, 94, 96 is unavailable. In some embodiments, the display element may be hidden completely. In other embodiments, any suitable color, line, shape, or shading, or any suitable visual or other effect may be used.
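The marking of the display elements described above amounts to a mapping from a result status to a visual style. The sketch below uses a hypothetical encoding consistent with FIGS. 4 and 5: satisfied results in green, contraindicated results in red, and missing results grayed out (or hidden).

```python
from enum import Enum

class Status(Enum):
    SATISFIED = "satisfied"            # meets the thrombus removal or transfer criteria
    CONTRAINDICATED = "contraindicated"
    UNAVAILABLE = "unavailable"        # no suitable scan or result

# Hypothetical style table mirroring FIGS. 4 and 5.
STYLE = {
    Status.SATISFIED: {"color": "green", "grayed_out": False},
    Status.CONTRAINDICATED: {"color": "red", "grayed_out": False},
    Status.UNAVAILABLE: {"color": None, "grayed_out": True},  # could also be hidden
}

def element_style(status: Status) -> dict:
    """Return the visual style for a criterion display element."""
    return STYLE[status]

# FIG. 5 example: ICH and occlusion satisfied, ASPECTS contraindicated, collaterals unavailable.
for name, status in [("ICH", Status.SATISFIED), ("Occlusion", Status.SATISFIED),
                     ("ASPECTS", Status.CONTRAINDICATED), ("Collaterals", Status.UNAVAILABLE)]:
    print(name, element_style(status))
```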
The evaluation of the list of criteria (ICH, occlusion, ASPECTS, and collaterals in the present embodiment) can function as a checklist with which the clinician reviews all relevant information. The clinician can check all relevant information regardless of whether an automated result is available for each criterion.
FIG. 6 shows clinician interaction with the panel 62. The panel 62 displays the 1st and 2nd views 82, 84 of the patient's eye region, and the display elements 90, 92, 94, 96 representing the ICH score, occlusion, ASPECTS score, and collaterals, respectively. Both the 1st and 2nd views 82, 84 show eye deviation, which is additionally indicated by the plus sign beside the 1st view 82 and the two plus signs beside the 2nd view 84. All of the factors associated with the display elements 90, 92, 94, 96 meet the thrombus removal or transfer criteria and are highlighted in green (not shown in FIG. 6).
The clinician evaluates the factors represented by the display elements 90, 92, 94, 96 (ICH, occlusion, ASPECTS, collaterals) against the thrombus removal or transfer criteria. If the clinician selects one of the display elements 90, 92, 94, 96, for example by clicking on it, the associated information is displayed to the clinician. For example, appropriate imaging may be displayed.
FIG. 6 shows an example in which the clinician selects the occlusion criterion, for example by clicking on the display element 92. The display element 92 is shaded to indicate that it is selected. In other embodiments, any suitable visual indication may be used to show that one of the display elements 90, 92, 94, 96 is selected.
If one of the criteria (ICH, occlusion, ASPECTS, collaterals) is selected by the clinician, the display circuitry 28 selects the most appropriate scan for displaying a view of that criterion. In the example shown in FIG. 6, the patient has persistent eye deviation but no high-density vessels. If the clinician selects the display element 92, thereby selecting occlusion, the display circuitry 28 displays a set of stroke views 52, 54, 56, 58 for immediate evaluation of the critical LVO locations within the vascular structure.
The display circuitry also highlights the eye view that corresponds to the scan from which the displayed views are obtained. The scan being displayed is highlighted in such a way that the eyes and the left/right (laterality) information are always visible.
In FIG. 6, the stroke views are obtained from the 2nd (CTA) scan, so the 2nd view 84 of the eyes is highlighted. In the example of FIG. 6, the 2nd view 84 is highlighted using a surrounding halo effect. In other embodiments, any suitable visual or other effect may be used to highlight the scan from which the views are displayed.
All scans other than the stroke views 52, 54, 56, 58 presented to the clinician can be displayed in a normal view. For example, in FIG. 6 the slice 36 aligned to the anterior circulation is displayed as a small thumbnail image that is selectable by the clinician.
When the clinician starts evaluating the criteria and selects a display element associated with one or more criteria, the display circuitry 28 selects the most appropriate scan for displaying a view of that criterion. For example, when ICH is selected, the display circuitry 28 automatically displays the NCCT scan, or GRE in embodiments where an MRI scan is obtained. The most appropriate scan to display may depend on the workflow and imaging used. The selection of the most appropriate scan to display may also be configured by the hospital.
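One way to express the "most appropriate scan" selection is a configurable lookup from the selected criterion to a default modality, which a hospital could override. The mapping below is an assumed example consistent with the text (ICH mapping to NCCT, or to GRE where MRI is used); it is not a prescribed configuration.

```python
# Assumed, hospital-configurable defaults; keys are criteria, values are modalities.
DEFAULT_SCAN_FOR_CRITERION = {
    "ICH": "NCCT",        # or "GRE" in an MRI-based workflow
    "Occlusion": "CTA",   # refined further by the logic of FIGS. 6 and 7
    "ASPECTS": "NCCT",
    "Collaterals": "CTA",
}

def scan_for_criterion(criterion: str, mri_workflow: bool = False) -> str:
    """Return the default scan to display when a criterion is selected."""
    if criterion == "ICH" and mri_workflow:
        return "GRE"
    return DEFAULT_SCAN_FOR_CRITERION[criterion]

print(scan_for_criterion("ICH"))                     # NCCT
print(scan_for_criterion("ICH", mri_workflow=True))  # GRE
```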
FIG. 7 shows a case in which the patient does not show persistent eye deviation but does have a high-density vessel feature, and the clinician selects occlusion from the panel 62. The panel 62 displays the 1st and 2nd views 82, 84 of the patient's eyes, and the display elements 90, 92, 94, 96 representing the criteria for thrombus removal or transfer. All of the factors associated with the display elements 90, 92, 94, 96 meet the thrombus removal or transfer criteria and are highlighted in green (not shown in FIG. 7).
In the 1st view 82 of the patient's eyes there is eye deviation, as indicated by the plus sign beside the 1st view 82. In the 2nd view 84 of the patient's eyes there is no deviation, and a minus sign is shown next to the 2nd view 84. The gaze detection circuitry 24 therefore determines that there is no persistent eye deviation.
If the clinician selects the display element 92 representing occlusion, the display circuitry 28 selects the most appropriate scan. Since the patient has no persistent eye deviation but does have a high-density vessel feature, the NCCT scan is selected as the most appropriate scan to display first. To indicate that the NCCT scan is being displayed, the 1st view 82 is highlighted. In FIG. 7, the 1st view 82 is highlighted by a surrounding halo effect.
In this embodiment, the NCCT scan is shown by displaying a preset view of the slice 36, aligned to the anterior circulation, which is selected as the slice with the highest likelihood of showing high-density vessels. By selecting the slice 36 with the highest likelihood, the rendering circuitry 26 minimizes the amount of scrolling required between images.
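Selecting the preset slice can be sketched as taking the slice with the highest per-slice likelihood of showing high-density vessels. In practice the likelihood values would come from an upstream detector; the numbers below are placeholders.

```python
import numpy as np

# Placeholder per-slice likelihoods of showing high-density vessels,
# e.g. produced by an upstream detection step (one value per axial slice).
slice_likelihoods = np.array([0.01, 0.02, 0.10, 0.65, 0.40, 0.05])

def select_preset_slice(likelihoods: np.ndarray) -> int:
    """Return the index of the slice most likely to show high-density vessels,
    so the viewer opens there and scrolling is minimized."""
    return int(np.argmax(likelihoods))

print(select_preset_slice(slice_likelihoods))  # -> 3
```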
In several embodiments, the display of the slice 36 may include a segmentation of the high-density vessels. In several embodiments, the display of the slice 36 may also include labeling of the high-density vessels. In some cases, whether segmentation and/or labeling is shown may depend on applicable restrictions, for example regulatory constraints.
In the embodiment of FIG. 7, the stroke views 52, 54, 56, 58 are also available as presets after the high-density vessel view is displayed. The clinician can display one or more of the stroke views 52, 54, 56, 58 by selecting from thumbnail views, or by any other suitable method. The most appropriate stroke view preset can be selected based on the location of the high-density vessels. In several embodiments, the display circuitry 28 automatically selects the most appropriate stroke view preset based on prescribed rules. In other embodiments, the stroke view preset may be chosen by the clinician.
In the embodiments shown in FIGS. 6 and 7, the display circuitry 28 selects the most appropriate scan and view for the selected criterion. When occlusion is selected and high-density vessels are present without persistent eye deviation, the display circuitry 28 displays the slice of the non-contrast scan with the highest likelihood of showing the anterior circulation, aligned for immediate evaluation of the imaging features associated with the occlusion. When occlusion is selected and there is persistent eye deviation, the display circuitry 28 displays a preset of stroke views in CTA for immediate assessment of the vascular architecture. When neither high-density vessels nor persistent eye deviation is detected, an initial view of the NCCT scan is displayed, that is, the NCCT scan as initially loaded into the viewer. In other embodiments, any other suitable view or views may be displayed.
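The view selection described for FIGS. 6 and 7 can be summarized as a small decision rule over two findings. The function below is a sketch of that rule; the returned strings name the views loosely and are not exact presets.

```python
def initial_occlusion_view(persistent_deviation: bool, hyperdense_vessel: bool) -> str:
    """Choose the first view shown when the occlusion criterion is selected."""
    if persistent_deviation:
        # Immediate assessment of the vascular architecture.
        return "CTA stroke view preset"
    if hyperdense_vessel:
        # Non-contrast slice aligned to the anterior circulation, at the slice
        # most likely to show the high-density vessel.
        return "NCCT preset slice"
    # Neither finding: fall back to the NCCT scan as initially loaded.
    return "NCCT initial view"

print(initial_occlusion_view(persistent_deviation=True, hyperdense_vessel=False))
print(initial_occlusion_view(persistent_deviation=False, hyperdense_vessel=True))
print(initial_occlusion_view(persistent_deviation=False, hyperdense_vessel=False))
```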
Different scans may be displayed for different criteria, and different scans may be displayed for different results. The display circuitry 28 also selects the most appropriate view of each scan (for example, a stroke view).
The panel 62 provides an interactive display of information associated with the thrombus removal or transfer criteria. In several embodiments, a summary view may be provided in addition to, or instead of, the panel 62. The information to be presented may be determined and presented to the user in summary form, for example as a report or other non-interactive view.
FIGS. 8(a) to 8(c) and FIGS. 9(a) to 10(c) show methods of displaying the gaze direction on a user interface in order to highlight persistent eye deviation across scans. By highlighting persistent deviation, a user such as a clinician can quickly identify it and quickly evaluate whether the patient is a candidate for thrombus removal or transfer. The eye displays shown in FIGS. 8(a) to 8(c) and 9(a) to 10(c) may be used in place of the views 82, 84 shown in FIGS. 4 to 7.
FIGS. 8(a) to 8(c) show the user interface in an embodiment in which hatching is used to highlight the direction of deviation.
In each of FIGS. 8(a) to 8(c), the gaze detection circuitry 24 segments the eyeballs 112, 114 and the lenses 116, 118 of each eye. Any suitable segmentation method may be used by the gaze detection circuitry 24. The rendering circuitry 26 renders an image in which each of the segmented eyeballs 112, 114 is outlined. The segmented lenses 116, 118 are represented by white graphical elements in the images of FIGS. 8(a) to 8(c).
Eye deviation is represented by shading of the segmented eyeballs. In FIG. 8(a), deviation of the patient's eyes to the right is indicated by hatching with oblique lines aligned with the gaze direction. In FIG. 8(b), there is no deviation and no shading is used. In FIG. 8(c), deviation of the patient's eyes to the left is indicated by hatching with oblique lines aligned with the gaze direction.
In other embodiments, any suitable graphical elements may be used to emphasize the eyeballs and/or lenses. Any suitable pattern or visual feature may be used to highlight the gaze direction. For example, arrows may be used to indicate the gaze direction.
In several embodiments, different colors are used to represent different scans, so that the color in which the eyes are drawn differs between scans obtained at different times. A 1st color may be used when rendering the eyes in an image derived from the scan at the 1st time, a 2nd color for the scan at the 2nd time, and a 3rd color for the scan at the 3rd time. Colors may be combined with other visual effects such as shading.
FIGS. 9(a) to 10(c) show the user interface in embodiments in which both shading and color are used to highlight the direction of deviation. FIGS. 9(a) to 10(c) are black and white, so the colors are not shown.
A single image 120 is used to represent the patient's eye region. Two or more scans taken at different times (for example, NCCT and CTA scans) are registered together so that they can be displayed in the same image.
A slider bar 130 represents time and is used to switch between views of the two or more scans. The user moves the indicator 132 of the slider bar 130 to change which time point is displayed. In the embodiment of FIGS. 9(a) to 9(c), the scale of the slider bar 130 represents elapsed time from the 1st scan, which is shown as time 0. In other embodiments, the scale of the slider bar 130 may represent the actual time of each scan.
FIGS. 9(a) and 9(b) show an example of persistent eye deviation (DeyeCOM +/+). FIG. 9(a) shows a baseline scan. The baseline scan shows eye deviation at time 0, as indicated by the slider bar 130. Detection of the eyes and segmentation of the lenses are performed and used to highlight the patient's eyes in the baseline scan. In the embodiment of FIGS. 9(a) to 9(c), an outline is added to each eyeball 122, 124 and to each lens 126, 128. In other embodiments, any suitable method of highlighting the eyeballs and/or lenses may be used.
In FIG. 9(a), the direction of eye deviation in the baseline scan is indicated by red and white shading aligned with the gaze direction (red is not visible in FIG. 9(a)). A red plus sign is displayed at time 0 on the slider bar 130. The indicator 132 of the slider bar is at time 0.
In FIG. 9(b), the indicator 132 has been moved to the +2 minute position, and the image 120 shows the 2nd scan, obtained 2 minutes after the baseline scan. The detection and segmentation of the eyes in the 2nd scan are registered to the baseline scan. In the 2nd scan, the direction of eye deviation is indicated by green and white shading aligned with the gaze direction (green is not visible in FIG. 9(b)). In addition, a green plus sign is displayed at +2 minutes on the slider bar 130. The eyeballs 122, 124 and lenses 126, 128 are outlined.
In the embodiment of FIGS. 9(a) and 9(b), if the indicator 132 of the slider bar 130 is moved between time 0 and +2 minutes, the color of the image 120 fades between red at time 0 and green at +2 minutes. The displayed image may also fade between the image shown at time 0 and the image shown at +2 minutes.
The user can see at a glance whether the deviation is persistent. The user can move the indicator 132 back and forth along the slider bar 130 to switch between the baseline scan and the 2nd scan. The use of color allows the user to easily distinguish between the baseline scan and the 2nd scan, and the color may fade gradually between time points. Together, the slider bar 130 and the varying color provide a way of visually representing the eye deviation in both the baseline scan and the 2nd scan.
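The fade between the registered, colored scans can be sketched as linear alpha blending of two rendered RGB images, weighted by the slider position. The arrays below merely stand in for the registered renderings at time 0 (red shading) and +2 minutes (green shading).

```python
import numpy as np

def blend_scans(img_t0: np.ndarray, img_t1: np.ndarray, slider: float) -> np.ndarray:
    """Linearly fade between two registered RGB renderings.
    slider = 0.0 shows the time-0 image, slider = 1.0 shows the later image."""
    slider = float(np.clip(slider, 0.0, 1.0))
    return (1.0 - slider) * img_t0 + slider * img_t1

# Stand-ins for registered renderings (height x width x RGB): red vs. green shading.
h, w = 64, 64
img_red = np.zeros((h, w, 3)); img_red[..., 0] = 1.0
img_green = np.zeros((h, w, 3)); img_green[..., 1] = 1.0

halfway = blend_scans(img_red, img_green, slider=0.5)
print(halfway[0, 0])  # an even mix of the two colorings: [0.5, 0.5, 0.0]
```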
In other embodiments, there is no fading between images. In several embodiments, the image at time 0 may be registered to the image at +2 minutes. In still other embodiments, any suitable visual method may be used to combine the image at time 0 with the image at +2 minutes, or to transition between them.
In several cases, the baseline scan and the 2nd scan may be followed by a follow-up scan taken at a longer interval after the 2nd scan. A follow-up scan may not be urgent, but may be used to confirm whether the acute condition has resolved.
FIG. 9(c) shows an example of the follow-up scan. The follow-up scan is registered to the baseline scan. The image 120 shows the follow-up scan when the user slides the indicator 132 of the slider bar 130 to the time of the follow-up scan. In FIG. 9(c), the time of the follow-up scan is +23 hours, that is, 23 hours after the baseline scan. The slider bar 130 need not use a linear scale; instead, the scale of the slider bar 130 may be chosen so that the scan times 0, +2 minutes, and +23 hours are easily distinguishable while appearing on the same bar.
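A non-linear slider scale of the kind described can be produced by spacing the ticks by, for example, the logarithm of elapsed time rather than by raw minutes, so that 0, +2 minutes, and +23 hours remain distinguishable on one bar. The log spacing is an assumed choice; any monotone mapping would do.

```python
import math

def tick_positions(times_min: list[float]) -> list[float]:
    """Map scan times (minutes from baseline) to slider positions in [0, 1]
    using log(1 + t), so widely separated times stay distinguishable."""
    scaled = [math.log1p(t) for t in times_min]
    span = (scaled[-1] - scaled[0]) or 1.0
    return [(s - scaled[0]) / span for s in scaled]

# Baseline, +2 minutes, +23 hours (= 1380 minutes).
print(tick_positions([0.0, 2.0, 1380.0]))  # approx. [0.0, 0.15, 1.0]
```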
In the follow-up scan of FIG. 9(c), there is no eye deviation. The eyes are colored blue and are not shaded (blue is not shown in FIG. 9(c)). A blue minus sign is displayed at +23 hours on the slider bar 130.
By sliding the indicator 132 along the slider bar 130, the user can switch between the two or more scans to check eye deviation at multiple time points. The image may also fade between the different scans.
FIGS. 10(a) and 10(b) show the same display as FIGS. 9(a) and 9(b), except that the deviation is not persistent (DeyeCOM +/-). FIG. 10(a) shows the image 120 at time 0. FIG. 10(a) shows a baseline scan, for example an NCCT scan, and is similar to FIG. 9(a). The indicator 132 of the slider bar 130 is at time 0. Eye deviation is indicated by red and white shading of the eyeballs 122, 124 (red is not shown in FIG. 10(a)). The shading is aligned with the gaze direction. Eye deviation is also indicated by a plus sign at time 0.
In FIG. 10(b), the 2nd scan at time +2 minutes, for example a CTA scan, is shown in the image 120. The 2nd scan is registered to the baseline scan of FIG. 10(a). The indicator 132 of the slider bar 130 is at time +2 minutes.
In FIG. 10(b), there is no eye deviation. The eyeballs are shaded in green, which represents the time point (green is not shown in FIG. 10(b)). A grayed-out version of the previous shading is shown on the eyes. The grayed-out shading indicates to the user that there was previously an eye deviation but that it is no longer present. In other embodiments, any suitable method of visually representing the gaze direction, or a change in gaze direction, may be used.
A minus sign is shown at +2 minutes on the slider bar 130.
If the user moves the indicator 132 of the slider bar 130 between time 0 and +2 minutes, the color fades between the colored scans so that the transition in gaze direction between the red-and-white shaded image at time 0 and the green-and-gray shaded image at +2 minutes can be seen. Fading the color between the colored scans allows the change in gaze direction to be confirmed on the user interface.
FIG. 10(c) shows the follow-up scan in the image 120. The follow-up scan is registered to the baseline scan of FIG. 10(a) and was taken at +23 hours. The follow-up scan shows no eye deviation. The eyeballs are shaded in blue (blue is not shown in FIG. 10(c)), and a blue minus sign is shown at +23 hours on the slider bar 130.
In the embodiments of FIGS. 9(a) to 10(c), the data are registered, and the eyes and lenses are segmented and given colors that distinguish the scans. When the indicator 132 of the slider bar 130 is moved between time points, the image 120 fades between the images of the different scans.
In the examples of FIGS. 9(a) to 9(c), scans at specific times (0, +2 minutes, +23 hours) are used, but in other embodiments the scans may be acquired at any appropriate times. Any suitable time intervals and time ranges may be displayed.
In several embodiments, the geometry of the eye is morphed across scans so that the motion is shown as a deformation. For example, the rendered lens may move from a 1st position in the baseline scan to a 2nd position in the 2nd scan. The rendered lens may be moved continuously so that the eye appears to rotate between the 1st and 2nd positions.
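The morphing of the eye geometry can be sketched as interpolating the gaze angle, and hence the rendered lens position on the eyeball outline, between two scans. The circular-eye parameterization below is purely illustrative.

```python
import math

def lens_position(center: tuple[float, float], radius: float, gaze_deg: float) -> tuple[float, float]:
    """Place the rendered lens on a circular eyeball outline at the gaze angle
    (0 degrees = straight ahead, measured in the axial plane)."""
    theta = math.radians(gaze_deg)
    return (center[0] + radius * math.sin(theta), center[1] + radius * math.cos(theta))

def morph_gaze(angle_t0_deg: float, angle_t1_deg: float, t: float) -> float:
    """Interpolate the gaze angle between the baseline and the 2nd scan (t in [0, 1])."""
    return (1.0 - t) * angle_t0_deg + t * angle_t1_deg

center, radius = (0.0, 0.0), 12.0
for t in (0.0, 0.5, 1.0):
    angle = morph_gaze(-20.0, -18.0, t)  # example: persistent left deviation
    print(t, lens_position(center, radius, angle))
```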
In several embodiments, if the user performs a triggering action, the display automatically cycles through the views of the two or more scans. For example, the triggering action may be hovering over the image 120, or hovering over the slider bar 130. The images of the two or more scans may be cycled back and forth automatically. The automatic cycling may include fading between images, and may include morphing between images. In several embodiments, the faded or morphed sequence may be automatically saved as a screen capture. The resulting animation may be stored in the data storage unit 20 or any other suitable data storage, and may be saved to the PACS.
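The automatic back-and-forth cycling triggered by hovering can be sketched as a ping-pong sequence of blend weights, each frame being a fade (as in the earlier blending sketch) between the registered renderings. The frame count is an arbitrary assumption; the resulting frames could then be written out as an animation for storage or PACS export.

```python
import numpy as np

def ping_pong_weights(steps: int = 10) -> list[float]:
    """Blend weights that run 0 -> 1 -> back toward 0, giving a back-and-forth cycle."""
    forward = list(np.linspace(0.0, 1.0, steps))
    return forward + forward[-2:0:-1]

def cycle_frames(img_t0: np.ndarray, img_t1: np.ndarray, steps: int = 10) -> list[np.ndarray]:
    """Build the frames of one automatic cycle by fading between two renderings."""
    return [(1.0 - w) * img_t0 + w * img_t1 for w in ping_pong_weights(steps)]

img_a = np.zeros((32, 32, 3)); img_a[..., 0] = 1.0  # stand-in for the time-0 rendering
img_b = np.zeros((32, 32, 3)); img_b[..., 1] = 1.0  # stand-in for the +2 minute rendering
frames = cycle_frames(img_a, img_b)
print(len(frames))  # 18 frames for steps=10
```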
In the embodiments of FIGS. 9(a) to 10(c), the segmented lenses and eyeballs are outlined in the image 120 using solid lines. In other embodiments, the outlines of the lenses and/or eyeballs may be drawn with a different line style for each scan. For example, the lenses and eyeballs in the baseline scan may be outlined with solid lines, and those in the 2nd scan with dashed lines.
The methods described above with reference to FIGS. 2 and 3, and the interface described above with reference to FIGS. 4 to 10(c), can be used to process and display any appropriate data. For example, data from any suitable medical device or devices may be used, such as CT data, cone-beam CT data, X-ray data, ultrasound data, MR data, PET data, or SPECT data. Any suitable 1st and 2nd scans may be used, of any suitable patient or other subject. The displayed images and data may be evaluated by any suitable user, such as a clinician or researcher. In some embodiments, instead of images or representations of both of the patient's eyes, images or representations of only one eye may be displayed.
Certain embodiments provide a medical imaging device that presents clinically relevant information for flagging potential LVO/thrombectomy patients, at least a portion of which relates to (persistent) eye deviation.
The eye deviation may be automatically detected using an image analysis method in which the relevant part of the anatomy is also identified and displayed to the user.
High-density vessels may be automatically detected using an image analysis method, with the relevant part of the anatomy identified and displayed to the user.
A CTA view optimized for human identification of an occlusion may also be automatically determined and presented to the user.
Other clinically relevant information may also be detected/displayed.
The information to be presented may be determined and presented to the user in summary form (for example, as a report or other non-interactive view).
The presentation may be provided as a configurable user interface (UI).
Notifications may also be sent to the user (for example, to a smartphone, together with summary information).
The worklist may be prioritized according to the automatically detected information.
Direct LVO detection or other CADe may also be included in the display.
Derived measurements related to gaze deviation or any other clinical information may also be displayed.
The data may be registered, and the eyes and lenses may be segmented and given colors that distinguish each scan.
The user interface can be arranged to fade between the colored scans so that the transition in gaze direction can be seen.
The geometry of the eye may also be morphed across the scans to show the motion, instead of fading between the scans.
The sequence may also cycle back and forth automatically when the user hovers over the view.
The faded/morphed sequence may also be automatically saved to the PACS as a screen capture.
In a report, the outlines of the lenses and eyes from the two scans may be drawn with a different line style for each scan (for example, solid lines for one scan and dashed lines for the other).
One embodiment provides an image display device (medical image display device) including a processing circuit. The processing circuit is configured to receive medical image data including at least an eye of a patient and to control a display mode of the medical image data based on a deviation angle of the patient's eye.
The processing circuit may be further configured to receive a plurality of medical image data sets scanned at at least two different times and to control a display mode of the plurality of medical image data sets based on those times.
The processing circuit may be further configured to receive a plurality of medical image data sets scanned at at least two different times and to control a display mode of the plurality of medical image data sets based on the interval between those times.
One embodiment provides a medical data processing apparatus including a processing circuit. The processing circuit is configured to: receive 1st medical imaging data representing a 1st scan of a subject; automatically process the 1st medical imaging data to determine a 1st gaze direction of the subject; receive 2nd medical imaging data representing a subsequent 2nd scan of the subject; automatically process the 2nd medical imaging data to determine a 2nd gaze direction of the subject; determine, using the 1st and 2nd gaze directions, whether persistent eye deviation has occurred; determine whether the subject is a candidate for potential large vessel occlusion (LVO) or thrombus removal, the determination depending on whether persistent eye deviation has occurred; and, when the subject is determined to be a candidate for potential LVO or thrombus removal, notify the user that the subject is such a candidate.
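A high-level sketch of the processing flow summarized above, assuming that gaze-direction estimation and notification transport already exist elsewhere; the helper functions and the threshold here are placeholders, not the apparatus's actual implementation.

```python
from typing import Callable

DEVIATION_THRESHOLD_DEG = 10.0  # assumed threshold for calling a gaze direction "deviated"

def is_persistent_deviation(gaze1_deg: float, gaze2_deg: float) -> bool:
    """Persistent deviation: both scans deviated beyond the threshold, to the same side."""
    def side(angle: float) -> int:
        if abs(angle) < DEVIATION_THRESHOLD_DEG:
            return 0
        return -1 if angle < 0 else 1
    return side(gaze1_deg) != 0 and side(gaze1_deg) == side(gaze2_deg)

def triage(scan1, scan2,
           estimate_gaze: Callable[[object], float],
           notify: Callable[[str], None]) -> bool:
    """Process the 1st and 2nd scans, decide candidacy from persistent eye
    deviation, and notify the user if indicated."""
    gaze1 = estimate_gaze(scan1)  # placeholder: e.g. a segmentation-based estimator
    gaze2 = estimate_gaze(scan2)
    candidate = is_persistent_deviation(gaze1, gaze2)
    if candidate:
        notify("Potential LVO / thrombus removal candidate - urgent evaluation")
    return candidate

# Toy run with hard-coded gaze angles standing in for real estimators and scan data.
print(triage("NCCT", "CTA", estimate_gaze=lambda scan: -15.0, notify=print))
```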
The processing circuit may be further configured to display a 1st image of the eyeball region of the subject rendered based on the 1st medical imaging data and/or a 2nd image of the eyeball region of the subject rendered based on the 2nd medical imaging data.
The processing circuit may be further configured to process the 1st medical imaging data and/or the 2nd medical imaging data to obtain at least one image feature, and may be further configured to determine whether the subject is a candidate for potential LVO or thrombus removal based on the at least one image feature.
The at least one image feature may comprise at least one of: a non-contrast image feature, a high-density vessel, a hyperdense artery sign (HAS), and a susceptibility vessel sign (SVS).
The processing circuit may be further configured to display at least one image representing an anatomical region that includes the at least one image feature, the at least one image being rendered from the 1st medical imaging data and/or the 2nd medical imaging data.
The processing circuit may be further configured to select and display at least one image for user evaluation, the at least one image being selected to enable a person to identify the occlusion.
The at least one image used for evaluation may comprise a plurality of Computed Tomography Angiography (CTA) views.
The at least one image used for evaluation may comprise multiple stroke views.
The processing circuit may be further configured to display a plurality of criteria for LVO/thrombus removal to a user.
The processing circuit may be further configured to automatically determine whether the subject satisfies at least some of the plurality of criteria, and to display to a user an indication of whether the subject satisfies each criterion.
The processing circuit may be further configured to prioritize the workflow based on the determination of persistent eye deviation and/or based on other information obtained by processing the 1st medical imaging data and/or the 2nd medical imaging data.
Notifying the user may include sending a notification to a mobile device, such as a smartphone.
The 1st scan may be a non-contrast CT (NCCT) scan and the 2nd scan a contrast CT scan. Alternatively, both the 1st and 2nd scans may be NCCT scans, or both may be MRI scans. Alternatively, the 1st scan may be an optical imaging process and the 2nd scan a CT scan or an MRI scan, or the 1st scan may be a video imaging process and the 2nd scan a CT scan or an MRI scan.
The processing circuit may be further configured to automatically process the 1st medical imaging data and/or the 2nd medical imaging data to detect an LVO. Whether the subject is a candidate for potential LVO or thrombus removal may also be determined based on the detection of the LVO.
While specific circuitry is described in this specification, in alternative embodiments the functionality of one or more of these circuits can be provided by a single processing resource or other component, or the functionality of a single circuit can be provided by two or more processing resources or other components in combination. Reference to a single circuit encompasses multiple components that together provide its functionality, whether or not those components are separate from one another; reference to multiple circuits encompasses a single component that provides their functionality.
According to at least one of the embodiments described above, the visibility of the gaze direction in a medical image can be improved.
While specific embodiments have been described above, these embodiments are merely illustrative and are not intended to limit the scope of the present invention. The novel embodiments described above may be implemented in various other ways, and various omissions, substitutions, and changes may be made to the embodiments described above without departing from the spirit of the invention. The appended claims and their equivalents are intended to cover such forms or modifications as would be included within the scope and spirit of the invention.
Claims (20)
1. A medical image display device comprising:
an acquisition unit that acquires a medical image including at least one eyeball of a subject;
a gaze detection unit that detects a gaze direction of the eyeball included in the medical image; and
a display control unit that determines a display mode of the medical image based on the gaze direction of the eyeball and displays the medical image on a display unit in the determined display mode.
2. The medical image display device according to claim 1, wherein
the gaze detection unit determines whether the gaze direction of the eyeball is deviated to the left or to the right, and
the display control unit determines the display mode of the medical image to be a 1st display mode when it is determined that the gaze direction is deviated, and determines the display mode of the medical image to be a 2nd display mode different from the 1st display mode when it is determined that the gaze direction is not deviated.
3. The medical image display device according to claim 1, wherein
the acquisition unit acquires a plurality of medical images obtained at at least two different times, and
the display control unit determines the display mode of the medical images based on the times at which the plurality of medical images were obtained, or on the interval between those times.
4. The medical image display device according to claim 1, wherein
the acquisition unit acquires a plurality of medical images obtained at at least two different times,
the gaze detection unit determines the gaze direction of the eyeball for each of the plurality of medical images, and
the display control unit determines the display mode of the medical images based on the gaze direction determined for each of the plurality of medical images.
5. The medical image display device according to claim 1, wherein
the gaze detection unit determines that the eyeball has a persistent gaze deviation when the gaze deviation determined at a 1st time coincides with the gaze deviation determined at a 2nd time, and
the display control unit determines the display mode of the medical image to be a 1st display mode when it is determined that the eyeball has a persistent gaze deviation, and determines the display mode of the medical image to be a 2nd display mode different from the 1st display mode when it is determined that the eyeball does not have a persistent gaze deviation.
6. The medical image display device according to claim 1, further comprising:
a feature detection unit that detects at least one imaging feature included in the medical image, wherein
the display control unit further determines the display mode of the medical image based on the detected at least one imaging feature.
7. The medical image display device according to claim 6, wherein
the at least one imaging feature comprises at least one of a high-density blood vessel feature, an arterial high-density feature, and a magnetically sensitive blood vessel feature.
8. The medical image display device according to claim 6, wherein
the acquisition unit acquires the medical image obtained by a non-contrast scan, and
the feature detection unit detects the at least one imaging feature from the medical image obtained by the non-contrast scan.
9. The medical image display device according to claim 6, wherein
the display control unit determines a display mode of the medical image for displaying the anatomical region in which the at least one imaging feature was detected.
10. The medical image display device according to claim 2, wherein
the 1st display mode includes displaying a collection of CTA views, and
the 2nd display mode includes displaying the medical image obtained by a non-contrast scan.
11. The medical image display device according to claim 6, further comprising:
a notification unit that issues a notification to an external device in accordance with the gaze direction of the eyeball or the detected at least one imaging feature.
12. The medical image display device according to claim 1, further comprising:
a notification unit that detects a large vessel occlusion based on the medical image.
13. The medical image display device according to claim 1, further comprising:
an image processing unit that renders an image representing a region including the eyeball based on the medical image, wherein
the display control unit displays the image representing the region including the eyeball on the display unit.
14. The medical image display device according to claim 3, wherein
the gaze detection unit segments a region of at least one eyeball included in each of the plurality of medical images, which are acquired at at least two different times and registered together, and
the display control unit displays the at least one segmented eyeball on the display unit.
15. The medical image display device according to claim 14, wherein
the display control unit performs at least one of:
displaying a segmented lens of the at least one eyeball; and
highlighting the gaze direction of the at least one segmented eyeball.
16. The medical image display device according to claim 13, wherein
the image processing unit renders an image representing a region including the eyeball based on each of a plurality of medical images obtained at at least two different times, and
the display control unit displays the region including the eyeball with a different visual effect for each of the at least two different times.
17. The medical image display device according to claim 16, wherein
the display control unit performs at least one of:
using, as the different visual effects, a different color for the segmented eyeball at each of the at least two different times;
outlining, as the different visual effects, the segmented eyeball with a different line type at each of the at least two different times; and
fading between the different visual effects for the at least two different times in response to an input from a user.
18. The medical image display device according to claim 13, wherein
the image processing unit renders an image representing a region including the eyeball based on each of a plurality of medical images obtained at at least two different times, and
the display control unit performs at least one of:
determining, as the display mode of the medical image, a display in which the geometry of the at least one eyeball is morphed between the at least two different times;
determining, as the display mode of the medical image, a display in which the image representing the at least one eyeball is cycled between the at least two different times in response to an input from a user; and
rendering and outputting a moving image in which the image representing the at least one eyeball is cycled between the at least two different times.
19. A medical image display method comprising:
acquiring a medical image including at least one eyeball of a subject;
detecting a gaze direction of the eyeball included in the medical image; and
determining a display mode of the medical image based on the gaze direction of the eyeball, and displaying the medical image on a display unit in the determined display mode.
20. A computer-readable non-volatile storage medium storing a program for causing a computer to execute:
acquiring a medical image including at least one eyeball of a subject;
detecting a gaze direction of the eyeball included in the medical image; and
determining a display mode of the medical image based on the gaze direction of the eyeball, and displaying the medical image on a display unit in the determined display mode.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/180,098 US20220265183A1 (en) | 2021-02-19 | 2021-02-19 | Method and apparatus for displaying medical images |
| US17/180,098 | 2021-02-19 | | |
| JP2021113558A JP2022127566A (en) | 2021-02-19 | 2021-07-08 | Medical image display device, medical image display method and program |
| JP2021-113558 | 2021-07-08 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114947912A (en) | 2022-08-30 |
Family
ID=82974616
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111441480.1A CN114947912A (en) (Pending) | 2021-02-19 | 2021-11-30 | Medical image display device, medical image display method, and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114947912A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |