
WO2017030276A1 - Medical image display device and medical image processing method - Google Patents


Info

Publication number
WO2017030276A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image, image, medical, anatomical, region
Prior art date
Application number
PCT/KR2016/006087
Other languages
French (fr)
Korean (ko)
Inventor
Woo-hyun Nam (남우현)
Ji-hoon Oh (오지훈)
Yong-sup Park (박용섭)
Jae-sung Lee (이재성)
Original Assignee
Samsung Electronics Co., Ltd. (삼성전자(주))
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020160044817A (KR102522539B1)
Application filed by Samsung Electronics Co., Ltd. (삼성전자(주))
Priority to EP16837218.3A (EP3338625B1)
Priority to US15/753,051 (US10682111B2)
Publication of WO2017030276A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/05: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055: Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves

Definitions

  • the present invention relates to a medical image display apparatus for displaying a screen including a medical image and a medical image processing method thereof.
  • the medical image display apparatus is a device for acquiring an internal structure of an object as an image.
  • the medical image display device is a non-invasive examination device that photographs and processes structural details, internal tissues, and fluid flow in the body and shows them to the user.
  • a user such as a doctor may diagnose a patient's health condition and a disease by using the medical image output from the medical image display apparatus.
  • Medical image display devices include magnetic resonance imaging (MRI) devices, computed tomography (CT) devices, X-ray devices, ultrasound devices, and the like.
  • a magnetic resonance imaging apparatus photographs a subject using a magnetic field, and is widely used for accurate disease diagnosis because it shows bones, disks, joints, nerve ligaments, and the like in three dimensions at a desired angle.
  • the magnetic resonance imaging apparatus acquires a magnetic resonance (MR) signal by using a high frequency multi-coil including RF coils, a permanent magnet, a gradient coil, and the like.
  • the magnetic resonance image is reconstructed by sampling the magnetic resonance signal (MR signal).
  • a computed tomography (CT) device can provide a cross-sectional image of the object and, unlike a general X-ray device, shows the internal structure of the object (for example, organs such as the kidneys and lungs) without overlap, so it is widely used for the precise diagnosis of disease.
  • the computed tomography apparatus radiates X-rays to the object and detects the X-rays passing through the object. The image is then reconstructed using the detected X-rays.
  • the X-ray imaging apparatus images the inside of the object by radiating X-rays to the object and detecting X-rays passing through the object.
  • the ultrasound apparatus transmits an ultrasound signal to the object and receives an ultrasound signal reflected from the object to form a two-dimensional or three-dimensional ultrasound image of the object of interest in the object.
  • the medical images acquired by the various medical image display apparatuses express the object in various ways according to the type and the photographing method of the medical image display apparatus.
  • the doctor reads the medical image to determine whether the patient has an illness or a health abnormality. It is therefore necessary to provide a medical image display device that facilitates the doctor's diagnosis so that a medical image suitable for diagnosing the patient can be selected and read.
  • a medical image display apparatus includes: a display unit configured to display a first medical image of an object including at least one anatomical object; and at least one processor configured to extract reference region information corresponding to the anatomical object from at least one second medical image that is a reference image of the first medical image, to detect a region corresponding to the anatomical object in the first medical image based on the extracted reference region information, and to control the display unit to display the detected region of the anatomical object separately from the region that is not the anatomical object.
  • the processor generates a third medical image, including display area information on the anatomical object detected in the first medical image, by registering the first medical image with the second medical image, and controls the display unit to display, based on the display area information, the region of the anatomical object detected in the third medical image separately from the region that is not the anatomical object. As a result, an individual object can be displayed in the image generated using medical image registration.
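As an illustration only, not part of the disclosed embodiment, such a separate display of a detected region can be sketched as a color overlay; the function name, colors, and blending factor below are all assumptions:

```python
import numpy as np

def overlay_region(gray, mask, color=(255, 0, 0), alpha=0.5):
    """Blend a highlight color into the pixels covered by a binary mask.

    gray : 2-D uint8 image (e.g., the first medical image).
    mask : 2-D bool array marking the detected anatomical region.
    Returns an RGB image in which only masked pixels are tinted.
    """
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    tint = np.array(color, dtype=np.float32)
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * tint
    return rgb.astype(np.uint8)

# Toy example: a 4x4 image with the top-left 2x2 block detected as a vessel.
img = np.full((4, 4), 100, dtype=np.uint8)
m = np.zeros((4, 4), dtype=bool)
m[:2, :2] = True
out = overlay_region(img, m)
```

Only the pixels inside the detected mask are tinted, so the anatomical region stands out from the rest of the image while the background is left unchanged.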
  • the display unit may display the regions of the plurality of anatomical objects separately. Thus, objects that could not be identified in the original image can be distinguished, providing convenience for diagnosis.
  • the plurality of anatomical entities may comprise at least one of blood vessels and lymph nodes.
  • the first medical image may be a non-contrast medical image.
  • the second medical image may be a contrast-enhanced medical image.
  • the contrast-enhanced medical image, in which individual objects are easy to distinguish by segmentation, may be used as a reference image.
  • the second medical image may be a medical image obtained by photographing, at a different time point, the same object from which the first medical image is acquired.
  • the past history of the patient can be utilized for the present diagnosis.
  • the detected region of the anatomical object may be displayed separately from the region that is not the anatomical object by at least one of color, pattern, pointer, highlight, and animation effect.
  • various division display functions are provided according to the user's preference.
  • the division display of the region of the anatomical object may be activated or deactivated by user selection.
  • user selection convenience for the object display function is thereby provided.
  • the apparatus may further include a user input unit configured to receive a user input, and the processor may control the display unit to adjust the level of the division display of the anatomical object in response to the user input.
  • the processor may further detect a lesion extension region within the region of the anatomical object, and control the display unit so that the lesion extension region detected within the region of the anatomical object is identifiably displayed.
  • the information on the progress of the lesion is further provided to provide convenience of the diagnosis of the lesion.
  • the processor may extract reference region information corresponding to the anatomical object by using the brightness values of the pixels of the second medical image. As a result, necessary information can be obtained by efficiently utilizing the information of the previously stored image.
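A minimal sketch of such brightness-based extraction, assuming a NumPy array of pixel values and an illustrative intensity band (both are assumptions, not taken from this document):

```python
import numpy as np

def extract_reference_region(contrast_img, low, high):
    """Return a binary mask of pixels whose brightness falls in [low, high].

    In a contrast-enhanced image, vessels that take up contrast agent appear
    within a characteristic intensity band, so a simple band threshold is a
    first approximation to the brightness-based extraction described here.
    """
    return (contrast_img >= low) & (contrast_img <= high)

# Toy contrast-enhanced image: bright pixels (>= 200) stand in for vessels.
enhanced = np.array([[30, 200, 210],
                     [40, 220, 50],
                     [35, 45, 205]])
vessel_mask = extract_reference_region(enhanced, 200, 255)
```

The resulting mask is the kind of reference region information that can then be transferred to the non-contrast first medical image by registration.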
  • the processor may perform image registration using a predetermined conversion model parameter so that the result value of the similarity measurement function between the first medical image and the second medical image is maximized.
  • the processor may perform image registration using a predetermined conversion model parameter such that the result value of the cost function of the first medical image and the second medical image is minimized. As a result, the possibility of error in the registered image generated by the image registration process is lowered.
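For illustration, cost-minimizing registration can be sketched as a brute-force search over a simple translation model; the sum-of-squared-differences (SSD) cost and the search range are assumptions standing in for the unspecified cost function and conversion model:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Find the integer (dy, dx) shift of `moving` that minimizes the
    sum-of-squared-differences cost against `fixed`.

    The 'conversion model' here is a pure translation; minimizing SSD is
    equivalent to maximizing a similarity measure for images of
    comparable intensity.
    """
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = float(np.sum((fixed - shifted) ** 2))
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost

# Synthetic example: the moving image is a misaligned copy of the fixed one.
fixed = np.zeros((8, 8)); fixed[3:5, 3:5] = 1.0
moving = np.roll(np.roll(fixed, 2, axis=0), 1, axis=1)
shift, cost = register_translation(fixed, moving)
```

A practical implementation would use a richer transform (rigid, affine, or deformable) and a gradient-based optimizer, but the search-for-minimum-cost structure is the same.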
  • the processor may map a coordinate system between the first medical image and the second medical image, and perform homogeneous registration on the first medical image and the second medical image to which the coordinate system is mapped, while maintaining the image characteristics of the second medical image. Thus, a registered image is provided in which lymph nodes and blood vessels are separately displayed.
  • the processor may further perform a heterogeneous registration on the first medical image and the second medical image on which the homogeneous registration has been performed, by modifying the image characteristics of the second medical image to completely match the first medical image.
  • the medical image processing method comprises the steps of: displaying a first medical image photographed of an object including at least one anatomical object; extracting reference region information corresponding to the anatomical object from at least one second medical image that is a reference image of the first medical image; detecting a region corresponding to the anatomical object in the first medical image based on the extracted reference region information; and displaying the detected region of the anatomical object separately from the region that is not the anatomical object.
  • a division display function, using information extracted from the reference image, is provided for an object that has not been identified in the medical image.
  • the region of the anatomical object detected in the third medical image generated based on the display region information may be displayed separately from the region that is not the anatomical object. As a result, an individual object can be displayed in the image generated using medical image registration.
  • the anatomical objects are plural, and the step of displaying the anatomical objects separately may include displaying the regions of the plurality of anatomical objects separately. Thus, objects that could not be identified in the original image can be distinguished, providing convenience for diagnosis.
  • the plurality of anatomical entities may comprise at least one of blood vessels and lymph nodes.
  • the first medical image may be a non-contrast medical image.
  • the second medical image may be a contrast-enhanced medical image.
  • the contrast-enhanced medical image, in which individual objects are easy to distinguish by segmentation, may be used as a reference image.
  • the second medical image may be a medical image obtained by photographing, at a different time point, the same object from which the first medical image is acquired.
  • the past history of the patient can be utilized for the present diagnosis.
  • the distinguishing display may be performed by distinguishing the detected region of the anatomical object from the region that is not the anatomical object by at least one of color, pattern, pointer, highlight, and animation effect.
  • various division display functions are provided according to the user's preference.
  • the method may further include receiving a user selection for activating or deactivating the division display of the anatomical object.
  • the user selection convenience for the object display function may be provided.
  • the method may further include receiving a user input for adjusting the level of the division display of the anatomical object.
  • the method may further include detecting a lesion extension region within the separately displayed region of the anatomical object, and causing the detected lesion extension region to be distinguishably displayed.
  • the information on the progress of the lesion is further provided to provide convenience of the diagnosis of the lesion.
  • the reference region information corresponding to the anatomical object may be extracted using the brightness values of the pixels of the second medical image. As a result, necessary information can be obtained by efficiently utilizing the information of the previously stored image.
  • image registration may be performed so that a result value of the similarity measurement function between the first medical image and the second medical image is maximized using a predetermined conversion model parameter.
  • image registration may be performed such that the result value of the cost function of the first medical image and the second medical image is minimized using a predetermined conversion model parameter. As a result, the possibility of error in the registered image generated by the image registration process is lowered.
  • the generating of the third medical image comprises: mapping coordinate systems of the first medical image and the second medical image;
  • the method may include performing homogeneous registration on the first medical image and the second medical image to which the coordinate system is mapped, while maintaining the image characteristics of the second medical image.
  • a registration image is provided in which lymph nodes and blood vessels are separately displayed.
  • the generating of the third medical image may further include performing heterogeneous registration on the first medical image and the second medical image on which the homogeneous registration has been performed, by modifying the image characteristics of the second medical image to completely match the first medical image. Thus, quantification of lesion expansion in lymph nodes and its results can be provided to the user.
  • lymph node follow-up is possible even in patients with weak renal function, for whom active use of a contrast agent is burdensome.
  • the present embodiment can be applied to non-contrast images for general examination, and can be used for early diagnosis of diseases such as cancer metastasis.
  • FIG. 1 is a view for explaining a medical image display apparatus according to an embodiment of the present invention
  • FIG. 2 is a view schematically showing an MRI apparatus according to an embodiment of the present invention
  • FIG. 3 is a view showing a CT device according to an embodiment of the present invention.
  • FIG. 4 is a view schematically showing the configuration of the CT device of FIG. 3.
  • FIG. 5 is a diagram schematically illustrating a configuration of a communication unit that performs communication with an external device in a network system.
  • FIG. 6 is a diagram illustrating a system including a first medical device, a second medical device, and a medical image registration device according to an embodiment of the present invention.
  • FIG. 7 is a diagram conceptually illustrating lymph nodes and blood vessel distribution of a thoracic region.
  • FIG. 8 is a diagram illustrating contrast-enhanced CT images taken of a chest region.
  • FIG. 9 is a diagram illustrating non-contrast CT images taken of a chest region.
  • FIG. 10 is a block diagram showing the configuration of a medical image display apparatus according to an embodiment of the present invention.
  • FIG. 11 is a block diagram illustrating a configuration of an image processor of FIG. 10.
  • FIG. 12 is a view showing a first medical image according to an embodiment of the present invention.
  • FIG. 13 is a view showing a second medical image according to an embodiment of the present invention.
  • FIG. 14 is a diagram illustrating a second medical image from which an object is extracted
  • FIG. 15 is a diagram for explaining an image registration process according to the present embodiment.
  • FIG. 18 is a flowchart illustrating processes of performing a registration process in an embodiment of the present invention.
  • FIG. 19 is a view showing a third medical image according to an embodiment of the present invention.
  • FIG. 21 is an enlarged view of a portion of an object area in FIG. 20.
  • FIG. 22 is a diagram illustrating a screen displayed according to driving of an application having a medical diagnosis function in a medical image display apparatus according to an embodiment of the present invention.
  • FIGS. 23 to 26 illustrate various examples of using image registration for diagnosis in a medical image display apparatus according to an embodiment of the present invention.
  • FIG. 27 is a flowchart illustrating a medical image processing method according to an embodiment of the present invention.
  • “part” refers to a software or hardware component such as an FPGA or an ASIC, and a “part” performs certain roles. However, “part” is not meant to be limited to software or hardware.
  • a “part” may be configured to reside in an addressable storage medium and may be configured to execute on one or more processors.
  • a “part” refers to components such as software components, object-oriented software components, class components, and task components, processes, functions, properties, procedures, Subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays and variables.
  • the functionality provided within the components and “parts” may be combined into a smaller number of components and “parts” or further separated into additional components and “parts”.
  • image refers to multi-dimensional data consisting of discrete image elements (e.g., pixels in a two-dimensional image and voxels in a three-dimensional image).
  • image may include a medical image of an object acquired by X-ray, CT, MRI, ultrasound, and other medical imaging systems.
  • an "object” may include a person or an animal, or a part of a person or an animal.
  • the subject may include organs such as the liver, heart, uterus, brain, breast, abdomen, or blood vessels.
  • the "object” may include a phantom. Phantom means a material having a volume very close to the density and effective atomic number of an organism, and may include a sphere phantom having properties similar to the body.
  • the "user” may be a doctor, a nurse, a clinical pathologist, a medical imaging expert, or the like, and may be a technician who repairs a medical device, but is not limited thereto.
  • FIG. 1 is a view for explaining a medical image display apparatus 100 according to an embodiment of the present invention.
  • the medical image display apparatus 100 may be a device for obtaining a medical image and displaying the medical image on the screen.
  • the medical image display apparatus 100 includes a magnetic resonance imaging apparatus (hereinafter referred to as an MRI apparatus) 101, a computed tomography apparatus (hereinafter referred to as a CT apparatus) 102, an X-ray imaging device (not shown), an angiography device (not shown), an ultrasonic device 103, and the like.
  • the MRI apparatus 101 is an apparatus that obtains an image of a tomographic region of an object by expressing, in contrast, the intensity of a magnetic resonance (MR) signal with respect to a radio frequency (RF) signal generated in a magnetic field of a specific intensity.
  • the CT device 102 may provide a cross-sectional image of the object and, compared to a general X-ray imaging apparatus, may express the internal structure of the object (for example, an organ such as a kidney or a lung) without overlap.
  • the CT device 102 may provide a relatively accurate cross-sectional image of an object by acquiring and processing several tens to hundreds of images, each 2 mm thick or less, per second.
  • the X-ray imaging apparatus refers to a device that transmits X-rays through a human body to image an internal structure of the human body.
  • an angiography apparatus is a device that allows the blood vessels (arteries and veins) of a subject, into which a contrast agent has been injected through a thin tube of about 2 mm called a catheter, to be viewed through X-rays.
  • the ultrasound apparatus 103 transmits an ultrasound signal from the body surface of the object toward a predetermined part of the body, and uses the information of the ultrasound signal reflected from the tissues of the body (hereinafter, also referred to as an ultrasound echo signal) to obtain an image of a cross-section of soft tissue or of blood flow.
  • the medical image display apparatus 100 may be implemented in various forms.
  • the medical image display apparatus 100 described herein may be implemented in the form of a mobile terminal as well as a fixed terminal.
  • a mobile terminal may be a smartphone, a smart pad, a tablet PC, a laptop computer, a PDA, and the like.
  • the medical image display apparatus 100 may exchange medical image data with a hospital server or another medical apparatus in the hospital, connected through a picture archiving and communication system (PACS).
  • the medical image display apparatus 100 may perform data communication with a server or the like according to a digital imaging and communications in medicine (DICOM) standard.
  • the medical image display apparatus 100 may include a touch screen.
  • the touch screen may be configured to detect not only the touch input position and the touched area but also the touch input pressure.
  • the touch screen may be configured to detect proximity touch as well as real-touch.
  • a real touch refers to a case in which the screen is actually touched by the user's body (e.g., a finger) or by a touch tool (e.g., a pointing device, a stylus, a haptic pen, or an electronic pen).
  • a proximity touch refers to a case in which the user's body or a touch tool does not actually touch the screen but approaches within a predetermined distance from the screen (for example, hovering at a detectable spacing of 30 mm or less).
  • the touch screen may be implemented by, for example, a resistive method, a capacitive method, an infrared method, or an acoustic wave method.
  • the medical image display apparatus 100 may detect a gesture input as a user's touch input to the medical image through the touch screen.
  • the touch input of a user described herein includes a tap, a click that presses harder than a tap, touch and hold, a double tap, a double click, drag, drag and drop, slide, flicking, panning, swipe, pinch, and the like.
  • input such as drag, slide, flicking, and swipe may consist of a press in which the finger (or touch pen) contacts the touch screen, a movement over a certain distance, and a release from the touch screen, and includes both straight and curved movements.
  • the various touch inputs above are included in the gesture input.
  • the medical image display apparatus 100 may provide some or all of the buttons for controlling the medical image in the form of a graphical user interface (GUI).
  • FIG. 2 is a diagram schematically showing an MRI apparatus 101 according to an embodiment of the present invention.
  • a magnetic resonance image refers to an image of an object acquired using the nuclear magnetic resonance principle.
  • the MRI apparatus 101 is an apparatus that obtains an image of a tomographic region of an object by expressing, in contrast, the intensity of a magnetic resonance (MR) signal with respect to a radio frequency (RF) signal generated in a magnetic field of a specific intensity. For example, when a subject is placed in a strong magnetic field and then irradiated with an RF signal that resonates only a particular atomic nucleus (e.g., a hydrogen nucleus), an MR signal is emitted from that particular nucleus, and the MRI apparatus 101 may obtain an MR image by receiving the MR signal.
  • the MR signal refers to an RF signal radiated from the object.
  • the magnitude of the MR signal may be determined by the concentration of a predetermined atom (e.g., hydrogen) included in the subject, the T1 relaxation time, the T2 relaxation time, and blood flow.
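The dependence on proton density and the T1/T2 relaxation times can be illustrated with the standard textbook spin-echo signal model (not a formula stated in this document); the tissue and sequence values used are assumed, for demonstration only:

```python
import math

def spin_echo_signal(rho, t1, t2, tr, te):
    """Relative spin-echo signal magnitude (standard textbook model):
    S ~ rho * (1 - exp(-TR/T1)) * exp(-TE/T2).

    rho is the proton density, T1/T2 are the relaxation times mentioned
    in the text, and TR/TE are the sequence repetition and echo times.
    All times must be in the same unit (here, milliseconds).
    """
    return rho * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative (assumed) values: a tissue with T1 = 900 ms, T2 = 100 ms,
# imaged with TR = 500 ms and TE = 20 ms.
s = spin_echo_signal(rho=1.0, t1=900.0, t2=100.0, tr=500.0, te=20.0)
```

Varying T1, T2, or the sequence times changes the signal, which is why tissues with different relaxation properties appear with different contrast.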
  • the MRI apparatus 101 has features different from those of other imaging apparatuses. Unlike imaging devices such as CT, where image acquisition depends on the direction of the detection hardware, the MRI apparatus 101 may acquire a 2D image or a 3D volume image oriented toward an arbitrary point. In addition, unlike CT, X-ray, PET, and SPECT, the MRI apparatus 101 does not expose the subject or the examiner to radiation, and may acquire images having high soft-tissue contrast, so that neurological images, intravascular images, musculoskeletal images, oncologic images, and the like, in which a clear depiction of abnormal tissue is important, can be obtained.
  • the MRI apparatus 101 may include a gantry 220, a signal transceiver 230, a monitor 240, a system controller 250, and an operating unit 260.
  • the gantry 220 blocks electromagnetic waves generated by the main magnet 222, the gradient coil 224, the RF coil 226, and the like from radiating to the outside.
  • a static magnetic field and a gradient magnetic field are formed in the bore in the gantry 220, and an RF signal is irradiated toward the object 210.
  • the main magnet 222, the gradient coil 224 and the RF coil 226 may be disposed along a predetermined direction of the gantry 220.
  • the predetermined direction may include a coaxial cylindrical direction or the like.
  • the object 210 may be positioned on a table 228 that can be inserted into the cylinder along the horizontal axis of the cylinder.
  • the main magnet 222 generates a static magnetic field for aligning the directions of the magnetic dipole moments of the atomic nuclei included in the object 210 in a predetermined direction. The stronger and more uniform the magnetic field generated by the main magnet, the more precise and accurate the MR image of the object 210 that may be obtained.
  • the gradient coil 224 includes X, Y, and Z coils that generate gradient magnetic fields in the X-, Y-, and Z-axis directions that are perpendicular to each other.
  • the gradient coil 224 may induce resonance frequencies differently for each part of the object 210 to provide location information of each part of the object 210.
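As an illustrative sketch (not part of the disclosed apparatus), the position encoding performed by the gradient coil can be expressed as f(x) = γ·(B0 + G·x): superimposing a gradient G on the static field B0 makes each position x along the gradient axis resonate at a distinct Larmor frequency, which is what provides location information. The field and gradient values below are assumed.

```python
GAMMA_HYDROGEN_MHZ_PER_T = 42.577  # gyromagnetic ratio of hydrogen (MHz/T)

def larmor_frequency_mhz(b0_tesla, gradient_mt_per_m=0.0, x_m=0.0):
    """Larmor frequency at position x under static field B0 plus gradient G:
    f(x) = gamma * (B0 + G * x)."""
    b_local = b0_tesla + (gradient_mt_per_m * 1e-3) * x_m
    return GAMMA_HYDROGEN_MHZ_PER_T * b_local

# At 1.5 T with an (assumed) 10 mT/m gradient, positions 1 cm apart
# resonate at measurably different frequencies, allowing localization.
f_center = larmor_frequency_mhz(1.5)
f_offset = larmor_frequency_mhz(1.5, gradient_mt_per_m=10.0, x_m=0.01)
print(f"center: {f_center:.4f} MHz, +1 cm: {f_offset:.4f} MHz")
```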
  • the RF coil 226 may radiate an RF signal to the patient and receive an MR signal emitted from the patient.
  • the RF coil 226 may transmit, toward the patient, an RF signal having the same frequency as the precession frequency of the precessing atomic nuclei, then stop transmitting the RF signal and receive the MR signal emitted from the patient.
  • the RF coil 226 generates an electromagnetic signal, for example, an RF signal, having a radio frequency corresponding to the type of the atomic nucleus in order to transition the atomic nucleus from a low energy state to a high energy state.
  • when an electromagnetic signal generated by the RF coil 226 is applied to a certain atomic nucleus, the atomic nucleus may transition from a low energy state to a high energy state. Thereafter, when the electromagnetic wave generated by the RF coil 226 disappears, the atomic nucleus to which the electromagnetic wave was applied may radiate an electromagnetic wave having a Larmor frequency while transitioning from the high energy state to the low energy state.
  • the RF coil 226 may receive an electromagnetic wave signal radiated from atomic nuclei inside the object 210.
  • the RF coil 226 may be implemented as one RF transmission/reception coil having both a function of generating an electromagnetic wave having a radio frequency corresponding to the type of the atomic nucleus and a function of receiving the electromagnetic wave radiated from the atomic nucleus. Alternatively, it may be implemented as a transmission RF coil having the function of generating the electromagnetic wave and a reception RF coil having the function of receiving the electromagnetic wave radiated from the atomic nucleus.
  • the RF coil 226 may be fixed to the gantry 220 or may be detachable.
  • the detachable RF coil 226 may include RF coils for parts of the object, such as a head RF coil, a chest RF coil, a leg RF coil, a neck RF coil, a shoulder RF coil, a wrist RF coil, and an ankle RF coil.
  • the RF coil 226 may communicate with an external device in a wired and / or wireless manner, and may also perform dual tune communication according to a communication frequency band.
  • the RF coil 226 may include a birdcage coil, a surface coil, and a transverse electromagnetic (TEM) coil according to the structure of the coil.
  • the RF coil 226 may include a transmission-only coil, a reception-only coil, and a transmission / reception combined coil according to an RF signal transmission / reception method.
  • the RF coil 226 may include RF coils of various channels, such as 16 channels, 32 channels, 72 channels, and 144 channels.
  • hereinafter, a case in which the RF coil 226 is a radio-frequency multi-coil including N coils corresponding to first through Nth channels, that is, a plurality of channels, will be described as an example.
  • the high frequency multi-coil may be referred to as a multichannel RF coil.
  • the gantry 220 may further include a display 229 positioned outside the gantry 220 and a display (not shown) positioned inside the gantry 220. Predetermined information may be provided to a user or an object through a display positioned inside and / or outside the gantry 220.
  • the signal transceiver 230 may control the gradient magnetic field formed in the gantry 220, that is, the bore according to a predetermined MR sequence, and may control the transmission and reception of the RF signal and the MR signal.
  • the signal transceiver 230 may include a gradient magnetic field amplifier 232, a transceiver switch 234, an RF transmitter 236, and an RF data acquirer 238.
  • the gradient amplifier 232 drives the gradient coil 224 included in the gantry 220 and, under the control of the gradient magnetic field controller 254, may supply a pulse signal for generating a gradient magnetic field to the gradient coil 224. By controlling the pulse signal supplied from the gradient amplifier 232 to the gradient coil 224, gradient magnetic fields in the X-axis, Y-axis, and Z-axis directions may be synthesized.
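The synthesis of gradient fields mentioned here can be sketched as a vector sum of the per-axis gradients: driving the X, Y, and Z coils simultaneously yields a net gradient along an arbitrary oblique direction, which is one reason an MRI apparatus can image planes of arbitrary orientation. The amplitudes below are assumed for illustration.

```python
import numpy as np

def synthesize_gradient(gx, gy, gz):
    """Combine per-axis gradient amplitudes (mT/m) into one oblique
    gradient: returns its magnitude and unit direction vector."""
    g = np.array([gx, gy, gz], dtype=float)
    magnitude = float(np.linalg.norm(g))
    direction = g / magnitude if magnitude > 0 else g
    return magnitude, direction

# Driving the X and Y coils together (assumed 3 and 4 mT/m) produces a
# single 5 mT/m gradient along an oblique in-plane axis.
mag, direction = synthesize_gradient(3.0, 4.0, 0.0)
print(mag, direction)
```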
  • the RF transmitter 236 and the RF data acquirer 238 may drive the RF coil 226.
  • the RF transmitter 236 may supply an RF pulse of a Larmor frequency to the RF coil 226, and the RF data acquirer 238 may receive an MR signal received by the RF coil 226.
  • the transmission / reception switch 234 may adjust the transmission and reception directions of the RF signal and the MR signal.
  • the RF signal may be irradiated to the object 210 through the RF coil 226 during the transmission mode, and the MR signal may be received from the object 210 through the RF coil 226 during the reception mode.
  • the transmission / reception switch 234 may be controlled by a control signal from the RF control unit 256.
  • the monitoring unit 240 may monitor or control the gantry 220 or the devices mounted on the gantry 220.
  • the monitoring unit 240 may include a system monitoring unit 242, an object monitoring unit 244, a table control unit 246, and a display control unit 248.
  • the system monitoring unit 242 may monitor and control the state of the static magnetic field, the state of the gradient magnetic field, the state of the RF signal, the state of the RF coil, the state of the table, the state of a device measuring body information of the object, the state of the power supply, the state of a heat exchanger, and the state of a compressor.
  • the object monitoring unit 244 monitors the state of the object 210.
  • the object monitoring unit 244 may include a camera for observing the movement or position of the object 210, a respiration meter for measuring the respiration of the object 210, an ECG meter for measuring the electrocardiogram of the object 210, or a body temperature meter for measuring the body temperature of the object 210.
  • the table controller 246 controls the movement of the table 228 in which the object 210 is located.
  • the table controller 246 may control the movement of the table 228 according to the sequence control of the sequence controller 252. For example, in moving imaging of an object, the table controller 246 may continuously or intermittently move the table 228 according to the sequence control by the sequence controller 252.
  • accordingly, the object may be photographed with a field of view (FOV) larger than the FOV of the gantry.
  • the display controller 248 controls the display 229 positioned outside and / or inside the gantry 220.
  • the display controller 248 may control on / off of the display 229 located at the outside and / or the inside of the gantry 220 or a screen to be output to the display 229.
  • the display controller 248 may control the on / off of the speaker or the sound to be output through the speaker.
  • the system controller 250 may include a sequence controller 252 for controlling a sequence of signals formed in the gantry 220, and a gantry controller 258 for controlling the gantry 220 and the devices mounted on the gantry 220.
  • the sequence controller 252 may include a gradient magnetic field controller 254 for controlling the gradient amplifier 232, and an RF controller 256 for controlling the RF transmitter 236, the RF data acquirer 238, and the transmission/reception switch 234.
  • the sequence controller 252 may control the gradient magnetic field amplifier 232, the RF transmitter 236, the RF data acquirer 238, and the transmit / receive switch 234 according to the pulse sequence received from the operating unit 260.
  • the pulse sequence refers to a sequence of signals repeatedly applied by the MRI apparatus 101.
  • the pulse sequence may include time parameters of the RF pulse, for example, a repetition time (TR), an echo time (TE), and the like.
  • the pulse sequence includes all the information necessary for controlling the gradient amplifier 232, the RF transmitter 236, the RF data acquirer 238, and the transmission/reception switch 234, for example, information on the intensity, the application time, and the application timing of the pulse signal applied to the gradient coil 224.
  • the operating unit 260 may transmit pulse sequence information to the system controller 250 and control the operation of the entire MRI apparatus 101.
  • the operating unit 260 may include an image processor 262 that processes the MR signal received from the RF data acquirer 238, an output unit 264, and a user input unit 266.
  • the image processor 262 may generate MR image data of the object 210 by processing the MR signal received from the RF data acquirer 238.
  • the image processor 262 applies various signal processing, such as amplification, frequency conversion, phase detection, low-frequency amplification, and filtering, to the MR signal received by the RF data acquirer 238.
  • the image processor 262 may, for example, arrange digital data in k-space of the memory, and reconstruct the data into image data by performing two-dimensional or three-dimensional Fourier transform.
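The k-space reconstruction step described here can be sketched with NumPy: an inverse 2D Fourier transform of fully sampled Cartesian k-space data yields image-domain data. The phantom and the "acquired" k-space below are synthetic, for illustration only.

```python
import numpy as np

def reconstruct_from_kspace(kspace):
    """Reconstruct a magnitude image from fully sampled Cartesian
    k-space data via an inverse 2D Fourier transform."""
    image = np.fft.ifft2(np.fft.ifftshift(kspace))
    return np.abs(np.fft.fftshift(image))

# Build a toy "acquired" k-space by forward-transforming a synthetic
# phantom (a bright square on a dark background).
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))

recon = reconstruct_from_kspace(kspace)
print(recon.shape)  # (64, 64)
```

Because the toy k-space is fully sampled and noise-free, the reconstruction recovers the phantom exactly up to floating-point error; real data would additionally require the corrections and filtering mentioned above.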
  • the image processing unit 262 may also perform composition processing, difference calculation processing, and the like of the image data.
  • the composition process may include an addition process for pixels, a maximum intensity projection (MIP) process, and the like.
  • the image processor 262 may store not only the reconstructed image data but also image data subjected to the composition process or the difference calculation process in a memory (not shown) or an external server.
  • various signal processings applied to the MR signal by the image processor 262 may be performed in parallel.
  • signal processing may be applied in parallel to a plurality of MR signals received by the multi-channel RF coil to reconstruct the plurality of MR signals into image data.
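One common (illustrative) way to combine per-channel results after such parallel reconstruction is the root-sum-of-squares. The sketch below assumes each channel's MR signal has already been reconstructed into a complex per-coil image; the coil sensitivities and phases are made up for demonstration.

```python
import numpy as np

def sum_of_squares_combine(coil_images):
    """Combine per-channel (coil) complex images into one magnitude
    image using the root-sum-of-squares: sqrt(sum_c |I_c|^2)."""
    coil_images = np.asarray(coil_images)
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Toy example: 4 channels seeing the same object with different
# (assumed) coil sensitivities and phases.
obj = np.ones((8, 8))
coils = [obj * s * np.exp(1j * p)
         for s, p in zip([0.9, 0.7, 0.5, 0.3], [0.0, 0.5, 1.0, 1.5])]

combined = sum_of_squares_combine(coils)
print(round(float(combined[0, 0]), 4))  # → 1.2806
```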
  • the output unit 264 may output the image data or the reconstructed image data generated by the image processor 262 to the user.
  • the output unit 264 may output information necessary for the user to operate the MRI system, such as a user interface (UI), user information, or object information.
  • the output unit 264 may include a speaker, a printer, a display, and the like.
  • the implementation manner of the display is not limited; for example, it may be implemented by various display methods such as liquid crystal, plasma, light-emitting diode, organic light-emitting diode, surface-conduction electron-emitter, carbon nanotube, and nanocrystal displays.
  • the display may be implemented to display an image in a 3D form, and in some cases, may be implemented as a transparent display.
  • the output unit 264 may include various output devices within a range apparent to those skilled in the art.
  • the user may input object information, parameter information, scan conditions, pulse sequences, information on image composition or difference calculation, etc. using the user input unit 266.
  • the user input unit 266 may include a keyboard, a mouse, a trackball, a voice recognizer, a gesture recognizer, a touch pad, and the like, and may include various input devices within a range apparent to those skilled in the art.
  • although FIG. 2 illustrates the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 as separate objects, it will be understood by those skilled in the art that the functions performed by each of the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 may be performed by other objects.
  • the image processor 262 described above converts the MR signal received by the RF data acquirer 238 into a digital signal, but the conversion to the digital signal may also be performed directly by the RF data acquirer 238 or the RF coil 226.
  • the gantry 220, the RF coil 226, the signal transmitting and receiving unit 230, the monitoring unit 240, the system control unit 250, and the operating unit 260 may be connected to each other wirelessly or by wire.
  • a device (not shown) for synchronizing clocks with each other may be further included.
  • for communication between the gantry 220, the RF coil 226, the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260, a high-speed digital interface such as low voltage differential signaling (LVDS), asynchronous serial communication such as a universal asynchronous receiver transmitter (UART), a low-delay network protocol such as error-synchronous serial communication or a controller area network (CAN), optical communication, or the like may be used, and it will be apparent to those skilled in the art that various other communication methods may be used.
  • FIG. 3 is a view showing a CT device 102 according to an embodiment of the present invention
  • FIG. 4 is a schematic view showing the configuration of the CT device 102 of FIG. 3.
  • the CT device 102 may include a gantry 302, a table 305, an X-ray generator 306, and an X-ray detector 308.
  • a tomography apparatus such as the CT device 102 may provide a cross-sectional image of an object, and therefore, unlike a general X-ray imaging apparatus, may express the internal structure of the object (for example, organs such as the kidneys or lungs) without overlap.
  • the tomography apparatus may include any tomography apparatus, such as a computed tomography (CT) device, an optical coherence tomography (OCT) device, or a positron emission tomography (PET)-CT device.
  • a tomography image is an image obtained by tomography imaging an object in a tomography apparatus.
  • the tomography image may refer to an image that is imaged by using projected data after irradiating a light ray such as an X-ray to the object.
  • the CT image may refer to a composite image of a plurality of X-ray images obtained by photographing an object while rotating about at least one axis of the object.
  • hereinafter, the CT device 102 illustrated in FIGS. 3 and 4 will be described as an example of the tomography apparatus.
  • the CT device 102 may provide a relatively accurate cross-sectional image of an object by acquiring and processing, per second, tens to hundreds of pieces of image data having a thickness of 2 mm or less.
  • conventionally, there was a problem that only a horizontal cross-section of the object could be expressed, but this has been overcome by the emergence of various image reconstruction techniques, such as the following three-dimensional reconstruction imaging techniques:
  • Shaded surface display (SSD)
  • Volume rendering (VR)
  • Virtual endoscopy: a technique that allows endoscopic observation in a three-dimensional image reconstructed by the VR or SSD technique.
  • Multi-planar reformation (MPR)
  • Voxel of interest (VOI)
  • Computed tomography (CT) device 102 can be described with reference to FIGS. 3 and 4.
  • CT device 102 according to an embodiment of the present invention may include various types of devices as shown in FIG.
  • the gantry 302 may include an X-ray generator 306 and an X-ray detector 308.
  • the object 30 may be located on the table 305.
  • the table 305 may move in a predetermined direction (eg, at least one of up, down, left, and right) during the CT imaging process.
  • the table 305 may be tilted or rotated by a predetermined angle in a predetermined direction.
  • the gantry 302 may be tilted by a predetermined angle in a predetermined direction.
  • the CT device 102 may include a gantry 302, a table 305, a controller 318, a storage unit 324, an image processor 326, a user input unit 328, a display unit 330, and a communication unit 332.
  • the object 310 may be located on the table 305.
  • the table 305 according to an embodiment of the present invention may be moved in a predetermined direction (e.g., at least one of up, down, left, and right), and the movement may be controlled by the controller 318.
  • the gantry 302 may include a rotating frame 304, an X-ray generator 306, an X-ray detector 308, a rotation driver 310, a data acquisition circuit 316, and a data transmitter 320.
  • the gantry 302 may include a rotating frame 304 that is rotatable about a predetermined rotation axis (RA).
  • the rotating frame 304 may also be in the form of a disc.
  • the rotation frame 304 may include an X-ray generator 306 and an X-ray detector 308 disposed to face each other to have a predetermined field of view (FOV).
  • the rotating frame 304 may also include an anti-scatter grid 314.
  • the anti-scatter grid 314 may be located between the X-ray generator 306 and the X-ray detector 308.
  • the X-ray radiation that reaches the detector includes not only attenuated primary radiation that forms a useful image but also scattered radiation that degrades the quality of the image.
  • An anti-scattering grid can be placed between the patient and the detector (or photosensitive film) in order to transmit most of the main radiation and attenuate the scattered radiation.
  • in general, the anti-scatter grid may be configured in a form in which strips of lead foil and an interspace material, such as a solid polymer material or a solid polymer and fiber composite material, are alternately stacked. However, the shape of the anti-scatter grid is not necessarily limited thereto.
  • the rotation frame 304 may rotate the X-ray generator 306 and the X-ray detector 308 at a predetermined rotation speed based on the driving signal received from the rotation driver 310.
  • the rotation frame 304 may receive a driving signal and power from the rotation driver 310 in a contact manner through a slip ring (not shown).
  • the rotation frame 304 may receive a drive signal and power from the rotation driver 310 through wireless communication.
  • the X-ray generator 306 may receive a voltage and a current from a power distribution unit (PDU) (not shown) through a slip ring (not shown) and a high voltage generator (not shown), and may generate and emit X-rays. When the high voltage generator applies a predetermined voltage (hereinafter referred to as a tube voltage), the X-ray generator 306 may generate X-rays having a plurality of energy spectra corresponding to the predetermined tube voltage.
  • the X-rays generated by the X-ray generator 306 may be emitted in a predetermined form by a collimator 312.
  • the X-ray detector 308 may be positioned to face the X-ray generator 306.
  • the X-ray detector 308 may include a plurality of X-ray detection elements.
  • the single X-ray detection element may form a single channel, but is not necessarily limited thereto.
  • the X-ray detector 308 may detect the X-rays generated by the X-ray generator 306 and transmitted through the object 30 and generate an electric signal corresponding to the intensity of the detected X-rays.
  • the X-ray detector 308 may include an indirect-type detector that converts radiation into light and detects the light, or a direct-type detector that converts radiation directly into electric charge and detects the charge.
  • the indirect X-ray detector may use a scintillator.
  • the direct type X-ray detector may use a photon counting detector.
  • the data acquisition system (DAS) 316 may be connected to the X-ray detector 308.
  • the electrical signal generated by the X-ray detector 308 may be collected by the DAS 316.
  • the electrical signal generated by the X-ray detector 308 may be collected by the DAS 316 by wire or wirelessly.
  • the electrical signal generated by the X-ray detector 308 may be provided to an analog / digital converter (not shown) through an amplifier (not shown).
  • Only some data collected from the X-ray detector 308 may be provided to the image processor 326 according to the slice thickness or the number of slices, or only some data may be selected by the image processor 326.
  • such a digital signal may be provided to the image processor 326 through the data transmitter 320 by wire or wirelessly.
  • the control unit 318 of the CT device 102 may control the operation of each module in the CT device 102.
  • the controller 318 may control the table 305, the rotation driver 310, the collimator 312, the DAS 316, the storage unit 324, the image processor 326, the user input unit 328, the display unit 330, and the communication unit 332.
  • the image processor 326 may receive data (for example, pure data before processing) from the DAS 316 through the data transmitter 320 to perform a pre-processing process.
  • the preprocessing may include, for example, a process of correcting sensitivity nonuniformity between channels, and a process of correcting a sharp reduction in signal strength or a loss of signal due to an X-ray absorber such as metal.
  • the output data of the image processor 326 may be referred to as raw data or projection data.
  • Such projection data may be stored in the storage unit 324 together with photographing conditions (eg, tube voltage, photographing angle, etc.) at the time of data acquisition.
  • the projection data may be a set of data values corresponding to the intensity of X-rays passing through the object.
  • a set of projection data acquired simultaneously at the same photographing angle for all channels is referred to as a projection data set.
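The relationship between detected X-ray intensity and a projection data set can be sketched with the Beer–Lambert law, I = I0·exp(−Σμ·Δs), so that a projection value p = −ln(I/I0) is the line integral of the attenuation coefficient μ along the ray. The attenuation map below is a toy assumption, and only one fixed angle (rays along rows) is shown.

```python
import numpy as np

def project_rows(mu, i0=1.0, ds=1.0):
    """Forward-project a 2D attenuation map along rows (one fixed
    angle): per detector row, I = I0 * exp(-sum(mu) * ds) and the
    projection value is p = -ln(I / I0), the line integral of mu."""
    line_integrals = mu.sum(axis=1) * ds
    intensity = i0 * np.exp(-line_integrals)
    projection = -np.log(intensity / i0)
    return intensity, projection

# Toy attenuation map: a denser block inside a uniform background.
mu = np.full((4, 4), 0.1)
mu[1:3, 1:3] = 0.5

intensity, projection = project_rows(mu)
print(projection)  # rows crossing the dense block attenuate more
```

Repeating this projection at many rotation angles yields the projection data sets from which the cross-sectional image is reconstructed.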
  • the storage unit 324 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
  • the image processor 326 may reconstruct a cross-sectional image of the object by using the obtained projection data set.
  • the cross-sectional image may be a 3D image.
  • the image processor 326 may generate a 3D image of the object by using a cone beam reconstruction method or the like based on the obtained projection data set.
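As a deliberately minimal sketch of cross-sectional reconstruction from projection data (not the cone-beam method itself), unfiltered backprojection from just two orthogonal parallel-beam views already concentrates intensity where the object is; practical CT reconstruction uses many angles plus filtering (e.g., filtered backprojection or cone-beam methods).

```python
import numpy as np

def backproject_two_views(p0, p90):
    """Unfiltered backprojection from two orthogonal parallel-beam
    views: smear the 0-degree projection along columns and the
    90-degree projection along rows, then average."""
    n = p0.shape[0]
    bp = (np.tile(p0[:, None], (1, n)) + np.tile(p90[None, :], (n, 1))) / 2.0
    return bp

# Toy object and its two orthogonal projections.
obj = np.zeros((4, 4))
obj[1:3, 1:3] = 1.0
p0 = obj.sum(axis=1)   # view at 0 degrees (rays along rows)
p90 = obj.sum(axis=0)  # view at 90 degrees (rays along columns)

recon = backproject_two_views(p0, p90)
print(recon)  # brightest where the object is; blurred without filtering
```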
  • the X-ray tomography conditions may include a plurality of tube voltages, energy values of a plurality of X-rays, selection of an imaging protocol, selection of an image reconstruction method, setting of an FOV region, the number of slices, a slice thickness, image post-processing parameter settings, and the like.
  • the image processing conditions may include the resolution of an image, an attenuation coefficient setting for the image, a combination ratio setting for images, and the like.
  • the user input unit 328 may include a device for receiving a predetermined input from the outside.
  • the user input unit 328 may include a microphone, a keyboard, a mouse, a joystick, a touch pad, a touch pen, a voice recognition device, a gesture recognition device, and the like.
  • the display 330 may display the X-ray photographed image reconstructed by the image processor 326.
  • the transmission and reception of data, power, and the like between the above-described elements may be performed using at least one of wired, wireless, and optical communication.
  • the communication unit 332 may communicate with an external device, an external medical device, or the like through the server 334.
  • FIG. 5 is a diagram schematically illustrating a configuration of a communication unit 532 that communicates with the outside in a network system.
  • the communication unit 532 illustrated in FIG. 5 may also be connected to at least one of the gantry 220, the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 illustrated in FIG. 2. That is, the communication unit 532 may exchange data with a hospital server or another medical device in a hospital connected through a picture archiving and communication system (PACS), and may perform data communication according to the digital imaging and communications in medicine (DICOM) standard.
  • the communication unit 532 is connected to the network 501 by wire or wirelessly to connect with an external server 534, an external medical device 536, or an external device 538 such as a portable device. Communication can be performed.
  • the communication unit 532 may transmit and receive data related to diagnosis of the object through the network 501, and may also transmit and receive medical images photographed by another medical device 536 such as CT, ultrasound, and X-ray.
  • the communication unit 532 shown in FIG. 5 may be included in the CT device 102 of FIG. 4.
  • in this case, the communication unit 532 shown in FIG. 5 is the same as the communication unit 332 shown in FIG. 4.
  • the other medical device 536 may be, for example, the MRI device 101 or the ultrasound device 103 of FIG. 1.
  • the communication unit 532 illustrated in FIG. 5 may be included in the MRI apparatus 101 of FIG. 2.
  • the MRI apparatus 101 shown in FIG. 2 may be implemented in a form further including the communication unit 532 of FIG. 5.
  • the other medical device 536 may be, for example, the CT device 102 or the ultrasound device 103 of FIG. 1.
  • the communication unit 532 may be connected to the network 501 by wire or wirelessly to perform communication with the server 534, the external medical device 536, or the external device 538.
  • the communication unit 532 may exchange data with a hospital server or another medical device in the hospital connected through a PACS (Picture Archiving and Communication System).
  • the communication unit 532 may perform data communication with an external device 538 or the like according to a digital imaging and communications in medicine (DICOM) standard.
  • the communication unit 532 may transmit / receive an image of the object and / or data related to the diagnosis of the object through the network 501.
  • the communication unit 532 may receive a medical image acquired from another medical device 536 such as an MRI apparatus 101 and an X-ray imaging apparatus.
  • the communication unit 532 may receive a diagnosis history or treatment schedule of the patient from the server 534 and use it for clinical diagnosis of the patient.
  • the communication unit 532 may perform data communication with not only a server 534 or a medical device 536 in a hospital, but also a portable device (terminal device) 538 of a user or a patient.
  • the medical images acquired by the various medical image display apparatuses express the object in various ways according to the type and the photographing method of the medical image display apparatus.
  • characteristics of the acquired medical image may vary according to a photographing method and a type of the medical image display apparatus. For example, in one medical image, cancer tissue may be easily identified, and in another medical image, blood vessels may be easily identified.
  • accordingly, an embodiment of the present invention provides a medical image display apparatus that can provide a medical image facilitating a user's diagnosis with respect to a predetermined region in a medical image.
  • the medical image display apparatus may be any image processing apparatus capable of displaying, storing, and / or processing a medical image.
  • the medical image display apparatus 100 may be provided to be included in a tomography apparatus such as the MRI apparatus 101 or the CT apparatus 102 described with reference to FIGS. 2 to 4.
  • the medical image display apparatus 100 may include the communication unit 532 described with reference to FIG. 5.
  • the medical image display apparatus 100 may be included in the server 534, the medical device 536, or an external device connected through the network 501, that is, the portable terminal 538, which are connected to a tomography apparatus such as the MRI apparatus 101 or the CT apparatus 102 described above with reference to FIGS. 2 to 4.
  • the server 534, the medical device 536, or the portable terminal 538 may be an image processing device capable of displaying, storing, or processing at least one of an MRI image and a tomography image.
  • the medical image display apparatus may take the form of the server 534, the medical device 536, or the portable terminal 538, and may be a picture archiving and communication system (PACS) capable of displaying, storing, or processing at least one of an MRI image and a tomography image.
  • in addition to the MRI apparatus 101 or the CT apparatus 102, the medical image display apparatus 100 may be included in, or provided in connection with, any medical imaging device/system that processes/reconstructs an image using data acquired by scanning an object.
  • the medical image display apparatus 100 may be implemented as a medical image registration device that obtains a first medical image and a second medical image from two or more different medical devices, for example, a first medical device and a second medical device, and displays an image (a third medical image) obtained by registering the first medical image and the second medical image.
  • FIG. 6 is a diagram illustrating a system including a first medical device 610, a second medical device 620, and a medical image registration device 630 according to an exemplary embodiment.
  • the first medical device 610 and the second medical device 620 generate the first medical image and the second medical image, respectively, and provide them to the medical image matching device 630.
  • the first medical image and the second medical image may be images generated by the same principle.
  • first medical image and the second medical image may have different image modalities. That is, the first medical image and the second medical image may have different generation methods and principles.
  • the medical image registration device 630 acquires the first medical image and the second medical image, respectively, and registers the first medical image and the second medical image.
  • the image registered by the medical image registration device 630 is displayed on the display unit 632.
• In FIG. 6, the first medical device 610, the second medical device 620, and the medical image registration device 630 are each configured as independent devices.
• The first medical device 610 and the medical image registration device 630 may be implemented as a single device, or the second medical device 620 and the medical image registration device 630 may be implemented as a single device.
• Although the medical image registration device 630 includes the main body 631 and the display unit 632, the system may alternatively be implemented with a separate display device that receives and displays image data from the medical image registration device 630.
• The medical image registration device 630 of the present embodiment may be implemented in a computer system that is capable of communicating with at least one medical device and is included in another medical device having a display, or in a computer system that is capable of communicating with two or more medical devices.
  • the first medical device 610 may provide the first medical image in real time with respect to the volume of interest of the object. For example, when deformation and displacement of an organ due to physical activity of the subject occur, a change appears in the first medical image in real time.
• The first medical device 610 may be configured as an ultrasonography machine (103 of FIG. 1) that generates an image in real time during an interventional medical procedure on a patient. For example, when deformation and displacement of an organ due to the physical activity of the object occur, the change is indicated in real time in the medical image displayed on the display.
• The first medical device 610 may be another medical device, such as an OCT, that provides an image in real time.
• The first medical device 610, configured as an ultrasound device, generates an ultrasound image by radiating an ultrasound signal to a region of interest of the object using the probe 611 and detecting the reflected ultrasound signal, that is, the ultrasound echo signal.
  • the probe 611 is a part in contact with the object and may include a plurality of transducer elements (hereinafter, referred to as transducers) (not shown) and a light source (not shown).
• Various kinds of ultrasonic transducers may be used: for example, a magnetostrictive ultrasonic transducer using the magnetostrictive effect of a magnetic body, a piezoelectric ultrasonic transducer using the piezoelectric effect of a piezoelectric material, or a capacitive micromachined ultrasonic transducer (cMUT) that transmits and receives ultrasonic waves using the vibration of hundreds or thousands of microfabricated thin films.
• The plurality of transducer elements may be arranged in a straight line (linear array) or in a curve (convex array).
• A cover covering the plurality of transducer elements may be provided on the transducer elements.
  • the light source is for irradiating light into the object.
  • at least one light source for generating light having a specific wavelength may be used as the light source.
  • a plurality of light sources for generating light having different wavelengths may be used as the light source.
  • the wavelength of light generated by the light source may be selected in consideration of a target in the object.
  • Such a light source may be implemented by a semiconductor laser (LD), a light emitting diode (LED), a solid state laser, a gas laser, an optical fiber, or a combination thereof.
  • the transducer provided in the probe 611 generates an ultrasonic signal according to the control signal, and irradiates the generated ultrasonic signal into the object.
• Then, the ultrasound echo signal reflected from a specific tissue (e.g., a lesion) in the object is received, that is, detected.
  • the reflected ultrasonic waves vibrate the transducer of the probe 611, and the transducer outputs electrical pulses according to the vibrations. Such electrical pulses are converted into an image.
• Since anatomical objects have different ultrasonic reflection characteristics, for example, in a B-mode (brightness mode) ultrasound image, each anatomical object appears with a different brightness value.
• Types of ultrasound images may be classified into a B-mode (brightness mode) image representing the magnitude of the ultrasound echo signal reflected from the object as brightness; a Doppler-mode (D-mode) image representing a moving object in spectral form using the Doppler effect; an M-mode image showing the movement of the object over time at a certain position; an elastic-mode image representing, as an image, the difference in response between when pressure is and is not applied to the object; and a C-mode image expressing the speed of a moving object in color using the Doppler effect.
  • the Doppler image may include both a blood flow Doppler image (or a color Doppler image) and a tissue Doppler image representing tissue movement.
• The 3D ultrasound image may be generated by forming volume data from signals received from the probe 611 and performing volume rendering on the volume data.
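As a loose illustration of the volume-rendering step mentioned above, the following sketch applies a maximum intensity projection, one simple volume-rendering technique, to synthetic volume data. This assumes NumPy and is not the rendering pipeline of an actual ultrasound system.

```python
import numpy as np

# Synthetic 3D volume standing in for ultrasound volume data
# (depth, height, width); a real system would fill this from echo signals.
volume = np.zeros((3, 3, 3), dtype=np.float32)
volume[1, 1, 1] = 0.9  # one bright reflector inside the volume

# Maximum intensity projection: collapse the volume along the viewing
# (depth) axis, keeping the brightest sample along each ray.
mip = volume.max(axis=0)  # 2D rendered image, shape (3, 3)
```

The projection axis determines the viewing direction; projecting along `axis=1` or `axis=2` instead would render the same volume from the side.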
• The first medical apparatus 610 includes the probe 611 and an image processing apparatus 612 that generates an image based on the ultrasonic echo signal detected by the probe 611.
  • the image processing apparatus 612 may be provided with an image processor that supports a plurality of modes and generates an ultrasound image corresponding to each mode.
  • FIG. 6 illustrates an example in which the image processing apparatus 612 is implemented as a computer body and is connected to the probe 611 which is a fixed terminal by wire.
  • the image processing apparatus 612 may further include a display unit displaying an ultrasound image.
• The probe 611 may be implemented not only as a fixed terminal but also in the form of a mobile (portable) terminal that can be carried around while gripped by the user.
  • the probe 611 may perform wireless communication with the image processing apparatus 612.
• The wireless communication may use at least one of various wireless communication modules, such as short-range communication at a predetermined frequency, Wi-Fi, Wi-Fi Direct, Ultra-Wideband (UWB), Bluetooth, Radio Frequency (RF), Zigbee, wireless LAN, and Near Field Communication (NFC).
• Examples of the image processing apparatus 612, which generates an ultrasound image based on the ultrasound echo signal received from the probe 611, include a smartphone, a smart pad such as a tablet, a smart TV, a desktop computer, a laptop computer, a personal digital assistant (PDA), and the like.
• In another embodiment, an image processor that generates ultrasound images corresponding to the plurality of modes may be provided inside the probe 611, and the image processing apparatus 612 may be implemented to receive the image generated by the probe 611 by wire or wirelessly and display it through the display unit.
• The second medical apparatus 620 may generate a second medical image of a volume of interest (VOI) of the object.
• The second medical apparatus 620 may have a non-real-time characteristic compared with the first medical apparatus 610, and may provide the medical image registration device 630 with a second medical image generated before the medical procedure.
  • the second medical device 620 may be the CT device 102 or the MRI device 101 described with reference to FIGS. 2 to 4.
• The second medical apparatus 620 may also be implemented as an X-ray imaging apparatus, a single photon emission computed tomography (SPECT) device, a positron emission tomography (PET) device, or the like.
• Hereinafter, the second medical image is described as an MR or CT image for convenience of description, but the scope of the present invention is not limited thereto.
• The medical images photographed by the first medical device 610 or the second medical device 620 may be three-dimensional images generated by accumulating two-dimensional cross sections.
  • the second medical apparatus 620 photographs a plurality of cross-sectional images while changing a location and orientation of the cross-sectional image.
• By accumulating these cross-sectional images, image data of a three-dimensional volume representing a specific part of the patient's body in three dimensions may be generated.
  • a method of generating image data of a 3D volume by accumulating cross-sectional images is referred to as a multiplanar reconstruction (MPR) method.
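The accumulation of cross-sectional images into 3D volume data described above can be sketched as follows. This is a minimal illustration with synthetic NumPy arrays, not the actual reconstruction pipeline of a medical device.

```python
import numpy as np

# Ten synthetic 4x4 cross-sectional images, standing in for CT sections
# acquired at successive positions along the body axis.
sections = [np.full((4, 4), i, dtype=np.float32) for i in range(10)]

# Accumulate the sections along a new depth axis to form the 3D volume.
volume = np.stack(sections, axis=0)  # shape: (depth, height, width)

# With the volume in hand, planes other than the acquired one can be
# read out, e.g. a plane orthogonal to the original sections.
reformatted = volume[:, 2, :]  # shape: (depth, width)
```

Reading out `volume[:, row, :]` or `volume[:, :, col]` yields reformatted planes through the stacked data, which is the essence of viewing a volume built from accumulated sections.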
• The first medical device 610 may generate 3D volume image data by hand-sweeping the probe 611, or through a wobbler-type or 3D-array-type probe 611.
• Although FIG. 6 illustrates a case where the first medical image and the second medical image are generated by different types of medical devices, cases where the images are captured at different time points by the same type of medical device, for example, the CT device 102, are also included in the scope of the present invention.
• Here, the first medical image is a non-contrast medical image taken without a contrast agent being administered, and the second medical image is a contrast-enhanced image taken with a contrast agent administered.
• Contrast agents administered to patients can cause various side effects. For example, the patient may feel numbness or a burning sensation; urticaria, itching, vomiting, nausea, rash, and the like may occur; and in serious cases the patient may even die.
• Accordingly, non-contrast imaging is mainly used for the follow-up examination of lung cancer and the simple diagnosis of lung lesions (such as bronchial disease and emphysema).
• Diagnosis based on non-contrast imaging is performed in accordance with the As Low As Reasonably Achievable (ALARA) principle (an international guideline recommending that radiation dose and contrast agent use be minimized) and the National Comprehensive Cancer Network (NCCN) guidelines.
• FIG. 7 is a diagram conceptually illustrating the lymph node and blood vessel distribution of a chest region.
• FIG. 8 is a diagram showing a contrast-enhanced CT image taken of a chest region.
• FIG. 9 is a diagram showing a non-contrast CT image taken of a chest region.
• The lymph node 701 shown in FIG. 7 is involved in triggering an immune response by recognizing pathogens (e.g., inflammation, cancer cells, etc.) in the human body. Therefore, the degree of change in lymph node size and the change in the number and distribution of altered lymph nodes are important clinical judgment factors for diagnosis and treatment monitoring.
• When cancer cells develop or metastasize, the lymph nodes increase in size. Therefore, for detecting and diagnosing lymph nodes, it may be advantageous to use contrast-enhanced images, in which lymph nodes are relatively easy to distinguish from other structures, particularly the blood vessels 702.
  • the lymph node 703 and the pulmonary blood vessel 704 are distinguishably displayed in a contrast-enhanced CT image photographed by administering a contrast agent to a patient.
• In contrast, in the non-contrast CT image of FIG. 9, it can be seen that the lymph node and the blood vessel are not easily distinguished in the region 705 where the lymph node is located.
• However, the use of contrast agents is gradually being limited due to the various side effects described above.
• In some cases contrast agents cannot be administered, and diagnosis must then inevitably be based on non-contrast medical images.
• Although lymph node region information is an important landmark for lung cancer diagnosis (metastasis, state changes, etc.) and lung lesion diagnosis, it is difficult to distinguish lymph node and blood vessel regions in non-contrast images, so early diagnosis of the disease may be missed or the appropriate treatment time may be lost.
  • FIG. 10 is a block diagram illustrating a configuration of the medical image display apparatus 1000 according to an exemplary embodiment.
  • FIG. 11 is a block diagram illustrating the configuration of the image processor 1030 of FIG. 10.
• The medical image display apparatus 1000 may include a control unit 1010, a display unit 1020, an image processing unit 1030, a user input unit 1040, a storage unit 1050, and a communication unit 1060.
  • the illustrated components are not all essential components, and other general components may be further included in addition to the illustrated components.
• When the medical image display apparatus 1000 is included in the MRI apparatus 101 illustrated in FIG. 2, at least a part of the medical image display apparatus 1000 may correspond to the operating unit 260.
  • the image processor 1030 and the display 1020 may correspond to the image processor 262 and the output unit 264 of FIG. 2, respectively.
  • the controller 1010 may correspond to at least a portion of the operating unit 260 and / or the display control unit 248. Therefore, in the medical image display apparatus 1000, a description overlapping with that of FIG. 2 will be omitted.
• In another example, the controller 1010, the display unit 1020, the image processor 1030, the user input unit 1040, and the storage unit 1050 may correspond to the control unit 318, the display unit 330, the image processor 326, the user input unit 328, and the storage unit 324 of FIG. 4, respectively. Therefore, in the medical image display apparatus 1000, a description overlapping with that of FIG. 3 or FIG. 4 will be omitted.
  • the medical image display apparatus 1000 may be included in any one of the server 534, the medical apparatus 536, the portable terminal 538, and the ultrasound apparatus 610 described with reference to FIG. 6.
  • the display unit 1020 displays an application related to the operation of the medical image display apparatus.
  • the display unit 1020 may display menus or guidance items necessary for diagnosis using a medical device.
  • the display 1020 may display the images acquired during the diagnosis process and a user interface (UI) for helping the user manipulate the medical image display device.
• FIG. 10 illustrates an example in which one display unit 1020 is provided in the medical image display apparatus 1000, but the present invention is not limited thereto, and the apparatus may be implemented to include a plurality of display units, for example, a main display and a sub-display.
• The display 1020 displays a first image (first medical image) of an object including at least one anatomical object and/or a third image (third medical image) obtained by performing a registration process, described below, on the first image.
• The display unit 1020 may further display a fourth image (fourth medical image) that additionally displays an extended area of a lesion, which will be described later.
  • the display 1020 may further display a second image (second medical image) that is a reference image of the first image.
• The first image may be a medical image photographing the object, and may be any medical image photographed for diagnosing a disease, such as a tomography image, an X-ray image, or an ultrasound image.
  • the image processor 1030 processes the image to be displayed on the display 1020.
  • the image processor 1030 may process a signal obtained by capturing an object and image it as image data that can be displayed on the display unit 1020.
• One method of generating a medical image is to photograph the object by irradiating it with rays, such as X-rays, as in the imaging method of an X-ray image.
• This method images the object without distinguishing a particular photographing technique or scan mode, and can image the object directly without a separate reconstruction or calculation operation. In contrast, another method obtains a target image by performing a separate reconstruction or calculation operation on the acquired data.
• A technique applied in scanning an object and taking a medical image is referred to as a 'scan protocol' or a 'protocol'; hereinafter, it is referred to as a 'protocol'.
  • the image processor 1030 may generate a medical image by applying a predetermined protocol to the acquired image data.
• The medical image display apparatus 1000 may generate calculated or post-processed image data (a third image) using the image data (first image) obtained by applying the protocol.
• In this embodiment, the calculation or post-processing includes a registration process, so that the generated image is the third image and/or the fourth image.
  • the MRI apparatus 101 scans an object by applying various protocols, and generates an image of the object by using the acquired MR signal.
• Here, data acquired by scanning the object, for example, MR signals or k-space data, is called scan data, and an image of the object generated using the scan data is called image data. The image data corresponds to the first image described above.
• The acquired scan data may be a sinogram or projection data, and the image data, that is, the first image, may be generated using the acquired scan data.
  • the user input unit 1040 is provided to receive a command from the user.
• The medical image display apparatus 1000 receives an input for operating the apparatus from the user through the user input unit 1040 and, in response, may output the first medical image, the second medical image, and/or the registered third medical image (or fourth medical image) obtained by the medical image display apparatus 1000 through the display unit 1020.
• The user input unit 1040 may include a button, a keypad, a switch, a dial, or a user interface displayed on the display unit 1020 for direct manipulation of the medical image display apparatus 1000 by the user.
  • the user input unit 1040 may include a touch screen provided on the display unit 1020.
  • the medical image display apparatus 1000 may receive at least one point selected from the medical image (first image) displayed on the display unit 1020 through the user input unit 1040.
• For example, the selected point may correspond to the lymph node/vessel area in the non-contrast CT image (first image) of FIG. 9; in response to the user's selection, the image processed by the image processor 1030, that is, the third image, may be displayed on the display unit 1020 so that the lymph node and the blood vessel at the selected point are distinguishable.
  • the display unit 1020 may enlarge and display the selected point.
• The storage unit 1050 stores data without limitation under the control of the controller 1010.
  • the storage unit 1050 is implemented as a nonvolatile storage medium such as a flash memory and a hard disk drive.
• The storage unit 1050 is accessed by the control unit 1010, and data reading/writing/modification/deletion/updating and the like are performed by the control unit 1010.
  • the data stored in the storage unit 1050 includes, for example, an operating system for driving the medical image display apparatus 1000, and various applications, image data, additional data, etc. executable on the operating system.
  • the storage unit 1050 of the present embodiment may store various data related to a medical image.
  • the storage unit 1050 stores at least one image data generated by applying at least one protocol in the medical image display apparatus 1000 and / or at least one medical image data received from the outside.
  • the storage unit 1050 may further store at least one image data generated by performing a matching process on the image data.
• Image data stored in the storage unit 1050 is displayed by the display unit 1020.
  • the communication unit 1060 includes a wired / wireless network communication module for communicating with various external devices.
  • the communication unit 1060 transmits a command / data / information / signal received from an external device to the control unit 1010.
  • the communication unit 1060 may transmit a command / data / information / signal received from the control unit 1010 to an external device.
• The communication unit 1060 may be embedded in the medical image display apparatus 1000, but in one embodiment it may be implemented in a dongle or module form that can be attached to and detached from a connector (not shown) of the medical image display apparatus 1000.
  • the communication unit 1060 may include an I / O port for connecting human interface devices (HIDs).
  • the medical image display apparatus 1000 may transmit and receive image data with an external device connected by wire through an I / O port.
  • the communication unit 1060 of the present embodiment may receive medical image data generated by another medical device.
  • the other medical device may be the same kind of medical device as the medical image display device 1000 or may be another medical device.
• For example, if the medical image display apparatus 1000 is a CT device, the other medical device may be another CT device, or it may be an MRI device or an ultrasound device.
  • the medical image display apparatus 1000 may be directly connected to another medical apparatus through the communication unit 1060.
• The communication unit 1060 may include a connection for connecting to an external storage medium storing a medical image.
  • the controller 1010 performs a control operation on various components of the medical image display apparatus 1000.
• The controller 1010 controls the image processing/image registration processes performed by the image processor 1030 and performs control operations corresponding to commands from the user input unit 1040, thereby controlling the overall operation of the medical image display apparatus 1000.
• The controller 1010 includes at least one processor. The at least one processor loads a program from a nonvolatile memory (ROM), in which the program is stored, into a volatile memory (RAM) and executes it.
• The control unit 1010 includes at least one general-purpose processor such as a central processing unit (CPU), an application processor (AP), or a microcomputer (MICOM), and may, for example, load a program corresponding to a predetermined algorithm stored in the ROM into the RAM and execute it to perform the various operations of the medical image display apparatus 1000.
• When the controller 1010 of the medical image display apparatus 1000 is implemented as a single processor, for example a CPU, the CPU may be provided to control the various functions that the medical image display apparatus 1000 can perform: for example, display of medical images on the display unit 1020, various image processing processes for imaging the medical images (for example, selection of the applied protocol and control of imaging accordingly), responses to commands received through the user input unit 1040, and wired/wireless network communication with external devices.
• The processor may include a single core, dual cores, triple cores, quad cores, or multiples thereof.
  • the processor may include a plurality of processors, for example, a main processor and a sub processor.
• The sub-processor is provided to operate in a standby mode (hereinafter also referred to as a sleep mode), in which only standby power is supplied and the medical image display apparatus 1000 does not operate.
  • the processor, ROM, and RAM included in the controller 1010 may be connected to each other through an internal bus.
• When the medical image display apparatus 1000 is implemented as a laptop or desktop computer, the controller 1010 may be provided in the main body and may further include a GPU (Graphics Processing Unit, not shown) for graphics processing.
• When the medical image display apparatus 1000 is implemented as a portable terminal such as a smartphone or a smart pad, the processor may include a GPU; for example, the processor may be implemented as an SoC (System on Chip) combining a core and a GPU.
• The controller 1010 may include a chip, for example an integrated circuit (IC) chip, provided as a dedicated processor for executing a program that performs a specific function supported by the medical image display apparatus 1000, for example, a function for detecting the occurrence of an error in a predetermined component including the main processor.
• The controller 1010 may receive, through the user input unit 1040, a user command to execute a predetermined application serving as a platform capable of analyzing medical images.
• The executed application may include a user-selectable GUI, comprising an input area (2220 of FIG. 22) in which various buttons are displayed and a display area (2210 of FIG. 22) in which the medical image is displayed.
• The user may load a medical image stored internally or externally using the GUI of the application's input area, and the loaded medical image is displayed on the display unit 1020 through the application's display area.
  • the user may input a user command to register the first medical image and the second medical image in the executed application.
  • the image processing unit 1030 may be implemented as a medical image analysis application which is a software configuration driven by the controller 1010 including at least one processor having a hardware configuration.
  • the operations of the image processor 1030 described below are performed according to the execution of the software driven by the controller 1010. Therefore, the various operations performed by the image processor 1030 may be regarded as being performed by the controller 1010, that is, at least one processor.
  • the controller 1010 of the medical image display apparatus 1000 controls the image processor 1030 to perform an image matching process on the non-contrast medical image, that is, the first medical image.
  • the image processor 1030 may perform image registration on the first medical image by using the first medical image and the second medical image.
  • the second medical image is a contrast-enhanced medical image obtained at another time point and serves as a reference image of the first medical image.
• The contrast-enhanced medical image is an image of the object captured at a predetermined time point in the past; it may be stored in another medical device, a server, or the like and loaded into the medical image display apparatus 1000 through the communication unit 1060, or it may be stored in advance in the internal or external storage unit 1050.
  • the contrast-enhanced medical image may be a medical image taken in the past with respect to the same object as the first medical image, that is, the same patient.
  • the user may select the contrast-enhanced medical image that can be used as the second medical image by using history information about the patient.
  • the user may select at least one contrast enhancement medical image as the second medical image.
  • the second medical image may be an image generated by using a plurality of contrast-enhanced medical images previously photographed on the same object.
  • the contrast enhanced medical image may be a standardized medical image.
• For example, the second medical image may be a standardized medical image generated, from a medical image database in which brain CT images of a plurality of objects are accumulated, using contrast-enhanced medical images taken of subjects whose conditions, that is, age, gender, degree of disease progression, and the like, are similar to those of the object of the first medical image.
  • the image processor 1030 extracts at least one anatomical object by dividing or segmenting the second medical image.
  • the image processor 1030 may extract reference region information corresponding to at least one anatomical object from the second medical image that is a reference image of the first medical image.
• The image processing unit 1030 may further extract, from the second medical image, a region corresponding to a first object (hereinafter also referred to as a first anatomical object) and a region corresponding to a second object different from the first object (hereinafter also referred to as a second anatomical object).
• For example, the first object may be a blood vessel and the second object may be a lymph node; in another embodiment, the first object may be a blood vessel and the second object may be a bronchus.
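For illustration only, and not as the segmentation method of the embodiment, a candidate vessel region can be separated from a contrast-enhanced image with a simple intensity threshold, since vessels appear bright after contrast administration. The pixel values and threshold below are synthetic.

```python
import numpy as np

# Synthetic contrast-enhanced image patch: vessels show up as
# high-intensity pixels against a darker soft-tissue background.
image = np.array([[ 30.0,  40.0, 200.0],
                  [ 35.0, 210.0, 220.0],
                  [ 25.0,  45.0,  50.0]], dtype=np.float32)

VESSEL_THRESHOLD = 150.0  # assumed cutoff, chosen for this toy data

vessel_mask = image > VESSEL_THRESHOLD  # first-object (vessel) region
non_vessel_mask = ~vessel_mask          # everything else
```

Real pipelines typically refine such a raw threshold with morphological cleanup or model-based segmentation; the point here is only that the extracted region becomes a binary mask that later stages can register and display.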
  • the image processor 1030 registers the first medical image and the second medical image by using reference region information extracted from the second medical image.
• The registered image is displayed on the display unit 1020 as the third medical image, in which the detected anatomical object is displayed distinguishably from the other regions, that is, the regions other than the detected anatomical object.
• The image processing unit 1030 may use the geometric relationship between the anatomical object of the first medical image and the anatomical object of the second medical image in performing image registration, and the geometric relationship may include a vector representing the relative positional relationship of the anatomical objects.
  • Registration of the medical images includes a process of mapping coordinates of the first medical image and the second medical image to each other.
  • the first medical image and the second medical image may be medical images generated using a coordinate system according to digital imaging and communication in medicine (DICOM), respectively.
  • the image processor 1030 calculates a coordinate transformation function for converting or inversely transforming the coordinates of the second medical image into the coordinates of the first medical image through a matching process of the first medical image and the second medical image.
• The coordinate transformation function may include a first transformation equation, calculated in the homogeneous registration process described later, that maintains the unique characteristics of the anatomical object of the previous time point, and a second transformation equation, calculated in the heterogeneous registration process, by which the two sets of image information are completely matched.
  • the image processor 1030 may synchronize the coordinates and the views of the first medical image and the second medical image by using a coordinate transformation function.
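A minimal sketch of such a coordinate transformation function is shown below, assuming for illustration a rigid (rotation plus translation) relationship between the two coordinate systems; in an actual registration process the transform would be estimated from the images themselves rather than fixed by hand.

```python
import numpy as np

# Assumed rigid transform between the two coordinate systems:
# a 90-degree rotation followed by a translation of (10, 0).
theta = np.deg2rad(90.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([10.0, 0.0])

def to_first_image_coords(p):
    """Map a point from the second image's coordinates to the first's."""
    return R @ np.asarray(p, dtype=float) + t

def to_second_image_coords(q):
    """Inverse transform: first-image coordinates back to the second's."""
    return R.T @ (np.asarray(q, dtype=float) - t)
```

Applying the forward function to every coordinate of the second image (or the inverse to the first) is what synchronizes the coordinates and views of the two images.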
  • the matched image may be an image obtained by converting the first medical image.
  • the matched image may be a fusion image of the first medical image and the second medical image.
  • the display unit 1020 may display the first medical image and display the third medical image and / or the fourth medical image generated by matching the first medical image with the second medical image.
  • FIG. 12 is a diagram illustrating a first medical image 1210 according to an embodiment of the present invention.
• The first medical image 1210 is a relatively recently captured image of the object.
  • the first medical image 1210 may be a non-contrast image taken without a contrast agent being administered to the subject.
  • the first medical image 1210 may be a brain CT image as shown in FIG. 12.
  • the first medical image 1210 may be a captured image capable of real-time display, for example, an ultrasound image.
  • Since the first medical image 1210 is a non-contrast CT image, it is not easy to distinguish, that is, identify, the first object and the second object (the blood vessel and the lymph node) in the image 1210.
  • the user may select an area 1211 in which the blood vessel and the lymph node are expected to be located in the first medical image 1210 using the user input unit 1040.
  • the controller 1010 may control the display 1020 to enlarge and display the selected area 1211. In the enlarged area 1211, blood vessels and lymph nodes are not identified.
  • FIG. 13 is a diagram illustrating a second medical image 1310 according to an embodiment of the present invention
  • FIG. 14 is a diagram illustrating a second medical image 1410 from which an object is extracted.
  • the second medical image 1310 may be a contrast enhancement image taken in a state where a contrast agent is administered to the subject, and may be, for example, a brain CT image as shown in FIGS. 13 and 14.
  • the image processor 1030 includes a first object extractor 1031, a second object extractor 1032, and a matching unit.
  • the matching unit includes a coordinate converting unit 1033, a homogeneous matching unit 1034, and a heterogeneous matching unit 1035.
  • the image processor 1030 is illustrated as including the first object extractor 1031 and the second object extractor 1032 to extract two anatomical objects, but is not limited thereto. That is, for example, three or more anatomical objects may be extracted and displayed in a third medical image so as to be distinguishable.
  • In another embodiment, the image processor 1030 may be provided with a single object extractor that extracts the area of the first object from the second medical image and displays that area separately from the other areas except the first object.
  • For example, where the first object region is a blood vessel region, the vessel region and the non-vascular region are divided and displayed.
  • the non-vascular region includes a lymph node region.
  • the non-vascular region may comprise a bronchial region.
  • the first object extractor 1031 and the second object extractor 1032 extract area information of the first anatomical object and area information of the second anatomical object from the second medical image, respectively.
  • the regions of the first and second objects extracted in this way are reference regions, and the extracted region information is used as reference region information by the matching unit.
  • Specifically, the first object extractor 1031 extracts the region corresponding to the first object from the second medical image by using anatomical features of the first object, and the second object extractor 1032 extracts the region corresponding to the second object from the second medical image by using anatomical features of the second object.
  • the first object extractor 1031 may use a brightness value for each pixel included in the second medical image to determine an area of the first object.
  • the first object extractor 1031 detects points having a brightness value within a preset first range in the contrast-enhanced second medical image, and corresponds to the first object including the detected points. You can decide which area to do.
  • In another embodiment, when a specific point in the second medical image is selected through the user input unit 1040, the first object extractor 1031 may detect points having anatomical characteristics similar to the selected point, that is, points whose difference in brightness value (contrast) from the selected point is equal to or less than a first threshold value, and determine the first object region as including the detected points.
  • In the same manner, the second object extractor 1032 detects points having a brightness value within a preset second range in the contrast-enhanced second medical image, and determines the region corresponding to the second object as including the detected points. In another embodiment, when a specific point in the second medical image is selected through the user input unit 1040, the second object extractor 1032 may detect points whose difference in brightness value (contrast) from the selected point is equal to or less than a second threshold value, that is, points having anatomical characteristics similar to the selected point, and determine the second object region as including the detected points.
  • the first range and the second range for the brightness value may be preset in correspondence to the anatomical features of each of the first and second objects.
  • Similarly, the first threshold value and the second threshold value may be preset in correspondence with the anatomical characteristics of the first object and the second object, respectively, or in some cases may be set to the same value.
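The brightness-based extraction described above can be illustrated with a minimal sketch. Both strategies are shown: selecting pixels within a preset brightness range, and selecting pixels whose contrast with a user-selected seed point is at or below a threshold. The range and threshold values here are assumptions for illustration, not values from the specification.

```python
import numpy as np

def extract_by_range(image, lo, hi):
    """Return a boolean mask of pixels whose brightness lies within [lo, hi]."""
    return (image >= lo) & (image <= hi)

def extract_by_seed(image, seed_yx, threshold):
    """Return a mask of pixels whose brightness differs from the seed by <= threshold."""
    seed_value = float(image[seed_yx])
    return np.abs(image.astype(float) - seed_value) <= threshold

img = np.array([[100, 180, 200],
                [ 90, 190, 210],
                [ 30,  40, 195]])
vessel_mask = extract_by_range(img, 180, 210)  # e.g., contrast-filled vessels
seed_mask = extract_by_seed(img, (0, 1), 15)   # user selected the pixel at (0, 1)
```

In practice a connected-component or region-growing step would follow, so that the object region contains only points connected to the seed; that refinement is omitted here.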
  • The controller 1010 may control the display unit 1020 to display the first object region 1411 and the second object region 1412, which are the reference regions extracted from the second medical image, so as to be identifiable.
  • the user may identify the blood vessel region 1411 corresponding to the first object and the lymph node region 1412 corresponding to the second object by using the displayed image 1410 of FIG. 14, and use the same for diagnosis.
  • the information of the first object region and the information of the second object region extracted by the first object extractor 1031 and the second object extractor 1032 are transferred to the matching unit.
  • the matching unit registers the first medical image and the second medical image based on a predetermined algorithm.
  • The matching unit may register the first medical image and the second medical image by using the reference region information, that is, the information of the first object region received from the first object extractor 1031 and the information of the second object region received from the second object extractor 1032.
  • The first object region 1411 and the second object region 1412 segmented from the second medical image are matched, through the image registration process of the matching unit in the image processor 1030, to the first object region and the second object region of the first medical image, respectively.
  • FIG. 15 is a diagram for describing an image registration process according to the present embodiment.
  • As illustrated in FIG. 15, image registration is a process of transforming different sets of image data I_f and I_m, captured from the same scene, into one coordinate system, and is implemented by optimization algorithms that maximize the similarity (or minimize the dissimilarity) between the images I_f and I_m′ to be matched.
  • That is, using a predetermined transformation model parameter, the final parameter is found such that the result of the similarity measure is maximized as in Equation 1 below, or the result of the cost function is minimized as in Equation 2, where T(I_m; P) denotes the moving image transformed with parameter P.

[Equation 1] P_final = argmax_P S(I_f, T(I_m; P))

[Equation 2] P_final = argmin_P C(I_f, T(I_m; P))
  • where P_final is the final parameter.
  • the process of finding the final parameter may include homogeneous matching and hetero matching, which will be described later.
  • Here, I_f is the fixed image, for example, the first medical image, which is a non-contrast image; I_m is the moving image, for example, the second medical image, which is a contrast-enhanced image; S is a similarity measure; C is a cost function; and P is the parameter set of the transformation model.
  • Transformation models that can be used in embodiments of the present invention as described above include rigid transformation, affine transformation, thin-plate-spline free-form deformation (TPS FFD), B-spline FFD, the elastic model, and the like.
  • the result of the cost function may be determined by weights assigned to each of the similarity (or dis-similarity) measure and the regularization metric.
  • Similarity (or dissimilarity) measurement functions include mutual information (MI), normalized mutual information (NMI), gradient magnitude, gradient orientation, sum of squared differences (SSD), normalized gradient-vector flow (NGF), gradient NMI (GNMI), and the like.
  • Regularization metrics include volume regularization, diffusion regularization, curvature regularization, the local rigidity constraint, and the like.
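The weighted cost function described above can be sketched minimally. SSD is used as the dissimilarity measure (one of the options listed), and the squared norm of the displacement parameters stands in for the regularization metrics named above; the weights and the simplified regularizer are assumptions for illustration.

```python
import numpy as np

def ssd(fixed, moving):
    """Sum of squared differences between overlapping images (a dissimilarity measure)."""
    return float(np.sum((fixed.astype(float) - moving.astype(float)) ** 2))

def cost(fixed, moving, params, w_sim=1.0, w_reg=0.1):
    """Weighted sum of dissimilarity and regularization, in the spirit of Equation 2."""
    regularization = float(np.sum(np.asarray(params, dtype=float) ** 2))
    return w_sim * ssd(fixed, moving) + w_reg * regularization

fixed = np.array([[1.0, 2.0], [3.0, 4.0]])
moving = np.array([[1.0, 2.0], [3.0, 6.0]])
c = cost(fixed, moving, params=[2.0, 0.0], w_sim=1.0, w_reg=0.5)  # 4.0 + 2.0
```

Increasing `w_reg` penalizes large deformations (preserving the moving image's shape), while decreasing it lets the dissimilarity term dominate; this trade-off is what the weight assignment in the text controls.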
  • the coordinate transformation unit 1033 maps the coordinate systems of the first medical image and the second medical image to each other.
  • the coordinate system mapping is to match the coordinate system of the first medical image and the coordinate system of the second medical image.
  • For example, the coordinate transformation unit 1033 may align the coordinate system of the second medical image such that the first anatomical object (reference region) of the second medical image is disposed along the direction in which the first anatomical object of the first medical image is disposed. The coordinate transformation unit 1033 may rotate or move the second medical image within a range in which the alignment between the first anatomical objects of the first medical image and the second medical image is not disturbed.
  • the image processing unit 1030 sequentially performs homogeneous matching and heterogeneous matching on the first medical image and the second medical image in which the coordinate system is adjusted through the coordinate transformation unit 1033.
  • FIG. 16 is a diagram conceptually illustrating a homogeneous matching process
  • FIG. 17 is a diagram conceptually illustrating a heterogeneous matching process.
  • Homogeneous registration matches the moving image to the fixed image while maintaining the image characteristics (shape) of the moving image, whereas inhomogeneous (heterogeneous) registration modifies the image characteristics (shape) of the moving image so as to completely match the fixed image.
  • The homogeneous matching unit 1034 and the heterogeneous matching unit 1035 calculate the cost function through a transformation process between the coordinate-matched images I_f and I_m′, and, by repeating the process of updating the parameter P based on the calculated cost function, find the final parameter P_final that minimizes the result of the cost function.
  • The homogeneous matching and the heterogeneous matching are performed by gradually changing the weights assigned to the similarity (or dissimilarity) measure and the regularization metric, and the weights are changed in a direction that gradually increases the degree of freedom of the second medical image used as the moving image.
  • Although the image processor 1030 is described as including the homogeneous matching unit 1034 and the heterogeneous matching unit 1035, homogeneous matching and heterogeneous matching are not completely separate processes: the first part of the process of updating P corresponds to homogeneous matching, and the subsequent part corresponds to heterogeneous matching.
  • That is, even after the homogeneous matching is completed, the processes are repeatedly performed until the heterogeneous matching is completed.
  • the present invention may be implemented to perform only homogeneous matching and not heterogeneous matching.
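The weight schedule described above, in which homogeneous matching gradually transitions into heterogeneous matching, can be sketched as a simple linear schedule. The schedule shape and values are illustrative assumptions; the specification fixes only the direction of change (gradually increasing the moving image's degrees of freedom).

```python
def weight_schedule(num_iters, w_reg_start=1.0, w_reg_end=0.0):
    """Linearly decrease the regularization weight over the iterations."""
    step = (w_reg_start - w_reg_end) / (num_iters - 1)
    return [w_reg_start - i * step for i in range(num_iters)]

weights = weight_schedule(5)
# Early iterations (high regularization weight) behave like homogeneous matching,
# preserving the moving image's shape; later iterations (low weight) allow shape
# deformation, corresponding to heterogeneous matching.
```

A run configured to stop before the weight drops would realize the variant mentioned above that performs only homogeneous matching.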
  • FIG. 18 is a flowchart illustrating the processes of performing the matching process in an embodiment of the present invention.
  • The image processor 1030 continuously updates P in the algorithm of FIG. 18, and by designing the transformation model and the regularization metric constituting the cost function, different results can be derived for homogeneous matching and heterogeneous matching.
  • Here, known transformation models such as the rigid global model, the non-rigid global model, the rigid local model, and the non-rigid local model are used.
  • the image processor 1030 first initializes the transformation model parameter P (S1801).
  • Next, the second medical image I_m is transformed so as to be aligned with the coordinate system of the first medical image I_f (S1803).
  • the mapping of the coordinate system may use a coordinate system according to an affine space.
  • the process of S1803 may also be referred to as affine registration.
  • Then, the cost function C is calculated using the pixels in the overlapped regions of the second medical image I_m′ converted in the process of S1803 and the first medical image I_f (S1805).
  • Here, the result of the cost function C is determined using a similarity (or dissimilarity) measure and a regularization metric based on prior information; for example, it may be determined by a weighted sum of the similarity measure function and the regularization metric.
  • the overlapped areas may be, for example, areas corresponding to at least one anatomical entity.
  • the image processor 1030 determines whether the result value of the cost function C calculated in the process of S1805 is minimum (S1807).
  • If not, the transformation model parameter P is updated (S1809).
  • the image processor 1030 repeatedly performs the processes of S1803 to S1807 until the result value of the cost function is minimum, based on the determination result of the process of S1807.
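The loop of FIG. 18 can be sketched as follows: initialize P (S1801), transform the moving image (S1803), evaluate the cost on the overlap (S1805), and update P (S1809) until the minimum is found (S1807). A 1-D translation and a brute-force parameter search are hypothetical stand-ins for the real transformation model and optimizer, which the specification does not fix.

```python
import numpy as np

def transform(moving, shift):
    """S1803: apply the current parameter P (here, a circular 1-D shift)."""
    return np.roll(moving, shift)

def cost(fixed, moved):
    """S1805: dissimilarity (SSD) over the overlapping samples."""
    return float(np.sum((fixed - moved) ** 2))

def register(fixed, moving, candidates):
    """S1807/S1809: try updated parameters, keeping the cost-minimizing one."""
    best_p, best_c = None, float("inf")
    for p in candidates:
        c = cost(fixed, transform(moving, p))
        if c < best_c:
            best_p, best_c = p, c
    return best_p, best_c  # P_final and the minimized cost

fixed = np.array([0.0, 0.0, 1.0, 2.0, 0.0])
moving = np.array([1.0, 2.0, 0.0, 0.0, 0.0])  # the same signal, shifted by 2
p_final, c_final = register(fixed, moving, candidates=range(-2, 3))
```

In practice the update of P is driven by a gradient-based or derivative-free optimizer rather than exhaustive search, but the termination logic (stop when the cost is minimal) is the same.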
  • This iterative process constitutes the homogeneous matching and the heterogeneous matching, and an optimized transformation is found for each.
  • That is, the homogeneous matching unit 1034 obtains the first transformation information as an optimized result in which the unique characteristics of the lymph node and blood vessel information of the previous time point are maintained.
  • The heterogeneous matching unit 1035 obtains the second transformation information as an optimized result in which the two sets of image information are completely matched.
  • the heterogeneous registration may be a quantification process for tracking changes in the second anatomical entity, for example, lymph nodes, and the degree of change in the lymph nodes may be quantified to be displayed on the medical image according to the quantification.
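The quantification step mentioned above can be as simple as comparing the size of the segmented region across time points. The percentage-change metric below is an assumption for illustration; the specification says only that the degree of change is quantified and displayed.

```python
def percent_change(prev_area, curr_area):
    """Percentage growth of a lesion/node region relative to the previous scan.

    Areas are pixel (or voxel) counts of the segmented region at each time point.
    """
    return 100.0 * (curr_area - prev_area) / prev_area

growth = percent_change(prev_area=200, curr_area=250)  # region grew by 50 pixels
```

A result such as a 25% increase could then be overlaid on the fourth medical image next to the lesion extension region.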
  • The similarity (or dissimilarity) measurement function evaluates whether the images I_f and I_m′ are matched, and the weights assigned to the similarity (or dissimilarity) measure and the regularization metric can be the factor that distinguishes homogeneous matching from heterogeneous matching.
  • the image processor 1030 generates a third medical image from the first medical image according to the homogeneous matching process as described above.
  • the fourth medical image may be further generated from the third medical image according to the heterogeneous matching process.
  • the controller 1010 controls the display unit 1020 to display the third medical image and / or the fourth medical image generated by the image processor 1030.
  • FIG. 19 is a view showing a third medical image 1910 according to an embodiment of the present invention, FIG. 20 is a view showing a fourth medical image 2010, and FIG. 21 is an enlarged view of some object regions in FIG. 20.
  • The controller 1010 may control the display unit 1020 to display the first object region 1911 and the second object region 1912 in the third medical image generated by homogeneous matching so as to be identifiable. Accordingly, the user may identify the blood vessel region 1911 corresponding to the first object and the lymph node region 1912 corresponding to the second object, which are not identifiable in FIG. 12, and use them for diagnosis.
  • The controller 1010 may display, on the display unit 1020, the area of the detected anatomical object separately from areas that are not the anatomical object, by at least one of color, pattern, pointer, highlight, and animation effects.
  • different colors, patterns, pointers, highlights, and animation effects may be applied to each of the areas.
  • a combination of colors, patterns, pointers, highlights, and animation effects may be applied to a plurality of areas.
  • the first object region 1911 may be displayed by color and the second object region 1912 by a pattern.
  • Alternatively, a predetermined pattern and a pointer may be applied to the first object region 1911, and various other modifications may be implemented such that the region is distinguished from other areas.
  • Although FIG. 19 illustrates that the detected first object region 1911 and the second object region 1912 are divided and displayed by a pattern, various embodiments in which the regions are visually discernible to the user are applicable.
  • the pattern includes a plurality of horizontal lines, vertical lines, diagonal lines in a predetermined direction, various types of dot patterns, wave patterns, and the like including a circle.
  • the pointer includes a solid line displayed along the circumference of the detected area and a dotted line of various forms, and the brightness of the pointer may be displayed brighter than the surrounding area.
  • Highlighting includes displaying the brightness of the detected area differently, for example, brighter than other areas.
  • the animation effect is to apply various visual effects, such as flickering at predetermined time intervals, gradually brightening / darkening, to the detected area.
  • Here, the division marks of the regions 1911 and 1912 of the anatomical object can be activated or deactivated by user selection. That is, the user may activate the function of distinguishing the first region 1911 and the second region 1912 by color or the like by manipulating the user input unit 1040, and may likewise perform a user input for deactivating the corresponding function.
  • For example, a user interface (GUI) for selecting whether to activate the object classification function may be displayed on the display unit 1020, or the user input unit 1040 may include a toggle switch assigned to the object classification function; in these and various other ways, user selection may be enabled.
  • In addition, the level (i.e., degree, intensity, etc.) of the division display of the detected anatomical object may be adjusted through the user input unit 1040. That is, the medical image display apparatus 1000 according to the present invention may be provided to distinguish and display anatomical objects in various ways according to the user's preference and taste.
  • In the fourth medical image 2010 generated by performing heterogeneous registration, not only are the first object region 2011 and the second object region 2012 displayed so as to be identifiable, but the change in the second object region 2012 with respect to the previous time point may also be displayed.
  • the previous time point may be a time point at which the second medical image is taken.
  • This makes the user's diagnosis easier.
  • The user may instruct, by using the user input unit 1040, that a predetermined region be enlarged and displayed. In that case, as shown in FIG. 21, the blood vessel region 2111 corresponding to the first object and the lymph node region 2112 corresponding to the second object are distinguishably displayed on the fourth medical image 2110, and the existing lesion region 2113 and the lesion extension region 2114 are distinguishable within the lymph node region 2112.
  • The controller 1010 may control the display unit 1020 to distinguish and display the existing lesion regions 2013 and 2113 and the lesion extension regions 2014 and 2114 in FIGS. 20 and 21 by at least one of color, pattern, pointer, highlight, and animation effects.
  • the controller 1010 may apply a combination of colors, patterns, pointers, highlights, and animation effects to the existing lesion areas 2013 and 2113 or the lesion extension areas 2014 and 2114.
  • For example, the existing lesion regions 2013 and 2113 may be displayed by a pattern and the lesion extension regions 2014 and 2114 by a pointer, or predetermined patterns and highlights may be applied to the lesion extension regions 2014 and 2114; the present invention may be modified in various ways such that these regions are displayed separately from other areas.
  • Here, the division display of the lesion extension regions 2014 and 2114 may be activated or deactivated by user selection. That is, the user may manipulate the user input unit 1040 to activate the function in which the lesion extension regions 2014 and 2114 are distinguished and displayed by color or the like, and may likewise perform a user input for deactivating the corresponding function.
  • For example, a user interface (GUI) for selecting whether to enable the extension area division function may be displayed on the display unit 1020, or the user input unit 1040 may include a toggle switch assigned to the extension area division function; in these and various other ways, user selection may be enabled. By selecting whether to enable or disable the extension area division, the user can more easily grasp the extent of lesion extension.
  • In addition, the display level (i.e., degree or intensity, etc.) of the detected existing lesion regions 2013 and 2113 and the lesion extension regions 2014 and 2114 may be adjusted through the user input unit 1040. That is, the medical image display apparatus 1000 according to the present invention may be provided so as to distinguish and display the anatomical object in various ways according to the user's preference and taste.
  • FIG. 22 is a diagram illustrating a screen displayed according to the driving of an application having a medical diagnosis function in a medical image display apparatus according to an exemplary embodiment of the present invention.
  • the medical image 2210 illustrated in FIG. 22 is an image displaying a result of performing an image registration process.
  • The medical image 2210 may include a first anatomical object 2211 and a second anatomical object 2212, and a user-selectable user interface, that is, an input area 2220 including various GUIs, is positioned on the left side of the display area in which the existing lesion region 2113 and the lesion extension region 2114 are distinguishably displayed.
  • That is, the user may select a predetermined button of the user interface in the input area 2220 to load the reference image used as the second medical image, which can then be used for image registration with the first medical image.
  • various medical images may be displayed on the display area 2210 of FIG. 22 including the images illustrated in FIGS. 12 to 14 and 19 to 21.
  • two or more images including the images of FIGS. 12 to 14 and 19 to 21 may be displayed in a horizontal and / or vertical direction so as to be comparable.
  • When the display unit 1020 is provided to include a plurality of displays, for example, a main display and sub-displays, two or more images may be displayed so as to be compared in various combinations.
  • FIGS. 23 to 26 illustrate various examples of using image registration for diagnosis in the medical image display apparatus 1000 according to an exemplary embodiment of the present invention.
  • Referring to FIG. 23, the control unit 1010 of the medical image display apparatus 1000 acquires the first medical image (non-contrast medical image) 2301 and the second medical image (contrast-enhanced medical image acquired at another time point) 2311, respectively, and generates a fused display that matches them, as described above.
  • The process of generating the fusion display includes image matching (2302) between the non-contrast medical image and the contrast-enhanced medical image acquired at another time point, transformation and propagation (2303) of the image generated according to the matching, and region correction (2304). Specifically, the anatomical objects, that is, the lymph node and blood vessel regions, are segmented from the contrast-enhanced medical image acquired at the other time point, and the transformation and propagation (2303) and the region correction (2304) are performed so as to correspond to the coordinates of the two medical images.
  • Then, quantification is performed to compare the changes of the lymph nodes in the captured non-contrast medical image with respect to the contrast-enhanced medical image acquired at the other time point (2313), and the quantification result is displayed.
  • Referring to FIG. 24, in another embodiment, the control unit 1010 of the medical image display apparatus 1000 acquires a first medical image (non-contrast medical image) 2401 and a second medical image (contrast-enhanced medical image acquired at another time point) 2411, respectively, and generates a matched fusion display; in this process, data is loaded from the medical image database (2421) and machine learning is performed (2422), which is further utilized in the region correction (2404) process.
  • Specifically, the controller 1010 classifies, among the stored information, data (including images) under conditions similar to those of the object (such as age, gender, and progression of the lesion), and a training process using the classified data may proceed so that machine learning can predict the data.
  • That is, the controller 1010 may control the image processor 1030 to utilize the data predicted by machine learning for segmentation and correction in the image registration process of this embodiment.
  • Accordingly, the accuracy of image registration may be further improved as compared with the embodiment using only the reference information extracted from the second image (the contrast-enhanced medical image acquired at another time point).
  • The process of generating the fusion display includes image matching (2402) between the non-contrast medical image and the contrast-enhanced medical image acquired at another time point, transformation and propagation (2403) of the image generated according to the matching, and region correction (2404). Specifically, the anatomical objects, that is, the lymph node and blood vessel regions, are segmented (2412) from the contrast-enhanced medical image acquired at the other time point, and the transformation and propagation (2403) and the region correction (2404) are performed so as to correspond to the coordinates of the two medical images.
  • When the region correction (2404) is complete, the resulting fused image is displayed as a registration image (2405).
  • Then, quantification is performed to compare the changes of the lymph nodes in the captured non-contrast medical image with respect to the contrast-enhanced medical image acquired at the other time point (2413); in this process, the machine-learned data is further utilized. The quantification result is then displayed (2414).
  • Referring to FIG. 25, in another embodiment, the control unit 1010 of the medical image display apparatus 1000 acquires a first medical image (non-contrast medical image) 2501 and a second medical image (a contrast-enhanced medical image acquired at another time point), respectively, and creates a fused display that matches them, further utilizing a standard image/model in this process. For example, the controller 1010 loads, from the data stored in the standard image/model (2521), a plurality of images corresponding to conditions similar to those of the object (age, gender, lesion progression, etc.), and image registration (2522) and transformation and propagation (2523) may be performed on the loaded images.
  • The process of generating the fusion display includes image matching (2502) between the non-contrast medical image and the contrast-enhanced medical image acquired at another time point, transformation and propagation of the image generated according to the matching, and region correction (2504). Specifically, a predetermined anatomical object, that is, the lymph node and blood vessel regions, is segmented from the contrast-enhanced medical image acquired at the other time point, and in this process, the data obtained by the image registration (2522) and the transformation and propagation (2523) from the standard image/model is further utilized.
  • When the region correction (2504) is complete, the resulting fused image is displayed as a registration image (2505).
  • Then, quantification is performed to compare the changes of the lymph nodes in the captured non-contrast medical image with respect to the contrast-enhanced medical image acquired at the other time point (2513). Since the data of the standard image/model is further utilized in this process, the accuracy of image registration may be further improved. The quantification result is then displayed (2514).
  • Referring to FIG. 26, when there is no contrast-enhanced image captured at another time point for the object, two or more non-contrast medical images t1 and t2 are matched, and the standard image/model (2621) and/or the medical image database (2631) may be utilized to improve accuracy.
  • two or more non-contrast images photographed at different time points t1 and t2 are matched to determine the extent of lesion progression in the anatomical object according to the photographing order so that the lesions are identified and displayed.
  • That is, the control unit 1010 of the medical image display apparatus 1000 acquires the non-contrast medical image 2601 taken at the time point t2 and the non-contrast medical image 2611 taken at the past time point t1, respectively, and creates a matched fusion display.
  • The process of generating the fusion display includes image matching (2602) of the non-contrast medical image taken at the time point t2 and the non-contrast medical image taken at the past time point t1, transformation and propagation (2603) of the image generated according to the matching, and region correction (2604).
  • Specifically, segmentation and correction (2612) of certain anatomical objects, that is, the lymph node and blood vessel regions, is performed on the non-contrast medical image of the past time point t1, and the stored information of the standard image/model (2621) and/or the medical image database (2631) is utilized in this process.
  • Specifically, the controller 1010 classifies, among the stored information, data (including images) under conditions similar to those of the object (e.g., age, gender, and progression of the lesion), and a training process using the classified data may proceed so that machine learning can predict the data (2632).
  • Also, the controller 1010 loads, from the data stored in the standard image/model (2621), a plurality of images corresponding to conditions similar to those of the object (age, gender, lesion progression, etc.), and image registration (2622) and transformation and propagation (2623) may be performed on the loaded images. Here, the machine-learned predicted data may be further utilized in the image registration (2622) process.
  • The controller 1010 corrects (2612) the extracted lymph node/blood vessel regions of the non-contrast medical image of the past time point t1 by using the machine learning (2632) and/or the data transformed and propagated from the standard image/model.
  • The corrected regions may be used for the transformation and propagation (2603) and the region correction (2604) so as to correspond to the coordinates of the two medical images.
  • When the region correction (2604) is complete, the resulting fused image is displayed as a registration image (2605).
  • Then, quantification is performed to compare the changes of the lymph nodes in the non-contrast medical image at the time point t2 with respect to the non-contrast medical image at the other time point t1 (2613); the machine-learned data is further utilized in this process. The quantification result is then displayed (2614).
  • The third medical image and/or the fourth medical image generated in the embodiments of the present invention as described above may be stored in the medical image database or the standard image/model, and the stored images may be matched, transformed, and propagated by machine learning or by registration of two or more images, so as to be used for image registration that identifies and displays anatomical objects in other non-contrast images.
  • FIG. 27 is a flowchart illustrating a medical image processing method according to an embodiment of the present invention.
  • a first medical image of an object including at least one anatomical object may be displayed on the display unit 1020 of the medical image display apparatus 1000 in operation S2701.
  • the first medical image may be a non-contrast medical image.
  • the image processor 1030 extracts reference region information corresponding to at least one anatomical object from the second medical image that is the reference image of the first medical image displayed in step S2701 under the control of the controller 1010 (S2703).
  • the second medical image may be a contrast-enhanced medical image obtained by imaging, at another time point, the same object from which the first medical image was acquired.
  • the second medical image may be a standard image generated based on images having a condition similar to that of the object.
  • there may be a plurality of anatomical entities from which the information is extracted, including blood vessels, lymph nodes, bronchi, and the like.
  • the reference region information of step S2703 may be extracted for a predetermined anatomical entity using the brightness values of the pixels constituting the second medical image.
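Brightness-based extraction of reference regions can be illustrated with a minimal thresholding sketch; the intensity windows below are illustrative placeholders, not clinically meaningful values:

```python
import numpy as np

def extract_reference_regions(image, vessel_range=(150, 255), node_range=(60, 120)):
    """Segment candidate entities in a contrast-enhanced image by
    per-pixel brightness windows (window values are illustrative only)."""
    vessels = (image >= vessel_range[0]) & (image <= vessel_range[1])
    nodes = (image >= node_range[0]) & (image <= node_range[1])
    return {"vessel": vessels, "lymph_node": nodes}

img = np.array([[200,  80,  10],
                [ 90, 160,  70],
                [  0, 210, 100]])
regions = extract_reference_regions(img)   # 3 vessel pixels, 4 node pixels
```

The resulting binary masks play the role of the reference region information applied to the non-contrast image in step S2705.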
  • the controller 1010 controls the image processor 1030 to detect at least a region corresponding to the anatomical entity in the first medical image displayed in step S2701, based on the reference region information from step S2703 (S2705).
  • the controller 1010 displays a third medical image in which the region detected in operation S2705 is displayed distinguishably from regions that are not the corresponding anatomical entity (S2707).
  • the third medical image is generated by matching the first medical image with the second medical image, and may include display area information on the anatomical object detected in operation S2705.
  • the controller 1010 controls the display unit 1020 such that the region of the anatomical object is displayed separately from other regions in the third medical image based on the display region information.
  • the controller 1010 may display a fourth medical image in which a lesion extension region is distinguishably displayed within the anatomical entity detected in step S2705 (S2709).
  • Steps S2707 and S2709 may be performed during the image registration process of the first medical image and the second medical image.
  • the third medical image displayed in step S2707 corresponds to the result of homogeneous matching, and the fourth medical image displayed in step S2709 corresponds to the result of heterogeneous matching.
  • the medical image registration of steps S2707 and S2709 is performed using predetermined transformation model parameters, and may be performed iteratively until the value of a similarity measure between the first medical image and the second medical image is maximized, or until the value of a cost function between the first medical image and the second medical image is minimized.
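A minimal sketch of similarity-driven registration, assuming a toy transformation model (integer x-translations) and normalized cross-correlation as the similarity measure; both are illustrative stand-ins for the patent's transformation model parameters:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: a similarity measure to maximize."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def register_shift(fixed, moving, max_shift=3):
    """Search integer x-translations (a stand-in for the transformation
    model parameters) and keep the one maximizing the similarity measure."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = ncc(fixed, np.roll(moving, s, axis=1))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

# Toy "first" and "second" medical images: a bright column offset by 2 pixels.
fixed = np.zeros((5, 8)); fixed[:, 4] = 1.0
moving = np.zeros((5, 8)); moving[:, 2] = 1.0
shift, score = register_shift(fixed, moving)   # shift = 2, score ≈ 1.0
```

Minimizing a cost function (e.g. sum of absolute differences) instead of maximizing a similarity measure follows the same loop with the comparison reversed.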
  • homogeneous matching, which maps the coordinate systems of the first medical image and the second medical image and obtains transformation information that preserves the intrinsic characteristics of the anatomical entity, and heterogeneous matching, which obtains deformation information, may be performed sequentially.
  • in this way, a function for distinguishably displaying anatomical entities (for example, lymph node and blood vessel regions) is provided based on a non-contrast image.
  • non-contrast imaging based lymph node follow-up and quantification functions are provided.
  • the features of the various embodiments of the present invention may be partially or wholly combined with one another and, as those skilled in the art will understand, may technically interoperate in various ways; the embodiments may be practiced independently of one another or together in combination.
  • lymph node follow-up examination is possible even for patients with weak renal function, for whom active use of a contrast agent is burdensome.
  • the present embodiment can also be applied to non-contrast images for general examination, and can be used for early diagnosis of cancer-related conditions such as metastasis.
  • Computer-readable recording media include transmission media and storage media that store data readable by a computer system.
  • the transmission medium may be implemented through a wired or wireless network in which computer systems are interconnected.
  • the controller 1010 may include a nonvolatile memory in which a computer program, which is software, is stored, a RAM in which the computer program stored in the nonvolatile memory is loaded, and a CPU that executes a computer program loaded in the RAM.
  • Nonvolatile memories include, but are not limited to, hard disk drives, flash memory, ROMs, CD-ROMs, magnetic tapes, floppy disks, optical storage, data transfer devices using the Internet, and the like.
  • the nonvolatile memory is an example of a computer-readable recording medium in which a computer-readable program of the present invention is recorded.
  • the computer program is code that the CPU can read and execute, and includes code for performing operations of the control unit 1010 such as steps S1801 to S1809 shown in FIG. 18 and steps S2301 to S2309 shown in FIG.
  • the computer program may be implemented by being included in software including an operating system or an application included in the medical image display apparatus 1000 and / or software for interfacing with an external device.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention relates to a medical image display device and a medical image processing method. The medical image display device comprises: a display unit for displaying a first medical image obtained by photographing an object including at least one anatomical entity; and at least one processor for extracting reference area information corresponding to the anatomical entity from at least one second medical image which is a reference image of the first medical image, detecting an area corresponding to the anatomical entity in the first medical image on the basis of the extracted reference area information, and controlling the display unit such that the detected area of the anatomical entity is displayed so as to be distinguished from areas which do not correspond to the anatomical entity. Thus, the present invention allows distinctive display of an anatomical entity that previously could not be identified in a non-contrast medical image. Therefore, the present invention enables a lymph node tracking test even for a patient in whom a contrast medium is difficult to use, allows early diagnosis of various diseases, and can further improve diagnostic accuracy.

Description

Medical image display device and medical image processing method
The present invention relates to a medical image display apparatus that displays a screen including a medical image, and to a medical image processing method therefor.
A medical image display apparatus is equipment for acquiring the internal structure of an object as an image. As a non-invasive examination device, it captures and processes structural details, internal tissues, and fluid flow within the body and presents them to the user. A user such as a doctor can diagnose a patient's health condition and diseases using the medical images output by the apparatus.
Medical image display apparatuses include magnetic resonance imaging (MRI) apparatuses, computed tomography (CT) apparatuses, X-ray imaging apparatuses, and ultrasound apparatuses.
An MRI apparatus photographs a subject using magnetic fields. Because it can show not only bones but also disks, joints, nerves, and ligaments three-dimensionally at any desired angle, it is widely used for accurate disease diagnosis.
An MRI apparatus acquires magnetic resonance (MR) signals using a radio-frequency multi-coil comprising RF coils, permanent magnets, gradient coils, and the like, and reconstructs a magnetic resonance image by sampling the MR signals.
A CT apparatus can provide cross-sectional images of an object and, unlike a general X-ray apparatus, can represent the internal structures of the object (e.g., organs such as the kidneys and lungs) without overlap, so it is widely used for precise diagnosis of diseases.
A CT apparatus irradiates an object with X-rays, detects the X-rays that have passed through the object, and reconstructs an image from the detected X-rays.
An X-ray imaging apparatus images the inside of an object by irradiating it with X-rays and detecting the X-rays transmitted through it.
An ultrasound apparatus transmits ultrasound signals to an object and receives the signals reflected from it to form a two-dimensional or three-dimensional ultrasound image of an object of interest within the body.
As described above, medical images acquired by the various medical image display apparatuses represent an object in different ways depending on the type of apparatus and the imaging method. A doctor reads a medical image to determine whether the patient has a disease or health problem. Accordingly, there is a need for a medical image display apparatus that facilitates diagnosis by allowing the doctor to select and read the medical image best suited to diagnosing the patient.
According to an embodiment of the present invention, a medical image display apparatus includes: a display unit that displays a first medical image of an object including at least one anatomical entity; and at least one processor that extracts reference region information corresponding to the anatomical entity from at least one second medical image serving as a reference image for the first medical image, detects a region corresponding to the anatomical entity in the first medical image based on the extracted reference region information, and controls the display unit so that the detected region of the anatomical entity is displayed distinguishably from regions that are not the anatomical entity. This provides a function for distinguishably displaying an entity that could not be identified in the medical image, using information extracted from the reference image.
The processor may register the first and second medical images to generate a third medical image including display region information for the anatomical entity detected in the first medical image, and may control the display unit so that, based on the display region information, the region of the anatomical entity detected in the generated third medical image is displayed distinguishably from regions that are not the anatomical entity. Thus, the entity can be distinguishably displayed in an image generated by medical image registration.
There may be a plurality of anatomical entities, and the display unit may display the regions of the plurality of anatomical entities so that each is distinguishable. This enables distinguishable display of previously unidentified entities, providing convenience for diagnosis.
The plurality of anatomical entities may include at least one of blood vessels and lymph nodes. Blood vessels and lymph nodes, which are important factors in clinical judgment, are thereby displayed distinguishably.
The first medical image may be a non-contrast medical image. Anatomical entities can thus be distinguishably displayed in a non-contrast medical image of a subject for whom contrast-agent side effects are a concern.
The second medical image may be a contrast-enhanced medical image, so that a contrast-enhanced image in which entities are easily separated by region segmentation can be used as the reference image.
The second medical image may be a medical image obtained by imaging, at another point in time, the same object from which the first medical image was acquired. The patient's history can thus be used for the current diagnosis.
On the display unit, the region of the detected anatomical entity may be displayed distinguishably from regions that are not the anatomical entity by at least one of color, pattern, pointer, highlight, and animation effects, providing various display options according to user preference.
The distinguishable display of the anatomical entity's region may be enabled or disabled by user selection, for the user's convenience.
The apparatus may further include a user input unit that receives user input, and the processor may control the display unit to adjust the level of the distinguishable display of the anatomical entity in response to the user input, so the function matches the user's preference.
The processor may further detect a lesion extension region within the region of the anatomical entity and control the display unit so that the detected lesion extension region is identifiably displayed within the entity's region. Additional information on lesion progression thus facilitates diagnosis.
The processor may extract the reference region information corresponding to the anatomical entity using the brightness values of the pixels of the second medical image, efficiently exploiting the information in the stored image.
The processor may perform image registration using predetermined transformation model parameters so that the value of a similarity measure between the first and second medical images is maximized, improving the accuracy of the registered image produced by the registration process.
The processor may perform image registration using predetermined transformation model parameters so that the value of a cost function between the first and second medical images is minimized, lowering the likelihood of error in the registered image.
The processor may map the coordinate systems of the first and second medical images and, for the coordinate-mapped images, perform homogeneous registration that matches the second medical image to the first while preserving the second image's characteristics. A registered image in which lymph nodes and blood vessels are displayed distinguishably is thereby provided.
For the homogeneously registered first and second medical images, the processor may further perform heterogeneous registration that deforms the image characteristics of the second medical image to match the first medical image completely. Quantification results for lesion extension within a lymph node can thus be provided to the user.
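The sequential homogeneous-then-heterogeneous registration described above can be illustrated with a toy sketch. The integer-shift transforms and sum-of-absolute-differences cost below are illustrative stand-ins for the patent's transformation model and cost function, not the actual implementation:

```python
import numpy as np

SHIFTS = range(-3, 4)   # illustrative transform search space

def sad(a, b):
    """Sum of absolute differences: a cost function to minimize."""
    return float(np.abs(a - b).sum())

def homogeneous(fixed, moving):
    """One global shift for the whole image; intensities untouched."""
    return min(SHIFTS, key=lambda s: sad(fixed, np.roll(moving, s, axis=1)))

def heterogeneous(fixed, moving):
    """Local refinement: an independent residual shift per row."""
    return [min(SHIFTS, key=lambda s: sad(fixed[r], np.roll(moving[r], s)))
            for r in range(fixed.shape[0])]

fixed = np.eye(4)
moving = np.roll(np.eye(4), 1, axis=1)       # globally displaced copy
g = homogeneous(fixed, moving)               # global (rigid) stage
residual = heterogeneous(fixed, np.roll(moving, g, axis=1))  # deformable stage
```

After the global stage absorbs the rigid displacement, the per-row residual shifts are zero; in a real pipeline the deformable stage would instead capture local anatomical deformation between the two scans.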
Meanwhile, a medical image processing method according to an embodiment of the present invention may include: displaying a first medical image of an object including at least one anatomical entity; extracting reference region information corresponding to the anatomical entity from at least one second medical image serving as a reference image for the first medical image; and detecting a region corresponding to the anatomical entity in the first medical image based on the extracted reference region information, and displaying the detected region of the anatomical entity distinguishably from regions that are not the anatomical entity. This provides a function for distinguishably displaying an entity that could not be identified in the medical image, using information extracted from the reference image.
The method may further include registering the first and second medical images to generate a third medical image including display region information for the anatomical entity detected in the first medical image, and the displaying step may display, based on the display region information, the region of the anatomical entity detected in the generated third medical image distinguishably from regions that are not the anatomical entity. Thus, the entity can be distinguishably displayed in an image generated by medical image registration.
There may be a plurality of anatomical entities, and the displaying step may display the regions of the plurality of anatomical entities so that each is distinguishable, enabling distinguishable display of previously unidentified entities and providing convenience for diagnosis.
The plurality of anatomical entities may include at least one of blood vessels and lymph nodes, so that blood vessels and lymph nodes, which are important factors in clinical judgment, are displayed distinguishably.
The first medical image may be a non-contrast medical image, so anatomical entities can be distinguishably displayed in a non-contrast medical image of a subject for whom contrast-agent side effects are a concern.
The second medical image may be a contrast-enhanced medical image, so that a contrast-enhanced image in which entities are easily separated by region segmentation can be used as the reference image.
The second medical image may be a medical image obtained by imaging, at another point in time, the same object from which the first medical image was acquired, so the patient's history can be used for the current diagnosis.
The displaying step may display the region of the detected anatomical entity distinguishably from regions that are not the anatomical entity by at least one of color, pattern, pointer, highlight, and animation effects, providing various display options according to user preference.
The method may further include receiving a user selection that enables or disables the distinguishable display of the anatomical entity's region, for the user's convenience.
The method may further include receiving a user input that adjusts the level of the distinguishable display of the anatomical entity, so the function matches the user's preference.
The method may further include detecting a lesion extension region within the distinguishably displayed region of the anatomical entity and identifiably displaying the detected lesion extension region within the entity's region. Additional information on lesion progression thus facilitates diagnosis.
The extracting step may extract the reference region information corresponding to the anatomical entity using the brightness values of the pixels of the second medical image, efficiently exploiting the information in the stored image.
The generating of the third medical image may perform image registration using predetermined transformation model parameters so that the value of a similarity measure between the first and second medical images is maximized, improving the accuracy of the registered image.
The generating of the third medical image may perform image registration using predetermined transformation model parameters so that the value of a cost function between the first and second medical images is minimized, lowering the likelihood of error in the registered image.
The generating of the third medical image may include: mapping the coordinate systems of the first and second medical images; and, for the coordinate-mapped images, performing homogeneous registration that matches the second medical image to the first while preserving the second image's characteristics, providing a registered image in which lymph nodes and blood vessels are displayed distinguishably.
The generating of the third medical image may further include, for the homogeneously registered first and second medical images, performing heterogeneous registration that deforms the image characteristics of the second medical image to match the first medical image completely, so that quantification results for lesion extension within a lymph node can be provided to the user.
한편, 본 발명 일실시예에 따른 컴퓨터가 읽을 수 있는 프로그램으로서 의료영상 처리방법을 수행하는 프로그램이 기록된 기록매체에서, 의료영상 처리방법은, 적어도 하나의 해부학적 개체를 포함하는 대상체를 촬상한 제1 의료영상을 표시하는 단계와; 제1 의료영상의 참조영상인 적어도 하나의 제2 의료영상으로부터 해부학적 개체에 대응하는 참조영역정보를 추출하는 단계와; 추출된 참조영역정보에 기초하여 제1 의료영상에서 해부학적 개체에 대응하는 영역을 검출하고, 검출된 해부학적 개체의 영역이 해부학적 개체가 아닌 영역과 구분하여 표시되도록 하는 단계를 포함할 수 있다. 이에 의해, 참조영상으로부터 추출된 정보를 이용하여 의료영상에서 식별되지 않았던 개체에 대한 구분 표시 기능이 제공된다.Meanwhile, in a recording medium on which a program for performing a medical image processing method is recorded as a computer-readable program according to an embodiment of the present invention, the medical image processing method may include capturing an object including at least one anatomical object. Displaying a first medical image; Extracting reference region information corresponding to the anatomical object from at least one second medical image that is a reference image of the first medical image; Detecting a region corresponding to the anatomical entity in the first medical image based on the extracted reference region information, and displaying the detected anatomical entity's region separately from the non-anatomical entity. . As a result, a division display function for an object that has not been identified in the medical image using information extracted from the reference image is provided.
제1 의료영상과 제2 의료영상을 정합하여 제1 의료영상에서 검출된 해부학적 개체에 대한 표시영역정보를 포함하는 제3 의료영상을 생성하는 단계를 더 포함하며, 구분하여 표시되도록 하는 단계는, 표시영역정보에 기초하여 생성된 제3 의료영상에서 검출된 해부학적 개체의 영역이 해부학적 개체가 아닌 영역과 구분하여 표시되도록 할 수 있다. 이에, 의료정합을 이용하여 생성된 영상에 의해 개체에 대한 구분 표시가 가능하게 된다.And matching the first medical image with the second medical image to generate a third medical image including display area information on the anatomical object detected in the first medical image, and displaying the divided medical images. The region of the anatomical entity detected in the third medical image generated based on the display region information may be displayed separately from the region which is not the anatomical entity. As a result, it is possible to display an individual object by the image generated using medical registration.
해부학적 개체는 복수이며, 구분하여 표시되도록 하는 단계는, 복수의 해부학적 개체의 영역들이 각각 구분하여 표시되도록 할 수 있다. 이에, 식별되지 않았던 개체들에 대한 구분 표시가 가능하므로, 진단에 편의를 제공한다.The anatomical objects are plural, and the step of displaying the anatomical objects separately may include displaying the areas of the plurality of anatomical objects separately. Thus, it is possible to distinguish the identification of the unidentified objects, thereby providing convenience for diagnosis.
복수의 해부학적 개체는, 혈관 및 림프노드 중 적어도 하나를 포함할 수 있다. 이에, 중요 임상 판정 요소인 혈관과 림프노드가 구분되어 표시되는 효과가 발생한다.The plurality of anatomical entities may comprise at least one of blood vessels and lymph nodes. As a result, there is an effect in which blood vessels and lymph nodes, which are important clinical judgment factors, are divided and displayed.
제1 의료영상은 비 조영 의료영상일 수 있다. 이에, 조영제 부작용이 우려되는 대상체에 의해 촬영된 비 조영 의료영상에서 해부학적 개체의 구분 표시가 가능하다.The first medical image may be a non-contrast medical image. Thus, it is possible to distinguish and display anatomical objects in non-contrast medical images taken by a subject of concern about side effects of the contrast agent.
The second medical image may be a contrast-enhanced medical image. Accordingly, a contrast-enhanced medical image, in which anatomical entities are easily distinguished by region segmentation, can be used as the reference image.
The second medical image may be a medical image obtained by imaging, at a different point in time, the same object from which the first medical image was acquired. Accordingly, the patient's past history can be utilized in the current diagnosis.
In the displaying step, the region of the detected anatomical entity may be displayed so as to be distinguished from regions that are not the anatomical entity by at least one of a color, a pattern, a pointer, a highlight, and an animation effect. Accordingly, various distinguishing-display options are provided to suit the user's preference.
The method may further include receiving a user selection for activating or deactivating the distinguishing display of the region of the anatomical entity. This improves the convenience of user control over the entity-distinguishing display function.
The method may further include receiving a user input for adjusting the level of the distinguishing display of the anatomical entity. Accordingly, a function matching the user's preference can be provided.
The method may further include detecting a lesion-extension region within the distinguishably displayed region of the anatomical entity, and displaying the detected lesion-extension region so that it is identifiable within the region of the anatomical entity. Accordingly, additional information on the progression of the lesion is provided, facilitating lesion diagnosis.
In the extracting of the reference region information, the reference region information corresponding to the anatomical entity may be extracted using the brightness values of the pixels of the second medical image. Accordingly, the necessary information can be obtained by efficiently utilizing information in the previously stored image.
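As an illustration of extracting reference region information from pixel brightness values, a minimal sketch follows. The intensity values and the threshold `VESSEL_MIN` are invented for illustration and are not taken from the original; real contrast-enhanced CT segmentation would use calibrated Hounsfield-unit ranges and morphological post-processing.

```python
import numpy as np

# Toy contrast-enhanced slice: enhanced structures (e.g., vessels) appear
# bright, background stays dark. All numbers here are illustrative only.
image = np.array([
    [ 10,  20, 250, 240],
    [ 15, 260, 255,  30],
    [ 12,  18, 245,  25],
])

VESSEL_MIN = 200  # assumed lower bound for enhanced-structure brightness

# Reference-region mask: pixels whose brightness falls in the assumed range.
mask = image >= VESSEL_MIN
coords = np.argwhere(mask)  # pixel coordinates of the anatomical entity
print(int(mask.sum()))  # 5 pixels classified as belonging to the entity
```

The resulting mask (or its coordinates) is the kind of "reference region information" that can then be carried over to the non-contrast image through registration.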
In the generating of the third medical image, image registration may be performed using predetermined transformation-model parameters so that the result of a similarity measurement function between the first medical image and the second medical image is maximized. This improves the accuracy of the registered image produced by the image registration process.
Alternatively, in the generating of the third medical image, image registration may be performed using predetermined transformation-model parameters so that the result of a cost function between the first medical image and the second medical image is minimized. This lowers the likelihood of errors in the registered image produced by the image registration process.
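The optimization described above (searching transformation-model parameters so that a similarity measure is maximized or, equivalently, a cost function is minimized) can be sketched as follows. This is a minimal illustrative sketch assuming a sum-of-squared-differences cost and integer translations only; it is not the patent's actual implementation, and all function names are hypothetical. Practical registration uses continuous transforms, interpolation, and gradient-based or evolutionary optimizers.

```python
import numpy as np

def ssd_cost(fixed, moving):
    # Sum-of-squared-differences cost: lower means better alignment.
    return float(np.sum((fixed - moving) ** 2))

def register_translation(fixed, moving, max_shift=3):
    # Exhaustively search integer translations of `moving` and keep the
    # one that minimizes the cost function against `fixed`.
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            c = ssd_cost(fixed, shifted)
            if c < best_cost:
                best_cost, best = c, (dy, dx)
    return best, best_cost

fixed = np.zeros((16, 16))
fixed[5:9, 5:9] = 1.0
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)  # known offset
(dy, dx), cost = register_translation(fixed, moving)
print(dy, dx)  # recovered shift undoes the known offset: (-2, 1)
```

Maximizing a similarity measure (e.g., normalized cross-correlation or mutual information) follows the same search structure with the comparison reversed.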
The generating of the third medical image may include: mapping the coordinate systems of the first medical image and the second medical image; and, for the first and second medical images whose coordinate systems have been mapped, performing homogeneous (rigid) registration that matches the second medical image to the first medical image while preserving the image characteristics of the second medical image. Accordingly, a registered image in which lymph nodes and blood vessels are displayed distinguishably is provided.
The generating of the third medical image may further include, for the first and second medical images on which the homogeneous registration has been performed, performing heterogeneous (deformable) registration that deforms the image characteristics of the second medical image so that it fully matches the first medical image. Accordingly, even a quantitative result on the extension of a lesion within a lymph node can be provided to the user.
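The two-stage flow above (homogeneous registration preserving the second image's characteristics, followed by heterogeneous refinement that deforms it to fully match) can be sketched as follows. This is a minimal illustrative implementation using exhaustive search over integer translations, global for stage 1 and per-block for stage 2; real systems use continuous rigid transforms and dense deformation fields, and all names here are hypothetical.

```python
import numpy as np

def rigid_align(fixed, moving):
    # Stage 1 (homogeneous/rigid): one global shift for the whole image,
    # preserving the moving image's own characteristics.
    best, best_cost = (0, 0), np.inf
    for dy in range(-4, 5):
        for dx in range(-4, 5):
            cand = np.roll(np.roll(moving, dy, 0), dx, 1)
            c = np.sum((fixed - cand) ** 2)
            if c < best_cost:
                best_cost, best = c, (dy, dx)
    dy, dx = best
    return np.roll(np.roll(moving, dy, 0), dx, 1)

def deformable_refine(fixed, moving, block=4):
    # Stage 2 (heterogeneous/deformable): an independent local shift per
    # block, warping the moving image to fully match the fixed image.
    out = np.zeros_like(moving)
    h, w = moving.shape
    for by in range(0, h, block):
        for bx in range(0, w, block):
            fb = fixed[by:by + block, bx:bx + block]
            best, best_cost = (0, 0), np.inf
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    mb = np.roll(np.roll(moving, dy, 0), dx, 1)[by:by + block, bx:bx + block]
                    c = np.sum((fb - mb) ** 2)
                    if c < best_cost:
                        best_cost, best = c, (dy, dx)
            dy, dx = best
            out[by:by + block, bx:bx + block] = \
                np.roll(np.roll(moving, dy, 0), dx, 1)[by:by + block, bx:bx + block]
    return out

fixed = np.zeros((8, 8))
fixed[2:5, 3:6] = 1.0
moving = np.roll(fixed, 1, axis=0)           # simulate a shifted second image
aligned = rigid_align(fixed, moving)         # stage 1
refined = deformable_refine(fixed, aligned)  # stage 2
```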
According to an embodiment of the present invention, lymph node follow-up examination is possible even for patients with weak renal function, for whom active use of a contrast agent is burdensome.
In addition, the possibility of misdiagnosis in non-contrast-image-based lymph node disease determination is reduced, so that under-/over-estimation by the diagnostic system can be improved and diagnostic accuracy enhanced.
Furthermore, the present embodiment is applicable to non-contrast images for general health screening, and can be used for early diagnosis of cancerous diseases, for example, determining whether cancer has metastasized.
FIG. 1 is a diagram for explaining a medical image display apparatus according to an embodiment of the present invention;
FIG. 2 schematically illustrates an MRI apparatus according to an embodiment of the present invention;
FIG. 3 illustrates a CT apparatus according to an embodiment of the present invention;
FIG. 4 schematically illustrates the configuration of the CT apparatus of FIG. 3;
FIG. 5 briefly illustrates the configuration of a communication unit that communicates with the outside in a network system;
FIG. 6 illustrates a system including a first medical apparatus, a second medical apparatus, and a medical image registration apparatus according to an embodiment of the present invention;
FIG. 7 conceptually illustrates the lymph nodes and blood vessel distribution of the thoracic region;
FIG. 8 illustrates a contrast-enhanced CT image of the thoracic region;
FIG. 9 illustrates a non-contrast CT image of the thoracic region;
FIG. 10 is a block diagram illustrating the configuration of a medical image display apparatus according to an embodiment of the present invention;
FIG. 11 is a block diagram illustrating the configuration of the image processor of FIG. 10;
FIG. 12 illustrates a first medical image according to an embodiment of the present invention;
FIG. 13 illustrates a second medical image according to an embodiment of the present invention;
FIG. 14 illustrates the second medical image from which an entity has been extracted;
FIG. 15 is a diagram for explaining an image registration process according to the present embodiment;
FIG. 16 conceptually illustrates a homogeneous (rigid) registration process;
FIG. 17 conceptually illustrates a heterogeneous (deformable) registration process;
FIG. 18 is a flowchart illustrating the steps of the registration process in an embodiment of the present invention;
FIG. 19 illustrates a third medical image according to an embodiment of the present invention;
FIG. 20 illustrates a fourth medical image;
FIG. 21 is an enlarged view of a partial entity region of FIG. 20;
FIG. 22 illustrates a screen displayed when an application having a medical diagnosis function is run on the medical image display apparatus according to an embodiment of the present invention;
FIGS. 23 to 26 illustrate various examples of utilizing image registration for diagnosis in the medical image display apparatus according to an embodiment of the present invention; and
FIG. 27 is a flowchart illustrating a medical image processing method according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art to which the present invention pertains may easily practice them. The present invention is not limited to the embodiments described herein and may be embodied in various different forms.
The terms used in the present invention have been selected, as far as possible, from general terms currently in wide use in consideration of their function in the present invention; however, these may vary depending on the intention of those skilled in the art, legal precedent, the emergence of new technologies, and the like. In certain cases, terms arbitrarily chosen by the applicant are also used, in which case their meaning will be described in detail in the corresponding description of the invention. Therefore, the terms used in the present invention should be defined based on their meaning and the overall content of the present invention, not simply on the names of the terms.
In the embodiments, terms such as "comprise" or "have" are intended to designate the presence of the features, numbers, steps, operations, components, or combinations thereof described in the specification, and do not exclude the presence or addition of one or more other features, numbers, steps, operations, components, or combinations thereof.
The term "unit" used in this specification means a software component or a hardware component such as an FPGA or an ASIC, and a "unit" performs certain roles. However, a "unit" is not limited to software or hardware. A "unit" may be configured to reside in an addressable storage medium and may be configured to run on one or more processors. Thus, as an example, a "unit" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. The functionality provided within components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units".
As used herein, an "image" may mean multi-dimensional data composed of discrete image elements (e.g., pixels in a two-dimensional image and voxels in a three-dimensional image). For example, an image may include a medical image of an object acquired by X-ray, CT, MRI, ultrasound, or another medical imaging system.
As used herein, an "object" may include a human, an animal, or a part of a human or animal. For example, an object may include an organ such as the liver, heart, uterus, brain, breast, or abdomen, or a blood vessel. An "object" may also include a phantom. A phantom means a material having a volume very close to the density and effective atomic number of a living organism, and may include a spherical phantom having properties similar to those of the human body.
As used herein, a "user" may be a medical professional such as a doctor, nurse, clinical pathologist, or medical imaging specialist, or may be a technician who repairs medical apparatuses, but is not limited thereto.
Hereinafter, in order to describe the present invention clearly with reference to the drawings, descriptions of parts not directly related to the configuration of the present invention may be omitted, and like reference numerals denote like or similar elements throughout the specification.
FIG. 1 is a diagram for explaining a medical image display apparatus 100 according to an embodiment of the present invention.
According to an embodiment of the present invention, the medical image display apparatus 100 may be an apparatus that acquires a medical image and displays the medical image on a screen. For example, as shown in FIG. 1, the medical image display apparatus 100 may be a magnetic resonance imaging apparatus (hereinafter also referred to as an MRI apparatus) 101, a computed tomography apparatus (hereinafter also referred to as a CT apparatus) 102, an X-ray imaging apparatus (not shown), an angiography apparatus (not shown), an ultrasound apparatus 103, or the like, but is not limited thereto.
The MRI apparatus 101 is an apparatus that acquires an image of a tomographic region of an object by expressing, as image contrast, the intensity of a magnetic resonance (MR) signal produced in response to a radio frequency (RF) signal applied within a magnetic field of a specific strength.
Since the CT apparatus 102 can provide cross-sectional images of an object, it has the advantage of depicting the internal structures of the object (e.g., organs such as the kidneys or lungs) without the overlap that occurs in a general X-ray imaging apparatus. The CT apparatus 102 can provide relatively accurate cross-sectional images of the object by, for example, acquiring and processing tens or hundreds of images per second, each 2 mm thick or less.
The X-ray imaging apparatus refers to an apparatus that images the internal structures of the human body by transmitting X-rays through the body. The angiography apparatus is an apparatus that makes it possible to view, via X-rays, the blood vessels (arteries and veins) of an examinee into which a contrast agent has been injected through a thin tube about 2 mm in diameter called a catheter.
The ultrasound apparatus 103 refers to an apparatus that transmits ultrasound signals from the body surface of an object toward a predetermined part inside the body and obtains images of soft-tissue cross-sections or blood flow using information from the ultrasound signals reflected from tissues in the body (hereinafter also referred to as ultrasound echo signals).
According to an embodiment of the present invention, the medical image display apparatus 100 may be implemented in various forms. For example, the medical image display apparatus 100 described herein may be implemented not only as a stationary terminal but also as a mobile terminal. Examples of mobile terminals include a smartphone, a smart pad, a tablet PC, a laptop computer, and a PDA.
According to an embodiment of the present invention, the medical image display apparatus 100 may exchange medical image data with a hospital server or another medical apparatus in a hospital connected through a picture archiving and communication system (PACS). In addition, the medical image display apparatus 100 may perform data communication with a server or the like in accordance with the Digital Imaging and Communications in Medicine (DICOM) standard.
According to an embodiment of the present invention, the medical image display apparatus 100 may include a touch screen. The touch screen may be configured to detect not only the position and area of a touch input but also the pressure of the touch input. In addition, the touch screen may be configured to detect a proximity touch as well as a real touch.
In this specification, a "real touch" refers to a case where the screen is actually touched by the user's body (e.g., a finger) or by a touch pen provided as a touch tool (e.g., a pointing device, a stylus, a haptic pen, or an electronic pen), and a "proximity touch" refers to a case where the user's body or the touch tool does not actually touch the screen but approaches within a predetermined distance of the screen (e.g., hovering at a detectable distance of 30 mm or less).
The touch screen may be implemented by, for example, a resistive method, a capacitive method, an infrared method, or an acoustic (surface acoustic wave) method.
According to an embodiment of the present invention, the medical image display apparatus 100 may detect a gesture input as a user's touch input to the medical image through the touch screen.
The user's touch inputs described herein include a tap, a click (a touch stronger than a tap), a touch and hold, a double tap, a double click, a drag (moving a predetermined distance while maintaining the touch), a drag and drop, a slide, a flick, a pan, a swipe, a pinch, and the like. Inputs such as a drag, slide, flick, or swipe may consist of a press in which the finger (or touch pen) contacts the touch screen, a movement over a certain distance, and a release from the touch screen, and include both straight-line and curved movements. The various touch inputs above are included among gesture inputs.
According to an embodiment of the present invention, the medical image display apparatus 100 may provide some or all of the buttons for controlling the medical image in the form of a graphical user interface (GUI).
FIG. 2 is a diagram schematically showing an MRI apparatus 101 according to an embodiment of the present invention.
In the present embodiment, a magnetic resonance image (MRI) means an image of an object acquired using the principle of nuclear magnetic resonance.
The MRI apparatus 101 is an apparatus that acquires an image of a tomographic region of an object by expressing, as image contrast, the intensity of the MR signal produced in response to an RF signal applied within a magnetic field of a specific strength. For example, if the object is placed in a strong magnetic field and an RF signal that resonates only a specific atomic nucleus (e.g., the hydrogen nucleus) is momentarily applied to the object and then stopped, an MR signal is emitted from that specific atomic nucleus, and the MRI apparatus 101 can receive this MR signal and acquire an MR image. The MR signal means an RF signal radiated from the object. The magnitude of the MR signal may be determined by the concentration of a given atom (e.g., hydrogen) contained in the object, the relaxation times T1 and T2, and flow such as blood flow.
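As standard MR physics background (not stated as a formula in the original), the dependence of the measured signal on proton density and the relaxation times can be summarized, for a conventional spin-echo sequence with repetition time TR and echo time TE, as:

```latex
S \;\propto\; \rho \left(1 - e^{-\mathrm{TR}/T_1}\right) e^{-\mathrm{TE}/T_2}
```

where $\rho$ is the proton (hydrogen) density. Choosing TR and TE relative to $T_1$ and $T_2$ is what weights the image toward one contrast mechanism or another.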
The MRI apparatus 101 has characteristics different from those of other imaging apparatuses. Unlike imaging apparatuses such as CT, in which image acquisition depends on the orientation of the detecting hardware, the MRI apparatus 101 can acquire a 2D image or a 3D volume image oriented toward an arbitrary point. In addition, unlike CT, X-ray, PET, and SPECT, the MRI apparatus 101 does not expose the object or the examiner to radiation, and can acquire images having high soft-tissue contrast, so that it can acquire neurological images, intravascular images, musculoskeletal images, oncologic images, and the like, in which clear depiction of abnormal tissue is important.
As shown in FIG. 2, the MRI apparatus 101 of the present embodiment may include a gantry 220, a signal transceiver 230, a monitoring unit 240, a system controller 250, and an operating unit 260.
The gantry 220 blocks electromagnetic waves generated by the main magnet 222, the gradient coils 224, the RF coil 226, and the like from radiating to the outside. A static magnetic field and gradient magnetic fields are formed in the bore of the gantry 220, and an RF signal is irradiated toward the object 210.
The main magnet 222, the gradient coils 224, and the RF coil 226 may be arranged along a predetermined direction of the gantry 220. The predetermined direction may include a coaxial cylindrical direction or the like. The object 210 may be positioned on a table 228 that can be inserted into the cylinder along the horizontal axis of the cylinder.
The main magnet 222 generates a static magnetic field for aligning the directions of the magnetic dipole moments of the atomic nuclei contained in the object 210 in a constant direction. The stronger and more uniform the magnetic field generated by the main magnet, the more precise and accurate an MR image of the object 210 that can be acquired.
The gradient coils 224 include X, Y, and Z coils that generate gradient magnetic fields in the mutually orthogonal X-axis, Y-axis, and Z-axis directions. The gradient coils 224 can provide position information for each part of the object 210 by inducing different resonance frequencies for different parts of the object 210.
The RF coil 226 may irradiate an RF signal toward the patient and receive the MR signal emitted from the patient. Specifically, the RF coil 226 may transmit to the patient, toward the precessing atomic nuclei, an RF signal having the same frequency as the precession frequency, then stop transmitting the RF signal and receive the MR signal emitted from the patient.
For example, in order to cause an atomic nucleus to transition from a low energy state to a high energy state, the RF coil 226 may generate an electromagnetic wave signal having a radio frequency corresponding to the type of that atomic nucleus, e.g., an RF signal, and apply it to the object 210. When the electromagnetic wave signal generated by the RF coil 226 is applied to an atomic nucleus, the nucleus can transition from the low energy state to the high energy state. Thereafter, when the electromagnetic wave generated by the RF coil 226 disappears, the nucleus to which the electromagnetic wave was applied radiates an electromagnetic wave having the Larmor frequency while transitioning from the high energy state to the low energy state. In other words, when the application of the electromagnetic wave signal to the nucleus is stopped, the energy level of the nucleus changes from high to low energy and an electromagnetic wave having the Larmor frequency is emitted. The RF coil 226 may receive the electromagnetic wave signals radiated from the atomic nuclei inside the object 210.
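For reference, the Larmor frequency mentioned above is given by the standard relation between the gyromagnetic ratio $\gamma$ of the nucleus and the static field strength $B_0$ (well-known constants below, not values taken from the original):

```latex
\omega_0 = \gamma B_0, \qquad f_0 = \frac{\gamma}{2\pi} B_0
```

For hydrogen, $\gamma/2\pi \approx 42.58~\mathrm{MHz/T}$, so at $B_0 = 1.5~\mathrm{T}$ the resonance frequency is approximately $63.9~\mathrm{MHz}$.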
The RF coil 226 may be implemented as a single RF transceiver coil having both the function of generating an electromagnetic wave with a radio frequency corresponding to the type of atomic nucleus and the function of receiving the electromagnetic wave radiated from the nucleus. Alternatively, it may be implemented as a transmit RF coil having the function of generating an electromagnetic wave with a radio frequency corresponding to the type of atomic nucleus and a receive RF coil having the function of receiving the electromagnetic wave radiated from the nucleus.
The RF coil 226 may be fixed to the gantry 220 or may be detachable. The detachable RF coil 226 may include RF coils for parts of the object, including a head RF coil, a chest RF coil, a leg RF coil, a neck RF coil, a shoulder RF coil, a wrist RF coil, and an ankle RF coil.
In addition, the RF coil 226 may communicate with an external device by wire and/or wirelessly, and may also perform dual-tune communication according to the communication frequency band.
The RF coil 226 may include, according to the structure of the coil, a birdcage coil, a surface coil, and a transverse electromagnetic (TEM) coil.
The RF coil 226 may include, according to the RF signal transmission/reception method, a transmit-only coil, a receive-only coil, and a transmit/receive coil.
The RF coil 226 may include RF coils of various channel counts, such as 16 channels, 32 channels, 72 channels, and 144 channels.
Hereinafter, the case where the RF coil 226 is a radio frequency multi-coil including N coils respectively corresponding to first through N-th channels will be described as an example. Here, the radio frequency multi-coil may also be referred to as a multi-channel RF coil.
The gantry 220 may further include a display 229 positioned outside the gantry 220 and a display (not shown) positioned inside the gantry 220. Predetermined information may be provided to the user or the object through the displays positioned inside and/or outside the gantry 220.
신호 송수신부(230)는 소정의 MR 시퀀스에 따라 갠트리(220) 내부, 즉 보어에 형성되는 경사자장을 제어하고, RF 신호와 MR 신호의 송수신을 제어할 수 있다.The signal transceiver 230 may control the gradient magnetic field formed in the gantry 220, that is, the bore according to a predetermined MR sequence, and may control the transmission and reception of the RF signal and the MR signal.
신호 송수신부(230)는 경사자장 증폭기(232), 송수신 스위치(234), RF 송신부(236) 및 RF 데이터 획득부(238)를 포함할 수 있다.The signal transceiver 230 may include a gradient magnetic field amplifier 232, a transceiver switch 234, an RF transmitter 236, and an RF data acquirer 238.
경사자장 증폭기(Gradient Amplifier)(232)는 갠트리(220)에 포함된 경사 코일(224)을 구동시키며, 경사자장 제어부(254)의 제어 하에 경사자장을 발생시키기 위한 펄스 신호를 경사 코일(224)에 공급할 수 있다. 경사자장 증폭기(232)로부터 경사 코일(224)에 공급되는 펄스 신호를 제어함으로써, X축, Y축, Z축 방향의 경사 자장이 합성될 수 있다.The gradient amplifier 232 drives the gradient coil 224 included in the gantry 220, and outputs a pulse signal for generating the gradient magnetic field under the control of the gradient magnetic field controller 254. Can be supplied to By controlling the pulse signal supplied from the gradient amplifier 232 to the gradient coil 224, gradient magnetic fields in the X-axis, Y-axis, and Z-axis directions can be synthesized.
RF 송신부(236) 및 RF 데이터 획득부(238)는 RF 코일(226)을 구동시킬 수 있다. RF 송신부(236)는 라모어 주파수(Larmor frequency)의 RF 펄스를 RF 코일(226)에 공급하고, RF 데이터 획득부(238)는 RF 코일(226)이 수신한 MR 신호를 수신할 수 있다.The RF transmitter 236 and the RF data acquirer 238 may drive the RF coil 226. The RF transmitter 236 may supply an RF pulse of a Larmor frequency to the RF coil 226, and the RF data acquirer 238 may receive an MR signal received by the RF coil 226.
The transmission/reception switch 234 may adjust the transmission and reception directions of the RF signal and the MR signal. For example, the RF signal may be irradiated onto the object 210 through the RF coil 226 during a transmission mode, and the MR signal may be received from the object 210 through the RF coil 226 during a reception mode. The transmission/reception switch 234 may be controlled by a control signal from the RF controller 256.
The monitoring unit 240 may monitor or control the gantry 220 or devices mounted on the gantry 220. The monitoring unit 240 may include a system monitoring unit 242, an object monitoring unit 244, a table controller 246, and a display controller 248.
The system monitoring unit 242 may monitor and control the state of the static magnetic field, the state of the gradient magnetic field, the state of the RF signal, the state of the RF coil, the state of the table, the state of the devices that measure body information of the object, the power supply state, the state of the heat exchanger, the state of the compressor, and the like.
The object monitoring unit 244 monitors the state of the object 210. Specifically, the object monitoring unit 244 may include a camera for observing the movement or position of the object 210, a respiration meter for measuring the respiration of the object 210, an ECG meter for measuring the electrocardiogram of the object 210, or a body temperature meter for measuring the body temperature of the object 210.
The table controller 246 controls the movement of the table 228 on which the object 210 is positioned. The table controller 246 may control the movement of the table 228 according to the sequence control of the sequence controller 252. For example, in moving imaging of an object, the table controller 246 may move the table 228 continuously or intermittently according to the sequence control of the sequence controller 252, whereby the object can be imaged with a field of view (FOV) larger than the FOV of the gantry.
The display controller 248 controls the display 229 positioned on the outside and/or inside of the gantry 220. Specifically, the display controller 248 may control turning the display 229 on or off, the screen to be output on the display 229, and the like. In addition, when a speaker is positioned inside or outside the gantry 220, the display controller 248 may control turning the speaker on or off, the sound to be output through the speaker, and the like.
The system controller 250 may include a sequence controller 252 that controls the sequence of signals formed inside the gantry 220, and a gantry controller 258 that controls the gantry 220 and the devices mounted on the gantry 220.
The sequence controller 252 may include a gradient magnetic field controller 254 that controls the gradient amplifier 232, and an RF controller 256 that controls the RF transmitter 236, the RF data acquirer 238, and the transmission/reception switch 234. The sequence controller 252 may control the gradient amplifier 232, the RF transmitter 236, the RF data acquirer 238, and the transmission/reception switch 234 according to a pulse sequence received from the operating unit 260.
Here, a pulse sequence refers to a succession of signals repeatedly applied by the MRI apparatus 101. The pulse sequence may include time parameters of the RF pulse, for example, a repetition time (TR) and an echo time (TE).
In the present embodiment, the pulse sequence includes all the information required to control the gradient amplifier 232, the RF transmitter 236, the RF data acquirer 238, and the transmission/reception switch 234, and may include, for example, information on the strength, application time, and application timing of the pulse signal applied to the gradient coil 224.
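To make the role of these timing parameters concrete, a pulse sequence can be viewed as a record of TR, TE, and the strength and timing of each gradient pulse. The sketch below is purely illustrative; the class and field names (`PulseSequence`, `GradientPulse`, etc.) are assumptions for the example and do not appear in the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GradientPulse:
    axis: str                  # "x", "y", or "z" gradient coil
    amplitude_mt_per_m: float  # pulse strength in mT/m
    start_ms: float            # application timing relative to RF excitation
    duration_ms: float         # application time

@dataclass
class PulseSequence:
    tr_ms: float               # repetition time (TR)
    te_ms: float               # echo time (TE)
    gradients: List[GradientPulse] = field(default_factory=list)

# A spin-echo-like parameter set: TR = 500 ms, TE = 15 ms,
# with a slice-select gradient applied during excitation.
seq = PulseSequence(tr_ms=500.0, te_ms=15.0)
seq.gradients.append(GradientPulse("z", 10.0, start_ms=0.0, duration_ms=3.0))
print(seq.te_ms < seq.tr_ms)  # True: the echo occurs before the next repetition
```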
The operating unit 260 may command pulse sequence information to the system controller 250 and, at the same time, control the operation of the MRI apparatus 101 as a whole.
The operating unit 260 may include an image processor 262 that processes the MR signal received from the RF data acquirer 238, an output unit 264, and a user input unit 266.
The image processor 262 may process the MR signal received from the RF data acquirer 238 to generate MR image data of the object 210.
The image processor 262 applies various kinds of signal processing, such as amplification, frequency conversion, phase detection, low-frequency amplification, and filtering, to the MR signal received by the RF data acquirer 238.
The image processor 262 may, for example, arrange digital data in the k-space of a memory and reconstruct the data into image data by applying a two-dimensional or three-dimensional Fourier transform.
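The Fourier reconstruction step described above can be sketched as follows. This is a minimal illustration assuming fully sampled Cartesian 2D k-space; NumPy is used only for demonstration and is not part of the patented apparatus.

```python
import numpy as np

def reconstruct_from_kspace(kspace: np.ndarray) -> np.ndarray:
    """Return the magnitude image for a fully sampled 2D k-space array."""
    image = np.fft.ifft2(np.fft.ifftshift(kspace))
    return np.abs(np.fft.fftshift(image))

# Round trip: a simple square phantom transformed to k-space and back
# is recovered up to floating-point precision.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = reconstruct_from_kspace(kspace)
print(bool(np.allclose(recon, phantom)))  # True
```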
If necessary, the image processor 262 may also perform composition processing, difference calculation processing, and the like on the image data. The composition processing may include pixel-wise addition processing, maximum intensity projection (MIP) processing, and the like. The image processor 262 may store not only the reconstructed image data but also image data that has undergone composition processing or difference calculation processing in a memory (not shown) or on an external server.
The various kinds of signal processing that the image processor 262 applies to the MR signal may be performed in parallel. For example, signal processing may be applied in parallel to a plurality of MR signals received by a multi-channel RF coil to reconstruct the plurality of MR signals into image data.
The output unit 264 may output the image data or the reconstructed image data generated by the image processor 262 to a user. The output unit 264 may also output information required for the user to operate the MRI system, such as a user interface (UI), user information, or object information.
The output unit 264 may include a speaker, a printer, a display, and the like. The implementation of the display is not limited; for example, it may be implemented using various display technologies such as liquid crystal, plasma, light-emitting diode, organic light-emitting diode, surface-conduction electron-emitter, carbon nanotube, and nano-crystal displays. The display may also be implemented so as to display images in 3D form and, in some cases, may be implemented as a transparent display.
In the present embodiment, the output unit 264 may include various other output devices within a range apparent to those skilled in the art.
A user may use the user input unit 266 to enter object information, parameter information, scan conditions, a pulse sequence, information on image composition or difference calculation, and the like. The user input unit 266 may include a keyboard, a mouse, a trackball, a voice recognizer, a gesture recognizer, a touch pad, and the like, and may include various other input devices within a range apparent to those skilled in the art.
Although FIG. 2 illustrates the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 as objects separate from one another, those skilled in the art will readily understand that the functions performed by each of the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 may be performed by other objects. For example, although it was described above that the image processor 262 converts the MR signal received by the RF data acquirer 238 into a digital signal, this conversion into a digital signal may instead be performed directly by the RF data acquirer 238 or the RF coil 226.
The gantry 220, the RF coil 226, the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 may be connected to one another by wire or wirelessly; when connected wirelessly, they may further include a device (not shown) for synchronizing their clocks with one another. For communication among the gantry 220, the RF coil 226, the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260, a high-speed digital interface such as LVDS (Low Voltage Differential Signaling), asynchronous serial communication such as UART (universal asynchronous receiver transmitter), error-synchronous serial communication, a low-latency network protocol such as CAN (Controller Area Network), optical communication, and the like may be used, and various other communication methods may be used within a range apparent to those skilled in the art.
FIG. 3 illustrates a CT device 102 according to an embodiment of the present invention, and FIG. 4 schematically illustrates the configuration of the CT device 102 of FIG. 3.
As shown in FIG. 3, the CT device 102 may include a gantry 302, a table 305, an X-ray generator 306, and an X-ray detector 308.
Since a tomography device such as the CT device 102 can provide cross-sectional images of an object, it has the advantage over a general X-ray imaging device of being able to depict the internal structures of the object (for example, organs such as the kidneys and lungs) without their overlapping one another.
The tomography apparatus may include any tomography apparatus such as a computed tomography (CT) device, an optical coherence tomography (OCT) device, or a positron emission tomography (PET)-CT device.
In the present embodiment, a tomography image is an image obtained by tomographically imaging an object with a tomography apparatus, and may refer to an image produced from data projected after irradiating the object with a beam such as an X-ray. Specifically, a computed tomography (CT) image may refer to a composite image of a plurality of X-ray images obtained by imaging an object while rotating about at least one axis of the object.
Hereinafter, the CT device 102 illustrated in FIGS. 3 and 4 will be described as an example of the tomography apparatus 300.
The CT device 102 can provide relatively accurate cross-sectional images of an object by, for example, acquiring and processing image data with a thickness of 2 mm or less tens to hundreds of times per second. Conventionally, there was the problem that only a transverse cross-section of the object could be represented, but this has been overcome by the advent of various image reconstruction techniques. The three-dimensional image reconstruction techniques include the following.
- SSD (shaded surface display): an early three-dimensional imaging technique that displays only voxels having a certain HU value.
- MIP (maximum intensity projection) / MinIP (minimum intensity projection): a 3D technique that displays only those voxels having the highest or lowest HU values among the voxels constituting the image.
- VR (volume rendering): a technique that can adjust the color and transmittance of the voxels constituting the image for each region of interest.
- Virtual endoscopy: a technique that allows endoscopic observation within a three-dimensional image reconstructed by the VR or SSD technique.
- MPR (multi-planar reformation): an imaging technique that reconstructs the image into a different cross-sectional image. Reconstruction is possible in any direction the user desires.
- Editing: various techniques for trimming surrounding voxels so that a region of interest can be observed more easily in VR.
- VOI (voxel of interest): a technique that renders only a selected region in VR.
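The MIP/MinIP technique listed above can be illustrated as follows: projecting a volume of HU values along one axis keeps, for each ray, only the maximum (or minimum) voxel value. This is a sketch with invented names, not code from the patent.

```python
import numpy as np

def mip(volume_hu: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection of a HU volume along one axis."""
    return volume_hu.max(axis=axis)

def minip(volume_hu: np.ndarray, axis: int = 0) -> np.ndarray:
    """Minimum intensity projection of a HU volume along one axis."""
    return volume_hu.min(axis=axis)

# A toy 3-slice volume: air (-1000 HU) with one bright bone-like voxel (+700 HU).
volume = np.full((3, 4, 4), -1000.0)
volume[1, 2, 2] = 700.0
projection = mip(volume, axis=0)
print(projection[2, 2])  # 700.0 — the bright voxel survives the projection
```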
The computed tomography (CT) device 102 according to an embodiment of the present invention may be described with reference to FIGS. 3 and 4. The CT device 102 according to an embodiment of the present invention may include various types of devices, as shown in FIG. 4.
The gantry 302 may include the X-ray generator 306 and the X-ray detector 308.
An object 30 may be positioned on the table 305.
The table 305 may move in a predetermined direction (for example, at least one of up, down, left, and right) during the CT imaging process. In addition, the table 305 may be tilted or rotated by a predetermined angle in a predetermined direction.
The gantry 302 may also be tilted by a predetermined angle in a predetermined direction.
As shown in FIG. 4, the CT device 102 according to an embodiment of the present invention may include the gantry 302, the table 305, a controller 318, a storage 324, an image processor 326, a user input unit 328, a display 330, and a communication unit 332.
As described above, the object 310 may be positioned on the table 305. The table 305 according to an embodiment of the present invention is movable in a predetermined direction (for example, at least one of up, down, left, and right), and its movement may be controlled by the controller 318.
The gantry 302 according to an embodiment of the present invention may include a rotating frame 304, the X-ray generator 306, the X-ray detector 308, a rotation driver 310, a data acquisition circuit 316, and a data transmitter 320.
The gantry 302 according to an embodiment of the present invention may include a ring-shaped rotating frame 304 that is rotatable about a predetermined rotation axis (RA). The rotating frame 304 may also be in the form of a disc.
The rotating frame 304 may include the X-ray generator 306 and the X-ray detector 308 arranged to face each other so as to have a predetermined field of view (FOV). The rotating frame 304 may also include an anti-scatter grid 314. The anti-scatter grid 314 may be positioned between the X-ray generator 306 and the X-ray detector 308.
In a medical image display apparatus, the X-ray radiation that reaches the detector (or photosensitive film) includes not only the attenuated primary radiation that forms a useful image, but also scattered radiation that degrades the quality of the image. To transmit most of the primary radiation and attenuate the scattered radiation, an anti-scatter grid may be positioned between the patient and the detector (or photosensitive film).
For example, the anti-scatter grid may consist of strips of lead foil alternately stacked with an interspace material such as a solid polymer material, or a solid polymer and a fiber composite material. However, the form of the anti-scatter grid is not necessarily limited thereto.
The rotating frame 304 may rotate the X-ray generator 306 and the X-ray detector 308 at a predetermined rotational speed based on a driving signal received from the rotation driver 310. The rotating frame 304 may receive the driving signal and power from the rotation driver 310 in a contact manner through a slip ring (not shown). The rotating frame 304 may also receive the driving signal and power from the rotation driver 310 through wireless communication.
The X-ray generator 306 may receive a voltage and current from a power distribution unit (PDU, not shown) via a slip ring (not shown) and a high voltage generator (not shown), and may thereby generate and emit X-rays. When the high voltage generator applies a predetermined voltage (hereinafter referred to as the tube voltage), the X-ray generator 306 may generate X-rays having a plurality of energy spectra corresponding to that tube voltage.
The X-rays generated by the X-ray generator 306 may be emitted in a predetermined form by a collimator 312.
The X-ray detector 308 may be positioned to face the X-ray generator 306. The X-ray detector 308 may include a plurality of X-ray detection elements. A single X-ray detection element may form a single channel, but is not necessarily limited thereto.
The X-ray detector 308 may sense the X-rays generated by the X-ray generator 306 and transmitted through the object 30, and may generate an electric signal corresponding to the intensity of the sensed X-rays.
The X-ray detector 308 may include an indirect-type detector, which converts radiation into light and then detects it, and a direct-type detector, which converts radiation directly into electric charge and detects it. An indirect-type X-ray detector may use a scintillator. A direct-type X-ray detector may use a photon counting detector. A data acquisition system (DAS) 316 may be connected to the X-ray detector 308. The electric signal generated by the X-ray detector 308 may be collected by the DAS 316, by wire or wirelessly. The electric signal generated by the X-ray detector 308 may also be provided to an analog/digital converter (not shown) via an amplifier (not shown).
Depending on the slice thickness or the number of slices, only some of the data collected from the X-ray detector 308 may be provided to the image processor 326, or the image processor 326 may select only some of the data.
The resulting digital signal may be provided to the image processor 326 through the data transmitter 320, by wire or wirelessly.
The controller 318 of the CT device 102 according to an embodiment of the present invention may control the operation of each module in the CT device 102. For example, the controller 318 may control the operations of the table 305, the rotation driver 310, a collimator 312, the DAS 316, the storage 324, the image processor 326, the user input unit 328, the display 330, the communication unit 332, and the like.
The image processor 326 may receive the data acquired from the DAS 316 (for example, pure data before processing) through the data transmitter 320 and perform pre-processing on it.
The pre-processing may include, for example, a process of correcting sensitivity nonuniformity between channels, and a process of correcting a sharp decrease in signal strength or a loss of signal due to an X-ray absorber such as metal.
The output data of the image processor 326 may be referred to as raw data or projection data. Such projection data may be stored in the storage 324 together with the imaging conditions at the time of data acquisition (for example, the tube voltage and the imaging angle).
The projection data may be a set of data values corresponding to the intensities of the X-rays that have passed through the object. For convenience of description, a set of projection data acquired simultaneously at the same imaging angle for all channels is referred to as a projection data set.
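The relationship between the detected X-ray intensity and the projection value can be illustrated with the Beer-Lambert law (a sketch under the usual monochromatic assumption; the function names are invented for illustration): the detector sees I = I0·exp(−Σ μᵢ·dᵢ), and the projection value used for reconstruction is the line integral −ln(I/I0).

```python
import numpy as np

def detected_intensity(i0: float, mu: np.ndarray, step_cm: float) -> float:
    """Attenuate an incident intensity i0 through voxels with coefficients mu (1/cm)."""
    return i0 * np.exp(-np.sum(mu) * step_cm)

mu_along_ray = np.array([0.2, 0.2, 0.5])   # attenuation coefficients along one ray
i = detected_intensity(1000.0, mu_along_ray, step_cm=1.0)
projection = -np.log(i / 1000.0)           # recovers the line integral of mu
print(round(float(projection), 6))  # 0.9
```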
The storage 324 may include at least one type of storage medium among flash memory, hard disk, multimedia card micro, card-type memory (SD, XD memory, etc.), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, magnetic disk, and optical disc.
The image processor 326 may also reconstruct a cross-sectional image of the object using the acquired projection data set. This cross-sectional image may be a three-dimensional image. In other words, the image processor 326 may generate a three-dimensional image of the object using a method such as cone beam reconstruction based on the acquired projection data set.
External input for X-ray tomography conditions, image processing conditions, and the like may be received through the user input unit 328. For example, the X-ray tomography conditions may include a plurality of tube voltages, energy value settings for a plurality of X-rays, imaging protocol selection, image reconstruction method selection, FOV region setting, the number of slices, the slice thickness, image post-processing parameter settings, and the like. The image processing conditions may include the resolution of the image, attenuation coefficient settings for the image, image combination ratio settings, and the like.
The user input unit 328 may include a device for receiving a predetermined input from outside. For example, the user input unit 328 may include a microphone, a keyboard, a mouse, a joystick, a touch pad, a touch pen, and a voice or gesture recognition device.
The display 330 may display the X-ray image reconstructed by the image processor 326.
The transmission and reception of data, power, and the like between the above-described elements may be performed using at least one of wired, wireless, and optical communication.
The communication unit 332 may communicate with an external device, an external medical device, and the like through a server 334.
FIG. 5 is a diagram schematically illustrating the configuration of a communication unit 532 that communicates with the outside in a network system.

The communication unit 532 illustrated in FIG. 5 may also be connected to at least one of the gantry 220, the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 illustrated in FIG. 2. That is, the communication unit 532 may exchange data with a hospital server or other medical apparatuses in the hospital connected through a Picture Archiving and Communication System (PACS), and may perform data communication according to the Digital Imaging and Communications in Medicine (DICOM) standard.

As shown in FIG. 5, the communication unit 532 is connected to the network 501 by wire or wirelessly to communicate with an external server 534, an external medical apparatus 536, or an external device 538 such as a portable device.

In detail, the communication unit 532 may transmit and receive data related to the diagnosis of an object through the network 501, and may also transmit and receive medical images captured by other medical apparatuses 536 such as CT, ultrasound, and X-ray apparatuses.
In an embodiment of the present invention, the communication unit 532 shown in FIG. 5 may be included in the CT apparatus 102 of FIG. 4. In this case, the communication unit 532 shown in FIG. 5 is the same as the communication unit 332 shown in FIG. 3. The other medical apparatus 536 may be, for example, the MRI apparatus 101 or the ultrasound apparatus 103 of FIG. 1.

In addition, the communication unit 532 illustrated in FIG. 5 may be included in the MRI apparatus 101 of FIG. 2. In this case, the MRI apparatus 101 shown in FIG. 2 may be implemented in a form further including the communication unit 532 of FIG. 5. The other medical apparatus 536 may be, for example, the CT apparatus 102 or the ultrasound apparatus 103 of FIG. 1.

The specific operation of the communication unit 532 is as follows.
The communication unit 532 may be connected to the network 501 by wire or wirelessly to communicate with the server 534, the external medical apparatus 536, or the external device 538. The communication unit 532 may exchange data with a hospital server or other medical apparatuses in the hospital connected through a Picture Archiving and Communication System (PACS).

In addition, the communication unit 532 may perform data communication with the external device 538 and the like according to the Digital Imaging and Communications in Medicine (DICOM) standard.
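As a non-limiting illustration of the kind of byte-level structure the DICOM standard defines for such data communication, the following sketch encodes a single explicit-VR little-endian data element (the Patient's Name tag (0010,0010), VR "PN"). This is a hedged sketch only: a complete DICOM file additionally requires a 128-byte preamble, the "DICM" magic bytes, and file meta information, all omitted here, and the sample name value is invented for illustration.

```python
# Sketch: one DICOM data element in explicit VR little-endian encoding.
# Short-length form: 2-byte group, 2-byte element, 2-char VR, 2-byte length,
# then the value padded to even length (per DICOM PS3.5).
import struct

def encode_element(group, element, vr, value: bytes) -> bytes:
    if len(value) % 2:          # DICOM requires even-length values
        value += b" "
    header = struct.pack("<HH2sH", group, element, vr.encode("ascii"), len(value))
    return header + value

# Patient's Name (0010,0010), VR "PN", hypothetical value:
elem = encode_element(0x0010, 0x0010, "PN", b"DOE^JOHN")
print(elem.hex())
```

The same short-length layout applies to most text-valued VRs; binary VRs such as OB use a longer 4-byte length field, which this sketch does not cover.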
The communication unit 532 may transmit and receive an image of the object and/or data related to the diagnosis of the object through the network 501. The communication unit 532 may receive medical images acquired by other medical apparatuses 536 such as the MRI apparatus 101 or an X-ray imaging apparatus.

Furthermore, the communication unit 532 may receive a patient's diagnosis history, treatment schedule, or the like from the server 534 and use it for the clinical diagnosis of the patient. The communication unit 532 may also perform data communication not only with the server 534 or the medical apparatus 536 in the hospital, but also with a portable device (terminal device) 538 of a user or a patient.

In addition, information on equipment malfunction and quality control status may be transmitted to a system administrator or service representative through the network, and feedback thereon may be received.
As described above, medical images acquired by various medical image display apparatuses represent the object in various ways depending on the type of the medical image display apparatus and the imaging method. Also, the characteristics of the acquired medical image vary depending on the imaging method and type of the medical image display apparatus. For example, cancer tissue may be easily identified in one medical image, while blood vessels may be easily identified in another.

Accordingly, there is a need for an apparatus that provides a medical image suited to the user's intention in consideration of the region of the image to be read.

Hereinafter, a medical image display apparatus according to one or another embodiment of the present invention, which can provide a medical image that facilitates a user's diagnosis of a predetermined region in the medical image, will be described in detail with reference to the accompanying drawings.

The medical image display apparatus according to one or another embodiment of the present invention may be any image processing apparatus capable of displaying, storing, and/or processing a medical image.
Specifically, the medical image display apparatus 100 according to one or another embodiment of the present invention may be provided to be included in a tomography apparatus such as the MRI apparatus 101 or the CT apparatus 102 described with reference to FIGS. 2 to 4. In this case, the medical image display apparatus 100 may include the communication unit 532 described with reference to FIG. 5.

In addition, the medical image display apparatus 100 according to one or another embodiment of the present invention may be included in the server 534, the medical apparatus 536, or an external device, that is, the portable terminal 538, connected through the network 501 to at least one of the tomography apparatuses such as the MRI apparatus 101 and the CT apparatus 102 described with reference to FIGS. 2 to 4. Here, the server 534, the medical apparatus 536, or the portable terminal 538 may be an image processing apparatus capable of displaying, storing, or processing at least one of an MRI image and a tomography image. For example, the medical image display apparatus according to one or another embodiment of the present invention may take the form of the server 534, the medical apparatus 536, or the portable terminal 538, and may be a Picture Archiving and Communication System (PACS) capable of displaying, storing, or processing at least one of an MRI image and a tomography image.

In addition to being included in the MRI apparatus 101 or the CT apparatus 102, the medical image display apparatus 100 according to one or another embodiment of the present invention may be included in, or connected to, any medical imaging apparatus/system that processes/reconstructs an image using data acquired by scanning an object.

The medical image display apparatus 100 according to one or another embodiment of the present invention may be implemented as a medical image registration apparatus that acquires a first medical image and a second medical image from two or more different medical apparatuses, for example, a first medical apparatus and a second medical apparatus, and displays an image (a third medical image) obtained by registering the first medical image and the second medical image.
FIG. 6 is a diagram illustrating a system including a first medical apparatus 610, a second medical apparatus 620, and a medical image registration apparatus 630 according to an embodiment of the present invention.

The first medical apparatus 610 and the second medical apparatus 620 generate a first medical image and a second medical image, respectively, and provide them to the medical image registration apparatus 630. The first medical image and the second medical image may be images generated by the same principle.

Alternatively, the first medical image and the second medical image may have different image modalities. That is, the first medical image and the second medical image may differ in generation method and principle.

The medical image registration apparatus 630 acquires the first medical image and the second medical image, respectively, and registers the first medical image with the second medical image. The image registered by the medical image registration apparatus 630 is displayed on the display unit 632.
In the embodiment of the present invention illustrated in FIG. 6, the first medical apparatus 610, the second medical apparatus 620, and the medical image registration apparatus 630 constitute independent apparatuses; however, according to another embodiment, the first medical apparatus 610 and the medical image registration apparatus 630 may be implemented as a single apparatus, or the second medical apparatus 620 and the medical image registration apparatus 630 may be implemented as a single apparatus. In addition, although the medical image registration apparatus 630 is illustrated as including a main body 631 and a display unit 632, the system may be implemented to include a separate display apparatus that receives and displays image data from the medical image registration apparatus 630.

That is, the medical image registration apparatus 630 of the present embodiment may be a computer system that can communicate with at least one medical apparatus and is included in another medical apparatus having a display, or may be implemented as a computer system that can communicate with two or more medical apparatuses and includes a display and a main body.
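As a non-limiting sketch of the registration principle mentioned above, the following example aligns two tiny 2D images by exhaustively searching integer translations that minimize the sum of squared differences (SSD). This is an assumed, simplified stand-in: practical registration of multi-modality medical images typically uses richer transforms and similarity measures (e.g., deformable models, mutual information), which the present description does not restrict to any particular algorithm.

```python
# Sketch: rigid (translation-only) registration by brute-force SSD search.

def ssd(a, b):
    """Sum of squared differences between two equal-sized 2D images."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def shift(img, dy, dx, fill=0):
    """Translate a 2D image by (dy, dx), filling exposed pixels."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

def register_translation(fixed, moving, search=2):
    """Return the (dy, dx) shift of `moving` that best matches `fixed`."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = ssd(fixed, shift(moving, dy, dx))
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]

# A bright pixel at (1,1) in the fixed image and at (2,2) in the moving image:
fixed = [[0] * 5 for _ in range(5)]; fixed[1][1] = 9
moving = [[0] * 5 for _ in range(5)]; moving[2][2] = 9
print(register_translation(fixed, moving))  # → (-1, -1)
```

Once the best transform is found, the moving image can be resampled into the fixed image's coordinates and the two can be displayed as a fused (third) image.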
In an embodiment, the first medical apparatus 610 may provide the first medical image of a volume of interest of the object in real time. For example, when deformation and displacement of an organ occur due to the physical activity of the object, the change appears in the first medical image in real time.

That is, according to the embodiment of the present invention illustrated in FIG. 6, the first medical apparatus 610 may be configured as an ultrasonography machine (103 of FIG. 1) that generates images in real time during an interventional medical procedure on a patient. For example, when deformation and displacement of an organ occur due to the physical activity of the object, the change appears in the medical image displayed on the display in real time. However, the first medical apparatus 610 may be another medical apparatus, such as an OCT apparatus, that provides images in real time.

The first medical apparatus 610 configured as an ultrasound apparatus generates an ultrasound image by radiating an ultrasound signal onto a region of interest of the object using a probe 611 and detecting the reflected ultrasound signal, that is, the ultrasound echo signal.
The probe 611 is the part that comes into contact with the object, and may include a plurality of transducer elements (hereinafter also referred to as transducers) (not shown) and a light source (not shown). When ultrasound in the range of several to several hundred MHz is transmitted from the probe 611 to a specific region inside the patient's body, the ultrasound is partially reflected from the boundaries between various tissues. The ultrasound is reflected by anatomical entities where density changes occur inside the body, for example, blood cells in blood plasma and small structures within organs.

As the transducer, various types of ultrasonic transducers may be used, for example, a magnetostrictive ultrasonic transducer using the magnetostrictive effect of a magnetic body, a piezoelectric ultrasonic transducer 118 using the piezoelectric effect of a piezoelectric material, or a capacitive micromachined ultrasonic transducer (cMUT) that transmits and receives ultrasound using the vibration of hundreds or thousands of microfabricated thin films.

The plurality of transducer elements may be arranged in a straight line (linear array) or in a curve (convex array). A cover covering the plurality of transducer elements may be provided on top of the transducer elements.

The light source serves to irradiate light into the object. As an example, at least one light source that generates light of a specific wavelength may be used as the light source. As another example, a plurality of light sources that generate light of different wavelengths may be used as the light source. The wavelength of the light generated by the light source may be selected in consideration of a target inside the object. Such a light source may be implemented with a laser diode (LD), a light-emitting diode (LED), a solid-state laser, a gas laser, an optical fiber, or a combination thereof.
The transducer provided in the probe 611 generates an ultrasound signal according to a control signal and radiates the generated ultrasound signal into the object. It then receives, that is, detects, the ultrasound echo signal reflected from a specific tissue (for example, a lesion) inside the object.

The reflected ultrasound waves vibrate the transducer of the probe 611, and the transducer outputs electrical pulses corresponding to these vibrations. These electrical pulses are converted into an image. When anatomical entities have different ultrasound reflection characteristics, for example, in a B-mode (brightness mode) ultrasound image, each anatomical entity appears with a different brightness value.
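The mapping from echo strength to B-mode brightness can be sketched, purely as a non-limiting illustration, as log compression of echo amplitudes into gray levels. Real scanners additionally perform envelope detection and time-gain compensation; the 40 dB dynamic range below is a typical textbook assumption, not a value specified by this description.

```python
# Sketch: log compression of echo amplitudes into 0-255 B-mode gray levels.
import math

def log_compress(amplitudes, dynamic_range_db=40.0):
    """Map echo amplitudes to gray levels over an assumed dynamic range."""
    peak = max(amplitudes)
    levels = []
    for a in amplitudes:
        if a <= 0:
            levels.append(0)
            continue
        db = 20.0 * math.log10(a / peak)              # 0 dB at the peak
        frac = max(0.0, 1.0 + db / dynamic_range_db)  # clip below the range floor
        levels.append(round(255 * frac))
    return levels

echoes = [0.001, 0.01, 0.1, 1.0]  # weak to strong reflectors
print(log_compress(echoes))
```

Stronger reflectors thus appear brighter, which is why anatomical entities with different reflection characteristics appear with different brightness values in a B-mode image.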
Ultrasound images may be classified into: a B-mode (brightness mode) image, which represents the magnitude of the ultrasound echo signal reflected from the object as brightness; a Doppler mode (also called D-mode or PW-Doppler mode) image, which represents an image of a moving object in spectral form using the Doppler effect; an M-mode (motion mode) image, which represents the movement of the object over time at a certain position; an elasticity mode image, which represents, as an image, the difference in response between when pressure is and is not applied to the object; and a C-mode (color mode) image, which expresses the velocity of a moving object in color using the Doppler effect. A Doppler image may include a Doppler image of a still image as well as a Doppler image of continuous images such as a moving picture, and may include both a Doppler image of a planar space (2D Doppler) and a Doppler image of a three-dimensional space (3D Doppler). In addition, Doppler images may include blood-flow Doppler images (also called color Doppler images), which represent the flow of blood, and tissue Doppler images, which represent the movement of tissue. In the case of a three-dimensional image, a three-dimensional ultrasound image may be generated by forming volume data from the signal received from the probe 611 and performing volume rendering on the volume data.
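The Doppler principle underlying the D-mode and color-mode images above can be illustrated numerically with the standard relation v = c · fd / (2 · f0 · cos θ). The following is a hedged sketch with assumed textbook values (5 MHz transmit frequency, 1540 m/s speed of sound in soft tissue, 60° beam-to-flow angle), not parameters of the apparatus described here.

```python
# Sketch: reflector velocity from a measured Doppler frequency shift.
import math

def doppler_velocity(f_shift_hz, f0_hz=5.0e6, c=1540.0, angle_deg=60.0):
    """Velocity (m/s) from the Doppler shift: v = c*fd / (2*f0*cos(theta))."""
    return c * f_shift_hz / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

# A 1.62 kHz shift at these assumed settings corresponds to roughly 0.5 m/s:
print(round(doppler_velocity(1.62e3), 3))
```

A color Doppler image essentially evaluates this relation per pixel and maps the sign and magnitude of the velocity to color.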
In an embodiment, the first medical apparatus 610 includes the probe 611 and an image processing apparatus 612 that performs processing so that an image is generated based on the ultrasound echo signal detected by the probe 611. The image processing apparatus 612 may be provided with an image processor that supports a plurality of modes and generates an ultrasound image corresponding to each mode. FIG. 6 illustrates, as an example, a case in which the image processing apparatus 612 is implemented as a computer main body and is connected by wire to the probe 611, which is a fixed terminal. The image processing apparatus 612 may further include a display unit that displays the ultrasound image.

According to another embodiment, the probe 611 may be implemented not only as a fixed terminal but also in the form of a mobile terminal (portable terminal) that the user can carry from place to place while holding it. When the probe 611 is implemented as a mobile terminal, the probe 611 may perform wireless communication with the image processing apparatus 612. Here, the wireless communication may include at least one of various wireless communication modules such as short-range communication of a predetermined frequency, Wi-Fi, Wi-Fi Direct, Ultra Wideband (UWB), Bluetooth, Radio Frequency (RF), Zigbee, wireless LAN, and Near Field Communication (NFC). Examples of the image processing apparatus 612, which performs processing so that an ultrasound image is generated based on the ultrasound echo signal received by the probe 611, include a smart phone, a smart pad such as a tablet, a smart TV, a desktop computer, a laptop computer, and a personal digital assistant (PDA).

In yet another embodiment, an image processor that generates ultrasound images corresponding to the plurality of modes may be provided inside the probe 611, and the image processing apparatus 612 may be implemented to receive the image generated by the probe 611 by wire or wirelessly and display it through the display unit.
The second medical apparatus 620 may generate, in non-real time, a second medical image of a volume of interest (VOI) of the object. The second medical apparatus 620 may have non-real-time characteristics compared with the first medical apparatus 610, and may provide the medical image registration apparatus 630 with a second medical image generated in advance, before the medical procedure.

In the present embodiment, the second medical apparatus 620 may be the CT apparatus 102 or the MRI apparatus 101 described with reference to FIGS. 2 to 4. The second medical apparatus 620 may also be implemented as an X-ray imaging apparatus, a single photon emission computed tomography (SPECT) apparatus, a positron emission tomography (PET) apparatus, or the like.

In the following embodiments, for convenience of description, it is assumed that the second medical image is an MR or CT image, but the scope of the present invention is not limited thereto.

The medical images captured by the first medical apparatus 610 or the second medical apparatus 620 may be three-dimensional images generated by accumulating two-dimensional cross sections. For example, the second medical apparatus 620 captures a plurality of cross-sectional images while changing the location and orientation of the cross section. When such cross-sectional images are accumulated, image data of a three-dimensional volume that three-dimensionally represents a specific part of the patient's body may be generated. This method of generating three-dimensional volume image data by accumulating cross-sectional images is called the multiplanar reconstruction (MPR) method. Similarly, the first medical apparatus 610 may generate three-dimensional volume image data by hand-sweeping the probe 611, or through a wobbler-type or 3D-array-type probe 611.
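The accumulation of cross sections into a volume, and the subsequent readout of a plane in a different orientation, can be sketched as follows. This is a non-limiting toy illustration with nested lists standing in for slice data; real MPR additionally handles slice spacing, interpolation, and patient orientation, none of which are modeled here.

```python
# Sketch: accumulate axial slices into a volume, then read out a coronal plane.

def stack_slices(slices):
    """Accumulate axial slices (each a 2D list indexed [y][x]) into volume[z][y][x]."""
    return [list(map(list, s)) for s in slices]

def coronal_plane(volume, y):
    """Extract the plane at row y across all slices: result indexed [z][x]."""
    return [sl[y] for sl in volume]

# Three 2x2 axial slices; each value encodes (slice, row, column) for clarity.
axial = [[[100 * z + 10 * y + x for x in range(2)] for y in range(2)]
         for z in range(3)]
volume = stack_slices(axial)
print(coronal_plane(volume, 1))  # row y=1 of every slice
```

The same volume can equally be resampled along a sagittal or oblique plane, which is the practical value of accumulating a full 3D volume rather than storing isolated cross sections.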
Although FIG. 6 describes the case in which the first medical image and the second medical image are generated by different types of medical apparatuses, the case in which the first medical image and the second medical image are images captured at different points in time by the same type of medical apparatus, for example, the CT apparatus 102, is also included in the scope of the present invention.

In the following description of the medical image display apparatus according to an embodiment of the present invention, the case in which the first medical image is a non-contrast medical image captured without administering a contrast agent and the second medical image is a contrast-enhanced image captured after administering a contrast agent will be described as an example.

A contrast agent administered to a patient has the problem of causing various side effects. For example, in mild cases the patient may feel numbness or a burning sensation, and the agent may cause urticaria, itching, vomiting, nausea, rash, and the like; in severe cases, the patient may even die.

In particular, patients with poor kidney function cannot use contrast agents except in unavoidable cases, and for patients requiring long-term treatment, the cost of using contrast agents is also difficult to overlook.

To minimize such contrast agent side effects, non-contrast images are mainly used for follow-up examination of lung cancer and simple diagnosis of lung lesions (such as bronchial disease and emphysema). Specifically, based on the ALARA (As Low As Reasonably Achievable) principle (an international regulation recommending that dose and contrast agent usage be minimized as much as possible) and the NCCN (National Comprehensive Cancer Network) guidelines, 80% or more of diagnoses are made with non-contrast images.
FIG. 7 is a diagram conceptually illustrating the lymph nodes and blood vessel distribution of the chest region, FIG. 8 is a diagram illustrating a contrast-enhanced CT image captured of the chest region, and FIG. 9 is a diagram illustrating a non-contrast CT image captured of the chest region.

The lymph nodes 701 illustrated in FIG. 7 are involved in recognizing pathogens in the human body (for example, inflammation, cancer cells, etc.) and triggering an immune response. Therefore, the degree of change in lymph node size and changes in the number and distribution of changed lymph nodes are important clinical judgment factors for diagnosis and treatment monitoring.

For example, when cancer cells develop or metastasize, the lymph nodes increase in size. Therefore, for the detection and diagnosis of lymph nodes, it may be advantageous to use a contrast-enhanced image, in which lymph nodes are relatively easy to distinguish from other structures, particularly the blood vessels 702.

As shown in FIG. 8, in a contrast-enhanced CT image captured after administering a contrast agent to the patient, the lymph node 703 and the pulmonary blood vessel 704 are displayed distinguishably. In comparison, in the non-contrast CT image of FIG. 9, it can be seen that it is not easy to distinguish lymph nodes from blood vessels in the region 705 where the lymph nodes are located.

However, the use of contrast agents is gradually being restricted due to the various side effects described above; in particular, contrast agents cannot be administered to patients with kidney disease, so diagnosis based on non-contrast medical images is sometimes unavoidable.

Accordingly, although lymph node region information is an important landmark for lung cancer diagnosis (metastasis, state changes, etc.) and lung lesion diagnosis, it is difficult to distinguish lymph node/blood vessel regions in non-contrast images, so early diagnosis of lymph-node-related diseases, or the appropriate time for their treatment, may be missed.
FIG. 10 is a block diagram illustrating the configuration of a medical image display apparatus 1000 according to an embodiment of the present invention, and FIG. 11 is a block diagram illustrating the configuration of the image processor 1030 of FIG. 10.

As shown in FIG. 10, the medical image display apparatus 1000 according to an embodiment of the present invention includes a controller 1010, a display unit 1020, an image processor 1030, a user input unit 1040, a storage unit 1050, and a communication unit 1060. However, not all of the illustrated components are essential components, and other general-purpose components may be further included in addition to the illustrated components.

When the medical image display apparatus 1000 is included in the MRI apparatus 101 illustrated in FIG. 2, at least a part of the medical image display apparatus 1000 may correspond to the operating unit 260. Specifically, the image processor 1030 and the display unit 1020 may correspond to the image processor 262 and the output unit 264 of FIG. 2, respectively. The controller 1010 may correspond to at least a part of the operating unit 260 and/or the display controller 248. Therefore, for the medical image display apparatus 1000, descriptions overlapping those of FIG. 2 are omitted.

In addition, when the medical image display apparatus 1000 is included in the CT apparatus 102 illustrated in FIGS. 3 and 4, the controller 1010, the display unit 1020, the image processor 1030, the user input unit 1040, and the storage unit 1050 may correspond to the controller 318, the display unit 330, the image processor 326, the user input unit 328, and the storage unit 324 of FIG. 4, respectively. Therefore, for the medical image display apparatus 1000, descriptions overlapping those of FIG. 3 or FIG. 4 are omitted.
또한, 의료영상 표시장치(1000)는 도 5에서 설명한 서버(534), 의료 장치(536), 휴대용 단말(538), 도 6에서 설명한 초음파 장치(610) 중 어느 하나에 포함될 수도 있다.In addition, the medical image display apparatus 1000 may be included in any one of the server 534, the medical apparatus 536, the portable terminal 538, and the ultrasound apparatus 610 described with reference to FIG. 6.
The display 1020 displays an application related to the operation of the medical image display apparatus. For example, the display 1020 may display menus or guidance needed for diagnosis using the medical apparatus. The display 1020 may also display images acquired during the diagnosis process and a user interface (UI) that helps the user operate the medical image display apparatus.
Although FIG. 10 illustrates a case in which a single display 1020 is provided in the medical image display apparatus 1000, the present invention is not limited thereto, and the apparatus may be implemented to include a plurality of displays, for example, a main display and a sub display.
In the present embodiment, the display 1020 displays a first image (first medical image) obtained by imaging an object that includes at least one anatomical object, and/or a third image (third medical image) produced by performing the registration process, described below, on the first image. The display 1020 may further display a fourth image (fourth medical image) that additionally marks an extended region of a lesion, described below, and may further display a second image (second medical image) that serves as a reference image for the first image.
Here, the first image is a medical image of the object and may be any medical image captured for disease diagnosis, such as a tomographic image (e.g., an MRI or CT image), an X-ray image, or an ultrasound image.
The image processor 1030 processes an image so that it can be displayed on the display 1020. Specifically, the image processor 1030 may process a signal acquired by imaging the object into image data displayable on the display 1020.
As a first method of producing a medical image, there is the method of photographing an object by irradiating it with rays, such as X-rays, as in X-ray imaging. This method images the object without distinguishing among imaging techniques or scan modes, and can image the object directly, without a separate reconstruction or computation step for the image to be acquired.
Second, there is the method of imaging an object by applying a variety of imaging techniques or scan modes, as in MRI or CT imaging. With this second method, images having different characteristics can be acquired even when the same part of the body is photographed, by exploiting the various parameters that can be adjusted when scanning the object. That is, by changing the scan mode according to the use or purpose, an image suited to that purpose can be obtained. In this method, the desired image is obtained by performing a separate reconstruction or computation step on the acquired data.
Here, the technique applied when scanning an object to capture a medical image is called a "scan protocol" or "protocol"; hereinafter, it is referred to as a "protocol". The image processor 1030 may generate a medical image by applying a predetermined protocol to the acquired image data.
The medical image display apparatus 1000 according to an embodiment of the present invention may generate computed or post-processed image data (a third image) using the image data (a first image) acquired by applying a protocol. In the present embodiment, the computation or post-processing includes a registration process, and the image generated thereby becomes the third image and/or the fourth image.
In the case of the MRI apparatus 101, an object is scanned by applying various protocols, and an image of the object is generated using the MR signals acquired accordingly. Hereinafter, the data acquired by scanning the object, for example MR signals or k-space data, is referred to as scan data, and the image of the object generated using the scan data is referred to as image data. The image data corresponds to the first image described above.
In the case of the CT apparatus 102, an object may be scanned by applying different protocols depending on whether a contrast medium is administered. Also in the case of the CT apparatus 102, the acquired data may be a sinogram or projection data, and the image data, that is, the first image, may be generated using the acquired scan data.
The user input unit 1040 is provided to receive commands from a user. The medical image display apparatus 1000 of the present embodiment receives, through the user input unit 1040, an input for operating the medical image display apparatus 1000, and in response, the first medical image, the second medical image, and/or the registered third medical image (or fourth medical image) acquired by the medical image display apparatus 1000 may be output through the display 1020.
The user input unit 1040 may include buttons, a keypad, switches, dials, or a user interface, i.e., a GUI, displayed on the display 1020, by which the user directly operates the medical image display apparatus 1000. In an embodiment of the present invention, the user input unit 1040 may include a touch screen provided on the display 1020.
In an embodiment, the medical image display apparatus 1000 may receive, through the user input unit 1040, a selection of at least one point in the medical image (first image) displayed on the display 1020. Here, the selected point may correspond to the lymph node/blood vessel region in the non-contrast CT image (first image) of FIG. 9, and in response to the user's selection, the display 1020 may display an image (third image) processed, by the registration process performed by the image processor 1030, so that lymph nodes and blood vessels are distinguishable at the selected point. The display 1020 may also enlarge and display the selected point.
The storage 1050 stores data, without limitation as to kind, under the control of the controller 1010. The storage 1050 is implemented as a nonvolatile storage medium such as a flash memory or a hard disk drive. The storage 1050 is accessed by the controller 1010, which reads, writes, modifies, deletes, and updates the data therein.
The data stored in the storage 1050 includes, for example, an operating system for driving the medical image display apparatus 1000, as well as various applications executable on the operating system, image data, and additional data.
The storage 1050 of the present embodiment may store various data related to medical images. Specifically, the storage 1050 stores at least one piece of image data generated by the medical image display apparatus 1000 by applying at least one protocol, and/or at least one piece of medical image data received from the outside. The storage 1050 may further store at least one piece of image data generated by performing the registration process on the image data. The image data stored in the storage 1050 is displayed by the display 1020.
The communicator 1060 includes wired/wireless network communication modules for communicating with various external devices. The communicator 1060 delivers commands/data/information/signals received from an external device to the controller 1010, and may also transmit commands/data/information/signals received from the controller 1010 to an external device.
According to the present embodiment, the communicator 1060 is built into the medical image display apparatus 1000; in an embodiment, however, it may be implemented in the form of a dongle or module that can be attached to and detached from a connector (not shown) of the medical image display apparatus 1000.
In another embodiment, the communicator 1060 may include an I/O port for connecting human interface devices (HIDs). The medical image display apparatus 1000 may transmit and receive image data to and from an external device connected by wire through the I/O port.
The communicator 1060 of the present embodiment may receive medical image data generated by another medical apparatus. Here, the other medical apparatus may be a medical apparatus of the same kind as the medical image display apparatus 1000, or a different kind of medical apparatus. For example, when the medical image display apparatus 1000 is a CT apparatus, the other medical apparatus may be another CT apparatus or, in some cases, an MRI apparatus or an ultrasound apparatus.
In an embodiment, the medical image display apparatus 1000 may be directly connected to the other medical apparatus through the communicator 1060. In another embodiment, the communicator 1060 may include a connection unit for connecting to an external storage medium in which medical images are stored.
The controller 1010 performs control operations for the various components of the medical image display apparatus 1000. For example, the controller 1010 controls the overall operation of the medical image display apparatus 1000 by directing the image processing/image registration processes performed by the image processor 1030 and by carrying out control operations in response to commands from the user input unit 1040.
The controller 1010 includes at least one processor. The at least one processor loads a program from a nonvolatile memory (ROM), in which the program is stored, into a volatile memory (RAM) and executes it.
The controller 1010 according to the present embodiment includes at least one general-purpose processor, such as a CPU (Central Processing Unit), an AP (Application Processor), or a microcomputer (MICOM), and may be implemented to perform the various operations of the medical image display apparatus 1000 by, for example, loading a program corresponding to a predetermined algorithm stored in ROM into RAM and executing it.
When the controller 1010 of the medical image display apparatus 1000 is implemented as a single processor, for example a CPU, the CPU may be provided to carry out the various functions the medical image display apparatus 1000 can perform: for example, directing the various image processing processes for producing the medical image displayed on the display 1020, such as selecting the protocol to be applied and controlling the imaging accordingly; responding to commands received through the user input unit 1040; and controlling wired/wireless network communication with external devices.
The processor may include a single core, dual cores, triple cores, quad cores, or multiples thereof. The processor may include a plurality of processors, for example, a main processor and a sub processor. The sub processor is provided to operate in a standby mode (hereinafter also referred to as a sleep mode), in which only standby power is supplied and the apparatus does not operate as the medical image display apparatus 1000.
The processor, ROM, and RAM included in the controller 1010 as described above may be interconnected through an internal bus.
In an embodiment of the present invention, when the medical image display apparatus 1000 is implemented as a laptop or desktop computer, the controller 1010 may further include a GPU (Graphics Processing Unit, not shown) provided in the main body for graphics processing. In another embodiment, when the medical image display apparatus 1000 is implemented as a portable terminal such as a smartphone or smart pad, the processor may include a GPU; for example, the processor may be implemented in the form of an SoC (System on Chip) in which a core and a GPU are combined.
Meanwhile, the controller 1010 may include a chip provided as a dedicated processor, for example an IC (integrated circuit) chip, for executing a program that performs a specific function supported by the medical image display apparatus 1000, for example a function of detecting the occurrence of an error in a predetermined component including the main processor.
In an embodiment, the controller 1010 may receive, through the user input unit 1040, a user command to execute a predetermined application serving as a platform on which medical images can be analyzed. The executed application may include an input area (2220 of FIG. 22), in which various user-selectable GUI buttons are displayed, and a display area (2210 of FIG. 22), in which a medical image is displayed.
Using the GUI of the application's input area, the user can load a medical image stored internally or externally, and the loaded medical image is displayed on the display 1020 through the application's display area. In addition, the user may input, in the executed application, a user command to register the first medical image and the second medical image.
In an embodiment of the present invention, the image processor 1030 may be implemented as a medical image analysis application, which is a software component driven by the controller 1010, itself a hardware component including at least one processor.
That is, the operations of the image processor 1030 described below are carried out by executing software driven by the controller 1010. Accordingly, the various operations performed by the image processor 1030 may also be regarded as being performed by the controller 1010, that is, by the at least one processor.
The controller 1010 of the medical image display apparatus 1000 according to an embodiment of the present invention controls the image processor 1030 to perform an image registration process on the non-contrast medical image, that is, the first medical image. Here, the image processor 1030 may perform the image registration of the first medical image using the first medical image and the second medical image.
The second medical image is a contrast-enhanced medical image acquired at another point in time, and serves as the reference image for the first medical image. For example, the contrast-enhanced medical image is an image of the object captured at a predetermined point in the past, and may be stored in another medical apparatus, a server, or the like and loaded into the medical image display apparatus 1000 through the communicator 1060, or may be stored in advance in the internal or external storage 1050.
In an embodiment, the contrast-enhanced medical image may be a medical image previously captured of the same object as the first medical image, that is, the same patient. The user may select a contrast-enhanced medical image usable as the second medical image by using history information about the patient, and may select at least one contrast-enhanced medical image as the second medical image. The second medical image may also be an image generated using a plurality of contrast-enhanced medical images previously captured of the same object.
In another embodiment, the contrast-enhanced medical image may be a standardized medical image. For example, it may be a standardized medical image generated, using the information stored in a medical image database in which brain CT images of a plurality of objects have been accumulated, from contrast-enhanced medical images captured of objects whose conditions, such as age, gender, and degree of disease progression, are similar to those of the object of the first medical image.
That is, although the embodiment described below takes image registration using a single contrast-enhanced medical image as an example, the case in which a plurality of contrast-enhanced medical images are used is not excluded from the scope of the present invention.
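As an illustration only (the present disclosure does not specify how a composite or standardized reference image is constructed), a pixel-wise average is one plausible way to combine several contrast-enhanced images of the same object into a single second medical image. The `average_images` helper, the toy image sizes, and the brightness values below are hypothetical:

```python
def average_images(images):
    """Pixel-wise mean of several same-sized grayscale images (nested lists).
    Averaging is only one plausible way to build a composite/standardized
    reference image; the actual combining method is not specified here."""
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / len(images)
             for c in range(cols)]
            for r in range(rows)]

# Two hypothetical 2x2 contrast-enhanced scans of the same object.
scans = [[[100, 110], [120, 130]],
         [[102, 108], [118, 134]]]
print(average_images(scans))   # [[101.0, 109.0], [119.0, 132.0]]
```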
The image processor 1030 segments the second medical image to extract at least one anatomical object. Specifically, the image processor 1030 may extract, from the second medical image serving as the reference image for the first medical image, reference region information corresponding to the at least one anatomical object.
Here, there may be a plurality of anatomical objects, and accordingly the image processor 1030 may further extract from the second medical image a region corresponding to a first object (hereinafter also referred to as a first anatomical object) and a region corresponding to a second object (hereinafter also referred to as a second anatomical object) different from the first object.
In one embodiment, the first object may be a blood vessel and the second object may be a lymph node. In another embodiment, the first object may be a blood vessel and the second object may be a bronchus.
The image processor 1030 registers the first medical image and the second medical image using the reference region information extracted from the second medical image. Here, the registered image is displayed on the display 1020 as the third medical image, in which the detected anatomical object is displayed so as to be distinguished from the other regions, that is, the regions that are not the detected anatomical object.
In performing the image registration, the image processor 1030 may use a geometric relationship between the anatomical objects of the first medical image and those of the second medical image, where the geometric relationship may include a vector representing the relative positional relationship of the anatomical objects.
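The distinguishable display of a detected anatomical object can be sketched as a simple mask overlay onto a grayscale image. This is a hypothetical illustration only: the `highlight_region` helper, the highlight color, and the toy image are assumptions, not the apparatus's actual rendering:

```python
def highlight_region(gray, mask, color=(255, 0, 0)):
    """Convert a grayscale image (nested lists) to RGB, painting pixels
    flagged True in `mask` with `color` so that the detected anatomical
    object stands out from the rest of the image."""
    return [[color if mask[r][c] else (v, v, v)
             for c, v in enumerate(row)]
            for r, row in enumerate(gray)]

# Toy 2x2 image and a mask marking one detected-object pixel.
gray = [[10, 20],
        [30, 40]]
mask = [[False, True],
        [False, False]]
rgb = highlight_region(gray, mask)
print(rgb[0][1])   # (255, 0, 0)  -> detected-object pixel, rendered in color
print(rgb[1][0])   # (30, 30, 30) -> background pixel, kept as grayscale
```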
Registration of the medical images includes a process of mapping the coordinates of the first medical image and the second medical image to each other. In the present embodiment, the first medical image and the second medical image may each be a medical image generated using a coordinate system according to DICOM (Digital Imaging and Communications in Medicine).
Through the registration process of the first medical image and the second medical image, the image processor 1030 calculates a coordinate transformation function that transforms, or inversely transforms, the coordinates of the second medical image into the coordinates of the first medical image. Here, the coordinate transformation function may include a first transformation, calculated in the homogeneous registration process described later, in which the intrinsic characteristics of the anatomical object at the earlier point in time are preserved, and a second transformation, calculated in the heterogeneous registration process, in which the two sets of image information are brought into complete agreement.
The image processor 1030 may synchronize the coordinates and views of the first medical image and the second medical image using the coordinate transformation function.
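As a minimal sketch of such a coordinate transformation function, the mapping between the two images' coordinate systems can be modeled as a 2-D rigid transform (rotation plus translation) with a forward and an inverse mapping. The rigid model, the angle, and the offsets below are illustrative assumptions only; the actual first and second transformations may be more general:

```python
import math

def make_rigid_transform(theta_deg, tx, ty):
    """Return (forward, inverse) coordinate mappings between two images,
    modeled here as a 2-D rigid transform. The rigid model is an
    illustrative assumption, not the apparatus's actual transformation."""
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)

    def forward(x, y):
        # second-image coordinates -> first-image coordinates: p' = R p + t
        return (c * x - s * y + tx, s * x + c * y + ty)

    def inverse(x, y):
        # first-image coordinates -> second-image coordinates: p = R^T (p' - t)
        xp, yp = x - tx, y - ty
        return (c * xp + s * yp, -s * xp + c * yp)

    return forward, inverse

fwd, inv = make_rigid_transform(30.0, 12.0, -5.0)
p_first = fwd(100.0, 40.0)     # map a point from the second image to the first
p_second = inv(*p_first)       # the inverse mapping recovers the original point
print(round(p_second[0], 6), round(p_second[1], 6))
```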
In an embodiment, the registered image may be an image obtained by transforming the first medical image. In another embodiment, the registered image may be a fusion image in which the first medical image and the second medical image are fused. The display 1020 may display the first medical image, and may display the third medical image and/or the fourth medical image generated by registering the first medical image with the second medical image.
FIG. 12 is a diagram illustrating a first medical image 1210 according to an embodiment of the present invention.
In the present embodiment, the first medical image 1210 is a comparatively recent image of the object. The first medical image 1210 is a non-contrast image captured without a contrast medium being administered to the object, and may be, for example, a brain CT image as shown in FIG. 12. In another embodiment, the first medical image 1210 may be a captured image that can be displayed in real time, for example, an ultrasound image.
As shown in FIG. 12, since the first medical image 1210 is a non-contrast CT image, the first object and the second object, that is, the blood vessels and the lymph nodes, are not easily distinguished, i.e., identified, within the image 1210.
The user may use the user input unit 1040 to select, in the first medical image 1210, a region 1211 in which blood vessels and lymph nodes are expected to be located. As shown in FIG. 12, the controller 1010 may control the display 1020 to enlarge and display the selected region 1211. Even in the enlarged region 1211, the blood vessels and lymph nodes cannot be identified.
FIG. 13 is a diagram illustrating a second medical image 1310 according to an embodiment of the present invention, and FIG. 14 is a diagram illustrating a second medical image 1410 from which objects have been extracted.
The second medical image 1310 is a contrast-enhanced image captured with a contrast medium administered to the object, and may be, for example, a brain CT image as shown in FIGS. 13 and 14.
Referring to FIG. 11, the image processor 1030 includes a first object extractor 1031, a second object extractor 1032, and a registration unit. The registration unit includes a coordinate converter 1033, a homogeneous registration unit 1034, and a heterogeneous registration unit 1035.
In one embodiment, the image processor 1030 is illustrated as including the first object extractor 1031 and the second object extractor 1032 so that two anatomical objects are extracted, but the present invention is not limited thereto. That is, a greater number of anatomical objects, for example three or more, may be extracted and displayed identifiably in the third medical image.
In another embodiment, the image processor 1030 may be provided with a single object extractor, which extracts the region of the first object from the second medical image and displays the region of the first object so as to be distinguished from the other regions, excluding the first object. For example, when the first object region is a blood vessel region, the blood vessel region and the non-vessel region are displayed separately. Here, the non-vessel region includes a lymph node region; in another embodiment, the non-vessel region may include a bronchial region.
The first object extractor 1031 and the second object extractor 1032 extract, from the second medical image, region information of the first anatomical object and region information of the second anatomical object, respectively. The extracted regions of the first and second objects serve as reference regions, and the extracted region information is used as reference-region information by the registration unit.
The first object extractor 1031 extracts the region corresponding to the first object from the second medical image using the anatomical features of the first object, and the second object extractor 1032 extracts the region corresponding to the second object from the second medical image using the anatomical features of the second object.
The first object extractor 1031 may use the brightness value of each pixel included in the second medical image to determine the region of the first object.
Specifically, the first object extractor 1031 may detect points having brightness values within a preset first range in the contrast-enhanced second medical image, and determine the region corresponding to the first object as the region including the detected points. In another embodiment, when a specific point in the second medical image is selected through the user input unit 1040, the first object extractor 1031 may detect points whose brightness difference from the selected point, that is, whose contrast, is equal to or less than a first threshold, thereby finding points having anatomical features similar to the selected point, and determine the first object region as the region including the detected points.
The second object extractor 1032 may detect points having brightness values within a preset second range in the contrast-enhanced second medical image, and determine the region corresponding to the second object as the region including the detected points. In another embodiment, when a specific point in the second medical image is selected through the user input unit 1040, the second object extractor 1032 may detect points whose brightness difference from the selected point, that is, whose contrast, is equal to or less than a second threshold, thereby finding points having anatomical features similar to the selected point, and determine the second object region as the region including the detected points.
The first range and the second range of brightness values may be preset in correspondence with the anatomical features of the first and second objects, respectively. Likewise, the first threshold and the second threshold may be preset in correspondence with the anatomical features of the first and second objects, or, in some cases, may be set to the same value.
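The window-based and seed-based detection steps described above can be sketched as follows. This is a minimal illustration using NumPy; the array values, window bounds, and contrast threshold are hypothetical and not those of the disclosed apparatus.

```python
import numpy as np

def extract_region_by_window(image, lo, hi):
    """Return a boolean mask of pixels whose brightness lies within [lo, hi]."""
    return (image >= lo) & (image <= hi)

def extract_region_by_seed(image, seed, max_contrast):
    """Return a mask of pixels whose brightness differs from the brightness of
    the user-selected seed pixel by at most max_contrast."""
    seed_value = int(image[seed])
    return np.abs(image.astype(np.int32) - seed_value) <= max_contrast

# Toy 4x4 "contrast-enhanced" slice (hypothetical intensity values).
slice_ = np.array([[ 10,  12, 200, 205],
                   [ 11, 198, 202,  13],
                   [ 90,  95,  14,  15],
                   [ 92,  10,  11,  12]], dtype=np.uint8)

vessel_mask = extract_region_by_window(slice_, 190, 255)  # first object (e.g., vessels)
node_mask = extract_region_by_seed(slice_, (2, 0), 10)    # second object, seeded at (2, 0)
```

Each mask can then be overlaid on the displayed image to mark the corresponding reference region.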
As shown in FIG. 14, the controller 1010 may control the display unit 1020 to identifiably display the first object region 1411 and the second object region 1412, which are the reference regions extracted from the second medical image. Using the displayed image 1410 of FIG. 14, the user may identify the vessel region 1411 corresponding to the first object and the lymph-node region 1412 corresponding to the second object, and use them for diagnosis.
The information on the first object region and the information on the second object region extracted by the first object extractor 1031 and the second object extractor 1032 are transferred to the registration unit.
The registration unit registers the first medical image and the second medical image based on a predetermined algorithm. The registration unit may register the first and second medical images using the reference-region information, that is, the information on the first object region received from the first object extractor 1031 and the information on the second object region received from the second object extractor 1032.
That is, the first object region 1411 and the second object region 1412 segmented from the second medical image are matched to the first object region and the second object region of the first medical image, respectively, through the image-registration process of the registration unit in the image processor 1030.
FIG. 15 is a diagram for describing an image-registration process according to the present embodiment.
In the medical image display apparatus 1000 according to an embodiment of the present invention, image registration includes, as shown in FIG. 15, a process of transforming different sets of image data of the same captured scene, namely I_f and I_m, into one coordinate system, and is implemented by optimization algorithms that maximize the similarity between the images subject to registration (I_f' and I_m) while minimizing the cost.
For example, image registration includes a process of finding, using predetermined transformation-model parameters, a final parameter P_final that maximizes the result of a similarity measure as in Equation 1 below, or minimizes the result of a cost function as in Equation 2. Here, the process of finding the final parameter may include homogeneous registration and heterogeneous registration, which will be described later.
[Equation 1]

P_final = argmax_P S(I_f, I_m; P)

[Equation 2]

P_final = argmin_P C(I_f, I_m; P)
Here, I_f is a fixed image, for example the first medical image, which is a non-contrast image, and I_m is a moving image, for example the second medical image, which is a contrast-enhanced image. Further, S is the similarity measure, C is the cost function, and P is the parameter set of the transformation model.
Transformation-model parameters usable in the above embodiments of the present invention include rigid transformation, affine transformation, thin-plate-spline free-form deformation (TPS FFD), B-spline FFD, elastic models, and the like.
Further, the result of the cost function may be determined by the weights assigned to the similarity (or dissimilarity) measure and the regularization metric, respectively.
The similarity or dissimilarity measures include Mutual Information (MI), Normalized Mutual Information (NMI), gradient magnitude, gradient orientation, Sum of Squared Differences (SSD), Normalized Gradient-vector Flow (NGF), Gradient NMI (GNMI), and the like. The regularization metrics include volume regularization, diffusion regularization, curvature regularization, local rigidity constraints, and the like.
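As a hedged sketch of how such a weighted cost might be composed, the example below combines one dissimilarity measure from the list (SSD) with one regularizer (a diffusion-style smoothness penalty on a displacement field) through per-term weights. The particular choices and weight values are illustrative assumptions, not the actual configuration of the apparatus.

```python
import numpy as np

def ssd(fixed, moving):
    """Sum of Squared Differences: a dissimilarity measure (lower = more similar)."""
    return float(np.sum((np.asarray(fixed, float) - np.asarray(moving, float)) ** 2))

def diffusion_regularizer(displacement):
    """Diffusion-style regularizer: penalize non-smooth displacement fields
    via the squared spatial gradients of the field."""
    grads = np.gradient(np.asarray(displacement, float))
    return float(sum(np.sum(g ** 2) for g in grads))

def cost(fixed, moving, displacement, w_sim=1.0, w_reg=0.1):
    """Weighted sum of a dissimilarity term and a regularization term."""
    return w_sim * ssd(fixed, moving) + w_reg * diffusion_regularizer(displacement)

fixed = np.array([[0., 1.], [2., 3.]])
moving = np.array([[0., 1.], [2., 4.]])  # differs from fixed in one pixel by 1
no_warp = np.zeros((2, 2))               # zero displacement field: perfectly smooth
c = cost(fixed, moving, no_warp)         # SSD contributes 1.0, regularizer 0.0
```

Changing w_sim and w_reg trades image agreement against deformation smoothness, which is the role the weights play in the cost described above.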
The coordinate transformer 1033 maps the coordinate systems of the first medical image and the second medical image to each other. Here, coordinate-system mapping aligns the coordinate system of the first medical image with that of the second medical image. For example, the coordinate transformer 1033 may align the coordinate system of the second medical image so that the first anatomical object (the reference region) of the second medical image is arranged along the direction in which the first anatomical object of the first medical image is arranged. Here, the coordinate transformer 1033 may rotate or translate the second medical image within a range in which the alignment between the first anatomical objects of the first and second medical images is not disturbed.
In an embodiment of the present invention, the image processor 1030 sequentially performs homogeneous registration and heterogeneous registration on the first and second medical images whose coordinate systems have been aligned through the coordinate transformer 1033.
FIG. 16 conceptually illustrates the homogeneous-registration process, and FIG. 17 conceptually illustrates the heterogeneous-registration process.
Homogeneous registration, as shown in FIG. 16, matches the moving image to the fixed image while preserving the image characteristics (shape) of the moving image.
Heterogeneous (in-homogeneous) registration, as shown in FIG. 17, deforms the image characteristics (shape) of the moving image so that it completely matches the fixed image.
In one embodiment, the homogeneous registration unit 1034 and the heterogeneous registration unit 1035 compute the cost function through the transformation process between the coordinate-matched images subject to registration (I_f' and I_m), and repeatedly update the parameter P based on the computed cost function, thereby finding the final parameter P_final that minimizes the result of the cost function.
Here, homogeneous registration and then heterogeneous registration may be performed by gradually changing the weights assigned to the similarity (or dissimilarity) measure and the regularization metric, and the weight change may be carried out in a direction of gradually increasing the degree of freedom of the second medical image used as the moving image.
That is, although FIG. 11 illustrates, for convenience, that the image processor 1030 includes the homogeneous registration unit 1034 and the heterogeneous registration unit 1035, homogeneous and heterogeneous registration are not completely separate processes: among the iterations of updating P while changing the weights, some earlier iterations correspond to homogeneous registration, and some later iterations, continuing therefrom, correspond to heterogeneous registration.
In addition, although FIG. 11 shows the processes being performed repeatedly until heterogeneous registration is completed even after homogeneous registration is completed, the present invention may be implemented such that only homogeneous registration is performed and heterogeneous registration is not performed.
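One conceivable way to realize the gradual weight change described above is a decaying schedule for the regularization weight: a high weight early on preserves the moving image's shape (the homogeneous phase), and a low weight later allows increasing deformation freedom (the heterogeneous phase). The geometric schedule below is only an illustrative assumption, not the schedule used by the disclosed apparatus.

```python
def regularization_schedule(n_iters, w_start=10.0, w_end=0.1):
    """Yield one regularization weight per iteration, decaying geometrically.
    Early iterations (high weight) keep the moving image near-rigid, i.e.
    homogeneous registration; late iterations (low weight) allow free
    deformation, i.e. heterogeneous registration."""
    ratio = (w_end / w_start) ** (1.0 / max(n_iters - 1, 1))
    w = w_start
    for _ in range(n_iters):
        yield w
        w *= ratio

weights = list(regularization_schedule(5))  # monotonically decreasing weights
```

Stopping the schedule early, before the weight has decayed, corresponds to performing only the homogeneous phase.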
FIG. 18 is a flowchart illustrating the steps of the registration process in an embodiment of the present invention.
In one embodiment, the image processor 1030 may induce different results in homogeneous and heterogeneous registration by designing the transformation model and the regularization metric constituting the cost function while continually updating P in the algorithm of FIG. 18. In this process, known models such as a rigid global model, a non-rigid global model, a rigid local model, and a non-rigid local model are used.
Referring to FIG. 18, the image processor 1030 first initializes the transformation-model parameter P (S1801).
Then, the second medical image I_m is transformed so as to be aligned with the coordinate system of the first medical image I_f (S1803). In one embodiment, the coordinate-system mapping may use a coordinate system according to an affine space, in which case the process of S1803 is also called affine registration.
Next, the cost function C is calculated using the pixels in the overlapped regions of the second medical image I_m' transformed in S1803 and the first medical image I_f (S1805). Here, the result of the cost function C is determined using the similarity (or dissimilarity) measure and a regularization metric based on prior information, for example by the sum of the weights assigned to the similarity measure and the regularization metric, respectively. The overlapped regions may be, for example, regions corresponding to at least one anatomical object.
The image processor 1030 determines whether the result of the cost function C calculated in S1805 is minimal (S1807).
The transformation-model parameter P is updated according to the determination result of S1807 (S1809).
Based on the determination result of S1807, the image processor 1030 repeatedly performs the processes of S1803 to S1807 until the result of the cost function becomes minimal. This is the process of homogeneous and heterogeneous registration, that is, the process of finding optimization algorithms suited to each.
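The loop of steps S1801 to S1809 can be sketched generically as below. The translation-only transform and the greedy coordinate-descent update are toy stand-ins, assumed purely for illustration, for the affine/FFD transformation models and the optimization algorithms named in this disclosure.

```python
import numpy as np

def transform(image, p):
    """Translate an image by integer offsets: a toy stand-in for the
    affine / free-form-deformation transformation models."""
    return np.roll(np.roll(image, int(p[0]), axis=0), int(p[1]), axis=1)

def register(fixed, moving, cost_fn, n_iters=50, step=1):
    """Mirror FIG. 18: initialize P (S1801), transform (S1803), compute the
    cost (S1805), test whether it is minimal (S1807), update P (S1809)."""
    p = np.zeros(2)                                   # S1801
    best = cost_fn(fixed, transform(moving, p))       # S1803-S1805
    for _ in range(n_iters):
        improved = False
        for delta in ([step, 0], [-step, 0], [0, step], [0, -step]):
            candidate = p + np.array(delta)           # S1809: update P
            c = cost_fn(fixed, transform(moving, candidate))
            if c < best:                              # S1807
                best, p, improved = c, candidate, True
        if not improved:                              # cost no longer decreases
            break
    return p, best

fixed = np.zeros((8, 8))
fixed[2:4, 3:5] = 1.0
moving = np.zeros((8, 8))
moving[3:5, 4:6] = 1.0                                # fixed shifted by (1, 1)
ssd_fn = lambda a, b: float(np.sum((a - b) ** 2))
p_final, c_final = register(fixed, moving, ssd_fn)
```

With this toy input, the loop recovers the translation (-1, -1) that brings the moving block back onto the fixed block, at which point the cost reaches its minimum and the iteration stops.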
Through the above processes, the homogeneous registration unit 1034 obtains first transformation information as an optimized algorithm in which the intrinsic characteristics of the lymph-node and vessel information of the previous time point are maintained.
In addition, the heterogeneous registration unit 1035 obtains second transformation information as an optimized algorithm in which the two sets of image information completely coincide. In one embodiment, heterogeneous registration may be a quantification process that tracks the change of the second anatomical object, for example a lymph node, and according to the quantification, the degree of change of the lymph node may be quantified so as to be displayable in the medical image.
Here, whether the images I_f' and I_m coincide may be evaluated through the similarity (or dissimilarity) measure, and the weights assigned to the similarity (or dissimilarity) measure and the regularization metric may serve as the factor distinguishing homogeneous registration from heterogeneous registration.
In one embodiment, the image processor 1030 generates the third medical image from the first medical image according to the homogeneous-registration process described above, and may further generate a fourth medical image from the third medical image according to the heterogeneous-registration process.
The controller 1010 controls the display unit 1020 to display the third medical image and/or the fourth medical image generated by the image processor 1030.
FIG. 19 shows a third medical image 1910 according to an embodiment of the present invention, FIG. 20 shows a fourth medical image 2010, and FIG. 21 is an enlarged view of some object regions of FIG. 20.
As shown in FIG. 19, the controller 1010 may control the display unit 1020 to identifiably display the first object region 1911 and the second object region 1912 in the third medical image generated by homogeneous registration. Accordingly, the user may identify the vessel region 1911 corresponding to the first object and the lymph-node region 1912 corresponding to the second object, which were not identifiable in FIG. 12, and use them for diagnosis.
Here, the controller 1010 may cause the display unit 1020 to display the region of a detected anatomical object distinguishably from the non-object regions by at least one of color, pattern, pointer, highlight, and animation effect. When there is a plurality of detected anatomical-object regions, a different color, pattern, pointer, highlight, or animation effect may be applied to each region, and combinations of color, pattern, pointer, highlight, and animation effects may also be applied to the plurality of regions. For example, the first object region 1911 may be distinguished by color and the second object region 1912 by a pattern, or the first object region 1911 may be given a predetermined pattern and a pointer so as to be displayed distinguishably from the other regions; various such modifications are possible.
That is, although FIG. 19 illustrates the detected first object region 1911 and second object region 1912 as distinguished by patterns, various embodiments in which they are visually distinguishable by the user are applicable.
Here, the pattern includes a plurality of horizontal lines, vertical lines, diagonal lines in a predetermined direction, dot patterns of various shapes including circles, wave patterns, and the like. The pointer includes a solid line or dotted lines of various forms displayed along the circumference of the detected region, and the pointer may be displayed brighter than the surrounding region. The highlight includes displaying the brightness of the detected region differently from, for example brighter than, the other regions. The animation effect applies various visual effects to the detected region, such as blinking at predetermined time intervals or gradually brightening/darkening.
The distinguishing display of the anatomical-object regions 1911 and 1912 as shown in FIG. 19 can be activated or deactivated by user selection. That is, the user may manipulate the user input unit 1040 to make a user input that activates the function of displaying the first region 1911 and the second region 1912 distinguishably by color or the like.
To this end, user selection may be provided in various ways: a user interface, i.e. a GUI, for selecting whether to activate the object distinction may be displayed on the display unit 1020, or the user input unit 1040 may include a toggle switch assigned to the object-distinction function.
Furthermore, the level (i.e., degree or intensity) of the distinguishing display of the detected anatomical objects may be adjusted through the user input unit 1040. That is, the medical image display apparatus 1000 of the present invention may be provided so that the anatomical objects can be distinguishably displayed in various ways according to the user's preference and taste.
Meanwhile, as shown in FIG. 20, in the fourth medical image 2010 generated by performing registration up to the heterogeneous stage, not only are the first object region 2011 and the second object region 2012 identifiably displayed, but the change of the second object region 2012 relative to a previous time point may also be tracked and displayed. Here, the previous time point may be the time point at which the second medical image was captured.
Accordingly, by displaying the lesion-extension region 2014 within the lymph-node region 2012, the user's diagnosis is made easier.
The user may issue a command to enlarge and display a predetermined region using the user input unit 1040. In response, as shown in FIG. 21, the vessel region 2111 corresponding to the first object and the lymph-node region 2112 corresponding to the second object are identifiably displayed in the fourth medical image 2110, and within the lymph-node region 2112 the existing lesion region 2113 and the lesion-extension region 2114 are distinguishable, so that the user can determine the extent to which the lesion has extended and use this for diagnosis.
The controller 1010 may control the display unit 1020 to display the existing lesion regions 2013 and 2113 and the lesion-extension regions 2014 and 2114 of FIGS. 20 and 21 distinguishably by at least one of color, pattern, pointer, highlight, and animation effect. Here, the controller 1010 may apply a combination of color, pattern, pointer, highlight, and animation effects to the existing lesion regions 2013 and 2113 or the lesion-extension regions 2014 and 2114. For example, the existing lesion regions 2013 and 2113 may be distinguished by a pattern and the lesion-extension regions 2014 and 2114 by a pointer, or a predetermined pattern and highlight may be given to the lesion-extension regions 2014 and 2114 so that they are displayed distinguishably from the other regions; various such modifications are possible.
The distinguishing display of the lesion-extension regions 2014 and 2114 as shown in FIGS. 20 and 21 can be activated or deactivated by user selection. That is, the user may manipulate the user input unit 1040 to make a user input that activates the function of displaying the lesion-extension regions 2014 and 2114 distinguishably by color or the like.
To this end, user selection may be provided in various ways: a user interface, i.e. a GUI, for selecting whether to activate the extension-region distinction may be displayed on the display unit 1020, or the user input unit 1040 may include a toggle switch assigned to the extension-region-distinction function. By selecting to activate/deactivate the extension-region distinction, the user can more easily grasp the extent of lesion extension.
Furthermore, the display level (i.e., degree or intensity) of each of the detected existing lesion regions 2013 and 2113 and lesion-extension regions 2014 and 2114 may be adjusted through the user input unit 1040. That is, the medical image display apparatus 1000 of the present invention may be provided so that distinguishing display within an anatomical object is possible in various ways according to the user's preference and taste.
FIG. 22 illustrates a screen displayed according to the execution of an application having a medical-diagnosis function in the medical image display apparatus according to an embodiment of the present invention.
The medical image 2210 shown in FIG. 22 displays the result of the image-registration process. On the left of the display region, in which the first anatomical object 2211, the second anatomical object 2212, and the existing lesion region 2213 and lesion-extension region 2214 within the second anatomical object 2212 are distinguishably displayed, an input region 2220 including a user-selectable user interface, i.e. various GUIs, is located.
In a state in which the first medical image is displayed in the display region 2210 of the executed application, the user may select a predetermined button of the user interface of the input region 2220 to load the reference image used as the second medical image and use it for the registration of the first medical image.
That is, various medical images, including the images shown in FIGS. 12 to 14 and FIGS. 19 to 21, may all be displayed in the display region 2210 of FIG. 22. Furthermore, by dividing the display region 2210, two or more images, including those of FIGS. 12 to 14 and FIGS. 19 to 21, may be displayed in the horizontal and/or vertical direction so as to be comparable.
Furthermore, when the display unit 1020 includes a plurality of displays, for example a main display and a sub-display, two or more images may be displayed for comparison in various combinations.
FIGS. 23 to 26 illustrate various examples in which the medical image display apparatus 1000 according to embodiments of the present invention uses image registration for diagnosis.
In the embodiment of the present invention shown in FIG. 23, the controller 1010 of the medical image display apparatus 100 acquires a first medical image (a non-contrast medical image) 2301 and a second medical image (a contrast-enhanced medical image acquired at another point in time) 2311, as described above, and generates a fusion display in which the two images are registered.
The fusion display generation process includes image registration 2302 between the non-contrast medical image and the contrast-enhanced medical image acquired at another point in time, transformation and propagation 2303 of the image generated by the registration, and region correction 2304.
Here, predetermined anatomical objects, namely lymph node and blood vessel regions, are segmented 2312 from the contrast-enhanced medical image acquired at another point in time, and the segmentation result may be used in the transformation and propagation 2303, which maps the coordinates of the two medical images to each other, and in the region correction 2304. When the region correction 2304 is completed, the resulting fused image is displayed as a registered image 2305.
As the registration, transformation and propagation, and correction of the images are performed sequentially, quantification 2313 is performed to compare changes of the lymph nodes in the currently captured non-contrast medical image against the contrast-enhanced medical image acquired at the other point in time, and the quantification result is displayed.
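As a rough, hedged illustration of the "transformation and propagation" step described above (the patent does not disclose a concrete algorithm; the translation-only transform and all names below are assumptions), segmentation labels from the contrast-enhanced image can be mapped into the coordinate frame of the non-contrast image as follows:

```python
def propagate_labels(mask, offset):
    """Propagate a binary segmentation mask (e.g. lymph node / vessel
    labels from the contrast-enhanced image) into the coordinate frame
    of the non-contrast image, assuming the registration recovered a
    simple (dy, dx) integer translation. Hypothetical sketch only."""
    dy, dx = offset
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = mask[y][x]
    return out

mask = [[0, 1],
        [0, 0]]
print(propagate_labels(mask, (1, 0)))  # label moves down one row
```

In practice the recovered transform would be a full rigid or deformable mapping rather than a translation; the point of the sketch is only that labels follow the coordinate mapping between the two images.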
In another embodiment shown in FIG. 24, the controller 1010 of the medical image display apparatus 100 acquires a first medical image (a non-contrast medical image) 2401 and a second medical image (a contrast-enhanced medical image acquired at another point in time) 2411 and generates a fusion display in which the two are registered. In this process, data is loaded from a medical image database 2421 and machine learning is performed 2422, the result of which may further be used in the region correction 2404.
Various information is stored in the medical image database. From the stored information, the controller 1010 classifies data (including images) acquired under conditions similar to those of the object (the object's age, sex, degree of lesion progression, etc.), and may perform machine learning that predicts data through a training process using the classified data.
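The classify-then-train idea above can be sketched as follows. The patent does not specify a record format or a learning algorithm, so the dictionary fields and the toy nearest-neighbour predictor here are assumptions for illustration only:

```python
def select_similar(records, age, sex, tolerance=5):
    """Filter database records to those acquired under conditions
    similar to the current object (same sex, age within a tolerance)."""
    return [r for r in records
            if r["sex"] == sex and abs(r["age"] - age) <= tolerance]

def predict_progression(similar, stage):
    """Toy 1-nearest-neighbour prediction: take the lesion progression
    of the record whose stage is closest to the object's stage."""
    best = min(similar, key=lambda r: abs(r["stage"] - stage))
    return best["progression"]

db = [
    {"age": 61, "sex": "F", "stage": 2, "progression": 0.4},
    {"age": 64, "sex": "F", "stage": 3, "progression": 0.7},
    {"age": 30, "sex": "M", "stage": 1, "progression": 0.1},
]
similar = select_similar(db, age=62, sex="F")
print(predict_progression(similar, stage=3))  # 0.7
```

A real system would train a statistical model on the classified cohort; the filtering step, however, mirrors the "similar conditions" selection described in the text.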
The controller 1010 may control the image processor 1030 to use the data predicted by the machine learning for segmentation and quantification in the image registration process of this embodiment. In this case, the accuracy of image registration may be further improved compared with the embodiment that uses only reference information extracted from the second image (the contrast-enhanced medical image acquired at another point in time).
The fusion display generation process includes image registration 2401 between the non-contrast medical image and the contrast-enhanced medical image acquired at another point in time, transformation and propagation 2402 of the image generated by the registration, and region correction 2403.
Here, predetermined anatomical objects, namely lymph node and blood vessel regions, are segmented 2412 from the contrast-enhanced medical image acquired at another point in time, and the segmentation result may be used in the transformation and propagation 2403, which maps the coordinates of the two medical images to each other, and in the region correction 2404. When the region correction 2404 is completed, the resulting fused image is displayed as a registered image 2405.
As the registration, transformation and propagation, and correction of the images are performed sequentially, quantification 2413 is performed to compare changes of the lymph nodes in the currently captured non-contrast medical image against the contrast-enhanced medical image acquired at the other point in time, and the data obtained by the machine learning is further used in this process. The quantification result is then displayed 2414.
In another embodiment shown in FIG. 25, the controller 1010 of the medical image display apparatus 100 acquires a first medical image (a non-contrast medical image) 2501 and a second medical image (a contrast-enhanced medical image acquired at another point in time) 2511 and generates a fusion display in which the two are registered. A standard image/model may further be used in this process. For example, the controller 1010 may load 2521, from the data stored in the standard image/model, a plurality of images corresponding to conditions similar to those of the object (the object's age, sex, degree of lesion progression, etc.) and perform image registration 2522 and transformation and propagation 2523 on the loaded images.
The fusion display generation process includes image registration 2502 between the non-contrast medical image and the contrast-enhanced medical image acquired at another point in time, transformation and propagation 2503 of the image generated by the registration, and region correction 2504.
Here, predetermined anatomical objects, namely lymph node and blood vessel regions, are segmented 2512 from the contrast-enhanced medical image acquired at another point in time, and the segmentation result may be used in the transformation and propagation 2503, which maps the coordinates of the medical images to each other, and in the region correction 2504. Here, the data on which the image registration 2522 and the transformation and propagation 2523 from the standard image/model have been performed is further used. When the region correction 2504 is completed, the resulting fused image is displayed as a registered image 2505.
As the registration, transformation and propagation, and correction of the images are performed sequentially, quantification 2513 is performed to compare changes of the lymph nodes in the currently captured non-contrast medical image against the contrast-enhanced medical image acquired at the other point in time. Since the data of the standard image/model is further used in this process, the accuracy of image registration may be further improved. The quantification result is then displayed 2514.
In another embodiment shown in FIG. 26, when no contrast-enhanced image captured at another point in time exists for the object, two or more non-contrast medical images t1 and t2 are registered, and a standard image/model 2621 and/or a medical image database 2631 may be used to improve accuracy.
That is, in the embodiment of FIG. 26, two or more non-contrast images captured at different points in time t1 and t2 are registered, and the degree of lesion progression within the anatomical object is determined according to the capture order and displayed so as to be identifiable.
Specifically, the controller 1010 of the medical image display apparatus 100 acquires a non-contrast medical image 2601 captured at the current point in time t2 and a non-contrast medical image 2611 captured at a past point in time t1, and generates a fusion display in which the two are registered.
The fusion display generation process includes image registration 2602 between the non-contrast medical image captured at the current point in time t2 and the non-contrast medical image captured at the past point in time t1, transformation and propagation 2603 of the image generated by the registration, and region correction 2604.
Here, predetermined anatomical objects, namely lymph node and blood vessel regions, are segmented and corrected 2612 in the non-contrast medical image of the past point in time t1, and the information stored in the standard image/model 2621 and/or the medical image database 2631 is used for this purpose.
Various information is stored in the medical image database 2631. From the stored information, the controller 1010 classifies data (including images) acquired under conditions similar to those of the object (the object's age, sex, degree of lesion progression, etc.), and may perform machine learning 2632 that predicts data through a training process using the classified data.
The controller 1010 may load 2621, from the data stored in the standard image/model, a plurality of images corresponding to conditions similar to those of the object (the object's age, sex, degree of lesion progression, etc.) and perform image registration 2622 and transformation and propagation 2623 on the loaded images. Here, the prediction data obtained by the machine learning may further be used in the image registration 2622.
The controller 1010 corrects 2612 the lymph node/blood vessel regions extracted from the non-contrast medical image of the past point in time t1, using the machine learning 2632 and/or the data transformed and propagated from the standard image/model.
This result may then be used in the transformation and propagation 2602 and the region correction 2603, which map the coordinates of the two medical images to each other. When the region correction 2603 is completed, the resulting fused image is displayed as a registered image 2605.
As the registration, transformation and propagation, and correction of the images are performed sequentially, quantification 2613 is performed to compare changes of the lymph nodes in the non-contrast medical image of the current point in time t2 against the non-contrast medical image of the other point in time t1, and the data obtained by the machine learning is further used in this process. The quantification result is then displayed 2614.
Meanwhile, the third medical image and/or the fourth medical image generated in the embodiments of the present invention described above are stored in the medical image database or the standard image/model, and the stored images may be used, through machine learning or through registration/transformation/propagation of two or more images, for image registration for identifiably displaying anatomical objects in other non-contrast images.
Hereinafter, medical image processing methods according to embodiments of the present invention will be described with reference to the drawings.
FIG. 27 is a flowchart illustrating a medical image processing method according to an embodiment of the present invention.
As shown in FIG. 27, a first medical image capturing an object including at least one anatomical object may be displayed on the display unit 1020 of the medical image display apparatus 1000 (S2701). Here, the first medical image may be a non-contrast medical image.
Under the control of the controller 1010, the image processor 1030 extracts reference region information corresponding to at least one anatomical object from a second medical image, which is a reference image for the first medical image displayed in step S2701 (S2703). Here, the second medical image may be a contrast-enhanced medical image obtained by capturing, at another point in time, the object from which the first medical image was acquired. In another embodiment, the second medical image may be a standard image generated based on images acquired under conditions similar to those of the object. In addition, there may be a plurality of anatomical objects from which information is extracted, including blood vessels, lymph nodes, bronchi, and the like. The reference region information of step S2703 may be extracted so as to correspond to a predetermined anatomical object by using the brightness values of the pixels constituting the second medical image.
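The brightness-based extraction in step S2703 can be illustrated with a minimal intensity-thresholding sketch. The patent does not fix the extraction method or the intensity range, so the window `[lo, hi]` and the pixel-set representation below are assumptions:

```python
def extract_reference_region(image, lo, hi):
    """Return the set of (row, col) pixel positions whose brightness
    falls within [lo, hi] - a stand-in for extracting the reference
    region of an anatomical object (e.g. a contrast-enhanced vessel)
    from the second medical image by pixel intensity."""
    return {(y, x)
            for y, row in enumerate(image)
            for x, v in enumerate(row)
            if lo <= v <= hi}

img = [[10, 200, 15],
       [220, 30, 210]]
bright = extract_reference_region(img, 180, 255)
print(sorted(bright))
```

Real contrast-enhanced segmentation would add connectivity and shape constraints on top of the intensity window, but the window itself is what "using the brightness values of the pixels" amounts to in the simplest reading.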
The controller 1010 controls the image processor 1030 to detect a region corresponding to at least the anatomical object in the first medical image displayed in step S2701, based on the reference region information of step S2703 (S2705).
Then, the controller 1010 displays a third medical image in which the region detected in step S2705 is displayed so as to be distinguished from regions that are not the corresponding anatomical object (S2707). Here, the third medical image is generated by registering the first medical image and the second medical image, and may include display region information on the anatomical object detected in step S2705. Based on the display region information, the controller 1010 controls the display unit 1020 so that the region of the anatomical object is displayed separately from the other regions in the third medical image.
The controller 1010 may display a fourth medical image in which a lesion extension region is distinguishably displayed within the anatomical object distinguished and displayed in step S2705 (S2709).
Steps S2707 and S2709 may be performed in the course of registering the first medical image and the second medical image. The third medical image displayed in step S2707 corresponds to the result of homogeneous (rigid) registration, and the fourth medical image displayed in step S2709 corresponds to the result of heterogeneous (deformable) registration.
The medical image registration of steps S2707 and S2709 is performed using predetermined transformation model parameters, and may be performed iteratively until the result of a similarity measurement function between the first medical image and the second medical image reaches a maximum, or until the result of a cost function between the first medical image and the second medical image reaches a minimum. Homogeneous registration, which maps the coordinate systems of the first medical image and the second medical image and obtains transformation information in which the intrinsic characteristics of the anatomical object are maintained, and heterogeneous registration, which obtains transformation information that makes the information of the first medical image and the second medical image coincide completely, may then be performed sequentially.
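The cost-minimising iteration described above can be sketched for a toy one-dimensional case: search over a single translation parameter until a cost function, here sum of squared differences, is minimal. The transform family (pure translation) and the cost choice are assumptions for illustration; the patent leaves both the transformation model and the similarity/cost function open:

```python
def ssd(a, b):
    """Sum-of-squared-differences cost between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def shift(signal, t):
    """Translate a 1-D signal by t samples, zero-padding the ends."""
    n = len(signal)
    return [signal[i - t] if 0 <= i - t < n else 0 for i in range(n)]

def register_1d(fixed, moving, search=range(-3, 4)):
    """Pick the translation parameter that minimises the cost function,
    i.e. a toy version of iterating until the cost reaches a minimum."""
    return min(search, key=lambda t: ssd(fixed, shift(moving, t)))

fixed  = [0, 0, 5, 9, 5, 0, 0]
moving = [0, 5, 9, 5, 0, 0, 0]
print(register_1d(fixed, moving))  # 1 (shift right by one sample)
```

Maximising a similarity measure is the mirror image of this: replace `min` over a cost with `max` over the similarity function. A deformable (heterogeneous) registration would optimise many local parameters instead of one global one.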
According to the embodiments of the present invention described above, a function of segmenting and displaying anatomical objects, for example lymph node and blood vessel regions, is provided based on a non-contrast image. In addition, non-contrast-image-based lymph node follow-up examination and quantification functions (changes in volume, density, shape, distribution, etc.) are provided.
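Under the assumption that the lymph node segmentations are available as voxel masks (a representation the patent does not prescribe), the volume and density items of the quantification list above could be sketched as:

```python
def quantify_change(mask_t1, mask_t2, image_t2, voxel_volume=1.0):
    """Compare a lymph node segmented at two time points: volume change
    from voxel counts, and mean density (brightness) inside the current
    mask. Flat-list masks and the record format are assumptions."""
    v1 = sum(mask_t1) * voxel_volume
    v2 = sum(mask_t2) * voxel_volume
    inside = [v for v, m in zip(image_t2, mask_t2) if m]
    density = sum(inside) / len(inside) if inside else 0.0
    return {"volume_t1": v1, "volume_t2": v2,
            "volume_change": v2 - v1, "mean_density_t2": density}

m1  = [1, 1, 0, 0]           # segmentation at time t1
m2  = [1, 1, 1, 0]           # segmentation at time t2 (node grew)
img = [50, 60, 70, 0]        # brightness values at time t2
print(quantify_change(m1, m2, img))
```

Shape and distribution changes would need geometric descriptors (e.g. surface area, centroid spread) on top of the counts shown here.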
The features of the various embodiments of the present invention may be partially or wholly coupled or combined with one another and, as those skilled in the art will readily understand, various technical interworking and operation are possible; the embodiments may be implemented independently of one another or may be implemented together in association.
As described above, according to the embodiments of the present invention, lymph node follow-up examination is possible even for patients with weak renal function for whom active use of a contrast agent is burdensome.
In addition, the possibility of misdiagnosis in determining lymph node disease based on non-contrast images is reduced, so that the diagnosis system can be improved (under/over-estimation) and diagnostic accuracy can be enhanced.
Furthermore, the present embodiments are applicable to non-contrast images for general health examinations, and can be used for the early diagnosis of cancer diseases, such as determining whether cancer has metastasized.
Meanwhile, the various embodiments of the present invention described above may be embodied on a computer-readable recording medium. The computer-readable recording medium includes a transmission medium and a storage medium that stores data readable by a computer system. The transmission medium may be implemented through a wired or wireless network in which computer systems are interconnected.
The various embodiments of the present invention may be implemented by hardware, or by a combination of hardware and software. As hardware, the controller 1010 may include a nonvolatile memory in which a computer program, i.e., the software, is stored, a RAM into which the computer program stored in the nonvolatile memory is loaded, and a CPU that executes the computer program loaded into the RAM. The nonvolatile memory includes, but is not limited to, a hard disk drive, flash memory, a ROM, CD-ROMs, magnetic tapes, floppy disks, optical storage, and a data transmission device using the Internet. The nonvolatile memory is an example of a computer-readable recording medium on which the computer-readable program of the present invention is recorded.
The computer program is code that the CPU can read and execute, and includes code for performing the operations of the controller 1010, such as steps S1801 to S1809 shown in FIG. 18 and steps S2301 to S2309 shown in FIG. 23.
The computer program may be implemented by being included in software comprising an operating system or an application provided in the medical image display apparatus 1000 and/or in software that interfaces with an external device.
While the present invention has been described in detail above through preferred embodiments, the present invention is not limited thereto and may be variously implemented within the scope of the claims.

Claims (15)

  1. A medical image display apparatus comprising:
    a display unit configured to display a first medical image capturing an object including at least one anatomical object; and
    at least one processor configured to extract reference region information corresponding to the anatomical object from at least one second medical image, which is a reference image for the first medical image, detect a region corresponding to the anatomical object in the first medical image based on the extracted reference region information, and control the display unit so that the detected region of the anatomical object is displayed distinguishably from a region that is not the anatomical object.
  2. The medical image display apparatus according to claim 1, wherein the processor is configured to:
    register the first medical image and the second medical image to generate a third medical image including display region information on the anatomical object detected in the first medical image; and
    control the display unit so that, in the generated third medical image, the region of the detected anatomical object is displayed distinguishably from a region that is not the anatomical object, based on the display region information.
  3. The medical image display apparatus according to claim 1, wherein there are a plurality of anatomical objects, and the display unit displays the regions of the plurality of anatomical objects so as to be distinguished from one another.
  4. The medical image display apparatus according to claim 1, wherein the first medical image is a non-contrast medical image, and the second medical image is a contrast-enhanced medical image obtained by capturing, at another point in time, the object from which the first medical image was acquired.
  5. The medical image display apparatus according to claim 1, wherein the display unit displays the detected region of the anatomical object distinguishably from a region that is not the anatomical object by at least one of a color, a pattern, a pointer, a highlight, and an animation effect, and the distinguishing display of the region of the anatomical object can be activated or deactivated by user selection.
  6. The medical image display apparatus according to any one of claims 1 to 5, wherein the processor further detects a lesion extension region in the region of the anatomical object, and controls the display unit so that the detected lesion extension region is identifiably displayed within the region of the anatomical object.
  7. The medical image display apparatus according to any one of claims 2 to 5, wherein the processor performs image registration using predetermined transformation model parameters so that the result of a similarity measurement function between the first medical image and the second medical image reaches a maximum.
  8. The medical image display apparatus according to any one of claims 2 to 5, wherein the processor performs image registration using predetermined transformation model parameters so that the result of a cost function between the first medical image and the second medical image reaches a minimum.
  9. The medical image display apparatus according to any one of claims 2 to 5, wherein the processor is configured to:
    map coordinate systems of the first medical image and the second medical image; and
    perform, on the first medical image and the second medical image whose coordinate systems have been mapped, homogeneous registration that matches the second medical image to the first medical image while maintaining the image characteristics of the second medical image.
  10. The medical image display apparatus according to claim 9, wherein the processor further performs, on the first medical image and the second medical image on which the homogeneous registration has been performed, heterogeneous registration that deforms the image characteristics of the second medical image to completely match the second medical image to the first medical image.
  11. A medical image processing method comprising:
    displaying a first medical image capturing an object including at least one anatomical object;
    extracting reference region information corresponding to the anatomical object from at least one second medical image, which is a reference image for the first medical image; and
    detecting a region corresponding to the anatomical object in the first medical image based on the extracted reference region information, and displaying the detected region of the anatomical object distinguishably from a region that is not the anatomical object.
  12. The medical image processing method according to claim 11, further comprising registering the first medical image and the second medical image to generate a third medical image including display region information on the anatomical object detected in the first medical image,
    wherein the displaying distinguishably comprises displaying, in the generated third medical image, the region of the detected anatomical object distinguishably from a region that is not the anatomical object, based on the display region information.
  13. The medical image processing method according to claim 11, wherein the first medical image is a non-contrast medical image, and the second medical image is a contrast-enhanced medical image obtained by capturing, at another point in time, the object from which the first medical image was acquired.
  14. The medical image processing method according to any one of claims 11 to 13, further comprising detecting a lesion extension region in the region of the distinguishably displayed anatomical object, and identifiably displaying the detected lesion extension region within the region of the anatomical object.
  15. 제12항 또는 제13항에 있어서, The method according to claim 12 or 13,
    상기 제3 의료영상을 생성하는 단계는, Generating the third medical image,
    상기 제1 의료영상과 상기 제2 의료영상의 좌표계를 매핑하는 단계와; Mapping a coordinate system of the first medical image and the second medical image;
    상기 좌표계가 매핑된 상기 제1 의료영상 및 상기 제2 의료영상에 대해, 상기 제2 의료영상의 영상 특성을 유지한 상태로 상기 제1의료영상에 매칭시키는 동형정합을 수행하는 단계와;Performing homogeneous matching on the first medical image and the second medical image to which the coordinate system is mapped to the first medical image while maintaining image characteristics of the second medical image;
    상기 동형정합이 수행된 상기 제1 의료영상 및 상기 제2 의료영상에 대해, 상기 제2 의료영상의 영상 특성을 변형시켜 상기 제1의료영상에 완전히 매칭시키는 이형정합을 수행하는 단계를 포함하는 것을 특징으로 하는 의료영상 처리방법. And performing heterogeneous matching on the first medical image and the second medical image on which the homogeneous registration has been performed, by modifying image characteristics of the second medical image to completely match the first medical image. Medical image processing method characterized in.
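The two-stage registration recited in claim 15 — a rigid ("homogeneous") alignment that preserves the second image's characteristics, followed by a deformable ("heterogeneous") refinement that warps the second image to fully match the first — can be illustrated with a deliberately tiny sketch. Everything below is an assumption for intuition only: images are 2-D grids of floats, the rigid stage is a brute-force integer-translation search, and the deformable stage is a crude per-block local shift. Real systems use a proper registration framework (e.g., ITK-style mutual-information optimization), not this toy.

```python
def mse(a, b):
    """Mean squared error between two equally sized 2-D grids."""
    h, w = len(a), len(a[0])
    return sum((a[y][x] - b[y][x]) ** 2 for y in range(h) for x in range(w)) / (h * w)

def roll2d(img, dy, dx):
    """Cyclically shift a 2-D grid by (dy, dx); (y, x) takes the value from (y-dy, x-dx)."""
    h, w = len(img), len(img[0])
    return [[img[(y - dy) % h][(x - dx) % w] for x in range(w)] for y in range(h)]

def rigid_align(fixed, moving, max_shift=4):
    """Stage 1 (rigid / 'homogeneous'): exhaustive search for the single global
    translation minimising MSE; the moving image's intensities are untouched."""
    best_shift, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = mse(fixed, roll2d(moving, dy, dx))
            if err < best_err:
                best_shift, best_err = (dy, dx), err
    return roll2d(moving, *best_shift), best_shift

def deformable_refine(fixed, moving, block=4, search=1):
    """Stage 2 (deformable / 'heterogeneous'): each block picks its own small
    shift, so different regions deform independently toward the fixed image."""
    h, w = len(fixed), len(fixed[0])
    out = [row[:] for row in moving]
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            bh, bw = min(block, h - y0), min(block, w - x0)
            fpatch = [fixed[y][x0:x0 + bw] for y in range(y0, y0 + bh)]
            best_err, best_patch = float("inf"), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + bh > h or x1 + bw > w:
                        continue  # candidate block would fall outside the image
                    mpatch = [moving[y][x1:x1 + bw] for y in range(y1, y1 + bh)]
                    err = mse(fpatch, mpatch)
                    if err < best_err:
                        best_err, best_patch = err, mpatch
            for i in range(bh):
                out[y0 + i][x0:x0 + bw] = best_patch[i][:]
    return out

# Usage: simulate a second scan displaced by (2, 3) and recover the alignment.
import random
random.seed(0)
fixed = [[random.random() for _ in range(16)] for _ in range(16)]
moving = roll2d(fixed, 2, 3)
aligned, shift = rigid_align(fixed, moving)   # shift == (-2, -3)
refined = deformable_refine(fixed, aligned)
```

The key design point the claim language captures is the ordering: the rigid stage gives a cheap global initialisation without altering the second image's characteristics, and only then does the deformable stage modify it locally to achieve a full match.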
PCT/KR2016/006087 2015-08-17 2016-06-09 Medical image display device and medical image processing method WO2017030276A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16837218.3A EP3338625B1 (en) 2015-08-17 2016-06-09 Medical image display device and medical image processing method
US15/753,051 US10682111B2 (en) 2015-08-17 2016-06-09 Medical image display device and medical image processing method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20150115661 2015-08-17
KR10-2015-0115661 2015-08-17
KR1020160044817A KR102522539B1 (en) 2015-08-17 2016-04-12 Medical image displaying apparatus and medical image processing method thereof
KR10-2016-0044817 2016-04-12

Publications (1)

Publication Number Publication Date
WO2017030276A1 true WO2017030276A1 (en) 2017-02-23

Family

ID=58050905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/006087 WO2017030276A1 (en) 2015-08-17 2016-06-09 Medical image display device and medical image processing method

Country Status (1)

Country Link
WO (1) WO2017030276A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140200433A1 (en) * 2013-01-16 2014-07-17 Korea Advanced Institute Of Science And Technology Apparatus and method for estimating malignant tumor
KR20140105101A (en) * 2013-02-21 2014-09-01 삼성전자주식회사 Method and Apparatus for performing registraton of medical images
WO2014155299A1 (en) * 2013-03-28 2014-10-02 Koninklijke Philips N.V. Interactive follow-up visualization
KR20140120236A (en) * 2013-04-02 2014-10-13 재단법인 아산사회복지재단 Integrated analysis method of matching myocardial and cardiovascular anatomy informations
US8983179B1 (en) * 2010-11-10 2015-03-17 Google Inc. System and method for performing supervised object segmentation on images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3338625A4 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018159868A1 (en) * 2017-02-28 2018-09-07 메디컬아이피 주식회사 Medical image region segmentation method and device therefor
US10402975B2 (en) 2017-02-28 2019-09-03 Medicalip Co., Ltd. Method and apparatus for segmenting medical images
JP2023078236A (en) * 2017-10-09 2023-06-06 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Contrast dose reduction for medical imaging using deep learning
JP7244499B2 (en) 2017-10-09 2023-03-22 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Contrast Agent Dose Reduction in Medical Imaging Using Deep Learning
JP2020536638A (en) * 2017-10-09 2020-12-17 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Contrast agent dose reduction for medical imaging using deep learning
JP7476382B2 (en) 2017-10-09 2024-04-30 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Contrast agent dose reduction in medical imaging using deep learning
US12040079B2 (en) 2018-06-15 2024-07-16 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
US12039704B2 (en) 2018-09-06 2024-07-16 Canon Kabushiki Kaisha Image processing apparatus, image processing method and computer-readable medium
US11922601B2 (en) 2018-10-10 2024-03-05 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
CN113557714A (en) * 2019-03-11 2021-10-26 佳能株式会社 Medical image processing apparatus, medical image processing method, and program
US20210398259A1 (en) 2019-03-11 2021-12-23 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US11887288B2 (en) 2019-03-11 2024-01-30 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN110742631A (en) * 2019-10-23 2020-02-04 深圳蓝韵医学影像有限公司 Imaging method and device for medical image
CN110742631B (en) * 2019-10-23 2024-02-20 深圳蓝影医学科技股份有限公司 Imaging method and device for medical image
US20220122266A1 (en) * 2019-12-20 2022-04-21 Brainlab Ag Correcting segmentation of medical images using a statistical analysis of historic corrections
US11861846B2 (en) * 2019-12-20 2024-01-02 Brainlab Ag Correcting segmentation of medical images using a statistical analysis of historic corrections
WO2021187675A1 (en) * 2020-03-17 2021-09-23 인하대학교 산학협력단 Reliable subsurface scattering method for volume rendering in three-dimensional ultrasound image
CN111789618B (en) * 2020-08-10 2023-06-30 上海联影医疗科技股份有限公司 Imaging system and method
CN111789618A (en) * 2020-08-10 2020-10-20 上海联影医疗科技有限公司 Imaging system and method
CN112259197A (en) * 2020-10-14 2021-01-22 北京赛迈特锐医疗科技有限公司 Intelligent analysis system and method for acute abdomen plain film
CN113362934A (en) * 2021-06-03 2021-09-07 深圳市妇幼保健院 System for simulating disease attack characterization based on electroencephalogram of children

Similar Documents

Publication Publication Date Title
WO2017030276A1 (en) Medical image display device and medical image processing method
WO2015108306A1 (en) Medical image providing apparatus and medical image processing method of the same
WO2016080813A1 (en) Method and apparatus for processing medical image
WO2017142281A1 (en) Image processing apparatus, image processing method and recording medium thereof
WO2015126205A2 (en) Tomography apparatus and method for reconstructing tomography image thereof
WO2015002409A1 (en) Method of sharing information in ultrasound imaging
WO2015126189A1 (en) Tomography apparatus and method of reconstructing a tomography image by the tomography apparatus
WO2016140424A1 (en) Tomography imaging apparatus and method of reconstructing tomography image
WO2015122687A1 (en) Tomography apparatus and method of displaying a tomography image by the tomography apparatus
EP3302239A1 (en) Medical image display apparatus and method of providing user interface
WO2016117807A1 (en) Medical device diagnostic apparatus and control method thereof
EP3220826A1 (en) Method and apparatus for processing medical image
WO2016060475A1 (en) Method of providing information using plurality of displays and ultrasound apparatus therefor
EP3331447A1 (en) Tomography imaging apparatus and method of reconstructing tomography image
WO2020185003A1 (en) Method for displaying ultrasonic image, ultrasonic diagnostic device, and computer program product
EP3107457A1 (en) Tomography apparatus and method of reconstructing a tomography image by the tomography apparatus
WO2016195417A1 (en) Apparatus and method of processing medical image
EP3104782A1 (en) Tomography apparatus and method of displaying a tomography image by the tomography apparatus
WO2015126217A2 (en) Diagnostic imaging method and apparatus, and recording medium thereof
WO2015060656A1 (en) Magnetic resonance imaging apparatus and method
WO2016186279A1 (en) Method and apparatus for synthesizing medical images
WO2016190701A1 (en) Magnetic resonance imaging apparatus and method
WO2014200289A2 (en) Apparatus and method for providing medical information
WO2016043411A1 (en) X-ray apparatus and method of scanning the same
WO2016072581A1 (en) Medical imaging apparatus and method of processing medical image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16837218

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15753051

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE