
WO2019216125A1 - Learning device, method, and program for classifier for classifying infarct region, classifier for classifying infarct region, and device, method and program for classifying infarct region - Google Patents

Learning device, method, and program for classifier for classifying infarct region, classifier for classifying infarct region, and device, method and program for classifying infarct region Download PDF

Info

Publication number
WO2019216125A1
WO2019216125A1 (application PCT/JP2019/016052)
Authority
WO
WIPO (PCT)
Prior art keywords
image
infarct region
discriminator
infarct
region
Prior art date
Application number
PCT/JP2019/016052
Other languages
French (fr)
Japanese (ja)
Inventor
赤堀 貞登
Original Assignee
富士フイルム株式会社 (FUJIFILM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2019216125A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • the present disclosure relates to a learning device, method, and program for a discriminator that discriminates an infarct region in a brain image, a discriminator that discriminates an infarct region, and an infarct region discriminating device, method, and program.
  • in JP2013-165765A, a cerebral infarction site included in an MRI diffusion weighted image (DWI: Diffusion Weighted Image) is detected, position conversion data necessary for anatomical alignment is obtained from the abnormal site of the diffusion weighted image and the diffusion weighted image of a healthy person, a SPECT (Single Photon Emission Computed Tomography) image is converted using the position conversion data so that each tissue position of the patient's brain matches the corresponding tissue position of the healthy person's brain, and the cerebral infarction site is discriminated on the SPECT image.
  • Japanese Patent Laid-Open No. 2014-518516 proposes a method for performing diagnosis by aligning and displaying a CT image and an MRI image.
  • thrombolytic therapy is performed for cerebral infarction patients.
  • thrombolytic therapy is applicable only within 4.5 hours from the time at which it was confirmed that cerebral infarction had not yet developed, and the risk of bleeding after treatment increases as the infarct range expands over time.
  • in a diffusion weighted image, the infarct region has a pixel value different from that of other regions; in particular, for an acute-phase cerebral infarction region, the difference in pixel value from other regions becomes significant in the diffusion weighted image. For this reason, a diffusion weighted image is often used to confirm the infarct region.
  • the present disclosure has been made in view of the above circumstances, and an object of the present disclosure is to enable quick diagnosis of whether or not a cerebral infarction has occurred by using a CT image.
  • a learning apparatus for a discriminator according to the present disclosure includes an image acquisition unit that acquires a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject; an infarct region extraction unit that extracts an infarct region from the MRI image; an alignment unit that aligns the CT image and the MRI image; an infarct region specifying unit that specifies an infarct region in the CT image based on the result of the alignment; and a learning unit that learns a discriminator for discriminating the infarct region in an input CT image, using the infarct region specified in the CT image as teacher data (a minimal code sketch of this flow follows this list).
  • the MRI image may be a diffusion weighted image.
  • the alignment unit may extract a brain region from the CT image and perform alignment between the extracted brain region and the diffusion weighted image.
  • the discriminator according to the present disclosure has been learned by the discriminator learning device according to the present disclosure.
  • an infarct region discriminating apparatus according to the present disclosure includes an image acquisition unit that acquires a CT image that is a discrimination target for an infarct region, and a discriminator according to the present disclosure that discriminates the infarct region in the CT image to be discriminated.
  • the infarct region discriminating apparatus further includes a display control unit that displays the discrimination result by the discriminator on the display unit.
  • the discriminator learning method according to the present disclosure acquires a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject, extracts an infarct region from the MRI image, aligns the CT image and the MRI image, specifies the infarct region in the CT image based on the alignment result, and learns a discriminator for discriminating the infarct region in an input CT image using the infarct region specified in the CT image as teacher data.
  • the infarct region determination method according to the present disclosure acquires a CT image that is a determination target for an infarct region, and discriminates the infarct region in the CT image to be discriminated by means of the discriminator of the present disclosure.
  • the discriminator learning method according to the present disclosure and the infarct region determination method according to the present disclosure may each be provided as a program for causing a computer to execute the method.
  • another discriminator learning device includes a memory that stores instructions to be executed by a computer and a processor configured to execute the stored instructions, and the processor executes processing of acquiring a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject, extracting an infarct region from the MRI image, aligning the CT image and the MRI image, specifying the infarct region in the CT image based on the alignment result, and learning a discriminator for discriminating the infarct region in an input CT image using the infarct region specified in the CT image as teacher data.
  • another infarct region discriminating device includes a memory that stores instructions to be executed by a computer and a processor configured to execute the stored instructions, and the processor executes processing of acquiring a CT image that is a determination target for an infarct region and discriminating the infarct region in the CT image to be discriminated by means of the discriminator of the present disclosure.
  • according to the present disclosure, a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject are acquired, an infarct region is extracted from the MRI image, and the CT image and the MRI image are aligned. Then, based on the alignment result, the infarct region in the CT image is specified, and a discriminator for discriminating the infarct region in an input CT image is learned using the specified infarct region as teacher data.
  • in a CT image, since a bleeding region has a pixel value that differs greatly from other regions, it is easy to specify the bleeding region in the CT image.
  • in a CT image, however, although the infarct region has a pixel value different from other regions, the difference is smaller than the difference between the bleeding region and other regions.
  • in an MRI image, on the other hand, the infarct region has a pixel value that differs significantly from other regions. For this reason, if the MRI image and the CT image of the brain of a subject who has developed cerebral infarction are aligned, the infarct region in the CT image can be specified based on the infarct region in the MRI image. Therefore, by training the discriminator using the specified infarct region as teacher data, the trained discriminator can discriminate the infarct region in an input CT image. This makes it possible to discriminate not only a bleeding region of the brain but also an infarct region using only a CT image. Therefore, according to the present disclosure, it is possible to quickly diagnose whether or not a cerebral infarction has developed.
  • Hardware configuration diagram showing an outline of a diagnosis support system to which a learning device, a discriminator, and an infarct region discrimination device according to an embodiment of the present disclosure are applied
  • a figure showing an example of an MRI image, and a figure for explaining alignment between a CT image and an MRI image
  • FIG. 1 is a hardware configuration diagram illustrating an outline of a diagnosis support system to which a discriminator learning device, a discriminator, and an infarct region discrimination device according to an embodiment of the present disclosure are applied.
  • in the diagnosis support system, the infarct region discriminating device 1, the three-dimensional image capturing device 2, and the image storage server 3 according to the present embodiment are connected in a communicable state via a network 4.
  • the infarct region discriminating device 1 includes the learning device and the discriminator according to the present embodiment.
  • the 3D image capturing apparatus 2 is an apparatus that images the region of the subject to be diagnosed and generates a 3D image representing that region as a medical image.
  • the medical image generated by the 3D image capturing apparatus 2 is transmitted to the image storage server 3 and stored.
  • in the present embodiment, the diagnosis target region of the patient who is the subject is the brain, and the three-dimensional imaging apparatuses 2 are the CT apparatus 2A and the MRI apparatus 2B.
  • the CT apparatus 2A generates a three-dimensional CT image Bc0 including the subject's brain
  • the MRI apparatus 2B generates a three-dimensional MRI image Bm0 including the subject's brain.
  • the MRI image Bm0 is a diffusion weighted image.
  • the CT image Bc0 is a non-contrast CT image acquired by imaging without using a contrast agent; however, a contrast CT image acquired by imaging with a contrast agent may also be used.
  • the image storage server 3 is a computer that stores and manages various data, and includes a large-capacity external storage device and database management software.
  • the image storage server 3 communicates with other devices via a wired or wireless network 4 to transmit and receive image data and the like.
  • various types of data including image data of CT images and MRI images generated by the three-dimensional image capturing apparatus 2 are acquired via a network, stored in a recording medium such as a large-capacity external storage device, and managed.
  • the storage format of the image data and the communication between the devices via the network 4 are based on a protocol such as DICOM (Digital Imaging and Communications in Medicine).
  • the infarct region discriminating apparatus 1 is obtained by installing the learning program and the infarct region discriminating program of the present disclosure on one computer.
  • the computer may be a workstation or personal computer directly operated by a doctor who performs diagnosis, or may be a server computer connected to them via a network.
  • the learning program and the infarct region discrimination program are recorded and distributed on a recording medium such as a DVD (Digital Versatile Disc) or a CD-ROM (Compact Disc Read Only Memory), and are installed in the computer from the recording medium.
  • alternatively, the programs are stored in a storage device of a server computer connected to the network, or in network storage, in a state where they can be accessed from the outside, and are downloaded and installed on the computer used by a doctor upon request.
  • FIG. 2 is a diagram showing a schematic configuration of the infarct region discriminating apparatus according to the present embodiment realized by installing a learning program and an infarct region discriminating program in a computer.
  • the infarct region discriminating apparatus 1 includes a CPU (Central Processing Unit) 11, a memory 12, and a storage 13 as a standard workstation configuration.
  • the infarct region discriminating apparatus 1 is connected with a display 14 and an input unit 15 such as a keyboard and a mouse.
  • the display 14 corresponds to a display unit.
  • the storage 13 is composed of a hard disk drive or the like, and stores various information including medical images of the subject acquired from the image storage server 3 via the network 4 and information necessary for processing.
  • the memory 12 stores a learning program and an infarct region discrimination program.
  • the learning program defines, as processing to be executed by the CPU 11, an image acquisition process for acquiring a CT image Bc0 and an MRI image Bm0 of the brain of a subject who has developed cerebral infarction, an infarct region extraction process for extracting an infarct region from the MRI image Bm0, an alignment process for aligning the CT image Bc0 and the MRI image Bm0, an infarct region specifying process for specifying an infarct region in the CT image Bc0 based on the alignment result, and a learning process for learning a discriminator for discriminating the infarct region in an input CT image Bc1 using the infarct region specified in the CT image Bc0 as teacher data.
  • the infarct region discriminating program defines, as processing to be executed by the CPU 11, an image acquisition process for acquiring a CT image Bc1 that is a discrimination target for an infarct region, a discrimination process for discriminating the infarct region in the CT image Bc1 to be discriminated, and a display control process for displaying the discrimination result on the display 14.
  • when the CPU 11 executes these processes in accordance with the programs, the computer functions as the image acquisition unit 21, the infarct region extraction unit 22, the alignment unit 23, the infarct region specifying unit 24, the learning unit 25, the discriminator 26, and the display control unit 27.
  • the image acquisition unit 21, the infarct region extraction unit 22, the alignment unit 23, the infarct region specifying unit 24, and the learning unit 25 constitute a discriminator learning device of the present embodiment.
  • the discriminator 26 and the display control unit 27 constitute an infarct region discriminating apparatus of the present embodiment.
  • the CPU 11 executes the function of each unit by the learning program and the infarct region determination program.
  • in addition to the CPU 11, a general-purpose processor that functions as various processing units by executing software may be a programmable logic device (PLD), which is a processor whose circuit configuration can be changed after manufacture, such as an FPGA (Field Programmable Gate Array).
  • the processing of each unit may also be executed by a dedicated electric circuit or the like, that is, a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
  • one processing unit may be composed of one of these various processors, or may be composed of a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by one processor. As a first example of configuring a plurality of processing units with one processor, as typified by computers such as clients and servers, one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units.
  • as a second example, as typified by a system on chip (SoC), there is a form in which a processor that realizes the functions of the entire system, including the plurality of processing units, with a single IC (Integrated Circuit) chip is used.
  • various processing units are configured using one or more of the various processors as a hardware structure.
  • more specifically, the hardware structure of these various processors is electric circuitry in which circuit elements such as semiconductor elements are combined.
  • the image acquisition unit 21 acquires the CT image Bc0 and the MRI image Bm0 of the brain of the subject who has developed cerebral infarction from the image storage server 3 for learning by the discriminator 26.
  • for discrimination of an infarct region, a CT image Bc1 that is the discrimination target is acquired from the image storage server 3. If the CT image Bc0, the CT image Bc1, and the MRI image Bm0 are already stored in the storage 13, the image acquisition unit 21 may acquire them from the storage 13.
  • the image acquisition unit 21 acquires CT images Bc0 and MRI images Bm0 for a large number of subjects for learning by the discriminator 26 described later.
  • the infarct region extraction unit 22 extracts the infarct region of the brain from the MRI image Bm0.
  • FIG. 3 is a diagram showing an example of the MRI image Bm0.
  • the MRI image Bm0 is a three-dimensional image, for the sake of explanation, the MRI image Bm0 will be described using a two-dimensional tomographic image on one tomographic plane of the MRI image Bm0.
  • the MRI image Bm0, which is a diffusion weighted image, is an image including only the brain parenchyma, with the skull removed.
  • the infarct region has a larger pixel value (lower density) than other regions.
  • the infarct region extraction unit 22 extracts, for example, a region having a pixel value larger than a predetermined threshold value as an infarct region in the MRI image Bm0.
  • an infarct region may be extracted from the MRI image Bm0 using a discriminator that has been trained to extract an infarct region. Thereby, the infarct region A1 shown in FIG. 3 is extracted.
  • the alignment unit 23 performs alignment between the CT image Bc0 and the MRI image Bm0.
  • FIG. 4 is a diagram for explaining alignment between the CT image Bc0 and the MRI image Bm0.
  • the CT image Bc0 and the MRI image Bm0 are both three-dimensional images. However, for the sake of explanation, the CT image Bc0 and the MRI image Bm0 will be described using a two-dimensional tomographic image on one corresponding tomographic plane of the CT image Bc0 and the MRI image Bm0. As shown in FIG. 4, the shape of the brain is substantially the same in the same subject. On the other hand, in the MRI image Bm0, the infarct region has a larger pixel value (lower density) than other regions.
  • the alignment unit 23 extracts a brain parenchymal region from the CT image Bc0 as a brain region, and performs alignment between the extracted brain region and the MRI image Bm0.
  • the alignment unit 23 aligns one of the CT image Bc0 and the MRI image Bm0 with respect to the other by non-rigid alignment.
  • the CT image Bc0 is aligned with the MRI image Bm0.
  • the MRI image Bm0 may be aligned with the CT image Bc0.
  • as the non-rigid registration, for example, a method can be used in which feature points in the CT image Bc0 are nonlinearly transformed to corresponding points in the MRI image Bm0 using functions such as B-splines and thin-plate splines; however, the registration is not limited to this.
  • the infarct region specifying unit 24 specifies the infarct region in the CT image Bc0 based on the alignment result by the alignment unit 23.
  • FIG. 5 is a diagram for explaining the specification of the infarct region in the CT image Bc0. Note that the CT image Bc0 shown in FIG. 5 is aligned with the MRI image Bm0. Since the CT image Bc0 is aligned with the MRI image Bm0, the infarct region specifying unit 24 specifies the voxel region of the CT image Bc0 corresponding to the infarct region A1 extracted from the MRI image Bm0 as the infarct region A2.
  • the learning unit 25 learns the discriminator 26 for discriminating the infarct region in the input CT image using the infarct region specified in the CT image Bc0 as teacher data.
  • the discriminator 26 discriminates the infarct region in the CT image Bc1 when the CT image Bc1 to be discriminated is input. Specifically, the discriminator 26 classifies each voxel position of the CT image Bc1 to be discriminated into two classes, the infarct region and regions other than the infarct region, thereby discriminating the infarct region.
  • the learning unit 25 acquires feature amounts in regions of a predetermined size (for example, 3 × 3) from the infarct region A2 specified in the CT images Bc0 of a plurality of subjects, inputs the acquired feature amounts to the discriminator 26, and performs learning of the discriminator 26, that is, machine learning, so that a discrimination result indicating an infarct region is output.
  • by performing learning in this way, a discriminator 26 is generated which, when a CT image Bc1 is input, classifies the voxels of the CT image Bc1 into the infarct region and regions outside the infarct region, thereby discriminating the infarct region in the CT image Bc1 to be determined.
  • the display control unit 27 displays the discrimination result by the discriminator 26 of the CT image Bc1 to be discriminated on the display 14.
  • FIG. 6 is a diagram illustrating an example of display of the discrimination result.
  • FIG. 6 shows a tomographic image in one cross section of the CT image Bc1 to be discriminated.
  • the infarct region A3 is specified in the CT image Bc1 to be discriminated.
  • as the discriminator 26, for example, a support vector machine (SVM), a deep neural network (DNN), a convolutional neural network (CNN), or a recurrent neural network (RNN) can be used.
  • FIG. 7 is a flowchart showing processing performed during learning in the present embodiment.
  • the image acquisition unit 21 acquires the CT image Bc0 and MRI image Bm0 of the brain of the subject who has developed cerebral infarction (step ST1), and the infarct region extraction unit 22 extracts the infarct region A1 from the MRI image Bm0. Extract (step ST2).
  • the alignment unit 23 performs alignment between the CT image Bc0 and the MRI image Bm0 (step ST3), and the infarct region specifying unit 24 specifies the infarct region A2 in the CT image Bc0 based on the alignment result.
  • the learning unit 25 learns the discriminator 26 that discriminates the infarct region in the input CT image Bc1 using the infarct region specified in the CT image Bc0 as teacher data (step ST6), and ends the process.
  • FIG. 8 is a flowchart showing a process performed when determining an infarct region in the present embodiment.
  • the image acquisition unit 21 acquires a CT image Bc1 to be determined (step ST11), and the determiner 26 determines an infarct region in the CT image to be determined (step ST12).
  • the display control unit 27 displays the discrimination result on the display 14, and the process ends.
  • as described above, in the present embodiment, the CT image Bc0 and the MRI image Bm0 of the brain of a subject who has developed cerebral infarction are acquired, the infarct region is extracted from the MRI image Bm0, and the CT image Bc0 and the MRI image Bm0 are aligned. Then, based on the alignment result, the infarct region A2 in the CT image Bc0 is specified, and the discriminator 26 for discriminating the infarct region in an input CT image Bc1 is learned using the specified infarct region A2 as teacher data.
  • in a CT image, since a bleeding region has a pixel value that differs significantly from other regions, it is easy to specify the bleeding region in the CT image.
  • in a CT image, however, although the infarct region has a pixel value different from other regions, the difference is smaller than the difference between the bleeding region and other regions.
  • in an MRI image, on the other hand, the infarct region has a pixel value that differs significantly from other regions.
  • in particular, the infarct region in the acute phase has a pixel value that differs significantly from other regions in the MRI image.
  • therefore, if the MRI image Bm0 and the CT image Bc0 of the brain of a subject who has developed cerebral infarction are aligned, the infarct region A2 in the CT image Bc0 can be specified based on the infarct region in the MRI image Bm0. Accordingly, by learning the discriminator 26 using the specified infarct region A2 as teacher data, the learned discriminator 26 can discriminate the infarct region in the CT image Bc1 to be discriminated. This makes it possible to discriminate not only a bleeding region of the brain but also an infarct region using only a CT image. Therefore, according to the present embodiment, it is possible to quickly diagnose whether or not a cerebral infarction has developed.
  • a diffusion weighted image is used as the MRI image Bm0.
  • an MRI image other than the diffusion weighted image may be used.
  • for example, a T1-weighted image, a T2-weighted image, or a FLAIR (Fluid-Attenuated Inversion Recovery) image may be used.
  • a plurality of MRI images may be used in combination.
  • in the above embodiment, a non-contrast CT image or a contrast CT image is used as the CT image Bc0 for learning by the discriminator 26; however, both contrast CT images and non-contrast CT images may be used for learning by the discriminator 26.
  • with the discriminator 26 learned in this way, the infarct region can be discriminated regardless of whether the CT image to be discriminated is a contrast CT image or a non-contrast CT image.
  • in the above embodiment, the infarct region discriminating apparatus 1 includes the learning apparatus, but the present disclosure is not limited to this. That is, the diagnosis support system may be provided with a learning apparatus that includes the image acquisition unit 21, the infarct region extraction unit 22, the alignment unit 23, the infarct region specifying unit 24, and the learning unit 25 and that performs learning, separately from the infarct region discriminating apparatus 1. In this case, the infarct region discriminating apparatus 1 includes only the image acquisition unit 21, the discriminator 26, and the display control unit 27.
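As referenced above, the following is a minimal end-to-end sketch, in Python, of the flow implemented by the claimed units (image acquisition, infarct region extraction, alignment, infarct region specification, and learning). Every function body is a simplified placeholder chosen for illustration; the publication does not prescribe any particular library, threshold value, or registration algorithm.

```python
import numpy as np

def extract_infarct_region(dwi_bm0: np.ndarray, threshold: float = 400.0) -> np.ndarray:
    """Infarct region extraction unit: simple intensity threshold on the DWI (placeholder value)."""
    return dwi_bm0 > threshold

def align_ct_to_mri(ct_bc0: np.ndarray, dwi_bm0: np.ndarray) -> np.ndarray:
    """Alignment unit: stand-in that assumes both volumes already share one voxel grid.
    A real implementation would apply a non-rigid (e.g. B-spline) registration."""
    return ct_bc0

def specify_infarct_in_ct(infarct_a1: np.ndarray) -> np.ndarray:
    """Infarct region specifying unit: after alignment, the A1 voxel indices carry over to the CT as A2."""
    return infarct_a1.copy()

def build_teacher_data(ct_bc0: np.ndarray, dwi_bm0: np.ndarray):
    """One subject's pass through the pipeline: aligned CT plus its infarct label A2 (teacher data)."""
    a1 = extract_infarct_region(dwi_bm0)
    ct_aligned = align_ct_to_mri(ct_bc0, dwi_bm0)
    a2 = specify_infarct_in_ct(a1)
    return ct_aligned, a2

# A learning unit would then fit a two-class, voxel-wise classifier (for example an SVM or a CNN)
# on (ct_aligned, a2) pairs pooled over many subjects.
```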

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An image acquisition unit (21) acquires a CT image Bc0 and an MRI image Bm0 of the brain of a subject with a brain infarct, while an infarct region extraction unit (22) extracts an infarct region A1 from the MRI image Bm0. An alignment unit (23) aligns the CT image Bc0 with the MRI image Bm0, while an infarct region identification unit (24) identifies an infarct region A2 in the CT image Bc0 on the basis of a result of the alignment. A learning unit (25), using the infarct region identified in the CT image Bc0 as training data, performs learning about a classifier (26) for classifying an infarct region in a CT image Bc1 that has been inputted thereto.

Description

Learning device, method, and program for a discriminator for discriminating an infarct region, discriminator for discriminating an infarct region, and infarct region discriminating device, method, and program
The present disclosure relates to a learning device, method, and program for a discriminator that discriminates an infarct region in a brain image, a discriminator that discriminates an infarct region, and an infarct region discriminating device, method, and program.
In recent years, advances in medical equipment such as CT (Computed Tomography) apparatuses and MRI (Magnetic Resonance Imaging) apparatuses have made it possible to perform image diagnosis using higher-quality, higher-resolution medical images. In particular, when the target site is the brain, regions affected by cerebrovascular disorders such as cerebral infarction and cerebral hemorrhage can be identified by image diagnosis using CT images, MRI images, and the like. For this reason, various methods for supporting image diagnosis have been proposed.
For example, JP2013-165765A proposes a method in which a cerebral infarction site included in an MRI diffusion weighted image (DWI: Diffusion Weighted Image) is detected, position conversion data necessary for anatomical alignment is obtained from the abnormal site of the diffusion weighted image and the diffusion weighted image of a healthy person, a SPECT (Single Photon Emission Computed Tomography) image captured by a SPECT apparatus is converted using the position conversion data so that each tissue position of the patient's brain matches the corresponding tissue position of the healthy person's brain, and the cerebral infarction site is discriminated on the SPECT image. In addition, JP2014-518516A proposes a method of performing diagnosis by aligning and displaying a CT image and an MRI image.
Incidentally, thrombolytic therapy is performed for cerebral infarction patients. However, thrombolytic therapy is applicable only within 4.5 hours from the time at which it was confirmed that cerebral infarction had not yet developed, and it is known that the risk of bleeding after treatment increases as the infarct range expands over time. For this reason, in order to determine the suitability of thrombolytic therapy, it is necessary to quickly and appropriately determine the infarct range using medical images. Here, in a diffusion weighted image, the infarct region has a pixel value different from that of other regions. In particular, for an acute-phase cerebral infarction region, the difference in pixel value from other regions becomes significant in the diffusion weighted image. For this reason, a diffusion weighted image is often used to confirm the infarct region.
On the other hand, in brain diagnosis, the presence or absence of bleeding in the brain is often confirmed before cerebral infarction is checked. Since hemorrhage in the brain can be clearly confirmed in a CT image, diagnosis using a CT image is performed first for a patient suspected of having a brain disease. However, in a CT image, the difference in pixel values between an acute-phase cerebral infarction region and other regions is not so large, and it is often difficult to identify an acute-phase infarction using the CT image. For this reason, after diagnosis using the CT image, an MRI image is acquired and a diagnosis is made as to whether or not a cerebral infarction has developed.
However, if an MRI image is acquired after diagnosis using a CT image in order to diagnose whether or not a cerebral infarction has developed, the time elapsed since the onset of the infarction becomes longer, which may increase the risk of bleeding after treatment by thrombolytic therapy.
The present disclosure has been made in view of the above circumstances, and an object of the present disclosure is to make it possible to quickly diagnose whether or not a cerebral infarction has developed by using a CT image.
A learning apparatus for a discriminator according to the present disclosure includes an image acquisition unit that acquires a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject; an infarct region extraction unit that extracts an infarct region from the MRI image; an alignment unit that aligns the CT image and the MRI image; an infarct region specifying unit that specifies an infarct region in the CT image based on the result of the alignment; and a learning unit that learns a discriminator for discriminating the infarct region in an input CT image, using the infarct region specified in the CT image as teacher data.
In the discriminator learning device according to the present disclosure, the MRI image may be a diffusion weighted image.
In the discriminator learning device according to the present disclosure, the alignment unit may extract a brain region from the CT image and perform alignment between the extracted brain region and the diffusion weighted image.
The discriminator according to the present disclosure is one that has been trained by the discriminator learning device according to the present disclosure.
An infarct region discriminating apparatus according to the present disclosure includes an image acquisition unit that acquires a CT image that is a discrimination target for an infarct region, and a discriminator according to the present disclosure that discriminates the infarct region in the CT image to be discriminated.
The infarct region discriminating apparatus according to the present disclosure further includes a display control unit that displays the discrimination result by the discriminator on a display unit.
The discriminator learning method according to the present disclosure acquires a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject, extracts an infarct region from the MRI image, aligns the CT image and the MRI image, specifies the infarct region in the CT image based on the alignment result, and learns a discriminator for discriminating the infarct region in an input CT image using the infarct region specified in the CT image as teacher data.
The infarct region determination method according to the present disclosure acquires a CT image that is a determination target for an infarct region, and discriminates the infarct region in the CT image to be discriminated by means of the discriminator of the present disclosure.
The discriminator learning method according to the present disclosure and the infarct region determination method according to the present disclosure may each be provided as a program for causing a computer to execute the method.
Another discriminator learning device according to the present disclosure includes a memory that stores instructions to be executed by a computer and a processor configured to execute the stored instructions, and the processor executes processing of acquiring a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject, extracting an infarct region from the MRI image, aligning the CT image and the MRI image, specifying the infarct region in the CT image based on the alignment result, and learning a discriminator for discriminating the infarct region in an input CT image using the infarct region specified in the CT image as teacher data.
Another infarct region discriminating device according to the present disclosure includes a memory that stores instructions to be executed by a computer and a processor configured to execute the stored instructions, and the processor executes processing of acquiring a CT image that is a determination target for an infarct region and discriminating the infarct region in the CT image to be discriminated by means of the discriminator of the present disclosure.
According to the present disclosure, a CT image of the brain of a subject who has developed cerebral infarction and an MRI image of the brain of the same subject are acquired, an infarct region is extracted from the MRI image, and the CT image and the MRI image are aligned. Then, based on the alignment result, the infarct region in the CT image is specified, and a discriminator for discriminating the infarct region in an input CT image is learned using the specified infarct region as teacher data. Here, in a CT image, since a bleeding region has a pixel value that differs greatly from other regions, it is easy to specify the bleeding region in the CT image. However, in a CT image, although the infarct region has a pixel value different from other regions, the difference is smaller than the difference between the bleeding region and other regions. On the other hand, in an MRI image, the infarct region has a pixel value that differs significantly from other regions. For this reason, if the MRI image and the CT image of the brain of a subject who has developed cerebral infarction are aligned, the infarct region in the CT image can be specified based on the infarct region in the MRI image. Therefore, by training the discriminator using the specified infarct region as teacher data, the trained discriminator can discriminate the infarct region in an input CT image. This makes it possible to discriminate not only a bleeding region of the brain but also an infarct region using only a CT image. Therefore, according to the present disclosure, it is possible to quickly diagnose whether or not a cerebral infarction has developed.
The brief description of the drawings is as follows. FIG. 1 is a hardware configuration diagram showing an outline of a diagnosis support system to which a learning device, a discriminator, and an infarct region discriminating device according to an embodiment of the present disclosure are applied. FIG. 2 is a diagram showing the schematic configuration of the infarct region discriminating device according to the present embodiment. FIG. 3 is a diagram showing an example of an MRI image. FIG. 4 is a diagram for explaining alignment between a CT image and an MRI image. FIG. 5 is a diagram for explaining the specification of an infarct region in a CT image. FIG. 6 is a diagram showing an example of display of a discrimination result. FIG. 7 is a flowchart showing the processing performed during learning in the present embodiment. FIG. 8 is a flowchart showing the processing performed when discriminating an infarct region in the present embodiment.
Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings. FIG. 1 is a hardware configuration diagram illustrating an outline of a diagnosis support system to which a discriminator learning device, a discriminator, and an infarct region discriminating device according to an embodiment of the present disclosure are applied. As shown in FIG. 1, in the diagnosis support system, the infarct region discriminating device 1, the three-dimensional image capturing device 2, and the image storage server 3 according to the present embodiment are connected in a communicable state via a network 4. The infarct region discriminating device 1 includes the learning device and the discriminator according to the present embodiment.
The three-dimensional image capturing apparatus 2 is an apparatus that images the region of a subject to be diagnosed and generates a three-dimensional image representing that region as a medical image. The medical image generated by the three-dimensional image capturing apparatus 2 is transmitted to the image storage server 3 and stored. In the present embodiment, the diagnosis target region of the patient who is the subject is the brain, and the three-dimensional imaging apparatuses 2 are the CT apparatus 2A and the MRI apparatus 2B. The CT apparatus 2A generates a three-dimensional CT image Bc0 including the subject's brain, and the MRI apparatus 2B generates a three-dimensional MRI image Bm0 including the subject's brain. In the present embodiment, the MRI image Bm0 is a diffusion weighted image. Also, in the present embodiment, the CT image Bc0 is a non-contrast CT image acquired by imaging without using a contrast agent; however, a contrast CT image acquired by imaging with a contrast agent may also be used.
The image storage server 3 is a computer that stores and manages various data, and includes a large-capacity external storage device and database management software. The image storage server 3 communicates with other devices via a wired or wireless network 4 to transmit and receive image data and the like. Specifically, various data including the image data of the CT images and MRI images generated by the three-dimensional image capturing apparatus 2 are acquired via the network, stored in a recording medium such as the large-capacity external storage device, and managed. The storage format of the image data and the communication between the devices via the network 4 are based on a protocol such as DICOM (Digital Imaging and Communications in Medicine).
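As an illustration of how DICOM-stored CT and MRI volumes of this kind might be read on the receiving side, the sketch below uses SimpleITK to assemble each series into a 3D volume. The directory layout and the choice of SimpleITK are assumptions made for this example only; the patent does not specify an implementation.

```python
import SimpleITK as sitk

def load_dicom_series(series_dir: str) -> sitk.Image:
    """Read all DICOM slices in a directory and assemble them into a 3D volume."""
    reader = sitk.ImageSeriesReader()
    file_names = reader.GetGDCMSeriesFileNames(series_dir)  # slice files in acquisition order
    reader.SetFileNames(file_names)
    return reader.Execute()

# Hypothetical paths standing in for the CT image Bc0 and the MRI image Bm0 fetched from the server.
ct_bc0 = load_dicom_series("./data/subject01/ct")    # non-contrast CT volume
mri_bm0 = load_dicom_series("./data/subject01/dwi")  # diffusion weighted MRI volume
print(ct_bc0.GetSize(), mri_bm0.GetSize())
```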
The infarct region discriminating apparatus 1 is realized by installing the learning program and the infarct region discriminating program of the present disclosure on one computer. The computer may be a workstation or a personal computer directly operated by a doctor who performs diagnosis, or may be a server computer connected to them via a network. The learning program and the infarct region discriminating program are recorded and distributed on a recording medium such as a DVD (Digital Versatile Disc) or a CD-ROM (Compact Disc Read Only Memory) and installed on the computer from the recording medium. Alternatively, they are stored in a storage device of a server computer connected to the network, or in network storage, in a state accessible from the outside, and are downloaded and installed on the computer used by the doctor upon request.
FIG. 2 is a diagram showing the schematic configuration of the infarct region discriminating apparatus according to the present embodiment, realized by installing the learning program and the infarct region discriminating program on a computer. As shown in FIG. 2, the infarct region discriminating apparatus 1 includes a CPU (Central Processing Unit) 11, a memory 12, and a storage 13 as a standard workstation configuration. A display 14 and an input unit 15 such as a keyboard and a mouse are connected to the infarct region discriminating apparatus 1. The display 14 corresponds to the display unit.
The storage 13 is composed of a hard disk drive or the like, and stores various information including the medical images of the subject acquired from the image storage server 3 via the network 4 and the information necessary for processing.
The memory 12 stores the learning program and the infarct region discriminating program. The learning program defines, as processing to be executed by the CPU 11, an image acquisition process for acquiring a CT image Bc0 and an MRI image Bm0 of the brain of a subject who has developed cerebral infarction, an infarct region extraction process for extracting an infarct region from the MRI image Bm0, an alignment process for aligning the CT image Bc0 and the MRI image Bm0, an infarct region specifying process for specifying an infarct region in the CT image Bc0 based on the alignment result, and a learning process for learning a discriminator for discriminating the infarct region in an input CT image Bc1 using the infarct region specified in the CT image Bc0 as teacher data. The infarct region discriminating program defines, as processing to be executed by the CPU 11, an image acquisition process for acquiring a CT image Bc1 that is a discrimination target for an infarct region, a discrimination process for discriminating the infarct region in the CT image Bc1 to be discriminated, and a display control process for displaying the discrimination result on the display 14.
When the CPU 11 executes these processes in accordance with the programs, the computer functions as the image acquisition unit 21, the infarct region extraction unit 22, the alignment unit 23, the infarct region specifying unit 24, the learning unit 25, the discriminator 26, and the display control unit 27. Here, the image acquisition unit 21, the infarct region extraction unit 22, the alignment unit 23, the infarct region specifying unit 24, and the learning unit 25 constitute the discriminator learning device of the present embodiment. The discriminator 26 and the display control unit 27 constitute the infarct region discriminating apparatus of the present embodiment.
In the present embodiment, the CPU 11 executes the functions of the respective units according to the learning program and the infarct region discriminating program. However, as a general-purpose processor that functions as various processing units by executing software, a programmable logic device (PLD), which is a processor whose circuit configuration can be changed after manufacture, such as an FPGA (Field Programmable Gate Array), can also be used in addition to the CPU 11. Furthermore, the processing of each unit may be executed by a dedicated electric circuit or the like, that is, a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
One processing unit may be composed of one of these various processors, or may be composed of a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by one processor. As a first example of configuring a plurality of processing units with one processor, as typified by computers such as clients and servers, one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units. As a second example, as typified by a system on chip (SoC), there is a form in which a processor that realizes the functions of the entire system, including the plurality of processing units, with a single IC (Integrated Circuit) chip is used. In this way, the various processing units are configured using one or more of the various processors as a hardware structure.
More specifically, the hardware structure of these various processors is electric circuitry in which circuit elements such as semiconductor elements are combined.
The image acquisition unit 21 acquires the CT image Bc0 and the MRI image Bm0 of the brain of a subject who has developed cerebral infarction from the image storage server 3 for learning by the discriminator 26. In addition, for discrimination of an infarct region, the image acquisition unit 21 acquires a CT image Bc1 that is the discrimination target from the image storage server 3. If the CT image Bc0, the CT image Bc1, and the MRI image Bm0 are already stored in the storage 13, the image acquisition unit 21 may acquire them from the storage 13. The image acquisition unit 21 also acquires CT images Bc0 and MRI images Bm0 for a large number of subjects for learning by the discriminator 26 described later.
The infarct region extraction unit 22 extracts the infarct region of the brain from the MRI image Bm0. FIG. 3 is a diagram showing an example of the MRI image Bm0. Although the MRI image Bm0 is a three-dimensional image, for the sake of explanation it is described here using a two-dimensional tomographic image on one tomographic plane of the MRI image Bm0. As shown in FIG. 3, the MRI image Bm0, which is a diffusion weighted image, is an image including only the brain parenchyma, with the skull removed. In the MRI image Bm0, which is a diffusion weighted image, the infarct region has a larger pixel value (lower density) than other regions. The infarct region extraction unit 22 extracts, for example, a region having a pixel value larger than a predetermined threshold value in the MRI image Bm0 as the infarct region. Alternatively, the infarct region may be extracted from the MRI image Bm0 using a discriminator that has been trained to extract infarct regions. In this way, the infarct region A1 shown in FIG. 3 is extracted.
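A minimal sketch of the threshold-based variant of this extraction step, assuming the diffusion weighted volume has already been converted to a NumPy array; the array name, the threshold value, and the small-component cleanup are illustrative assumptions rather than values from the patent:

```python
import numpy as np
from scipy import ndimage

def extract_infarct_a1(dwi: np.ndarray, threshold: float) -> np.ndarray:
    """Return a boolean mask of the infarct region A1: DWI voxels brighter than the threshold."""
    mask = dwi > threshold
    # Optional cleanup: discard tiny connected components that are unlikely to be an infarct.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = np.nonzero(sizes >= 50)[0] + 1            # 50-voxel floor is an arbitrary example
    return np.isin(labels, keep_ids)

# infarct_a1 = extract_infarct_a1(dwi_bm0_array, threshold=400.0)  # names and value are placeholders
```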
The alignment unit 23 performs alignment between the CT image Bc0 and the MRI image Bm0. FIG. 4 is a diagram for explaining the alignment between the CT image Bc0 and the MRI image Bm0. The CT image Bc0 and the MRI image Bm0 are both three-dimensional images, but for the sake of explanation they are described here using two-dimensional tomographic images on one corresponding tomographic plane of each. As shown in FIG. 4, the shape of the brain is substantially the same in the same subject. In the MRI image Bm0, the infarct region has a larger pixel value (lower density) than other regions, whereas in the CT image Bc0 the difference in pixel values between the infarct region and other regions is not as large as in the MRI image Bm0. Unlike the MRI image Bm0, which is a diffusion weighted image, the CT image Bc0 includes the skull as well as the brain parenchyma. Therefore, the alignment unit 23 extracts the brain parenchymal region from the CT image Bc0 as a brain region, and performs alignment between the extracted brain region and the MRI image Bm0.
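One plausible way to realize this brain-region extraction from the CT volume is a Hounsfield-unit window followed by keeping the largest connected component; the HU range and the morphological cleanup below are illustrative assumptions, not values given in the patent:

```python
import numpy as np
from scipy import ndimage

def extract_brain_region_ct(ct_hu: np.ndarray) -> np.ndarray:
    """Rough brain-parenchyma mask from a CT volume expressed in Hounsfield units."""
    soft_tissue = (ct_hu > 0) & (ct_hu < 100)           # excludes air and dense bone (skull)
    soft_tissue = ndimage.binary_opening(soft_tissue, iterations=2)
    labels, n = ndimage.label(soft_tissue)
    if n == 0:
        return soft_tissue
    sizes = ndimage.sum(soft_tissue, labels, range(1, n + 1))
    brain = labels == (np.argmax(sizes) + 1)            # largest component: the brain parenchyma
    return ndimage.binary_fill_holes(brain)

# brain_mask = extract_brain_region_ct(ct_bc0_array)    # ct_bc0_array: CT volume as a NumPy array
```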
 In the present embodiment, the alignment unit 23 aligns one of the CT image Bc0 and the MRI image Bm0 with the other by non-rigid registration. Here the CT image Bc0 is registered to the MRI image Bm0, but the MRI image Bm0 may instead be registered to the CT image Bc0.
 The non-rigid registration may be performed, for example, by nonlinearly mapping feature points in the CT image Bc0 to corresponding points in the MRI image Bm0 using functions such as B-splines or thin plate splines, although the registration method is not limited to this.
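 A hedged sketch of such a non-rigid registration is given below. It uses the SimpleITK library as an assumed tool (the embodiment does not name any library), a B-spline transform, and a mutual-information metric suited to the multi-modal CT-to-MRI setting; the grid size, sampling rate, and optimizer settings are illustrative.

    import SimpleITK as sitk

    def register_ct_to_mri(ct: sitk.Image, mri: sitk.Image) -> sitk.Transform:
        """Estimate a B-spline (non-rigid) transform mapping the CT onto the MRI."""
        # Coarse deformation grid defined over the fixed (MRI) image domain.
        tx = sitk.BSplineTransformInitializer(mri, [8, 8, 8])

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetMetricSamplingStrategy(reg.RANDOM)
        reg.SetMetricSamplingPercentage(0.1)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetInitialTransform(tx, inPlace=True)
        return reg.Execute(sitk.Cast(mri, sitk.sitkFloat32),
                           sitk.Cast(ct, sitk.sitkFloat32))

 Mutual information is chosen here only because CT and MRI intensities are not directly comparable; any similarity measure appropriate for multi-modal registration could be substituted.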
 The infarct region specifying unit 24 specifies the infarct region in the CT image Bc0 based on the result of the alignment by the alignment unit 23. FIG. 5 is a diagram for explaining the specification of the infarct region in the CT image Bc0. The CT image Bc0 shown in FIG. 5 has been aligned with the MRI image Bm0. Because the CT image Bc0 has been aligned with the MRI image Bm0, the infarct region specifying unit 24 specifies, as the infarct region A2, the voxels of the CT image Bc0 that correspond to the infarct region A1 extracted from the MRI image Bm0.
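 Continuing the SimpleITK sketch above (an assumed tool, not the embodiment's own procedure), once the transform has been estimated the CT image can be resampled onto the MRI grid, so that the infarct mask A1 extracted from the diffusion weighted image directly indexes the corresponding CT voxels, i.e. the region A2.

    import SimpleITK as sitk

    def resample_ct_into_mri_space(ct: sitk.Image, mri: sitk.Image,
                                   tx: sitk.Transform) -> sitk.Image:
        """Warp the CT (moving image) onto the MRI (fixed image) grid."""
        return sitk.Resample(ct, mri, tx, sitk.sitkLinear, 0.0, sitk.sitkFloat32)

    # Hypothetical usage, given ct, mri, the transform tx, and a boolean
    # infarct_mask in MRI space (all names are assumptions):
    # warped_ct = resample_ct_into_mri_space(ct, mri, tx)
    # ct_values_in_infarct = sitk.GetArrayFromImage(warped_ct)[infarct_mask]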
 The learning unit 25 trains the discriminator 26, which discriminates the infarct region in an input CT image, using the infarct region specified in the CT image Bc0 as teacher data. In the present embodiment, when the CT image Bc1 to be discriminated is input, the discriminator 26 discriminates the infarct region in the CT image Bc1. Specifically, the discriminator 26 classifies each voxel position of the CT image Bc1 into one of two classes, infarct region or non-infarct region, and thereby discriminates the infarct region. To this end, the learning unit 25 acquires feature values from regions of a predetermined size (for example, 3 × 3) within the infarct regions A2 specified in the CT images Bc0 of a plurality of subjects, inputs the acquired feature values to the discriminator 26, and trains the discriminator 26, that is, performs machine learning, so that it outputs a discrimination result indicating an infarct region.
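 A rough sketch of this patch-based training step follows. It assumes each registered CT volume and its infarct mask are available as NumPy arrays and that a support vector machine (one of the learning methods listed later in this embodiment) is used as the discriminator; the patch size, sampling counts, and helper names are assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def sample_patches(ct: np.ndarray, mask: np.ndarray, size: int = 3,
                       n_per_class: int = 1000, rng=np.random):
        """Sample flattened size^3 CT patches labelled by the infarct mask."""
        half = size // 2
        feats, labels = [], []
        for label_value in (1, 0):                       # infarct, non-infarct
            zs, ys, xs = np.where(mask == label_value)
            idx = rng.choice(len(zs), size=min(n_per_class, len(zs)),
                             replace=False)
            for z, y, x in zip(zs[idx], ys[idx], xs[idx]):
                patch = ct[z-half:z+half+1, y-half:y+half+1, x-half:x+half+1]
                if patch.shape == (size, size, size):    # skip border voxels
                    feats.append(patch.ravel())
                    labels.append(label_value)
        return np.array(feats), np.array(labels)

    # X, y would be accumulated over the CT images Bc0 of many subjects, then:
    # clf = SVC(kernel="rbf").fit(X, y)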
 By performing the training in this way, the discriminator 26 is generated which, when the CT image Bc1 is input, classifies the voxels of the CT image Bc1 into the infarct region and the region outside the infarct region, and thereby discriminates the infarct region in the CT image Bc1 to be discriminated.
 The display control unit 27 displays, on the display 14, the result of discrimination of the CT image Bc1 to be discriminated by the discriminator 26. FIG. 6 shows an example of the display of the discrimination result, using a tomographic image in one cross section of the CT image Bc1 to be discriminated. As shown in FIG. 6, the discrimination result identifies the infarct region A3 in the CT image Bc1 to be discriminated.
 As the machine learning method, a support vector machine (SVM), a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), or the like can be used.
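 If a convolutional neural network is chosen, a minimal 3-D patch classifier might look like the following PyTorch sketch; the framework, layer sizes, and patch dimensions are assumptions, not part of the embodiment.

    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        """Tiny 3-D CNN that scores a CT patch as infarct vs. non-infarct."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, 2)   # infarct / non-infarct logits

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, depth, height, width) CT patches
            h = self.features(x).flatten(1)
            return self.classifier(h)

    # Example: logits = PatchCNN()(torch.randn(4, 1, 9, 9, 9))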
 Next, the processing performed in the present embodiment will be described. FIG. 7 is a flowchart showing the processing performed during training in the present embodiment. First, the image acquisition unit 21 acquires the CT image Bc0 and the MRI image Bm0 of the brain of a subject who has developed a cerebral infarction (step ST1), and the infarct region extraction unit 22 extracts the infarct region A1 from the MRI image Bm0 (step ST2). Next, the alignment unit 23 aligns the CT image Bc0 with the MRI image Bm0 (step ST3), and the infarct region specifying unit 24 specifies the infarct region A2 in the CT image Bc0 based on the alignment result (step ST4). The learning unit 25 then trains the discriminator 26, which discriminates the infarct region in the input CT image Bc1, using the infarct region specified in the CT image Bc0 as teacher data (step ST6), and the processing ends.
 FIG. 8 is a flowchart showing the processing performed when discriminating the infarct region in the present embodiment. First, the image acquisition unit 21 acquires the CT image Bc1 to be discriminated (step ST11), and the discriminator 26 discriminates the infarct region in the CT image to be discriminated (step ST12). The display control unit 27 then displays the discrimination result on the display 14 (step ST13), and the processing ends.
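 A hedged sketch of the discrimination phase is shown below, reusing the SVM classifier clf and the 3 × 3 × 3 patches from the earlier training sketch; the brute-force sliding window is written for clarity rather than speed, and all names are assumptions.

    import numpy as np

    def discriminate_infarct(ct: np.ndarray, clf, size: int = 3) -> np.ndarray:
        """Label every interior voxel of a CT volume as infarct (1) or not (0).

        `clf` is the classifier trained in the earlier sketch (an assumption);
        a real implementation would batch the predictions instead of looping.
        """
        half = size // 2
        out = np.zeros(ct.shape, dtype=np.uint8)
        for z in range(half, ct.shape[0] - half):
            for y in range(half, ct.shape[1] - half):
                for x in range(half, ct.shape[2] - half):
                    patch = ct[z-half:z+half+1, y-half:y+half+1, x-half:x+half+1]
                    out[z, y, x] = clf.predict(patch.ravel()[None, :])[0]
        return out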
 As described above, in the present embodiment, the CT image Bc0 and the MRI image Bm0 of the brain of a subject who has developed a cerebral infarction are acquired, the infarct region is extracted from the MRI image Bm0, and the CT image Bc0 and the MRI image Bm0 are aligned. The infarct region A1 in the CT image Bc0 is then specified based on the alignment result, and the discriminator 26 that discriminates the infarct region in the input CT image Bc1 is trained using the specified infarct region A1 as teacher data.
 In a CT image, a bleeding region has pixel values that differ greatly from those of the other regions, so it is easy to specify the bleeding region in the CT image. In contrast, although the infarct region in a CT image has pixel values different from those of the other regions, the difference is smaller than that between a bleeding region and the other regions. In an MRI image, on the other hand, the infarct region has pixel values that differ greatly from those of the other regions; in a diffusion weighted image in particular, an acute-phase infarct region has pixel values that differ greatly from those of the other regions. Therefore, if the MRI image Bm0 and the CT image Bc0 of the brain of a subject who has developed a cerebral infarction are aligned, the infarct region A2 in the CT image Bc0 can be specified based on the infarct region in the MRI image Bm0. Accordingly, by training the discriminator 26 with the specified infarct region A2 as teacher data, the trained discriminator 26 can discriminate the infarct region in the CT image Bc1 to be discriminated. This makes it possible to discriminate not only a bleeding region but also an infarct region of the brain using only a CT image. According to the present embodiment, it is therefore possible to quickly diagnose whether or not a cerebral infarction has developed.
 In the above embodiment, a diffusion weighted image is used as the MRI image Bm0, but an MRI image other than a diffusion weighted image may be used; for example, a T1-weighted image, a T2-weighted image, or a FLAIR (FLuid-Attenuated Inversion Recovery) image may be used. A plurality of MRI images may also be used in combination.
 In the above embodiment, a non-contrast CT image or a contrast CT image is used as the CT image Bc0 for training the discriminator 26, but both contrast and non-contrast CT images may be used for the training. By using the discriminator 26 trained in this way, the infarct region can be discriminated regardless of whether the CT image to be discriminated is a contrast CT image or a non-contrast CT image.
 In the above embodiment, the infarct region discriminating apparatus 1 contains the learning device, but the present disclosure is not limited to this. That is, the diagnosis support system may include, separately from the infarct region discriminating apparatus 1, a learning device comprising the image acquisition unit 21, the infarct region extraction unit 22, the alignment unit 23, the infarct region specifying unit 24, and the learning unit 25, which performs the training of the discriminator 26. In this case, the infarct region discriminating apparatus 1 includes only the image acquisition unit 21, the discriminator 26, and the display control unit 27.
DESCRIPTION OF SYMBOLS
1  Infarct region discriminating apparatus
2  Three-dimensional image capturing apparatus
3  Image storage server
4  Network
11  CPU
12  Memory
13  Storage
14  Display
15  Input unit
21  Image acquisition unit
22  Infarct region extraction unit
23  Alignment unit
24  Infarct region specifying unit
25  Learning unit
26  Discriminator
27  Display control unit
A1, A2, A3  Infarct region
Bc0  CT image
Bc1  CT image to be discriminated
Bm0  MRI image

Claims (10)

  1.  A learning device for a discriminator, comprising:
      an image acquisition unit that acquires a CT image of the brain of a subject who has developed a cerebral infarction and an MRI image of the brain of the same subject;
      an infarct region extraction unit that extracts an infarct region from the MRI image;
      an alignment unit that aligns the CT image and the MRI image;
      an infarct region specifying unit that specifies the infarct region in the CT image based on a result of the alignment; and
      a learning unit that trains the discriminator, which discriminates an infarct region in an input CT image, using the infarct region specified in the CT image as teacher data.
  2.  The learning device for a discriminator according to claim 1, wherein the MRI image is a diffusion weighted image.
  3.  The learning device for a discriminator according to claim 2, wherein the alignment unit extracts a brain region from the CT image and aligns the extracted brain region with the diffusion weighted image.
  4.  A discriminator trained by the learning device for a discriminator according to any one of claims 1 to 3.
  5.  An infarct region discriminating apparatus comprising:
      an image acquisition unit that acquires a CT image to be subjected to infarct region discrimination; and
      the discriminator according to claim 4, which discriminates the infarct region in the CT image to be discriminated.
  6.  The infarct region discriminating apparatus according to claim 5, further comprising a display control unit that displays a discrimination result obtained by the discriminator on a display unit.
  7.  A learning method for a discriminator, comprising:
      acquiring a CT image of the brain of a subject who has developed a cerebral infarction and an MRI image of the brain of the same subject;
      extracting an infarct region from the MRI image;
      aligning the CT image and the MRI image;
      specifying the infarct region in the CT image based on a result of the alignment; and
      training the discriminator, which discriminates an infarct region in an input CT image, using the infarct region specified in the CT image as teacher data.
  8.  An infarct region discriminating method comprising:
      acquiring a CT image to be subjected to infarct region discrimination; and
      discriminating the infarct region in the CT image to be discriminated by using the discriminator according to claim 4.
  9.  A learning program for a discriminator, causing a computer to execute:
      a procedure of acquiring a CT image of the brain of a subject who has developed a cerebral infarction and an MRI image of the brain of the same subject;
      a procedure of extracting an infarct region from the MRI image;
      a procedure of aligning the CT image and the MRI image;
      a procedure of specifying the infarct region in the CT image based on a result of the alignment; and
      a procedure of training the discriminator, which discriminates an infarct region in an input CT image, using the infarct region specified in the CT image as teacher data.
  10.  An infarct region discrimination program causing a computer to execute:
      a procedure of acquiring a CT image to be subjected to infarct region discrimination; and
      a procedure of discriminating the infarct region in the CT image to be discriminated by using the discriminator according to claim 4.
PCT/JP2019/016052 2018-05-09 2019-04-12 Learning device, method, and program for classifier for classifying infarct region, classifier for classifying infarct region, and device, method and program for classifying infarct region WO2019216125A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-090344 2018-05-09
JP2018090344 2018-05-09

Publications (1)

Publication Number Publication Date
WO2019216125A1

Family ID: 68468072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/016052 WO2019216125A1 (en) 2018-05-09 2019-04-12 Learning device, method, and program for classifier for classifying infarct region, classifier for classifying infarct region, and device, method and program for classifying infarct region

Country Status (1)

Country Link
WO (1) WO2019216125A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018011958A (en) * 2016-07-21 2018-01-25 東芝メディカルシステムズ株式会社 Medical image processing apparatus and medical image processing program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NAGASHIMA, HIROYUKI ET AL.: "Computerized Detection of Acute Ischemic Stroke in Brain Computed Tomography Images", MEDICAL IMAGING TECHNOLOGY, vol. 27, no. 1, January 2009 (2009-01-01), pages 30 - 38 *
ROMAN, PETER ET AL.: "A Quantitative Symmetry- based Analysis of Hyperacute Ischemic Stroke Lesions in Non-Contrast Computed Tomography", MEDICAL PHYSICS, vol. 44, no. 1, January 2017 (2017-01-01), pages 192 - 199, XP055645068 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020032045A (en) * 2018-08-31 2020-03-05 富士フイルム株式会社 Learning device, method, and program for discriminator for discriminating infraction region and discriminator for discriminating infraction region, and infraction region discrimination device, method, and program
WO2021177374A1 (en) * 2020-03-04 2021-09-10 株式会社Kompath Image processing device, image processing model generation device, learning data generation device, and program
JPWO2021177374A1 (en) * 2020-03-04 2021-09-10
JP7323885B2 (en) 2020-03-04 2023-08-09 株式会社Kompath Image processing device, image processing model generation device, learning data generation device, and program
CN113506634A (en) * 2021-07-15 2021-10-15 南京易爱医疗设备有限公司 Brain simulation system
CN113506634B (en) * 2021-07-15 2024-04-09 南京易爱医疗设备有限公司 Brain Simulation System

Similar Documents

Publication Publication Date Title
JP7129869B2 (en) Disease area extraction device, method and program
Leung et al. Brain MAPS: an automated, accurate and robust brain extraction technique using a template library
US11049251B2 (en) Apparatus, method, and program for learning discriminator discriminating infarction region, discriminator for discriminating infarction region, and apparatus, method, and program for discriminating infarction region
JP7018856B2 (en) Medical image processing equipment, methods and programs
WO2019216125A1 (en) Learning device, method, and program for classifier for classifying infarct region, classifier for classifying infarct region, and device, method and program for classifying infarct region
US11244455B2 (en) Apparatus, method, and program for training discriminator discriminating disease region, discriminator discriminating disease region, disease region discrimination apparatus, and disease region discrimination program
JP6945493B2 (en) Medical image processing equipment, methods and programs
WO2020054188A1 (en) Medical image processing device, method, and program
JP6981940B2 (en) Diagnostic imaging support devices, methods and programs
WO2019102917A1 (en) Radiologist determination device, method, and program
US11551351B2 (en) Priority judgement device, method, and program
JP2019213785A (en) Medical image processor, method and program
US11176413B2 (en) Apparatus, method, and program for training discriminator discriminating disease region, discriminator discriminating disease region, disease region discrimination apparatus, and disease region discrimination program
US11328414B2 (en) Priority judgement device, method, and program
WO2021084916A1 (en) Region identification device, method, and program
WO2019150717A1 (en) Interlobar membrane display device, method, and program
JP2021175454A (en) Medical image processing apparatus, method and program
JP6765396B2 (en) Medical image processing equipment, methods and programs
US20240037738A1 (en) Image processing apparatus, image processing method, and image processing program
WO2020262681A1 (en) Learning device, method, and program, medical image processing device, method, and program, and discriminator
JP7361930B2 (en) Medical image processing device, method and program
WO2023121005A1 (en) Method for outputting classification information on basis of artificial nerual network and apparatus therefor
JP6845480B2 (en) Diagnostic support devices, methods and programs
JP2023091190A (en) Medical image diagnosis support device, medical image diagnosis support method, and program
JP2023168719A (en) Medical image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19799760

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19799760

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP