CN111278348A - Diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing diagnosis support program for disease based on endoscopic image of digestive organ - Google Patents
- Publication number
- CN111278348A (application number CN201880037797.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- digestive organ
- endoscopic
- disease
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012—Biomedical image inspection
- A61B1/000096—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
- A61B1/00016—Operational features of endoscopes characterised by signal transmission using wireless means
- G06T7/11—Region-based segmentation
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
- A61B1/273—Instruments for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
- G06T2207/10024—Color image
- G06T2207/10068—Endoscopic image
- G06T2207/20076—Probabilistic image processing
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30028—Colon; Small intestine
- G06T2207/30032—Colon polyp
- G06T2207/30092—Stomach; Gastric
- G06T2207/30096—Tumor; Lesion
Abstract
The invention provides a method of supporting the diagnosis of a disease based on endoscopic images of a digestive organ, using a neural network. The method comprises: training a neural network using first endoscopic images of a digestive organ together with corresponding determination results of at least one of whether the disease of the digestive organ is positive or negative, a past disease, a severity level, or information corresponding to the imaged site. The trained neural network then outputs, based on a second endoscopic image of the digestive organ, at least one of the probability that the disease of the digestive organ is positive and/or negative, the probability of a past disease, the severity level of the disease, or information corresponding to the imaged site.
Description
Technical Field
The present invention relates to a method, a system, and a program for assisting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network (neural network), and a computer-readable recording medium storing the program.
Background
Endoscopic examination is frequently performed on digestive organs such as the larynx, pharynx, esophagus, stomach, duodenum, biliary tract, pancreatic duct, small intestine, and large intestine. Upper gastrointestinal endoscopy is often performed to screen for gastric cancer, esophageal cancer, peptic ulcer, reflux gastritis, and the like, and endoscopy of the large intestine is often performed to screen for large intestine cancer, large intestine polyps, ulcerative colitis, and the like. Upper gastrointestinal endoscopy is also useful for the detailed examination of various upper abdominal symptoms, for precise examination after a positive barium test for gastric disease, and for the workup of abnormal serum pepsinogen levels found in the routine health checkups widely adopted in Japan. In recent years, the diagnosis of gastric cancer has been shifting from conventional barium examination to gastroscopy.
Gastric cancer is one of the most common malignant tumors, with an estimated one million new cases occurring worldwide a few years ago. Infection with Helicobacter pylori (hereinafter sometimes referred to as "H. pylori"), one of the root causes of gastric cancer, induces atrophic gastritis and intestinal metaplasia and, ultimately, gastric cancer. It is believed that 98% of non-cardia gastric cancers worldwide are attributable to H. pylori. Considering that patients infected with H. pylori have an increased risk of gastric cancer and that the incidence of gastric cancer decreases after H. pylori eradication, the International Agency for Research on Cancer (IARC) has classified H. pylori as a definite carcinogen. Accordingly, H. pylori eradication is useful for reducing the risk of gastric cancer; eradication using antibacterial drugs is covered by health insurance in Japan and is a treatment strongly promoted from a public health standpoint.
Gastric endoscopy provides extremely useful information for the differential diagnosis of the presence or absence of H. pylori infection. A clearly visible regular arrangement of collecting venules (RAC) and fundic gland polyps are characteristic of H. pylori-negative gastric mucosa, whereas atrophy, redness, mucosal swelling, and fold hypertrophy are typical findings of H. pylori-infected gastritis. An accurate endoscopic impression of H. pylori infection is confirmed by various tests, such as measurement of anti-H. pylori IgG levels in blood or urine, fecal antigen measurement, the urea breath test, or the rapid urease test, and patients with positive findings can undergo H. pylori eradication. Endoscopy is widely used to examine gastric lesions, but if H. pylori infection could be identified at the time the gastric lesions are assessed, without clinical specimen analysis, the uniform blood tests, urine tests, and the like could be omitted, greatly reducing the burden on patients, and a medical-economic contribution could be expected.
In addition, the incidence of ulcerative colitis (UC) is steadily increasing, particularly in advanced industrial countries, suggesting that the diet or environment of Europe and the United States is one of its causes. Treatment options for ulcerative colitis include 5-aminosalicylates (mesalazine), corticosteroids, and anti-TNF monoclonal antibodies; it is important to use these drugs, in accordance with the patient's clinical symptoms and disease activity index, to keep disease activity in remission.
In addition to clinical symptom scores, the extent and severity of disease in ulcerative colitis patients are evaluated mainly by large intestine endoscopy, and previous studies have reported combining this with measurement of endoscopic disease activity. Among such measures, the Mayo endoscopic score is one of the most reliable and widely used indicators in clinical practice for evaluating disease activity in ulcerative colitis patients. Specifically, the Mayo endoscopic score is classified into four grades: 0, normal or inactive disease; 1, mild symptoms (redness, decreased vascular pattern, mild friability); 2, moderate symptoms (marked redness, absent vascular pattern, friability, erosions); and 3, severe symptoms (spontaneous bleeding, ulceration).
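The four-grade scale above maps naturally to a small lookup table. The following is an illustrative sketch only; the function and dictionary names are assumptions, not code from the patent:

```python
# Illustrative mapping of the Mayo endoscopic subscore (0-3) to the
# endoscopic findings described above. Names are hypothetical.
MAYO_ENDOSCOPIC_SUBSCORE = {
    0: "normal or inactive disease",
    1: "mild (redness, decreased vascular pattern, mild friability)",
    2: "moderate (marked redness, absent vascular pattern, friability, erosions)",
    3: "severe (spontaneous bleeding, ulceration)",
}

def describe_mayo(score: int) -> str:
    """Return the endoscopic findings for a Mayo subscore, validating the range."""
    if score not in MAYO_ENDOSCOPIC_SUBSCORE:
        raise ValueError(f"Mayo endoscopic subscore must be 0-3, got {score}")
    return MAYO_ENDOSCOPIC_SUBSCORE[score]
```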
Endoscopic remission, defined as a Mayo endoscopic score of 1 or less and referred to as "mucosal healing", is associated with decreases in corticosteroid use, hospitalization rate, clinical relapse rate, and colectomy rate. However, previous reports show considerable inter-observer differences in the assessment of disease activity: the kappa coefficient ranged from 0.45 to 0.53 among non-experts, and agreement reached only 0.71 to 0.74 even among highly trained experts. The kappa coefficient is a parameter for evaluating the degree of diagnostic agreement between observers; it takes a value from 0 to 1, and the higher the value, the higher the agreement is judged to be.
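The kappa coefficient mentioned above (Cohen's kappa) can be computed from two observers' ratings of the same cases. A minimal sketch, with illustrative names, assuming ratings are given as parallel lists:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance."""
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases on which both raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For identical ratings the function returns 1; values near 0 indicate agreement no better than chance.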
In such endoscopic examinations of digestive organs, many endoscopic images are collected, and for quality control an endoscopist is obliged to double-read them. With tens of thousands of endoscopic examinations performed each year, the number of images an endoscopy specialist must read at the second reading is enormous, about 2,800 images per person, which places a large burden on clinical sites.
Moreover, diagnosis based on these endoscopic images not only requires a great deal of time for the training of endoscopy specialists and for reviewing and storing images, but is also subjective and can produce various false-positive and false-negative judgments. Furthermore, the accuracy of diagnosis by endoscopists can deteriorate due to fatigue. Such a large on-site burden and decline in accuracy may limit the number of examinees, and it is even feared that medical services matching demand cannot be adequately provided.
To relieve this workload and the decline in accuracy of endoscopic examination, effective use of AI (artificial intelligence) is anticipated. If AI, whose image recognition capability has in recent years come to exceed that of humans, can be used to support endoscopy specialists, the accuracy and speed of the second-reading work are expected to improve. In recent years, AI using deep learning has attracted attention in various medical fields, and there are reports of AI screening medical images in place of specialists in fields such as radiation oncology, skin cancer classification, and diabetic retinopathy.
In particular, it has been demonstrated that AI can achieve the same accuracy as specialists in screening at the endocytoscopy level (Non-Patent Document 1). It has also been shown that AI with deep learning can exhibit image-diagnosis capability equivalent to that of specialists in dermatology (Non-Patent Document 2), and there are patent documents that use various machine-learning methods (see Patent Documents 1 and 2). However, whether the endoscopic image-diagnosis capability of AI satisfies the accuracy and performance (speed) required at actual clinical sites has not been verified, and diagnosis of endoscopic images by machine-learning methods has not yet been put to practical use.
Deep learning can use a neural network formed by stacking a plurality of layers to learn high-order feature values from input data. In addition, deep learning can use the back-propagation algorithm to indicate how the device should change the internal parameters that are used to compute the representation of each layer from the representation of the previous layer.
When applied to medical images, deep learning can become an efficient machine learning technique that can be trained using medical images accumulated in the past, so that the clinical features of patients can be obtained directly from the medical images. A neural network is a mathematical model that expresses the characteristics of the neural circuits of the brain by simulation on a computer, and the algorithmic framework that supports deep learning is the neural network.
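As a schematic illustration only (the patent does not disclose an implementation), the parameter-update idea behind back-propagation can be sketched with a single logistic unit trained by gradient descent; the "lesion score" feature and labels below are invented for the sketch:

```python
import math

def sigmoid(z):
    # Logistic activation: maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=2000):
    # Stochastic gradient descent on one weight and one bias.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)  # forward pass
            grad = p - y            # dLoss/dz for cross-entropy loss
            w -= lr * grad * x      # backward pass: update parameters
            b -= lr * grad
    return w, b

# Separable toy data: a hypothetical "lesion score" vs. disease positivity.
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
w, b = train(data)
print(sigmoid(w * 0.15 + b) < 0.5)  # a low score is classified negative
print(sigmoid(w * 0.85 + b) > 0.5)  # a high score is classified positive
```

A real network stacks many such units into layers and propagates the gradient through all of them, but the update rule per parameter is the same in spirit.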
Prior art documents
Patent document
Patent document 1: Japanese Patent Laid-Open Publication No. 2017-045341
Patent document 2: Japanese Patent Laid-Open Publication No. 2017-067489
Patent document 3: Japanese Patent Laid-Open Publication No. 2013-113689
Non-patent document
Non-patent document 1: http://www.giejournal.org/article/S0016-5107(14)02171-3/fulltext, "Novel computer-aided diagnostic system for colorectal lesions by using endocytoscopy", Yuichi Mori et al., presented at Digestive Disease Week 2014, May 3-6, 2014, Chicago, Illinois, USA
Non-patent document 2: Nature, February 2017 issue, highlight article "Learning about skin lesions: the ability of a trained artificial intelligence to detect skin cancer from images" (http://www.natureasia.com/ja-jp/nature/highlights/82762)
Non-patent document 3: http://gentikyo.or.jp/about/pdf/report/131119.pdf
Non-patent document 4: kumagai Y, Monma K, Kawada K. "magnetizing chromoendoscopis of our esphagus". in-vivo clinical diagnosis using an endocytocopy system ", Endoscope, 2004, vol.36, p590-4.
Non-patent document 5: esteva A, Kuprel B, Novoa RA, et al, "Dermatologic-level Classification of skin cancer with deep neural networks", Nature,2017, vol.542, p115-8
Disclosure of Invention
[ problems to be solved by the invention ]
In the interpretation of endoscopic images in endoscopic examination of the digestive tract, achieving efficiency while maintaining high accuracy is a major problem. In addition, when attempting to make effective use of AI for image analysis in this field, improvement of the AI technique itself is a major issue. Further, as far as the present inventors know, there have been no reports on the capability of a neural network to diagnose H. pylori-infected gastritis, ulcerative colitis, or esophageal diseases, in particular esophageal diseases using endocytoscopy (ECS) images, and no examples in which a neural network is optimized and installed at a medical site for use in examination of H. pylori-infected gastritis, ulcerative colitis, or esophageal diseases based on deep-learning analysis of these endoscopic images; the present inventors therefore took these as the problems to be solved.
In upper endoscopic examination, images of not only the stomach but also a plurality of organs such as the larynx, the pharynx, the esophagus, and the duodenum are mixed, and it is considered that, particularly in gastric cancer endoscopic screening, if the collected images can be classified by organ or part by AI, the burden on the endoscopic specialist of filling in observation results and performing the second reading is reduced.
An object of the present invention is to provide a disease diagnosis support method, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium storing the diagnosis support program, which are capable of accurately diagnosing a disease such as H. pylori-infected gastritis, ulcerative colitis, or an esophageal disease using an endoscopic image of a digestive organ and using a neural network.
[ means for solving problems ]
The 1st aspect of the present invention is a disease diagnosis support method based on endoscopic images of a digestive organ using a neural network, comprising: training a neural network using a 1st endoscopic image of a digestive organ and a result of determination of at least one of the positivity or negativity of a disease of the digestive organ corresponding to the 1st endoscopic image, a past disease, a severity level, and information corresponding to the imaged part; wherein the trained neural network outputs, based on a 2nd endoscopic image of a digestive organ, at least one of the probability of positivity and/or negativity of a disease of the digestive organ, the probability of a past disease, the severity level of the disease, or information corresponding to the imaged part.
According to the disease diagnosis support method based on endoscopic images of digestive organs using a neural network of this aspect, the neural network is trained based on the 1st endoscopic images, which include endoscopic images of a plurality of digestive organs obtained in advance for a plurality of subjects, and the corresponding determination results of at least one of the positivity or negativity of the disease, a past disease, the severity level, and information corresponding to the imaged part. Therefore, at least one of the probability of positivity and/or negativity of a disease of the digestive organ of a subject, the probability of a past disease, the severity level of a disease, and information corresponding to the imaged part can be obtained in a short time with accuracy substantially comparable to that of an endoscopic specialist, and subjects who need separate definitive diagnosis can be screened in a short time. Further, since these outputs can be produced automatically for test data including endoscopic images of a plurality of digestive organs of a plurality of subjects, examination and correction by an endoscopic specialist are facilitated, and the work of creating disease-related image sets can be omitted.
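The probabilities described above are typically obtained by converting the final-layer scores of a network into values that sum to 1; as a sketch under that common assumption (the patent does not specify the output layer), a softmax over hypothetical positive/negative scores looks like this:

```python
import math

def softmax(scores):
    # Convert final-layer scores into probabilities that sum to 1.
    # Subtracting the maximum first keeps exp() numerically stable.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical final-layer scores for one 2nd endoscopic image:
# index 0 = disease-positive, index 1 = disease-negative.
probs = softmax([2.0, 0.5])
print(probs[0])       # probability of positivity
print(sum(probs))     # the two probabilities sum to 1
```

A threshold (for example 0.5) on `probs[0]` would then decide which subjects are screened for separate definitive diagnosis.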
A diagnosis support method for a disease based on endoscopic images of a digestive organ using a neural network according to a 2nd aspect of the present invention is the diagnosis support method according to the 1st aspect, characterized in that: the 1st endoscopic image is adjusted in contrast.
According to the disease diagnosis support method based on endoscopic images of digestive organs using a neural network of the 2nd aspect, since the contrast of all the 1st endoscopic images is adjusted so that they have substantially the same gray scale, the accuracy of detecting the probability of each of the positivity and negativity of the disease and the severity level is improved.
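One simple form of such contrast adjustment is to rescale every image linearly so that its intensities span the same range; this is a sketch of that idea only, not the patent's disclosed preprocessing:

```python
def normalize_contrast(pixels, lo=0, hi=255):
    # Linearly rescale pixel intensities so every image spans the same
    # gray-scale range [lo, hi] (contrast stretching).
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return [lo] * len(pixels)  # flat image: no contrast to stretch
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

# A dim image with a narrow intensity range is stretched to full scale:
print(normalize_contrast([40, 50, 60]))  # -> [0, 128, 255]
```

After this step, images captured under different illumination conditions present comparable gray scales to the network during training.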
Further, a 3rd aspect of the present invention is the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to the 1st or 2nd aspect, wherein: the 1st endoscopic image is associated with each of the imaged parts.
With an untrained neural network, it is sometimes difficult to identify which part of which digestive organ a specific endoscopic image shows. According to the disease diagnosis support method based on endoscopic images of digestive organs using a neural network of the 3rd aspect, since a neural network trained using endoscopic images classified by part is used, detailed training corresponding to each part can be performed on the neural network, and therefore the accuracy of detecting, for the 2nd endoscopic image, the probability of each of the positivity and negativity of the disease, the probability of a past disease, the severity level of the disease, and the like is improved.
A 4th aspect of the present invention is the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to the 3rd aspect, wherein: the site includes at least one of the pharynx, the esophagus, the stomach, or the duodenum.
According to the method for supporting diagnosis of a disease based on endoscopic images of digestive organs using a neural network of the 4th aspect, each of the pharynx, the esophagus, the stomach, and the duodenum can be accurately classified, and therefore the accuracy of detecting, for each site, the probability of each of the positivity and negativity of the disease, the probability of a past disease, the severity level of the disease, and the like is improved.
Further, a method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to a 5th aspect of the present invention is the method according to the 3rd or 4th aspect, characterized in that: the site is divided into a plurality of portions in at least one of the plurality of digestive organs.
Since each digestive organ has a complicated shape, if the number of classifications of the sites is small, it may be difficult to identify which portion of the digestive organ a specific endoscopic image shows.
According to the disease diagnosis support method based on endoscopic images of digestive organs using a neural network of the 5th aspect, since the digestive organs are divided into a plurality of portions, a highly accurate diagnosis result can be obtained in a short time.
Further, a method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to a 6th aspect of the present invention is the method according to the 5th aspect, wherein: when the site is the stomach, the division includes at least one of an upper stomach, a middle stomach, or a lower stomach. Further, a 7th aspect of the present invention is the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to the 5th aspect, wherein: when the site is the stomach, the division includes at least one of a cardia, a fundus, a gastric body, a gastric angle, an antrum, a prepyloric region, or a pylorus.
According to the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to the 6th or 7th aspect of the present invention, since each division or each portion of the stomach can be accurately classified, the accuracy of detecting, for each division or portion, the probability of each of the positivity and negativity of the disease, the probability of a past disease, the severity level of the disease, and the like is improved. The divisions or portions of the stomach may be selected appropriately as needed.
Further, a method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to an 8th aspect is the method according to any one of the 3rd to 7th aspects, characterized in that: when the number of 1st endoscopic images of an imaged site is smaller than that of other sites, the numbers of 1st endoscopic images for all the sites are made substantially equal by using at least one of rotation, enlargement, reduction, change in the number of pixels, extraction of bright and dark portions, and extraction of color-tone-changed portions of the 1st endoscopic images.
The following case may arise: the numbers of 1st endoscopic images, which include endoscopic images of a plurality of digestive organs obtained in advance for a plurality of subjects, differ greatly from site to site. According to the disease diagnosis support method based on endoscopic images of digestive organs using a neural network of the 8th aspect, even in such a case, the number of images used to train the neural network can be increased by using at least one of the 1st endoscopic images of the site that are appropriately rotated, enlarged, reduced, changed in the number of pixels, extracted at bright and dark portions, or extracted at color-tone-changed portions; therefore, the difference between sites in the accuracy of detecting the probability of each of the positivity and negativity of the disease, the probability of a past disease, the severity level of the disease, and the like is reduced. In the method of the 8th aspect, the numbers of endoscopic images of the respective sites do not have to be exactly the same; it is sufficient that the difference in their numbers is small.
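The augmentation described above can be sketched on a tiny pixel grid; this sketch implements only two of the named operations (rotation and bright/dark variants), with invented pixel values:

```python
def rotate90(img):
    # Rotate a 2-D pixel grid 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def adjust_brightness(img, delta):
    # Shift every pixel by delta, clamped to the 0-255 range,
    # to produce a brighter or darker variant of the image.
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

def augment(img):
    # Generate extra training images from one original: the original,
    # three rotations, and one brighter and one darker variant.
    out = [img]
    cur = img
    for _ in range(3):
        cur = rotate90(cur)
        out.append(cur)
    out.append(adjust_brightness(img, 40))
    out.append(adjust_brightness(img, -40))
    return out

img = [[10, 20], [30, 40]]
variants = augment(img)
print(len(variants))  # 6 training images derived from one original
print(variants[1])    # -> [[30, 10], [40, 20]]
```

Applying such operations only to under-represented sites brings the per-site image counts closer together, as the 8th aspect requires.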
A method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to a 9th aspect of the present invention is the method according to any one of the 3rd to 8th aspects, characterized in that: the trained neural network can output information corresponding to the site where the 2nd endoscopic image was captured.
Generally, many endoscopic images or continuous video images of the digestive organs are obtained for a single subject. According to the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network of the 9th aspect, the probability of each of the positivity and negativity of the disease and the site name are output for each of the 2nd endoscopic images, and therefore the positive sites or the distribution of the disease can be easily understood.
A disease diagnosis support method based on endoscopic images of a digestive organ using a neural network according to a 10th aspect of the present invention is the method according to the 9th aspect, characterized in that: the trained neural network outputs the probability or the severity together with the information corresponding to the site.
According to the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network of the 10th aspect, since the probability, the severity, and the site name are output in descending order of the probability that the disease is positive or of its severity, the sites that need precise examination can be easily understood.
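The ordered output described above amounts to sorting the per-image results by positive probability; a minimal sketch, with invented site names and probabilities:

```python
def rank_findings(results):
    # Sort per-image outputs so the sites most probably positive for the
    # disease come first, matching the descending order of the 10th aspect.
    return sorted(results, key=lambda r: r["p_positive"], reverse=True)

# Hypothetical per-image outputs: site name plus positive probability.
results = [
    {"part": "gastric body", "p_positive": 0.12},
    {"part": "antrum", "p_positive": 0.87},
    {"part": "cardia", "p_positive": 0.45},
]
ranked = rank_findings(results)
print([r["part"] for r in ranked])  # the antrum, listed first, most needs precise examination
```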
Further, a method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to an 11th aspect is the method according to any one of the 1st to 10th aspects, characterized in that: the 1st endoscopic image includes an intragastric endoscopic image, and the disease includes at least one of Helicobacter pylori infection or the presence or absence of Helicobacter pylori eradication.
According to the disease diagnosis support method based on endoscopic images of digestive organs using a neural network of the 11th aspect, the probability of each of the positivity and negativity of Helicobacter pylori infection, or the presence or absence of Helicobacter pylori eradication, in a subject can be predicted with accuracy comparable to that of a specialist of the Japan Gastroenterological Endoscopy Society, and subjects who need separate definitive diagnosis can be accurately screened in a short time. The definitive diagnosis can be made by subjecting the screened subjects to measurement of the anti-H. pylori IgG level in blood or urine, a fecal antigen test, or a urea breath test.
A method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to a 12th aspect is the method according to any one of the 1st to 10th aspects, characterized in that: the 1st endoscopic image includes a large intestine endoscopic image, the disease includes at least ulcerative colitis, and the trained neural network outputs the severity of the ulcerative colitis divided into a plurality of stages.
According to the method for supporting diagnosis of a disease based on endoscopic images of digestive organs using a neural network of the 12th aspect, the severity of ulcerative colitis in a subject can be predicted with accuracy comparable to that of a specialist of the Japan Gastroenterological Endoscopy Society, and therefore subjects who need a separate precise examination can be accurately screened in a short time.
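Output "divided into a plurality of stages" can be sketched as a mapping from a continuous severity score to graded stages; the stage names and thresholds below are invented for illustration and are not the patent's grading scheme:

```python
def severity_stage(score):
    # Map a continuous severity score in [0, 1] from the network onto
    # graded stages. Thresholds and labels are illustrative assumptions.
    if score < 0.25:
        return "remission"
    if score < 0.5:
        return "mild"
    if score < 0.75:
        return "moderate"
    return "severe"

print(severity_stage(0.1))  # -> remission
print(severity_stage(0.8))  # -> severe
```

In practice the stages could equally be produced directly as a multi-class output, one class per severity grade.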
A method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to a 13th aspect of the present invention is the method according to any one of the 1st to 10th aspects, characterized in that: the 1st endoscopic image includes an esophageal endoscopic image obtained by using an ultra-magnifying endoscope, the disease includes at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis, and the trained neural network distinguishes and outputs the at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis.
According to the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network of the 13th aspect, overdiagnosis of a non-cancerous lesion as cancerous can be suppressed to a minimum based on the esophageal endoscopic image obtained by using the ultra-magnifying endoscope, and therefore the number of subjects who need a tissue biopsy can be reduced.
Further, a 14th aspect of the present invention is the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to any one of the 1st to 13th aspects, characterized in that: the 2nd endoscopic image is at least one of an image captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded in a computer-readable recording medium, or a moving image.
According to the disease diagnosis support method based on endoscopic images of a digestive organ of the 14th aspect, the probability or severity of each of the positivity and negativity of a disease of the digestive organ can be output in a short time for the input 2nd endoscopic image, and therefore the method can be used regardless of the input form of the 2nd endoscopic image, for example even for images or moving images transmitted from a remote location.
As the communication network, the Internet, an intranet, an extranet, a LAN (Local Area Network), an ISDN (Integrated Services Digital Network), a VAN (Value-Added Network), a CATV (Community Antenna Television) communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, and the like are well known. As the transmission medium constituting the communication network, there can be used well-known wired lines such as an IEEE 1394 serial bus, a USB (Universal Serial Bus), a power-line carrier, a cable television line, a telephone line, or an ADSL (Asymmetric Digital Subscriber Line) line, and well-known wireless lines such as infrared rays, Bluetooth (registered trademark), IEEE 802.11, a cellular phone network, a satellite line, or a terrestrial digital network. This makes it possible to use the method as a cloud service or a remote support service.
As the computer-readable recording medium, there can be used well-known tape systems such as a magnetic tape or a cassette tape; disk systems including magnetic disks such as a floppy (registered trademark) disk or a hard disk, and optical discs such as a CD-ROM (Compact Disc Read-Only Memory), an MO (Magneto-Optical disc), an MD (MiniDisc), a DVD (Digital Versatile Disc), or a CD-R; card systems such as an IC card, a memory card, or an optical card; and semiconductor memory systems such as a mask ROM, an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), or a flash ROM. This makes it possible to provide a form in which the system can be easily transferred to or installed in, for example, a medical facility or an examination facility.
Further, a 15th aspect of the present invention is the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to any one of the 1st to 14th aspects, wherein: a convolutional neural network is used as the neural network.
A convolutional neural network is a type of neural network modeled on the characteristics of the visual cortex of the brain, and it has an extremely excellent ability to learn from images. Therefore, the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to the 15th aspect has higher sensitivity and higher specificity, and thus has very high usefulness.
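The core operation of such a network is the convolution of a small kernel over the image; a minimal sketch (the toy image and the edge-detecting kernel are invented for illustration):

```python
def conv2d(img, kernel):
    # Valid-mode 2-D convolution (strictly, cross-correlation, as in most
    # deep-learning libraries): slide the kernel over the image and take
    # the elementwise product-sum at each position.
    kh, kw = len(kernel), len(kernel[0])
    oh = len(img) - kh + 1
    ow = len(img[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 1x2 difference kernel responds strongly at the 0 -> 9 boundary,
# illustrating how learned kernels pick out local image features.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edge = [[1, -1]]
print(conv2d(img, edge))  # -> [[0, -9, 0], [0, -9, 0], [0, -9, 0]]
```

A convolutional neural network stacks many such filters, learning the kernel values themselves during training instead of fixing them by hand.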
Further, a disease diagnosis support system based on endoscopic images of a digestive organ according to a 16th aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer in which a neural network is incorporated, the computer comprising: a 1st storage area that stores a 1st endoscopic image of a digestive organ; a 2nd storage area that stores a result of determination of at least one of the positivity or negativity of a disease of the digestive organ corresponding to the 1st endoscopic image, a past disease, a severity level, and information corresponding to the imaged part; and a 3rd storage area that stores a neural network program; wherein the neural network program is trained based on the 1st endoscopic image stored in the 1st storage area and the determination result stored in the 2nd storage area, and outputs to the output unit, based on a 2nd endoscopic image of a digestive organ input from the endoscopic image input unit, at least one of the probability of positivity and/or negativity of a disease of the digestive organ for the 2nd endoscopic image, the probability of a past disease, the severity level of the disease, or information corresponding to the imaged part.
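The three storage areas and the input/output units named in this aspect can be sketched as a simple class; the stand-in predictor below is an invented placeholder for the trained neural network program, not the patent's implementation:

```python
class DiagnosisSupportSystem:
    # Sketch of the system structure of the 16th aspect: three storage
    # areas plus an output path for 2nd endoscopic images.
    def __init__(self, predictor):
        self.first_storage = []         # 1st endoscopic images (training)
        self.second_storage = []        # determination results for them
        self.third_storage = predictor  # the neural network program

    def add_training_pair(self, image, diagnosis):
        # Populate the 1st and 2nd storage areas in parallel.
        self.first_storage.append(image)
        self.second_storage.append(diagnosis)

    def diagnose(self, second_image):
        # Output unit: apply the stored program to a 2nd endoscopic image.
        return self.third_storage(second_image)

# Stand-in predictor: flags an image "positive" if its mean intensity is
# below an arbitrary threshold (purely illustrative logic).
predict = lambda img: "positive" if sum(img) / len(img) < 100 else "negative"
system = DiagnosisSupportSystem(predict)
print(system.diagnose([50, 60, 70]))     # -> positive
print(system.diagnose([150, 160, 170]))  # -> negative
```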
A 17th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to the 16th aspect, wherein: the training/validation data are adjusted in contrast.
An 18th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to the 16th or 17th aspect, characterized in that: the 1st endoscopic image is associated with each of the imaged parts.
A 19th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to the 18th aspect, wherein: the site includes at least one of the pharynx, the esophagus, the stomach, or the duodenum.
A 20th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to the 18th or 19th aspect, characterized in that: the site is divided into a plurality of portions in at least one of the plurality of digestive organs.
Further, a disease diagnosis support system based on endoscopic images of a digestive organ according to a 21st aspect of the present invention is the disease diagnosis support system according to the 20th aspect, characterized in that: when the site is the stomach, the division includes at least one of an upper stomach, a middle stomach, or a lower stomach.
A 22nd aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to the 20th aspect, wherein: when the site is the stomach, the division includes at least one of a cardia, a fundus, a gastric body, a gastric angle, an antrum, a prepyloric region, or a pylorus.
A 23rd aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 22nd aspects, characterized in that: when the number of 1st endoscopic images of an imaged site is smaller than that of other sites, the numbers of training/validation data for all the sites are made substantially equal by using at least one of rotation, enlargement, reduction, change in the number of pixels, extraction of bright and dark portions, and extraction of color-tone-changed portions of the 1st endoscopic images.
A 24th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 23rd aspects, characterized in that: the trained neural network program can output information corresponding to the site where the 2nd endoscopic image was captured.
A 25th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to the 24th aspect, characterized in that: the trained neural network program outputs the probability or the severity together with the information corresponding to the site.
A 26th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 25th aspects, characterized in that: the 1st endoscopic image includes an intragastric endoscopic image, and the disease includes at least one of Helicobacter pylori infection or the presence or absence of Helicobacter pylori eradication.
A 27th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 25th aspects, characterized in that: the 1st endoscopic image includes a large intestine endoscopic image, the disease includes at least ulcerative colitis, and the trained neural network program outputs the severity of the ulcerative colitis divided into a plurality of stages.
A 28th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 25th aspects, characterized in that: the 1st endoscopic image includes an esophageal endoscopic image obtained by using an ultra-magnifying endoscope, the disease includes at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis, and the trained neural network program distinguishes and outputs the at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis.
A 29th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 28th aspects, characterized in that: the 2nd endoscopic image is at least one of an image captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded in a computer-readable recording medium, or a moving image.
A disease diagnosis support system according to a 30th aspect of the present invention is the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 29th aspects, characterized in that: the neural network is a convolutional neural network.
The disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 30th aspects of the present invention can exhibit the same effects as the method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to any one of the 1st to 15th aspects.
Further, a 31st aspect of the present invention is a diagnosis support program based on endoscopic images of a digestive organ, characterized in that: the program causes a computer to operate as each component of the disease diagnosis support system based on endoscopic images of a digestive organ according to any one of the 16th to 30th aspects.
According to the 31st aspect of the present invention, there can be provided a diagnosis support program based on endoscopic images of digestive organs that causes a computer to operate as each component of the disease diagnosis support system based on endoscopic images of digestive organs according to any one of the 16th to 30th aspects.
A 32nd aspect of the present invention is a computer-readable recording medium, characterized in that: the diagnosis support program based on endoscopic images of a digestive organ according to the 31st aspect is recorded thereon.
According to the 32nd aspect of the present invention, there can be provided a computer-readable recording medium on which the diagnosis support program based on endoscopic images of a digestive organ according to the 31st aspect is recorded.
Further, a method for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ using a neural network according to a 33rd aspect of the present invention is characterized in that: a neural network is trained using a 1st endoscopic image of a digestive organ and determination information on the imaged part corresponding to the 1st endoscopic image, and the trained neural network outputs, based on a 2nd endoscopic image of a digestive organ, information corresponding to the imaged part of the digestive organ.
According to the method for discriminating a site of a digestive organ based on an endoscopic image of a digestive organ using a neural network of the 33rd aspect, since the neural network is trained using the 1st endoscopic image of the digestive organ and the determination information on the imaged part corresponding to the 1st endoscopic image, information corresponding to the imaged part of a subject can be obtained in a short time with accuracy substantially equivalent to that of an endoscopic specialist, and the site of the digestive organ to be diagnosed can be discriminated in a short time. Further, since the information corresponding to the imaged part can be determined automatically for test data including endoscopic images of a plurality of digestive organs of a plurality of subjects, not only is examination by an endoscopic specialist facilitated, but the work of organizing the images associated with a disease can also be omitted.
A system for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ using a neural network according to a 34th aspect of the present invention is characterized by comprising: an endoscopic image input unit, an output unit, and a computer incorporating a neural network, wherein the computer comprises: a 1st storage area that stores 1st endoscopic images of a digestive organ; a 2nd storage area that stores definitive information corresponding to the imaged site of the digestive organ for each 1st endoscopic image; and a 3rd storage area that stores the neural network program; the neural network program is trained based on the 1st endoscopic images stored in the 1st storage area and the definitive information stored in the 2nd storage area, and, based on a 2nd endoscopic image of the digestive organ input from the endoscopic image input unit, outputs information corresponding to the imaged site of the digestive organ in the 2nd endoscopic image to the output unit.
According to the system for discriminating a region of a digestive organ based on an endoscopic image of a digestive organ using a neural network according to embodiment 34 of the present invention, the same effect as that of the method for discriminating a region of a digestive organ based on an endoscopic image of a digestive organ using a neural network according to embodiment 33 can be exhibited.
A 35th aspect of the present invention is a program for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ, characterized in that: it causes a computer to operate as each component of the system for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ according to the 34th aspect.
According to the 35th aspect of the present invention, there can be provided a program that causes a computer to operate as each component of the system for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ according to the 34th aspect.
A computer-readable recording medium according to a 36th aspect of the present invention is characterized in that: the program for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ according to the 35th aspect is recorded thereon.
According to the 36th aspect of the present invention, there can be provided a computer-readable recording medium on which the program for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ according to the 35th aspect is recorded.
[ Effect of the invention ]
As described above, according to the present invention, a program incorporating a neural network is trained based on endoscopic images of digestive organs obtained in advance from a plurality of subjects and on definitive positive or negative disease determinations obtained in advance for each of those subjects. Therefore, the probability that a subject's digestive organ is positive and/or negative for a disease, the probability of a past disease, the severity level of a disease, information corresponding to the imaged site, and the like can be obtained in a short time with accuracy substantially equivalent to that of an endoscopy specialist, and subjects requiring further diagnosis can be screened in a short time.
Drawings
Fig. 1A of Fig. 1 shows an example of an endoscopic image positive for H. pylori infection, and Fig. 1B shows an example of an endoscopic image negative for H. pylori infection.
Fig. 2 is a diagram showing the patient selection flow for the test data set.
Fig. 3 is a schematic conceptual diagram showing the operation of GoogLeNet.
Fig. 4 is a graph showing the ROC curve of the 1st CNN and the endoscopists' diagnostic results for H. pylori infection.
Fig. 5 is a graph showing the ROC curve of the 2nd CNN and the endoscopists' diagnostic results for H. pylori infection.
Fig. 6 is a diagram showing the main anatomical classification and sub-anatomical classification of the stomach based on the Japanese guidelines.
Fig. 7 is a schematic diagram of a flow chart of the CNN system according to embodiment 2.
Fig. 8A is a ROC curve of a laryngeal image obtained by using the CNN system of embodiment 2, fig. 8B is a ROC curve of an esophageal image obtained by using the CNN system in the same manner, fig. 8C is a ROC curve of a stomach image obtained by using the CNN system in the same manner, and fig. 8D is a ROC curve of a duodenal image obtained by using the CNN system in the same manner.
Fig. 9A is a ROC curve of an image of the upper stomach obtained by using the CNN system of embodiment 2, fig. 9B is a ROC curve of an image of the middle stomach obtained by using the CNN system in the same manner, and fig. 9C is a ROC curve of an image of the lower stomach obtained by using the CNN system in the same manner.
Fig. 10A is an image of the duodenum misclassified as the lower stomach, and fig. 10B is an image of the lower stomach correctly classified.
Fig. 11A is an image of the esophagus incorrectly classified as the lower stomach, and fig. 11B is an image of the lower stomach correctly classified as the lower stomach.
Fig. 12A is an image of the duodenum that is misclassified as the middle stomach, and fig. 12B is an image of the upper stomach that is correctly classified.
Fig. 13A shows a pharyngeal image classified by mistake as the esophagus, and fig. 13B shows an esophagus image classified correctly.
Fig. 14 shows large-intestine endoscopic images classified into the typical grades Mayo 0 to Mayo 3.
Fig. 15 is a schematic diagram of the CNN-based diagnostic system according to embodiment 3.
Fig. 16 is a diagram showing an example of a representative CNN image and 3-segment Mayo classification according to embodiment 3.
Fig. 17 is an ROC curve in the 2-division Mayo classification of embodiment 3.
Fig. 18 shows examples of low-magnification images (a to e) and high-magnification images (f to j) according to embodiment 4.
Fig. 19 shows examples of videos classified into type 1 (fig. 19a), type 2 (fig. 19b), and type 3 (fig. 19c) in embodiment 4.
Fig. 20A is the ROC curve for all images of embodiment 4, Fig. 20B is the ROC curve for the HMP images, and Fig. 20C is the ROC curve for the LMP images.
Fig. 21 shows a non-malignant case misdiagnosed by the CNN according to embodiment 4; Fig. 21A is the normal-magnification endoscopic image, Fig. 21B is the corresponding ultra-magnified endoscopic image, and Fig. 21C is the corresponding histopathological examination image.
Fig. 22 shows an example of a malignant disease misdiagnosed by an endoscopy specialist; Fig. 22A is the normal-magnification endoscopic image, Fig. 22B is the corresponding ultra-magnified endoscopic image, and Fig. 22C is the corresponding histopathological examination image.
Fig. 23 is a block diagram of a disease diagnosis support method based on endoscopic images of digestive organs using a neural network according to embodiment 5.
Fig. 24 is a block diagram of a disease diagnosis support system based on endoscopic images of a digestive organ, a diagnosis support program based on endoscopic images of a digestive organ, and a computer-readable recording medium according to embodiment 6.
Fig. 25 is a block diagram of a method of determining a site of a digestive organ based on an endoscopic image of the digestive organ using a neural network according to embodiment 7.
Fig. 26 is a block diagram of a system for determining a site of a digestive organ based on an endoscopic image of a digestive organ, a program for determining a site of a digestive organ based on an endoscopic image of a digestive organ, and a computer-readable recording medium according to embodiment 8.
Detailed Description
Hereinafter, the diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program for diseases based on endoscopic images of digestive organs according to the present invention will be described in detail, first taking as examples H. pylori-associated gastritis and the classification of gastric images by site, and then ulcerative colitis and esophageal disease. However, the embodiments described below are examples for embodying the technical idea of the present invention, and the present invention is not limited to these cases. That is, the present invention can be applied equally to other embodiments included in the claims, for any disease that can be identified from endoscopic images of digestive organs. In the present invention, the term image includes not only still images but also moving images.
First, the accuracy, sensitivity, and specificity of a clinical examination in general will be described briefly using the 2×2 table shown in Table 1. Generally, since a test value is a continuous quantity, a threshold (cut-off value) is determined, and a value above the threshold is judged positive while a value below it is judged negative (the reverse convention may also be used).
[ Table 1]
As shown in Table 1, subjects are classified into 4 regions a to d depending on the presence or absence of the disease and on whether the test result is positive or negative. A subject with the disease is classified as diseased (present), and a subject without the disease as disease-free (absent). Region a is the true-positive region, where the subject has the disease and the test result is positive. Region b is the false-positive region, where the subject does not have the disease but the test result is positive. Region c is the false-negative region, where the subject has the disease but the test result is negative. Region d is the true-negative region, where the subject does not have the disease and the test result is negative.
The sensitivity is the probability that the test result is positive among diseased subjects (the detection rate of disease), and is given by a/(a + c). The specificity is the probability that the test result is negative among disease-free subjects (the correct identification rate of the disease-free), and is given by d/(b + d). The false-positive rate is the probability that the test result is positive among disease-free subjects, and is given by b/(b + d). The false-negative rate is the probability that the test result is negative among diseased subjects, and is given by c/(a + c).
Further, if the sensitivity and specificity are calculated while varying the threshold, and the sensitivity is plotted on the vertical axis against the false-positive rate (1 − specificity) on the horizontal axis, an ROC curve (receiver operating characteristic curve) is obtained (see Figs. 4 and 5).
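As an illustration, the quantities above can be computed directly from the four regions of Table 1. The following Python sketch (the function and variable names are our own, not part of the invention) derives sensitivity, specificity, and ROC points from binary labels and continuous test scores:

```python
def confusion_counts(y_true, y_pred):
    """Count the four regions of Table 1: a = true positive, b = false positive,
    c = false negative, d = true negative (1 = diseased/positive, 0 = not)."""
    a = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    b = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    c = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    d = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return a, b, c, d

def roc_points(y_true, scores, thresholds):
    """Sweep thresholds over the continuous test values; each ROC point is
    (1 - specificity, sensitivity), as plotted in Figs. 4 and 5."""
    points = []
    for th in thresholds:
        pred = [1 if s >= th else 0 for s in scores]
        a, b, c, d = confusion_counts(y_true, pred)
        sensitivity = a / (a + c)   # a/(a+c)
        specificity = d / (b + d)   # d/(b+d)
        points.append((1 - specificity, sensitivity))
    return points
```

The area under the resulting curve (AUC) then summarizes diagnostic ability independently of any single threshold.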
[ embodiment 1]
In embodiment 1, an example of applying the diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program for diseases based on endoscopic images of the present invention to H. pylori-associated gastritis is described. In the hospital to which the present inventor belongs, a total of 33 endoscopists performed esophageal, gastric, and duodenal examinations (hereinafter referred to as "EGD") using white light and endoscopes of normal magnification. The indications for EGD were referrals from primary physicians for various upper abdominal symptoms, positive barium test results for gastric disease, abnormal serum pepsinogen levels, previous gastric or duodenal disease, or screening. All endoscopists were instructed to take a full set of images of the larynx, esophagus, stomach, and duodenum even when no abnormality was present. The typical number of images for a patient without gastrointestinal disease was 34 (larynx 1, esophagus 6, stomach 25, duodenum 2).
EGD was performed by taking images with white light using standard endoscopes (EVIS GIF-XP290N, GIF-XP260NS, GIF-N260, etc., Olympus Medical Systems, Tokyo). The obtained images were of normal magnification; no magnified images were used. Typical endoscopic images of the stomach are shown in Fig. 1. Fig. 1A is an example of an image diagnosed as positive for H. pylori infection, and Fig. 1B is an example of an image diagnosed as negative for H. pylori infection.
All patients received tests to detect the presence of H. pylori infection: at least one of an anti-H. pylori IgG level measurement in blood or urine, a fecal antigen measurement, and a urea breath test. Patients who showed a positive reaction in any of these tests were classified as H. pylori positive. Among the patients not diagnosed as H. pylori positive, those who had not received H. pylori eradication treatment were classified as H. pylori negative. Patients who had undergone H. pylori eradication treatment in the past and had been treated successfully were excluded from the study.
[ with respect to data set ]
By reviewing the EGD images of 1,768 patients taken from January 2014 to December 2016, a data set used for training and constructing the AI-based diagnostic system (referred to as "training/verification data") was prepared. Data from patients with, or with a history of, gastric cancer, ulcer, or submucosal tumor were excluded from the training/verification data set. Images of stomachs diagnosed as H. pylori positive or negative were also excluded by the endoscopists when they showed food residue, post-biopsy bleeding, halation, or other problems. In addition, an endoscopic image data set to be evaluated (referred to as "test data") was prepared. The "training/verification data" corresponds to the "1st endoscopic images" of the present invention, and the "test data" corresponds to the "2nd endoscopic images" of the present invention. The demographic and image characteristics of these patients are shown in Table 2.
[ Table 2]
SD: standard deviation of
As shown in Table 2, a training/verification data set of 32,208 images was created from 753 patients judged H. pylori positive and 1,015 patients classified as negative. For all images in the data set, contrast adjustment was performed at the same ratio for each of the R, G, and B signals. Each of R, G, and B takes an 8-bit value, i.e., a value in the range 0 to 255, and the contrast was adjusted so that each of the three color channels always contains at least one pixel with the full-scale value 255. In practice, since the quality of endoscopic images is generally good, contrast adjustment is often unnecessary. In addition, since a black frame is added around the endoscope screen, the images were automatically trimmed and enlarged or reduced so that all images had the same size.
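A minimal sketch of this per-channel contrast adjustment, assuming 8-bit RGB arrays and using NumPy (the function name is illustrative only, not the actual preprocessing code of the embodiment):

```python
import numpy as np

def stretch_contrast(img):
    """Scale each of the R, G, B channels by its own single ratio so that
    its maximum becomes 255, ensuring every channel contains at least one
    full-scale (255) pixel, as described above."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    for ch in range(3):
        peak = img[..., ch].max()
        if peak > 0:                     # leave an all-black channel unchanged
            out[..., ch] = img[..., ch] * (255.0 / peak)
    return np.clip(out, 0, 255).astype(np.uint8)
```

An image whose brightest pixel is, say, 128 in every channel would thus be rescaled so that each channel reaches 255.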
For classification, each digestive organ may be divided into a plurality of sites; here, images classified into a plurality of sites according to position were prepared only for the stomach, and the other organs were used without subdivision. Furthermore, diagnosis by attribute becomes possible if attributes such as sex and age are also classified and input.
Ideally, an equal amount of data would be available for every class; for the less numerous classes, the data were augmented by randomly rotating images between 0 and 359 degrees and enlarging or reducing them appropriately with a tool. The result was 32,208 endoscopic images for training/verification.
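This class-balancing augmentation can be sketched as follows; the rotation and zoom here are a library-free nearest-neighbour stand-in for the augmentation tool mentioned above, and all names are our own:

```python
import random
import numpy as np

def rotate_zoom(img, angle_deg, scale):
    """Rotate an image array about its centre by angle_deg and zoom by
    `scale`, using inverse mapping with nearest-neighbour sampling."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    th = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find its source pixel
    sx = ((xs - cx) * np.cos(th) + (ys - cy) * np.sin(th)) / scale + cx
    sy = (-(xs - cx) * np.sin(th) + (ys - cy) * np.cos(th)) / scale + cy
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return img[sy, sx]

def augment_class(images, target):
    """Pad an under-represented class up to `target` images with randomly
    rotated (0-359 degrees) and slightly zoomed copies."""
    out = list(images)
    while len(out) < target:
        src = random.choice(images)
        out.append(rotate_zoom(src, random.uniform(0, 359),
                               random.uniform(0.9, 1.1)))
    return out
```

Only the minority classes are padded this way, so the network does not simply learn the class frequencies of the original data.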
First, a 1st neural network was constructed using all images together, without the classification data. Next, a 2nd neural network was constructed using the images classified into the plurality of sites as described above. Since the construction of such neural networks involves many repetitive computations, a computer accelerated by parallel processing was used.
[ preparation of test data ]
A test data set was prepared to evaluate the diagnostic accuracy of the neural network of embodiment 1, constructed using the training/verification data set, against that of endoscopists. From the image data of 587 patients who underwent endoscopy at the inventor's hospital in January and February 2017, the data of 166 patients with a history of H. pylori eradication, 23 patients whose H. pylori infection status was unknown, and 1 post-gastrectomy patient were excluded (see Fig. 2). As a result, the image data of the remaining 397 patients, each judged positive or negative for H. pylori infection, were used.
In the test data set, 72 patients (43%) were diagnosed by fecal antigen testing and 87 patients (21%) based on the anti-H. pylori IgG level in urine. The test data set contained a total of 11,481 images from 397 patients, of whom 72 were diagnosed as H. pylori positive and 325 as H. pylori negative. There was no overlap between the test data set and the training/verification data set.
[ training, Algorithm ]
To construct the AI-based diagnostic system, the Caffe framework was used as the development platform for the deep learning neural network, and GoogLeNet, comprising 22 layers, was used as the convolutional neural network (CNN) architecture.
The CNN used in embodiment 1 was trained using backpropagation (error backpropagation), as shown in Fig. 3. Each layer of the CNN was adjusted with a global learning rate of 0.0001 using Adam (https://arxiv.org/abs/1412.6980), one of the parameter optimization methods.
To make all images compatible with GoogLeNet, each image was resized to 244 × 244 pixels. In addition, a trained model whose feature values had been learned on ImageNet natural images was used as the initial values at the start of training. ImageNet (http://www.image-net.org/) is a database that, as of early 2017, had collected more than 14 million images. This training method is called transfer learning and has been confirmed to be effective even when training data are scarce.
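The Adam update used above can be illustrated for a single scalar parameter. This is a generic sketch of the published Adam rule (Kingma & Ba, the paper linked above) with the global learning rate 0.0001, not the actual Caffe training code of the embodiment:

```python
def adam_step(theta, grad, state, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta given its gradient.
    `state` is (m, v, t): first-moment estimate, second-moment estimate,
    and step count."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad            # biased first moment
    v = b2 * v + (1 - b2) * grad * grad     # biased second moment
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, (m, v, t)
```

In practice the framework applies this update to every weight of the network; the per-parameter second-moment scaling is what makes a single global learning rate such as 0.0001 workable across all 22 layers.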
[ evaluation algorithm ]
The trained neural network outputs a probability value (P) between 0 and 1 as the positive/negative diagnosis of H. pylori infection. The ROC curve is obtained by plotting these probability values while varying the threshold that separates positive from negative.
[ comparison of Performance of test data sets between CNN and endoscopists ]
The CNN and 23 endoscopists experienced in EGD classified each patient as positive or negative for H. pylori infection. The accuracy of the H. pylori infection diagnosis and the time required for evaluation were then determined and compared between the CNN and the endoscopists.
Six of the 23 endoscopists were board-certified specialists of the Japanese Society of Gastroenterology. Of the remaining 17, those with experience of more than 1,000 EGDs (n = 9) were classified as "experienced physicians" and those with fewer than 1,000 (n = 8) as "trainee physicians". The ROC curve showing the diagnostic accuracy of the CNN was drawn using R software (free statistical analysis software). For anonymity, all patient information displayed in these images was de-identified before data analysis, so that none of the endoscopists involved in the evaluation could access identifying patient information. The study was conducted with the approval of the ethical review committee of the Japan Medical Association (ID JMA-IIA 00283).
[ results of endoscopist ]
The sensitivity, specificity, and evaluation time of each endoscopist for the diagnosis of H. pylori infection are summarized in Table 3. The sensitivity and specificity of all endoscopists averaged 79.6% and 83.2%, respectively, and the time required to evaluate all images of the test data set (evaluation time) was 230 ± 65 minutes (mean ± standard deviation).
[ Table 3]
SD = standard deviation
AUC = area under the ROC curve
Among the 23 endoscopists, the specialists (n = 6) had a sensitivity of 85.2% and a specificity of 89.3%, considerably higher than the 72.2% and 76.3% of the trainee physicians (n = 8). The evaluation time was 252.5 ± 92.3 minutes for the specialists and 206.6 ± 54.7 minutes for the trainee physicians. The sensitivity, specificity, and evaluation time of the experienced physicians (n = 9) were all intermediate between those of the specialists and the trainee physicians.
[ Properties of CNN ]
The CNN constructed in embodiment 1 outputs the H. pylori infection probability P for each image, and the standard deviation is further calculated for each patient. First, the characteristics of the 1st CNN, constructed using the entire training/verification data set without the classification data, were examined. The resulting ROC curve is shown in Fig. 4. Note that the horizontal axis of an ROC curve normally shows "1 − specificity", with the origin "0" at the left end and "1" at the right end; in Fig. 4, however, the horizontal axis shows "specificity", with "1" at the left end and "0" at the right end (the same applies to Fig. 5).
Each endoscopist predicted H. pylori infection for each patient, and each endoscopist's result is shown as a hollow dot. The black dot represents the average of the predictions of all endoscopists. The CNN outputs the H. pylori probability P for each image, and the program calculates the standard deviation of the probability for each patient.
The area under the ROC curve of Fig. 4 is 0.89; with the cut-off value set to 0.53, the sensitivity and specificity of the 1st CNN are 81.9% and 83.4%, respectively. The diagnosis time for all images using the 1st CNN was 3.3 minutes. The 1st CNN correctly diagnosed 350 of the 397 patients (88.2%). For the 47 patients misdiagnosed by the 1st CNN, the endoscopists' diagnostic accuracy was 43.5 ± 33.4% (mean ± standard deviation).
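As an illustration of how per-image outputs might be turned into a per-patient call with a cut-off such as 0.53: aggregating by the mean is our assumption for this sketch, since the text states only that a per-patient standard deviation of the per-image probabilities is computed.

```python
from statistics import mean, pstdev

def patient_diagnosis(image_probs, cutoff=0.53):
    """Aggregate the per-image H. pylori probabilities P of one patient
    into a single positive/negative call, an overall score, and a spread
    measure (the per-patient standard deviation mentioned above)."""
    score = mean(image_probs)
    label = "positive" if score >= cutoff else "negative"
    return label, score, pstdev(image_probs)
```

A patient whose stomach images score, for example, 0.9, 0.8, and 0.7 would be called positive; one scoring 0.1 and 0.2 would be called negative.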
Next, the ROC curve of the 2nd CNN, constructed using the same training/verification data set classified into the plurality of sites, was obtained. The results are shown in Fig. 5. Fig. 5 is the ROC curve of the CNN trained on the training/verification data set in which only the stomach images were classified into a plurality of sites by position, and it shows better performance than the ROC curve of the 1st CNN. The area under the ROC curve is 93%. As in Fig. 4, each hollow dot represents the prediction of one endoscopist, and the black dot the average of all endoscopists.
The area under the ROC curve increased from 89% in Fig. 4 to 93% in Fig. 5. With the optimal threshold of 0.42, the sensitivity and specificity of the CNN are 88.9% and 87.4%, respectively, comparable to the diagnostic results of the specialists of the Japanese Society of Gastroenterology and much higher than those of the trainee physicians. The diagnosis time for all images using the 2nd CNN was 3.2 minutes.
The following can be concluded from these results: the diagnostic ability of the 1st CNN is similar to that of experienced endoscopy specialists, and the 2nd CNN achieves a diagnostic ability comparable to that of endoscopy specialists in a far shorter time. The results indicate that the H. pylori infection screening system using the CNN of embodiment 1 has sensitivity and specificity high enough for application in clinical practice.
Unlike the skin or the retina, the stomach has a complicated shape, and gastroscopy images include various sites viewed in different ways, so a CNN trained on a training/verification data set without classification by position in the stomach sometimes has difficulty identifying which site an image shows. Accordingly, when the CNN was constructed using the training/verification data set in which only the stomach images were classified into a plurality of sites by position, the sensitivity increased from 81.9% to 88.9%, and the ability to diagnose H. pylori infection improved.
Furthermore, endoscopists and the CNN are considered to recognize images and make diagnoses by different methods. Changes in the gastric mucosa caused by H. pylori infection, such as atrophy and intestinal metaplasia, first occur in the distal stomach (pyloric region) and gradually spread to the proximal stomach (cardiac region). In a stomach with only slight changes, normal mucosa can be misdiagnosed as abnormal; endoscopists therefore identify the position of the gastric image, particularly the pyloric region, the gastric angle, or the whole view, before making a diagnosis.
H. pylori infection is common in Japan, especially among the elderly, and since endoscopic mass screening for gastric cancer started in 2016, more efficient screening methods have been sought. The results obtained in embodiment 1 suggest that, by connecting a large number of stored images to the automated system, screening for H. pylori infection can be greatly facilitated even without evaluation by an endoscopist, leading, through further testing, to confirmation of H. pylori infection and possibly to eventual eradication.
The CNN of embodiment 1 can significantly shorten the screening time for H. pylori infection without fatigue, and a report can be obtained immediately after an endoscopic examination. This can contribute to reducing the diagnostic burden on endoscopists and to reducing medical costs, both major problems worldwide. Furthermore, since an H. pylori diagnosis by the CNN of embodiment 1 is obtained immediately once the endoscopic images are input, H. pylori diagnosis can be fully assisted "online", which can also alleviate the regional maldistribution of physicians in the form of so-called "telemedicine".
In the 2nd CNN of embodiment 1, an example was shown in which the training/verification data set classified into a plurality of sites by position was used for training; the number of site classes may be about 3 to 10 and may be specified appropriately by the implementer. In addition, although the 2nd CNN was constructed including not only images of the stomach but also images of the pharynx, esophagus, and duodenum, these non-gastric images may be excluded in order to improve the accuracy of the positive/negative diagnosis of Helicobacter pylori infection.
Further, although embodiment 1 showed an example using training/verification images obtained with white light and endoscopes of normal magnification, if a sufficient number of images is available, images obtained by image-enhanced endoscopy or magnifying endoscopy, or images obtained with NBI (Narrow Band Imaging) or with an endoscope using a stimulated Raman scattering microscope that irradiates laser beams of different wavelengths (see patent document 3), may also be used.
In embodiment 1, data acquired at a single hospital were used for both the training/verification data set and the test data set, but the sensitivity and specificity of the CNN could be further improved by additionally using images acquired at other facilities with other endoscope devices and techniques. Further, although patients after H. pylori eradication were excluded in embodiment 1, the CNN could also be used to determine whether H. pylori eradication has been completed by training it on post-eradication images.
The diagnosis of H. pylori infection status is confirmed by various tests, such as anti-H. pylori IgG level measurement in blood or urine, fecal antigen testing, and the urea breath test, but none of these tests achieves 100% sensitivity or 100% specificity. Consequently, images accompanied by an erroneous diagnosis may be mixed into the training/verification data set. This can be addressed as follows: the CNN is trained on a training/verification data set whose classes are restricted to, for example, only patients diagnosed H. pylori negative in a plurality of clinical tests, distinguishing them from patients diagnosed negative in one test but positive in another, and the corresponding images are collected accordingly.
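One way to realize this filtering is to keep only patients whose clinical tests agree. The sketch below uses a hypothetical record format (the field names are illustrative, not from the embodiment):

```python
def select_concordant(records):
    """Keep only patients whose clinical test results all agree
    (all positive or all negative). Discordant cases, the likeliest
    source of mislabeled training images, are dropped."""
    kept = []
    for rec in records:
        results = set(rec["tests"].values())   # e.g. {"igg": True, "stool": True}
        if len(results) == 1:                  # unanimous outcome
            kept.append(rec)
    return kept
```

Training images would then be drawn only from the kept patients, yielding a higher-confidence label for every image in the set.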
In embodiment 1, differences in diagnostic ability were not evaluated by the state of the stomach, i.e., the degree of atrophy, the presence or absence of redness, and so on; however, if the number of training images increases in the future and the images are further subdivided by findings for the CNN to learn, judgments equivalent to those of a specialist may become possible for these findings as well, not only for the presence or absence of H. pylori infection. In addition, although embodiment 1 was described as a diagnosis aid for H. pylori infection of the stomach, endoscopy is also performed on the larynx, pharynx, esophagus, duodenum, biliary tract, pancreatic duct, small intestine, large intestine, and so on; if images of these sites are accumulated and the training images increase, the invention can also be used to support diagnosis at these sites.
In particular, when the disease is ulcerative colitis, since the feature extraction and analysis algorithm is similar to that for the presence or absence of H. pylori-associated gastritis, a CNN trained on large-intestine endoscopic images can readily output results divided into a plurality of stages corresponding to the severity of ulcerative colitis. The large-intestine endoscopic images serving as training/verification data are classified into a plurality of sites according to the region of the large intestine, and each is given a diagnosis of positive or negative ulcerative colitis, or its severity stage. Using a CNN trained on such training/verification data and diagnoses, the positive/negative status or severity level of the disease can be diagnosed automatically for test data comprising large-intestine endoscopic images from a plurality of subjects.
Furthermore, since manual work is involved in creating the training data from endoscopic images, the possibility of errors in part of the training data cannot be denied. Erroneous training data adversely affect the accuracy of the CNN's judgments. The endoscopic images used in embodiment 1 were anonymized, so it was not possible to verify whether the training data contained errors. To eliminate such errors as far as possible, a high-quality training data set can be created by classifying images mechanically based on clinical test results. If the CNN is trained on such data, the judgment accuracy can be further improved.
Further, embodiment 1 shows an example using GoogLeNet as the CNN architecture, but CNN architectures continue to evolve, and adopting the latest architecture may yield better results. Similarly, the open-source Caffe was used as the deep learning framework, but CNTK, TensorFlow, Theano, Torch, MXNet, and the like can also be used. Further, Adam was used as the optimization method, but the well-known SGD (Stochastic Gradient Descent) method, the MomentumSGD method that adds an inertia term (momentum) to SGD, the AdaGrad method, the AdaDelta method, the NesterovAG method, the RMSpropGraves method, and the like can be selected and used as appropriate.
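The optimization methods listed above differ only in their parameter-update rule. As a rough, framework-agnostic sketch (not the patent's Caffe/Adam code; the loss function and hyperparameters are assumptions for the demo), the following compares plain SGD with MomentumSGD, which adds the inertia term described in the text, on a simple 1-D quadratic loss:

```python
# Illustrative update rules for two optimizers named in the text; the loss
# f(w) = (w - 3)^2 and all hyperparameters here are assumptions for the demo.

def grad(w):
    """Gradient of f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def sgd(w, lr=0.1, steps=200):
    for _ in range(steps):
        w -= lr * grad(w)                # plain gradient-descent step
    return w

def momentum_sgd(w, lr=0.1, momentum=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # inertia term accumulates past gradients
        w += v
    return w

w_sgd = sgd(0.0)
w_mom = momentum_sgd(0.0)
print(round(w_sgd, 3), round(w_mom, 3))  # both converge toward the minimum at w = 3
```

In a real framework the same choice is a one-line configuration (e.g. selecting the solver type); the update rules themselves are what distinguish the methods named above.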
As described above, the diagnostic accuracy for H. pylori infection based on endoscopic images of the stomach using the CNN of embodiment 1 is comparable to that of endoscopists. The CNN of embodiment 1 is therefore useful for screening H. pylori-infected patients from endoscopic images obtained for screening or other purposes. In addition, if the CNN is trained on images taken after H. pylori eradication, it can also be used to determine whether eradication has been completed. Further, since whether H. pylori has already been eradicated can easily be ascertained at a medical interview, determining only whether H. pylori is positive or negative makes this H. pylori infection diagnosis immediately applicable in clinical settings, which is useful.
[ diagnosis support System ]
A computer incorporating the CNN as the diagnosis support system of embodiment 1 basically comprises an endoscopic image input unit, a storage unit (a hard disk or semiconductor memory), an image analysis device, a determination display device, and a determination output device. Alternatively, it may be directly provided with an endoscopic image capturing device. The computer system may also be installed remotely from the endoscopy facility and operated as a central diagnosis support system that receives image information from remote locations, or as a cloud-based computer system via the Internet.
The computer includes, in its internal storage unit: a 1st storage area that stores the endoscopic images of the digestive organs obtained in advance for each of a plurality of subjects; a 2nd storage area that stores the positive or negative disease diagnosis results obtained in advance for each of the plurality of subjects; and a 3rd storage area that stores the CNN program. In this case, since the endoscopic images of the digestive organs obtained in advance for the plurality of subjects are numerous and large in data volume, and a large amount of data processing occurs while the CNN program runs, parallel processing is preferable, and a large-capacity storage unit is also preferable.
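The three storage areas can be pictured as follows. This is a minimal sketch with illustrative names (the class and field names are assumptions, not terms from the patent), showing only the data layout, not the CNN itself:

```python
# Minimal sketch of the three storage areas held by the diagnosis support
# computer. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class DiagnosisSupportStore:
    images: dict = field(default_factory=dict)     # 1st area: subject_id -> list of image paths
    diagnoses: dict = field(default_factory=dict)  # 2nd area: subject_id -> "positive" / "negative"
    cnn_program: bytes = b""                       # 3rd area: serialized CNN program/weights

store = DiagnosisSupportStore()
store.images["subject-001"] = ["img_001.jpg", "img_002.jpg"]
store.diagnoses["subject-001"] = "positive"
print(len(store.images["subject-001"]))  # → 2
```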
In recent years, the capabilities of CPUs (Central Processing Units) and GPUs (Graphics Processing Units) have improved remarkably. If a reasonably high-performance, commercially available personal computer is used as the computer incorporating the CNN used as the diagnosis support system of embodiment 1, it can, as an H. pylori-infectious-gastritis diagnosis system, process 3,000 cases or more per hour, at about 0.2 seconds per image. Therefore, by feeding image data captured during endoscopy to a computer incorporating the CNN used in embodiment 1, H. pylori infection can be determined in real time, and endoscopic images transmitted from around the world or from remote locations can be diagnosed remotely, even as video. In particular, since the GPUs of recent computers perform very well, incorporating the CNN of embodiment 1 enables high-speed, high-accuracy video processing.
The endoscopic image of a subject's digestive organ input to the input unit of the computer incorporating the CNN as the diagnosis support system of embodiment 1 may be an image captured by an endoscope, an image transmitted via a communication network, or an image recorded on a computer-readable recording medium. That is, since the computer incorporating the CNN as the diagnosis support system of embodiment 1 can output, in a short time, the probabilities that the input endoscopic image of the subject's digestive organ is disease-positive or disease-negative, it can be used regardless of the input format of the endoscopic image.
As the communication network, the well-known Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, and the like can be used. As transmission media constituting the communication network, well-known wired media such as IEEE 1394 serial bus, USB, power line carrier, cable television lines, telephone lines, and ADSL lines, as well as wireless media such as infrared, Bluetooth (registered trademark), IEEE 802.11 wireless, cellular phone networks, satellite links, and terrestrial digital networks can be used. As computer-readable recording media, there are known tape systems such as magnetic tape and cassette tape; disk systems including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards, memory cards, and optical cards; and semiconductor memory systems such as mask ROM/EPROM/EEPROM/flash ROM.
[ embodiment 2]
With the H. pylori-infectious-gastritis diagnosis system using a computer incorporating the CNN used in embodiment 1, it is possible to automatically determine which anatomical part an input image shows. Therefore, the system can display the H. pylori infection probability for each of the classified regions of the stomach, such as the cardia, gastric fundus, gastric body, gastric angle, antrum, and pyloric region, or display the infection probability grouped over several of these regions, for example the 5 regions with the highest infection probability. A definitive diagnosis must ultimately be made by a blood or urine test, but displaying the H. pylori infection probability classified by region in this way is useful as an aid to the judgment of an endoscopy specialist.
Therefore, in embodiment 2, verification was performed of whether the system can automatically determine which part an input image shows. First, among the EGD (esophagogastroduodenoscopy) images obtained in embodiment 1, images that were unclear due to cancer, ulcer, debris, foreign matter, bleeding, halation, blurring, or poor focus were excluded by endoscopists, and 27,335 images from the remaining 1,750 patients were classified into 6 main categories (larynx, esophagus, duodenum, upper stomach, middle stomach, and lower stomach). As shown in fig. 6, the main stomach regions each include a plurality of sub-regions: the lower stomach includes the pyloric and antral (vestibular) portions, the upper stomach includes the cardia and the gastric fundus, and the middle stomach includes the gastric angle and the gastric body.
As in embodiment 1, the 27,335 original endoscopic images were randomly rotated between 0 and 359 degrees by a tool to increase the number of images, the black frame surrounding each image was automatically trimmed away, and the images were enlarged or reduced as appropriate. Only normal white-light images at normal magnification were included, and enhanced images such as narrow-band images were excluded, yielding more than 1 million training images. To make the training image data of embodiment 2 obtained in this way compatible with GoogLeNet, all images were resized to 244 × 244 pixels. The CNN system used in embodiment 2 was trained with the same system as the CNN system of embodiment 1 using these training image data, except that the learning rate was changed to 0.0002.
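The augmentation described above can be sketched as sampling a set of transform parameters per image. Image I/O is stubbed out here; only the parameter sampling is shown, and the discrete scale choices are an assumption about the tooling (the text only says "enlarged or reduced as appropriate"):

```python
# Sketch of the augmentation parameters described in the text: random rotation
# of 0-359 degrees, a slight enlarge/reduce, and a final resize to 244 x 244
# for GoogLeNet input. The scale choices are illustrative assumptions.

import random

TARGET_SIZE = (244, 244)  # input size stated in the text

def sample_augmentation(rng):
    return {
        "rotation_deg": rng.randrange(0, 360),  # uniform over 0-359 degrees
        "scale": rng.choice([0.9, 1.0, 1.1]),   # slight enlargement/reduction
        "resize_to": TARGET_SIZE,
    }

rng = random.Random(0)  # fixed seed so the demo is reproducible
params = [sample_augmentation(rng) for _ in range(5)]
assert all(0 <= p["rotation_deg"] <= 359 for p in params)
```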
The trained CNN system of embodiment 2 produces, for each image, probability values (PS) ranging from 0 to 100%, each indicating the probability that the image belongs to a given anatomical classification. The category with the highest probability value is taken as the CNN's final classification, and the images classified by the CNN were evaluated manually by 2 endoscopy specialists to judge whether the classification was correct. When the evaluations of the 2 endoscopy specialists differed, they discussed the case until a satisfactory resolution was reached.
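The decision rule is simply the arg-max over the per-class probability values. A minimal sketch (the class list and probabilities here are hypothetical examples, not values from the patent):

```python
# Arg-max decision rule: the anatomical class with the highest probability
# value (PS) becomes the final classification. Class names are illustrative.

CLASSES = ["larynx", "esophagus", "stomach", "duodenum"]

def classify(probabilities):
    """probabilities: list of PS values (0-100%), one per class in CLASSES."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return CLASSES[best], probabilities[best]

label, ps = classify([0.3, 1.2, 96.0, 2.5])
print(label, ps)  # → stomach 96.0
```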
To evaluate the diagnostic accuracy of the CNN system, 17,081 independent images of 435 patients who underwent endoscopy at a hospital affiliated with the inventors from February 2017 to March 2017 were collected, and a set of verification image data was created. The verification image data includes only normal white-light images at normal magnification; enhanced images and images whose anatomical classification could not be recognized were excluded by the same method as for the training image data. Finally, 17,081 images, comprising 363 laryngeal images, 2,142 esophageal images, 13,047 gastric images, and 1,528 duodenal images, were used as verification image data (see table 4).
[ Table 4]
First, it was evaluated whether the CNN system of embodiment 2 can recognize images of the larynx, esophagus, stomach, and duodenum, and then, as shown in fig. 6, whether it can recognize the stomach positions (upper, middle, and lower stomach) according to the Japanese classification of gastric carcinoma. The sensitivity and specificity of the anatomical classification by the CNN system of embodiment 2 were then calculated. For the classification of each organ, a Receiver Operating Characteristic (ROC) curve was plotted, and the area under the curve (AUC) was calculated using GraphPad Prism 7 (GraphPad Software, Inc., California, U.S.A.). Fig. 7 shows a simplified outline of these test methods.
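As an illustrative stand-in for the GraphPad Prism computation (not the tool the text used), the AUC for one organ class can be computed directly as the probability that a randomly chosen positive image receives a higher score than a randomly chosen negative one (the Mann-Whitney form of the area under the ROC curve):

```python
# Rank-based (Mann-Whitney) computation of the area under the ROC curve.
# The per-image probability scores below are hypothetical examples.

def auc(scores_pos, scores_neg):
    """AUC = P(score of a positive image > score of a negative image)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical "stomach" probabilities for stomach vs. non-stomach images:
print(round(auc([0.99, 0.95, 0.90], [0.10, 0.40, 0.92]), 3))  # → 0.889
```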
First, the GIE (gastrointestinal endoscopy) images were classified into the 4 major categories (larynx, esophagus, stomach, and duodenum). The CNN system of embodiment 2 correctly classified the anatomical positions of 16,632 (97%) of the 17,081 images. The probability values calculated by the CNN system ranged from 0 to 100%, and for 96% of the images the highest probability value was 90% or more (see table 5).
[ Table 5]
Probability value (PS) | Correctly classified, n (%) | Total, n (%) | Accuracy (%) |
>99% | 15,168(91) | 15,265(89) | 99.4 |
99-90% | 980(6) | 1,101(6) | 89.0 |
90-70% | 336(2) | 437(3) | 76.5 |
70-50% | 143(1) | 264(2) | 54.2 |
<50% | 5(0) | 14(0) | 35.7 |
Total | 16,632(100) | 17,081(100) | 97.4 |
That is, the CNN system of embodiment 2 automatically identified the anatomical location of the GIE images with AUC values of 1.00 for the pharynx (95% Confidence Interval (CI): 1.00 to 1.00, p < 0.0001), 1.00 for the esophagus (95% CI: 1.00 to 1.00, p < 0.0001), 0.99 for the stomach (95% CI: 0.99 to 1.00, p < 0.0001), and 0.99 for the duodenum (95% CI: 0.99 to 0.99, p < 0.0001) (see FIGS. 8A to 8D). The sensitivity and specificity of the CNN system of embodiment 2 in identifying the anatomical position of each site were 93.9% and 100% for the larynx, 95.8% and 99.7% for the esophagus, 89.9% and 93.0% for the stomach, and 87.0% and 99.2% for the duodenum (see table 6).
[ Table 6]
Next, it was examined whether the CNN system of embodiment 2 can correctly classify the anatomical positions of image data acquired from different parts of the stomach. Since several gastric diseases tend to occur in specific regions of the stomach, accurately classifying the anatomical location within the stomach is important. In training the CNN system of embodiment 2, a total of 13,048 stomach images were used, comprising 3,532 images of the upper stomach, 6,379 of the middle stomach, and 3,137 of the lower stomach.
As shown in fig. 9, the CNN system of embodiment 2 achieved AUC values of 0.99 (95% CI: 0.99 to 0.99, p < 0.0001) for the upper, middle, and lower stomach, accurately classifying the anatomical positions of these stomach images. The sensitivity and specificity of the CNN in classifying the anatomical location of each part were 96.9% and 98.5% for the upper stomach, 95.9% and 98.0% for the middle stomach, and 96.0% and 98.8% for the lower stomach (see table 7). That is, the CNN system of embodiment 2 correctly classified the GIE image data into the 3 anatomical positions of the stomach (upper, middle, and lower stomach).
[ Table 7]
[ evaluation of misclassified images ]
Finally, to provide a basis for improving CNN performance, the images misclassified by the CNN were studied. Typical examples of misclassified images are shown in figs. 10 to 13. Fig. 10A shows an image of the duodenum misclassified as the lower stomach, and fig. 10B shows a correctly classified image of the lower stomach. As these images show, the duodenum sometimes resembles the lower stomach when viewed from the side, which is considered to be the cause of the misclassification.
Fig. 11A shows an image of the esophagus misclassified as the lower stomach, and fig. 11B shows an image correctly classified as the lower stomach. Fig. 12A shows an image of the duodenum misclassified as the middle stomach, and fig. 12B shows a correctly classified image of the upper stomach. Fig. 13A shows an image of the pharynx misclassified as the esophagus, and fig. 13B shows a correctly classified image of the esophagus.
In the images of figs. 11A and 12A, the overall structure could not be recognized because of insufficient air insufflation into the lumen and/or the endoscope being too close to the luminal wall. Fig. 13A, on the other hand, shows an image of the larynx, but it differs markedly from what is observed with the naked eye and was misclassified as the esophagus.
Architectures for image classification continue to advance. According to the most recent results of the ImageNet Large Scale Visual Recognition Challenge 2016 (ILSVRC2016, http://image-net.org/challenges/LSVRC/2016/results) using this technology, classification error rates were 3.0% to 7.3%. The CNN system of embodiment 2 likewise showed highly accurate classification results, with high AUCs of 0.99 to 1.00, confirming the potential of CNNs for classifying GIE images by anatomical location. This ability to identify anatomical classifications is an important step in determining whether a CNN system can be used, in particular, in the diagnosis of gastrointestinal disease by automatically detecting lesions or abnormalities in images taken from patients.
Since physicians need years of dedicated training to become GIE experts, the CNN system of the present invention reduces this burden and benefits patients. In this regard, the present results show a promising possibility of establishing a new GIE support system based on the CNN system of the present invention. Furthermore, automatically collected and electronically stored GIE images are routinely reviewed by a second observer so that no disease is overlooked; automatic GIE image classification is therefore useful in clinical settings by itself. Anatomically classified images can be interpreted more easily by the second observer, reducing that burden as well.
[ embodiment 3]
In embodiment 3, an example is described in which the diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program according to the present invention are applied to ulcerative colitis. The clinical data of patients who underwent colonoscopy at a hospital affiliated with the inventors were reviewed retrospectively. In total, the symptoms, endoscopic findings, and pathological findings of 958 patients were included. The patients underwent full-length colonoscopy, and the obtained colonoscopy images were reviewed by 3 gastroenterologists.
In embodiment 3, only normal white-light images at normal magnification were included, and enhanced images such as narrow-band images were excluded. Unclear images affected by stool, blurring, halation, or air inclusion were also excluded, and the remaining clear images were classified according to the part of the colon or rectum, that is, the right colon (cecum, ascending, and transverse colon), the left colon (descending and sigmoid colon), and the rectum, and according to endoscopic disease activity using a 3-division Mayo endoscopic score (Mayo 0, Mayo 1, and Mayo 2-3). Fig. 14 shows typical colonoscopy images of the rectum classified as Mayo 0 to Mayo 3: fig. 14A shows an example of Mayo 0, fig. 14B an example of Mayo 1, fig. 14C an example of Mayo 2, and fig. 14D an example of Mayo 3.
Each image was reviewed by at least 2 of the 3 gastroenterologists; when the reviewers' classifications differed, the image was re-reviewed and a Mayo endoscopic score was assigned. The Mayo endoscopic score assigned in this manner is referred to as the "true Mayo classification".
26,304 images from 841 ulcerative colitis patients taken during the period from October 2006 to March 2017 were used as the training dataset, and 4,589 images from 117 ulcerative colitis patients collected during the period from April to June 2017 were used as the validation dataset. Table 8 shows the patient characteristics of the training and validation datasets.
[ Table 8]
C-T: cecum, ascending colon, and transverse colon
D-S: descending colon and sigmoid colon
These original endoscopic images were randomly rotated between 0 and 359 degrees, the black frame surrounding each image was cropped away on each side, and the images were enlarged or reduced to 0.9 or 1.1 times using software. Finally, the data were augmented so that the training dataset exceeded 1 million images.
All images were taken with a standard colonoscope (EVIS LUCERA ELITE, Olympus Medical Systems, Tokyo, Japan). All accompanying patient information was anonymized prior to data analysis, so that none of the endoscopists involved in the study had access to identifiable patient information. The study was approved by the Japan Medical Association institutional ethics review board (ID: JMA-IIA00283). Informed consent was waived because the study was retrospective and used fully anonymized data.
To make the training image data of embodiment 3 obtained in this way compatible with GoogLeNet, all images were resized to 244 × 244 pixels. The CNN system used in embodiment 3 was trained with the same system as the CNN system of embodiment 1 using these training image data, except that the learning rate was changed to 0.0002.
The trained CNN-based diagnostic system produces, for each image, probability values (PS) in the range of 0 to 1, each representing the probability that the image belongs to a given Mayo endoscopic score. The category with the highest probability value is taken as the CNN's final classification. Fig. 15 shows a simplified overview of the CNN-based diagnostic system of embodiment 3, and fig. 16 shows representative images together with the 3-division Mayo endoscopic scores obtained by the CNN of embodiment 3: fig. 16A shows an example of the transverse colon classified as Mayo 0, fig. 16B an example of the rectum classified as Mayo 1, and fig. 16C an example of the rectum classified as Mayo 2-3.
The performance of the CNN was verified by evaluating whether it could classify each image into 2 divisions, Mayo 0-1 and Mayo 2-3. The reason is that Mayo 0 and Mayo 1 indicate remission (symptoms controlled), whereas Mayo 2 and Mayo 3 indicate active inflammation. Then, for the Mayo endoscopic score, a Receiver Operating Characteristic (ROC) curve was plotted, and the area under the ROC curve (AUC) with a 95% Confidence Interval (CI) was determined using GraphPad Prism 7 (GraphPad Software, Inc., USA). AUC was also evaluated for the various parts of the colon and rectum (right colon, left colon, and rectum).
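The 2-division grouping described above maps the four Mayo scores onto two clinically meaningful classes. A minimal sketch (the function and label strings are illustrative assumptions; the grouping itself is the one stated in the text):

```python
# 2-division grouping used for verification: Mayo 0-1 = remission,
# Mayo 2-3 = active inflammation. Label strings are illustrative.

def mayo_group(score):
    """score: 0, 1, 2, or 3 (Mayo endoscopic score)."""
    return "Mayo0-1 (remission)" if score <= 1 else "Mayo2-3 (inflamed)"

print(mayo_group(1))  # → Mayo0-1 (remission)
print(mayo_group(3))  # → Mayo2-3 (inflamed)
```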
The association between the true Mayo endoscopic score and the CNN's 2-division classification into Mayo 0-1 and Mayo 2-3 is shown in Table 9. In addition, FIG. 17A shows the probability scores for the 2-division Mayo endoscopic scores Mayo 0-1 and Mayo 2-3 in shades of color, and FIG. 17B shows the corresponding ROC curve.
[ Table 9]
From the results shown in Table 9 and FIG. 17, 96% of the Mayo 0-1 images and 49% of the Mayo 2-3 images were correctly classified by the CNN into their respective Mayo endoscopic scores. The ROC curve in FIG. 17B shows a high AUC of 0.95 (95% CI: 0.94-0.95) for the 2-division classification into Mayo 0-1 and Mayo 2-3, indicating high CNN performance.
[ Association between the true Mayo classification and the CNN classification according to the respective positions of colon and rectum ]
Next, since the positions of the colon and rectum may have an influence on the performance of the CNN, the performance of the CNN according to the positions of the colon and rectum (right colon, left colon, and rectum) was evaluated.
Table 10 shows, for each location, the AUC of the ROC curve and the agreement between the true Mayo classification and the CNN's results. When classifying into the 2 divisions of Mayo 0-1 and Mayo 2-3, the CNN performed better in the right colon (AUC 0.96) and the left colon (AUC 0.97) than in the rectum (AUC 0.88).
[ Table 10]
C-T: cecum, ascending colon, and transverse colon
D-S: descending colon and sigmoid colon
As described above, embodiment 3 confirmed that the trained CNN can recognize the endoscopic disease activity of ulcerative colitis images and shows promising performance in distinguishing images in remission (Mayo 0, Mayo 1) from images with severe inflammation (Mayo 2, Mayo 3).
According to the results of embodiment 3, the CNN showed very good performance in identifying mucosa in a healed state (Mayo 0, Mayo 1), performing better in the right and left colon than in the rectum. One reason is that most of the images used were from ulcerative colitis patients under treatment. Treatment changes the severity of inflammation and produces "patchy" or "discontinuous" inflammatory lesions, making it difficult to assign a Mayo endoscopic score properly. Because the rectum has several treatment options, patchy or discontinuous lesions occur there frequently, which may explain the further degradation of the CNN's classification performance in the rectum.
In embodiment 3, the CNN's classification was defined as the Mayo endoscopic score with the highest probability value. Therefore, even an image whose Mayo 1 probability value is as low as 0.344 is classified as Mayo 1, the same as an image whose Mayo 1 probability value is 1.00. This is one reason why the agreement between the true Mayo endoscopic score and the CNN's classification is relatively low compared with the AUC value. Accordingly, setting an optimal threshold on the probability value used to define the CNN's classification could improve the CNN's performance.
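A threshold of the kind suggested above can be sketched as follows. The threshold value, class labels, and the "defer" behavior are illustrative assumptions, not part of the patent's method; the point is only that a low-confidence arg-max (such as the 0.344 example in the text) need not be forced into a class:

```python
# Arg-max classification with a minimum-probability threshold. All labels,
# the threshold value, and the fallback string are illustrative assumptions.

def classify_with_threshold(ps_by_class, threshold=0.5):
    label, ps = max(ps_by_class.items(), key=lambda kv: kv[1])
    return label if ps >= threshold else "indeterminate (defer to physician)"

# The low-confidence case from the text: highest PS is only 0.344.
print(classify_with_threshold({"Mayo0": 0.330, "Mayo1": 0.344, "Mayo2-3": 0.326}))
# → indeterminate (defer to physician)
```

In practice the threshold would itself be tuned on validation data, e.g. by sweeping it and choosing the value that maximizes agreement with the true Mayo classification.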
The analysis of embodiment 3 has several limitations. First, as this was a retrospective study, a prospective study is needed to assess the performance of the CNN in actual clinical situations. Second, since all the colonoscopy images were collected at a clinic without inpatient facilities, Mayo 2-3 images are relatively few in both the training and verification data. Training the CNN with more Mayo 2-3 images is therefore expected to further improve its performance.
[ embodiment 4]
In embodiment 4, an example is described in which the endoscopic-image-based disease diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program according to the present invention are applied to esophageal disease. The ECS (endocytoscopy system) used here is a novel magnifying endoscope capable of observing surface epithelial cells in real time in vivo using a vital stain such as methylene blue. The present inventors first performed a clinical test using ECS in 2003 and reported the characteristics of normal squamous epithelium and the surface epithelial cells of esophageal cancer (see non-patent document 4). The currently commercially available 4th-generation ECS has an optical magnification of 500 times and can be further magnified to a maximum of 900 times using the digital magnification function built into the video processor.
At a hospital to which one of the inventors belongs, 308 esophageal ECS examinations were performed on 241 patients during the period from November 2011 to February 2018. The endoscopes used were 2 prototype ECS (Olympus Medical Systems): GIF-Y0002 (optical magnification: 400 times (digital magnification: 700 times)) and GIF-Y0074 (optical magnification: 500 times (digital magnification: 900 times)). As the endoscopic video system, standard systems (EVIS LUCERA CV-260/CLV-260 and EVIS LUCERA ELITE CV-290/CLV-290SL: Olympus Medical Systems) were used.
As shown in fig. 18, endoscopic images at the maximum optical magnification of the ECS (low-magnification images; a to e in fig. 18) and endoscopic images at a digital magnification of 1.8 times (high-magnification images; f to j in fig. 18) were stored together. Of the 308 ECS examinations, 4,715 images from 240 examinations performed between November 2011 and 2016 were used as the training dataset for developing the ECS image analysis algorithm.
The training dataset includes 126 cases of esophageal cancer (58 superficial and 68 advanced cancers), 106 cases of esophagitis (52 radiation-associated esophagitis, 45 gastroesophageal reflux disease and candida esophagitis, 2 eosinophilic esophagitis, and 3 other esophagitis), and 8 cases of normal squamous epithelium. All lesions were diagnosed histologically using biopsy tissue or resected specimens. Finally, the training dataset was classified into "malignant images" (1,141 images: 231 low-magnification, 910 high-magnification) and "non-malignant images" (3,574 images: 1,150 low-magnification, 2,424 high-magnification). In embodiment 4, the CNN was trained on the low- and high-magnification images together.
As the test dataset for evaluating the diagnostic ability of the constructed CNN, 55 cases in which the surface cell morphology of the esophagus could be observed well by ECS were used. The test data comprised 1,520 images (467 low-magnification, 1,053 high-magnification) covering 27 malignant and 28 non-malignant lesions. All malignant lesions were esophageal squamous cell carcinoma, comprising 20 superficial and 7 advanced cancers. The non-malignant lesions comprised 27 cases of esophagitis (14 radiation-associated esophagitis, 9 gastroesophageal reflux disease (GERD), and 4 other esophagitis) and 1 case of esophageal papilloma.
To make the training image data of embodiment 4 obtained in this way compatible with GoogLeNet, all images were resized to 244 × 244 pixels. The CNN system used in embodiment 4 was trained with the same system as the CNN system of embodiment 1 using these training image data.
[ diagnostic results Using CNN ]
After the CNN was constructed using the training dataset, the test data were examined. For each test image, the probability of malignancy was calculated, a Receiver Operating Characteristic (ROC) curve was plotted, and the area under the ROC curve (AUC) was calculated. The cutoff value was determined in consideration of the sensitivity and specificity on the ROC curve.
In the per-lesion analysis, a lesion was diagnosed as malignant when 2 or more ECS images acquired from the same lesion were diagnosed as malignant. The sensitivity, positive predictive value (PPV), negative predictive value (NPV), and specificity of the CNN for diagnosing esophageal cancer were calculated as follows.
Sensitivity = (number of malignant lesions correctly diagnosed by the CNN) / (number of histologically proven esophageal cancer lesions)
PPV = (number of malignant lesions correctly diagnosed by the CNN) / (number of lesions diagnosed as esophageal cancer by the CNN)
NPV = (number of non-malignant lesions correctly diagnosed by the CNN (1 or fewer ECS images diagnosed as malignant)) / (number of lesions diagnosed as non-malignant by the CNN)
Specificity = (number of correctly diagnosed non-malignant lesions) / (number of histologically confirmed non-malignant lesions)
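The four formulas above reduce to standard confusion-matrix arithmetic. A sketch in code, using example lesion counts consistent with the embodiment-4 test set (27 malignant and 28 non-malignant lesions, 25 of each correctly diagnosed); the function and variable names are illustrative:

```python
# Sensitivity, specificity, PPV, and NPV from lesion-level counts, where a
# lesion counts as CNN-malignant when 2 or more of its ECS images are judged
# malignant. tp/fp/fn/tn and the counts below are illustrative.

def metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # correct malignant / all true cancers
        "specificity": tn / (tn + fp),  # correct non-malignant / all true non-malignant
        "ppv": tp / (tp + fp),          # correct malignant / all CNN-malignant
        "npv": tn / (tn + fn),          # correct non-malignant / all CNN-non-malignant
    }

m = metrics(tp=25, fp=3, fn=2, tn=25)  # example counts from the 55-lesion test set
print({k: round(v, 3) for k, v in m.items()})
```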
Prior to the diagnosis with the CNN, 1 experienced endoscopist made a diagnosis during the endoscopic examination. The endoscopist classified all cases into the following types.
Type 1: Surface epithelial cells have a low nucleus/cytoplasm ratio and low cell density. No nuclear atypia can be confirmed. (see fig. 19a)
Type 2: Nuclear density is high, but there is no definite nuclear atypia. The intercellular boundaries are unclear. (see fig. 19b)
Type 3: A clear increase in nuclear density and nuclear atypia are observed. Swelling of the nuclei is also confirmed. (see fig. 19c)
[ clinical diagnosis and pathological diagnosis results of test data ]
The results of clinical and pathological diagnosis of the test data cases are shown in table 11.
[ Table 11]
As shown in table 11, all 27 cases clinically diagnosed as esophageal cancer were squamous cell carcinoma. Of the 9 cases clinically diagnosed as gastroesophageal reflux disease, 4 were histologically diagnosed as regenerative epithelium, 3 as esophageal ulcer, and 2 as esophagitis. Of the 14 cases clinically diagnosed as radiation esophagitis, 2 were histologically diagnosed as regenerative epithelium and 8 as esophagitis; the remainder were esophageal ulcer, esophagitis with atypical cells, granulation tissue, and necrotic tissue.
[ ROC curve, area under the curve, and cutoff value ]
The CNN obtained in embodiment 4 required 17 seconds to analyze the 1,520 images (0.011 seconds per image). The AUC of the ROC curve plotted from all images (fig. 20A) was 0.85. ROC curves were also created separately for the high-magnification images (fig. 20B) and the low-magnification images (fig. 20C); their AUCs were 0.90 and 0.72, respectively.
Because high specificity for non-malignant lesions was desired, the cutoff value for the probability of malignancy was set to 90% based on the ROC curve. As shown in Table 12, for all images the sensitivity, specificity, PPV, and NPV were 39.4%, 98.2%, 92.9%, and 73.2%, respectively. When the high-magnification and low-magnification images were analyzed separately, the sensitivity, specificity, PPV, and NPV were 46.4%, 98.4%, 95.2%, and 72.6% for the high-magnification images, and 17.4%, 98.2%, 79.3%, and 74.4% for the low-magnification images, respectively.
[ Table 12]
CNN | Sensitivity | Specificity | Positive predictive value (PPV) | Negative predictive value (NPV)
---|---|---|---|---
All images | 39.4 | 98.2 | 92.9 | 73.2
High-magnification images (HMP) | 46.4 | 98.4 | 95.2 | 72.6
Low-magnification images (LMP) | 17.4 | 98.2 | 79.3 | 74.4
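The four quantities in Table 12 follow directly from the confusion matrix obtained at the 90% cutoff. A minimal sketch, with a hypothetical helper name and toy data (not the study's images):

```python
def table12_metrics(probs, is_malignant, cutoff=0.90):
    """Per-image sensitivity, specificity, PPV and NPV (in %) at a
    malignancy-probability cutoff, defined as in Table 12."""
    tp = sum(1 for p, y in zip(probs, is_malignant) if y and p > cutoff)
    fn = sum(1 for p, y in zip(probs, is_malignant) if y and p <= cutoff)
    fp = sum(1 for p, y in zip(probs, is_malignant) if not y and p > cutoff)
    tn = sum(1 for p, y in zip(probs, is_malignant) if not y and p <= cutoff)
    return {
        "sensitivity": 100.0 * tp / (tp + fn),  # malignant images caught
        "specificity": 100.0 * tn / (tn + fp),  # non-malignant images cleared
        "PPV": 100.0 * tp / (tp + fp),          # trust in a "malignant" call
        "NPV": 100.0 * tn / (tn + fn),          # trust in a "non-malignant" call
    }

# Toy data: 2 malignant and 2 non-malignant images.
print(table12_metrics([0.95, 0.50, 0.95, 0.10], [True, True, False, False]))
```

Raising the cutoff trades sensitivity for specificity, which is why the 90% threshold yields the high specificities but modest sensitivities seen in the table.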
Among the 622 high-magnification images clinically diagnosed as non-malignant, 10 images (1.6%) had a malignancy probability exceeding 90%. These 10 images comprised 7 obtained from white-spot portions, 2 from atypical epithelium, and 1 from normal epithelium. Among the 332 low-magnification images clinically diagnosed as non-malignant, 7 images (2.1%) diagnosed as esophagitis had a malignancy probability exceeding 90%.
[ evaluation of the endoscopist's type classification for each disease case and comparison with the CNN diagnosis ]
Table 13 shows the relationship between the diagnosis result obtained by using the CNN constructed in embodiment 4 and the type classified by the endoscopist in 27 cases of esophageal squamous cell carcinoma.
[ Table 13]
From the results shown in table 13, among the 27 cases of esophageal squamous cell carcinoma, the CNN accurately diagnosed 25 cases as malignant (2 or more images with a malignancy probability exceeding 90%), for a sensitivity of 92.6%. The median (range) of the proportion of images containing cancer cells that the CNN recognized as malignant was 40.9% (0 to 88.9%). The CNN accurately diagnosed 25 of the 28 non-malignant lesions as non-malignant (specificity 89.3%). The PPV, NPV, and overall accuracy were 89.3%, 92.6%, and 90.9%, respectively.
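The per-case rule stated above — a case is counted as malignant when 2 or more of its images carry a malignancy probability above 90% — can be sketched as follows (the function and parameter names are illustrative, not from the patent):

```python
def case_is_malignant(image_probs, cutoff=0.90, min_positive=2):
    """Case-level call: malignant when at least `min_positive` of the
    case's ECS images exceed the malignancy-probability cutoff."""
    return sum(1 for p in image_probs if p > cutoff) >= min_positive

print(case_is_malignant([0.95, 0.92, 0.10]))  # True: two images above 0.90
print(case_is_malignant([0.95, 0.50]))        # False: only one image above 0.90
```

Requiring two positive images rather than one makes the per-case decision more robust to a single spurious high-probability image.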
All malignant lesions were classified as type 3 by the endoscopist. Of the 28 non-malignant lesions, the endoscopist classified 13 as type 1, 12 as type 2, and 3 as type 3. When type 3 is taken to correspond to malignancy, the sensitivity, specificity, PPV, NPV, and overall accuracy of the endoscopist's diagnosis were 100%, 89.3%, 90.0%, 100%, and 94.5%, respectively.
The 2 malignant lesions that the CNN misdiagnosed as non-malignant were accurately classified as type 3 by the endoscopist. These 2 cases were superficial cancers in which the increases in nuclear density and nuclear atypia were evident, although swelling of the nuclei was not obvious (see fig. 21). Fig. 21A is the normal endoscopic image, fig. 21B the corresponding ECS image, and fig. 21C the corresponding pathological examination image.
The most promising goal of ECS diagnosis of the esophagus is to omit tissue biopsy for esophageal squamous cell carcinoma. In the results of embodiment 4, a particularly high AUC was observed in the ROC curve for the high-magnification images. A previous trial examined whether a single pathologist could identify malignancy based solely on ECS images. When only low-magnification images were used, the pathologist diagnosed about 2/3 of the esophagitis cases as malignant: in those cases the nuclear density was increased, but nuclear atypia could not be identified because of the low magnification. The experiment was then repeated using images at 1.8 times the magnification (high magnification) so that nuclear atypia could be identified, and the pathologist's sensitivity and specificity improved sharply to 90% or more.
In embodiment 4, all non-malignant low-magnification images misdiagnosed as malignant were obtained from cases of esophagitis. These images showed increased nuclear density, but nuclear atypia could not be assessed because of the low magnification. It is therefore recommended to refer to high-magnification images rather than low-magnification images. On the other hand, most of the non-malignant high-magnification images misdiagnosed as malignant were obtained from observation of white-spot portions. ECS images of white spots are characterized by clusters of inflammatory cells, and it is presumed that these images were recognized as low-magnification images of squamous cell carcinoma (see fig. 18). In the future, it is therefore preferable to build 2 different CNNs, one for high-magnification images and one for low-magnification images.
In order to omit tissue biopsy, malignant misdiagnosis (false positives) of non-malignant lesions must be minimized: overdiagnosis of a squamous epithelial tumor by an endoscopist using ECS can lead to surgery that carries a risk of death. In papers applying AI to early cancer detection, a certain false-positive rate is tolerated, so the threshold is set at a relatively low probability. In this study, by contrast, the threshold for the probability of malignancy was set to 90% in order to minimize such false positives.
In the per-case evaluation, the endoscopist applied the type classification for diagnosis with ECS. Type 1 and type 3 correspond to non-neoplastic epithelium and esophageal squamous cell carcinoma, respectively, and in these cases tissue biopsy may be omitted. However, since type 2 includes various histological conditions, a tissue biopsy should be performed to determine the treatment course. In the results of embodiment 4, the diagnostic results of the CNN on ECS images exceeded 90% of those of the specialist endoscopist.
Even with this high threshold, acceptable results were obtained, and in these cases tissue biopsy can be omitted with CNN support. Furthermore, the 2 malignant cases that the CNN failed to diagnose as malignant were accurately classified as type 3 by the endoscopist. Since ordinary endoscopic observation of these lesions clearly indicated malignancy, tissue biopsy can be omitted in such cases using ECS images alone, without CNN support.
On the other hand, the specificity in embodiment 4 was 89.3%, with 3 non-malignant lesions diagnosed as malignant by the CNN: 1 case of grade C gastroesophageal reflux disease and 2 cases of radiation esophagitis. The pathological diagnoses of these lesions were regenerative epithelium, granulation tissue, and esophageal ulcer. Regenerative epithelium is difficult to distinguish from esophageal squamous cell carcinoma even for experienced pathologists.
In 1 case of radiation esophagitis, the endoscopist diagnosed type 3 and the CNN of embodiment 4 diagnosed malignancy. Even when a tissue biopsy is performed after irradiation, it is sometimes difficult to diagnose recurrent or residual esophageal cancer. For follow-up after radiation therapy of esophageal cancer, a tissue biopsy should therefore be performed in addition to ECS observation.
The results of embodiment 4 indicate several limitations of this CNN study. First, the amount of test data is small. To address this, it is planned to acquire a very large number of images from moving images; a more deeply trained CNN is expected to improve the diagnostic results. Second, 2 different ECS instruments with optical magnifications of 400 times and 500 times were used, and the difference in magnification between them may affect the diagnostic results of the CNN. It is therefore expected that future use will be limited to images from the commercially available ECS with 500 times optical magnification.
In conclusion, the CNN of embodiment 4 can strongly support the endoscopist in omitting tissue biopsy on the basis of ECS images. To identify nuclear atypia, high-magnification images obtained with digital zoom should be referred to instead of low-magnification images.
[ embodiment 5]
A method for assisting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to embodiment 5 will be described with reference to fig. 23. In embodiment 5, the disease diagnosis support methods based on endoscopic images of digestive organs using the neural networks according to embodiments 1 to 4 can be used. In S1, the neural network is trained using 1st endoscopic images of a digestive organ and, for each 1st endoscopic image, the result of determination of at least one of: whether the disease of the digestive organ is positive or negative, a past disease, a severity level, or information corresponding to the imaged part. In S2, the neural network trained in S1 outputs, based on a 2nd endoscopic image of the digestive organ, at least one of the probability of the disease being positive and/or negative, the probability of a past disease, the severity level of the disease, or information corresponding to the imaged part.
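The two steps S1 and S2 can be pictured structurally as a train/predict interface. The sketch below uses a trivial stub in place of a real convolutional neural network — `SimpleCNNStub` and its label-frequency "model" are purely illustrative placeholders for the trained network described above, which would actually learn from image pixels:

```python
# Structural sketch of FIG. 23's two steps (S1: train, S2: infer).
class SimpleCNNStub:
    def __init__(self):
        self.prior = {}

    def train(self, first_images, diagnoses):  # S1
        """Learn from 1st endoscopic images paired with diagnosis
        results (positive/negative, past disease, severity, part)."""
        for label in diagnoses:
            self.prior[label] = self.prior.get(label, 0) + 1
        total = sum(self.prior.values())
        self.prior = {k: v / total for k, v in self.prior.items()}

    def predict(self, second_image):  # S2
        """Output label probabilities for a 2nd endoscopic image.
        The stub ignores pixels and returns the label frequencies."""
        return dict(self.prior)

model = SimpleCNNStub()
model.train(["img1", "img2"], ["positive", "negative"])
print(model.predict("img3"))  # {'positive': 0.5, 'negative': 0.5}
```

A real implementation would replace the frequency table with a CNN whose output layer yields these per-label probabilities.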
In S1, the 1st endoscopic images may be subjected to contrast adjustment, and each 1st endoscopic image may be associated with the part where it was captured. The site may include at least one of the pharynx, esophagus, stomach, or duodenum, and the site may be divided into a plurality of sections in at least one of the plurality of digestive organs. Where the site is the stomach, the divisions may include at least one of an upper stomach, a middle stomach, or a lower stomach, and may further include at least one of a cardiac portion, a fundus portion, a corpus portion, a gastric corner portion, a vestibular portion, a pyloric antrum, or a pyloric portion.
When the number of 1st endoscopic images for a given imaged part is smaller than for other parts, the numbers of 1st endoscopic images for all parts may be made substantially equal by applying at least one of rotation, enlargement, reduction, change in the number of pixels, extraction of bright or dark portions, or extraction of color-tone-changed portions to the 1st endoscopic images.
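The balancing described above amounts to computing, for each under-represented part, how many augmented variants (rotation, enlargement, reduction, and so on) are needed to match the best-represented part. A minimal sketch with a hypothetical function name and toy counts:

```python
def augmentation_plan(counts_per_part):
    """For each imaged part, the number of augmented image variants
    needed so every part matches the best-represented one."""
    target = max(counts_per_part.values())
    return {part: target - n for part, n in counts_per_part.items()}

# Toy per-part image counts (illustrative, not the patent's data):
plan = augmentation_plan({"upper stomach": 100, "middle stomach": 60,
                          "lower stomach": 85})
print(plan)  # {'upper stomach': 0, 'middle stomach': 40, 'lower stomach': 15}
```

Each deficit would then be filled by applying the listed transformations to existing images of that part.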
The trained neural network may be configured to output information corresponding to the part where the 2 nd endoscope image is captured, or may output information corresponding to the part together with the probability or the severity.
When the 1st endoscopic image includes an endoscopic image of the stomach, the disease may include at least one of Helicobacter pylori infection or the presence or absence of Helicobacter pylori eradication. When the 1st endoscopic image includes a large-intestine endoscopic image, the disease may include at least ulcerative colitis, and the trained neural network may output results in a plurality of stages corresponding to the severity of the ulcerative colitis. When the 1st endoscopic image includes an esophageal endoscopic image, the disease may include at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis, and the trained neural network may classify and output at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis.
The 2 nd endoscope image may be at least one of an image photographed by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded in a computer-readable recording medium, or a moving image. In addition, as the neural network, a convolutional neural network may be used.
[ embodiment 6]
A diagnosis support system for a disease based on endoscopic images of a digestive organ, a diagnosis support program based on endoscopic images of a digestive organ, and a computer-readable recording medium according to embodiment 6 will be described with reference to fig. 24. In embodiment 6, the diagnosis support system for diseases based on endoscopic images of digestive organs described in embodiments 1 to 4 can be used. A disease diagnosis support system based on endoscopic images of digestive organs comprises an endoscopic image input unit (10), an output unit (30), and a computer (20) incorporating a neural network. The computer 20 includes: a 1 st storage area 21 for storing a 1 st endoscopic image of a digestive organ; a 2 nd storage area 22 for storing a result of diagnosis of at least one of a positive or negative disease of a digestive organ corresponding to the 1 st endoscopic image, a past disease, a severity level, and information corresponding to a part to be imaged; and a 3 rd storage area 23 storing a neural network program. The neural network program stored in the 3 rd storage area 23 is trained based on the 1 st endoscope image stored in the 1 st storage area 21 and the diagnosis result stored in the 2 nd storage area 22, and outputs at least one of the probability of positivity and/or negativity of a disease of the digestive organ with respect to the 2 nd endoscope image, the probability of a past disease, the severity level of a disease, and information corresponding to the imaged part to the output unit 30 based on the 2 nd endoscope image of the digestive organ input from the endoscope image input unit 10.
Contrast adjustment may also be performed on the 1st endoscopic images, and each 1st endoscopic image may be associated with the part where it was captured. The site may include at least one of the pharynx, esophagus, stomach, or duodenum, and the site may be divided into a plurality of sections in at least one of the plurality of digestive organs. Where the site is the stomach, the divisions may include at least one of an upper stomach, a middle stomach, or a lower stomach, and may further include at least one of a cardiac portion, a fundus portion, a corpus portion, a gastric corner portion, a vestibular portion, a pyloric antrum, or a pyloric portion.
When the number of 1st endoscopic images for a given imaged part is smaller than for other parts, the numbers of 1st endoscopic images for all parts may be made substantially equal by applying at least one of rotation, enlargement, reduction, change in the number of pixels, extraction of bright or dark portions, or extraction of color-tone-changed portions to the 1st endoscopic images.
The trained neural network program may be configured to output information corresponding to the part where the 2 nd endoscope image is captured, or may output information corresponding to the part together with the probability or the severity.
When the 1st endoscopic image includes an endoscopic image of the stomach, the disease may include at least one of Helicobacter pylori infection or the presence or absence of Helicobacter pylori eradication. When the 1st endoscopic image includes a large-intestine endoscopic image, the disease may include at least ulcerative colitis, and the trained neural network may output results in a plurality of stages corresponding to the severity of the ulcerative colitis. When the 1st endoscopic image includes an esophageal endoscopic image, the disease may include at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis, and the trained neural network may classify and output at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis.
The 2 nd endoscope image may be at least one of an image photographed by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded in a computer-readable recording medium, or a moving image. In addition, as the neural network, a convolutional neural network may be used.
The diagnosis support system for diseases based on endoscopic images of digestive organs is provided with a diagnosis support program based on endoscopic images of a digestive organ that causes the computer to operate as each of its components. The diagnosis support program based on endoscopic images of a digestive organ may be stored in a computer-readable recording medium.
[ embodiment 7]
A method for discriminating the part of a digestive organ based on an endoscopic image of the digestive organ using a neural network according to embodiment 7 will be described with reference to fig. 25. In embodiment 7, the methods for discriminating the part of a digestive organ based on endoscopic images using the neural networks described in embodiments 1 to 4 can be used. In S11, the neural network is trained using 1st endoscopic images of a digestive organ and determination information of the imaged part corresponding to each 1st endoscopic image. In S12, the trained neural network outputs information corresponding to the imaged part of the digestive organ based on a 2nd endoscopic image of the digestive organ.
[ embodiment 8]
A system for determining a site of a digestive organ based on an endoscopic image of a digestive organ, a program for determining a site of a digestive organ based on an endoscopic image of a digestive organ, and a computer-readable recording medium according to embodiment 8 will be described with reference to fig. 26. In embodiment 8, the system for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ using a neural network described in embodiments 1 to 4 can be used. The system for determining a part of a digestive organ based on an endoscopic image of the digestive organ according to embodiment 8 includes an endoscopic image input unit 40, an output unit 60, and a computer 50 incorporating a neural network. The computer 50 includes: a 1 st storage area 51 for storing a 1 st endoscopic image of a digestive organ; a 2 nd storage area 52 for storing determination information of information corresponding to the imaged part of the digestive organ corresponding to the 1 st endoscopic image; and a 3 rd storage area 53 storing a neural network program. The neural network program stored in the 3 rd storage area 53 is trained based on the 1 st endoscope image stored in the 1 st storage area 51 and the determination information stored in the 2 nd storage area 52, and outputs information corresponding to the region to be imaged of the digestive organ with respect to the 2 nd endoscope image to the output unit 60 based on the 2 nd endoscope image of the digestive organ input from the endoscope image input unit 40.
The system for discriminating the part of a digestive organ based on an endoscopic image of the digestive organ is provided with a program for discriminating the part of a digestive organ based on an endoscopic image of the digestive organ that causes the computer to operate as each of its components. The program for discriminating the part of a digestive organ based on an endoscopic image of the digestive organ may be stored in a computer-readable recording medium.
[ description of symbols ]
10 endoscope image input part
20 computer
21 st storage area
22 nd storage area 2
23 rd storage area 3
30 output part
40 endoscope image input part
50 computer
51 st storage area
52 nd storage area
53 No. 3 memory area
60 output part
Claims (36)
1. A method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network, comprising:
1 st endoscopic image using digestive organs, and
a result of determination of at least one of positive or negative of the disease of the digestive organ corresponding to the 1 st endoscopic image, a past disease, a severity level, or information corresponding to a part imaged,
to train the neural network, an
The trained neural network outputs at least one of a probability of positivity and/or negativity of the disease of the digestive organ, a probability of a past disease, a severity level of the disease, or information corresponding to the imaged part, based on a 2nd endoscopic image of the digestive organ.
2. The method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to claim 1, wherein:
the 1 st endoscope image is subjected to contrast adjustment.
3. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to claim 1 or 2, wherein:
the 1 st endoscope image is associated with the photographed parts, respectively.
4. The method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to claim 3, wherein:
the site includes at least one of the pharynx, esophagus, stomach, or duodenum.
5. The method for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to claim 3 or 4, wherein:
the site is divided into a plurality of sites in at least one of the plurality of digestive organs.
6. The method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to claim 5, wherein:
where the site is a stomach, the division includes at least one of an upper stomach, a middle stomach, or a lower stomach.
7. The method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to claim 5, wherein:
where the site is the stomach, the division includes at least one of a cardiac portion, a fundus portion, a corpus portion, a gastric corner portion, a vestibular portion, a pyloric antrum, or a pyloric portion.
8. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 3 to 7, wherein:
when the number of the 1 st endoscope images in the captured portion is smaller than that of other portions, the number of the 1 st endoscope images in all the portions is substantially equal by using at least one of rotation, enlargement, reduction, change in the number of pixels, capturing of a bright/dark portion, and capturing of a color-tone-changed portion of the 1 st endoscope image.
9. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 3 to 8, wherein:
the trained neural network can output information corresponding to a portion where the 2 nd endoscope image is captured.
10. The method for supporting diagnosis of a disease based on endoscopic images of a digestive organ using a neural network according to claim 9, wherein:
the trained neural network outputs the probability or the severity and information corresponding to the part together.
11. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 1 to 10, wherein:
the 1 st endoscopic image comprises an intragastric endoscopic image, and the disease comprises at least one of helicobacter pylori infection or the presence or absence of helicobacter pylori sterilization.
12. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 1 to 10, wherein:
the 1st endoscopic image includes a large-intestine endoscopic image, the disease includes at least ulcerative colitis, and the trained neural network outputs results in a plurality of stages corresponding to the severity of the ulcerative colitis.
13. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 1 to 10, wherein:
the 1st endoscopic image includes an esophageal endoscopic image obtained by using a super-magnification endoscope, the disease includes at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis, and the trained neural network classifies and outputs the at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis.
14. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 1 to 13, wherein:
the 2 nd endoscope image is at least one of an image captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded in a computer-readable recording medium, or a moving image.
15. The method for assisting diagnosis of a disease based on endoscopic imaging of a digestive organ using a neural network according to any one of claims 1 to 14, wherein:
as the neural network, a convolutional neural network is used.
16. A diagnosis support system for a disease based on endoscopic imaging of a digestive organ, comprising: has an endoscope image input unit, an endoscope image output unit, and a computer incorporating a neural network
The computer is provided with:
a 1 st storage area for storing a 1 st endoscopic image of a digestive organ;
a 2 nd storage area for storing a result of diagnosis of at least one of a positive or negative of the disease of the digestive organ corresponding to the 1 st endoscopic image, a past disease, a severity level, and information corresponding to a part to be imaged; and
a 3 rd storage area for storing the neural network program; and is
The neural network program is
Based on the 1 st endoscope image stored in the 1 st storage area and the diagnosis result stored in the 2 nd storage area, training is performed, and
based on the 2 nd endoscopic image of the digestive organ input from the endoscopic image input unit, at least one of a probability of positivity and/or negativity of a disease of the digestive organ with respect to the 2 nd endoscopic image, a probability of a past disease, a severity level of a disease, and information corresponding to the imaged part is output to the output unit.
17. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 16, wherein:
the 1 st endoscope image is subjected to contrast adjustment.
18. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 16 or 17, wherein:
the 1 st endoscope image is associated with the photographed parts, respectively.
19. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 18, wherein:
the site includes at least one of the pharynx, esophagus, stomach, or duodenum.
20. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 18 or 19, wherein:
the site is divided into a plurality of sites in at least one of the plurality of digestive organs.
21. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 20, wherein:
where the site is a stomach, the division includes at least one of an upper stomach, a middle stomach, or a lower stomach.
22. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 20, wherein:
where the site is the stomach, the division includes at least one of a cardiac portion, a fundus portion, a corpus portion, a gastric corner portion, a vestibular portion, a pyloric antrum, or a pyloric portion.
23. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ according to any one of claims 16 to 22, wherein:
when the number of the 1 st endoscope image in the captured portion is smaller than that of other portions, the number of training/verification data in all portions is substantially equal by using at least one of rotation, enlargement, reduction, change in the number of pixels, capturing of a light and dark portion, and capturing of a color-tone-changed portion of the 1 st endoscope image.
24. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ according to any one of claims 16 to 23, wherein:
the trained neural network program can output information corresponding to a part where the 2 nd endoscope image is captured.
25. The system for supporting diagnosis of a disease based on endoscopic imaging of a digestive organ according to claim 24, wherein:
the trained neural network program outputs the probability or the severity together with the information corresponding to the part.
26. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ according to any one of claims 16 to 25, wherein:
the 1 st endoscopic image comprises an intragastric endoscopic image, and the disease comprises at least one of helicobacter pylori infection or the presence or absence of helicobacter pylori sterilization.
27. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ using a neural network as set forth in any one of claims 16 to 25, wherein:
the 1st endoscopic image includes a large-intestine endoscopic image, the disease includes at least ulcerative colitis, and the trained neural network program outputs results in a plurality of stages corresponding to the severity of the ulcerative colitis.
28. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ using a neural network as set forth in any one of claims 16 to 25, wherein:
the 1st endoscopic image includes an esophageal endoscopic image obtained by using a super-magnification endoscope, the disease includes at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis, and the trained neural network classifies and outputs the at least one of esophageal cancer, gastroesophageal reflux disease, or esophagitis.
29. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ using a neural network as set forth in any one of claims 16 to 28, wherein:
the 2 nd endoscope image is at least one of an image captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded in a computer-readable recording medium, or a moving image.
30. The system for supporting diagnosis of a disease based on endoscopic imagery of a digestive organ using a neural network as claimed in any one of claims 16 to 29, wherein:
the neural network is a convolutional neural network.
31. A diagnosis support program for endoscopic imaging based on a digestive organ, comprising:
causes a computer to operate as each component of the diagnosis support system for diseases based on endoscopic images of a digestive organ according to any one of claims 16 to 30.
32. A computer-readable recording medium characterized in that:
a diagnosis support program for endoscopic imaging based on a digestive organ according to claim 31 is recorded.
33. A method for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ using a neural network, comprising:
1 st endoscopic image using digestive organs, and
determination information corresponding to the imaged part of the 1st endoscopic image, to train the neural network, and
the trained neural network outputs information corresponding to a photographed portion of a digestive organ based on a 2 nd endoscope image of the digestive organ.
34. A system for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ using a neural network, characterized in that: it comprises an endoscopic image input unit, an output unit, and a computer incorporating a neural network, wherein
the computer comprises:
a 1st storage area that stores 1st endoscopic images of a digestive organ;
a 2nd storage area that stores determination information corresponding to the imaged site of the digestive organ for each 1st endoscopic image; and
a 3rd storage area that stores a neural network program; and
the neural network program is
trained based on the 1st endoscopic images stored in the 1st storage area and the determination information stored in the 2nd storage area, and
outputs, to the output unit, information corresponding to the imaged site of the digestive organ based on a 2nd endoscopic image of the digestive organ input from the endoscopic image input unit.
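The three storage areas and the input/output units of claim 34 map naturally onto a small container class. This is a schematic only: all names are invented for illustration, and the "neural network program" slot holds a trivial threshold model rather than a real network.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DiagnosisSupportSystem:
    """Schematic of the claim-34 computer: three storage areas plus a
    slot for the trained neural-network program (here a toy model)."""
    area1_images: list = field(default_factory=list)   # 1st endoscopic images
    area2_labels: list = field(default_factory=list)   # determination information
    area3_network: object = None                       # trained model

    def store_training_pair(self, image_feature, label):
        """Fill the 1st and 2nd storage areas in lockstep."""
        self.area1_images.append(image_feature)
        self.area2_labels.append(label)

    def train(self):
        """Toy 'training': threshold midway between the two class means."""
        pairs = list(zip(self.area1_images, self.area2_labels))
        pos = mean(x for x, y in pairs if y == 1)
        neg = mean(x for x, y in pairs if y == 0)
        threshold = (pos + neg) / 2
        self.area3_network = lambda x: 1 if x > threshold else 0

    def classify(self, image2_feature):
        """Input unit -> network -> output unit for a 2nd image."""
        return self.area3_network(image2_feature)

system = DiagnosisSupportSystem()
for x, y in [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]:
    system.store_training_pair(x, y)
system.train()
print(system.classify(0.85))  # -> 1
```

The point of the structure is the claim's separation of concerns: training data (areas 1 and 2) and the learned program (area 3) live in distinct storage, so the program can be retrained or swapped without touching the I/O units.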
35. A program for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ, characterized in that:
it causes a computer to operate as each component of the system for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ according to claim 34.
36. A computer-readable recording medium characterized in that:
a program for discriminating a site of a digestive organ based on an endoscopic image of the digestive organ according to claim 35 is recorded.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-114792 | 2017-06-09 | ||
JP2017114792 | 2017-06-09 | ||
JP2017-213311 | 2017-11-02 | ||
JP2017213311 | 2017-11-02 | ||
PCT/JP2018/018316 WO2018225448A1 (en) | 2017-06-09 | 2018-05-11 | Disease diagnosis support method, diagnosis support system and diagnosis support program employing endoscopic image of digestive organ, and computer-readable recording medium having said diagnosis support program stored thereon |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111278348A true CN111278348A (en) | 2020-06-12 |
Family
ID=64567036
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880037797.9A Pending CN111278348A (en) | 2017-06-09 | 2018-05-11 | Diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing diagnosis support program for disease based on endoscopic image of digestive organ |
Country Status (6)
Country | Link |
---|---|
US (1) | US11270433B2 (en) |
JP (2) | JP6875709B2 (en) |
CN (1) | CN111278348A (en) |
SG (1) | SG11201911791RA (en) |
TW (1) | TW201902411A (en) |
WO (1) | WO2018225448A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111655116A (en) * | 2017-10-30 | 2020-09-11 | 公益财团法人癌研究会 | Image diagnosis support device, data collection method, image diagnosis support method, and image diagnosis support program |
CN112396591A (en) * | 2020-11-25 | 2021-02-23 | 暨南大学附属第一医院(广州华侨医院) | Osteoporosis intelligent evaluation method based on lumbar X-ray image |
CN112584749A (en) * | 2018-06-22 | 2021-03-30 | 株式会社Ai医疗服务 | Method for assisting diagnosis of disease based on endoscopic image of digestive organ, diagnosis assisting system, diagnosis assisting program, and computer-readable recording medium storing the diagnosis assisting program |
CN113539476A (en) * | 2021-06-02 | 2021-10-22 | 复旦大学 | Stomach endoscopic biopsy Raman image auxiliary diagnosis method and system based on artificial intelligence |
CN113610847A (en) * | 2021-10-08 | 2021-11-05 | 武汉楚精灵医疗科技有限公司 | Method and system for evaluating stomach markers in white light mode |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3727236A1 (en) | 2017-12-22 | 2020-10-28 | Coloplast A/S | Sensor assembly part and a base plate for an ostomy appliance and a method for manufacturing a sensor assembly part and a base plate |
AU2018391393B2 (en) | 2017-12-22 | 2024-08-22 | Coloplast A/S | Coupling part with a hinge for an ostomy base plate and sensor assembly part |
US12064369B2 (en) | 2017-12-22 | 2024-08-20 | Coloplast A/S | Processing schemes for an ostomy system, monitor device for an ostomy appliance and related methods |
DK3727243T3 (en) | 2017-12-22 | 2023-10-02 | Coloplast As | BASE PLATE AND SENSOR UNIT FOR A STOMA SYSTEM WITH A LEAK SENSOR |
CA3090672A1 (en) * | 2018-02-06 | 2019-08-15 | The Regents Of The University Of Michigan | Systems and methods for analysis and remote interpretation of optical histologic images |
WO2019161863A1 (en) | 2018-02-20 | 2019-08-29 | Coloplast A/S | Accessory devices of an ostomy system, and related methods for changing an ostomy appliance based on future operating state |
KR102237441B1 (en) * | 2018-02-28 | 2021-04-07 | 이화여자대학교 산학협력단 | Method and apparatus for reading lesion from capsule endoscopic image using neural network |
DK3764961T3 (en) | 2018-03-15 | 2024-04-22 | Coloplast As | APPARATUS AND METHODS FOR NAVIGATING A USER OF AN OSTOMY DEVICE TO A CHANGING ROOM |
WO2019198637A1 (en) * | 2018-04-13 | 2019-10-17 | 富士フイルム株式会社 | Image processing device, endoscope system, and image processing method |
WO2020061370A1 (en) * | 2018-09-20 | 2020-03-26 | Siemens Healthcare Diagnostics Inc. | Hypothesizing and verification networks and methods for specimen classification |
KR102210806B1 (en) * | 2018-10-02 | 2021-02-01 | 한림대학교 산학협력단 | Apparatus and method for diagnosing gastric lesion using deep learning of endoscopic images |
CN109523522B (en) * | 2018-10-30 | 2023-05-09 | 腾讯医疗健康(深圳)有限公司 | Endoscopic image processing method, device, system and storage medium |
CN113260341A (en) * | 2018-12-20 | 2021-08-13 | 科洛普拉斯特公司 | Device for classifying ostomy conditions using masking and related method |
BR112021011637A2 (en) | 2018-12-20 | 2021-09-08 | Coloplast A/S | METHOD TO CLASSIFY AN OSTOMY CONDITION |
WO2020162275A1 (en) | 2019-02-08 | 2020-08-13 | 富士フイルム株式会社 | Medical image processing device, endoscope system, and medical image processing method |
WO2020170791A1 (en) | 2019-02-19 | 2020-08-27 | 富士フイルム株式会社 | Medical image processing device and method |
EP3932323B1 (en) * | 2019-02-28 | 2024-09-04 | FUJIFILM Corporation | Ultrasonic endoscopic system and operating method of ultrasonic endoscopic system |
TWI711050B (en) * | 2019-03-12 | 2020-11-21 | 宏碁股份有限公司 | Medical image recognizition device and medical image recognizition method |
JP6883662B2 (en) * | 2019-03-27 | 2021-06-09 | Hoya株式会社 | Endoscope processor, information processing device, endoscope system, program and information processing method |
US11779222B2 (en) | 2019-07-10 | 2023-10-10 | Compal Electronics, Inc. | Method of and imaging system for clinical sign detection |
JP7477269B2 (en) * | 2019-07-19 | 2024-05-01 | 株式会社ニコン | Learning device, judgment device, microscope, trained model, and program |
JPWO2021020187A1 (en) * | 2019-07-26 | 2021-02-04 | ||
CN110495847B (en) | 2019-08-23 | 2021-10-08 | 重庆天如生物科技有限公司 | Advanced learning-based auxiliary diagnosis system and examination device for early cancer of digestive tract |
TWI709147B (en) * | 2019-10-16 | 2020-11-01 | 中國醫藥大學附設醫院 | System of deep learning neural network in prostate cancer bone metastasis identification based on whole body bone scan images |
EP4052162B1 (en) * | 2019-10-31 | 2024-08-14 | Siemens Healthcare Diagnostics, Inc. | Methods and apparatus for protecting patient information during characterization of a specimen in an automated diagnostic analysis system |
CN114945315A (en) * | 2020-01-20 | 2022-08-26 | 富士胶片株式会社 | Medical image processing apparatus, method for operating medical image processing apparatus, and endoscope system |
JPWO2021157392A1 (en) * | 2020-02-07 | 2021-08-12 | ||
US11232570B2 (en) * | 2020-02-13 | 2022-01-25 | Olympus Corporation | System and method for diagnosing severity of gastritis |
CN115087386A (en) * | 2020-02-18 | 2022-09-20 | 索尼奥林巴斯医疗解决方案公司 | Learning apparatus and medical image processing apparatus |
CN111353547B (en) * | 2020-03-06 | 2023-07-04 | 重庆金山医疗技术研究院有限公司 | Picture processing method, device and system based on deep learning |
WO2021181564A1 (en) * | 2020-03-11 | 2021-09-16 | オリンパス株式会社 | Processing system, image processing method, and learning method |
JPWO2021206170A1 (en) * | 2020-04-10 | 2021-10-14 | ||
CN115460968A (en) | 2020-04-27 | 2022-12-09 | 公益财团法人癌研究会 | Image diagnosis device, image diagnosis method, image diagnosis program, and learned model |
US11478124B2 (en) * | 2020-06-09 | 2022-10-25 | DOCBOT, Inc. | System and methods for enhanced automated endoscopy procedure workflow |
KR102492463B1 (en) * | 2020-06-24 | 2023-01-27 | 주식회사 뷰노 | Method to display lesion readings result |
EP4176794A4 (en) * | 2020-07-06 | 2024-06-12 | Aillis Inc. | Processing device, processing program, processing method, and processing system |
WO2022074992A1 (en) * | 2020-10-09 | 2022-04-14 | 富士フイルム株式会社 | Medical image processing device and method for operating same |
US11100373B1 (en) | 2020-11-02 | 2021-08-24 | DOCBOT, Inc. | Autonomous and continuously self-improving learning system |
CN112508827B (en) * | 2020-11-06 | 2022-04-22 | 中南大学湘雅医院 | Deep learning-based multi-scene fusion endangered organ segmentation method |
TWI767456B (en) * | 2020-12-18 | 2022-06-11 | 陳階曉 | Jaundice analysis system and method thereof |
WO2023089718A1 (en) | 2021-11-18 | 2023-05-25 | 日本電気株式会社 | Information processing device, information processing method, and recording medium |
KR102714219B1 (en) * | 2021-12-24 | 2024-10-08 | 주식회사 인피니트헬스케어 | Artificial intelligence-based gastroscopy diagnosis supporting system and method to improve gastro polyp and cancer detection rate |
CN113989284B (en) * | 2021-12-29 | 2022-05-10 | 广州思德医疗科技有限公司 | Helicobacter pylori assists detecting system and detection device |
CN114664410B (en) * | 2022-03-11 | 2022-11-08 | 北京医准智能科技有限公司 | Video-based focus classification method and device, electronic equipment and medium |
US12062169B2 (en) | 2022-04-25 | 2024-08-13 | Hong Kong Applied Science and Technology Research Institute Company Limited | Multi-functional computer-aided gastroscopy system optimized with integrated AI solutions and method |
WO2024075240A1 (en) * | 2022-10-06 | 2024-04-11 | 日本電気株式会社 | Image processing device, image processing method, and storage medium |
WO2024142290A1 (en) * | 2022-12-27 | 2024-07-04 | 日本電気株式会社 | Image processing device, image processing method and recording medium |
CN116012367B (en) * | 2023-02-14 | 2023-09-12 | 山东省人工智能研究院 | Deep learning-based stomach mucosa feature and position identification method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002165757A (en) * | 2000-11-30 | 2002-06-11 | Olympus Optical Co Ltd | Diagnostic supporting system |
CN104732243A (en) * | 2015-04-09 | 2015-06-24 | 西安电子科技大学 | SAR target identification method based on CNN |
WO2016094330A2 (en) * | 2014-12-08 | 2016-06-16 | 20/20 Genesystems, Inc | Methods and machine learning systems for predicting the liklihood or risk of having cancer |
CN205665697U (en) * | 2016-04-05 | 2016-10-26 | 陈进民 | Medical science video identification diagnostic system based on cell neural network or convolution neural network |
WO2017055412A1 (en) * | 2015-09-30 | 2017-04-06 | Siemens Healthcare Gmbh | Method and system for classification of endoscopic images using deep decision networks |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4409166B2 (en) * | 2002-12-05 | 2010-02-03 | オリンパス株式会社 | Image processing device |
JP5094036B2 (en) * | 2006-04-17 | 2012-12-12 | オリンパスメディカルシステムズ株式会社 | Endoscope insertion direction detection device |
US8081811B2 (en) | 2007-04-12 | 2011-12-20 | Fujifilm Corporation | Method, apparatus, and program for judging image recognition results, and computer readable medium having the program stored therein |
JP5831901B2 (en) | 2011-11-28 | 2015-12-09 | 国立大学法人大阪大学 | Stimulated Raman scattering microscope |
WO2017027475A1 (en) * | 2015-08-07 | 2017-02-16 | Jianming Liang | Methods, systems, and media for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy |
JP6528608B2 (en) | 2015-08-28 | 2019-06-12 | カシオ計算機株式会社 | Diagnostic device, learning processing method in diagnostic device, and program |
CN108292366B (en) * | 2015-09-10 | 2022-03-18 | 美基蒂克艾尔有限公司 | System and method for detecting suspicious tissue regions during endoscopic surgery |
JP6545591B2 (en) | 2015-09-28 | 2019-07-17 | 富士フイルム富山化学株式会社 | Diagnosis support apparatus, method and computer program |
JP6656357B2 (en) * | 2016-04-04 | 2020-03-04 | オリンパス株式会社 | Learning method, image recognition device and program |
CN106372390B (en) | 2016-08-25 | 2019-04-02 | 汤一平 | A kind of self-service healthy cloud service system of prevention lung cancer based on depth convolutional neural networks |
CN110049709B (en) * | 2016-12-07 | 2022-01-11 | 奥林巴斯株式会社 | Image processing apparatus |
US10806334B2 (en) * | 2017-02-28 | 2020-10-20 | Verily Life Sciences Llc | System and method for multiclass classification of images using a programmable light source |
- 2018
  - 2018-05-11 CN CN201880037797.9A patent/CN111278348A/en active Pending
  - 2018-05-11 WO PCT/JP2018/018316 patent/WO2018225448A1/en active Application Filing
  - 2018-05-11 TW TW107116169A patent/TW201902411A/en unknown
  - 2018-05-11 US US16/620,861 patent/US11270433B2/en active Active
  - 2018-05-11 JP JP2019523412A patent/JP6875709B2/en active Active
  - 2018-05-11 SG SG11201911791RA patent/SG11201911791RA/en unknown
- 2021
  - 2021-04-15 JP JP2021069162A patent/JP7216376B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20200279368A1 (en) | 2020-09-03 |
SG11201911791RA (en) | 2020-01-30 |
TW201902411A (en) | 2019-01-16 |
JP2021112593A (en) | 2021-08-05 |
WO2018225448A1 (en) | 2018-12-13 |
JP6875709B2 (en) | 2021-05-26 |
JPWO2018225448A1 (en) | 2020-07-30 |
US11270433B2 (en) | 2022-03-08 |
JP7216376B2 (en) | 2023-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7216376B2 (en) | Diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing this diagnosis support program using endoscopic images of digestive organs | |
JP7037220B2 (en) | A computer-readable recording medium that stores a disease diagnosis support system using endoscopic images of the digestive organs, a method of operating the diagnosis support system, a diagnosis support program, and this diagnosis support program. | |
US12048413B2 (en) | Diagnostic assistance method, diagnostic assistance system, diagnostic assistance program, and computer-readable recording medium storing therein diagnostic assistance program for disease based on endoscopic image of digestive organ | |
JP7335552B2 (en) | Diagnostic imaging support device, learned model, operating method of diagnostic imaging support device, and diagnostic imaging support program | |
Cai et al. | Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video) | |
Kumagai et al. | Diagnosis using deep-learning artificial intelligence based on the endocytoscopic observation of the esophagus | |
EP3811845A1 (en) | Method of assisting disease diagnosis based on endoscope image of digestive organ, diagnosis assistance system, diagnosis assistance program, and computer-readable recording medium having said diagnosis assistance program stored thereon | |
WO2021054477A2 (en) | Disease diagnostic support method using endoscopic image of digestive system, diagnostic support system, diagnostic support program, and computer-readable recording medium having said diagnostic support program stored therein | |
JP2022502150A (en) | Devices and methods for diagnosing gastric lesions using deep learning of gastroscopy images | |
WO2018165620A1 (en) | Systems and methods for clinical image classification | |
Namikawa et al. | Utilizing artificial intelligence in endoscopy: a clinician’s guide | |
KR20200038121A (en) | Endoscopic device and method for diagnosing gastric lesion based on gastric endoscopic image obtained in real time | |
JPWO2020121906A1 (en) | Medical support system, medical support device and medical support method | |
US20220160208A1 (en) | Methods and Systems for Cystoscopic Imaging Incorporating Machine Learning | |
JP7550409B2 (en) | Image diagnosis device, image diagnosis method, and image diagnosis program | |
Eroğlu et al. | Comparison of computed tomography-based artificial intelligence modeling and magnetic resonance imaging in diagnosis of cholesteatoma | |
Schreuder et al. | Algorithm combining virtual chromoendoscopy features for colorectal polyp classification | |
Chen et al. | Artificial Intelligence Model for a Distinction between Early-Stage Gastric Cancer Invasive Depth T1a and T1b | |
JP2023079866A (en) | Inspection method for stomach cancer by super-magnifying endoscope, diagnosis support method, diagnosis support system, diagnosis support program, learned model and image diagnosis support device | |
Fahmy et al. | Diagnosis of gastrointestinal cancer metastasis with deep learning technique | |
KR20220143184A (en) | Gallbladder cancer diagnosis method based on atificial intelligence |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Tokyo, Japan; Applicant after: AI medical services Co.,Ltd. Address before: Saitama, Japan; Applicant before: AI medical services Co.,Ltd. |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200612 |