WO2015186335A1 - Image processing apparatus, image processing method, and program - Google Patents
Image processing apparatus, image processing method, and program
- Publication number
- WO2015186335A1 (PCT/JP2015/002748)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- medical image
- region
- interest
- important object
- circuitry
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 52
- 238000003672 processing method Methods 0.000 title description 5
- 238000000034 method Methods 0.000 claims description 91
- 238000003384 imaging method Methods 0.000 claims description 52
- 238000002059 diagnostic imaging Methods 0.000 claims description 10
- 238000003708 edge detection Methods 0.000 claims description 5
- 230000008569 process Effects 0.000 description 70
- 238000001356 surgical procedure Methods 0.000 description 70
- 238000001514 detection method Methods 0.000 description 42
- 238000005516 engineering process Methods 0.000 description 17
- 238000010586 diagram Methods 0.000 description 13
- 230000003287 optical effect Effects 0.000 description 10
- 230000000694 effects Effects 0.000 description 5
- 238000002674 endoscopic surgery Methods 0.000 description 5
- 238000004891 communication Methods 0.000 description 4
- 238000003780 insertion Methods 0.000 description 4
- 230000037431 insertion Effects 0.000 description 4
- 230000000740 bleeding effect Effects 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 230000007246 mechanism Effects 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 239000000470 constituent Substances 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 210000000683 abdominal cavity Anatomy 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 210000004204 blood vessel Anatomy 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 229910052736 halogen Inorganic materials 0.000 description 1
- 150000002367 halogens Chemical class 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 229910044991 metal oxide Inorganic materials 0.000 description 1
- 150000004706 metal oxides Chemical class 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 229910052709 silver Inorganic materials 0.000 description 1
- 239000004332 silver Substances 0.000 description 1
- 229910052724 xenon Inorganic materials 0.000 description 1
- FHNFHKCVQCLJFQ-UHFFFAOYSA-N xenon atom Chemical compound [Xe] FHNFHKCVQCLJFQ-UHFFFAOYSA-N 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000095—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope for image enhancement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00043—Operational features of endoscopes provided with output arrangements
- A61B1/00045—Display arrangement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00193—Optical arrangements adapted for stereoscopic vision
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00194—Optical arrangements adapted for three-dimensional imaging
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
- G02B23/24—Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
- G02B23/2407—Optical details
- G02B23/2415—Stereoscopic endoscopes
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
- G02B23/24—Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
- G02B23/2476—Non-optical details, e.g. housings, mountings, supports
- G02B23/2484—Arrangements in relation to a camera or imaging device
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/18—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
- G05B19/402—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by control arrangements for positioning, e.g. centring a tool relative to a hole in the workpiece, additional detection means to correct position
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/45—Nc applications
- G05B2219/45123—Electrogoniometer, neuronavigator, medical robot used by surgeon to operate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/555—Constructional details for picking-up images in sites, inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes
Definitions
- The present technology relates to an image processing apparatus, an image processing method, and a program, and particularly to an image processing apparatus, an image processing method, and a program capable of displaying a surgery region desired by a practitioner without requiring effort from the practitioner.
- Endoscopic surgery has come into use in which an endoscope is inserted into the body, a region (surgery region) that is the surgical target in the body is captured as an observed portion and displayed on a screen by using the endoscope, and treatment is performed on the surgery region while the screen is viewed.
- In the endoscope, illumination light is applied to the observed portion from a light source device, desired signal processing is performed on the image signal of the optical image of the observed portion obtained in the endoscope, and an image of the observed portion is displayed on a screen.
- The practitioner typically holds surgical instruments with both hands while performing surgery. Accordingly, it is difficult for the practitioner to carry out such screen adjustment quickly by himself.
- In addition, it is not preferable for the practitioner to operate an adjustment mechanism or the like himself for screen adjustment, in view of maintaining the cleanliness of the surgery region, the medical equipment, the operating room, and the like.
- Therefore, in general, the practitioner gives an instruction to an assistant called a scopist or the like, and the assistant operates the adjustment mechanism in accordance with the instruction from the practitioner to perform the screen adjustment.
- PTL 1 discloses that focus control is performed on an area in which brightness or contrast does not change for a predetermined period.
- However, an area in which brightness or contrast does not change for a predetermined period is not necessarily the area that the practitioner wishes to bring into focus, and thus an incorrect focus may be obtained.
- In view of such circumstances, the present technology estimates the region of interest desired by the practitioner without requiring effort from the practitioner.
- a medical image processing apparatus including a controller including circuitry configured to determine a position of a distal end of an important object within a medical image, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest.
- a method for processing a medical image by a medical image processing apparatus including a controller including circuitry.
- the method includes the steps of determining, using the circuitry, a position of a distal end of an important object within the medical image, estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and controlling, using the circuitry, display of the region of interest.
- a medical image processing system including a medical imaging device that obtains a medical image, a display device having a display area, and a controller including circuitry configured to determine a position of a distal end of an important object within the medical image obtained by the medical imaging device, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest in the display area of the display device.
- a medical image processing apparatus including a controller including circuitry configured to determine a position of an important area within a medical image, estimate a region of interest within the medical image adjacent to a region including the important area based on the position of the important area, and control display of the region of interest.
- Fig. 1 is a block diagram illustrating a configuration example of an embodiment of an endoscope system according to the present technology.
- Fig. 2 is a diagram illustrating a usage state of the endoscope system.
- Fig. 3 is a diagram illustrating a calculation method of a depth position.
- Fig. 4 is a diagram illustrating the calculation method of the depth position.
- Fig. 5A is a diagram illustrating detection of a position of forceps.
- Fig. 5B is a diagram illustrating detection of the position of forceps.
- Fig. 5C is a diagram illustrating detection of the position of forceps.
- Fig. 5D is a diagram illustrating detection of the position of forceps.
- Fig. 6 is a diagram illustrating detection of a position of a remarkable point.
- Fig. 7 is a diagram illustrating an example of a superposition image to be displayed on a display.
- Fig. 8 is a flowchart illustrating focus control.
- Fig. 9 is a flowchart illustrating a forceps position detecting process in detail.
- Fig. 10 is a flowchart illustrating a remarkable point estimating process in detail.
- Fig. 11 is a flowchart illustrating focus control.
- Fig. 12 is a block diagram illustrating a configuration example of an embodiment of a computer according to the present technology.
- FIG. 1 is a block diagram illustrating a configuration example of an embodiment of an endoscope system according to the present technology.
- the endoscope system in Fig. 1 is configured by an endoscope camera head 11, a camera control unit (CCU) 12, an operation section 13, and a display 14.
- This endoscope system is used in endoscopic surgery in which a region (surgery region) in a body being a surgical target is captured as an observed portion and is displayed on the display 14, and the observed portion is treated while viewing the display 14.
- an insertion portion 25 of the endoscope camera head 11 and two pairs of forceps 81 (81A and 81B) being surgical instruments are inserted into the body of a patient.
- the endoscope camera head 11 emits light from a tip end of the insertion portion 25, illuminates a surgery region 82 of the patient, and images a state of the two pairs of forceps 81 and the surgery region 82.
- In the following, an endoscope will be described as an example, but the present technology may also be applied to apparatuses other than medical apparatuses such as an endoscope.
- For example, the present technology may be applied to an apparatus that executes some type of process on a remarkable region corresponding to the surgery region by using an instructing tool, a predetermined device, or the like corresponding to the surgical instrument.
- the endoscope camera head 11 includes an imaging section 21, a light source 22, and a focus lens 23, as illustrated in Fig. 1.
- the imaging section 21 includes at least two imaging sensors 24 of a first imaging sensor 24a and a second imaging sensor 24b.
- the imaging sensor 24 is configured by, for example, a charge coupled device (CCD) sensor, a complementary metal oxide semiconductor (CMOS) sensor, or the like.
- the imaging sensor 24 images a subject and generates an image obtained as a result.
- The imaging sensor 24 may be a high-resolution sensor having approximately 4000 x 2000 pixels ((horizontal direction) x (vertical direction)), what is called a 4K camera.
- The two imaging sensors 24 are disposed at a predetermined distance from each other in the transverse direction, generate images having viewpoint directions different from each other, and output the images to the CCU 12.
- images obtained by the two imaging sensors 24 performing imaging are referred to as surgery region images.
- the first imaging sensor 24a is set to be disposed on a right side and the second imaging sensor 24b is set to be disposed on a left side, and the surgery region image generated by the first imaging sensor 24a is referred to as an R image and the surgery region image generated by the second imaging sensor 24b is referred to as an L image.
- the light source 22 is configured by, for example, a halogen lamp, a xenon lamp, a light emitting diode (LED) light source, and the like and the light source 22 emits light for illuminating the surgery region.
- the focus lens 23 is configured by one or a plurality of lenses, and is driven by a focus control section 46 (will be described later) of the CCU 12 and forms an image on an imaging surface of the imaging sensor 24 by using incident light (image light) from the subject.
- the CCU 12 is an image processing apparatus for processing the surgery region image obtained by the imaging section 21 of the endoscope camera head 11 performing imaging.
- the CCU 12 is configured by a depth information generation section 41, a forceps position detection section 42, a remarkable point estimation section 43, an image superposition section 44, an operation control section 45, and a focus control section 46.
- An R image and an L image which are generated and output in the imaging section 21 are supplied to the depth information generation section 41 and the image superposition section 44 of the CCU 12.
- One (for example, L image) of the R image and the L image is also supplied to the focus control section 46.
- the depth information generation section 41 generates depth information of the surgery region image from the supplied R image and L image. More specifically, the depth information generation section 41 calculates a position of each pixel of the surgery region image in a depth direction by using the supplied R image and L image and a principle of triangulation.
- The first imaging sensor 24a and the second imaging sensor 24b are arranged side by side at a distance T in the transverse direction, as illustrated in Fig. 3, and each of the first imaging sensor 24a and the second imaging sensor 24b images an object P in the real world.
- the positions of the first imaging sensor 24a and the second imaging sensor 24b in the vertical direction are the same as each other and the positions in the horizontal direction are different from each other.
- Accordingly, the position of the object P in the R image obtained by the first imaging sensor 24a and the position of the object P in the L image obtained by the second imaging sensor 24b differ only in their x coordinates.
- The x coordinate of the object P shown in the R image obtained by the first imaging sensor 24a is denoted x_r, and the x coordinate of the object P shown in the L image obtained by the second imaging sensor 24b is denoted x_l.
- The x coordinate x_r of the object P in the R image corresponds to a position on the straight line joining the optical center O_r of the first imaging sensor 24a and the object P.
- The x coordinate x_l of the object P in the L image corresponds to a position on the straight line joining the optical center O_l of the second imaging sensor 24b and the object P.
- Letting d denote the disparity (x_l - x_r) and f denote the focal length of the imaging sensors, a relationship of Equation (1) is established between T, Z, d, and f by similar triangles: T / Z = d / f ... (1)
- The distance Z to the object P may be obtained by using the following Equation (2), which is obtained by rearranging Equation (1): Z = f * T / d ... (2)
- the depth information generation section 41 in Fig. 1 calculates a depth Z of each pixel in the surgery region image by using the above-described principle of the triangulation.
- the depth Z of each pixel calculated by the depth information generation section 41 is supplied to the forceps position detection section 42 and the remarkable point estimation section 43, as depth information.
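- The per-pixel depth computation described above can be illustrated with a short sketch. The following Python snippet is a minimal illustration of Equation (2), not the implementation of the depth information generation section 41; the focal length, baseline, and disparity values are assumed calibration figures chosen only for the example.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Convert a disparity map d = x_l - x_r (in pixels) to depth Z = f * T / d.

    disparity_px : 2-D array of per-pixel disparities (0 where unknown)
    focal_px     : focal length f expressed in pixels (assumed known from calibration)
    baseline_mm  : distance T between the first and second imaging sensors
    """
    depth = np.zeros_like(disparity_px, dtype=np.float64)
    valid = disparity_px > 0                     # avoid division by zero
    depth[valid] = focal_px * baseline_mm / disparity_px[valid]
    return depth

# Example with assumed calibration values (not taken from the embodiment).
if __name__ == "__main__":
    f_px, T_mm = 1400.0, 5.0
    d = np.full((1080, 1920), 20.0)              # a flat 20-pixel disparity
    print(depth_from_disparity(d, f_px, T_mm)[0, 0])   # -> 350.0 mm
```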
- the forceps position detection section 42 detects a position of an important object such as the forceps 81 shown in the surgery region image using the depth information of the surgery region image supplied from the depth information generation section 41.
- the two pairs of forceps 81 may be imaged as subjects in the surgery region image. However, a position of either of the forceps 81 may be detected. The position of the distal end of the forceps 81 may also be detected.
- the forceps 81 of which the position is to be detected may be determined in advance or the forceps of which the position is detected more easily than another in the surgery region image may be determined. In addition, the positions of the two pairs of forceps 81 may also be detected.
- the forceps position detection section 42 generates a parallax image from depth information of the surgery region image supplied from the depth information generation section 41.
- the parallax image refers to an image obtained by representing the depth Z of each pixel being the depth information, in gray scale.
- Fig. 5A illustrates an example of the parallax image; the greater the brightness value in the parallax image, the smaller the corresponding depth Z, that is, the closer to the front the subject in the surgery region image is.
- The forceps position detection section 42 detects edges, which are boundaries between brightness values, from the generated parallax image. For example, pixels at which the difference between the pixel values of adjacent pixels is equal to or greater than a predetermined value in the parallax image are detected as an edge.
- The forceps position detection section 42 may also detect the forceps from one or more of color difference information, brightness, and depth, independently of or in concert with edge detection techniques.
- In the example above, edges are detected based on the brightness value.
- In general, the surgical field has red as its main color component, whereas the forceps have a color such as silver, white, or black that differs from red. Since the surgical field and the forceps have different colors in this way, edge detection based on color component information may also be performed. That is, a configuration may be made in which the three-dimensional position of a surgical instrument such as the forceps is detected based on information on a specific color in the parallax image.
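- As a hedged illustration of the color-based detection mentioned above, the following sketch separates low-saturation (silver, white, or black) instrument pixels from the predominantly red surgical field. The use of an HSV saturation threshold and the threshold value itself are assumptions for the example, not details taken from the present embodiment.

```python
import cv2
import numpy as np

def instrument_color_mask(bgr_image, max_saturation=60):
    """Binary mask of pixels whose color differs from the reddish surgical
    field, i.e. low-saturation silver/white/black instrument pixels."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]
    mask = (saturation < max_saturation).astype(np.uint8) * 255
    # Remove small speckles so that mainly elongated instrument regions remain.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```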
- Fig. 5B illustrates an example of edges detected in the parallax image of Fig. 5A.
- The forceps position detection section 42 removes curved edges from the detected edges and detects only a linear edge having a predetermined length or greater as the edge of the forceps 81.
- the forceps position detection section 42 may determine whether or not the detected linear edge is a straight line continuing from an outer circumference portion of the surgery region image in addition to determining whether or not the detected linear edge has the predetermined length or greater, when the edge of the forceps 81 is specified.
- This is because the forceps 81 is generally captured so as to extend from the outer circumference portion of the surgery region image toward its center portion. For this reason, it is possible to further raise the detection accuracy of the forceps 81 by determining whether or not the detected linear edge is a straight line continuing from the outer circumference portion of the surgery region image.
- the forceps position detection section 42 estimates a position of the forceps 81 in the three-dimensional space in the captured image, that is, a posture of the forceps 81, from the detected linear edge.
- the forceps position detection section 42 calculates a line segment (straight line) 101 corresponding to the forceps 81, from the detected linear edge, as illustrated in Fig. 5D.
- the line segment 101 may be obtained by using an intermediate line between the detected two linear edges, and the like.
- The forceps position detection section 42 then selects two arbitrary points (x_1, y_1) and (x_2, y_2) on the calculated line segment 101 and acquires the depth positions z_1 and z_2 at the positions (x_1, y_1) and (x_2, y_2) of the two points from the supplied depth information. Accordingly, the positions (x_1, y_1, z_1) and (x_2, y_2, z_2) of the forceps 81 in the three-dimensional space are specified in the surgery region image.
- the positions may include, for example, the distal end of the forceps.
- When two line segments are detected, either of them may be selected, for example, by selecting the one closer to the front.
- The forceps position detection section 42 supplies the positions (x_1, y_1, z_1) and (x_2, y_2, z_2) of the forceps 81 in the three-dimensional space, detected in the above-described manner, to the remarkable point estimation section 43.
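- The edge detection and line extraction described above can be sketched as follows. This is a simplified stand-in for the forceps position detection section 42: it thresholds brightness differences in the parallax image, keeps long straight segments, and reads the depth of two points on the chosen segment. The thresholds and the use of a probabilistic Hough transform are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_forceps_points(parallax_u8, depth_map, min_len=150, grad_thresh=32):
    """Return two 3-D points ((x1, y1, z1), (x2, y2, z2)) along the most
    prominent straight edge of an 8-bit parallax (inverse-depth) image."""
    # Edge: brightness differences between neighbouring pixels above a threshold
    # (approximated here with Sobel gradients).
    gx = cv2.Sobel(parallax_u8, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(parallax_u8, cv2.CV_32F, 0, 1, ksize=3)
    edges = (np.hypot(gx, gy) > grad_thresh).astype(np.uint8) * 255

    # Keep only long, linear edges; curved edges break into short segments.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    if lines is None:
        return None
    # Choose the longest segment as the edge corresponding to the forceps.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    z1, z2 = depth_map[y1, x1], depth_map[y2, x2]
    return (int(x1), int(y1), float(z1)), (int(x2), int(y2), float(z2))
```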
- The depth information of the surgery region image is supplied from the depth information generation section 41 to the remarkable point estimation section 43, and the coordinates (x_1, y_1, z_1) and (x_2, y_2, z_2) of the two points in the three-dimensional space which represent the posture of the forceps 81 are supplied from the forceps position detection section 42 to the remarkable point estimation section 43.
- element 42 can also be an area position detection section.
- the area position detection section 42 detects an important area having, for example, certain tissues, body parts, bleeding or blood vessels, etc. The detection of the important area is based on color information, brightness and/or differences between different frames. For example, an important area could be detected as an area having no bleeding in one frame and then bleeding in subsequent frames.
- the remarkable point estimation section 43 assumes that a remarkable point Q at the surgery region 82 is at a position obtained by extending the detected positions of the forceps 81 and estimates a position of the remarkable point Q of the surgery region 82 in the three-dimensional space, as illustrated in Fig. 6.
- the remarkable point Q at the surgery region 82 corresponds to an intersection point of an extension line obtained by extending the detected posture of the forceps 81 and a surface of the surgery region 82.
- An estimated location coordinate of the remarkable point Q at the surgery region 82 in the three-dimensional space is supplied to the image superposition section 44.
- the remarkable point estimation section 43 can also estimate the remarkable point Q from the determined important area.
- the remarkable point Q can be generated based on the position of the important area.
- The image superposition section 44 generates a superposition image by superposing a predetermined mark (for example, an x mark), or a circle or quadrangle of a predetermined size representing the region of interest and centered on the remarkable point Q supplied from the remarkable point estimation section 43, onto the surgery region image supplied from the imaging section 21, at the position of the remarkable point Q.
- The image superposition section 44 displays the generated superposition image on the display 14.
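- A minimal sketch of the superposition step, assuming OpenCV drawing primitives, is shown below: it marks the remarkable point Q, draws a quadrangular region of interest QA centered on Q, and draws the guide line 111 from the forceps tip toward Q. The sizes and colors are illustrative assumptions.

```python
import cv2

def draw_superposition(image_bgr, q_xy, tip_xy, roi_half=60):
    """Overlay the remarkable point Q, a quadrangular region of interest QA
    centered on Q, and the guide line 111 from the forceps tip toward Q."""
    out = image_bgr.copy()
    qx, qy = int(q_xy[0]), int(q_xy[1])
    # Region of interest QA drawn as a quadrangle centered on Q.
    cv2.rectangle(out, (qx - roi_half, qy - roi_half),
                  (qx + roi_half, qy + roi_half), (0, 255, 0), 2)
    # The remarkable point Q itself, drawn as an (x) mark.
    cv2.drawMarker(out, (qx, qy), (0, 255, 255),
                   markerType=cv2.MARKER_TILTED_CROSS, markerSize=20, thickness=2)
    # Guide line corresponding to the extension line of the forceps.
    cv2.line(out, (int(tip_xy[0]), int(tip_xy[1])), (qx, qy), (255, 0, 0), 2)
    return out
```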
- A configuration may also be made in which display mode information, which is control information for designating ON or OFF of the 3D display, is supplied from the operation control section 45 to the image superposition section 44.
- When OFF of the 3D display is designated through the display mode information, the image superposition section 44 supplies either one of the R image and the L image to the display 14 and causes the surgery region image to be displayed in a 2D manner.
- When ON of the 3D display is designated, the image superposition section 44 supplies both the R image and the L image to the display 14 and causes the surgery region image to be displayed in a 3D manner.
- the 3D display refers to an image display manner in which the R image and the L image are alternately displayed on the display 14, the right eye of a practitioner visually recognizes the R image, the left eye of the practitioner visually recognizes the L image, and thus the practitioner perceives the surgery region image three-dimensionally.
- the operation control section 45 supplies various control signals to necessary sections based on an operation signal supplied from the operation section 13. For example, the operation control section 45 supplies an instruction of focus matching to the focus control section 46 in accordance with an instruction of matching a focus with an area including the remarkable point Q generated in the operation section 13.
- the focus control section 46 performs focus control by using a contrast method, based on the L image supplied from the imaging section 21. Specifically, the focus control section 46 drives the focus lens 23 of the endoscope camera head 11 and compares contrast of the L image supplied from the imaging section 21 to detect a focus position.
- the focus control section 46 may perform the focus control in which the location coordinate of the remarkable point Q is acquired from the remarkable point estimation section 43 and an area of a predetermined range having the remarkable point Q as the center is set to be a focus control target area.
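- The contrast-method focus control can be illustrated with a simple search over candidate lens positions, a coarse stand-in for hill climbing. The lens-drive and capture interfaces (set_lens, grab_l_image) are hypothetical, and the sharpness measure (variance of the Laplacian inside the area centered on the remarkable point Q) is a common choice rather than necessarily the one used by the focus control section 46.

```python
import cv2
import numpy as np

def roi_sharpness(gray, q_xy, half=60):
    """Contrast measure: variance of the Laplacian inside the focus target
    area of a predetermined range centered on the remarkable point Q."""
    x, y = q_xy
    roi = gray[max(0, y - half):y + half, max(0, x - half):x + half]
    return cv2.Laplacian(roi, cv2.CV_64F).var()

def focus_on_point(set_lens, grab_l_image, q_xy, positions):
    """Pick the lens position that maximizes contrast around Q.

    set_lens(p)    -- hypothetical driver that moves the focus lens 23
    grab_l_image() -- hypothetical capture of the current L image (grayscale)
    positions      -- iterable of candidate lens positions to evaluate
    """
    best_pos, best_score = None, -np.inf
    for p in positions:
        set_lens(p)
        score = roi_sharpness(grab_l_image(), q_xy)
        if score > best_score:
            best_pos, best_score = p, score
    set_lens(best_pos)
    return best_pos
```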
- the operation section 13 includes at least a foot pedal 61.
- the operation section 13 receives an operation from a practitioner (operator) and supplies an operation signal corresponding to an operation performed by the practitioner to the operation control section 45.
- the practitioner may perform, for example, matching a focus with a position of the mark indicating the remarkable point Q displayed on the display 14, switching the 2D display and the 3D display of the surgery region image displayed on the display 14, setting zoom magnification of the endoscope, and the like by operating the operation section 13.
- the display 14 is configured by, for example, a liquid crystal display (LCD) and the like and displays a surgery region image captured by the imaging section 21 of the endoscope camera head 11 based on an image signal supplied from the image superposition section 44.
- Depending on whether the superposition mode is set to ON or OFF, either a superposition image, obtained by superposing a mark which has a predetermined shape and indicates the position of the remarkable point Q estimated by the remarkable point estimation section 43 on the surgery region image, or the surgery region image captured by the imaging section 21 as it is, is displayed on the display 14.
- Fig. 7 illustrates an example of the superposition image displayed on the display 14.
- an area of interest (region of interest) QA which is determined to be an area including the remarkable point Q and has a predetermined size is indicated on a surgery region image 110 supplied from the imaging section 21 by a quadrangular mark.
- the two pairs of forceps 81A and 81B are imaged and the remarkable point Q is estimated based on a position of the forceps 81A on the left side.
- In the superposition image 100, the estimated remarkable point Q, the mark (the quadrangle indicating the area of interest in Fig. 7) for causing the practitioner to recognize the remarkable point Q, and a guide line 111 corresponding to the extension line calculated through estimation of the remarkable point Q are displayed.
- the area of interest (region of interest) QA may be overlapping on or adjacent to and/or distinct from the region including the forceps 81 or the important area.
- The display of the guide line 111 allows the three-dimensional distance in the abdominal cavity to be recognized intuitively and may provide three-dimensional distance information to the practitioner even in a plan view.
- In Fig. 7, the quadrangle is illustrated as the mark or highlight for causing the practitioner to recognize the remarkable point Q, but the mark is not limited to the quadrangle.
- Other shapes such as a triangle may be applied as the mark.
- Alternatively, a mark having a shape such as an (x) mark may be displayed.
- A configuration may be made in which the focusing process illustrated in Fig. 8 is started, for example, when the practitioner operates a start button (the foot pedal 61 may be used), or when an optical zoom or the like is determined to be out of focus.
- the present technology may be also applied to a microscope system and have a configuration in which the process based on the flow of the focusing process illustrated in Fig. 8 is started using movement of an arm as a trigger in the microscope system.
- Fig. 8 illustrates the flowchart of the focusing process. The focusing process of Fig. 8 is executed in a state where power is supplied to each mechanism of the endoscope system.
- the insertion portion 25 of the endoscope camera head 11 and the forceps 81 are inserted into the body of a patient and the light source 22 illuminates the surgery region 82 of the patient.
- In Step S1, the depth information generation section 41 generates depth information of a surgery region image from an R image and an L image supplied from the imaging section 21 of the endoscope camera head 11. More specifically, the depth information generation section 41 calculates the depth Z of each location (pixel) in the surgery region image by using Equation (2), which uses the principle of triangulation described with reference to Fig. 4. The depth information calculated in the depth information generation section 41 is supplied to the remarkable point estimation section 43 and the forceps position detection section 42.
- The process of Step S1 is a process for determining the three-dimensional positions of the surface of the subject imaged by the imaging section 21.
- In Step S2, the forceps position detection section 42 executes a forceps position detecting process of detecting the position of the forceps 81 in the surgery region image using the depth information of the surgery region image supplied from the depth information generation section 41.
- The process of Step S2 is a process for detecting the three-dimensional position of a bar-shaped instrument such as the forceps held by the practitioner. Step S2 can alternatively correspond to the area position detection section 42 executing an area position detecting process.
- Fig. 9 illustrates a detailed flowchart of the forceps position detecting process executed in Step S2.
- In Step S21, the forceps position detection section 42 generates a parallax image from the depth information of the surgery region image supplied from the depth information generation section 41.
- In Step S22, the forceps position detection section 42 detects edges, which are boundaries between brightness values, from the generated parallax image.
- In Step S23, the forceps position detection section 42 removes curved edges from the detected edges and detects only a linear edge having a predetermined length or greater.
- In Step S24, the forceps position detection section 42 estimates the position of the forceps 81 in the surgery region image in the three-dimensional space from the detected linear edge.
- With this, the coordinates (x_1, y_1, z_1) and (x_2, y_2, z_2) of the two points indicating the positions of the forceps 81 in the surgery region image in the three-dimensional space are determined.
- The positions may include, for example, the distal end of the forceps.
- In Step S3, the remarkable point estimation section 43 executes a remarkable point estimating process of assuming that the remarkable point Q at the surgery region 82 is at a position obtained by extending the detected positions of the forceps 81, and of detecting the position of the remarkable point Q of the surgery region 82 in the three-dimensional space.
- In Step S41, the remarkable point estimation section 43 obtains the positions (x_1, y_1, z_1) and (x_2, y_2, z_2) of the forceps 81 in the surgery region image in the three-dimensional space which are supplied from the forceps position detection section 42.
- In Step S42, the remarkable point estimation section 43 calculates the slope a_1 of a line segment A joining the coordinates (x_1, y_1) and (x_2, y_2) of the two points of the forceps in the XY plane.
- In Step S43, the remarkable point estimation section 43 calculates the slope a_2 of a line segment B joining the coordinates (x_1, z_1) and (x_2, z_2) of the two points of the forceps in the XZ plane.
- In Step S44, the remarkable point estimation section 43 determines a coordinate X_3, which is the X coordinate value obtained when the line segment A in the XY plane is extended by a predetermined length W in the center direction of the screen.
- The predetermined length W can be defined, for example, as 1/N (N is a positive integer) of the length of the line segment A.
- "*" represents multiplication.
- the calculated depth Z 3 of the extension point (X 3 , Y 3 ) of the line segment A in the XZ plane corresponds to a logical value of the extension point (X 3 , Y 3 ).
- In Step S47, the remarkable point estimation section 43 acquires the depth Z_4 of the extension point (X_3, Y_3) of the line segment A from the depth information supplied from the depth information generation section 41.
- The acquired depth Z_4 of the extension point (X_3, Y_3) corresponds to a real value of the extension point (X_3, Y_3).
- In Step S48, the remarkable point estimation section 43 determines whether or not the depth Z_3, being the logical value of the extension point (X_3, Y_3), is greater than the depth Z_4, being the real value of the extension point (X_3, Y_3).
- A case where the extension point (X_3, Y_3), obtained by extending the line segment A corresponding to the forceps 81 by the predetermined length W in the center direction of the screen, is not included in the surgery region 82 means a case where the surgery region 82 is at a position deeper than the extension point (X_3, Y_3).
- In this case, the depth Z_4, being the real value of the extension point (X_3, Y_3) obtained from the depth information, is greater than the depth Z_3, being the logical value.
- On the other hand, when the surgery region 82 actually includes the extension point (X_3, Y_3), the real value (depth Z_4) of the extension point (X_3, Y_3) obtained from the depth information becomes ahead of (closer to the front than) the logical value (depth Z_3) of the extension point (X_3, Y_3).
- In this case, the depth Z_4, being the real value of the extension point (X_3, Y_3), is less than the depth Z_3, being the logical value.
- In other words, in Step S48, the remarkable point estimation section 43 determines whether or not the logical value (depth Z_3) of the extension point (X_3, Y_3), obtained by extending the line segment A by the predetermined length W, has come to be ahead of the real value (depth Z_4) of the extension point (X_3, Y_3) obtained from the depth information.
- In Step S48, when the depth Z_3 being the logical value of the extension point (X_3, Y_3) is equal to or less than the depth Z_4 being the real value of the extension point (X_3, Y_3), that is, when the logical value is determined to be ahead of the real value, the process returns to Step S44.
- In Step S44, to which the process has returned, a coordinate X_3 obtained by extending the line segment A in the XY plane by the predetermined length W in the center direction of the screen, beyond the current extension point (X_3, Y_3), is determined as a new coordinate X_3.
- The above-described Steps S45 to S48 are then executed again on the newly determined coordinate X_3.
- In Step S48, when the depth Z_3 being the logical value of the extension point (X_3, Y_3) is greater than the depth Z_4 being the real value, that is, when the real value is determined to be ahead of the logical value, the process proceeds to Step S49.
- In Step S49, the remarkable point estimation section 43 determines the extension point (X_3, Y_3, Z_4), which has the depth Z_4 being the real value as its Z coordinate value, to be the remarkable point Q.
- Step S3 of Fig. 8 is completed and the process proceeds to Step S4.
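- The iterative extension of Steps S44 to S49 can be sketched as follows. Since the equations of Steps S45 and S46 are not reproduced above, the linear extension used here (propagating the slopes a_1 and a_2 from the two forceps points) is an assumption consistent with the description; the step length and iteration limit are illustrative.

```python
import numpy as np

def estimate_remarkable_point(p1, p2, depth_map, step_frac=0.25, max_steps=200):
    """Walk along the extension of the forceps line (p1 -> p2) in steps of the
    predetermined length W until the logical depth Z3 of the extension point
    exceeds the measured depth Z4 at (X3, Y3); return (X3, Y3, Z4)."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2                      # p2 is assumed to be the point nearer the screen center
    # (a vertical forceps image, x2 == x1, is not handled in this sketch)
    a1 = (y2 - y1) / (x2 - x1)           # slope of line segment A in the XY plane (Step S42)
    a2 = (z2 - z1) / (x2 - x1)           # slope of line segment B in the XZ plane (Step S43)
    w = step_frac * (x2 - x1)            # predetermined length W as a signed x step
    h, wid = depth_map.shape

    x3 = float(x2)
    for _ in range(max_steps):
        x3 += w                                      # Step S44: extend the line segment further
        y3 = y1 + a1 * (x3 - x1)                     # assumed form of Step S45
        z3 = z1 + a2 * (x3 - x1)                     # assumed form of Step S46 (logical value Z3)
        xi, yi = int(round(x3)), int(round(y3))
        if not (0 <= xi < wid and 0 <= yi < h):
            return None                              # left the image without reaching the surface
        z4 = float(depth_map[yi, xi])                # Step S47: real value Z4 from the depth information
        if z3 > z4:                                  # Step S48: the surface has been reached
            return (x3, y3, z4)                      # Step S49: remarkable point Q
    return None
```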
- In Step S4, the estimated (detected) remarkable point Q is displayed on the screen. For example, a predetermined mark is displayed at the position of the remarkable point Q, as described with reference to Fig. 7.
- In Step S5, it is determined whether or not an instruction to set the focal point has been generated.
- The practitioner operates the foot pedal 61 of the operation section 13 when the practitioner wishes the remarkable point Q displayed on the display 14 to be brought into focus.
- Operating the foot pedal 61 generates the instruction to set the focal point.
- A configuration may also be made in which the instruction to set the focal point is generated using other methods, such as a method in which a switch (not illustrated) attached to the forceps is operated or a method in which a word preset through voice recognition is spoken.
- In Step S5, when it is determined that the instruction to set the focal point has not been generated, the process returns to Step S1 and the subsequent processes are repeated.
- In Step S5, when it is determined that the instruction to set the focal point has been generated, the process proceeds to Step S6 and the focus control section 46 performs the focus control.
- The focus control section 46 performs the focus control in such a manner that it obtains information on the coordinate position and the like of the remarkable point Q estimated by the remarkable point estimation section 43 and matches the focus with the position of the remarkable point Q by using the obtained information, for example, by the above-described contrast method.
- As illustrated in Fig. 8, when it is determined in Step S5 that the instruction to set the focal point has not been generated, the process returns to Step S1, the depth information is generated again, and the subsequent processes are repeated, because the three-dimensional positions of the surface of the imaged subject may change due to movement of the endoscope camera head 11.
- The present technology may be similarly applied to a microscope system, in addition to the above-described endoscope system.
- In that case, a microscope camera head may be provided instead of the endoscope camera head 11 in Fig. 1, and the CCU 12 may execute the superposition displaying process on the surgery region image imaged by the microscope camera head.
- In a case where the microscope camera head and the subject are fixed and it is therefore unlikely that the three-dimensional positions of the surface of the subject imaged by the imaging section 21 change, a configuration may be made in which, when the instruction to set the focal point is not generated in Step S5 illustrated in Fig. 8, the process returns to Step S2 and the subsequent processes are repeated.
- In this configuration, the three-dimensional positions (depth information) of the surface of the imaged subject are obtained once at first, and the subsequent processes are executed by repeatedly using the obtained depth information.
- Alternatively, in the configuration in which the process returns to Step S2 and the subsequent processes are repeated when it is determined in Step S5 that the instruction to set the focal point is not generated, the depth information may also be generated in Step S1 at every predetermined period and the generated depth information may be updated.
- Marks may be provided at two predetermined locations on the forceps 81, and the two marks provided on the forceps 81 may be detected as the positions (x_1, y_1, z_1) and (x_2, y_2, z_2) of the forceps 81 in the surgery region image.
- Characteristics such as the shape and color of the forceps 81, and the shape and color of the marks provided on the forceps 81, for the various forceps 81 used in performing surgery, may be stored in advance as a database in a memory of the forceps position detection section 42, and the marks may be detected based on the information on the designated forceps 81.
- In the present technology, the direction pointed to by the forceps is regarded as the direction on which the practitioner is focusing, and the focus is controlled to match the portion of the subject image positioned in that direction.
- Thus, the practitioner can bring a desired portion into focus without releasing the instrument such as the forceps from his hands. Accordingly, it is possible to provide an endoscope system or a microscope system having good convenience and improved operability.
- In Step S101, the forceps position detection section 42 executes the forceps position detecting process of detecting the position of the forceps 81 in the surgery region image.
- The process in Step S101 corresponds to a process for detecting the three-dimensional position of a bar-shaped instrument such as the forceps held by the practitioner.
- the forceps position detecting process in Step S101 may be executed similarly to the process of Step S2 illustrated in Fig. 8.
- In Step S102, the depth information generation section 41 generates depth information of the surgery region image from an L image and an R image supplied from the imaging section 21 of the endoscope camera head 11.
- The process of Step S102 corresponds to a process for determining the three-dimensional positions of the surface of the subject imaged by the imaging section 21.
- The depth information generating process in Step S102 may be executed similarly to the process of Step S1 illustrated in Fig. 8.
- The process based on the flowchart illustrated in Fig. 11 is obtained by reversing the order of the process based on the flowchart illustrated in Fig. 8, in which the depth information is generated and the position of the forceps is then detected; here, the position of the forceps is detected first and the depth information is then generated.
- In this case, the portion which is on an extension of the tip end (distal end) portion of the forceps and overlaps the subject image may be estimated.
- The process for generating the depth information is then executed on the estimated portion of the subject image. That is, in the process of the flowchart illustrated in Fig. 11, an area for generating depth information is extracted and the depth information is generated only in the extracted area.
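- As a sketch of restricting depth generation to the estimated portion, the example below crops matching windows from the L and R images around the estimated extension of the forceps tip and runs block matching only there. The StereoBM parameters, window size, and calibration values are illustrative assumptions rather than values from the present embodiment.

```python
import cv2
import numpy as np

def depth_in_window(l_gray, r_gray, center_xy, half=96,
                    focal_px=1400.0, baseline_mm=5.0):
    """Compute depth only inside a window around the estimated area of interest."""
    x, y = center_xy
    h, w = l_gray.shape
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(l_gray[y0:y1, x0:x1],
                           r_gray[y0:y1, x0:x1]).astype(np.float32) / 16.0

    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline_mm / disp[valid]   # Z = f * T / d
    return depth, (x0, y0)                                # depth window and its top-left corner
```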
- Step S103 to Step S106 are executed similarly to the processes of Step S3 to Step S6 in the flowchart of Fig. 8. Thus, descriptions thereof are omitted.
- In this manner, the focus can be matched with the image at the portion pointed to by the forceps, and thus it is possible to bring a desired portion into focus without the operator releasing the instrument, such as the forceps, held in the hand. Accordingly, it is possible to improve the operability of the endoscope system.
- An image process executed by the CCU 12 may be executed with hardware or software.
- a program constituting the software is installed on a computer.
- Here, the computer includes a computer incorporated in dedicated hardware, a general-purpose personal computer capable of performing various functions by installing various programs, and the like.
- Fig. 12 is a block diagram illustrating a configuration example of hardware of a computer in which the CCU 12 executes an image process by using a program.
- In the computer, a central processing unit (CPU) 201, a read only memory (ROM) 202, and a random access memory (RAM) 203 are connected to each other through a bus 204.
- An input and output interface 205 is connected to the bus 204.
- An input section 206, an output section 207, a storage section 208, a communication section 209, and a drive 210 are connected to the input and output interface 205.
- the input section 206 is configured by a keyboard, a mouse, a microphone, and the like.
- the output section 207 is configured by a display, a speaker, and the like.
- the storage section 208 is configured by a hard disk, a non-volatile memory, and the like.
- the communication section 209 is configured by a network interface and the like.
- The drive 210 drives a removable recording medium 211 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory.
- the CPU 201 executes the above-described series of processes by loading a program stored in the storage section 208 on the RAM 203 through the input and output interface 205 and the bus 204 and executing the loaded program, for example.
- the program may be installed on the storage section 208 through the input and output interface 205 by mounting the removable recording medium 211 on the drive 210.
- the program may be received in the communication section 209 through a wired or wireless transmission medium such as a local area network, the Internet, and satellite data broadcasting and may be installed on the storage section 208.
- the program may be installed on the ROM 202 or the storage section 208 in advance.
- In this specification, the steps illustrated in the flowcharts may be executed in time series in the illustrated order.
- Alternatively, the steps may be executed not necessarily in time series, but in parallel or at a necessary timing, for example, when a call is performed.
- the system means a set of multiple constituents (apparatus, module (component), and the like) and it is not necessary that all of the constituents are in the same housing. Accordingly, the system includes a plurality of apparatuses which are stored in separate housings and connected to each other through a network and one apparatus in which a plurality of modules are stored in one housing.
- The embodiment of the present technology is not limited to the above-described embodiment and may be changed variously without departing from the gist of the present technology.
- For example, the present technology may have a configuration of cloud computing in which one function is shared by a plurality of apparatuses through a network and is processed by the plurality of apparatuses in cooperation.
- Each step illustrated in the above-described flowcharts may be executed in one apparatus or may be distributed and executed in a plurality of apparatuses.
- In a case where one step includes a plurality of processes, the plurality of processes included in the one step may be executed in one apparatus or may be distributed and executed in a plurality of apparatuses.
- the present technology may have the following configurations.
- a medical image processing apparatus comprising: a controller including circuitry configured to determine a position of a distal end of an important object within a medical image, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest.
- a controller including circuitry configured to determine a position of a distal end of an important object within a medical image, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest.
- The medical image processing apparatus, wherein the circuitry is configured to control display of the region of interest by displaying one or more of: a zoomed image corresponding to the region of interest, the medical image having focus on the region of interest, and the medical image in which the region of interest is highlighted.
- the controller including the circuitry is configured to estimate the region of interest based on a three-dimensional posture of the important object.
- the controller including the circuitry is configured to estimate the region of interest by detecting an estimated position of a cross point between an extension from the distal end of the important object and a surface of a body.
- a method for processing a medical image by a medical image processing apparatus including a controller including circuitry comprising: determining, using the circuitry, a position of a distal end of an important object within the medical image; estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object; and controlling, using the circuitry, display of the region of interest.
- a medical image processing system comprising: a medical imaging device that obtains a medical image; a display device having a display area; and a controller including circuitry configured to determine a position of a distal end of an important object within the medical image obtained by the medical imaging device, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest in the display area of the display device.
- the medical image processing system according to (11) wherein the medical imaging device generates both a left and a right medical image corresponding to the medical image using three dimensional imaging.
- a medical image processing apparatus comprising: a controller including circuitry configured to determine a position of an important area within a medical image, estimate a region of interest within the medical image adjacent to a region including the important area based on the position of the important area, and control display of the region of interest.
- An image processing apparatus including: a generation section configured to generate depth information of an image from the image obtained by imaging a surgical field which includes a region of a surgical target; and a position detection section configured to detect a three-dimensional position of a surgical instrument by using the generated depth information of the image.
- a remarkable point estimation section configured to estimate a remarkable point for a practitioner operating the surgical instrument, based on the detected three-dimensional position of the surgical instrument and the depth information.
- the remarkable point estimation section estimates an intersection point to be the remarkable point, the intersection point of the surgery region and an extension line obtained by extending a line segment corresponding to the three-dimensional position of the surgical instrument.
- a predetermined mark for indicating the remarkable point is superposed on the image obtained by the imaging section.
- An image processing method including: causing an image processing apparatus to generate depth information of an image from the image obtained by imaging a surgical field which includes a region of a surgical target and to detect a three-dimensional position of the surgical instrument by using the generated depth information of the image.
- a program of causing a computer to execute a process including: generating depth information of an image from the image obtained by imaging a surgical field which includes a region of a surgical target; and detecting a three-dimensional position of the surgical instrument by using the generated depth information of the image.
11 Endoscope camera head, 12 CCU, 13 Operation section, 14 Display, 21 Imaging section, 24a First imaging sensor, 24b Second imaging sensor, 41 Depth information generation section, 42 Forceps/area position detection section, 43 Remarkable point estimation section, 44 Image superposition section, 45 Operation control section, 61 Foot pedal, Q Remarkable point, QA Area of interest, 111 Guide line, QB Zoom image, 201 CPU, 202 ROM, 203 RAM, 206 Input section, 207 Output section, 208 Storage section, 209 Communication section, 210 Drive
Abstract
A medical image processing apparatus including a controller including circuitry configured to determine a position of a distal end of an important object within a medical image, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest.
Description
The present technology relates to an image processing apparatus, an image processing method, and a program, particularly, to an image processing apparatus, an image processing method, and a program allowed to display a surgery region desired by a practitioner without an effort of the practitioner.
<CROSS REFERENCE TO RELATED APPLICATIONS>
This application claims the benefit of Japanese Priority Patent Application JP 2014-115769 filed June 04, 2014, the entire contents of which are incorporated herein by reference.
An endoscopic surgery has been used in which an endoscope is inserted into a body, a region (surgery region) being a surgical target in the body is captured as an observed portion and displayed on a screen by using the endoscope, and treatment is performed on the surgery region while viewing the screen. In the endoscopic surgery, illumination light is applied to the observed portion from a light source device, desired signal processing is performed on the image signal of the optical image of the observed portion obtained in the endoscope, and an image of the observed portion is displayed on the screen.
In such an endoscopic surgery, the range or position of the observed portion displayed on the screen needs to be adjusted appropriately depending on circumstances so that the practitioner performing the surgery can ensure an optimal field of view (surgical field) for the surgery region.
However, the practitioner typically holds surgical instruments with both hands, and it is therefore difficult for the practitioner to perform such screen adjustment rapidly by himself or herself. Moreover, it is not preferable for the practitioner to operate an adjustment mechanism or the like by himself or herself in view of keeping the surgery region, the medical equipment, the operating room, and the like clean.
Thus, in general, the practitioner gives an instruction to an assistant called a scopist or the like, and the assistant operates the adjustment mechanism in accordance with the instruction from the practitioner to perform the screen adjustment.
However, in the method in which the assistant intervenes, the instruction of the practitioner may be conveyed inaccurately, and thus the rapid screen adjustment desired by the practitioner may be difficult.
As a method of adjusting a screen without an assistant intervening, for example, PTL 1 discloses that focus control is performed on an area in which brightness or contrast does not change for a predetermined period.
However, according to PTL 1, an area in which brightness or contrast does not change for a predetermined period may or may not be the area on which the practitioner wishes to focus, and thus an incorrect focus may be obtained.
The present technology has been made in view of such circumstances and makes it possible to estimate a region of interest desired by a practitioner without requiring an effort of the practitioner.
According to an embodiment of the present disclosure, there is provided a medical image processing apparatus including a controller including circuitry configured to determine a position of a distal end of an important object within a medical image, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest.
According to another embodiment of the present disclosure, there is provided a method for processing a medical image by a medical image processing apparatus including a controller including circuitry. The method includes the steps of determining, using the circuitry, a position of a distal end of an important object within the medical image, estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and controlling, using the circuitry, display of the region of interest.
According to another embodiment of the present disclosure, there is provided a medical image processing system including a medical imaging device that obtains a medical image, a display device having a display area, and a controller including circuitry configured to determine a position of a distal end of an important object within the medical image obtained by the medical imaging device, estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and control display of the region of interest in the display area of the display device.
According to another embodiment of the present disclosure, there is provided a medical image processing apparatus including a controller including circuitry configured to determine a position of an important area within a medical image, estimate a region of interest within the medical image adjacent to a region including the important area based on the position of the important area, and control display of the region of interest.
According to the embodiments of the present technology, it is possible to estimate a region of interest desired by the practitioner.
The effect described herein is not necessarily limited thereto and may include any effect described in the present technology.
Hereinafter, a configuration (below referred to as an embodiment) for implementing the present technology will be described. The description will be made in the following order.
1. Configuration Example of Endoscope System
2. Flow of Focusing Process
3. The Other Flow of Focusing Process
4. Regarding Recording Medium
<Configuration Example of Endoscope System>
Fig. 1 is a block diagram illustrating a configuration example of an embodiment of an endoscope system according to the present technology.
The endoscope system in Fig. 1 is configured by an endoscope camera head 11, a camera control unit (CCU) 12, an operation section 13, and a display 14.
This endoscope system is used in endoscopic surgery in which a region (surgery region) in a body being a surgical target is captured as an observed portion and is displayed on the display 14, and the observed portion is treated while viewing the display 14.
In the endoscopic surgery, for example, as illustrated in Fig. 2, an insertion portion 25 of the endoscope camera head 11 and two pairs of forceps 81 (81A and 81B) being surgical instruments are inserted into the body of a patient. The endoscope camera head 11 emits light from a tip end of the insertion portion 25, illuminates a surgery region 82 of the patient, and images a state of the two pairs of forceps 81 and the surgery region 82.
Here, an endoscope will be described as an example, but the present technology may be also applied to an apparatus other than a medical apparatus such as an endoscope. For example, the present technology may be applied to an apparatus of executing some types of processes on a remarkable region corresponding to a surgery region by an instructing tool, a predetermined device, or the like corresponding to the surgical instrument.
The endoscope camera head 11 includes an imaging section 21, a light source 22, and a focus lens 23, as illustrated in Fig. 1.
The imaging section 21 includes at least two imaging sensors 24, namely a first imaging sensor 24a and a second imaging sensor 24b. Each imaging sensor 24 is configured by, for example, a charge coupled device (CCD) sensor, a complementary metal oxide semiconductor (CMOS) sensor, or the like, images a subject, and generates an image as a result. A high resolution sensor having about 4000 x 2000 pixels (horizontal x vertical), called a 4K camera, may be employed as the imaging sensor 24. The two imaging sensors 24 are disposed at a predetermined distance from each other in the traverse direction, generate images having viewpoint directions different from each other, and output the images to the CCU 12.
In this embodiment, images obtained by the two imaging sensors 24 performing imaging are referred to as surgery region images. In this embodiment, the first imaging sensor 24a is set to be disposed on a right side and the second imaging sensor 24b is set to be disposed on a left side, and the surgery region image generated by the first imaging sensor 24a is referred to as an R image and the surgery region image generated by the second imaging sensor 24b is referred to as an L image.
The light source 22 is configured by, for example, a halogen lamp, a xenon lamp, a light emitting diode (LED) light source, and the like and the light source 22 emits light for illuminating the surgery region.
The focus lens 23 is configured by one or a plurality of lenses, and is driven by a focus control section 46 (will be described later) of the CCU 12 and forms an image on an imaging surface of the imaging sensor 24 by using incident light (image light) from the subject.
The CCU 12 is an image processing apparatus for processing the surgery region image obtained by the imaging section 21 of the endoscope camera head 11 performing imaging. The CCU 12 is configured by a depth information generation section 41, a forceps position detection section 42, a remarkable point estimation section 43, an image superposition section 44, an operation control section 45, and a focus control section 46.
An R image and an L image which are generated and output in the imaging section 21 are supplied to the depth information generation section 41 and the image superposition section 44 of the CCU 12. One (for example, L image) of the R image and the L image is also supplied to the focus control section 46.
The depth information generation section 41 generates depth information of the surgery region image from the supplied R image and L image. More specifically, the depth information generation section 41 calculates a position of each pixel of the surgery region image in a depth direction by using the supplied R image and L image and a principle of triangulation.
A calculation method of a depth position of each pixel in the surgery region image will be described by using the principle of triangulation with reference to Fig. 3 and Fig. 4.
First, the first imaging sensor 24a and the second imaging sensor 24b are arranged in a row at a distance T in the traverse direction, as illustrated in Fig. 3, and each of the first imaging sensor 24a and the second imaging sensor 24b images an object P in the real world.
The positions of the first imaging sensor 24a and the second imaging sensor 24b in the vertical direction are the same as each other and the positions in the horizontal direction are different from each other. Thus, the position of the object P in the R image obtained by the first imaging sensor 24a and the position of the object P in the L image obtained by the second imaging sensor 24b are different only in x coordinates.
For example, the x coordinate of the object P shown in the R image obtained by the first imaging sensor 24a is set to xr and the x coordinate of the object P shown in the L image obtained by the second imaging sensor 24b is set to xl.
If the principle of triangulation is used, as illustrated in Fig. 4, the x coordinate of the object P in the R image being xr corresponds to a position on a straight line joining an optical center Or of the first imaging sensor 24a and the object P. The x coordinate of the object P in the L image being xl corresponds to a position on a straight line joining an optical center Ol of the second imaging sensor 24b and the object P.
Here, when the distance from the optical center Or to the image plane of the R image (equal to the distance from the optical center Ol to the image plane of the L image) is set as f and the distance (depth) from the line joining the optical center Or and the optical center Ol to the object P in the real world is set as Z, the parallax d is represented by d = (xl - xr), and the following Equation (1) holds by similar triangles.
d = xl - xr = f * T / Z ... (1)
Accordingly, the distance Z to the object P may be obtained by using the following Equation (2), which is obtained by rearranging Equation (1).
Z = f * T / (xl - xr) ... (2)
The depth information generation section 41 in Fig. 1 calculates a depth Z of each pixel in the surgery region image by using the above-described principle of the triangulation. The depth Z of each pixel calculated by the depth information generation section 41 is supplied to the forceps position detection section 42 and the remarkable point estimation section 43, as depth information.
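As a point of reference only, the depth calculation of Equation (2) can be written compactly in code. The following Python sketch is illustrative and is not the implementation of the depth information generation section 41; the focal length and baseline values in the example are made-up numbers.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Compute the depth Z of each pixel using Z = f * T / (xl - xr) (Equation (2)).

    disparity_px    : 2-D array of parallax values d = xl - xr, in pixels
    focal_length_px : distance f from the optical center to the image plane, in pixels
    baseline_mm     : distance T between the two imaging sensors, in millimetres
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)      # zero disparity means infinitely far
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_mm / disparity[valid]
    return depth                                  # same unit as the baseline (mm here)

# Example with made-up numbers: f = 1400 px, T = 5 mm, disparity of 70 px gives Z = 100 mm.
print(depth_from_disparity(np.array([[70.0]]), focal_length_px=1400.0, baseline_mm=5.0))
```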
The forceps position detection section 42 detects the position of an important object, such as the forceps 81, shown in the surgery region image by using the depth information of the surgery region image supplied from the depth information generation section 41. As described above, the two pairs of forceps 81 may be imaged as subjects in the surgery region image, and the position of either of the pairs of forceps 81 may be detected. The position of the distal end of the forceps 81 may also be detected. The forceps 81 whose position is to be detected may be determined in advance, or the forceps whose position is detected more easily in the surgery region image may be selected. In addition, the positions of both pairs of forceps 81 may be detected.
Position detection of the forceps 81 performed by the forceps position detection section 42 will be described with reference to Fig. 5A to Fig. 5D.
First, the forceps position detection section 42 generates a parallax image from depth information of the surgery region image supplied from the depth information generation section 41. The parallax image refers to an image obtained by representing the depth Z of each pixel being the depth information, in gray scale.
Fig. 5A illustrates an example of the parallax image; the greater the brightness value in the parallax image, the smaller the corresponding depth Z, that is, the closer the subject in the surgery region image is to the front.
Then, the forceps position detection section 42 detects edges, which are boundaries between brightness values, from the generated parallax image. For example, pixels for which the difference between the pixel values of adjacent pixels in the parallax image is equal to or greater than a predetermined value are detected as an edge. Alternatively, the forceps position detection section 42 may detect the forceps 81 from one or more of color difference information, brightness, and depth, independently of or in concert with edge detection techniques.
Here, the edge is detected based on the brightness value. However, the surgical field generally has red as its main color component, whereas the forceps generally has a color, such as silver, white, or black, that is different from red. Since the surgical field and the forceps have different colors as described above, edge detection based on color component information may also be performed. That is, a configuration may be made in which the three-dimensional position of a surgical instrument such as the forceps is detected based on information on a specific color in the parallax image.
Here, a case of using the brightness value will be described as an example. Fig. 5B illustrates an example of edges detected in the parallax image of Fig. 5A.
Then, the forceps position detection section 42 removes curved edges from the detected edges and detects only a linear edge having a predetermined length or greater.
Since the forceps 81 has a bar shape, a linear edge having a predetermined length or greater is present as the edge of the forceps 81. Thus, the forceps position detection section 42 detects, from among the detected edges, only the linear edge having a predetermined length or greater as the edge of the forceps 81.
When specifying the edge of the forceps 81, the forceps position detection section 42 may also determine whether or not the detected linear edge is a straight line continuing from the outer circumference portion of the surgery region image, in addition to determining whether or not the detected linear edge has the predetermined length or greater. When the insertion portion 25 of the endoscope camera head 11 and the forceps 81 have the positional relationship illustrated in Fig. 2, the forceps 81 is generally captured so as to extend from the outer circumference portion of the surgery region image toward its center portion. For this reason, it is possible to further raise the detection accuracy of the forceps 81 by determining whether or not the detected linear edge is a straight line continuing from the outer circumference portion of the surgery region image.
Then, the forceps position detection section 42 estimates a position of the forceps 81 in the three-dimensional space in the captured image, that is, a posture of the forceps 81, from the detected linear edge.
Specifically, the forceps position detection section 42 calculates a line segment (straight line) 101 corresponding to the forceps 81, from the detected linear edge, as illustrated in Fig. 5D. The line segment 101 may be obtained by using an intermediate line between the detected two linear edges, and the like.
The forceps position detection section 42 selects two arbitrary points (x1, y1) and (x2, y2) on the calculated line segment 101 and acquires the depth positions z1 and z2 at the positions (x1, y1) and (x2, y2) of the two selected points from the supplied depth information. Accordingly, the positions (x1, y1, z1) and (x2, y2, z2) of the forceps 81 in the three-dimensional space are specified in the surgery region image. The positions may include, for example, the distal end of the forceps.
When two line segments corresponding to the two pairs of forceps 81 are detected in the surgery region image, one of the two line segments may be selected, for example, by selecting the one closer to the front.
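For illustration, the processing of Fig. 5A to Fig. 5D can be sketched as follows in Python. This is a minimal sketch that assumes OpenCV's standard Canny edge detector and probabilistic Hough transform as stand-ins for the edge detection and linear-edge selection described above; the thresholds, the minimum line length, and the border margin are arbitrary assumptions rather than values specified by the present technology.

```python
import numpy as np
import cv2

def detect_forceps_points(depth_map, min_length_px=100):
    """Sketch of the forceps position detection of Figs. 5A to 5D.

    depth_map: 2-D array of the depth Z of each pixel (the depth information).
    Returns two 3-D points (x1, y1, z1) and (x2, y2, z2) on the detected line, or None.
    """
    # Fig. 5A: render the depth as a gray-scale parallax image (nearer subjects are brighter).
    z = depth_map.astype(np.float64)
    parallax = (255.0 * (z.max() - z) / max(z.max() - z.min(), 1e-6)).astype(np.uint8)

    # Fig. 5B: detect edges, i.e. boundaries between brightness values.
    edges = cv2.Canny(parallax, 50, 150)

    # Fig. 5C: keep only linear edges of a predetermined length or greater.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=min_length_px, maxLineGap=10)
    if lines is None:
        return None

    h, w = parallax.shape

    def touches_border(x1, y1, x2, y2, margin=5):
        return (min(x1, x2) < margin or min(y1, y2) < margin
                or max(x1, x2) > w - margin or max(y1, y2) > h - margin)

    # Prefer a line continuing from the outer circumference of the image (see Fig. 2).
    candidates = [l[0] for l in lines if touches_border(*l[0])] or [l[0] for l in lines]

    # Fig. 5D: take two points on the longest candidate and read their depths.
    x1, y1, x2, y2 = max(candidates, key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return (x1, y1, float(depth_map[y1, x1])), (x2, y2, float(depth_map[y2, x2]))
```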
Returning to Fig. 1, the forceps position detection section 42 supplies the positions (x1, y1, z1) and (x2, y2, z2) of the forceps 81 in the three-dimensional space which are detected in the above-described manner, to the remarkable point estimation section 43.
The depth information of the surgery region image is supplied from the depth information generation section 41 to the remarkable point estimation section 43 and the coordinates (x1, y1, z1) and (x2, y2, z2) of the two points in the three-dimensional space which represent a posture of the forceps 81 are supplied from the forceps position detection section 42 to the remarkable point estimation section 43.
As an alternative to the forceps position detection section 42, element 42 can also be an area position detection section. The area position detection section 42 detects an important area containing, for example, certain tissues, body parts, bleeding, blood vessels, or the like. The detection of the important area is based on color information, brightness, and/or differences between frames. For example, an important area could be detected as an area that shows no bleeding in one frame and then shows bleeding in subsequent frames.
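As a rough illustration of such detection from inter-frame differences, the sketch below flags a region whose red component changes strongly between two frames and returns its center and bounding box. The channel choice, thresholds, and use of OpenCV contour functions (OpenCV 4.x return signature) are assumptions for illustration only, not the method of the area position detection section 42.

```python
import numpy as np
import cv2

def detect_important_area(prev_bgr, curr_bgr, diff_threshold=40, min_area_px=200):
    """Sketch of important-area detection from frame differences, e.g. a region that
    shows no bleeding in one frame and bleeding in a later frame (assumed thresholds)."""
    # Difference of the red channel, where new bleeding is assumed to appear most strongly.
    diff = cv2.absdiff(curr_bgr[:, :, 2], prev_bgr[:, :, 2])
    mask = (diff > diff_threshold).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove specks

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = [c for c in contours if cv2.contourArea(c) >= min_area_px]
    if not areas:
        return None
    x, y, w, h = cv2.boundingRect(max(areas, key=cv2.contourArea))
    return (x + w // 2, y + h // 2), (x, y, w, h)   # center (a candidate for Q) and box
```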
The remarkable point estimation section 43 assumes that a remarkable point Q at the surgery region 82 is at a position obtained by extending the detected positions of the forceps 81 and estimates a position of the remarkable point Q of the surgery region 82 in the three-dimensional space, as illustrated in Fig. 6. The remarkable point Q at the surgery region 82 corresponds to an intersection point of an extension line obtained by extending the detected posture of the forceps 81 and a surface of the surgery region 82. An estimated location coordinate of the remarkable point Q at the surgery region 82 in the three-dimensional space is supplied to the image superposition section 44.
The remarkable point estimation section 43 can also estimate the remarkable point Q from the determined important area. In particular, the remarkable point Q can be generated based on the position of the important area.
The image superposition section 44 generates a superposition image by superposing a predetermined mark (for example, an x mark), or a circle or quadrangle of a predetermined size representing the region of interest, centered on the position of the remarkable point Q supplied from the remarkable point estimation section 43, onto the surgery region image supplied from the imaging section 21, and displays the generated superposition image on the display 14.
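As an illustration of such superposition, the following sketch draws a quadrangular mark for the area of interest QA centered on the remarkable point Q, a guide line from the forceps tip to Q, and a cross mark at Q onto a copy of the surgery region image. The colors, line widths, and OpenCV drawing calls are arbitrary choices for illustration and not part of the described configuration.

```python
import cv2

def superpose_region_of_interest(image_bgr, q_xy, tip_xy, half_size=40):
    """Sketch of the superposition of Fig. 7 (assumed colors and sizes)."""
    out = image_bgr.copy()
    qx, qy = q_xy
    cv2.rectangle(out, (qx - half_size, qy - half_size),
                  (qx + half_size, qy + half_size), (0, 255, 0), 2)   # area of interest QA
    cv2.line(out, tip_xy, q_xy, (0, 255, 255), 1)                     # guide line 111
    cv2.drawMarker(out, q_xy, (0, 0, 255), markerType=cv2.MARKER_CROSS,
                   markerSize=12, thickness=2)                        # remarkable point Q
    return out
```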
A configuration may also be made in which display mode information, which is control information for designating ON or OFF of the 3D display, is supplied to the image superposition section 44 from the operation control section 45. When OFF of the 3D display is designated through the display mode information, the image superposition section 44 supplies either the R image or the L image to the display 14 and causes the surgery region image to be displayed in a 2D manner.
On the other hand, when an instruction of ON of the 3D display is received through the display mode information, the image superposition section 44 supplies both of the R image and the L image to the display 14 and causes the surgery region image to be displayed in a 3D manner. Here, the 3D display refers to an image display manner in which the R image and the L image are alternately displayed on the display 14, the right eye of a practitioner visually recognizes the R image, the left eye of the practitioner visually recognizes the L image, and thus the practitioner perceives the surgery region image three-dimensionally.
The operation control section 45 supplies various control signals to necessary sections based on an operation signal supplied from the operation section 13. For example, the operation control section 45 supplies an instruction of focus matching to the focus control section 46 in accordance with an instruction of matching a focus with an area including the remarkable point Q generated in the operation section 13.
The focus control section 46 performs focus control by using a contrast method, based on the L image supplied from the imaging section 21. Specifically, the focus control section 46 drives the focus lens 23 of the endoscope camera head 11 and compares the contrast of the L image supplied from the imaging section 21 at each lens position to detect the in-focus position. The focus control section 46 may perform the focus control in such a manner that the location coordinate of the remarkable point Q is acquired from the remarkable point estimation section 43 and an area of a predetermined range centered on the remarkable point Q is set as the focus control target area.
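A contrast-method focus search of this kind can be sketched as follows. The lens driver and frame grabber below are hypothetical callbacks, and the variance-of-Laplacian sharpness score is one common contrast measure chosen for illustration; it is not stated to be the specific metric used by the focus control section 46.

```python
import cv2

def contrast_autofocus(capture_frame, set_lens_position, positions, q_xy, half_size=40):
    """Sketch of contrast-method focus control on an area centered on the remarkable point Q.

    capture_frame     : callable returning the current L image as a grayscale array (hypothetical)
    set_lens_position : callable driving the focus lens 23 to a given position (hypothetical)
    positions         : iterable of candidate lens positions to sweep
    """
    qx, qy = q_xy
    best_pos, best_score = None, -1.0
    for pos in positions:
        set_lens_position(pos)
        frame = capture_frame()
        roi = frame[max(qy - half_size, 0):qy + half_size,
                    max(qx - half_size, 0):qx + half_size]
        score = cv2.Laplacian(roi, cv2.CV_64F).var()   # sharpness (contrast) measure
        if score > best_score:
            best_pos, best_score = pos, score
    set_lens_position(best_pos)                        # leave the lens at the sharpest position
    return best_pos
```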
The operation section 13 includes at least a foot pedal 61. The operation section 13 receives an operation from a practitioner (operator) and supplies an operation signal corresponding to an operation performed by the practitioner to the operation control section 45. The practitioner may perform, for example, matching a focus with a position of the mark indicating the remarkable point Q displayed on the display 14, switching the 2D display and the 3D display of the surgery region image displayed on the display 14, setting zoom magnification of the endoscope, and the like by operating the operation section 13.
The display 14 is configured by, for example, a liquid crystal display (LCD) or the like and displays the surgery region image captured by the imaging section 21 of the endoscope camera head 11 based on the image signal supplied from the image superposition section 44. Depending on whether or not the superposition mode is set to ON, either the surgery region image captured by the imaging section 21 or a superposition image obtained by superposing, on the surgery region image, a mark which has a predetermined shape and indicates the position of the remarkable point Q estimated by the remarkable point estimation section 43 is displayed on the display 14.
Fig. 7 illustrates an example of the superposition image displayed on the display 14.
In a superposition image 100 of Fig. 7, an area of interest (region of interest) QA which is determined to be an area including the remarkable point Q and has a predetermined size is indicated on a surgery region image 110 supplied from the imaging section 21 by a quadrangular mark.
In the surgery region image 110, the two pairs of forceps 81A and 81B are imaged, and the remarkable point Q is estimated based on the position of the forceps 81A on the left side. In the superposition image 100, the estimated remarkable point Q, the mark (the quadrangle indicating the area of interest in Fig. 7) for causing the practitioner to recognize the remarkable point Q, and a guide line 111 corresponding to the extension line calculated through estimation of the remarkable point Q are displayed. The area of interest (region of interest) QA may overlap with, be adjacent to, and/or be distinct from the region including the forceps 81 or the important area.
In a configuration in which the guide line 111 is displayed, the display of the guide line 111 allows a three-dimensional distance in the abdominal cavity to be recognized intuitively and allows three-dimensional distance information to be provided to the practitioner even in a plan view.
In Fig. 7, the quadrangle is illustrated as a mark or highlight for causing the practitioner to recognize the remarkable point Q, but the mark is not limited to the quadrangle. As the mark, other shapes such as a triangle may be applied. In addition, a mark of a shape such as an (x) mark may be displayed.
<Flow of Focusing Process>
Then, in the endoscope system of Fig. 1, a process when a remarkable point Q is detected and a focus is matched with a position of the remarkable point Q will be described with reference to a flowchart of Fig. 8. Whether or not such a process (process of auto-focusing) of detecting a remarkable point Q and performing matching of a focus is executed may be set by a practitioner. A configuration in which a process of the flowchart illustrated in Fig. 8 is executed when auto-focusing is set to be executed may be made.
In addition, a configuration in which setting whether or not auto-focusing is executed is performed by operating the foot pedal 61 may be made.
A configuration may be made in which the process based on the flow of the focusing process illustrated in Fig. 8 is started when the practitioner operates a start button (the foot pedal 61 may be used as the start button), when an optical zoom or the like is determined to be out of focus, or the like. The present technology may also be applied to a microscope system, with a configuration in which the process based on the flow of the focusing process illustrated in Fig. 8 is started using movement of an arm as a trigger in the microscope system.
Fig. 8 illustrates the flowchart of the focusing process. The focusing process of Fig. 8 is executed in a state where power is supplied to each mechanism of the endoscope system, the insertion portion 25 of the endoscope camera head 11 and the forceps 81 are inserted into the body of a patient, and the light source 22 illuminates the surgery region 82 of the patient.
First, in Step S1, the depth information generation section 41 generates depth information of a surgery region image from an R image and an L image supplied from the imaging section 21 of the endoscope camera head 11. More specifically, the depth information generation section 41 calculates the depth Z of each location (pixel) in the surgery region image by using Equation (2), which uses the principle of triangulation described with reference to Fig. 4. The depth information calculated in the depth information generation section 41 is supplied to the remarkable point estimation section 43 and the forceps position detection section 42.
A process of Step S1 is a process for determining three-dimensional positions of a surface of a subject imaged by the imaging section 21.
In Step S2, the forceps position detection section 42 executes a forceps position detecting process of detecting a position of the forceps 81 in the surgery region image using the depth information of the surgery region image supplied from the depth information generation section 41. A process of Step S2 is a process for detecting a three-dimensional position of a bar shaped instrument such as forceps held by a practitioner. Step S2 can alternatively correspond to the area position detection section 42 executing an area position detecting process.
<Detailed Flow of Forceps Position Detecting Process>
Fig. 9 illustrates a detailed flowchart of the forceps position detecting process executed in Step S2.
In the forceps position detecting process, at first, in Step S21, the forceps position detection section 42 generates a parallax image from the depth information of the surgery region image supplied from the depth information generation section 41.
In Step S22, the forceps position detection section 42 detects an edge which is a boundary between brightness values from the generated parallax image.
In Step S23, the forceps position detection section 42 removes a curved edge out of the detected edge and detects only a linear edge having a predetermined length or greater.
In Step S24, the forceps position detection section 42 estimates the position of the forceps 81 in the surgery region image in the three-dimensional space from the detected linear edge. With this, as described above with reference to Fig. 5D, the coordinates (x1, y1, z1) and (x2, y2, z2) of two points indicating positions of the forceps 81 in the surgery region image in the three-dimensional space are determined. The positions may include, for example, the distal end of the forceps.
The positions (x1, y1, z1) and (x2, y2, z2) of the forceps 81 in the surgery region image, which are detected as described above, are supplied to the remarkable point estimation section 43 and the process proceeds to Step S3 in Fig. 8.
In Step S3, the remarkable point estimation section 43 executes a remarkable point estimating process of assuming that a remarkable point Q at the surgery region 82 is at a position obtained by extending the detected positions of the forceps 81 and of detecting a position of the remarkable point Q of the surgery region 82 in the three-dimensional space.
<Detailed Flow of Remarkable Point Estimating Process>
The remarkable point estimating process executed in Step S3 of Fig. 8 will be described in detail with reference to a flowchart of Fig. 10.
First, in Step S41, the remarkable point estimation section 43 obtains positions (x1, y1, z1) and (x2, y2, z2) of the forceps 81 in the surgery region image in the three-dimensional space which are supplied from the forceps position detection section 42.
In Step S42, the remarkable point estimation section 43 calculates a slope a1 of a line segment A joining the coordinates (x1, y1) and (x2, y2) of the two points of the forceps in an XY plane. The slope a1 may be calculated by using the following equation.
a1=(Y2-Y1)/(X2-X1)
In Step S43, the remarkable point estimation section 43 calculates a slope a2 of a line segment B joining the coordinates (X1, Z1) and (X2, Z2) of the two points of the forceps in an XZ plane. The slope a2 may be calculated by using the following equation.
a2=(Z2-Z1)/(X2-X1)
In Step S44, the remarkable point estimation section 43 determines a coordinate X3 being an X coordinate value when the line segment A in the XY plane is extended by a predetermined length W in a center direction of the screen. The predetermined length W can be defined as 1/N (N is a positive integer) of the line segment A, for example.
In Step S45, the remarkable point estimation section 43 calculates Y3=a1*X3 and calculates an extension point (X3, Y3) of the line segment A in the XY plane. Here, "*" represents multiplication.
In Step S46, the remarkable point estimation section 43 calculates depth Z3 of the extension point (X3,Y3) of the line segment A in the XZ plane by using Z3=a2*X3. Here, the calculated depth Z3 of the extension point (X3, Y3) of the line segment A in the XZ plane corresponds to a logical value of the extension point (X3, Y3).
In Step S47, the remarkable point estimation section 43 acquires depth Z4 of the extension point (X3, Y3) of the line segment A from the depth information supplied from the depth information generation section 41. Here, the acquired depth Z4 of the extension point (X3, Y3) corresponds to a real value of the extension point (X3, Y3).
In Step S48, the remarkable point estimation section 43 determines whether or not the depth Z3 being the logical value of the extension point (X3, Y3) is greater than the depth Z4 being the real value of the extension point (X3, Y3).
A case where the extension point (X3, Y3) obtained by extending the line segment A which corresponds to the forceps 81 by the predetermined length W in the center direction of the screen is not included in the surgery region 82 means a case where the surgery region 82 is at a position deeper than the extension point (X3, Y3). In this case, the depth Z4 being the real value of the extension point (X3, Y3) obtained from the depth information is greater than the depth Z3 being the logical value.
On the other hand, when the surgery region 82 actually includes the extension point (X3, Y3), the real value (depth Z4) of the extension point (X3, Y3) obtained from the depth information becomes ahead of the logical value (depth Z3) of the extension point (X3, Y3). Thus, the depth Z4 being the real value of the extension point (X3, Y3) is less than the depth Z3 being the logical value.
Accordingly, in Step S48, the remarkable point estimation section 43 determines whether or not the logical value (depth Z3) of the extension point (X3, Y3) obtained by extending the line segment A by the predetermined length W becomes ahead of the real value (depth Z4) of the extension point (X3, Y3) obtained from the depth information.
In Step S48, when the depth Z3 being the logical value of the extension point (X3, Y3) is equal to or less than the depth Z4 being the real value of the extension point (X3,Y3), that is, when the depth Z3 being the logical value of the extension point (X3, Y3) is determined to be ahead of the depth Z4 being the real value of the extension point (X3, Y3), the process returns to Step S44.
In Step S44 to which the process returns, a coordinate X3 obtained by extending the line segment A in the center direction of the screen to be deeper than the current extension point (X3, Y3) by the predetermined length W in the XY plane is determined as a new coordinate X3. Steps S45 to S48 which are described above are executed again on the determined new coordinate X3.
On the other hand, in Step S48, when the depth Z3 being the logical value of the extension point (X3, Y3) is greater than the depth Z4 being the real value, that is, when the depth Z4 being the real value of the extension point (X3, Y3) is determined to be ahead of the depth Z3 being the logical value, the process proceeds to Step S49.
In Step S49, the remarkable point estimation section 43 determines an extension point (X3, Y3, Z4) which has the depth Z4 being the real value as a Z coordinate value to be the remarkable point Q.
In this manner, Step S3 of Fig. 8 is completed and the process proceeds to Step S4.
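The iterative extension of Steps S41 to S49 can be summarized in code as follows. This sketch is slightly generalized in that it extends the line from the forceps point (x2, y2, z2) along the slopes a1 and a2 rather than evaluating Y3 = a1*X3 directly; the step size and the iteration limit are arbitrary assumptions.

```python
import numpy as np

def estimate_remarkable_point(p1, p2, depth_map, step_px=5, max_steps=200):
    """Sketch of the remarkable point estimation (Steps S41 to S49).

    p1, p2    : (x, y, z) points on the forceps, p2 being the one closer to the tip
    depth_map : 2-D array giving the real depth Z4 at each (y, x) pixel
    """
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    # As with the slope formulas above, x1 != x2 is assumed.
    a1 = (y2 - y1) / (x2 - x1)              # Step S42: slope of segment A in the XY plane
    a2 = (z2 - z1) / (x2 - x1)              # Step S43: slope of segment B in the XZ plane
    direction = 1.0 if x2 >= x1 else -1.0   # extend beyond the tip, toward the screen center
    h, w = depth_map.shape

    for i in range(1, max_steps + 1):
        x3 = x2 + direction * i * step_px   # Step S44: extend by the predetermined length W
        y3 = y2 + a1 * (x3 - x2)            # Step S45: extension point on segment A
        z3 = z2 + a2 * (x3 - x2)            # Step S46: logical depth Z3 of the extension point
        xi, yi = int(round(x3)), int(round(y3))
        if not (0 <= xi < w and 0 <= yi < h):
            break                           # the extension left the surgery region image
        z4 = float(depth_map[yi, xi])       # Step S47: real depth Z4 from the depth information
        if z3 > z4:                         # Step S48: the real value is now ahead of the logical value
            return (xi, yi, z4)             # Step S49: remarkable point Q
    return None
```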
In Step S4, the estimated (detected) remarkable point Q is displayed on a screen. For example, a predetermined mark is displayed at the position of the remarkable point Q, as described with reference to Fig. 7.
In Step S5, it is determined whether or not an instruction of setting a focal point has been generated. The practitioner operates the foot pedal 61 of the operation section 13 when the practitioner wishes the remarkable point Q displayed on the display 14 to be brought into focus. When this operation is performed, it is determined that the instruction of setting the estimated remarkable point as the focal point has been generated.
Here, it is described that operating the foot pedal 61 generates the instruction of setting a focal point. However, a configuration may be made in which the instruction of setting a focal point is generated using other methods, such as a method in which a switch (not illustrated) attached to the forceps is operated or a method in which a word preset through voice recognition is spoken.
In Step S5, when it is determined that the instruction of setting a focal point is not generated, the process returns to Step S1 and the subsequent processes are repeated.
On the other hand, in Step S5, when it is determined that the instruction of setting a focal point has been generated, the process proceeds to Step S6 and the focus control section 46 performs the focus control. The focus control section 46 performs the focus control in such a manner that the focus control section 46 obtains information on the coordinate position and the like of the remarkable point estimated by the remarkable point estimation section 43 and, based on the obtained information, matches the focus with the position of the remarkable point Q by, for example, the above-described contrast method.
Meanwhile, when it is determined in Step S5 that the instruction of setting a focal point is not generated, the process returns to Step S1 as illustrated in Fig. 8, depth information is generated again, and the subsequent processes are repeated, because the three-dimensional positions of the surface of the imaged subject may change due to movement of the endoscope camera head 11.
The present technology may be applied to a microscope system other than the above-described endoscope system, similarly. In this case, a microscope camera head may be provided instead of the endoscope camera head 11 in Fig. 1 and the CCU 12 may execute a superposition displaying process of the surgery region image imaged by the microscope camera head.
In the microscope system, a configuration in which when the instruction of setting a focal point is not generated in Step S5 illustrated in Fig. 8, the process returns to Step S2 and the subsequent processes are repeated may be made in a case where the microscope camera head and the subject are fixed and thus it is unlikely that the three-dimensional positions of the surface of the subject imaged by the imaging section 21 are changed. In this case, the three-dimensional positions (depth information) of the surface of the imaged subject are obtained once at first and then the processes are executed by repeatedly using the obtained depth information.
For example, depth information may be also generated in Step S1 for each predetermined period and the generated depth information may be updated in a configuration in which the process returns to Step S2 and the subsequent processes are repeated when it is determined that the instruction of setting a focal point is not generated in Step S5.
In the above-described forceps position detecting process, an example has been described in which the forceps 81 is detected by generating a parallax image from the depth information of the surgery region image and detecting a linear edge. However, other detecting methods may be employed.
For example, marks may be put at two predetermined locations on the forceps 81, and the two marks on the forceps 81 may be detected as the positions (x1, y1, z1) and (x2, y2, z2) of the forceps 81 in the surgery region image. Characteristics of the various forceps 81 used in performing surgery, such as the shape and color of the forceps 81 and the shape and color of the marks put on the forceps 81, may be stored in advance in a memory of the forceps position detection section 42 as a database, and the marks may be detected based on the information on the designated forceps 81.
In this manner, since the direction pointed to by the forceps is the direction on which the practitioner focuses, and the focus is controlled to match the portion of the subject image positioned in that direction, the practitioner can bring a desired portion into focus without releasing the instrument such as the forceps. Accordingly, it is possible to provide an endoscope system or a microscope system having good convenience and improved operability.
<The Other Flow of Focusing Process>
Other process according to the focus control of the endoscope system will be described with reference to a flowchart illustrated in Fig. 11.
In Step S101, the forceps position detection section 42 executes the forceps position detecting process of detecting a position of the forceps 81 in the surgery region image. The process in Step S101 corresponds to a process for detecting a three-dimensional position of a bar shaped instrument such as a forceps held by the practitioner. The forceps position detecting process in Step S101 may be executed similarly to the process of Step S2 illustrated in Fig. 8.
In Step S102, the depth information generation section 41 generates depth information of the surgery region image from an L image and an R image supplied from the imaging section 21 of the endoscope camera head 11. The process of Step S102 corresponds to a process for determining the three-dimensional positions of the surface of the subject imaged by the imaging section 21. The depth information generating process in Step S102 may be executed similarly to the process of Step S1 illustrated in Fig. 8.
The process based on the flowchart illustrated in Fig. 11 reverses the order of the process based on the flowchart illustrated in Fig. 8, in which the depth information is generated first and then the position of the forceps is detected; that is, the position of the forceps is detected first and then the depth information is generated.
In detecting the position of the forceps, a portion which is on an extension of the tip end (distal end) portion of the forceps and overlaps the subject image may be estimated. The process for generating the depth information is then executed on the estimated portion of the subject image. That is, in the process of the flowchart illustrated in Fig. 11, an area for which depth information is to be generated is extracted and the depth information is generated only in the extracted area.
In this manner, it is possible to reduce processing capacity or processing time necessary for generating depth information by extracting an area for generating depth information.
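As an illustration of this ordering, the sketch below runs a stereo matcher only on a cropped area ahead of the detected forceps tip. The use of OpenCV's block-matching stereo, the band size, and the focal length and baseline values are assumptions for illustration; the crop keeps a left margin of the disparity search range so that matching points in the R image remain inside the crop, and the input images are assumed to be rectified 8-bit grayscale.

```python
import numpy as np
import cv2

def depth_in_extension_area(l_gray, r_gray, tip_xy, direction_xy, band=60,
                            focal_length_px=1400.0, baseline_mm=5.0):
    """Sketch of the Fig. 11 ordering: generate depth information only in an area
    on the extension of the forceps tip (assumed matcher and parameter values)."""
    h, w = l_gray.shape
    tx, ty = tip_xy
    dx, dy = direction_xy                           # unit direction from the tip toward the center

    # Area of interest: a square box one band-length ahead of the tip.
    cx, cy = int(tx + dx * band), int(ty + dy * band)
    x0, x1 = max(cx - band, 0), min(cx + band, w)
    y0, y1 = max(cy - band, 0), min(cy + band, h)

    # Crop both images, keeping a left margin of the disparity range for matching.
    num_disp = 64
    lx0 = max(x0 - num_disp, 0)
    l_crop = l_gray[y0:y1, lx0:x1]
    r_crop = r_gray[y0:y1, lx0:x1]

    stereo = cv2.StereoBM_create(numDisparities=num_disp, blockSize=15)
    disp = stereo.compute(l_crop, r_crop).astype(np.float64) / 16.0  # fixed point to pixels
    disp = disp[:, x0 - lx0:]                                        # drop the margin columns

    depth = np.full((h, w), np.nan)                                  # depth only inside the area
    region = np.full(disp.shape, np.nan)
    valid = disp > 0
    region[valid] = focal_length_px * baseline_mm / disp[valid]      # Equation (2)
    depth[y0:y1, x0:x1] = region
    return depth
```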
Processes of Step S103 to Step S106 are executed similarly to the processes of Step S3 to Step S6 in the flowchart of Fig. 8. Thus, descriptions thereof are omitted.
As described above, according to the present technology, the focus may be matched with the image at the portion pointed to by the forceps, and thus it is possible to bring a desired portion into focus without the operator releasing an instrument such as the forceps held in the hand. Accordingly, it is possible to improve the operability of the endoscope system.
<Regarding Recording Medium>
The image processing executed by the CCU 12 may be executed with hardware or software. When the series of processes is executed with software, a program constituting the software is installed on a computer. Here, the computer includes a computer built into dedicated hardware, a general personal computer capable of performing various functions by installing various programs, and the like.
Fig. 12 is a block diagram illustrating a configuration example of hardware of a computer in which the CCU 12 executes an image process by using a program.
In the computer, a central processing unit (CPU) 201, a read only memory (ROM) 202, and a random access memory (RAM) 203 are connected to each other through a bus 204.
An input and output interface 205 is connected to the bus 204. An input section 206, an output section 207, a storage section 208, a communication section 209, and a drive 210 are connected to the input and output interface 205.
The input section 206 is configured by a keyboard, a mouse, a microphone, and the like. The output section 207 is configured by a display, a speaker, and the like. The storage section 208 is configured by a hard disk, a non-volatile memory, and the like. The communication section 209 is configured by a network interface and the like. The drive 210 drives a removable recording medium 211 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 201 executes the above-described series of processes by loading a program stored in the storage section 208 on the RAM 203 through the input and output interface 205 and the bus 204 and executing the loaded program, for example.
In the computer, the program may be installed on the storage section 208 through the input and output interface 205 by mounting the removable recording medium 211 on the drive 210. The program may be received in the communication section 209 through a wired or wireless transmission medium such as a local area network, the Internet, and satellite data broadcasting and may be installed on the storage section 208. In addition, the program may be installed on the ROM 202 or the storage section 208 in advance.
In this specification, the steps illustrated in the flowcharts may be executed in time series in the illustrated order, or may be executed not in time series but in parallel or at a necessary timing, for example, when a call is performed.
In this specification, the system means a set of multiple constituents (apparatus, module (component), and the like) and it is not necessary that all of the constituents are in the same housing. Accordingly, the system includes a plurality of apparatuses which are stored in separate housings and connected to each other through a network and one apparatus in which a plurality of modules are stored in one housing.
The embodiment of the present technology is not limited to the above-described embodiment and may be changed variously without departing from the gist of the present technology.
For example, an embodiment obtained by combining some or all of the above-described embodiments may be employed.
For example, the present technology may have a configuration of cloud computing in which one function is shared by a plurality of apparatuses through a network and processed by the plurality of apparatuses in cooperation.
Each step illustrated in the above-described flowcharts may be executed in one apparatus or may be distributed and executed in a plurality of apparatuses.
Furthermore, when one step includes a plurality of processes, the plurality of processes included in the one step may be executed in one apparatus or may be distributed and executed in a plurality of apparatuses.
The effects described in this specification are only examples and are not limited thereto. There may be effects other than the effects described in this specification.
The present technology may have the following configurations.
(1)
A medical image processing apparatus, comprising:
a controller including circuitry configured to
determine a position of a distal end of an important object within a medical image,
estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and
control display of the region of interest.
(2)
The medical image processing apparatus according to (1), wherein the medical image includes a surgical image of a body.
(3)
The medical image processing apparatus according to (1) or (2), wherein the region of interest within the medical image is distinct from the region including the important object.
(4)
The medical image processing apparatus according to any one of (1) to (3), wherein the controller including the circuitry is configured to determine the position of the distal end of the important object within the medical image based on three dimensional position information of the important object.
(5)
The medical image processing apparatus according to any one of (1) to (4), wherein the controller including the circuitry is configured to control display of the region of interest by displaying one or more of: a zoomed image corresponding to the region of interest, the medical image having focus on the region of interest, and the medical image in which the region of interest is highlighted.
(6)
The medical image processing apparatus according to any one of (1) to (5), wherein the controller including the circuitry is configured to estimate the region of interest based on a three-dimensional posture of the important object.
(7)
The medical image processing apparatus according to any one of (1) to (6), wherein the controller including the circuitry is configured to estimate the region of interest by detecting an estimated position of a cross point between an extension from the distal end of the important object and a surface of a body.
(8)
The medical image processing apparatus according to any one of (1) to (7), wherein the controller including the circuitry is configured to determine the position of the distal end of the important object within the medical image by detecting the position of the distal end of the important object using one or more of color difference information, edge detection techniques, brightness, and depth.
(9)
A method for processing a medical image by a medical image processing apparatus including a controller including circuitry, the method comprising:
determining, using the circuitry, a position of a distal end of an important object within the medical image;
estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object; and
controlling, using the circuitry, display of the region of interest.
(10)
A non-transitory computer readable medium having stored thereon a program that when executed by a computer causes the computer to implement a method for processing a medical image by a medical image apparatus including a controller including circuitry, the method comprising:
determining, using the circuitry, a position of a distal end of an important object within the medical image;
estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object; and
controlling, using the circuitry, display of the region of interest.
(11)
A medical image processing system, comprising:
a medical imaging device that obtains a medical image;
a display device having a display area; and
a controller including circuitry configured to
determine a position of a distal end of an important object within the medical image obtained by the medical imaging device,
estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and
control display of the region of interest in the display area of the display device.
(12)
The medical image processing system according to (11), wherein the medical imaging device generates both a left and a right medical image corresponding to the medical image using three dimensional imaging.
(13)
The medical image processing system according to (11) or (12), wherein the controller including the circuitry is configured to determine the position of the distal end of the important object within the medical image by detecting the position of the distal end of the important object using a depth image generated from the left and right medical images.
(14)
The medical image processing system according to (11) to (13), wherein the important object is a surgical instrument and the medical imaging device is an endoscope or a surgical microscope.
(15)
A medical image processing apparatus, comprising:
a controller including circuitry configured to
determine a position of an important area within a medical image,
estimate a region of interest within the medical image adjacent to a region including the important area based on the position of the important area, and
control display of the region of interest.
(16)
The medical image processing apparatus according to (15), wherein the region of interest within the medical image is distinct from the region including the important area.
(17)
An image processing apparatus including: a generation section configured to generate depth information of an image from the image obtained by imaging a surgical field which includes a region of a surgical target; and a position detection section configured to detect a three-dimensional position of a surgical instrument by using the generated depth information of the image.
(18)
The apparatus according to (17), further including: a remarkable point estimation section configured to estimate a remarkable point for a practitioner operating the surgical instrument, based on the detected three-dimensional position of the surgical instrument and the depth information.
(19)
The apparatus according to (18), further including: a focus control section configured to output a focus control signal such that a focus coincides with the remarkable point estimated by the remarkable point estimation section.
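Clause (19) couples the estimated remarkable point to focus control. A minimal sketch of that coupling is given below; the LensDriver class and its set_focus_distance() method are placeholders for whatever camera-head interface is actually available, so only the control flow is meant to be illustrative.

```python
# Minimal sketch (hypothetical interface): convert the depth at the estimated
# remarkable point into a lens focus command.
class LensDriver:
    def set_focus_distance(self, distance_mm: float) -> None:
        print(f"driving focus to {distance_mm:.1f} mm")  # placeholder for real control

def focus_on_remarkable_point(depth_map, point_uv, lens: LensDriver) -> None:
    """Drive the focus so that it coincides with the remarkable point."""
    u, v = point_uv
    distance_mm = float(depth_map[v, u])
    if distance_mm > 0:          # ignore invalid depth samples
        lens.set_focus_distance(distance_mm)
```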
(20)
The apparatus according to (18), in which the remarkable point estimation section estimates, as the remarkable point, an intersection point of the surgery region and an extension line obtained by extending a line segment corresponding to the three-dimensional position of the surgical instrument.
(21)
The apparatus according to (18), in which a predetermined mark for indicating the remarkable point is superposed on the image obtained by the imaging section.
(22)
The apparatus according to (19), in which the focus control section controls the focus when an instruction from the practitioner is received, and the instruction from the practitioner is generated by an operation of a foot pedal or a button included in the surgical instrument or utterance of a predetermined word.
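Clause (22) gates the focus action behind an explicit instruction from the practitioner. The sketch below shows one possible dispatcher for such triggers; the event field names and the trigger word are assumptions, not part of the disclosure.

```python
# Minimal sketch (illustrative): only act on the focus when the practitioner
# has asked for it via a foot pedal, an instrument button, or a spoken word.
TRIGGER_WORD = "focus"  # assumed predetermined word

def should_trigger_focus(event: dict) -> bool:
    """Return True when the event represents a valid focus instruction."""
    if event.get("type") == "foot_pedal" and event.get("pressed"):
        return True
    if event.get("type") == "instrument_button" and event.get("pressed"):
        return True
    if event.get("type") == "speech" and event.get("word") == TRIGGER_WORD:
        return True
    return False
```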
(23)
The apparatus according to any one of (17) to (22), in which the generation section generates the depth information from a predetermined area of the image positioned on extension of a tip end portion of the surgical instrument which is detected by the position detection section.
(24)
The apparatus according to (21), in which an extension line is also superposed and displayed on the image, the extension line being obtained by extending a line segment corresponding to the three-dimensional position of the surgical instrument.
(25)
The apparatus according to any one of (17) to (24), in which the position detection section detects the three-dimensional position of the surgical instrument based on information on a linear edge in the image.
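Item (25) relies on straight-edge information. One hypothetical way to recover the instrument shaft as a line segment is a Canny edge map followed by a probabilistic Hough transform, as sketched below; the thresholds are placeholders that would need tuning for real footage.

```python
# Minimal sketch (assumed thresholds): find the instrument shaft as the most
# prominent straight edge using Canny edges and a probabilistic Hough transform.
import cv2
import numpy as np

def detect_shaft_line(image_bgr: np.ndarray):
    """Return ((x1, y1), (x2, y2)) for the longest detected line, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: (l[2] - l[0]) ** 2 + (l[3] - l[1]) ** 2)
    return (int(x1), int(y1)), (int(x2), int(y2))
```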
(26)
The apparatus according to any one of (17) to (25), in which the position detection section detects the three-dimensional position of the surgical instrument based on information on a specific color in a parallax image.
(27)
The apparatus according to any one of (17) to (26), in which the position detection section detects the three-dimensional position of the surgical instrument by detecting a mark marked on the surgical instrument from the image.
(28)
An image processing method including: causing an image processing apparatus to generate depth information of an image from the image obtained by imaging a surgical field which includes a region of a surgical target, and to detect a three-dimensional position of a surgical instrument by using the generated depth information of the image.
(29)
A program for causing a computer to execute a process including: generating depth information of an image from the image obtained by imaging a surgical field which includes a region of a surgical target; and detecting a three-dimensional position of the surgical instrument by using the generated depth information of the image.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
11 Endoscope Camera head
12 CCU
13 Operation section
14 Display
21 Imaging section
24a First imaging sensor
24b Second imaging sensor
41 Depth information generation section
42 Forceps/area position detection section
43 Remarkable point estimation section
44 Image superposition section
45 Operation control section
61 Foot pedal
Q Remarkable point
QA Area of interest
111 Guide line
QB Zoom image
201 CPU
202 ROM
203 RAM
206 Input section
207 Output section
208 Storage section
209 Communication section
210 Drive
Claims (16)
- A medical image processing apparatus, comprising:
a controller including circuitry configured to
determine a position of a distal end of an important object within a medical image,
estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and
control display of the region of interest.
- The medical image processing apparatus according to claim 1, wherein the medical image includes a surgical image of a body.
- The medical image processing apparatus according to claim 1, wherein the region of interest within the medical image is distinct from the region including the important object.
- The medical image processing apparatus according to claim 1, wherein the controller including the circuitry is configured to determine the position of the distal end of the important object within the medical image based on three dimensional position information of the important object.
- The medical image processing apparatus according to claim 1, wherein the controller including the circuitry is configured to control display of the region of interest by displaying one or more of: a zoomed image corresponding to the region of interest, the medical image having focus on the region of interest, and the medical image in which the region of interest is highlighted.
- The medical image processing apparatus according to claim 1, wherein the controller including the circuitry is configured to estimate the region of interest based on a three-dimensional posture of the important object.
- The medical image processing apparatus according to claim 1, wherein the controller including the circuitry is configured to estimate the region of interest by detecting an estimated position of a cross point between an extension from the distal end of the important object and a surface of a body.
- The medical image processing apparatus according to claim 1, wherein the controller including the circuitry is configured to determine the position of the distal end of the important object within the medical image by detecting the position of the distal end of the important object using one or more of color difference information, edge detection techniques, brightness, and depth.
- A method for processing a medical image by a medical image processing apparatus including a controller including circuitry, the method comprising:
determining, using the circuitry, a position of a distal end of an important object within the medical image;
estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object; and
controlling, using the circuitry, display of the region of interest.
- A non-transitory computer readable medium having stored thereon a program that when executed by a computer causes the computer to implement a method for processing a medical image by a medical image apparatus including a controller including circuitry, the method comprising:
determining, using the circuitry, a position of a distal end of an important object within the medical image;
estimating, using the circuitry, a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object; and
controlling, using the circuitry, display of the region of interest.
- A medical image processing system, comprising:
a medical imaging device that obtains a medical image;
a display device having a display area; and
a controller including circuitry configured to
determine a position of a distal end of an important object within the medical image obtained by the medical imaging device,
estimate a region of interest within the medical image adjacent to a region including the important object based on the position of the distal end of the important object, and
control display of the region of interest in the display area of the display device.
- The medical image processing system according to claim 11, wherein the medical imaging device generates both a left and a right medical image corresponding to the medical image using three dimensional imaging.
- The medical image processing system according to claim 12, wherein the controller including the circuitry is configured to determine the position of the distal end of the important object within the medical image by detecting the position of the distal end of the important object using a depth image generated from the left and right medical images.
- The medical image processing system according to claim 11, wherein the important object is a surgical instrument and the medical imaging device is an endoscope or a surgical microscope.
- A medical image processing apparatus, comprising:
a controller including circuitry configured to
determine a position of an important area within a medical image,
estimate a region of interest within the medical image adjacent to a region including the important area based on the position of the important area, and
control display of the region of interest.
- The medical image processing apparatus according to claim 15, wherein the region of interest within the medical image is distinct from the region including the important area.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/306,775 US10827906B2 (en) | 2014-06-04 | 2015-06-01 | Endoscopic surgery image processing apparatus, image processing method, and program |
EP15728186.6A EP3151719B1 (en) | 2014-06-04 | 2015-06-01 | Image processing apparatus and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014115769A JP6323184B2 (en) | 2014-06-04 | 2014-06-04 | Image processing apparatus, image processing method, and program |
JP2014-115769 | 2014-06-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015186335A1 true WO2015186335A1 (en) | 2015-12-10 |
Family
ID=53373516
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/002748 WO2015186335A1 (en) | 2014-06-04 | 2015-06-01 | Image processing apparatus, image processing method, and program |
Country Status (4)
Country | Link |
---|---|
US (1) | US10827906B2 (en) |
EP (1) | EP3151719B1 (en) |
JP (1) | JP6323184B2 (en) |
WO (1) | WO2015186335A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE112015006562T5 (en) * | 2015-06-29 | 2018-03-22 | Olympus Corporation | Image processing apparatus, endoscope system, image processing method and image processing program |
JP6753081B2 (en) | 2016-03-09 | 2020-09-09 | ソニー株式会社 | Endoscopic surgery system, image processing method and medical observation system |
JP2017164007A (en) * | 2016-03-14 | 2017-09-21 | ソニー株式会社 | Medical image processing device, medical image processing method, and program |
US11653853B2 (en) | 2016-11-29 | 2023-05-23 | Biosense Webster (Israel) Ltd. | Visualization of distances to walls of anatomical cavities |
JPWO2018179610A1 (en) * | 2017-03-27 | 2020-02-06 | ソニー・オリンパスメディカルソリューションズ株式会社 | Control device, endoscope system, processing method and program |
EP3756531B1 (en) * | 2018-03-23 | 2023-06-07 | Sony Olympus Medical Solutions Inc. | Medical display control device and display control method |
WO2019187502A1 (en) * | 2018-03-29 | 2019-10-03 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
US11147629B2 (en) * | 2018-06-08 | 2021-10-19 | Acclarent, Inc. | Surgical navigation system with automatically driven endoscope |
US20210345856A1 (en) * | 2018-10-18 | 2021-11-11 | Sony Corporation | Medical observation system, medical observation apparatus, and medical observation method |
WO2020167678A1 (en) * | 2019-02-12 | 2020-08-20 | Intuitive Surgical Operations, Inc. | Systems and methods for facilitating optimization of an imaging device viewpoint during an operating session of a computer-assisted operation system |
US11625834B2 (en) * | 2019-11-08 | 2023-04-11 | Sony Group Corporation | Surgical scene assessment based on computer vision |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080004603A1 (en) * | 2006-06-29 | 2008-01-03 | Intuitive Surgical Inc. | Tool position and identification indicator displayed in a boundary area of a computer display screen |
JP2011139760A (en) | 2010-01-06 | 2011-07-21 | Olympus Medical Systems Corp | Endoscope system |
US20130104066A1 (en) * | 2011-10-19 | 2013-04-25 | Boston Scientific Neuromodulation Corporation | Stimulation leadwire and volume of activation control and display interface |
JP2013258627A (en) * | 2012-06-14 | 2013-12-26 | Olympus Corp | Image processing apparatus and three-dimensional image observation system |
Family Cites Families (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3036287B2 (en) * | 1992-12-15 | 2000-04-24 | 富士ゼロックス株式会社 | Video scene detector |
JPH07154687A (en) * | 1993-11-29 | 1995-06-16 | Olympus Optical Co Ltd | Picture compositing device |
US5649021A (en) * | 1995-06-07 | 1997-07-15 | David Sarnoff Research Center, Inc. | Method and system for object detection for instrument control |
DE19529950C1 (en) * | 1995-08-14 | 1996-11-14 | Deutsche Forsch Luft Raumfahrt | Guiding method for stereo laparoscope in minimal invasive surgery |
JPH11309A (en) * | 1997-06-12 | 1999-01-06 | Hitachi Ltd | Image processor |
JP2006215105A (en) * | 2005-02-01 | 2006-08-17 | Fuji Photo Film Co Ltd | Imaging apparatus |
KR100567391B1 (en) | 2005-02-04 | 2006-04-04 | 국방과학연구소 | Solar simulator using method of combining mercury lamp and halogen lamp |
JP4980625B2 (en) * | 2006-02-21 | 2012-07-18 | 富士フイルム株式会社 | Body cavity observation device |
US7794396B2 (en) * | 2006-11-03 | 2010-09-14 | Stryker Corporation | System and method for the automated zooming of a surgical camera |
WO2008134715A1 (en) * | 2007-04-30 | 2008-11-06 | Mobileye Technologies Ltd. | Rear obstruction detection |
US8264542B2 (en) * | 2007-12-31 | 2012-09-11 | Industrial Technology Research Institute | Methods and systems for image processing in a multiview video system |
JP5410021B2 (en) * | 2008-01-22 | 2014-02-05 | 株式会社日立メディコ | Medical diagnostic imaging equipment |
JP5535725B2 (en) * | 2010-03-31 | 2014-07-02 | 富士フイルム株式会社 | Endoscope observation support system, endoscope observation support device, operation method thereof, and program |
JP5551957B2 (en) * | 2010-03-31 | 2014-07-16 | 富士フイルム株式会社 | Projection image generation apparatus, operation method thereof, and projection image generation program |
JP5380348B2 (en) * | 2010-03-31 | 2014-01-08 | 富士フイルム株式会社 | System, method, apparatus, and program for supporting endoscopic observation |
JP2012075507A (en) * | 2010-09-30 | 2012-04-19 | Panasonic Corp | Surgical camera |
JP2012075508A (en) * | 2010-09-30 | 2012-04-19 | Panasonic Corp | Surgical camera |
WO2012075631A1 (en) * | 2010-12-08 | 2012-06-14 | Industrial Technology Research Institute | Methods for generating stereoscopic views from monoscopic endoscope images and systems using the same |
US8989448B2 (en) * | 2011-03-22 | 2015-03-24 | Morpho, Inc. | Moving object detecting device, moving object detecting method, moving object detection program, moving object tracking device, moving object tracking method, and moving object tracking program |
JP5855358B2 (en) * | 2011-05-27 | 2016-02-09 | オリンパス株式会社 | Endoscope apparatus and method for operating endoscope apparatus |
US9204939B2 (en) * | 2011-08-21 | 2015-12-08 | M.S.T. Medical Surgery Technologies Ltd. | Device and method for assisting laparoscopic surgery—rule based approach |
TWI517828B (en) * | 2012-06-27 | 2016-01-21 | 國立交通大學 | Image tracking system and image tracking method thereof |
EP4186422A1 (en) * | 2012-07-25 | 2023-05-31 | Intuitive Surgical Operations, Inc. | Efficient and interactive bleeding detection in a surgical system |
GB2505926A (en) * | 2012-09-14 | 2014-03-19 | Sony Corp | Display of Depth Information Within a Scene |
JP2014147630A (en) * | 2013-02-04 | 2014-08-21 | Canon Inc | Three-dimensional endoscope apparatus |
JP6265627B2 (en) * | 2013-05-23 | 2018-01-24 | オリンパス株式会社 | Endoscope apparatus and method for operating endoscope apparatus |
WO2015029318A1 (en) * | 2013-08-26 | 2015-03-05 | パナソニックIpマネジメント株式会社 | 3d display device and 3d display method |
JP6458732B2 (en) * | 2013-09-10 | 2019-01-30 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
US20150080652A1 (en) * | 2013-09-18 | 2015-03-19 | Cerner Innovation, Inc. | Lesion detection and image stabilization using portion of field of view |
ES2900181T3 (en) * | 2014-02-27 | 2022-03-16 | Univ Surgical Associates Inc | interactive screen for surgery |
US10334227B2 (en) * | 2014-03-28 | 2019-06-25 | Intuitive Surgical Operations, Inc. | Quantitative three-dimensional imaging of surgical scenes from multiport perspectives |
WO2015149043A1 (en) * | 2014-03-28 | 2015-10-01 | Dorin Panescu | Quantitative three-dimensional imaging and printing of surgical implants |
KR101599129B1 (en) * | 2014-05-20 | 2016-03-02 | 박현준 | Method for Measuring Size of Lesion which is shown by Endoscopy, and Computer Readable Recording Medium |
JP6381313B2 (en) * | 2014-06-20 | 2018-08-29 | キヤノン株式会社 | Control device, control method, and program |
WO2016088187A1 (en) * | 2014-12-02 | 2016-06-09 | オリンパス株式会社 | Focus control device, endoscope device, and control method for focus control device |
WO2016088186A1 (en) * | 2014-12-02 | 2016-06-09 | オリンパス株式会社 | Focus control device, endoscope device, and method for controlling focus control device |
- 2014
  - 2014-06-04 JP JP2014115769A patent/JP6323184B2/en active Active
- 2015
  - 2015-06-01 WO PCT/JP2015/002748 patent/WO2015186335A1/en active Application Filing
  - 2015-06-01 EP EP15728186.6A patent/EP3151719B1/en active Active
  - 2015-06-01 US US15/306,775 patent/US10827906B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080004603A1 (en) * | 2006-06-29 | 2008-01-03 | Intuitive Surgical Inc. | Tool position and identification indicator displayed in a boundary area of a computer display screen |
JP2011139760A (en) | 2010-01-06 | 2011-07-21 | Olympus Medical Systems Corp | Endoscope system |
US20130104066A1 (en) * | 2011-10-19 | 2013-04-25 | Boston Scientific Neuromodulation Corporation | Stimulation leadwire and volume of activation control and display interface |
JP2013258627A (en) * | 2012-06-14 | 2013-12-26 | Olympus Corp | Image processing apparatus and three-dimensional image observation system |
US20150077529A1 (en) * | 2012-06-14 | 2015-03-19 | Olympus Corporation | Image-processing device and three-dimensional-image observation system |
Also Published As
Publication number | Publication date |
---|---|
JP2015228955A (en) | 2015-12-21 |
EP3151719B1 (en) | 2021-02-17 |
EP3151719A1 (en) | 2017-04-12 |
US10827906B2 (en) | 2020-11-10 |
US20170042407A1 (en) | 2017-02-16 |
JP6323184B2 (en) | 2018-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10694933B2 (en) | Image processing apparatus and image processing method for image display including determining position of superimposed zoomed image | |
US10827906B2 (en) | Endoscopic surgery image processing apparatus, image processing method, and program | |
US11835702B2 (en) | Medical image processing apparatus, medical image processing method, and medical observation system | |
CN110099599B (en) | Medical image processing apparatus, medical image processing method, and program | |
JP7226325B2 (en) | Focus detection device and method, and program | |
WO2017159335A1 (en) | Medical image processing device, medical image processing method, and program | |
CN106455958B (en) | Colposcopic device for performing a colposcopic procedure | |
US10609354B2 (en) | Medical image processing device, system, method, and program | |
WO2016088187A1 (en) | Focus control device, endoscope device, and control method for focus control device | |
US11141050B2 (en) | Autofocus control device, endoscope apparatus, and operation method of autofocus control device | |
US20210019921A1 (en) | Image processing device, image processing method, and program | |
US11426052B2 (en) | Endoscopic system | |
US10429632B2 (en) | Microscopy system, microscopy method, and computer-readable recording medium | |
JP2014175965A (en) | Camera for surgical operation | |
JP6860378B2 (en) | Endoscope device | |
JP5792401B2 (en) | Autofocus device | |
JP6996883B2 (en) | Medical observation device | |
US20220142454A1 (en) | Image processing system, image processing device, and image processing method | |
JP7207296B2 (en) | IMAGING DEVICE, FOCUS CONTROL METHOD, AND FOCUS DETERMINATION METHOD |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15728186; Country of ref document: EP; Kind code of ref document: A1 |
REEP | Request for entry into the european phase | Ref document number: 2015728186; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2015728186; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 15306775; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |