
WO2011040769A2 - Surgical image processing device, image-processing method, laparoscopic manipulation method, surgical robot system and an operation-limiting method therefor - Google Patents

Surgical image processing device, image-processing method, laparoscopic manipulation method, surgical robot system and an operation-limiting method therefor

Info

Publication number
WO2011040769A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
surgical
endoscope
unit
modeling
Prior art date
Application number
PCT/KR2010/006662
Other languages
French (fr)
Korean (ko)
Other versions
WO2011040769A3 (en)
Inventor
최승욱
이민규
원종석
민동명
Original Assignee
주식회사 이턴
Priority date
Filing date
Publication date
Priority claimed from KR1020090094124A (KR101598774B1)
Priority claimed from KR1020090114651A (KR101683057B1)
Application filed by 주식회사 이턴
Publication of WO2011040769A2
Publication of WO2011040769A3

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A61B34/37 Master-slave robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/25 User interfaces for surgical systems

Definitions

  • The present invention relates to a surgical image processing apparatus and method, a surgical robot system and an operation-limiting method therefor, a surgical robot system and a laparoscope manipulation method, and an immersive surgical image processing apparatus and method.
  • A surgical robot is a robot capable of substituting for the surgical actions performed by a surgeon. Compared with a human operator, such a robot offers the advantages of more accurate and precise motion and of enabling remote surgery.
  • Surgical robots currently under development worldwide include bone surgery robots, laparoscopic surgery robots, and stereotactic surgery robots.
  • A laparoscopic surgery robot performs minimally invasive surgery using a laparoscope and small surgical tools.
  • Laparoscopic surgery is an advanced surgical technique in which a hole about 1 cm wide is made near the navel and a laparoscope, an endoscope for viewing the inside of the abdomen, is inserted before the operation proceeds.
  • Recent laparoscopes are equipped with computer chips that produce clearer, more magnified images than the naked eye can obtain, and they have developed to the point where almost any operation can be performed with specially designed laparoscopic surgical instruments while viewing the image on a monitor.
  • Laparoscopic surgery covers the same range of procedures as laparotomy, but it causes fewer complications, allows treatment to resume much sooner after the procedure, and better preserves the patient's stamina and immune function.
  • For these reasons, laparoscopic surgery is increasingly recognized as the standard treatment for colorectal cancer in the United States and Europe.
  • A surgical robot system generally consists of a master robot and a slave robot.
  • When the operator manipulates a manipulator (for example, a handle) provided on the master robot, a surgical tool coupled to or held by the robot arm of the slave robot is actuated to perform the surgery.
  • The master robot and the slave robot are coupled through a communication network and communicate over it.
  • The surgical image captured by the laparoscope is output to the operator, who performs the operation while watching it; however, the procedure feels somewhat less real than surgery performed directly by hand.
  • One cause of this problem is that even when the laparoscope moves and rotates inside the abdominal cavity to view other parts of the body, the image is always output at the same position and size on the monitor, so the relative distance and motion between the manipulator and the image can differ from the relative distance and motion between the surgical tool and the organ.
  • In addition, since the image captured by the laparoscope shows only parts of the surgical tools, the operator may find manipulation difficult when the tools collide with or overlap each other, or may be unable to proceed smoothly because the view is obscured.
  • A surgical robot system is equipped with a plurality of surgical instruments, and each instrument performs the motions required for the operation according to the operator's manipulation of the system.
  • Instruments are controlled by the operator during surgery, but instruments that are not currently being operated, especially those not visible on the operator's screen (i.e., located outside the visible area), may perform unnecessary motions contrary to the operator's intent. Such an instrument may collide with the robot body or an adjacent instrument, and the possibility of damaging or injuring the blood vessels or tissues of the patient under surgery cannot be excluded.
  • Furthermore, in a conventional surgical robot system the operator must perform a separate manipulation to move the laparoscope to a desired position or to adjust its image input angle in order to obtain an image of the surgical site. That is, the operator must separately input laparoscope control commands with the hands or feet during the surgical procedure.
  • The background art described above is technical information that the inventors possessed for deriving the present invention or acquired in the course of deriving it, and is not necessarily publicly known art disclosed to the general public before the filing of the present application.
  • The present invention provides a surgical image processing apparatus and method that enable a smooth operation by presenting a real image and a modeling image together during surgery.
  • The present invention provides a surgical image processing apparatus and method that allow the operator to operate with reference to images of neighboring sites and of the patient's external environment, rather than only the surgical site as in the conventional method.
  • The present invention provides a surgical image processing apparatus and method that detect collisions between surgical instruments in advance, so that the operator can recognize and avoid an impending collision and perform the operation smoothly.
  • The present invention provides a surgical image processing apparatus and method with an interface through which the operator can select the type of image displayed on the monitor, so that the surgical image can be used more conveniently.
  • The present invention provides a surgical robot system and an operation-limiting method therefor that allow an instrument to be controlled only in the manner the operator intends for a normal operation.
  • The present invention provides a surgical robot system and a laparoscope manipulation method that allow the operator to control the position and image input angle of the laparoscope merely by the action of looking toward the desired surgical site.
  • The present invention provides a surgical robot system and a laparoscope manipulation method that free the operator from any separate manipulation of the laparoscope, allowing full concentration on the surgical actions.
  • The present invention provides an immersive surgical image processing apparatus and method that change the output position of the endoscope image on the monitor as the viewpoint of the endoscope changes with the movement of the surgical endoscope, so that the user experiences the actual surgical situation more realistically.
  • The present invention provides an immersive surgical image processing apparatus and method that extract a previously stored endoscope image at the current point in time and output it to the screen display unit together with the current endoscope image, thereby informing the user about changes in the endoscope image.
  • The present invention provides an immersive surgical image processing apparatus and method that match the endoscope image actually captured during surgery with a modeling image of the surgical tools generated in advance and stored in an image storage unit, adjusting their alignment and size as needed, and output the result to a monitor the user can observe.
  • The present invention provides an immersive surgical image processing apparatus and method that rotate and move the monitor according to the changing viewpoint of the endoscope, so that the user experiences the operation more vividly.
  • According to one aspect, a surgical image processing apparatus includes an image input unit that receives an endoscope image provided from a surgical endoscope, an image storage unit that stores a modeling image of a surgical tool operating on the surgical target captured by the surgical endoscope, an image matching unit that matches the endoscope image and the modeling image with each other to generate an output image, and a screen display unit that outputs the output image including the endoscope image and the modeling image.
  • The surgical endoscope may be at least one of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, and cardioscope.
  • The image matching unit may generate the output image by matching the actual surgical tool image included in the endoscope image with the modeling surgical tool image included in the modeling image.
  • The image matching unit may include a characteristic value calculating unit that calculates a characteristic value using at least one of the endoscope image and the position coordinate information of an actual surgical tool coupled to at least one robot arm, and a modeling image implementation unit that implements a modeling image corresponding to the characteristic value calculated by the characteristic value calculating unit.
  • The apparatus may further include an overlapping image processor that removes, from the modeling surgical tool image, the region where it overlaps the actual surgical tool image.
  • The position of the modeling surgical tool image in the modeling image may be set using the operation information of the surgical tool, and the screen display unit may output the endoscope image to an arbitrary region of the output image and the modeling image to the surrounding region, as in the sketch below.
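As an illustration only, and not the claimed implementation, the following minimal sketch shows how an output image could be composed by placing the live endoscope frame in an arbitrary region of a larger pre-rendered modeling image, with modeled content inside that region removed (corresponding to deleting the overlap between the modeled and actual tool images). All names and array conventions are assumptions.

```python
import numpy as np

def compose_output(endo_img, model_img, top_left):
    """Overlay the live endoscope frame onto the larger modeling image.

    endo_img:  (h, w, 3) uint8 live endoscope frame
    model_img: (H, W, 3) uint8 pre-rendered modeling image (H >= h, W >= w)
    top_left:  (row, col) where the endoscope frame is placed
    """
    out = model_img.copy()
    r, c = top_left
    h, w = endo_img.shape[:2]
    # The live image takes priority: modeled content inside this region
    # (e.g., the modeled tool tip) is overwritten, which plays the role of
    # removing the overlap between the modeled and actual tool images.
    out[r:r + h, c:c + w] = endo_img
    return out

# Illustrative usage with stand-in frames
endo = np.full((240, 320, 3), 128, dtype=np.uint8)   # dummy live frame
model = np.zeros((480, 640, 3), dtype=np.uint8)      # dummy modeled scene
output = compose_output(endo, model, (120, 160))     # centered placement
```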
  • The modeling image may be an image of the surgical tool captured at a specific point in time before the start of the operation.
  • The embodiment may further include a camera that photographs the outside of the surgical target during surgery to generate a camera image.
  • The image matching unit may generate the output image by matching the endoscope image, the modeling image, the camera image, and combinations thereof, and at least one of the endoscope image, the modeling image, and the camera image may be a 3D image.
  • The embodiment may further include a mode selection unit for selecting any combination of two or more of the endoscope image, the modeling image, and the camera image, and a collision detection unit for detecting whether the modeling surgical tool images included in the modeling image collide with each other.
  • The embodiment may further include a warning information output unit that outputs warning information when the collision detection unit detects a collision between modeling surgical tool images and generates a collision detection signal.
  • When the collision detection unit detects a collision between modeling surgical tool images, it may perform one or more of force feedback processing and limiting the operation of the arm operation unit that controls the robot arm on which the surgical tool is mounted; a sketch of such a check follows.
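As a rough illustration of the collision detection just described, the sketch below treats each modeled tool tip as a bounding sphere and reports a collision when two spheres intersect. The spherical model, thresholds, and all names are assumptions for illustration, not the patent's method.

```python
import numpy as np

def detect_tool_collisions(tool_positions, radii):
    """Return index pairs of modeled tools whose bounding spheres intersect.

    tool_positions: (n, 3) array of modeled tool-tip coordinates
    radii:          (n,) array of bounding-sphere radii per tool
    """
    collisions = []
    n = len(tool_positions)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(tool_positions[i] - tool_positions[j])
            if dist < radii[i] + radii[j]:
                collisions.append((i, j))
    return collisions

positions = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [5.0, 5.0, 5.0]])
radii = np.array([0.4, 0.3, 0.4])
for i, j in detect_tool_collisions(positions, radii):
    # In the described system this would trigger warning output,
    # force feedback, or limiting of the arm operation unit.
    print(f"warning: modeled tools {i} and {j} are about to collide")
```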
  • The surgical image processing apparatus may be included in an interface mounted on a master robot that controls a slave robot including a robot arm, and the modeling image may further include a model of an image obtained from one or more of CT, MR, PET, SPECT, and US imaging equipment.
  • According to another aspect, a surgical image processing method includes receiving an endoscope image provided from a surgical endoscope, generating an output image by matching the endoscope image and a modeling image with each other, and outputting the output image including the endoscope image and the modeling image.
  • The surgical endoscope may be one or more of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, and cardioscope, and the output image generating step may generate the output image by matching the actual surgical tool image included in the endoscope image with the modeling surgical tool image included in the modeling image.
  • The generating of the output image may include calculating a characteristic value using at least one of the endoscope image and the position coordinate information of an actual surgical tool coupled to at least one robot arm, and implementing a modeling image corresponding to the calculated characteristic value.
  • The output image generating step may further include removing, from the modeling surgical tool image, the region where it overlaps the actual surgical tool image.
  • In the output image generating step, the position of the modeling surgical tool image in the modeling image may be set using the operation information of the surgical tool.
  • The endoscope image may be output to an arbitrary region of the output image, and the modeling image may be output to the surrounding region of the output image.
  • The modeling image may be generated from an image of the surgical tool captured at a specific point in time before the start of surgery.
  • The method may further include generating a camera image by photographing the outside of the surgical target with a camera during the operation; in this case, the output image generating step matches the endoscope image, the modeling image, the camera image, and combinations thereof to generate the output image.
  • At least one of the endoscope image, the modeling image, and the camera image may be a 3D image.
  • The method may further include selecting any combination of two or more of the endoscope image, the modeling image, and the camera image, and detecting whether the modeling surgical tool images included in the modeling image collide with each other.
  • The collision detecting step may further include outputting warning information upon detecting a collision between modeling surgical tool images and generating a collision detection signal.
  • The collision detecting step may further include, upon detecting a collision between modeling surgical tool images, performing one or more of force feedback processing and limiting the operation of the arm operation unit that controls the robot arm on which the surgical tool is mounted.
  • The generating and storing of the modeling image may further include modeling an image of the surgical target obtained from at least one of CT, MR, PET, SPECT, and US imaging equipment.
  • There is also provided a recording medium readable by a digital processing apparatus, on which a program of instructions executable by the digital processing apparatus is tangibly embodied to perform the surgical image processing method described above.
  • According to another aspect, a surgical robot includes a restricted area setting unit that receives restricted area setting information for an area in which the operation of a controlled object is restricted and generates and stores restricted area coordinate information, an arm operation unit that receives operation information for manipulating the controlled object, and an operation determination unit that determines, with reference to the displacement information of the controlled object according to the operation information, whether the outer surface of the controlled object is in contact with the restricted area coordinate information, the controlled object being at least one of a robot arm, an instrument, and an endoscope.
  • A controller may control the controlled object so that its operation is restricted, and may control reaction information to be output accordingly.
  • The reaction information may be one or more of tactile information, visual information, and auditory information.
  • When the outer surface of the controlled object is in contact with the restricted area coordinate information, the controller may determine whether to restrict the controlled object, and control the controlled object to be restricted only when restriction is determined.
  • The controller may allow the manipulation of the controlled object when a command to ignore the restricted area setting is input; a sketch of this decision appears below.
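Below is a minimal sketch, under assumed names and a simplified spherical surface model, of the operation-limiting decision just described: a commanded displacement is rejected when it would bring the controlled object's outer surface into the restricted region, unless an ignore command is active. It is illustrative only, not the claimed implementation.

```python
import numpy as np

def allow_motion(position, displacement, radius, restricted_zones, ignore=False):
    """Decide whether a commanded displacement may be executed.

    position:         (3,) current center of the controlled object
    displacement:     (3,) commanded displacement from the arm operation unit
    radius:           scalar approximating the object's outer surface
    restricted_zones: list of (center (3,), zone_radius) spherical regions
    ignore:           True if a restricted-area-ignore command was input
    """
    if ignore:
        return True  # operator explicitly overrode the restriction
    new_pos = position + displacement
    for center, zone_radius in restricted_zones:
        # Contact occurs when the object's surface reaches the zone boundary.
        if np.linalg.norm(new_pos - center) <= zone_radius + radius:
            return False  # restrict operation; reaction info would be output
    return True

zones = [(np.array([10.0, 0.0, 0.0]), 2.0)]
print(allow_motion(np.zeros(3), np.array([9.0, 0.0, 0.0]), 0.5, zones))  # False
print(allow_motion(np.zeros(3), np.array([5.0, 0.0, 0.0]), 0.5, zones))  # True
```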
  • The restricted area coordinate information may be coordinate range information corresponding to a closed curve input on a touch-sensitive screen on which a video image is displayed.
  • The video image may be one or more of an image obtained by an endoscope, a CT image, an MRI image, and a human-body modeling image.
  • The restricted area coordinate information may be coordinate range information of an area that is not displayed in the video image obtained by the endoscope.
  • The endoscope may be one or more of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, and cardioscope.
  • The restricted area coordinate information may be coordinate range information of a figure formed by the positions designated as restricted area setting information among the positions to which the controlled object is moved by operation information of the arm operation unit.
  • The controlled object is manipulated in three-dimensional space, and the designated positions may each be points designated to form the outline or outer surface of a figure in three-dimensional space.
  • The surgical robot may further include an arm manipulation setting unit that generates and stores manipulation setting information on whether manipulation of each controlled object by the arm operation unit is allowed.
  • The manipulation determination unit may further determine whether the manipulation information for a given controlled object is valid manipulation information.
  • According to another aspect, an operation-limiting method for a surgical robot includes receiving operation information for manipulating a controlled object, determining, with reference to the displacement information of the controlled object according to the operation information, whether the outer surface of the controlled object is in contact with restricted area coordinate information, and controlling the controlled object so that its operation is restricted when its outer surface is in contact with the restricted area coordinate information, the controlled object being one or more of a robot arm, an instrument, and an endoscope.
  • The controlling may include determining whether to restrict the controlled object when its outer surface is in contact with the restricted area coordinate information, and controlling the controlled object to be restricted when restriction is determined.
  • In the determining step, it may be determined that the controlled object is not to be restricted when a command to ignore the restricted area setting has been input.
  • The controlling may include controlling reaction information to be output when the outer surface of the controlled object is in contact with the restricted area coordinate information.
  • The operation-limiting method may further include generating and storing manipulation setting information on whether manipulation of each controlled object by the arm operation unit is allowed, and determining whether the operation information for a given controlled object is valid operation information.
  • The method may further include controlling reaction information to be output.
  • By inputting a point designation command for two or more positions defining a connected boundary line and a segmentation command, the region image can be divided into two or more individual regions.
  • By inputting a point designation command for two or more positions defining a connected boundary line and a folding command, one or more individual regions of the region image can be displayed rotated and folded over the other individual regions.
  • The folded individual regions may be displayed rotated and moved back to their original positions by input of a restoration command.
  • One or more individual regions may be erased by inputting a point designation command for two or more positions defining a connected boundary line and a delete command.
  • The outline and outer surface of the region image are recognized as a series of points, and the region image can be deformed by a point designation and movement command for any one of those points.
  • The coordinate range extracted to correspond to the region image may be set as a restricted area or an allowed area; a point-in-region sketch follows.
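The sketch below illustrates, with assumed names, how a coordinate range corresponding to a 2D region image could serve as a restricted or allowed area: a standard ray-casting test decides whether a queried position falls inside the polygon the operator traced. It is a generic technique shown for illustration, not the patent's stated algorithm.

```python
def point_in_region(point, polygon):
    """Ray-casting test: is `point` inside the closed 2D `polygon`?

    point:   (x, y) tuple
    polygon: list of (x, y) vertices tracing the operator-drawn boundary
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

restricted = [(0, 0), (4, 0), (4, 3), (0, 3)]     # traced boundary
print(point_in_region((2, 1), restricted))         # True -> motion restricted
print(point_in_region((6, 1), restricted))         # False -> motion allowed
```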
  • An operation determination unit may further be included that determines, with reference to the displacement information of the controlled object according to the operation information, whether the outer surface of the controlled object is in contact with the coordinate range.
  • The controller may control the controlled object so that its operation is restricted, and may control reaction information to be output accordingly.
  • An arm manipulation setting unit may further be included that generates and stores manipulation setting information on whether manipulation of each controlled object by the arm operation unit is allowed, and the operation determination unit may further determine whether the operation information for a given controlled object is valid operation information.
  • According to another aspect, a region setting method for a surgical robot includes displaying a video image obtained and provided by an endoscope, receiving operation information for manipulating a controlled object, and controlling a region image, corresponding to a region containing the positions at which a point designation command was input among the positions to which the controlled object is moved, to be displayed overlaid on the video image, wherein the controlled object includes at least one of a robot arm, an instrument, and an endoscope, and the coordinate range extracted to correspond to the region image is set as a restricted or allowed area.
  • According to another aspect, a surgical robot includes a display unit that displays a video image obtained and provided by an endoscope, a storage unit that stores human-body modeling information and the corresponding modeling data, and a control unit that extracts the modeling data corresponding to organ selection information for one or more organs and controls the display so that a region image corresponding to the modeling data is overlaid on the video image, the modeling data including at least one of the color, shape, and size of the organ.
  • The human-body modeling information and modeling data may be generated using a reference image that is one or more of a CT image and an MRI image.
  • The organ selection information may be designated as the organ whose modeling data corresponds to one or more of the shape and color of the organ recognized by image recognition of the video image.
  • The arm operation unit may further receive operation information for manipulating a controlled object including one or more of a robot arm, an instrument, and an endoscope, and when the operation information produces a change in the position or shape of an organ, the region image may be deformed to match the outline of the organ.
  • According to another aspect, a region setting method for a surgical robot includes storing human-body modeling information and corresponding modeling data, displaying a video image obtained and provided by an endoscope, extracting the modeling data corresponding to organ selection information for one or more organs, and controlling a region image corresponding to the modeling data to be overlaid and displayed on the video image, the modeling data including one or more pieces of information on the color, shape, and size of the organ; a sketch of such an overlay follows.
  • According to another aspect, there is provided a surgical robot that controls at least one of the position and the image input angle of a vision unit using a manipulation signal, including a contact part that moves in a direction and by an amount corresponding to the direction and amount of movement of the operator's face in contact with it, a motion detector that outputs sensing information corresponding to the direction and amount of movement of the contact part, and an operation command generation unit that generates and outputs an operation command for at least one of the position and the image input angle of the vision unit using the sensing information.
  • The direction of the operation handle of the surgical robot may be changed and operated accordingly.
  • The contact part may be formed as part of a console panel of the surgical robot.
  • A support may be formed to protrude at one or more points of the contact part to fix the position of the operator's face.
  • An eyepiece may be perforated in the contact part so that the image obtained by the vision unit is shown as visual information.
  • The contact part may be formed of a light-transmitting material so that the image obtained by the vision unit is shown as visual information.
  • The surgical robot may further include a contact sensing unit that detects whether the operator's face is in contact with the contact part or the support, and an original state recovery unit that, when release of contact is recognized through the sensing of the contact sensing unit, processes the contact part to return to a reference state, which is the position and state designated as default.
  • The original state recovery unit may return the contact part to the reference state by reverse manipulation of the movement direction and amount of the contact part according to the sensing information.
  • The surgical robot may further include a camera unit that captures images toward the contact part from inside the surgical robot to generate image data, a storage unit that stores the generated image data, and an eye tracker unit that compares the sequentially generated image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and the gaze direction.
  • The operation command generation unit may determine whether the analysis information satisfies a change preset as a given operation command, and output the corresponding operation command when it does.
  • The vision unit may be either a microscope or an endoscope, and the endoscope may be one or more of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, and cardioscope.
  • The contact part may be formed on the front surface of the console panel and supported by an elastic body, the elastic body providing a restoring force that returns the contact part to its original position when the external force moving it is removed; a sketch of the command mapping follows.
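The following sketch, under assumed names and an assumed linear gain, shows the kind of mapping the motion detector and operation command generation unit could perform: the sensed displacement of the contact part is turned into a pan/tilt command for the vision unit, and the accumulated displacement is negated to return the contact part to its reference state on release (the "reverse manipulation" described above).

```python
from dataclasses import dataclass

@dataclass
class VisionCommand:
    pan: float   # horizontal image-input-angle change (degrees)
    tilt: float  # vertical image-input-angle change (degrees)

GAIN = 0.5  # assumed degrees of angle change per mm of contact-part motion

class ContactPartController:
    def __init__(self):
        self.accumulated = [0.0, 0.0]  # net contact-part displacement (mm)

    def on_motion(self, dx_mm, dy_mm):
        """Convert sensed contact-part motion into a vision-unit command."""
        self.accumulated[0] += dx_mm
        self.accumulated[1] += dy_mm
        return VisionCommand(pan=GAIN * dx_mm, tilt=GAIN * dy_mm)

    def on_contact_released(self):
        """Reverse manipulation: drive the contact part back to its
        reference state by negating the accumulated displacement."""
        dx, dy = self.accumulated
        self.accumulated = [0.0, 0.0]
        return (-dx, -dy)

ctrl = ContactPartController()
print(ctrl.on_motion(4.0, -2.0))    # face moved -> pan/tilt command
print(ctrl.on_contact_released())   # (-4.0, 2.0): return to reference state
```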
  • According to another aspect, there is provided a surgical robot that controls at least one of the position and the image input angle of a vision unit using a manipulation signal, including an eyepiece that provides the image obtained by the vision unit as visual information, an eye tracker unit that generates analysis information interpreting one or more of a change in pupil position, a change in eye shape, and the gaze direction seen through the eyepiece, and an operation command generation unit that determines whether the analysis information satisfies a change preset as a given operation command and, when it does, outputs an operation command for manipulating the vision unit.
  • The eye tracker unit may include a camera unit that images the eyepiece from inside the surgical robot to generate image data, a storage unit that stores the generated image data, and an analysis unit that compares the sequentially generated image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and the gaze direction; a naive pupil-tracking sketch follows.
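Below is a deliberately naive pupil-tracking sketch using only numpy: the pupil is approximated as the centroid of the darkest pixels in each frame, and the frame-to-frame centroid shift is reported as gaze movement. A real eye tracker would be far more robust; every threshold and name here is an assumption for illustration.

```python
import numpy as np

def pupil_center(gray, threshold=40):
    """Centroid of pixels darker than `threshold` (assumed to be the pupil)."""
    ys, xs = np.nonzero(gray < threshold)
    if len(xs) == 0:
        return None
    return (xs.mean(), ys.mean())

def gaze_shift(prev_gray, cur_gray):
    """Pupil-position change between two consecutive eyepiece frames."""
    p0, p1 = pupil_center(prev_gray), pupil_center(cur_gray)
    if p0 is None or p1 is None:
        return None
    return (p1[0] - p0[0], p1[1] - p0[1])

# Two synthetic frames with a dark "pupil" blob that moves 12 px to the right
f0 = np.full((120, 160), 200, dtype=np.uint8); f0[50:70, 60:80] = 10
f1 = np.full((120, 160), 200, dtype=np.uint8); f1[50:70, 72:92] = 10
dx, dy = gaze_shift(f0, f1)
if abs(dx) > 5:  # preset change corresponding to a pan command
    print(f"issue pan command, gaze moved {dx:.0f} px horizontally")
```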
  • The eyepiece may be perforated in a contact part formed as part of the console panel of the surgical robot.
  • According to another aspect, there is provided a method by which a surgical robot controls one or more of the position and the image input angle of a vision unit, including outputting sensing information corresponding to the direction and amount of movement of a contact part, and generating and outputting an operation command for at least one of the position and the image input angle of the vision unit using the sensing information, wherein the contact part is formed as part of the console panel of the surgical robot and moves in a direction and by an amount corresponding to the direction and amount of movement of the operator's face.
  • The vision unit manipulation method may further include determining whether the operator's face is in contact with the contact part and, if so, controlling output of the sensing information to start.
  • The method may further include determining, when contact is released, whether the contact part is in the reference state, which is the position and state designated as default, and processing the contact part to return to the reference state when it is not.
  • The return to the reference state may be performed by reverse manipulation of the movement direction and amount of the contact part according to the sensing information.
  • The vision unit manipulation method may further include generating and storing image data captured from inside the surgical robot toward the contact part, and comparing the stored image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and the gaze direction.
  • The method may further include determining whether the analysis information satisfies a change preset as a given operation command and, if so, outputting the corresponding preset operation command.
  • The contact part may be formed on the front surface of the console panel and supported by an elastic body, the elastic body providing a restoring force that returns the contact part to its original position when the external force moving it is removed.
  • According to another aspect, there is provided a surgical robot that controls at least one of the position and the image input angle of a vision unit using a manipulation signal, including a contact part that provides the image obtained by the vision unit as visual information, an analysis processor that generates analysis information interpreting the movement of the face seen through the contact part, and an operation command generation unit that determines whether the analysis information satisfies a change preset as a given operation command and, if so, outputs an operation command for manipulating the vision unit.
  • The analysis processor may include a camera unit that generates image data by imaging toward the contact part from inside the surgical robot, a storage unit that stores the generated image data, and an analysis unit that generates the analysis information on the movement of the face by comparing changes in the positions of predetermined feature points across the sequentially generated image data in chronological order.
  • The contact part may be formed as part of the console panel of the surgical robot and may be formed of a light-transmitting material so that the image obtained by the vision unit is viewed as visual information.
  • According to another aspect, there is provided a method by which a surgical robot controls one or more of the position and the image input angle of a vision unit, including generating analysis information interpreting one or more of a change in pupil position, a change in eye shape, and the gaze direction seen through an eyepiece, determining whether the analysis information satisfies a change preset as a given operation command, and, if so, generating and outputting an operation command for manipulating at least one of the position and the image input angle of the vision unit.
  • The generating of the analysis information may include generating and storing image data captured from inside the surgical robot toward the eyepiece, and comparing the stored image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and the gaze direction.
  • According to another aspect, an immersive surgical image processing apparatus includes an image input unit that receives an endoscope image provided from a surgical endoscope, a screen display unit that outputs the endoscope image in a specific region, and a screen display control unit that changes the specific region of the screen display unit in which the endoscope image is output in correspondence with the viewpoint of the surgical endoscope.
  • The surgical endoscope may be one or more of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, and cardioscope, and may be a stereoscopic endoscope.
  • The screen display control unit may include an endoscope viewpoint tracking unit that tracks the viewpoint information of the surgical endoscope corresponding to its movement and rotation, an image movement information extraction unit that extracts the movement information of the endoscope image using the viewpoint information of the surgical endoscope, and an image position setting unit that sets, using the movement information, the specific region of the screen display unit in which the endoscope image is output.
  • The screen display control unit may move the center point of the endoscope image in correspondence with the coordinate change value of the viewpoint of the surgical endoscope; a sketch follows.
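A minimal sketch of the viewpoint-to-screen mapping described above, with an assumed linear scale: the change in the endoscope's viewpoint coordinates shifts the center of the on-screen region where the live image is drawn, clamped so the image stays within the display. All parameters are illustrative assumptions.

```python
def new_image_center(center, viewpoint_delta, scale, screen_size, image_size):
    """Shift the on-screen center of the endoscope image by the scaled
    change in the endoscope viewpoint, keeping the image fully on screen.

    center:          (cx, cy) current on-screen center in pixels
    viewpoint_delta: (dx, dy) endoscope viewpoint change (e.g., mm)
    scale:           assumed pixels of screen shift per unit of viewpoint change
    """
    sw, sh = screen_size
    iw, ih = image_size
    cx = center[0] + scale * viewpoint_delta[0]
    cy = center[1] + scale * viewpoint_delta[1]
    # Clamp so the image region stays within the screen display unit.
    cx = min(max(cx, iw / 2), sw - iw / 2)
    cy = min(max(cy, ih / 2), sh - ih / 2)
    return (cx, cy)

print(new_image_center((960, 540), (12.0, -5.0), 8.0, (1920, 1080), (640, 480)))
```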
  • According to another aspect, an immersive surgical image processing apparatus includes an image input unit that receives a first endoscope image and a second endoscope image provided at different points in time from the surgical endoscope, a screen display unit that outputs the first endoscope image and the second endoscope image to different regions, an image storage unit that stores the first endoscope image and the second endoscope image, and a screen display control unit that controls the screen display unit so that the first and second endoscope images are output to different regions corresponding to the different viewpoints of the surgical endoscope.
  • The image input unit may receive the first endoscope image before the second endoscope image, and the screen display unit may output one or more of the saturation, brightness, color, and screen pattern of the first endoscope image differently from those of the second endoscope image.
  • The screen display control unit may further include a stored image display unit that extracts the first endoscope image stored in the image storage unit and outputs it to the screen display unit while the screen display unit outputs the second endoscope image received in real time; a frame-store sketch follows.
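The sketch below illustrates one plausible way, not the patent's stated one, to keep recent endoscope frames so an earlier-viewpoint frame can be shown beside the live frame: a fixed-size ring buffer keyed by capture time. The class, capacity, and lookup policy are all assumptions.

```python
from collections import deque

class FrameStore:
    """Keep the most recent endoscope frames with their viewpoints."""

    def __init__(self, capacity=300):
        self.frames = deque(maxlen=capacity)  # (timestamp, viewpoint, frame)

    def add(self, timestamp, viewpoint, frame):
        self.frames.append((timestamp, viewpoint, frame))

    def frame_before(self, seconds_ago, now):
        """Return the newest stored frame at least `seconds_ago` old,
        to be displayed (e.g., dimmed) beside the live frame."""
        for ts, vp, fr in reversed(self.frames):
            if now - ts >= seconds_ago:
                return vp, fr
        return None

store = FrameStore()
for t in range(10):                  # simulate 10 seconds of capture
    store.add(t, viewpoint=(t, 0.0), frame=f"frame-{t}")
print(store.frame_before(3, now=9))  # -> ((6, 0.0), 'frame-6')
```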
  • According to another aspect, an immersive surgical image processing apparatus includes an image input unit that receives an endoscope image provided from a surgical endoscope, a screen display unit that outputs the endoscope image to a specific region, an image storage unit that stores a modeling image of a surgical tool operating on the surgical target captured by the surgical endoscope, an image matching unit that matches the endoscope image and the modeling image to generate an output image, and a screen display control unit that changes the specific region of the screen display unit in which the endoscope image is output in correspondence with the surgical endoscope and outputs the matched endoscope image and modeling image to the screen display unit.
  • The image matching unit may generate the output image by matching the actual surgical tool image included in the endoscope image with the modeling surgical tool image included in the modeling image.
  • The image matching unit may further include a characteristic value calculator that calculates a characteristic value using at least one of the endoscope image and the position coordinate information of an actual surgical tool coupled to at least one robot arm, and a modeling image implementation unit that implements a modeling image corresponding to the characteristic value calculated by the characteristic value calculator.
  • The image matching unit may further include an overlapping image processor that removes, from the modeling surgical tool image, the region where it overlaps the actual surgical tool image, and the position of the modeling surgical tool image output on the modeling image may be set using the operation information of the surgical instrument.
  • According to another aspect, an immersive surgical image processing apparatus includes an image input unit that receives an endoscope image provided from a surgical endoscope, a screen display unit that outputs the endoscope image, a screen driving unit that rotates and moves the screen display unit, and a screen driving control unit that controls the screen driving unit so that it rotates and moves the screen display unit in correspondence with the surgical endoscope.
  • The screen driving control unit may include an endoscope viewpoint tracking unit that tracks the viewpoint information of the surgical endoscope corresponding to its movement and rotation, an image movement information extraction unit that extracts the movement information of the endoscope image using the viewpoint information of the surgical endoscope, and a driving information generation unit that generates screen driving information for the screen display unit using the movement information.
  • The screen display unit may include a dome-shaped screen and a projector that projects the endoscope image onto the dome-shaped screen; a sketch of the driving-information step follows.
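Here is a small sketch, with assumed axes, gains, and mechanical limits, of how a driving information generation unit could translate endoscope viewpoint changes into pan/tilt targets for a motorized screen display unit. It is illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ScreenDrive:
    pan_deg: float   # rotation of the monitor about its vertical axis
    tilt_deg: float  # rotation about its horizontal axis

PAN_GAIN = 1.0   # assumed monitor degrees per degree of endoscope yaw
TILT_GAIN = 1.0  # assumed monitor degrees per degree of endoscope pitch
PAN_LIMIT, TILT_LIMIT = 45.0, 30.0  # assumed limits of the screen mount

def screen_drive_from_viewpoint(yaw_delta_deg, pitch_delta_deg):
    """Map an endoscope viewpoint change to bounded screen driving info."""
    pan = max(-PAN_LIMIT, min(PAN_LIMIT, PAN_GAIN * yaw_delta_deg))
    tilt = max(-TILT_LIMIT, min(TILT_LIMIT, TILT_GAIN * pitch_delta_deg))
    return ScreenDrive(pan_deg=pan, tilt_deg=tilt)

print(screen_drive_from_viewpoint(10.0, -60.0))  # tilt clamped to -30 degrees
```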
  • According to another aspect, an immersive surgical image processing method includes changing the specific region of the screen display unit in which the endoscope image is output in correspondence with the viewpoint of the surgical endoscope.
  • The surgical endoscope may be one or more of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, and cardioscope, and may be a stereoscopic endoscope.
  • The changing of the specific region of the screen display unit may include tracking the viewpoint information of the surgical endoscope corresponding to its movement and rotation, extracting the movement information of the endoscope image using the viewpoint information, and setting, using the movement information, the specific region of the screen display unit in which the endoscope image is output.
  • The changing of the specific region of the screen display unit may include moving the center point of the endoscope image in correspondence with the coordinate change value of the viewpoint of the surgical endoscope.
  • According to another aspect, in a method by which a surgical image processing apparatus outputs an endoscope image, an immersive surgical image processing method includes receiving a first endoscope image and a second endoscope image provided at different points in time from the surgical endoscope, outputting the first and second endoscope images to different regions of the screen display unit, storing the first and second endoscope images, and controlling the screen display unit so that the first and second endoscope images are output to different regions corresponding to the different viewpoints of the surgical endoscope.
  • The receiving of the endoscope images may include receiving the first endoscope image before the second endoscope image, and the outputting step may output one or more of the saturation, brightness, color, and screen pattern of the first endoscope image differently from those of the second endoscope image.
  • The controlling of the screen display unit may further include extracting the first endoscope image stored in the image storage unit and outputting it to the screen display unit while the screen display unit outputs the second endoscope image received in real time.
  • According to another aspect, a method by which a surgical image processing apparatus outputs an endoscope image includes receiving an endoscope image provided from a surgical endoscope, outputting the endoscope image to a specific region of the screen display unit, storing a modeling image of a surgical tool operating on the surgical target captured by the surgical endoscope, generating an output image by matching the endoscope image and the modeling image, changing the specific region of the screen display unit in which the endoscope image is output in correspondence with the viewpoint of the surgical endoscope, and outputting the matched endoscope image and modeling image to the screen display unit.
  • The output image may be generated by matching the actual surgical tool image included in the endoscope image with the modeling surgical tool image included in the modeling image.
  • The generating of the output image may further include calculating a characteristic value using at least one of the endoscope image and the position coordinate information of an actual surgical tool coupled to at least one robot arm, and implementing a modeling image corresponding to the calculated characteristic value.
  • The generating of the output image may further include removing, from the modeling surgical tool image, the region where it overlaps the actual surgical tool image.
  • The position of the modeling surgical tool image in the modeling image may be set using the operation information of the surgical tool.
  • According to another aspect, an immersive surgical image processing method includes rotating and moving the screen display unit.
  • The rotating and moving of the screen display unit may include tracking the viewpoint information of the surgical endoscope corresponding to its movement and rotation, extracting the movement information of the endoscope image using the viewpoint information, and generating motion information for the screen display unit using the movement information.
  • The screen display unit may include a dome-shaped screen and a projector that projects the endoscope image onto the dome-shaped screen.
  • There is also provided a recording medium readable by a digital processing apparatus, on which a program of instructions executable by the digital processing apparatus is tangibly embodied to perform the immersive surgical image processing method described above.
  • The surgical image processing apparatus and method according to the present invention enable a smooth operation by providing a real image and a modeling image together during surgery.
  • The surgical image processing apparatus and method according to the present invention allow the operator to operate with reference to images of neighboring sites and of the patient's external environment, rather than only the surgical site as in the conventional method.
  • The surgical image processing apparatus and method according to the present invention can detect and warn of collisions between surgical tools in advance, so the operator can recognize an impending collision, avoid it, and perform the operation smoothly.
  • The surgical image processing apparatus and method according to the present invention implement an interface through which the operator can select the type of image displayed on the monitor, so the surgical image can be used more conveniently.
  • Since the operator needs no separate manipulation to operate the laparoscope, the operator can concentrate solely on the surgical actions.
  • The immersive surgical image processing apparatus and method according to the present invention change the output position of the endoscope image on the monitor according to the viewpoint of the endoscope, which changes with the movement of the surgical endoscope, making the actual surgical situation feel more realistic.
  • The immersive surgical image processing apparatus and method according to the present invention extract a previously received and stored endoscope image at the current point in time and output it to the screen display unit together with the current endoscope image, thereby informing the user about changes in the endoscope image.
  • The immersive surgical image processing apparatus and method according to the present invention match the endoscope image actually captured during surgery with the modeling image of the surgical tool generated in advance and stored in the image storage unit, adjusting alignment and size as needed, and output the result to a monitor the user can observe.
  • The immersive surgical image processing apparatus and method according to the present invention rotate and move the monitor according to the changing viewpoint of the endoscope, allowing the user to experience the operation more realistically.
  • FIG. 1 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
  • FIG. 2 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
  • FIG. 3 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
  • FIG. 6 is a diagram of an output image according to the surgical image processing method according to an embodiment of the present invention.
  • FIG. 7 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 8 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
  • FIG. 10 is a diagram of an output image according to the surgical image processing method according to an embodiment of the present invention.
  • FIG. 11 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 12 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
  • FIG. 13 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
  • FIG. 14 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
  • FIG. 15 is a block diagram schematically showing the configuration of a master robot and a slave robot according to an embodiment of the present invention.
  • FIGS. 16 to 27 illustrate methods for designating a region according to embodiments of the present invention.
  • FIG. 28 is a flowchart illustrating a method for setting a restricted area according to an embodiment of the present invention.
  • FIGS. 29 and 30 are flowcharts showing operation-limiting methods of a surgical robot system according to an embodiment of the present invention.
  • FIG. 31 is an exemplary screen display for explaining an operation-limiting method according to an embodiment of the present invention.
  • FIG. 32 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
  • FIG. 33 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
  • FIGS. 34 to 37 are views illustrating movement forms of the contact part according to embodiments of the present invention.
  • FIG. 38 is a block diagram schematically illustrating the configuration of a telescopic display unit for generating a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 39 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 40 is a block diagram schematically illustrating the configuration of a telescopic display unit for generating a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 41 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 42 is a block diagram schematically illustrating the configuration of a telescopic display unit for generating a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 43 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 44 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 45 is a diagram illustrating an image display form of a telescopic display unit according to an embodiment of the present invention.
  • FIG. 46 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
  • FIG. 47 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
  • FIG. 48 is a conceptual diagram illustrating the master interface of a surgical robot according to an embodiment of the present invention.
  • FIG. 49 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 50 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
  • FIG. 51 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 52 is a diagram of an output image according to the immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 53 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 54 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
  • FIG. 55 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 56 is a diagram of an output image according to the immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 57 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 58 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
  • FIG. 59 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 60 is a diagram of an output image according to the immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 61 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
  • FIG. 62 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • FIG. 63 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
  • FIG. 64 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
  • FIG. 65 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
  • Terms such as 'first' and 'second' may be used to describe various components, but the components should not be limited by these terms. The terms are used only for the purpose of distinguishing one component from another.
  • The term '... unit' means a unit that processes at least one function or operation, and may be implemented as hardware, as software, or as a combination of hardware and software.
  • Each embodiment should not be interpreted or implemented independently; it is to be understood that the feature elements and/or technical ideas described in each embodiment may be combined with other separately described embodiments and interpreted or practiced together.
  • The present invention is a technical idea that can be applied universally to surgeries or experiments in which a vision part such as an endoscope or a microscope is used.
  • In addition, the endoscope may be of various types, such as a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, cardioscope, and the like.
  • Hereinafter, for convenience of explanation and understanding, a laparoscope, which is one kind of endoscope, will be described as an example of the vision unit.
  • FIG. 1 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention, and FIG. 2 is a conceptual diagram showing a master interface of the surgical robot according to an embodiment of the present invention.
  • The endoscope image actually captured using the endoscope during surgery and the modeling image generated and stored in advance for the surgical instrument may be matched with each other, or resized, and output to a monitor that the operator can observe.
  • In addition, the present embodiment can output a camera image, generated by photographing the surgical target (patient) lying on the operating table from the outside, in combination with the endoscope image and/or the modeling image; this differs from conventional surgery, in which the surgeon operates while looking only at the surgical site.
  • The endoscope according to the present embodiment may be not only a laparoscope but also any of various tools used as an imaging tool during surgery, such as a thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, or cardioscope.
  • The surgical image processing apparatus according to the present embodiment is not necessarily limited to the surgical robot system as shown, and may be applied to any system that outputs an endoscope image during surgery and operates using surgical tools.
  • Hereinafter, the case where the surgical image processing apparatus according to the present embodiment is applied to a surgical robot system will be described.
  • the surgical robot system includes a slave robot 2 performing surgery on a patient lying on an operating table and a master robot 1 remotely controlling the slave robot 2.
  • The master robot 1 and the slave robot 2 are not necessarily separated into physically independent devices; they may be integrated into a single unit, in which case the master interface 4 may correspond to, for example, the interface portion of the integrated robot.
  • the master interface 4 of the master robot 1 comprises a monitor part 6 and a master controller, and the slave robot 2 comprises a robot arm 3 and an instrument 8.
  • The instrument 8 is a surgical tool such as an endoscope (e.g., a laparoscope) or a surgical instrument that directly manipulates the affected part.
  • the master interface 4 may further include a button for selecting an image, which will be described later.
  • The image selection button may be implemented in the form of a clutch button or a pedal (not shown), but the implementation of the image selection button is not limited thereto; for example, it may also be implemented as a function menu or a mode selection menu displayed through the monitor unit 6.
  • The master interface 4 is provided with the master controller so that the operator can grip and manipulate it with both hands. As illustrated in FIGS. 1 and 2, the master controller may be implemented with two handles 10, and an operation signal according to the operator's manipulation of the handles 10 is transmitted to the slave robot 2 so that the robot arm 3 is controlled. Position movement, rotation, cutting, and the like of the robot arm 3 and/or the instrument 8 may be performed by the operator's manipulation of the handles 10.
  • the handle 10 may be composed of a main handle and a sub handle. It is also possible to operate the slave robot arm 3, the instrument 8, etc. with only one handle, or to operate a plurality of surgical equipment in real time by adding a sub handle.
  • the main handle and the sub handle may have various mechanical configurations depending on the operation method thereof.
  • In addition to the handles, various input means such as a joystick, a keypad, a trackball, and a touch screen may be used to operate the robot arm 3 and/or other surgical equipment of the slave robot 2.
  • the master controller is not limited to the shape of the handle 10 and may be applied without any limitation as long as it can control the operation of the robot arm 3 through a network.
  • The monitor unit 6 of the master interface 4 displays the endoscope image input through the instrument 8, the camera image, and the modeling image as picture images.
  • the information displayed on the monitor unit 6 may vary according to the type of the selected image.
  • The monitor unit 6 may be composed of one or more monitors, and information necessary for surgery may be displayed separately on each monitor. Although FIGS. 1 and 2 illustrate a case in which the monitor unit 6 includes three monitors, the number of monitors may be variously determined according to the type or kind of information requiring display.
  • The slave robot 2 and the master robot 1 may be coupled to each other through a wired or wireless communication network so that operation signals and the endoscope image input through the instrument 8 can be transmitted to the counterpart. If the two operation signals provided by the two handles 10 of the master interface 4 and/or the operation signal for adjusting the position of the instrument 8 need to be transmitted at the same time and/or at similar time points, each operation signal may be transmitted independently to the slave robot 2.
  • Here, when each operation signal is said to be transmitted 'independently', it means that the operation signals do not interfere with each other and that one operation signal does not affect another.
  • In order to transmit the plurality of operation signals independently of each other, various methods may be used, such as adding header information for each operation signal at the generation step and transmitting it, transmitting each operation signal in its generation order, or prioritizing each operation signal in advance and transmitting them accordingly.
  • the transmission path through which each operation signal is transmitted may be provided independently so that interference between each operation signal may be fundamentally prevented.
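  • As a rough illustration of this independent transmission (a sketch of our own, not taken from the patent; all names are hypothetical), each operation signal can be wrapped with its own header carrying source, sequence, and priority information before being sent:

```python
# Illustrative sketch only: wrapping each operation signal in its own
# header so that signals from the two handles remain independent.
import json
import time
from dataclasses import dataclass, field
from itertools import count

_seq = count()  # generation order, usable as a tiebreaker

@dataclass(order=True)
class OperationSignal:
    priority: int                           # lower value = higher priority
    seq: int = field(default_factory=lambda: next(_seq))
    source: str = ""                        # e.g. "handle_left", "instrument_pos"
    payload: dict = field(default_factory=dict, compare=False)

    def to_packet(self) -> bytes:
        header = {"src": self.source, "seq": self.seq,
                  "prio": self.priority, "ts": time.time()}
        return json.dumps({"header": header, "body": self.payload}).encode()

def transmit(signals, send_packet):
    # Each signal is serialized and sent as a separate packet so that one
    # signal never blends with another; priority decides the ordering when
    # signals are generated at (nearly) the same time.
    for sig in sorted(signals):
        send_packet(sig.to_packet())
```

  • Giving each signal its own transmission path, as the text also suggests, would provide the same guarantee at the transport level rather than the framing level.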
  • the robot arm 3 of the slave robot 2 can be implemented to be driven with multiple degrees of freedom.
  • The robot arm 3 may include, for example, a surgical tool inserted into the surgical site of the patient, a rocking drive unit for rotating the surgical tool in the yaw direction according to the surgical position, a pitch drive unit for rotating the surgical tool in the pitch direction perpendicular to the rotational drive of the rocking drive unit, a transfer drive unit for moving the surgical tool in the longitudinal direction, a rotation drive unit for rotating the surgical tool, and a surgical tool drive unit installed at the end of the surgical tool to cut or incise the surgical lesion.
  • the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention.
  • The actual control process by which, for example, the operator rotates the robot arm 3 in a corresponding direction by manipulating the handle 10 is somewhat removed from the subject matter of the present invention, so a detailed description thereof will be omitted.
  • One or more slave robots 2 may be used to operate on the patient, and the instrument 8 that causes the surgical site to be displayed as a picture image through the monitor unit 6 may be implemented as an independent slave robot 2.
  • the master robot 1 may also be implemented integrally with the slave robot 2.
  • FIG. 3 is a block diagram schematically showing the configuration of a surgical robot according to an embodiment of the present invention.
  • Referring to FIG. 3, the master robot 1 and the slave robot 2, which includes the robot arm 3 and the laparoscope 5, are shown.
  • The surgical image processing apparatus may be implemented as a module including the image input unit 310, the screen display unit 320, the image matching unit 350, and the image storage unit 360; of course, such a module may further include the arm operation unit 330, the operation signal generator 340, and the controller 370.
  • the image input unit 310 receives an image input through a camera provided in the laparoscope 5 of the slave robot 2 through a wired or wireless communication network.
  • The laparoscope 5 may also be regarded as one type of surgical instrument according to the present embodiment, and the number thereof may be one or more.
  • The screen display unit 320 outputs a picture image corresponding to the image received through the image input unit 310 as visual information.
  • the screen display unit 320 may output the endoscope image as it is or zoom in / zoom out, or may match the endoscope image and the modeled image with each other, or may output each as a separate image.
  • In addition, the screen display unit 320 may match the endoscope image with an image reflecting the entire surgical situation, for example, the camera image generated by a camera photographing the outside of the surgical target as described later, and output them together, thereby making the surgical situation easier to grasp.
  • In addition, the screen display unit 320 may output a reduced version of the entire image (endoscope image, modeling image, camera image, etc.) in a portion of the output image or in a window generated on a separate screen, so that the operator can move or rotate the entire output image while referring to it; this corresponds to the so-called bird's eye view function of a CAD program. Functions such as zooming in/out, moving, and rotating the image output to the screen display unit 320 may be controlled by the controller 370 according to the manipulation of the master controller.
  • the screen display unit 320 may be implemented in the form of a monitor unit 6.
  • An image processing process for outputting the received image as a picture image through the screen display unit 320 may be performed by the controller 370, by the image matching unit 350, or by a separate image processor (not shown).
  • Alternatively, the screen display unit 320 may be a terminal that outputs an image to the monitor unit 6; in this case, the present embodiment itself need not include a display such as the monitor unit 6.
  • the arm manipulation unit 330 is a means for allowing the operator to manipulate the position and function of the robot arm 3 of the slave robot 2.
  • The arm manipulation unit 330 may be formed in the shape of the handle 10 as illustrated in FIG. 2, but the shape is not limited thereto and may be modified into various shapes for achieving the same purpose; for example, some parts may be formed in the shape of a handle and others in other shapes such as a clutch button, and one or more rings into which the operator's fingers can be inserted and fixed may further be formed to facilitate manipulation of the surgical tool.
  • The operation signal generator 340 generates a corresponding operation signal when the operator manipulates the arm operation unit 330 for the movement of the robot arm 3 and/or the laparoscope 5 or for a surgical operation, and transmits it to the slave robot 2.
  • the manipulation signal may be transmitted and received through a wired or wireless communication network.
  • The image matching unit 350 generates an output image by matching the endoscope image received through the image input unit 310 with the modeling image of the surgical tool stored in the image storage unit 360, and outputs it to the screen display unit 320.
  • The endoscope image is an image of the inside of the patient's body captured using the endoscope; since it captures only a limited area, it includes an image of only a part of the surgical instrument.
  • the modeling image is an image generated by realizing the shape of the entire surgical tool as a 2D or 3D image.
  • The modeling image may be an image of the surgical tool as it was at a specific time point before the start of surgery, for example, in its initial setting state. Since the modeling image is an image of the surgical tool generated by a computer simulation technique, the image matching unit 350 may match the surgical tool shown in the actual endoscope image with the modeling image and output the result. Since a technique of obtaining an image by modeling a real object is somewhat removed from the gist of the present invention, a detailed description thereof will be omitted. Specific functions and various detailed configurations of the image matching unit 350 will be described in detail below with reference to the accompanying drawings.
  • The controller 370 controls the operation of each component so that the above-described functions can be performed.
  • For example, the controller 370 may perform a function of converting an image input through the image input unit 310 into a picture image to be displayed through the screen display unit 320.
  • In addition, the controller 370 controls the image matching unit 350 so that the modeling image is output through the screen display unit 320 in response to the manipulation information according to the manipulation of the arm operation unit 330.
  • the actual surgical tool included in the endoscope image is a surgical tool included in the image inputted by the laparoscope 5 and transmitted to the master robot 1, and is a surgical tool that applies a surgical operation directly to the patient's body.
  • The modeling surgical tool included in the modeling image is obtained by mathematically modeling the entire surgical tool in advance, and is stored in the image storage unit 360 as a 2D or 3D image.
  • The surgical tool of the endoscope image and the surgical tool of the modeling image may be controlled by the manipulation information (that is, information about the movement, rotation, and the like of the surgical tool) recognized by the master robot 1 as the operator manipulates the arm operation unit 330.
  • That is, their positions and manipulation shapes may be determined by the manipulation information.
  • The operation signal generator 340 generates an operation signal using the manipulation information according to the operation of the operator's arm operation unit 330 and transmits the generated operation signal to the slave robot 2, so that the actual surgical tool is manipulated accordingly. In addition, the position and operation shape of the actual surgical tool manipulated by the operation signal can be confirmed by the operator through the image input by the laparoscope 5.
  • the modeling image may include an image reconstructed by modeling not only the surgical instrument but also the organ of the patient.
  • That is, the modeling images may include 2D or 3D images of the patient's organ surfaces reconstructed with reference to images acquired from imaging equipment such as CT (Computed Tomography), MR (Magnetic Resonance), PET (Positron Emission Tomography), SPECT (Single Photon Emission Computed Tomography), and US (Ultrasonography); in this case, matching the actual endoscope image with the computer modeling image can more effectively provide the operator with a full image including the surgical site.
  • FIG. 4 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention.
  • the image matcher 350 may include a feature value calculator 351, a modeled image implementer 353, and an overlapped image processor 355.
  • The characteristic value calculator 351 calculates characteristic values using the image input by the laparoscope 5 of the slave robot 2 and/or coordinate information on the position of the actual surgical tool coupled to the robot arm 3. The actual position of the surgical tool can be recognized by referring to the position value of the robot arm 3 of the slave robot 2, and information on this position may be provided from the slave robot 2 to the master robot 1.
  • The characteristic value calculator 351 may calculate, using the image of the laparoscope 5, characteristic values such as the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, and the type, direction, depth, and degree of bending of the actual surgical instrument.
  • When calculating characteristic values from the image, image recognition techniques such as recognizing the outline of a subject included in the image, shape recognition, and tilt angle recognition may be used.
  • In addition, the type of the actual surgical tool and the like may be input in advance in the process of coupling the corresponding surgical tool to the robot arm 3.
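  • The image-recognition step mentioned above can be pictured with the following sketch (assumptions ours: OpenCV is available, and the longest detected line segment is treated as the tool shaft), which estimates the in-plane tilt angle of a surgical tool from an endoscope frame:

```python
# Hypothetical sketch of tilt-angle recognition via edge and line detection.
import math
import cv2
import numpy as np

def estimate_tool_tilt(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)            # outline of the subject
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    if lines is None:
        return None                             # no tool-like edge found
    # Assume the longest detected segment is the tool shaft.
    x1, y1, x2, y2 = max((l[0] for l in lines),
                         key=lambda s: (s[2] - s[0]) ** 2 + (s[3] - s[1]) ** 2)
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```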
  • the modeling image implementer 353 implements a modeling image corresponding to the feature value calculated by the feature value calculator 351.
  • Data related to the modeling image may be extracted from the image storage unit 360. That is, the modeling image implementer 353 extracts modeling image data of the surgical tool and the like corresponding to the characteristic values of the laparoscope 5 (field of view (FOV), magnification, viewpoint, viewing depth, etc.) and of the actual surgical tool (type, direction, depth, degree of bending, etc.), and implements the modeling image so as to match the surgical tool of the endoscope image.
  • In addition, the modeling image implementer 353 may directly extract a modeling image corresponding to the characteristic values of the laparoscope 5. That is, the modeling image implementer 353 may extract the 2D or 3D modeling surgical tool image corresponding to data such as the angle of view and the magnification of the laparoscope 5, and match it with the endoscope image.
  • Here, characteristic values such as the angle of view and the magnification may be calculated through comparison with a reference image according to initial settings, or by comparing and analyzing sequentially generated images of the laparoscope 5.
  • In addition, the modeling image implementer 353 may extract the modeling image using the manipulation information that determines the positions and manipulation shapes of the laparoscope 5 and the robot arm 3. That is, as described above, since the surgical tool of the endoscope image may be controlled by the manipulation information recognized by the master robot 1 as the operator manipulates the arm operation unit 330, the position and manipulation shape of the modeling surgical tool corresponding to the characteristic values of the endoscope image can be determined by the manipulation information.
  • Such manipulation information may be stored in a separate database in temporal order, and the modeling image implementer 353 may recognize the characteristic values of the actual surgical tool by referring to this database and extract the corresponding information about the modeling image. That is, the position of the surgical tool output on the modeling image may be set using cumulative data of the position change signals of the surgical tool. For example, if the manipulation information for the surgical instrument indicates that it was rotated 90 degrees clockwise and moved 1 cm in the extension direction, the modeling image implementer 353 can convert and extract the image of the surgical instrument included in the modeling image in correspondence with this manipulation information.
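  • A minimal sketch of this cumulative approach (not the patent's implementation; the axis conventions and operation fields are assumed) accumulates each stored operation into the pose of the modeled tool:

```python
# Sketch: replaying cumulative manipulation information onto a modeled tool.
import numpy as np

def rotation_z(deg: float) -> np.ndarray:
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

class ModeledTool:
    def __init__(self):
        self.R = np.eye(3)            # orientation of the modeled tool
        self.t = np.zeros(3)          # tip position in cm

    def apply(self, op: dict):
        # op is one entry of the manipulation-information database
        if "rotate_deg" in op:        # clockwise = negative about +z (assumed)
            self.R = self.R @ rotation_z(-op["rotate_deg"])
        if "advance_cm" in op:        # move along the shaft axis (+x, assumed)
            self.t = self.t + self.R @ np.array([op["advance_cm"], 0.0, 0.0])

tool = ModeledTool()
for op in [{"rotate_deg": 90.0}, {"advance_cm": 1.0}]:  # the example above
    tool.apply(op)
```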
  • Here, the surgical instrument is mounted on the front end of the surgical robot arm, which is provided with an actuator; a driving wheel (not shown) provided in a drive unit (not shown) is operated by receiving driving force from the actuator, and the operator 150, which is connected to the driving wheel and inserted into the patient's body, performs a predetermined operation to carry out the surgery.
  • the driving wheel is formed in a disc shape, and may be clutched to the actuator to receive the driving force.
  • the number of driving wheels may be determined corresponding to the number of objects to be controlled, and the description of such driving wheels will be apparent to those skilled in the art related to surgical instruments, and thus detailed description thereof will be omitted.
  • The superimposed image processor 355 outputs only a partial image of the modeling image so that the actually captured endoscope image and the modeling image do not overlap. That is, when the endoscope image includes a partial shape of the surgical tool and the modeling image implementer 353 outputs the corresponding modeling surgical tool, the superimposed image processor 355 checks the overlapping region of the actual surgical tool image of the endoscope image and the modeling surgical tool image, deletes the overlapping portion from the modeling surgical tool image, and matches the two images with each other. In other words, the superimposed image processor 355 may process the overlapping region by removing, from the modeling surgical tool image, the region where the modeling surgical tool image and the actual surgical tool image overlap.
  • For example, when the total length of an actual surgical instrument is 20 cm, the characteristic values (the field of view (FOV), magnification, viewpoint, and viewing depth of the laparoscope, and the type, direction, depth, and degree of bending of the actual surgical instrument) may be considered to determine the portion of the instrument shown in the endoscope image, so that the corresponding overlapping portion can be removed from the modeling surgical tool image.
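  • For illustration only (the mask representation is an assumption of ours), the overlap deletion performed by the superimposed image processor 355 can be sketched as a mask subtraction followed by compositing:

```python
# Sketch: delete the overlapping region from the modeling surgical tool
# image, then composite the remainder over the endoscope image.
import numpy as np

def composite(endoscope_rgb: np.ndarray,
              model_rgb: np.ndarray,
              real_tool_mask: np.ndarray,    # True where the real tool is seen
              model_tool_mask: np.ndarray) -> np.ndarray:
    overlap = real_tool_mask & model_tool_mask
    show_model = model_tool_mask & ~overlap  # overlapping part removed
    out = endoscope_rgb.copy()
    out[show_model] = model_rgb[show_model]
    return out
```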
  • FIG. 5 is a flow chart of a surgical image processing method according to an embodiment of the present invention.
  • In step S510, a modeling image is generated and stored in advance with respect to the surgical target and/or the surgical tool.
  • The modeling image may be modeled by computer simulation, and the present embodiment may generate the modeling image using a separate modeling image generating apparatus.
  • In step S520, the endoscope is used to generate an endoscope image of the area adjacent to the affected part of the patient.
  • The endoscope image captures the patient's organs and the surgical instrument, and includes partial images thereof.
  • Next, the characteristic value calculator 351 calculates the characteristic values of the endoscope image.
  • The characteristic value calculator 351 may calculate the characteristic values using the image provided by the laparoscope 5 of the slave robot 2 and/or coordinate information about the position of the actual surgical tool coupled to the robot arm 3; the characteristic values may include the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope, and the type, direction, depth, and degree of bending of the surgical instrument.
  • the image matching unit 350 extracts the modeling image corresponding to the endoscope image, processes the overlapping region, and matches the two images to each other and outputs the image.
  • FIG. 6 is an exemplary view of an output image according to a surgical image processing method according to an embodiment of the present invention.
  • A modeling image 610, a first modeling surgical tool 612, a second modeling surgical tool 614, an endoscope image 620, and an actual surgical tool 622 are shown.
  • Although the endoscope image 620 is located in the center region of the modeling image 610 and the modeling image 610 is output in the peripheral region of the endoscope image 620, the position of the endoscope image 620 is not particularly limited, and it may be output at various positions on the screen display unit 320, such as the upper right, upper left, lower right, or lower left.
  • In addition, the endoscope image 620 may be output as a full screen of the screen display unit 320, or as a zoomed-out image so as to appear spaced apart from the screen as shown; in the latter case, the modeling image 610 may be matched with the endoscope image 620 and output together.
  • The first modeling surgical tool 612 is an image of a surgical tool that is not output in the endoscope image 620 but is positioned adjacent to the actual surgical site.
  • the second modeling surgical tool 614 is a modeling image in which the actual surgical tool 622 included in the endoscope image 620 is extended.
  • the actual surgical tool 622 and the second modeling surgical tool 614 may be shown using characteristic values as described above to match each other and not overlap each other.
  • Such an image may be output with functions such as zoom in, zoom out, rotation, refresh, and focus shift for the operator's convenience.
  • For example, when the output size of the endoscope image 620 on the screen display unit 320 is determined by such functions, the sizes and exposed portions of the first modeling surgical tool 612 and the second modeling surgical tool 614 may be determined and output accordingly.
  • the endoscope image 620 and the modeling image 610 may be implemented in 3D so that when the operator rotates the image in a predetermined direction, the modeling image may be converted and output according to the rotation direction.
  • Although the first modeling surgical tool 612 and the second modeling surgical tool 614 are shown as surgical instruments having, at one end, an operation unit capable of manipulating the affected part, the present invention is not limited thereto, and the first modeling surgical tool 612 and/or the second modeling surgical tool 614 may be a surgical tool such as a laparoscope.
  • FIG. 7 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • Referring to FIG. 7, a master robot 1 including the image input unit 310, the screen display unit 320, the arm operation unit 330, the operation signal generator 340, the image matching unit 350, the image storage unit 360, and the controller 370, and a slave robot 2 including the robot arm 3, a camera 7, and the laparoscope 5 are shown. The differences from the above description will mainly be explained.
  • The present embodiment combines the image of the patient taken from the outside with the above-described endoscope image and modeling image, thereby providing the operator with images not only of the inside of the patient but also of the outside of the patient, so that surgery can be performed more effectively.
  • the camera 7 is provided to the outside of the patient, for example, the operating room ceiling, one side of the operating table and the like to photograph the operation site and generate an image.
  • The image input unit 310 receives the camera image generated by the camera 7, and the camera image is displayed on the screen display unit 320.
  • the image matching unit 350 registers two or more images of the endoscope image, the modeling image, and the camera image and displays them on the screen display unit 320.
  • The images output together according to the present embodiment may be various combinations, for example, a combination of the endoscope image and the modeling image, a combination of the endoscope image and the camera image, a combination of the modeling image and the camera image, and the like.
  • FIG. 8 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention.
  • Referring to FIG. 8, the image matching unit 350 including the characteristic value calculator 351, the modeling image implementer 353, the overlapping image processor 355, a collision detector 357, and a warning information output unit 359 is shown. The differences from the above description will mainly be explained.
  • This embodiment has the feature of alerting the operator by detecting whether there is a risk of collision between the above-described surgical instruments. That is, the surgical tools may collide with each other not only within the endoscope image but also outside it, and the present embodiment can detect and warn of such collisions in advance, so that the operator recognizes the collision state beforehand, avoids it, and can perform the surgery smoothly.
  • The collision detector 357 detects whether the modeling surgical tool images included in the modeling image collide with each other. Since the positions and operation shapes of surgical tools such as the surgical instruments and the laparoscope 5 are determined according to the above-described manipulation information, when the operator manipulates the arm operation unit 330, the collision detector 357 analyzes the manipulation information to determine whether the surgical tools collide with each other, and generates a collision detection signal if the surgical tools are manipulated into colliding positions by the manipulation information.
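  • A hedged sketch of such a check (ours; the tool-tip positions are assumed to be derived from the manipulation information) is a pairwise distance test that raises a collision detection signal below a threshold:

```python
# Sketch: pairwise proximity test over modeled tool tips.
import numpy as np

def detect_collisions(tool_tips: dict, min_gap_cm: float = 0.5) -> list:
    names = list(tool_tips)
    hits = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = names[i], names[j]
            if np.linalg.norm(tool_tips[a] - tool_tips[b]) < min_gap_cm:
                hits.append((a, b))          # collision detection signal
    return hits

tips = {"tool_1": np.array([0.0, 0.0, 0.0]),
        "tool_2": np.array([0.3, 0.1, 0.0])}
print(detect_collisions(tips))               # [('tool_1', 'tool_2')]
```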
  • the warning information output unit 359 may receive a collision detection signal from the collision detection unit 357 and output predetermined warning information.
  • Here, the warning information may be any information that can be recognized by the operator, such as sound information, color information, or vibration information; for example, it may be an alarm sound transmitted from a predetermined speaker, text, an icon, a border color, or background color change information displayed on the screen display unit 320, or a shaking signal of the arm operation unit 330.
  • In addition, the friction sound generated when actual surgical tools collide may be stored in advance, and the stored friction sound may be output as a warning sound when a collision is detected.
  • In addition, when the collision detector 357 determines that the surgical tools are in contact, it can be processed so that, for example, the arm operation unit 330 can no longer be manipulated, or so that force feedback occurs through the arm operation unit 330.
  • A virtual wall may also be formed in the direction of the collision for force feedback; when the operator manipulates the master controller and the surgical tools approach each other's virtual walls, this is detected, and the surgical tools are controlled to move along the virtual wall or in another direction, so that an actual collision between them can be avoided.
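  • The virtual-wall behavior can be sketched as follows (a simplified planar-wall illustration of our own; the patent does not prescribe this formulation), projecting any penetrating motion back onto the wall and reporting a magnitude usable for force feedback:

```python
# Sketch: constrain a commanded motion by a virtual wall plane.
import numpy as np

def constrain_by_wall(pos, delta, wall_point, wall_normal):
    n = wall_normal / np.linalg.norm(wall_normal)  # normal points to free side
    target = pos + delta
    depth = np.dot(target - wall_point, n)
    if depth < 0.0:                   # target would penetrate the wall
        target = target - depth * n   # slide along the wall instead
        feedback = -depth             # penetration depth -> feedback strength
    else:
        feedback = 0.0
    return target, feedback
```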
  • a separate component for processing force feedback may be included as a component of the master robot 1.
  • FIG. 9 is a flow chart of a surgical image processing method according to an embodiment of the present invention.
  • In step S510, a modeling image is generated and stored in advance with respect to the surgical target and/or the surgical tool, and in step S520, an endoscope image of the area adjacent to the affected part of the patient is generated using the endoscope.
  • Next, the characteristic value calculator 351 calculates the characteristic values of the endoscope image, and the image matching unit 350 extracts the modeling image corresponding to the endoscope image, processes the overlapping region, matches the two images with each other, and outputs the result.
  • In step S550, the collision detector 357 analyzes the manipulation information to detect whether the surgical tools are in colliding positions; if they are, then in step S560 the warning information output unit 359 outputs warning information such as sound information, color information, or vibration information as described above.
  • Although the present embodiment has been described in the order of extracting and matching the modeling image corresponding to the endoscope image and then determining whether the surgical tools collide, the present invention is not limited thereto; the step of determining whether the surgical tools collide may be performed immediately whenever the above-described manipulation information is generated, or immediately whenever the endoscope image is generated.
  • FIG. 10 is a block diagram of an output image according to the surgical image processing method according to an embodiment of the present invention.
  • Referring to FIG. 10, the modeling image 610, the first modeling surgical tool 612, the second modeling surgical tool 614, the endoscope image 620, the actual surgical tool 622, and a camera image 630 are shown. The differences from the above description will mainly be explained.
  • the camera image 630 is an image generated by the camera 7 installed in the external environment of the patient as described above.
  • the camera image 630 may include an image of an operating table, a patient's appearance, and the like.
  • A first driving unit 231 and a second driving unit 232 are driving means for driving a first surgical tool 202 and a second surgical tool 204, respectively; since their description is somewhat removed from the gist of the present invention, a detailed description thereof will be omitted.
  • The endoscope image 620, the modeling image 610, and the camera image 630 are output while each occupies a specific position on the screen display unit 320, and the first modeling surgical tool 612, the first surgical tool 202, the second modeling surgical tool 614, and the second surgical tool 204 are computed, matched with each other, and output.
  • In addition, the camera image 630 may be entirely replaced by the modeling image 610. That is, the modeling image 610 may be implemented by modeling the entire surgical system, including a first robot arm 121, a second robot arm 122, the first surgical tool 202, the second surgical tool 204, the first driving unit 231, and the second driving unit 232, and this modeling image may be combined with and/or matched with the endoscope image 620 and output.
  • FIG. 11 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • the master robot 1 including the image selecting unit 380 and the slave robot 2 including the robot arm 3, the camera 7, and the laparoscope 5 are shown. The differences from the above will be explained mainly.
  • This embodiment has the feature that the operator can use the surgical images more conveniently through an interface that allows the operator to select the type of image to be displayed on the screen display unit 320. That is, according to the present embodiment, one or more of the endoscope image, the modeling image, and the camera image described above may be output according to the operator's selection.
  • the image selector 380 is a means for selecting an image to be matched with each other and output from the above-described endoscope image, modeling image, and camera image.
  • the image selection unit 380 may be an image selection button implemented in the form of a clutch button or a pedal (not shown), as described above, and a function menu or mode selection displayed through the monitor unit 6. It may also be implemented as a menu.
  • In addition, the image selection unit 380 may include functions such as zoom in, zoom out, rotation, refresh, horizontal movement, 2D/3D conversion, hiding/displaying a specific image, and changing viewpoints with respect to the above-described endoscope image, modeling image, and camera image. That is, the image selection unit 380 may be an interface provided so that the operator can express the output images from various angles.
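  • As a purely illustrative sketch (the combination names and the sources mapping are hypothetical, not from the patent), the selection signal can be mapped to the set of images handed to the matching step:

```python
# Sketch: dispatching an image selection signal to an image combination.
COMBINATIONS = {
    "ENDO+MODEL":   ("endoscope", "modeling"),
    "ENDO+CAMERA":  ("endoscope", "camera"),
    "MODEL+CAMERA": ("modeling", "camera"),
    "ALL":          ("endoscope", "modeling", "camera"),
}

def on_image_selection(signal: str, sources: dict) -> list:
    # sources maps image names to their latest frames; the selected frames
    # are handed to the matching step, where the zoom/rotate/2D-3D
    # transforms described above would also be applied.
    return [sources[name] for name in COMBINATIONS[signal]]
```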
  • FIG. 12 is a flow chart of a surgical image processing method according to an embodiment of the present invention.
  • In step S510, a modeling image is generated and stored in advance with respect to the surgical target and/or the surgical tool, and in step S520, an endoscope image of the area adjacent to the affected part of the patient is generated using the endoscope.
  • Next, any one of the screen display unit 320, the image matching unit 350, and the controller 370 receives an image selection signal from the image selection unit 380 and selects the images to be output from among the endoscope image, the modeling image, and the camera image.
  • the characteristic value calculator 351 calculates a characteristic value of the endoscope image, and in operation S545, the image matcher 350 processes the overlapped area corresponding to the selected image to match and output the two images.
  • The surgical image processing method described above may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium. That is, the recording medium may be a computer-readable recording medium on which a program for causing a computer to execute the above steps is recorded.
  • the computer readable medium may include a program command, a data file, a data structure, etc. alone or in combination.
  • Program instructions recorded on the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Although the surgical image processing apparatus has been described above in terms of the configuration of a surgical tool and a robotic surgical system according to an embodiment, it is not necessarily limited thereto; even if the present invention is applied to a non-robotic surgical image processing system, or models and outputs various other surgical instruments, such configurations may be included in the scope of the present invention if there is no difference in the overall operation and effect.
  • FIG. 13 is a plan view showing the overall structure of the surgical robot according to an embodiment of the present invention, and FIG. 14 is a conceptual diagram showing a master interface of the surgical robot according to an embodiment of the present invention.
  • Referring to FIGS. 13 and 14, the laparoscopic surgical robot system includes a slave robot 2 that actually performs surgery on a patient lying on an operating table and a master robot 1 that remotely controls the slave robot 2.
  • Although FIG. 13 illustrates a case in which the master robot 1 and the slave robot 2 are implemented separately, they are not necessarily separated into physically independent devices; they may be integrated into a single unit, in which case the master interface 4 may correspond to, for example, the interface portion of the integrated robot.
  • The master interface 4 of the master robot 1 includes the monitor unit 6 and a master manipulator, and the slave robot 2 includes the robot arm 3 and the laparoscope 5.
  • The illustrated laparoscope 5 is one example of a surgical endoscope, and may be replaced with one or more of a thoracoscope, arthroscope, rhinoscope, cystoscope, rectoscope, duodenoscope, mediastinoscope, cardioscope, and the like.
  • The master manipulator of the master interface 4 is implemented so that the operator can grip and manipulate one with each hand. As illustrated in FIGS. 13 and 14, the master manipulator may be implemented with two or more handles 10, and an operation signal according to the operator's manipulation of the handles 10 is transmitted to the slave robot 2 to control the robot arm 3 and/or the instrument (not shown).
  • a surgical operation such as position movement, rotation, and cutting of the robot arm 3 may be performed.
  • the master controller is not limited to the shape of the handle 10 and may be applied without any limitation as long as it can control the operation of the robot arm 3 through a network.
  • the handle 10 may be configured to include a main handle and a sub handle.
  • the operator may operate the slave robot arm 3 or the laparoscope 5 or the like only by the main handle, or may operate the sub handles to simultaneously operate a plurality of surgical equipments in real time.
  • the main handle and the sub handle may have various mechanical configurations depending on the operation method thereof.
  • In addition to the handles, various input means such as a joystick, a keypad, a trackball, and a touch screen may be used to operate the robot arm 3 and/or other surgical equipment of the slave robot 2.
  • An image input by the laparoscope 5 may be displayed as a picture image on the monitor unit 6 of the master interface 4.
  • The manner in which the image input by the laparoscope 5 is displayed may take various forms other than display through the monitor unit 6; a description thereof is omitted because it is somewhat removed from the gist of the present invention.
  • The monitor unit 6 may be composed of one or more monitors, and information necessary for surgery may be displayed separately on each monitor. Although FIGS. 13 and 14 illustrate the case where the monitor unit 6 includes three monitors, the number of monitors may be variously determined according to the type or kind of information requiring display.
  • the monitor unit 6 may further output one or more biometric information about the patient.
  • For example, one or more indicators indicating the patient's condition, such as body temperature, pulse rate, respiration, and blood pressure, may be output through one or more monitors of the monitor unit 6.
  • the slave robot 2 includes a biometric information measuring unit including at least one of a body temperature measuring module, a pulse measuring module, a respiratory measuring module, a blood pressure measuring module, an electrocardiogram measuring module, and the like. It may include.
  • The biometric information measured by each module may be transmitted from the slave robot 2 to the master robot 1 in the form of an analog or digital signal, and the master robot 1 can display the received biometric information through the monitor unit 6.
  • The slave robot 2 and the master robot 1 may be coupled to each other through a wired or wireless communication network so that operation signals and the laparoscope image input through the laparoscope 5 can be transmitted to the counterpart. If the two operation signals by the two handles 10 of the master interface 4 and/or the operation signal for adjusting the position of the laparoscope 5 need to be transmitted at the same time and/or at similar time points, each operation signal may be transmitted independently to the slave robot 2.
  • Here, when each operation signal is said to be transmitted 'independently', it means that the operation signals do not interfere with each other and that one operation signal does not affect another.
  • In order to transmit the plurality of operation signals independently of each other, various methods may be used, such as adding header information for each operation signal at the generation step and transmitting it, transmitting each operation signal in its generation order, or prioritizing each operation signal in advance and transmitting them accordingly.
  • the transmission path through which each operation signal is transmitted may be provided independently so that interference between each operation signal may be fundamentally prevented.
  • the robot arm 3 of the slave robot 2 can be implemented to be driven with multiple degrees of freedom.
  • The robot arm 3 may include, for example, a surgical instrument inserted into the surgical site of the patient, a rocking drive unit for rotating the surgical instrument in the yaw direction according to the surgical position, a pitch drive unit for rotating the surgical instrument in the pitch direction perpendicular to the rotational drive of the rocking drive unit, a transfer drive unit for moving the surgical instrument in the longitudinal direction, a rotation drive unit for rotating the surgical instrument, and a surgical instrument drive unit installed at the end of the surgical instrument to cut or incise the surgical lesion.
  • the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention.
  • One or more slave robots 2 may be used to operate on the patient, and the laparoscope 5 for displaying the surgical site as a picture image through the monitor unit 6 may be implemented as an independent slave robot 2. In addition, as described above, embodiments of the present invention may be used universally in operations in which various surgical endoscopes other than the laparoscope (e.g., thoracoscope, arthroscope, rhinoscope, etc.) are used.
  • FIG. 15 is a block diagram schematically showing the configuration of a master robot and a slave robot according to an embodiment of the present invention, and FIGS. 16 to 27 are views showing region designation methods according to embodiments of the present invention.
  • the master robot 1 and the slave robot 2 may be integrally implemented.
  • The master robot 1 includes an image input unit 310, a screen display unit 320, an arm manipulation setting unit 331, a storage unit 341, a restricted area setting unit 1350, an arm operation unit 330, an operation determination unit 371, a reaction information processing unit 1380, and a control unit 370.
  • In addition, the master robot 1 may further include one or more of an input unit through which the operator inputs control commands, a speaker unit for outputting reaction information as auditory information (for example, a warning sound or a warning voice message), and an LED unit for outputting reaction information as visual information.
  • the slave robot 2 may comprise a robot arm 3 and a laparoscope 5.
  • the robot arm 3 may be interpreted as a concept including an instrument unless explicitly limited herein.
  • In addition, the slave robot 2 may further include a position information providing unit for providing position information of the robot arm 3 and the laparoscope 5, and a biometric information measuring unit for measuring and providing biometric information about the patient.
  • The position information providing unit may be configured to provide the master robot 1 with information about, for example, how far and at what angle the robot arm 3 or the laparoscope 5 has moved from its basic position (for example, the rotation angle of a driving motor of the robot arm).
  • the image input unit 310 receives an image input through a camera provided in the laparoscope 5 of the slave robot 2 through a wired or wireless communication network.
  • The screen display unit 320 outputs a picture image corresponding to the image received through the image input unit 310 as visual information.
  • the screen display unit 320 may output the response information as visual information (for example, screen flickering), or may further output corresponding information when biometric information is input from the slave robot 2.
  • the screen display unit 320 may be implemented in the form of, for example, the monitor unit 6.
  • the arm manipulation setting unit 331 receives the arm manipulation setting from the operator or the setter and generates manipulation setting information on whether the robot arm 3 and / or the instrument is to be manipulated according to the manipulation instruction of the operator.
  • The arm manipulation setting may be, for example, a setting of whether a robot arm and/or instrument located in an out-of-view region, not visible in the video image acquired by the laparoscope 5, is to be manipulable.
  • That is, the operator may set only the robot arms and/or instruments shown on the screen, with reference to the video image input through the laparoscope 5, to be operable using their identification information, and set the other robot arms and instruments to be inoperable.
  • Alternatively, the master robot 1 may receive the position information of each robot arm and/or instrument and the position and image input angle of the laparoscope 5 from the slave robot 2, calculate the coordinate range for which a video image is being input, and then set only the robot arms and instruments positioned so that some or all of them are included within that coordinate range to be operable.
  • the laparoscope 5 may further include a distance sensing sensor for sensing a distance to the surface of the surgical site to calculate a coordinate range of the surgical site represented by the image image obtained by the laparoscope 5.
  • Of course, even a robot arm outside the coordinate range may be set to be manipulable, for example for the replacement of a robot arm or instrument to be used during the operation.
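  • One simple way to picture this operability test (a sketch under the assumption that the visible region is approximated by an axis-aligned coordinate range) is:

```python
# Sketch: a tool is operable if some of its sampled points fall within
# the coordinate range covered by the laparoscope image.
import numpy as np

def operable(tool_points: np.ndarray,   # (N, 3) sampled tool positions
             range_min: np.ndarray,     # (3,) lower corner of visible range
             range_max: np.ndarray) -> bool:
    inside = np.all((tool_points >= range_min) &
                    (tool_points <= range_max), axis=1)
    return bool(inside.any())           # "some or all" within the range
```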
  • The storage unit 341 stores the manipulation setting information on the robot arm 3 and/or the instrument generated by the arm manipulation setting unit 331, and the restricted area coordinate information set by the restricted area setting unit 1350.
  • In addition, information on the reaction information processing type (e.g., one or more of a tactile information output method, an auditory information output method, a visual information output method, etc.) may be further stored.
  • When manipulation information for any robot arm 3 or instrument is input by the operator using the arm operation unit 330 and the manipulation falls within the designated restricted area, the control unit 370 may control the reaction information processor 1380 to perform corresponding processing with reference to the information on the reaction information processing type.
  • the control unit 370 may include an operation determination unit 371 described below.
  • The restricted area setting unit 1350 generates and stores coordinate information for restricted areas that the robot arm 3 or the instrument cannot be manipulated to enter, in order to prevent damage to parts of the patient's body (for example, blood vessels or organs) due to misoperation or the like.
  • the restricted area may be designated or changed by the operator before or during the operation.
  • The following are only a few of the various embodiments in which an operator designates a restricted area.
  • In addition, the area set as follows may be displayed overlaid on the video image obtained by the laparoscope 5.
  • For example, when a restricted organ (e.g., the liver) is designated, the master robot 1 may analyze the video image acquired by the laparoscope 5 (for example, determine the presence and coordinate region of the designated organ by color analysis of the surgical site included in the video image) and generate corresponding coordinate information (e.g., restricted area coordinate information or allowed area coordinate information).
  • As another example, the allowed area may be set only to the coordinate range corresponding to the video image obtained and displayed by the laparoscope 5, and all other areas may be set as restricted areas; in this case, only a robot arm 3 or instrument partially or entirely located in the allowed area will be operated by the operator's arm manipulation or the like.
  • Three-dimensional coordinate values corresponding to an area (i.e., a restricted area or an allowed area) may be set using the master controller and a controlled object (e.g., at least one of a robot arm, an instrument, and an endoscope) that moves according to its manipulation.
  • For example, the end of the instrument may be recognized as a three-dimensional mouse cursor, and the operator may perform an operation of picking points in space (that is, designating points) while viewing the video image obtained by the laparoscope 5, so that the area connecting those points can be set as a restricted area.
  • For example, when the operator wants to set an area including S1 on the video image obtained and displayed by the laparoscope 5 (organs and blood vessels such as S1, S2, and S3 are included in the video image), the operator positions the instrument end at a first vertex and inputs a point designation command, moves to a second vertex and additionally inputs a point designation command, and repeats this; an area in which the vertices previously set by the point designation commands are connected is then set.
  • Each vertex may be connected by a straight line or in a curved form, and the corresponding area may be set as a planar region (for example, a triangle or a circle) or a three-dimensional region (for example, a polyhedron).
  • The aforementioned point designation command, area section command, and the like may be input using the master controller provided in the master interface 4 or the like.
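  • The vertex-connected area and the test for whether an instrument tip has entered it can be sketched as follows (our illustration; a planar polygon and a standard ray-casting test are assumed):

```python
# Sketch: build a planar restricted area from designated vertices and test
# whether a point (e.g., the instrument tip) lies inside it.
def point_in_polygon(pt, vertices):
    x, y = pt
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # vertices connected in order
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

restricted = [(0, 0), (4, 0), (4, 3), (0, 3)]    # set by point designation
print(point_in_polygon((2.0, 1.5), restricted))  # True: tip is inside
```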
  • The restricted area setting method may be extended by designating vertices in three-dimensional space to set the restricted area three-dimensionally. In this case, not only can a three-dimensional restricted zone be set by designating vertices or the like in three-dimensional space, but a planar restricted zone such as a wall can also be set.
  • In addition, the area set by the operator may be deformed by locating the end of the instrument at one point on the outline of the area and then moving that point outward or inward. To this end, the outline and outer surface of the area formed by the designated vertices may be interpreted and stored in advance as connections of the designated points.
  • For example, as shown in FIG. 18, the operator may place the instrument end at the vertex P4 and perform a point movement, or may locate the instrument end on the line connecting the vertices P3 and P4 and then perform the point movement, so that the set area is extended to include S2.
  • the above-described deformation of the set area is not limited to be made only on a plane, and as shown in FIG. 19, it is natural that the same can be performed even when the three-dimensional area is set.
  • Although FIG. 16 illustrates the case where the area is deformed by a point movement method using vertices, it is natural that the area may also be deformed by pulling a surface, an edge, or the like in three-dimensional space.
  • In addition, the area set by the operator may be divided into a plurality of areas and separated into individual areas. That is, as shown in FIGS. 20 and 21, the operator may separate an area set to include S1 and S2 into a first individual area including only S1 and a second individual area including only S2. To this end, the operator may designate two or more points (e.g., Q1 and Q2) at the location to be separated and then input a division command to separate the corresponding area into two or more individual areas.
• In addition, unnecessary areas may be removed. That is, as illustrated in FIGS. 22 and 23, the operator may reduce the region set to include S1 and S2 to the region including only S2. To do this, the operator designates two or more points (e.g., Q1 and Q2) on the boundary of the portion to be removed, positions the instrument end in the area to be removed, and then inputs a delete command, whereupon the separated area is deleted.
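• A compact sketch of these division and deletion commands (assuming, for simplicity, that Q1 and Q2 coincide with existing boundary vertices; the function names are illustrative) might look as follows.

```python
# Sketch of dividing a region at two designated boundary points Q1 and Q2
# (assumed here to coincide with existing vertices) and deleting one part.

def split_region(vertices, i, j):
    """Split the polygon along the chord between vertex i and vertex j,
    returning the two resulting individual regions (both keep the chord)."""
    if i > j:
        i, j = j, i
    region_a = vertices[i:j + 1]                  # i..j plus the chord j->i
    region_b = vertices[j:] + vertices[:i + 1]    # j..end..i plus chord i->j
    return region_a, region_b

area = [(0, 0), (10, 0), (10, 6), (0, 6)]
# Q1 = vertex 1, Q2 = vertex 3: divide into two individual regions
first, second = split_region(area, 1, 3)
print(first)    # [(10, 0), (10, 6), (0, 6)]
print(second)   # [(0, 6), (0, 0), (10, 0)]
# A delete command then simply discards the region holding the instrument end.
regions = [first, second]
regions.remove(second)          # e.g. the region designated for removal
```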
• The area set by the operator may also be manipulated so as to be folded around boundary points partitioning it into a plurality of areas, or to be restored to its original position. That is, as shown in FIGS. 24 and 25, the operator divides a region set to include S1 and S2 into a first individual region including only S1 and a second individual region including only S2, after which a folding operation can be performed so that each individual region acts as a restricted or permitted area.
• FIG. 24 assumes a case where a planarly set area is folded or restored flat so that the restricted area is deformed within the plane, whereas FIG. 25 assumes a case where the area is folded into a three-dimensional restricted area.
• For example, the operator designates a plurality of points (e.g., Q1-Q2 and Q3-Q4) that serve as boundary lines and inputs a fold command, whereupon rotation and folding are executed about those boundaries in the corresponding directions so that the restricted area may be set in the shape of a three-dimensional figure.
• In this case, it may be set in advance which individual regions at which positions are to be rotated by what angle around the boundary points.
• The three-dimensional shape into which a planarly shown area image is folded by the folding command may vary; the fold itself can be sketched as a rotation about the boundary axis, as below.
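• A minimal sketch of such a fold (hypothetical values; Rodrigues' rotation formula is used here as one standard way to rotate points about an axis) rotates the points of one individual region about the boundary axis Q1-Q2 by a preset angle.

```python
import math

# Sketch of a fold command: rotate the points of one individual region about
# the boundary axis Q1->Q2 by a preset angle (Rodrigues' rotation formula).

def rotate_about_axis(p, q1, q2, angle):
    ax, ay, az = (q2[0]-q1[0], q2[1]-q1[1], q2[2]-q1[2])
    norm = math.sqrt(ax*ax + ay*ay + az*az)
    kx, ky, kz = ax/norm, ay/norm, az/norm            # unit axis k
    vx, vy, vz = (p[0]-q1[0], p[1]-q1[1], p[2]-q1[2]) # point relative to axis
    c, s = math.cos(angle), math.sin(angle)
    dot = kx*vx + ky*vy + kz*vz
    # v*cos + (k x v)*sin + k*(k.v)*(1 - cos)
    rx = vx*c + (ky*vz - kz*vy)*s + kx*dot*(1 - c)
    ry = vy*c + (kz*vx - kx*vz)*s + ky*dot*(1 - c)
    rz = vz*c + (kx*vy - ky*vx)*s + kz*dot*(1 - c)
    return (q1[0]+rx, q1[1]+ry, q1[2]+rz)

# Fold the region holding S2 up by 90 degrees around the boundary Q1-Q2:
q1, q2 = (0, 0, 0), (10, 0, 0)
folded = [rotate_about_axis(p, q1, q2, math.pi / 2)
          for p in [(3, 4, 0), (7, 4, 0)]]
print(folded)   # points rotated out of the plane; z becomes 4
```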
• In addition to the method in which the operator performs the area setting directly using an instrument, the area may be set and modified by selecting one of arbitrary shapes designated as templates (for example, circles, triangles, squares, spheres, hexahedrons, tetrahedrons, etc.).
• For example, as shown in FIG. 26, when the heart region is to be set as a restricted area, a template corresponding to the heart is selected and displayed, and may then be scaled or reshaped by drawing its outline using the three-dimensional mouse or the end of the instrument.
  • the template can be adapted to suit the characteristics of the surgical patient.
• Control may also be performed so that the set restricted area is correspondingly deformed by tracking changes of the actual organ (for example, position shift, shape change, etc.) as it deforms under the operator's manipulation of the controlled object. To this end, an image analysis technique such as edge detection may be applied to the image of the actual organ, as in the sketch below.
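• One plausible way to realize this (a sketch only, assuming the OpenCV library is available; thresholds and names are illustrative) is to extract the organ outline from each frame with edge detection and re-fit the restricted area to the largest detected contour.

```python
import cv2
import numpy as np

# Sketch of tracking an organ outline with edge detection so the restricted
# area follows the organ (assumes OpenCV 4.x; thresholds are illustrative).

def track_organ_outline(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Take the largest contour as the organ boundary and use it as the zone.
    organ = max(contours, key=cv2.contourArea)
    return organ.reshape(-1, 2)   # (N, 2) array of outline points

frame = np.zeros((120, 160, 3), dtype=np.uint8)
cv2.circle(frame, (80, 60), 30, (40, 40, 200), -1)   # stand-in for an organ
zone_points = track_organ_outline(frame)
print(zone_points.shape)   # outline points that redefine the restricted area
```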
• In another embodiment, a human modeling image of the surgical patient may be generated and generalized from a reference image (e.g., a video image obtained by the laparoscope 5, an MRI image of the patient, etc.) and displayed through a monitor implemented as a touch-sensitive input device; when the operator draws a certain range in the shape of a closed curve on the human modeling image with a finger, the corresponding region may be set as a restricted area.
• In this case, the coordinates of the range drawn by the operator are mapped to coordinate information corresponding to the positions of the organs or blood vessels inside the body of the surgical patient, based on an arbitrary reference point (for example, the position of a characteristic organ, the position of the laparoscope 5, etc.); the coordinate information mapping may be processed by a position recognition method using conventional image analysis, so a description thereof is omitted.
• The area may also be set automatically using modeling data of each organ or blood vessel of a human modeling image generated using a reference image (e.g., a CT image, an MRI image, etc.).
• In step P450, a human modeling image is generated using a reference image (e.g., a CT image, an MRI image, etc.) of the surgical patient, and in step P455, modeling data corresponding to the human modeling image (for example, the color, shape, size, etc. of each organ and blood vessel) is generated and stored.
• If the modeling data has already been generated and stored, steps P450 and P455 may be omitted.
• In step P460, organ selection information is obtained.
• The organ selection information may be obtained, for example, by analyzing which organs or blood vessels are displayed in the video image obtained and displayed by the laparoscope 5, using an image analysis technique (for example, pixel-by-pixel color analysis, organ outline analysis, etc.), and then automatically recognizing them against the pre-stored modeling data.
• Alternatively, a method in which the operator selects an arbitrary organ from an organ list such as a drop-down menu, or from a human modeling image displayed through the monitor unit 6, may be used.
  • the corresponding area is partitioned and displayed as an area image (see FIGS. 16 to 26) for the operator to recognize.
• The master robot 1 recognizes the position of the organ corresponding to the organ selection information in consideration of the height of the patient, the lying posture, and the general position of the organ in the human body, and then sets an area of the shape defined by the modeling data so that it is overlaid and displayed on the monitor unit 6 and/or the video image obtained by the laparoscope 5.
• For example, if the heart is selected, modeling data is extracted from the storage 341 or the like so that an area image corresponding to the shape of the heart is displayed, and the area image corresponding to the extracted modeling data may be processed so as to be overlaid and displayed on the video image of the actual organ, as sketched below.
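• A minimal overlay sketch (assuming OpenCV and NumPy; the mask, color, and blending weight are illustrative assumptions) blends the modeled area image semi-transparently into the video frame.

```python
import cv2
import numpy as np

# Sketch of overlaying an area image (from modeling data) on the video image
# so the operator sees the restricted area on the actual organ (names are
# illustrative; blending weights would be tuned in practice).

def overlay_area_image(video_frame, area_mask, color=(0, 0, 255), alpha=0.35):
    """area_mask: uint8 mask, 255 inside the modeled organ region."""
    overlay = video_frame.copy()
    overlay[area_mask > 0] = color                  # paint the modeled region
    return cv2.addWeighted(overlay, alpha, video_frame, 1 - alpha, 0)

frame = np.full((120, 160, 3), 90, dtype=np.uint8)  # stand-in endoscope frame
mask = np.zeros((120, 160), dtype=np.uint8)
cv2.ellipse(mask, (80, 60), (35, 22), 0, 0, 360, 255, -1)  # modeled organ zone
blended = overlay_area_image(frame, mask)
print(blended.shape)   # the frame now shows the area image semi-transparently
```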
• In step P470, the operator checks whether the area image overlaid on the video image corresponds to the area of the actual organ, and modifies the overlaid area image (see FIGS. 19 to 26) when it does not match.
• The area image deformation of step P470 may be processed by the operator as described above; however, if the organ is present in the video image acquired and displayed by the laparoscope 5, the shape, size, and so on of the organ may be determined using an image analysis technique, and the area image may be automatically transformed to match the shape of the organ.
  • a method of setting and using a restricted area in advance may also be applied.
• In this case, the matching of the 3D image and the real-time video image may be omitted, and a method of matching only the few points required may be applied.
  • the arm operation unit 330 is a means for allowing the operator to manipulate the position and function of the robot arm 3 of the slave robot 2.
• The arm manipulation unit 330 may be formed in the shape of the handle 10 as illustrated in FIG. 14, but the shape is not limited thereto and may be modified into various shapes that achieve the same purpose; for example, some parts may be formed in the shape of a handle while others are formed in different shapes such as a clutch button, and a finger insertion tube or ring into which the operator's fingers can be inserted and fixed may further be formed to facilitate manipulation of the surgical tool.
• In addition, the clutch button 14 or the like may be set to function for adjusting the position and/or the image input angle of the laparoscope 5.
  • the operation determination unit 371 determines whether the operation information by the operator's operation of the arm operation unit 330 is valid operation information by referring to at least one of operation setting information and restricted area coordinate information stored in the storage unit 341.
• For example, the manipulation determination unit 371 may determine that the manipulation information is invalid when, with reference to the manipulation setting information, the manipulation corresponding to the operation of the arm manipulation unit 330 is set as impermissible.
• In addition, the operation determination unit 371 may determine that the operation information according to the operation of the arm operation unit 330 is invalid when the robot arm 3 contacts the restricted area, with reference to the restricted area coordinate information. To this end, the operation determination unit 371 may use displacement information received from the position information providing unit, indicating in which direction and at what angle the robot arm 3 or the instrument has moved from its basic position (for example, the rotation angle of a driving motor of the robot arm 3, etc.).
• The response information processing unit 1380 performs processing of the response information specified by the operator and/or the setter when the operation determination unit 371 determines that the operator's operation information is invalid.
• The processing of the response information may be performed in one or more ways, such as output of tactile information (for example, applying force feedback to the handle 10 so that the operator senses a force), output of visual information (for example, a blinking LED or a warning message displayed on the video image obtained by the laparoscope 5), and output of auditory information (for example, a warning sound); a combined sketch follows.
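• The determination-and-response flow above might be sketched as follows (a hypothetical data model, with restricted zones simplified to axis-aligned boxes; none of these names come from the patent).

```python
# Sketch of the validity check and response dispatch described above
# (hypothetical data model; zones are axis-aligned boxes for brevity).

RESTRICTED_ZONES = [((40, 10, 0), (60, 30, 20))]   # (min_xyz, max_xyz) boxes

def tip_in_zone(tip, zone):
    lo, hi = zone
    return all(lo[i] <= tip[i] <= hi[i] for i in range(3))

def determine_operation(tip_position, operation_allowed=True):
    """Returns True if the manipulation is valid, else triggers feedback."""
    if not operation_allowed:                     # manipulation setting check
        emit_response("manipulation disabled by operation setting")
        return False
    for zone in RESTRICTED_ZONES:                 # restricted-area check
        if tip_in_zone(tip_position, zone):
            emit_response("instrument contacted restricted area")
            return False
    return True

def emit_response(reason):
    # One or more of: force feedback on the handle, blinking LED / on-screen
    # warning, warning sound. Here we just log the visual warning.
    print("WARNING:", reason)

print(determine_operation((50, 20, 10)))   # False - inside the zone
print(determine_operation((10, 20, 10)))   # True  - valid manipulation
```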
  • the controller 370 controls the operation of each component so that the above-described function can be performed.
  • the controller 370 may perform a function of converting an image input through the image input unit 310 into an image image to be displayed through the screen display unit 320.
  • FIG. 28 is a flowchart illustrating a method for setting a restricted area according to an embodiment of the present invention.
  • the generation and storage of the arm operation setting information and the generation and storage of the restricted area setting information are illustrated as sequential steps, but the order of each step may be changed or performed simultaneously.
• In step P510, the master robot 1 receives, from the operator or setter, an arm manipulation setting specifying whether the robot arm 3 and/or the instrument located in the non-visible region may be operated.
• In step P520, the master robot 1 generates and stores arm operation setting information corresponding to the input arm manipulation setting.
• In step P530, the master robot 1 receives, from the operator or setter, a restricted area setting designating a zone into which entry is restricted during operation of the robot arm 3 and/or the instrument located in the visible and non-visible regions.
• In step P540, the master robot 1 generates and stores restricted zone setting information corresponding to the input restricted zone setting.
• The operator and/or setter may set the restricted area before or during surgery by using displayed pre-acquired image information (e.g., one or more of patient images such as CT and MRI images, virtual images modeled to be generalized, etc.), and/or by clicking organ items in a drop-down menu.
• The area set as the restricted area is restricted so that the robot arm 3 and/or the instrument does not operate there in spite of the user's manipulation, and access or contact to the restricted area is fed back to the operator as response information, thereby preventing organs, blood vessels, and the like from being damaged during the surgery.
  • the response information may be output to the operator in one or more of a tactile information type, a visual information type, and an auditory information type.
• For example, when the operator designates an area in this way, the designated area can be set as a restricted area. Alternatively, patient image information such as CT or MRI images taken in advance, or an image reconstructed from such patient image information, may be displayed on the operator's screen, and when the operator designates a specific region in the image displayed on the screen, the designated region may also be set as a restricted area.
  • a process of recognizing the coordinate range of the designated specific region may be performed by image analysis or the like.
• In summary, the restricted area may be designated by the operator with reference to the screen on which the corresponding image is displayed, may be specified in advance in the photographed image, or may be specified in various other ways.
  • FIG. 29 and 30 are flowcharts illustrating an operation limiting method of a surgical robot system according to an exemplary embodiment of the present invention
  • FIG. 31 is an exemplary view of a screen display for explaining an operation limiting method according to an exemplary embodiment of the present invention.
• In step P610, the master robot 1 receives arm manipulation information according to the operator's manipulation of the arm manipulation unit 330.
• In step P620, the master robot 1 determines whether the received arm manipulation information is for the robot arm 3 and/or the instrument located in the non-visible region.
• The robot arm positioned in the non-visible region may be manipulated by an operator's misoperation, or its operation may be commanded when the robot arm 3 positioned in the visible region is to be replaced with the robot arm 3 positioned in the non-visible region.
  • the robot arm 3 may be interpreted as a concept including an instrument, unless expressly limited herein.
• If, as a result of the determination in step P620, the command is not an operation command for the robot arm 3 or the like located in the non-visible region (i.e., it concerns the visible region), the flow advances to step P630, where the master robot 1 generates an arm manipulation command.
• Thereafter, step P710 shown in FIG. 30 is performed.
• Referring to FIG. 31, robot arms or instruments positioned in the visible region may be denoted by 830a and 830b, and robot arms or instruments positioned in the non-visible region may be denoted by 850.
• If, as a result of the determination in step P620, the command is an operation command for the robot arm 3 or the like located in the non-visible region, the flow advances to step P640, where the master robot 1 determines whether operation setting information permitting operation of the robot arm 3 or the like in the non-visible region is stored.
• If, as a result of the determination in step P640, operation setting information permitting operation of the robot arm 3 and the like located in the non-visible region is stored, the process proceeds to step P630 and the master robot 1 generates and outputs an arm operation command.
  • the robot arm 3 or the like located in the visible region or the non-visible region may be moved or manipulated in the same manner by the arm manipulation command.
• Alternatively, an arm operation command may be generated so that the robot arm 3 or the like positioned in the non-visible region is processed at the same operation speed as, or more slowly than, the robot arm 3 or the like positioned in the visible region.
• If, as a result of the determination in step P640, operation setting information is stored such that operation of the robot arm 3 or the like located in the non-visible region is not permitted, the process proceeds to step P650, where the master robot 1 performs output processing of the response information so that the operator can recognize this.
• In step P710 shown in FIG. 30, the master robot 1 receives arm operation state information from the slave robot 2.
• The arm operation state information may include, for example, information about in which direction and at what angle the robot arm 3 or the instrument has moved from its basic position (for example, the rotation angle of a driving motor of the robot arm), and the corresponding information may be provided by the position information providing unit.
• In step P720, the master robot 1 determines, with reference to the arm operation state information, whether the robot arm 3 or the instrument is in contact with the preset restricted area.
  • the restricted area may be designated by the operator with reference to the screen on which the corresponding image is displayed, or may be specified in advance in the captured image, or may be set by various methods.
• The display of the restricted area may be handled in one or more of various ways, such as applying an augmented reality technique to the real image acquired by the laparoscope 5 or the like (see 840 of FIG. 31), drawing its outline in a specific color, or applying opacity processing, so that the operator is continuously informed that the area is a restricted area. Naturally, the restricted zone display method may be processed in conjunction with movement or enlargement of the display screen.
• If, as a result of the determination in step P720, the robot arm 3 or the instrument has not contacted the restricted area, the process proceeds to step P610 again.
• If, as a result of the determination in step P720, the robot arm 3 or the instrument is in contact with the restricted area, the process proceeds to step P730, where the master robot 1 performs operation limitation control of the robot arm and also performs response information output processing so that the operator can recognize this.
  • Operation restriction control of the robot arm 3 in the restricted zone can, for example, allow the set restricted zone to be recognized as if it were a virtual wall.
• That is, the operation of the instrument is restricted as if the instrument were blocked by the virtual wall, and the operator recognizes this through the output of the response information; a sketch of such virtual-wall behavior follows.
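• One common haptics formulation of a virtual wall (a sketch under assumed geometry, not the patent's specific control law) clamps the commanded tip motion at the wall plane and feeds back a force proportional to the attempted penetration.

```python
# Sketch of treating the restricted-zone boundary as a virtual wall: the
# commanded motion is clamped at the wall plane and a reaction force
# proportional to the attempted penetration is fed back to the handle.

WALL_Z = 20.0        # wall plane z = 20; z above the wall is forbidden
STIFFNESS = 0.8      # illustrative virtual-wall stiffness gain

def apply_virtual_wall(commanded_tip):
    x, y, z = commanded_tip
    if z <= WALL_Z:
        return (x, y, z), 0.0                  # free motion, no feedback
    penetration = z - WALL_Z
    feedback_force = STIFFNESS * penetration   # pushes the handle back
    return (x, y, WALL_Z), feedback_force      # tip held at the wall

tip, force = apply_virtual_wall((5.0, 3.0, 24.5))
print(tip, force)   # (5.0, 3.0, 20.0) 3.6 - instrument stops "on" the wall
```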
• However, contact with the restricted area does not necessarily mean that the instrument cannot be operated; whether it may operate can also be determined by the operator's choice. For example, if bleeding occurs in a region (e.g., a blood vessel) set as a restricted area irrespective of the operator's intention, rapid hemostasis is possible only if the instrument is allowed to operate there.
• A command for ignoring the restricted area setting, so that an instrument or the like can be operated in the restricted area when it contacts the restricted area, may be input by, for example, the operator operating a predetermined button or pedal.
• The restricted area is not necessarily set only within the area currently displayed on the screen (i.e., the visible region 810); as illustrated in FIG. 31, the restricted area 840 may also be designated and set in the region not displayed on the screen (i.e., the non-visible region 820). That is, even when the restricted area 840 is set in the non-visible region 820, if contact of the instrument with the restricted area could pose a serious risk to the safety of the surgical patient, the movement of the instrument may be limited and controlled to prevent this.
• As a control method for limiting the operation of the instrument, for example, a method of locking the instrument so that it does not operate may be used; further, by feeding reaction force back to the operation handle of the user manipulating the locked instrument, the user may be informed that the operation is limited.
  • the manner in which the reaction information is output may vary as described above.
  • the operation limiting method of the above-described surgical robot system may be implemented by a software program or the like. Codes and code segments constituting a program can be easily inferred by a computer programmer in the art.
  • the program is also stored in a computer readable media, and read and executed by a computer to implement the method.
  • the information storage medium includes a magnetic recording medium, an optical recording medium and a carrier wave medium.
  • FIG. 32 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention
  • Figure 33 is a conceptual diagram showing a master interface of the surgical robot according to an embodiment of the present invention
• FIGS. 34 to 37 are views illustrating movement forms of the contact portion according to an embodiment of the present invention.
  • the surgical robot system includes a slave robot 2 performing surgery on a patient lying on an operating table and a master robot 1 remotely controlling the slave robot 2.
• the master robot 1 and the slave robot 2 are not necessarily separated into physically independent devices, but may be integrated and formed as one body, in which case the master interface 4 may correspond to, for example, the interface portion of the integrated robot.
  • the master interface 4 of the master robot 1 comprises a monitor 6, a telescopic display 20 and a master manipulator, and the slave robot 2 comprises a robot arm 3 and a laparoscope 5.
• the monitor unit 6 of the master interface 4 may be composed of one or more monitors, and each monitor may individually display information necessary for the surgery.
• FIGS. 32 and 33 illustrate a case in which one monitor of the monitor unit 6 is included on each side of the telescopic display unit 20, but the quantity of monitors may vary depending on the type or kind of information requiring display.
• the monitor unit 6 may output, for example, one or more pieces of biometric information about the patient.
• For example, at least one indicator of the patient's condition (for example, biometric information such as body temperature, pulse rate, respiration, and blood pressure) may be output through at least one monitor of the monitor unit 6; when a plurality of pieces of information are output, each piece of information may be divided and output by area.
• To provide such biometric information to the master robot 1, the slave robot 2 may include a biometric information measuring unit including at least one of a body temperature measuring module, a pulse measuring module, a respiration measuring module, a blood pressure measuring module, an electrocardiogram measuring module, and the like.
• the biometric information measured by each module may be transmitted from the slave robot 2 to the master robot 1 in the form of an analog or digital signal, and the master robot 1 may display the received biometric information through the monitor unit 6.
• the telescopic display unit 20 of the master interface 4 provides the operator with the image of the surgical site input through the laparoscope 5.
• the operator views the image through the eyepiece 220 formed in the contact portion 210 of the telescopic display unit 20, and proceeds with surgery on the surgical site by manipulating the master controller to operate the robot arm 3 and the end effector. FIG. 33 illustrates, as an example, a case in which the contact portion 210 is implemented in the form of a panel; the contact portion 210 may also be formed to be recessed toward the inside of the master interface 4.
• FIG. 33 illustrates an example in which the eyepiece 220, through which the operator views the image acquired by the laparoscope 5, is formed in the contact portion 210; however, if the contact portion 210 itself transmits the image from its rear side, the formation of the eyepiece 220 may be omitted.
• In this case, the contact portion 210 may be formed of a light-transmissive material, for example a transparent material, a material coated with a polarizing film, or a material of the kind used for watching 3D IMAX films, so that the image at the rear of the contact portion 210 can be transmitted to the operator.
• the telescopic display unit 20 is configured to function not only as a display device through which the operator checks the image of the laparoscope 5 via the eyepiece 220, but also as a control command input unit for controlling the position and image input angle of the laparoscope 5.
  • the operator's face is in contact with or close to the contact portion 210 of the telescopic display unit 20, and a plurality of supports 230 and 240 are formed to protrude so that the operator's face movement can be recognized.
• the support 230 formed at the top may be used to contact the operator's forehead and fix the forehead position, and the support 240 formed at the side may be used to contact the area under the operator's eye (for example, the cheekbone area) and fix the face position.
• the positions and quantity of the supports illustrated in FIG. 33 are exemplary, and the position or shape of the supports may be varied; for example, a jaw-fixing support, a face left-and-right support 290, or the like may be used, and the contact portion 210 may be formed in the form of a rod or a wall so as to support movement in the corresponding direction.
• the position of the operator's face is fixed by the supports 230 and 240 formed as described above, and when the operator turns the face in an arbitrary direction while viewing the image from the laparoscope 5 through the eyepiece 220, the resulting facial movement can be detected and used as input information for adjusting the position and/or the image input angle of the laparoscope 5. For example, if the operator wants to check an area to the left of the surgical site displayed in the current image (i.e., the area on the left side of the display screen), the operator can simply turn his head relatively to the left, whereupon the laparoscope 5 is manipulated so that the image of the corresponding area is output.
  • the contact portion 210 of the telescopic display unit 20 is coupled to the master interface 4 so that the position and / or the angle is changed in accordance with the operator's face movement.
• the master interface 4 and the contact portion 210 of the telescopic display unit 20 may be coupled to each other by the flow portion 250.
• the flow portion 250 may be formed of, for example, an elastic body so that the position and/or angle of the telescopic display unit 20 can easily be changed and the original state can be restored when the external force caused by the operator's face movement is removed.
• The telescopic display unit 20 may also control an original state restoring unit (see FIG. 40) so that the telescopic display unit 20 is restored to its original state.
• the contact portion 210 is moved by the flow portion 250 in the three-dimensional coordinate space formed by the X-Y-Z axes based on a virtual center point, or is rotationally moved in an arbitrary direction (e.g., clockwise, counterclockwise, etc.).
  • the virtual center point may be any one point or axis in the contact portion 210, for example, the center point of the contact portion 210.
• FIGS. 34 to 37 illustrate the movement of the contact portion 210.
• When the operator's face moves in parallel (for example, up and down or left and right), the contact portion 210 is moved in the direction in which the force due to the face movement is applied, as illustrated in FIG. 34.
• When the operator's face movement is a rotation on the X-Y plane, the contact portion 210 is rotated in the direction in which the force caused by the face movement is applied, as illustrated in FIG. 35. At this time, the contact portion 210 may be rotated clockwise or counterclockwise depending on the direction in which the force is applied.
• When the operator's face movement is a rotation about the X, Y, or Z axis, the contact portion 210 is rotated about the reference axis in the direction in which the force from the face movement is applied, as illustrated in FIG. 36. In this case, the contact portion 210 may be rotated clockwise or counterclockwise according to the direction in which the force is applied.
• When forces are applied about two axes, the contact portion 210 is rotationally moved based on the virtual center point and the two axes to which the force is applied, as illustrated in FIG. 37.
• As described above, the vertical, horizontal, and rotational movements of the contact portion 210 are determined by the direction of the force applied by the face movement, and one or more of the types of movement described above may be combined.
• the master interface 4 is provided with a master manipulator so that the operator can grip and manipulate it with both hands.
• the master manipulator may be implemented as two or more handles 10, and an operation signal according to the operator's manipulation of the handles 10 is transmitted to the slave robot 2, whereby the robot arm 3 is controlled. By the operator's manipulation of the handle 10, surgical operations such as position movement, rotation, and cutting by the robot arm 3 may be performed.
  • the handle 10 may be configured to include a main handle and a sub handle.
• the operator may operate the slave robot arm 3, the laparoscope 5, or the like with only the main handle, or may manipulate the sub handle so as to simultaneously operate a plurality of surgical equipment in real time.
  • the main handle and the sub handle may have various mechanical configurations depending on the operation method thereof.
• In addition, various input means, such as a joystick type, a keypad, a trackball, and a touch screen, may be used to operate the robot arm 3 and/or other surgical equipment of the slave robot 2.
  • the master manipulator is not limited to the shape of the handle 10 and may be applied without any limitation as long as it can control the operation of the robot arm 3 through a network.
• the master robot 1 and the slave robot 2 may be coupled to each other through a wired or wireless communication network so that an operation signal and the laparoscope image input through the laparoscope 5 can be transmitted to the counterpart. If a plurality of operation signals from the plurality of handles 10 provided in the master interface 4 and/or an operation signal for adjusting the laparoscope 5 need to be transmitted at the same time and/or at similar times, each operation signal may be transmitted to the slave robot 2 independently of the others.
• When it is said that each operation signal is transmitted 'independently', it means that the operation signals do not interfere with each other and that one operation signal does not affect another.
• In order to transmit the plurality of operation signals independently of each other, various methods may be used, such as adding header information to each operation signal at its generation step before transmission, transmitting each operation signal in its order of generation, or assigning priorities to the operation signals in advance and transmitting them accordingly.
• Alternatively, the transmission path through which each operation signal is transmitted may be provided independently, so that interference between operation signals is fundamentally prevented; the header-based scheme is sketched below.
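• A minimal serialization sketch of the header-information scheme (the field layout, IDs, and priorities are illustrative assumptions, not the patent's format) might look like this.

```python
import struct

# Sketch of the header-information scheme mentioned above: each operation
# signal carries a source ID, sequence number, and priority so the signals
# can be transmitted and demultiplexed independently (format is illustrative).

HEADER_FMT = ">BBIH"   # source id, priority, sequence number, payload length

def pack_operation_signal(source_id, priority, seq, payload: bytes) -> bytes:
    return struct.pack(HEADER_FMT, source_id, priority, seq,
                       len(payload)) + payload

def unpack_operation_signal(packet: bytes):
    hdr_size = struct.calcsize(HEADER_FMT)
    source_id, priority, seq, length = struct.unpack(HEADER_FMT,
                                                     packet[:hdr_size])
    return source_id, priority, seq, packet[hdr_size:hdr_size + length]

# A handle movement and a laparoscope adjustment generated at a similar time:
pkt_a = pack_operation_signal(source_id=1, priority=1, seq=42,
                              payload=b"dx=2,dy=-1")
pkt_b = pack_operation_signal(source_id=3, priority=0, seq=7,
                              payload=b"laparoscope:+5deg")
for pkt in sorted([pkt_a, pkt_b], key=lambda p: unpack_operation_signal(p)[1]):
    print(unpack_operation_signal(pkt))   # higher-priority signal handled first
```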
  • the robot arm 3 of the slave robot 2 can be implemented to be driven with multiple degrees of freedom.
• the robot arm 3 may comprise, for example, a surgical instrument inserted into the surgical site of the patient, a rocking drive unit for rotating the surgical instrument in the yaw direction according to the surgical position, a pitch drive unit for rotating the surgical instrument in the pitch direction perpendicular to the rotational drive of the rocking drive unit, a transfer drive unit for moving the surgical instrument in the longitudinal direction, a rotation drive unit for rotating the surgical instrument, and a surgical instrument drive unit installed at the end of the surgical instrument to incise or cut the surgical lesion.
  • the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention.
• the actual control process, such as how the robot arm 3 is rotated in the corresponding direction when the operator manipulates the handle 10, is somewhat distant from the subject matter of the present invention, so a detailed description thereof will be omitted.
• One or more slave robots 2 may be used to operate on the patient, and the laparoscope 5 for displaying the surgical site as an image (that is, a video image) viewable through the eyepiece 220 may be implemented as an independent slave robot 2.
• embodiments of the present invention may be used universally in operations in which various surgical endoscopes other than a laparoscope (e.g., a thoracoscope, an arthroscope, a rhinoscope, etc.) are used.
  • FIG. 38 is a block diagram schematically illustrating a configuration of a telescopic display unit for generating a laparoscopic manipulation command according to an embodiment of the present invention
  • FIG. 39 is a flowchart illustrating a method of transmitting a laparoscopic manipulation command according to an embodiment of the present invention.
  • the telescopic display unit 20 includes a motion detector 311, an operation command generator 321, and a transmitter 332.
• the telescopic display unit 20 may further include components that allow the operator to visually recognize the image of the surgical site input through the laparoscope 5 via the eyepiece 220; however, since these are somewhat distant from the gist of the present invention, a description thereof is omitted.
• the motion detector 311 outputs sensing information by detecting in which direction the operator moves the face while the operator's face is in contact with the supports 230 and/or 240 of the contact portion 210.
  • the motion detector 311 may include sensing means for detecting a direction and a size (eg, a distance) of the face moving.
• It is sufficient that the sensing means can detect in which direction and by how much the contact portion 210 has moved; it may be, for example, a sensor that detects in which direction and to what extent the elastic flow portion 250 supporting the contact portion 210 is stretched, or a sensor provided inside the master robot 1 that detects how closely and/or by how much feature points formed on the inner surface of the contact portion 210 have approached and/or rotated, and the like.
• the manipulation command generation unit 321 analyzes the direction and size of the operator's face movement using the sensing information received from the motion detection unit 311, and generates a manipulation command for controlling the position and image input angle of the laparoscope 5 according to the analysis result.
  • the transmission unit 332 transmits the operation command generated by the operation command generation unit 321 to the slave robot 2 so that the position and image input angle of the laparoscope 5 are manipulated, and an image is provided accordingly.
  • the transmission unit 332 may be a transmission unit already provided in the master robot 1 to transmit an operation command for the operation of the robot arm 3.
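• As a rough sketch of this sensing-to-command mapping (the dead band, gain, and units are illustrative assumptions), small face movements are ignored and larger ones are scaled into laparoscope pan/tilt commands.

```python
# Sketch of the motion-detector -> command-generator pipeline (hypothetical
# units and gains): small face movements inside a dead band are ignored,
# larger ones are scaled into laparoscope pan/tilt commands.

DEAD_BAND = 0.5      # ignore face movements below this magnitude (illustrative)
GAIN = 2.0           # degrees of laparoscope motion per unit of face motion

def generate_manipulation_command(sensing_info):
    """sensing_info: (dx, dy) face displacement from the motion detector."""
    dx, dy = sensing_info
    command = {}
    if abs(dx) > DEAD_BAND:
        command["pan_deg"] = GAIN * dx     # turn laparoscope left/right
    if abs(dy) > DEAD_BAND:
        command["tilt_deg"] = GAIN * dy    # turn laparoscope up/down
    return command or None                 # None -> nothing to transmit

print(generate_manipulation_command((1.2, 0.1)))   # {'pan_deg': 2.4}
print(generate_manipulation_command((0.2, 0.3)))   # None
```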
• FIG. 39 shows a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
• the telescopic display unit 20 detects the operator's face movement in step 410, and proceeds to step 420 to generate a manipulation command for the laparoscope 5 using the sensing information generated by the detection of the face movement.
• In step 430, the manipulation command generated in step 420 is transmitted to the slave robot 2 for manipulation of the laparoscope 5.
• the operation command generated for operating the laparoscope 5 may also cause a specific operation to be performed on the master robot 1. For example, when rotation of the face is detected so that the laparoscope 5 is rotated, a manipulation command for the rotation is transmitted to the slave robot 2 and the direction of the manipulation handle of the master robot 1 is correspondingly changed; by doing so, the intuitiveness and ease of operation for the operator can be maintained.
• That is, when the laparoscope 5 is rotated by the generated operation signal, the image displayed on the screen and the position of the surgical tool shown in the image may no longer coincide with the current position of the hand holding the manipulation handle; therefore, an operation of moving the manipulation handle so that its position matches the position of the surgical tool displayed on the screen may be performed.
• Such control of the manipulation handle direction may be applied in the same way not only to rotary motion of the contact portion 210 but also to linear motion, whenever the position/direction of the surgical tool displayed on the screen and the actual position/direction of the manipulation handle do not coincide.
  • FIG. 40 is a block diagram schematically illustrating the configuration of a telescopic display unit for generating a laparoscopic manipulation command according to an embodiment of the present invention
  • FIG. 41 is a flowchart illustrating a method of transmitting a laparoscopic manipulation command according to an embodiment of the present invention.
• the telescopic display unit 20 may include a motion detector 311, an operation command generator 321, a transmitter 332, a contact detector 510, and an original state restorer 520.
• the motion detector 311 may operate while the sensing information from the contact detector 510 indicates that the operator's face is in contact with the supports 230 and/or 240.
• the touch detector 510 detects whether the operator's face is in contact with the supports 230 and/or 240 and outputs sensing information.
  • a touch sensor may be provided at the end of the support, and in addition, various sensing schemes may be applied to detect whether a face is in contact.
• the original state restoring unit 520 controls the motor driver 530 so that the contact portion 210 is returned to its original state when the sensing information from the contact detecting unit 510 indicates that the operator's face has been released from the supports 230 and/or 240.
  • the original state restorer 520 may include a motor driver 530 to be described below.
• Here, the motor driving unit 530 using a motor is illustrated as the operating means for returning the contact portion 210 to its original state, but it is obvious that the operating means for achieving the same purpose is not limited thereto.
  • the contact portion 210 may be treated to return to its original state by various methods such as pneumatic or hydraulic pressure.
• the original state restoring unit 520 may control the motor driving unit 530 using, for example, information on the reference state (i.e., position and/or angle) of the contact portion 210, or may control the motor driving unit 530 so that the contact portion is manipulated in the reverse direction and by the reverse amount of the face movement direction and size analyzed by the operation command generation unit 321, so that the contact portion 210 is returned to its original position.
• For example, when the operator has turned his face in a corresponding direction (whereby the contact portion 210 is also moved or rotated) in order to check a region different from the surgical region displayed in the current image, or to take an action on that region, the original state restoring unit 520 may control the motor driving unit 530 so that the contact portion 210 returns to the reference state designated as the default.
• the motor driving unit 530 may include a motor that rotates under the control of the original state restoring unit 520, and the state (e.g., position and/or angle) of the contact portion 210 is adjusted by the rotation of the motor; to this end, the motor driving unit 530 and the contact portion 210 are coupled to each other.
  • the motor driver 530 may be formed to be received inside the master interface 4.
• the motor included in the motor driving unit 530 may be, for example, a spherical motor allowing multi-degree-of-freedom movement, and the support structure of the spherical motor may be composed of a spherical bearing and a circular rotor, or of a frame structure having three degrees of freedom for supporting the circular rotor, in order to remove the limitation of the inclination angle.
• While the contact portion 210 is being returned to its original state, the operation command generation unit 321 does not generate and transmit a manipulation command, and thus the image input and output by the laparoscope 5 does not change; therefore, the operator can continue the operation consistently after checking the image of the laparoscope 5 through the eyepiece 220.
• FIG. 41 shows a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
• the telescopic display unit 20 detects the operator's face movement in step 410, and proceeds to step 420 to generate a manipulation command for the laparoscope 5 using the sensing information generated by the detection of the face movement. Thereafter, in step 430, the manipulation command generated in step 420 is transmitted to the slave robot 2 for manipulation of the laparoscope 5.
• the telescopic display unit 20 determines whether the operator has released contact with the contact portion 210 in step Q610. If the contact is maintained, the process proceeds to step 410 again; if the contact has been released, the process proceeds to step Q620 and the contact portion 210 is controlled to return to its original position, as in the sketch below.
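• A compact sketch of this restoration logic (hypothetical class and units; a real system would drive the motor unit 530 rather than return a dictionary) accumulates the displacement during contact and reverses it on release.

```python
# Sketch of the original-state restoration logic: while contact is held the
# displacement is accumulated; when contact is released the motor driver is
# commanded to reverse it (names and interfaces are hypothetical).

class OriginalStateRestorer:
    def __init__(self):
        self.displacement = [0.0, 0.0]     # accumulated contact-part offset

    def on_face_motion(self, dx, dy, in_contact):
        if in_contact:                     # track motion only during contact
            self.displacement[0] += dx
            self.displacement[1] += dy

    def on_contact_released(self):
        # Drive the motor in the reverse direction and size of the motion.
        reverse = (-self.displacement[0], -self.displacement[1])
        self.displacement = [0.0, 0.0]
        return {"motor_command": reverse}  # stand-in for motor driver control

restorer = OriginalStateRestorer()
restorer.on_face_motion(1.5, -0.5, in_contact=True)
restorer.on_face_motion(0.5, 0.0, in_contact=True)
print(restorer.on_contact_released())      # {'motor_command': (-2.0, 0.5)}
```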
  • FIG. 42 is a block diagram schematically illustrating a configuration of a telescopic display unit for generating a laparoscopic manipulation command according to an embodiment of the present invention.
• the telescopic display unit 20 includes a contact detector 510, a camera unit 710, a storage unit 720, an eye tracker unit 730, an operation command generation unit 321, a transmission unit 332, and a controller 740.
• the touch detector 510 detects whether the operator's face is in contact with the supports 230 and/or 240 formed to protrude from the contact portion 210 and outputs sensing information.
• When it is detected from the sensing information of the contact detector 510 that the operator's face is in contact with the contact portion 210, the camera unit 710 photographs an image of the operator's eye in real time.
  • the camera unit 710 is arranged to photograph the operator's eye seen through the eyepiece 220 inside the master interface 4.
  • the image of the operator's eye photographed by the camera unit 710 is stored in the storage unit 720 for the eye tracking process of the eye tracker unit 730.
• It is sufficient that the image photographed by the camera unit 710 be stored in a form enabling the eye tracking process of the eye tracker unit 730; since the image generating method and the generated image type for the eye tracking process will be apparent to those skilled in the art, a description thereof is omitted.
  • the eye tracker unit 730 analyzes the images stored in the storage unit 720 in real time or at predetermined intervals in chronological order, and analyzes the change of the pupil position of the operator and the gaze direction by the operator and outputs the analysis information. In addition, the eye tracker unit 730 may further analyze the shape of the pupil (for example, blinking eyes, etc.) and output analysis information thereof.
• the operation command generation unit 321 refers to the analysis information from the eye tracker unit 730 and, when the operator's gaze direction has changed, generates an operation command for controlling the position and/or image input angle of the laparoscope 5 accordingly. In addition, the operation command generation unit 321 may generate a corresponding operation command if a change in the shape of the pupil corresponds to the input of a predetermined command.
• The designated command according to the change in the shape of the pupil may be specified in advance, for example, approach of the laparoscope 5 toward the surgical site in the case of two consecutive blinks of the right eye, and clockwise rotation in the case of two consecutive blinks of the left eye.
  • the transmission unit 332 transmits the operation command generated by the operation command generation unit 321 to the slave robot 2 so that the position and image input angle of the laparoscope 5 are manipulated, and an image is provided accordingly.
  • the transmission unit 332 may be a transmission unit already provided in the master robot 1 to transmit an operation command for the operation of the robot arm 3.
  • the controller 740 controls each of the above components to perform a specified operation.
• In the above, the telescopic display unit 20 that recognizes and processes eye movements using eye tracking technology has been described; however, the present invention is not limited thereto, and the telescopic display unit 20 may also be implemented in a manner of detecting, recognizing, and processing the movement of the operator's face itself.
• For example, if the camera unit 710 captures a face image and an analysis processing unit replacing the eye tracker unit 730 analyzes the position and change of one or more feature points (for example, the positions of the two eyes, the position of the nose, the position of the philtrum, etc.), the operation command generation unit 321 may generate a corresponding operation command.
• FIG. 43 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
• In step Q810, the telescopic display unit 20 activates the camera unit 710 to photograph the operator's eye seen through the eyepiece 220, generates digital image data, and stores it in the storage unit 720.
• In step Q820, the telescopic display unit 20 compares the digital image data stored in the storage unit 720 in real time or at predetermined intervals, and generates analysis information about changes in the operator's pupil position and gaze direction. In this comparison, the telescopic display unit 20 may allow a certain error so that a positional change within a certain range is recognized as no change in pupil position.
• In step Q830, the telescopic display unit 20 determines whether the operator's changed gaze direction is maintained for a preset threshold time or longer.
• If so, the telescopic display unit 20 generates an operation command for manipulating the laparoscope 5 (e.g., moving it and/or changing its image input angle) so that an image of the corresponding position is input, and transmits the command to the slave robot 2.
• the threshold time may be set so as to prevent the laparoscope 5 from being manipulated by incidental movements of the operator's pupils or by a general survey of the surgical site; the time value may be set experimentally or statistically, or may be set by the operator or the like. The dwell check is sketched below.
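• A minimal dwell-time sketch (the threshold value and dictionary command are illustrative assumptions) fires a laparoscope command only after a changed gaze direction has been held long enough.

```python
import time

# Sketch of the threshold-time check: a changed gaze direction only produces
# a laparoscope command after it has been held for THRESHOLD seconds
# (value illustrative; the patent leaves it to experiment or operator choice).

THRESHOLD = 1.0

class GazeDwellDetector:
    def __init__(self):
        self.direction = None
        self.since = None

    def update(self, direction, now=None):
        now = time.monotonic() if now is None else now
        if direction != self.direction:        # gaze changed: restart timer
            self.direction, self.since = direction, now
            return None
        if now - self.since >= THRESHOLD:      # held long enough: fire once
            self.since = float("inf")
            return {"move_laparoscope_towards": direction}
        return None

det = GazeDwellDetector()
print(det.update("left", now=0.0))    # None - gaze just changed
print(det.update("left", now=0.4))    # None - still under threshold
print(det.update("left", now=1.1))    # command fires after the dwell
```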
  • FIG. 44 is a flowchart illustrating a method of transmitting a laparoscopic manipulation command according to an embodiment of the present invention
  • FIG. 45 is a view illustrating an image display form by a telescopic display unit according to an embodiment of the present invention.
• In step Q810, the telescopic display unit 20 activates the camera unit 710 to photograph the operator's eye seen through the eyepiece 220, generates digital image data, and stores it in the storage unit 720.
• In step Q820, the telescopic display unit 20 compares the digital image data stored in the storage unit 720 in real time or at predetermined intervals, and generates analysis information about changes in the operator's pupil position and gaze direction.
• In step Q910, the telescopic display unit 20 determines whether the operator's gaze position is at a predetermined set position.
• the operator may view a video image 1010 provided through the laparoscope 5, and the video image may include the surgical site and an instrument 1020.
• the image displayed by the telescopic display unit 20 may be overlaid with the operator's gaze position 1030, and the set positions may be displayed together.
  • the setting position may include one or more of an outer edge 1040, a first rotation instruction position 1050, a second rotation instruction position 1060, and the like.
• When the operator gazes at the outer edge 1040 for a threshold time or longer, the laparoscope 5 may be controlled to move in the corresponding direction; that is, when the left side of the outer edge 1040 is gazed at for more than the threshold time, the laparoscope 5 may be controlled to move to the left so as to photograph the area to the left of the current display position.
• In addition, when the operator gazes at the first rotation instruction position 1050 for the threshold time or longer, the laparoscope is controlled to rotate counterclockwise, and when the operator gazes at the second rotation instruction position 1060 for the threshold time or longer, the laparoscope may be controlled to rotate clockwise.
• If the operator's gaze position is other than the above-described set positions, the process proceeds to step Q810 again.
• Step Q920 determines whether the operator's gaze is maintained for the predetermined threshold time or longer.
• If the operator's gaze at the set position is maintained for the threshold time or longer, the telescopic display unit 20 generates, in step Q930, an operation command for operating the laparoscope 5 according to the command specified for that set position, and transmits it to the slave robot 2.
• Otherwise, the process proceeds to step Q810 again. The mapping of gaze positions to commands is sketched below.
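• A simple dispatch sketch for the set positions of FIG. 45 (the screen geometry, region sizes, and the stand-in coordinates for positions 1050 and 1060 are all illustrative assumptions) classifies the gaze point into pan and rotation commands.

```python
# Sketch of mapping gaze positions to the set positions of FIG. 45
# (screen geometry and region sizes are illustrative assumptions).

WIDTH, HEIGHT, EDGE = 640, 480, 40
ROTATE_CCW_POS = (60, 440)     # stand-ins for positions 1050 and 1060
ROTATE_CW_POS = (580, 440)

def classify_gaze(x, y):
    if abs(x - ROTATE_CCW_POS[0]) < 20 and abs(y - ROTATE_CCW_POS[1]) < 20:
        return "rotate_ccw"                # first rotation instruction position
    if abs(x - ROTATE_CW_POS[0]) < 20 and abs(y - ROTATE_CW_POS[1]) < 20:
        return "rotate_cw"                 # second rotation instruction position
    if x < EDGE:                           # outer-edge regions -> pan commands
        return "pan_left"
    if x > WIDTH - EDGE:
        return "pan_right"
    if y < EDGE:
        return "pan_up"
    if y > HEIGHT - EDGE:
        return "pan_down"
    return None                            # ordinary viewing: no command

print(classify_gaze(10, 240))    # pan_left  (left outer edge watched)
print(classify_gaze(61, 439))    # rotate_ccw
print(classify_gaze(320, 240))   # None
```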
• FIG. 46 is a flowchart illustrating a method of transmitting a laparoscope manipulation command according to an embodiment of the present invention.
• the telescopic display unit 20 activates the camera unit 710 to photograph the operator's eye seen through the eyepiece 220, generates digital image data, and stores it in the storage unit 720.
• the telescopic display unit 20 compares the image information stored in the storage unit 720 in real time or at a predetermined period to generate analysis information about changes in the shape of the operator's eyes.
• the analysis information may be, for example, information about how many times the operator's eyes blinked during a certain time, and which eye blinked.
• the telescopic display unit 20 determines whether the analysis information regarding the change in eye shape satisfies a predetermined condition.
• the designated condition according to the change of eye shape may be set in advance, for example, whether the right eye blinks twice within a predetermined time or whether the left eye blinks twice within a predetermined time.
• If the condition is satisfied, the flow advances to step 1130, where an operation command for manipulating the laparoscope 5 according to the designated command is generated and transmitted to the slave robot 2.
• The designated command according to the change of eye shape may be specified in advance, for example, approach of the laparoscope 5 toward the surgical site in the case of two consecutive blinks of the right eye, or clockwise rotation in the case of two consecutive blinks of the left eye.
• If the analysis information on the eye shape change does not satisfy the predetermined condition, the flow advances to step Q910 again. The blink-to-command matching is sketched below.
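• A compact blink-pattern sketch (the window length and command table are illustrative assumptions) counts recent blinks per eye and looks them up in a preset command table.

```python
import time

# Sketch of the blink-command matching described above: blinks per eye are
# kept in a sliding window and compared to a preset command table
# (window length and command mapping are illustrative).

WINDOW = 1.5
COMMANDS = {("right", 2): "approach_surgical_site",
            ("left", 2): "rotate_clockwise"}

class BlinkCommandDetector:
    def __init__(self):
        self.blinks = []                    # (eye, timestamp)

    def on_blink(self, eye, now=None):
        now = time.monotonic() if now is None else now
        self.blinks = [(e, t) for e, t in self.blinks if now - t <= WINDOW]
        self.blinks.append((eye, now))
        count = sum(1 for e, _ in self.blinks if e == eye)
        command = COMMANDS.get((eye, count))
        if command:
            self.blinks = []                # consume the matched gesture
        return command

det = BlinkCommandDetector()
print(det.on_blink("right", now=0.0))   # None - first blink only
print(det.on_blink("right", now=0.6))   # approach_surgical_site
```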
  • the above-described laparoscopic manipulation method may be implemented by a software program or the like. Codes and code segments constituting a program can be easily inferred by a computer programmer in the art.
  • the program is also stored in a computer readable media, and read and executed by a computer to implement the method.
  • the information storage medium includes a magnetic recording medium, an optical recording medium and a carrier wave medium.
  • Figure 48 is a conceptual diagram showing a master interface of the surgical robot according to an embodiment of the present invention.
  • the output position of the endoscope image 9 output to the monitor viewed by the user corresponds to the viewpoint of the endoscope changing according to the movement of the surgical endoscope, so that the user can feel the actual surgical situation more realistically.
  • the present embodiment may match the view of the endoscope in the abdominal cavity with the position and output direction of the monitor outputting the endoscope image 9 at the external surgery site.
  • the motion of the system located at the external surgery site reflects the motion of the endoscope moving inside the actual patient.
• Surgical endoscopes according to the present embodiment may be any of various tools used as imaging tools during surgery, such as a laparoscope as well as a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a rectoscope, a duodenoscope, a mediastinoscope, and a cardioscope.
  • the surgical endoscope according to the present embodiment may be a stereoscopic endoscope. That is, the surgical endoscope according to the present embodiment may be a stereoscopic endoscope for generating stereoscopic image information, and the stereoscopic image information generating method may be implemented by various techniques.
• For example, the surgical endoscope according to the present embodiment may include a plurality of cameras to acquire a plurality of images carrying stereoscopic information, or may acquire a plurality of images using a single camera.
  • the surgical endoscope according to the present embodiment may of course generate a stereoscopic image by various other methods.
• the immersive surgical image processing apparatus according to the present embodiment is not necessarily limited to the surgical robot system as shown, and is applicable to any system that outputs an endoscope image 9 during surgery and performs the operation using surgical tools.
• Hereinafter, a case in which the surgical image processing apparatus according to the present embodiment is applied to a surgical robot system will be described.
  • the surgical robot system includes a slave robot 2 performing surgery on a patient lying on an operating table and a master robot 1 remotely controlling the slave robot 2.
• the master robot 1 and the slave robot 2 are not necessarily separated into physically independent devices, but may be integrated and formed as one body, in which case the master interface 4 may correspond to, for example, the interface portion of the integrated robot.
  • the master interface 4 of the master robot 1 comprises a monitor part 6 and a master controller
  • the slave robot 2 comprises a robot arm 3 and an instrument 8.
• the instrument 8 is a surgical tool such as an endoscope (for example, a laparoscope) or a surgical instrument for directly manipulating the affected part.
• the master interface 4 is provided with a master controller so that the operator can grip and manipulate it with both hands.
  • the master controller may be implemented with two handles 10 as illustrated in FIGS. 47 and 48.
• an operation signal according to the operator's manipulation of the handle 10 is transmitted to the slave robot 2, whereby the robot arm 3 is controlled.
  • the position movement, rotation, cutting, etc. of the robot arm 3 and / or the instrument 8 may be performed.
  • the handle 10 may be composed of a main handle and a sub handle. It is also possible to operate the slave robot arm 3, the instrument 8, etc. with only one handle, or to operate a plurality of surgical equipment in real time by adding a sub handle.
  • the main handle and the sub handle may have various mechanical configurations depending on the operation method thereof.
• In addition, various input means, such as a joystick type, a keypad, a trackball, and a touch screen, may be used to operate the robot arm 3 and/or other surgical equipment of the slave robot 2.
  • the master controller is not limited to the shape of the handle 10 and may be applied without any limitation as long as it can control the operation of the robot arm 3 through a network.
• the monitor unit 6 of the master interface 4 displays, as video images, the endoscope image 9 input through the instrument 8, a camera image, and a modeling image.
  • the information displayed on the monitor unit 6 may vary according to the type of the selected image.
• the monitor unit 6 may be composed of one or more monitors, and information necessary for the surgery may be displayed separately on each monitor. FIGS. 47 and 48 illustrate the case in which the monitor unit 6 includes three monitors, but the quantity of monitors may be variously determined according to the type or kind of information requiring display.
• When the monitor unit 6 includes a plurality of monitors, the screens may be extended in cooperation with one another; that is, the endoscope image 9 may move freely across the monitors like a window displayed on one monitor, and the entire image may be output by displaying connected partial images on the respective monitors.
  • the slave robot 2 and the master robot 1 are coupled to each other by wire or wirelessly, so that the master robot 1 transmits operation signals to the slave robot 2 and the slave robot 2 transmits the endoscope image 9 input through the instrument 8 to the master robot 1. If the two operation signals provided by the two handles 10 of the master interface 4 and/or an operation signal for adjusting the position of the instrument 8 need to be transmitted at the same time and/or at similar time points, each operation signal may be transmitted to the slave robot 2 independently.
  • here, saying that each operation signal is transmitted 'independently' means that the operation signals do not interfere with each other and that one operation signal does not affect another.
  • in order to transmit the plurality of operation signals independently of one another, various methods may be used, such as adding header information to each operation signal at the time of its generation, transmitting the operation signals in the order of their generation, or assigning priorities to the operation signals in advance and transmitting them accordingly.
  • the transmission path through which each operation signal is transmitted may be provided independently so that interference between each operation signal may be fundamentally prevented.
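  • As an illustration of the independent transmission described above, the following sketch shows operation signals tagged with header information (source, sequence number, priority) so that signals generated at nearly the same time can be demultiplexed without affecting one another; the field names and wire format are assumptions for illustration, not the system's actual protocol.

```python
# Hypothetical sketch: each operation signal carries its own header so that
# two signals generated at (nearly) the same time can be transmitted
# independently and separated again without interfering with each other.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class OperationSignal:
    source: str      # e.g. "handle_left", "handle_right", "instrument_pos"
    seq: int         # per-source sequence number, preserves generation order
    priority: int    # priority assigned in advance (lower = more urgent)
    payload: dict    # e.g. joint deltas or grip commands
    timestamp: float

def encode(sig: OperationSignal) -> bytes:
    """Serialize one signal; the header travels with the payload."""
    return json.dumps(asdict(sig)).encode("utf-8")

def decode(raw: bytes) -> OperationSignal:
    return OperationSignal(**json.loads(raw.decode("utf-8")))

left = OperationSignal("handle_left", seq=41, priority=1,
                       payload={"dx": 0.2, "dy": -0.1}, timestamp=time.time())
right = OperationSignal("handle_right", seq=17, priority=1,
                        payload={"roll_deg": 3.0}, timestamp=time.time())
# the two signals may even travel over separate transmission paths
for raw in (encode(left), encode(right)):
    print(decode(raw).source)
```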
  • the robot arm 3 of the slave robot 2 can be implemented to be driven with multiple degrees of freedom.
  • the robot arm 3 may include, for example, a surgical tool inserted into the surgical site of the patient, a yaw drive unit for rotating the surgical tool in the yaw direction according to the surgical position, a pitch drive unit for rotating the surgical tool in the pitch direction perpendicular to the rotational drive of the yaw drive unit, a transfer drive unit for moving the surgical tool in the longitudinal direction, a rotation drive unit for rotating the surgical tool, and a surgical tool drive unit installed at the end of the surgical tool to incise or cut the surgical lesion.
  • the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention.
  • the actual control process, such as rotating the robot arm 3 in a corresponding direction as the operator manipulates the handle 10, is somewhat removed from the gist of the present invention, so a detailed description thereof will be omitted.
  • One or more slave robots 2 may be used to operate on the patient, and the instrument 8 for causing the surgical site to be displayed as a picture image through the monitor unit 6 may be implemented as an independent slave robot 2.
  • the master robot 1 may also be implemented integrally with the slave robot 2.
  • a master robot 1 including an image input unit 310, a screen display unit 320, an arm operation unit 330, an operation signal generation unit 340, a screen display control unit 3350, and a control unit 370, and a slave robot 2 including a robot arm 3 and an endoscope 5 are shown.
  • the tangible surgical image processing apparatus may be implemented as a module including the image input unit 310, the screen display unit 320, and the screen display control unit 3350.
  • such a module may further include the arm manipulation unit 330, the operation signal generator 340, and the controller 370.
  • the image input unit 310 receives an image input through the endoscope 5 of the slave robot 2 through wired or wireless transmission.
  • the endoscope 5 may also be regarded as one type of surgical tool according to the present embodiment, and one or more endoscopes may be provided.
  • the screen display unit 320 outputs, as visual information, a picture image corresponding to the image received through the image input unit 310.
  • the screen display unit 320 may output the endoscope image as it is or zoomed in/out, may match the endoscope image with a modeling image to be described later, or may output each as a separate image.
  • the screen display unit 320 may output the endoscope image together with an image reflecting the overall surgical situation, for example a camera image generated by a camera photographing the outside of the surgical target, simultaneously and/or matched with each other, which makes it easy to grasp the surgical situation.
  • the screen display unit 320 may also output a reduced version of the entire image (endoscope image, modeling image, camera image, etc.) in a portion of the output image or in a window generated on a separate screen, and the operator may move or rotate the entire output image accordingly, like the so-called bird's-eye-view function of a CAD program.
  • Functions such as zooming in / out, moving, and rotating the image output to the screen display unit 320 as described above may be controlled by the controller 370 according to the operation of the master controller.
  • the screen display unit 320 may be implemented in the form of a monitor unit 6.
  • an image processing process for outputting the received image as a picture image through the screen display unit 320 may be performed by the control unit 370, the screen display control unit 3350, or a separate image processor (not shown).
  • the screen display unit 320 according to the present embodiment may be a display implemented by various technologies, and may be, for example, an ultra-high-resolution monitor such as a UHDTV (7680x4320).
  • the screen display 320 according to the present embodiment may be a 3D display.
  • the screen display 320 according to the present exemplary embodiment may allow the user to separately recognize the left eye and right eye images by using the principle of binocular disparity.
  • such a 3D image may be implemented in various ways, such as a glasses-based method (for example, anaglyph (red-blue) glasses, polarized (passive) glasses, or shutter (active) glasses), a lenticular method, or a barrier method.
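  • As a brief illustration of the anaglyph (red-blue glasses) method mentioned above, the sketch below combines a left-eye image and a right-eye image into a single frame by taking the red channel from the left image and the green/blue channels from the right image; the array shapes and channel ordering are assumptions for illustration.

```python
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Combine two views of the same scene, captured from slightly offset
    viewpoints (binocular disparity), into one red-cyan anaglyph frame."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red channel: seen by the left eye
    out[..., 1:] = right_rgb[..., 1:]  # green/blue channels: right eye
    return out

# two (H, W, 3) uint8 frames from a stereo endoscope (placeholder data)
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.zeros((480, 640, 3), dtype=np.uint8)
stereo_frame = make_anaglyph(left, right)
```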
  • the screen display unit 320 outputs the input endoscope image to a specific region.
  • the specific area may be an area on the screen having a predetermined size and location. This particular area may be determined in correspondence with the change of viewpoint of the endoscope 5 as described above.
  • the screen display controller 3350 may set this specific area in accordance with the viewpoint of the endoscope 5. That is, the screen display control unit 3350 tracks the point of view corresponding to the motion, such as rotation and movement of the endoscope 5, and sets the specific area for outputting the endoscope image on the screen display unit 320 by reflecting the viewpoint.
  • the arm manipulation unit 330 is a means for allowing the operator to manipulate the position and function of the robot arm 3 of the slave robot 2.
  • the arm manipulation unit 330 may be formed in the shape of the handle 10 as illustrated in FIG. 48, but the shape is not limited thereto and may be modified into various shapes achieving the same purpose. For example, some parts may be formed in the shape of a handle and others in other shapes such as a clutch button, and a finger-insertion tube or ring into which the operator's fingers are inserted and fixed may further be formed to facilitate manipulation of the surgical tool.
  • the operation signal generator 340 generates a corresponding operation signal when the operator manipulates the arm operation unit 330 to move the robot arm 3 and/or the endoscope 5 or to perform a surgical operation, and transmits the operation signal to the slave robot 2.
  • the operation signal may be transmitted and received via a wired or wireless communication network.
  • that is, the operation signal generator 340 generates an operation signal using the manipulation information resulting from the operator's manipulation of the arm operation unit 330, and transmits the generated operation signal to the slave robot 2 so that the robot is manipulated accordingly. In addition, the position and operation shape of the actual surgical instrument manipulated by the operation signal can be confirmed by the operator through the image input by the endoscope 5.
  • the screen display controller 3350 may include an endoscope perspective tracker 1351, an image movement information extractor 1353, and an image position setter 1355.
  • the endoscope perspective tracking unit 1351 tracks the perspective information of the endoscope 5 corresponding to the movement and rotation of the endoscope 5.
  • the viewpoint information refers to the viewpoint viewed by the endoscope 5, and may be extracted from the signals for manipulating the endoscope 5 in the above-described surgical robot system. That is, the viewpoint information can be specified by the signals that manipulate the movement and rotational motion of the endoscope 5. Since the endoscope 5 manipulation signal is generated in the surgical robot system and transmitted to the robot arm 3 that manipulates the endoscope 5, this signal can be used to track the viewpoint of the endoscope 5.
  • the image movement information extractor 1353 extracts movement information of the endoscope image using the viewpoint information of the endoscope 5. That is, the viewpoint information of the endoscope 5 may include information on the position change amount of the target object of the acquired endoscope image, and the movement information of the endoscope image may be extracted from this information.
  • the image position setting unit 1355 sets the specific area of the screen display unit 320 on which the endoscope image is output, using the extracted movement information. For example, if the viewpoint information of the endoscope 5 has changed by a predetermined vector A, the endoscope image of the patient's internal organs has movement information corresponding to that vector, and the specific area of the screen display unit 320 is set using this movement information. If the endoscope image has changed by a predetermined vector B, the specific area in which the endoscope image is actually output on the screen display unit 320 may be set using this information together with the size, shape, and resolution of the screen display unit 320.
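  • A minimal sketch of this mapping, under assumed units and a simple clamping policy (neither of which is specified by the embodiment), might look as follows: the viewpoint change vector of the endoscope 5 is scaled into screen pixels and the image center is clamped so that the specific area stays within the screen display unit 320.

```python
# Illustrative sketch (not the embodiment's actual implementation) of the
# image position setter: a viewpoint change of the endoscope is scaled into
# screen pixels and the output area is clamped to the screen display unit.
def set_output_area(center, viewpoint_delta, screen_size, image_size,
                    pixels_per_unit=1.0):
    """center: current (x, y) center of the endoscope image on screen.
    viewpoint_delta: (dx, dy) viewpoint change of the endoscope (vector A).
    Returns the new center of the specific output area."""
    sw, sh = screen_size
    iw, ih = image_size
    x = center[0] + viewpoint_delta[0] * pixels_per_unit
    y = center[1] + viewpoint_delta[1] * pixels_per_unit
    # keep the whole image inside the screen display unit
    x = min(max(x, iw / 2), sw - iw / 2)
    y = min(max(y, ih / 2), sh - ih / 2)
    return (x, y)

# a unit viewpoint change moves the image center from (X, Y) to (X+1, Y-1)
print(set_output_area((960, 540), (1, -1), (1920, 1080), (640, 480)))
```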
  • FIG. 51 is a flowchart illustrating a tangible surgical image processing method according to an embodiment of the present invention. Each step to be performed below may be executed by the screen display control unit 3350 as a subject, and the steps need not be executed in time series in the order described.
  • In step R511, the viewpoint information of the endoscope 5, which is information about the viewpoint viewed by the endoscope 5, is tracked in correspondence with the movement and rotation of the endoscope 5.
  • the viewpoint information is specified by the signals that manipulate the movement and rotational motion of the endoscope 5, so the direction that the endoscope 5 faces can be tracked.
  • In step R513, the movement information of the endoscope image, which corresponds to the amount of positional change of the object captured in the endoscope image, is extracted using the viewpoint information of the endoscope 5.
  • In step R515, a specific area of the screen display unit 320 for outputting the endoscope image is set using the extracted movement information. That is, when the viewpoint information of the endoscope 5 and the movement information of the endoscope image are specified as described above, the specific area for outputting the endoscope image on the screen display unit 320 is set using the movement information.
  • In step R517, the endoscope image is output to the specific area set on the screen display unit 320.
  • for example, the screen display unit 320 may be a full screen, and the endoscope image 620 acquired from the endoscope 5 may be output at a specific position of the screen display unit 320, for example, at a position centered on coordinates (X, Y). The coordinates (X, Y) can be set corresponding to the amount of change in the viewpoint of the endoscope 5.
  • in this case, when the viewpoint of the endoscope 5 changes, the center point of the endoscope image 620 may be moved, for example, to the position of coordinates (X+1, Y-1).
  • FIG. 53 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • Referring to FIG. 53, a master robot 1 including an image input unit 310, a screen display unit 320, an arm operation unit 330, an operation signal generation unit 340, a screen display control unit 3350, an image storage unit 360, and a control unit 370, and a slave robot 2 including a robot arm 3 and an endoscope 5 are shown. The differences from the above will be explained mainly.
  • the present embodiment has a feature of extracting a previously input and stored endoscope image at the present time point and outputting the endoscope image together with the current endoscope image to inform the user of the change of the endoscope image.
  • the image input unit 310 receives a first endoscope image and a second endoscope image provided at different time points from the surgical endoscope.
  • here, ordinal numbers such as 'first' and 'second' are identifiers for distinguishing different endoscope images, and the first endoscope image and the second endoscope image may be images captured by the endoscope 5 at different time points and from different viewpoints.
  • the image input unit 310 may receive the first endoscope image before the second endoscope image.
  • the image storage unit 360 stores the first endoscope image and the second endoscope image.
  • the image storage unit 360 stores not only image information which is actual image content of the first endoscope image and the second endoscope image, but also information on a specific region to be output to the screen display unit 320.
  • the screen display unit 320 outputs the first endoscope image and the second endoscope image to different areas, and the screen display control unit 3350 may control the screen display unit 320 so that the first endoscope image and the second endoscope image are output to different areas corresponding to the different viewpoints of the endoscope 5.
  • the screen display unit 320 may output the first endoscope image and the second endoscope image with one or more of their saturation, brightness, color, and screen pattern differing from each other.
  • for example, the screen display unit 320 may output the currently input second endoscope image as a color image and the first endoscope image, which is a past image, as a black-and-white image, so that the user can distinguish the images from each other. Referring to the figure, the second endoscope image 623, which is the currently input image, is output as a color image at coordinates (X1, Y1), and the first endoscope image 621, which is a previously input image, is output at coordinates (X2, Y2) with a screen pattern, that is, a hatched pattern, formed on it.
  • the first endoscope image, which is a previous image, may be output continuously or only for a preset time.
  • in the latter case, the past image is output on the screen display unit 320 only for the predetermined time, so that new endoscope images can be continuously updated on the screen display unit 320.
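  • The following sketch illustrates one way such a store might behave, assuming a fixed display time and black-and-white rendering for past images (both are illustrative choices, and the names are hypothetical): the current image is shown in color at its new position, while stored past images are re-displayed at their old positions until they expire.

```python
# Hedged sketch of the stored-image behaviour: the current (second) endoscope
# image is shown in colour at its new position, while previously stored
# (first) images are re-displayed at their old positions in black and white,
# and only for a preset time.
import time
import numpy as np

PAST_IMAGE_TTL = 3.0  # seconds a past image stays on screen (assumption)

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    gray = rgb.mean(axis=2).astype(rgb.dtype)
    return np.stack([gray] * 3, axis=2)

class ImageStore:
    def __init__(self):
        self.entries = []  # (image, screen_position, stored_at)

    def store(self, image, position):
        self.entries.append((image, position, time.time()))

    def visible_past_images(self):
        """Past images still within their display window, rendered B/W."""
        now = time.time()
        self.entries = [e for e in self.entries
                        if now - e[2] <= PAST_IMAGE_TTL]
        return [(to_grayscale(img), pos) for img, pos, _ in self.entries]

store = ImageStore()
store.store(np.zeros((480, 640, 3), dtype=np.uint8), (100, 200))  # first image
current = (np.zeros((480, 640, 3), dtype=np.uint8), (150, 180))   # second image
overlay = store.visible_past_images() + [current]
```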
  • the screen display controller 3350 may include an endoscope perspective tracker 1351, an image movement information extractor 1353, an image position setting unit 1355, and a stored image display unit 1357.
  • the endoscope perspective tracking unit 1351 tracks the viewpoint information of the endoscope 5 in correspondence with the movement and rotation of the endoscope 5, and the image movement information extracting unit 1353 extracts the movement information of the endoscope image using the viewpoint information of the endoscope 5, as described above in detail.
  • the image position setting unit 1355 sets a specific area of the screen display unit 320 on which the endoscope image is output using the extracted movement information.
  • while the screen display unit 320 outputs the second endoscope image input in real time, the stored image display unit 1357 extracts the first endoscope image stored in the image storage unit 360 and outputs it to the screen display unit 320. Since the output regions and image information of the first endoscope image and the second endoscope image differ from each other, the stored image display unit 1357 extracts this information from the image storage unit 360 and outputs the first endoscope image, which is a past image, to the screen display unit 320.
  • FIG. 55 is a flowchart of a tangible surgical image processing method according to an embodiment of the present invention.
  • Each step to be described below may be executed by the screen display control unit 3350 as a main subject, and may be classified into a step of outputting a first endoscope image and a step of outputting a second endoscope image.
  • the first endoscope image may be output together with the second endoscope image.
  • In step R511, the viewpoint information of the endoscope 5, which is information about the viewpoint viewed by the endoscope 5, is tracked in correspondence with the first movement and rotation information of the endoscope 5.
  • In step R513, the movement information of the first endoscope image is extracted.
  • In step R515, a specific area of the screen display unit 320 on which the endoscope image is output is set using the extracted movement information, and the first endoscope image is output to the set area.
  • In step R521, the viewpoint information of the endoscope 5 is tracked in correspondence with the second movement and rotation information of the endoscope 5.
  • In step R522, the movement information of the second endoscope image is extracted.
  • In step R523, a specific area of the screen display unit 320 for outputting the endoscope image is set using the extracted movement information, and in step R524, the second endoscope image is output.
  • in addition, information about the output second endoscope image and its second screen position is stored in the image storage unit 360.
  • thereafter, the first endoscope image stored in the image storage unit 360 is output at the first screen position together with the second endoscope image.
  • the first endoscope image may be output with one or more of its saturation, brightness, color, and screen pattern made different from those of the second endoscope image.
  • FIG. 57 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • Referring to FIG. 57, an image input unit 310, a screen display unit 320, an arm operation unit 330, an operation signal generator 340, a screen display control unit 3350, a control unit 370, and an image matching unit 450 are shown. The differences from the above will be explained mainly.
  • in the present embodiment, the endoscope image actually captured using the endoscope during surgery and a modeling image generated in advance for the surgical tool and stored in the image storage unit 360 are matched with each other, or corrected in size, and then output.
  • the image matching unit 450 generates an output image by matching the endoscope image received through the image input unit 310 with the modeling image of the surgical tool stored in the image storage unit 360, and outputs the output image to the screen display unit 320.
  • the endoscope image is an image of the inside of the patient's body captured using the endoscope. Since only a limited area is captured, the endoscope image includes an image of only part of the surgical instrument.
  • the modeling image is an image generated by realizing the shape of the entire surgical tool as a 2D or 3D image.
  • the modeling image may be an image of the surgical tool captured at a specific time point before the start of surgery, for example, in an initial setting state. Since the modeling image is generated by computer simulation of the surgical tool, the image matching unit 450 may match and output the surgical tool shown in the actual endoscope image and the modeled surgical tool. Since the technique of obtaining an image by modeling a real object is somewhat removed from the gist of the present invention, a detailed description thereof will be omitted. Specific functions and various detailed configurations of the image matching unit 450 will be described in detail below with reference to the accompanying drawings.
  • the controller 370 controls the operation of each component so that the above-described function can be performed.
  • the controller 370 may perform a function of converting an image input through the image input unit 310 into an image image to be displayed through the screen display unit 320.
  • the controller 370 controls the image matching unit 450 so that the modeling image is output through the screen display unit 320 in response to the manipulation information according to the manipulation of the arm manipulation unit 330.
  • the actual surgical tool included in the endoscope image is the surgical tool included in the image input by the endoscope 5 and transmitted to the master robot 1, and is the surgical tool that applies surgical operations directly to the patient's body.
  • the modeling surgical tool included in the modeling image is mathematically modeled with respect to the entire surgical tool in advance and stored in the image storage unit 360 as a 2D or 3D image.
  • the surgical tool of the endoscope image and the modeling image may be controlled by the manipulation information (that is, information about the movement, rotation, etc. of the surgical tool) recognized by the master robot 1 as the operator manipulates the arm operation unit 330, and their positions and manipulation shapes may be determined by the manipulation information. Referring to FIG. 60, the endoscope image 620 is matched with the modeling image 610 and output at the coordinates (X, Y) of the screen display unit 320.
  • the modeling image may include an image reconstructed by modeling not only the surgical instrument but also the organ of the patient.
  • the modeling image may include a 2D or 3D image of the patient's organ surface reconstructed with reference to images acquired from imaging equipment such as CT (Computed Tomography), MR (Magnetic Resonance), PET (Positron Emission Tomography), SPECT (Single Photon Emission Computed Tomography), and US (Ultrasonography); in this case, matching the actual endoscope image with the computer modeling image is more effective in providing the operator with a full image including the surgical site.
  • the image matcher 450 may include a feature value calculator 451, a modeled image implementer 453, and an overlapped image processor 455.
  • the characteristic value calculator 451 calculates characteristic values using the image input by the laparoscope 5 of the slave robot 2 and/or coordinate information on the position of the actual surgical tool coupled to the robot arm 3. The actual position of the surgical tool can be recognized by referring to the position value of the robot arm 3 of the slave robot 2, and this position information may be provided from the slave robot 2 to the master robot 1.
  • the characteristic values may include, for example, the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, calculated using the image of the laparoscope 5, as well as the type, direction, depth, and degree of bending of the actual surgical instrument.
  • to this end, an image recognition technique, such as recognizing the outline, shape, or tilt angle of the subject included in the image, may be used.
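  • As a rough sketch of such outline/shape recognition, under the strong assumption that the tool shaft appears as the largest bright region in the frame, the tilt angle of the tool could be estimated as below (OpenCV 4 call signatures; the threshold is illustrative only, and a real system would combine this with the robot-arm position values):

```python
# Rough sketch of estimating one characteristic value (the tool shaft's tilt
# angle) by outline recognition, assuming OpenCV 4 and that the metallic
# shaft is the largest bright region in the frame.
import cv2
import numpy as np

def estimate_tool_tilt(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                              # no candidate outline found
    shaft = max(contours, key=cv2.contourArea)   # outline recognition
    rect = cv2.minAreaRect(shaft)                # ((cx, cy), (w, h), angle)
    return rect[2]                               # tilt angle in degrees
```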
  • the type of the actual surgical tool may be input in advance in the process of coupling the corresponding surgical tool to the robot arm (3).
  • the modeling image implementer 453 implements a modeling image corresponding to the feature value calculated by the feature value calculator 451.
  • data related to the modeling image may be extracted from the image storage unit 360. That is, the modeling image implementer 453 extracts modeling image data of the surgical tool and the like corresponding to the characteristic values of the laparoscope 5 (the field of view (FOV), magnification, viewpoint, and viewing depth, and the type, direction, depth, and degree of bending of the actual surgical instrument), and implements the modeling image so as to match the surgical tool of the endoscope image.
  • the modeling image implementer 453 may extract various images according to the feature values calculated by the feature value calculator 451.
  • the modeling image implementer 453 may extract a modeling image directly corresponding to the characteristic values of the laparoscope 5. That is, the modeling image implementer 453 may extract a 2D or 3D modeled surgical tool image corresponding to the above-described data, such as the angle of view and magnification of the laparoscope 5, and match it with the endoscope image.
  • characteristic values such as the angle of view and the magnification may be calculated through comparison with a reference image according to initial settings, or by comparing and analyzing sequentially generated images of the laparoscope 5.
  • in addition, the modeling image implementation unit 453 may extract the modeling image using the manipulation information that determines the positions and manipulation shapes of the laparoscope 5 and the robot arm 3. That is, as described above, since the surgical tool of the endoscope image may be controlled by the manipulation information recognized by the master robot 1 as the operator manipulates the arm operation unit 330, the position and manipulation shape of the modeled surgical tool corresponding to the characteristic values of the endoscope image can be determined by the manipulation information.
  • such manipulation information may be stored in a separate database in temporal order, and the modeling image implementer 453 may recognize the characteristic values of the actual surgical tool by referring to this database and extract information about the modeling image correspondingly. That is, the position of the surgical tool output on the modeling image may be set using cumulative data of the position change signals of the surgical tool. For example, if the manipulation information for a surgical instrument indicates that it has been rotated 90 degrees clockwise and moved 1 cm in its extension direction, the modeling image implementer 453 may convert and extract the image of the surgical instrument included in the modeling image in correspondence with this manipulation information.
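  • The conversion by accumulated manipulation information can be illustrated with the example above (rotate 90 degrees clockwise, advance 1 cm in the extension direction) as a simple 2D homogeneous transform applied to the stored pose of the modeled tool; a real modeling image would use the full 3D pose, and the convention that +x is the extension direction is an assumption.

```python
# Sketch of converting a modeled tool pose by accumulated manipulation
# information: rotate 90 degrees clockwise, then advance 1 cm along the
# tool's extension direction. 2D homogeneous coordinates for illustration.
import numpy as np

def apply_manipulation(pose: np.ndarray, angle_deg: float,
                       advance_cm: float) -> np.ndarray:
    """pose: 3x3 homogeneous matrix of the modeled tool in image space."""
    a = np.deg2rad(-angle_deg)           # clockwise = negative math angle
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    adv = np.array([[1.0, 0.0, advance_cm],  # +x taken as extension direction
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
    return pose @ rot @ adv              # accumulate onto the stored pose

pose = np.eye(3)                         # initial modeled pose
pose = apply_manipulation(pose, angle_deg=90.0, advance_cm=1.0)
```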
  • the surgical instrument is mounted at the front end of the surgical robot arm, which is provided with an actuator; a driving wheel (not shown) provided in a drive unit (not shown) is operated by receiving driving force from the actuator, and an effector, connected to the driving wheel and inserted into the patient's body, performs the surgical operation through its predetermined motion.
  • the driving wheel is formed in a disc shape, and may be clutched to the actuator to receive the driving force.
  • the number of driving wheels may be determined corresponding to the number of objects to be controlled, and the description of such driving wheels will be apparent to those skilled in the art related to surgical instruments, and thus detailed description thereof will be omitted.
  • the superimposed image processor 455 outputs a partial image of the modeling image so that the actually captured endoscope image and the modeling image do not overlap. That is, when the endoscope image includes a partial shape of the surgical tool and the modeling image implementer 453 outputs the corresponding modeled surgical tool, the superimposed image processor 455 checks the overlapping region of the actual surgical tool image of the endoscope image and the modeled surgical tool image, and deletes the overlapping portion from the modeled surgical tool image so that the two images can be matched with each other. In other words, the superimposed image processor 455 may process the overlap by removing, from the modeled surgical tool image, the region where the modeled surgical tool image and the actual surgical tool image overlap.
  • for example, when the total length of an actual surgical instrument is 20 cm, the portion to be removed from the modeled surgical tool image may be determined in consideration of the characteristic values (the field of view (FOV), magnification, viewpoint, and viewing depth of the laparoscope, and the type, direction, depth, and degree of bending of the actual surgical instrument).
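  • A minimal sketch of this overlap processing, with the masks and shapes assumed for illustration: wherever the endoscope image already shows the actual tool, the corresponding pixels are deleted from the modeled tool image, so the matched output shows the real tool inside the endoscope view and the modeled remainder outside it.

```python
# Minimal sketch of the superimposed image processing described above:
# the overlap between the modeled tool image and the actual tool image is
# removed from the modeled tool image before the two images are matched.
import numpy as np

def remove_overlap(model_img: np.ndarray, model_mask: np.ndarray,
                   actual_tool_mask: np.ndarray) -> np.ndarray:
    """model_img: (H, W, 3) rendered modeling image of the whole tool.
    model_mask / actual_tool_mask: (H, W) booleans marking tool pixels."""
    overlap = model_mask & actual_tool_mask      # region shown in both images
    out = model_img.copy()
    out[overlap] = 0                             # delete overlap from model
    return out

h, w = 480, 640
model_img = np.full((h, w, 3), 255, dtype=np.uint8)
model_mask = np.zeros((h, w), dtype=bool); model_mask[:, 300:] = True
actual_mask = np.zeros((h, w), dtype=bool); actual_mask[:, 500:] = True
merged_model = remove_overlap(model_img, model_mask, actual_mask)
```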
  • FIG. 59 is a flowchart of a tangible surgical image processing method according to an embodiment of the present invention. The differences from the above will be explained mainly.
  • a modeling image is generated and stored in advance with respect to the surgical target and / or the surgical tool.
  • the modeling image may be generated by computer simulation, and the present embodiment may generate the modeling image using a separate modeling image generating apparatus.
  • the characteristic value calculator 451 calculates the characteristic values of the endoscope image.
  • the characteristic value calculator 451 receives the image provided by the laparoscope 5 of the slave robot 2 and/or coordinate information on the position of the actual surgical tool coupled to the robot arm 3.
  • the characteristic values include the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, as well as the type, direction, depth, and degree of bending of the actual surgical instrument.
  • the image matching unit 450 extracts the modeling image corresponding to the endoscope image, processes the overlapping region, and matches the two images to be output to the screen display unit 320.
  • the output time point may be set variously, such that the endoscope image and the modeling image are output from the same initial time point, or the endoscope image is output first and the modeling image is then output together with it.
  • the master interface 4 may include a monitor unit 6, a handle 10, a monitor driving unit 12, and a moving groove 13. The differences from the above will be explained mainly.
  • the present embodiment is characterized in that the monitor unit 6 of the master interface 4 can be rotated and moved in accordance with the variously changing viewpoint of the endoscope 5 as described above, thereby giving the user a more realistic feel for the surgery.
  • One end of the monitor driving means 12 is coupled to the monitor unit 6 and the other end to the main body of the master interface 4, so that the monitor unit 6 is rotated by the driving force applied to it.
  • the rotation may include rotation about various axes (X, Y, Z), that is, rotation about the pitch, roll, and yaw axes. Referring to FIG. 61, rotation A about the yaw axis is shown.
  • in addition, the monitor driving means 12 may move (in direction B) along the moving groove 13 formed in the main body of the master interface 4 located below the monitor unit 6, so that the monitor unit 6 is moved according to the viewpoint of the endoscope 5.
  • the moving groove 13 may be concave toward the user so that the front surface of the monitor unit 6 always faces the user while the monitor unit 6 moves along the moving groove 13.
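  • Geometrically, the groove can be sketched as an arc whose concave side faces the user: placing the monitor at an arc angle and rotating it toward the arc center keeps the front of the monitor facing the user at every position. The radius, angles, and coordinate convention below are assumptions for illustration.

```python
# Geometric sketch of the moving groove: the monitor travels along an arc
# centred on the user, and its yaw is set so that the screen normal always
# points back at the user, whatever the groove position.
import math

def monitor_pose_on_groove(theta_deg: float, radius: float = 0.8,
                           user_pos=(0.0, 0.0)):
    """Return (x, y, yaw_deg) of the monitor for a groove position angle."""
    t = math.radians(theta_deg)
    x = user_pos[0] + radius * math.sin(t)
    y = user_pos[1] + radius * math.cos(t)
    yaw = math.degrees(math.atan2(user_pos[1] - y, user_pos[0] - x))
    return x, y, yaw   # the front of the monitor faces the user

for theta in (-30, 0, 30):   # monitor follows the endoscope viewpoint
    print(monitor_pose_on_groove(theta))
```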
  • FIG. 62 is a block diagram of a surgical robot according to an embodiment of the present invention.
  • Referring to FIG. 62, a master robot 1 including an image input unit 310, a screen display unit 320, an arm operation unit 330, an operation signal generation unit 340, a control unit 370, a screen driving control unit 2380, and a screen driving unit 390, and a slave robot 2 including a robot arm 3 and an endoscope 5 are shown. The differences from the above will be explained mainly.
  • the screen driver 390 is a means for rotating and moving the screen display unit 320, and may include, for example, a motor and means for supporting the monitor unit 6.
  • the screen driving controller 2380 may control the screen driving unit 390 so that the screen driving unit 390 rotates and moves the screen display unit 320 according to the viewpoint of the endoscope 5.
  • the screen driving controller 2380 may include an endoscope perspective tracker 381, which tracks the viewpoint information of the endoscope 5 in correspondence with the movement and rotation of the endoscope 5, and a drive information generator 385, which generates screen driving information from the viewpoint information.
  • the screen driver 390 may drive the screen display 320 as described above using the motion information of the screen display 320 generated by the drive information generator 385.
  • the screen driver 390 may be driven by a user's command.
  • that is, the screen driving control unit 2380 may be replaced with a user interface, for example, a switch (e.g., a step switch) operable by the user, and the screen driving unit 390 may be controlled so that it rotates and moves the screen display unit 320 according to the user's operation.
  • the motion of the screen driver 390 can also be controlled by a touch screen.
  • that is, the screen display unit 320 may be implemented as a touch screen, and when the user touches the screen display unit 320 with a finger or the like and drags in a predetermined direction, the screen display unit 320 may rotate and move accordingly.
  • in addition, the motion of the screen display unit 320 may be controlled using a rotation/movement signal generated according to the user's gaze, a rotation/movement signal generated according to the moving direction of a face contact unit, or a voice command.
  • FIG. 64 is a flowchart of a tangible surgical image processing method according to an embodiment of the present invention. Each step to be performed below may be performed by the screen driving controller 2380 as a main agent.
  • In step R181, the viewpoint information of the endoscope 5, which is information about the viewpoint viewed by the endoscope 5, is tracked in correspondence with the movement and rotation of the endoscope 5.
  • In step R182, the movement information of the endoscope image, which corresponds to the amount of positional change of the object captured in the endoscope image, is extracted using the viewpoint information of the endoscope 5.
  • In step R183, the above-described screen driving information is generated using the viewpoint information of the endoscope 5 and/or the extracted movement information. That is, when the viewpoint information of the endoscope 5 and the movement information of the endoscope image are specified as described above, information for moving and rotating the screen display unit 320 is generated using them.
  • In step R184, the screen display unit 320 is moved and rotated according to the screen driving information.
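  • Steps R181 to R184 can be condensed into the following sketch, in which viewpoint changes of the endoscope 5 are converted into screen driving information (a rotation and a translation of the screen display unit 320); the gains and units are assumptions for illustration.

```python
# Condensed sketch of steps R181-R184 under assumed units: a viewpoint
# change of the endoscope is converted into screen driving information,
# which is then applied by the screen driver.
from dataclasses import dataclass

@dataclass
class DriveInfo:
    pan_deg: float    # rotation of the screen display unit about the yaw axis
    shift_m: float    # movement along the groove / support

def generate_drive_info(viewpoint_delta, rot_gain=1.0, shift_gain=0.01):
    """viewpoint_delta: (yaw_deg, lateral_units) change of the endoscope."""
    yaw, lateral = viewpoint_delta
    return DriveInfo(pan_deg=rot_gain * yaw, shift_m=shift_gain * lateral)

def drive_screen(info: DriveInfo):
    # stand-in for commands sent to the screen driver 390 (motor etc.)
    print(f"rotate {info.pan_deg:+.1f} deg, move {info.shift_m:+.3f} m")

drive_screen(generate_drive_info((5.0, -20.0)))  # R181-R184 in one pass
```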
  • FIG. 65 is a conceptual diagram illustrating a master interface of a surgical robot according to an embodiment of the present invention. Referring to FIG. 65, a dome screen 191, a projector 192, a work bench 193, a first endoscope image 621, and a second endoscope image 623 are illustrated.
  • the present embodiment implements the above-described function of outputting the endoscope image to a specific region of the screen display unit 320 using the dome screen 191 and the projector 192, and is characterized in that the user can check the surgical situation quickly and conveniently on a wide screen.
  • the projector 192 projects the endoscope image onto the dome screen 191.
  • here, the endoscope image may be a spherical image, that is, an image whose projected front surface has a spherical shape.
  • the spherical shape here does not mean a mathematically strict sphere, and may include various shapes such as an ellipse, a cross-section with a curved shape, and partially spherical shapes.
  • the dome screen 191 has an open front end and a hemispherical inner dome surface that reflects the image projected from the projector 192.
  • the size of the dome screen 191 may be chosen for easy viewing by the user; for example, the diameter may be about 1 m to 2 m.
  • the inner dome surface of the dome screen 191 may be faceted or have a hemispherical shape for each block.
  • the dome screen 191 may be axially symmetrically formed about a central axis thereof, and the line of sight of the user may be located at the central axis of the dome screen 191.
  • the projector 192 may be located between the user performing the surgery and the dome screen 191 so that the projected image is not blocked by the user.
  • the projector 192 may be attached to the bottom surface of the work bench 193 in order to secure workspace on the work table and to avoid blocking the projected image during the user's work.
  • the inner dome surface may be formed of or coated with a material with high reflectivity.
  • the first endoscope image 621 and the second endoscope image 623 may be projected onto specific areas of the dome screen 191 corresponding to the various viewpoints of the endoscope 5 as described above.
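  • A hedged sketch of placing an endoscope image on the dome: the endoscope viewpoint (yaw, pitch) is mapped to a point on the hemispherical inner dome surface around which the projector draws the image. The mapping itself and the radius (consistent with the roughly 1 m to 2 m diameter mentioned above) are illustrative assumptions.

```python
# Sketch mapping an endoscope viewpoint to a point on the inner dome
# surface, with the user's line of sight along +z on the dome's central
# axis; the projector would centre the endoscope image on this point.
import math

def dome_point(yaw_deg: float, pitch_deg: float, radius_m: float = 0.75):
    """Return the 3D point on the dome surface for an endoscope viewpoint."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = radius_m * math.sin(yaw) * math.cos(pitch)
    y = radius_m * math.sin(pitch)
    z = radius_m * math.cos(yaw) * math.cos(pitch)
    return x, y, z

# first and second endoscope images projected to different dome regions
print(dome_point(-15.0, 5.0))   # e.g. first endoscope image 621
print(dome_point(10.0, -3.0))   # e.g. second endoscope image 623
```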
  • the tangible surgical image processing method according to the present invention may be implemented in the form of program instructions executable by various computer means and recorded on a computer-readable medium.
  • the recording medium may be a computer-readable recording medium having recorded thereon a program for causing the computer to execute the above-described steps.
  • the computer readable medium may include a program command, a data file, a data structure, etc. alone or in combination.
  • Program instructions recorded on the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • the medium may also be a transmission medium, such as an optical or metal wire or a waveguide, including a carrier wave for transmitting a signal specifying program instructions, data structures, and the like.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device described above may be configured to operate as one or more software modules to perform the operations of the present invention.


Abstract

Disclosed are a surgical image processing device and a method therefor. The surgical image processing device comprises an image input unit for receiving input in the form of endoscopic images provided by a surgical endoscope, an image storage unit for storing modelling images relating to a surgical implement performing surgery on a surgical object being photographed by the surgical endoscope, an image matching unit for matching the endoscopic images and the modelling images with each other and generating output images, and a screen display unit for outputting output images comprising the endoscopic images and the modelling images; and the surgical image processing device makes smooth surgery possible by providing actual images and modelling images together during surgery.

Description

Surgical image processing apparatus, image processing method, laparoscopic manipulation method, surgical robot system, and operation limiting method therefor
The present invention relates to a surgical image processing apparatus and method, a surgical robot system and an operation limiting method therefor, a surgical robot system and a laparoscope manipulation method therefor, and a tangible surgical image processing apparatus and method.
A surgical robot refers to a robot having functions that can substitute for surgical actions performed by a surgeon. Such a surgical robot has the advantages of enabling more accurate and precise motion than a human and of enabling remote surgery.
Surgical robots currently being developed worldwide include bone surgery robots, laparoscopic surgery robots, and stereotactic surgery robots. A laparoscopic surgery robot is a robot that performs minimally invasive surgery using a laparoscope and small surgical tools.
Laparoscopic surgery is an advanced surgical technique in which a hole of about 1 cm is made in the navel area and a laparoscope, an endoscope for looking into the abdomen, is inserted before operating; much further development is expected in this field.
Recent laparoscopes are equipped with computer chips and provide clearer, more magnified images than the naked eye, and they have developed to the point where any operation can be performed using specially designed laparoscopic surgical instruments while viewing the screen through a monitor.
Moreover, laparoscopic surgery covers nearly the same range of operations as open surgery while causing fewer complications, allowing treatment to begin much sooner after the procedure, and better preserving the patient's stamina and immune function. For these reasons, laparoscopic surgery is increasingly recognized as the standard in the treatment of colorectal cancer and the like in the United States and Europe.
A surgical robot system generally consists of a master robot and a slave robot. When the operator manipulates a controller (for example, a handle) provided on the master robot, a surgical tool coupled to or held by the robot arm of the slave robot is manipulated to perform the surgery. The master robot and the slave robot are coupled through a communication network and perform network communication.
Currently, during laparoscopic surgery, the surgical image captured by the laparoscope is output to the operator, who performs the operation while viewing it; however, the sense of reality is somewhat lower than in surgery performed by direct laparotomy. This problem can arise because, even when the laparoscope moves and rotates within the abdominal cavity to illuminate different regions, the image the user sees is output on a monitor of the same position and size, so the relative sense of distance and movement between the controller and the image differs from the actual relative distance and movement between the surgical tools and the organs inside the abdominal cavity.
In addition, since the surgical image captured by the laparoscope includes only partial shapes of the surgical tools, when the tools collide with or overlap one another, the operator may find manipulation difficult or the view obstructed, preventing the operation from proceeding smoothly.
Meanwhile, a surgical robot system is equipped with a plurality of surgical instruments, and each instrument performs the motions required for surgery according to the operator's manipulation of the system. Instruments being manipulated by the operator during surgery are controlled by that manipulation; however, for instruments not being manipulated by the operator, in particular instruments not visible on the user's operation screen (that is, located outside the visible region), there is a possibility that unnecessary motions may be performed by manipulation contrary to the operator's intention.
When an instrument is manipulated in a way the operator did not intend, it may collide with the robot body or an adjacent instrument, and the possibility of damage or injury to the blood vessels or tissues of the patient under surgery cannot be excluded.
In addition, conventional surgical robot systems require a separate user manipulation for the operator to move the laparoscope to a desired position or to adjust the image input angle in order to obtain an image of the surgical site. That is, the operator must separately input manipulations for controlling the laparoscope with a hand or foot during the surgical procedure.
However, this can weaken the operator's concentration during a surgical procedure that demands a high level of focus, and incomplete surgery resulting from weakened concentration can cause serious aftereffects for the patient.
The background art described above is technical information that the inventors possessed for the derivation of the present invention or acquired in the course of deriving it, and cannot necessarily be regarded as known art disclosed to the general public before the filing of the present application.
The present invention provides a surgical image processing apparatus and method that enable smooth surgery by providing the actual image and a modeling image together during surgery.
The present invention also provides a surgical image processing apparatus and method that allow the operator to operate while referring to images of not only the surgical site but also adjacent regions and the patient's external environment, moving beyond the conventional approach in which the operator views only the surgical site.
The present invention also provides a surgical image processing apparatus and method that can detect and warn of collisions between surgical tools in advance, so that the operator can recognize and avoid a collision and thus perform the surgery smoothly.
The present invention also provides a surgical image processing apparatus and method that allow the operator to use surgical images more conveniently by implementing an interface through which the operator can select, at will, the type of image to be displayed on the monitor.
The present invention also provides a surgical robot system and an operation limiting method therefor that allow an instrument to be controlled only in the manner the operator intends, so that normal surgery is carried out.
The present invention also provides a surgical robot system and an operation limiting method therefor that can promote the safety of the patient under surgery by fundamentally preventing misoperation of the robot arm and/or instrument.
The present invention also provides a surgical robot system and a laparoscope manipulation method therefor that allow the position and image input angle of the laparoscope to be controlled merely by the operator's act of trying to view the desired surgical site.
The present invention also provides a surgical robot system and a laparoscope manipulation method therefor that require no separate manipulation by the operator for controlling the laparoscope, allowing the operator to concentrate solely on the surgical procedure.
The present invention also provides a tangible surgical image processing apparatus and method that change the output position of the endoscope image on the monitor viewed by the user in correspondence with the viewpoint of the endoscope, which changes as the surgical endoscope moves, so that the user can perceive the actual surgical situation more realistically.
The present invention also provides a tangible surgical image processing apparatus and method that, at the current time point, extract a previously input and stored endoscope image and output it on the screen display unit together with the current endoscope image, thereby informing the user of changes in the endoscope image.
The present invention also provides a tangible surgical image processing apparatus and method that can modify images, for example by matching the endoscope image actually captured using the endoscope during surgery with a modeling image generated in advance for the surgical tool and stored in the image storage unit, individually or with each other, or by adjusting their sizes, and output them on a monitor observable by the user.
The present invention also provides a tangible surgical image processing apparatus and method that rotate and move the monitor in correspondence with the variously changing viewpoint of the endoscope, so that the user can experience the surgery more vividly.
Technical objects other than those presented by the present invention will be easily understood from the following description.
본 발명의 일 측면에 따르면, 수술용 내시경으로부터 제공되는 내시경 영상을 입력받는 영상 입력부와, 수술용 내시경이 촬영하는 수술 대상을 수술하는 수술 도구에 관한 모델링 영상을 저장하는 영상 저장부와, 내시경 영상과 모델링 영상을 서로 정합하여 출력 영상을 생성하는 영상 정합부와, 내시경 영상과 모델링 영상을 포함하는 출력 영상을 출력하는 화면 표시부를 포함하는 수술용 영상 처리 장치가 제공된다. According to an aspect of the present invention, the image input unit for receiving an endoscope image provided from the surgical endoscope, an image storage unit for storing a modeling image of a surgical tool for operating a surgical target taken by the surgical endoscope, and an endoscope image Surgical image processing apparatus including an image matching unit for generating an output image by matching the modeling image with each other, and a screen display unit for outputting the output image including the endoscope image and the modeling image.
여기서, 수술용 내시경은 복강경, 흉강경, 관절경, 비경, 방광경, 직장경, 십이지장경, 종격경, 심장경 중 하나 이상이 될 수 있다. Here, the surgical endoscope may be at least one of laparoscopic, thoracoscopic, arthroscopic, parenteral, bladder, rectal, duodenum, mediastinal, cardiac.
또한, 영상 정합부는 내시경 영상에 포함되는 실제 수술 도구 영상과 모델링 영상에 포함되는 모델링 수술 도구 영상을 서로 정합하여 출력 영상을 생성할 수 있다. The image matching unit may generate an output image by matching the actual surgical tool image included in the endoscope image with the modeling surgical tool image included in the modeling image.
여기서, 영상 정합부는, 내시경 영상 및 하나 이상의 로봇 암에 결합된 실제 수술 도구의 위치 좌표정보 중 하나 이상을 이용하여 특성값을 연산하는 특성값 연산부와, 특성값 연산부에서 연산된 특성값에 상응하는 모델링 영상을 구현하는 모델링 영상 구현부를 더 포함할 수 있다. Here, the image matching unit may include a characteristic value calculating unit calculating a characteristic value using at least one of endoscope images and position coordinate information of an actual surgical tool coupled to at least one robot arm, and a characteristic value calculated by the characteristic value calculating unit. The apparatus may further include a modeling image implementation unit for implementing a modeling image.
또한, 본 실시예는 모델링 수술 도구 영상으로부터 모델링 수술 도구 영상과 실제 수술 도구 영상의 중첩 영역을 제거하는 중첩 영상 처리부를 더 포함할 수 있다. In addition, the present embodiment may further include an overlapping image processor for removing an overlapping region between the modeling surgical tool image and the actual surgical tool image from the modeling surgical tool image.
Here, the position of the modeled surgical tool image output in the modeling image may be set using manipulation information of the surgical tool, and the screen display unit may output the endoscope image in an arbitrary region of the output image and output the modeling image in a peripheral region of the output image.
The modeling image may be an image of the surgical tool captured at a specific point in time before the start of surgery, and the present embodiment may further include a camera that images the outside of the surgical target during surgery to generate a camera image.
Here, the image matching unit may generate the output image by matching the endoscope image, the modeling image, the camera image, and combinations thereof with one another, and any one or more of the endoscope image, the modeling image, and the camera image may be a 3D image.
The present embodiment may further include a mode selection unit that selects a combination of any two or more of the endoscope image, the modeling image, and the camera image, and may further include a collision detection unit that detects whether the modeled surgical tool images included in the modeling image collide with each other; in this case, a warning information output unit may further be included that outputs warning information when the collision detection unit detects a mutual collision of the modeled surgical tool images and generates a collision detection signal.
When the collision detection unit detects a mutual collision of the modeled surgical tool images, it may perform one or more of force feedback processing and restriction of the manipulation of the arm manipulation unit that controls the robot arm on which the surgical tool is mounted.
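A minimal sketch of such collision detection follows, assuming each modeled tool shaft is approximated by a 3D line segment with a thickness radius; the sampling-based distance test and all names are illustrative assumptions rather than the disclosed implementation.

import numpy as np

def min_distance(seg_a, seg_b, samples=50):
    """Approximate the minimum distance between two tool shafts, each
    modeled as a 3D line segment, by dense point sampling."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts_a = seg_a[0] + t * (seg_a[1] - seg_a[0])
    pts_b = seg_b[0] + t * (seg_b[1] - seg_b[0])
    diffs = pts_a[:, None, :] - pts_b[None, :, :]
    return np.linalg.norm(diffs, axis=-1).min()

def tools_collide(seg_a, seg_b, radius_a, radius_b):
    """Collision detection signal: shafts closer than the sum of radii."""
    return min_distance(seg_a, seg_b) < radius_a + radius_b

tool1 = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0]))
tool2 = (np.array([0.5, 0.0, 5.0]), np.array([5.0, 5.0, 5.0]))
if tools_collide(tool1, tool2, 0.4, 0.4):
    print("warning: modeled surgical tools collide")
    # at this point force feedback could be applied and the arm
    # manipulation unit's operation restricted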
The surgical image processing apparatus may be included in an interface mounted on a master robot that controls a slave robot including the robot arm, and the modeling image may further include an image obtained by modeling an image acquired from one or more imaging modalities among CT, MR, PET, SPECT, and US of the surgical target.
According to another aspect of the present invention, there is provided a surgical image processing method by which a surgical image processing apparatus processes images during surgery, the method including: generating and storing a modeling image of a surgical tool used to operate on a surgical target; receiving an endoscope image provided from a surgical endoscope; generating an output image by matching the endoscope image and the modeling image with each other; and outputting the output image including the endoscope image and the modeling image.
Here, the surgical endoscope may be one or more of a laparoscope, a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a proctoscope, a duodenoscope, a mediastinoscope, and a cardioscope, and the output image generating step may generate the output image by matching an actual surgical tool image included in the endoscope image with a modeled surgical tool image included in the modeling image.
The output image generating step may further include: computing a characteristic value using one or more of the endoscope image and position coordinate information of the actual surgical tool coupled to one or more robot arms; and implementing a modeling image corresponding to the computed characteristic value.
Here, the output image generating step may further include removing, from the modeled surgical tool image, the region in which the modeled surgical tool image and the actual surgical tool image overlap, and may set the position of the modeled surgical tool image output in the modeling image using manipulation information of the surgical tool.
Here, the output image outputting step may output the endoscope image in an arbitrary region of the output image and output the modeling image in a peripheral region of the output image.
The modeling image generating and storing step may generate the modeling image from an image of the surgical tool captured at a specific point in time before the start of surgery.
The present embodiment may further include generating a camera image by having a camera image the outside of the surgical target during surgery; in this case, the output image generating step may generate the output image by matching the endoscope image, the modeling image, the camera image, and combinations thereof with one another.
Here, any one or more of the endoscope image, the modeling image, and the camera image may be a 3D image.
The present embodiment may further include selecting a combination of any two or more of the endoscope image, the modeling image, and the camera image, and may further include detecting whether the modeled surgical tool images included in the modeling image collide with each other; in this case, the collision detecting step may further include outputting warning information when a mutual collision of the modeled surgical tool images is detected and a collision detection signal is generated.
The collision detecting step may further include, when a mutual collision of the modeled surgical tool images is detected, performing one or more of force feedback processing and restricting the manipulation of the arm manipulation unit that controls the robot arm on which the surgical tool is mounted.
The modeling image generating and storing step may further include modeling an image acquired from one or more imaging modalities among CT, MR, PET, SPECT, and US of the surgical target.
According to still another aspect of the present invention, there is provided a recording medium in which a program of instructions executable by a digital processing apparatus to perform the above-described surgical image processing method is tangibly embodied, the program being readable by the digital processing apparatus.
According to still another aspect of the present invention, there is provided a surgical robot including: a restricted zone setting unit that receives restricted zone setting information on a zone in which manipulation of a controlled object is restricted and generates and stores restricted zone coordinate information; an arm manipulation unit that receives manipulation information for manipulating the controlled object; and a manipulation determination unit that determines, with reference to displacement information of the controlled object according to the manipulation information, whether the outer surface of the controlled object has come into contact with the restricted zone coordinate information, wherein the controlled object is one or more of a robot arm, an instrument, and an endoscope.
A control unit may further be included that, when the outer surface of the controlled object comes into contact with the restricted zone coordinate information, controls the controlled object so that its manipulation is restricted and controls reaction information to be output correspondingly. Here, the reaction information may be one or more of tactile information, visual information, and auditory information.
When the outer surface of the controlled object comes into contact with the restricted zone coordinate information, the control unit may first determine whether the controlled object is to be manipulation-restricted, and may control the controlled object to be manipulation-restricted only when it so determines.
When an input unit for receiving a restricted-zone override command is further included, the control unit may process the manipulation of the controlled object to be permitted once the restricted-zone override command is input.
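A minimal sketch of this manipulation determination follows, assuming the stored restricted zone coordinate information is an axis-aligned box and the controlled object's outer surface is given as sample points; the override flag stands in for the restricted-zone override command, and all names are illustrative.

import numpy as np

class RestrictedZone:
    """Axis-aligned box standing in for stored restricted-zone coordinates."""
    def __init__(self, low, high):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)

    def contacts(self, surface_points):
        """True if any point of the controlled object's outer surface
        falls inside the zone."""
        p = np.asarray(surface_points, dtype=float)
        return bool(np.all((p >= self.low) & (p <= self.high), axis=1).any())

def judge_manipulation(zone, surface_points, override=False):
    """Manipulation determination: restrict motion on contact unless a
    restricted-zone override command has been input."""
    if zone.contacts(surface_points) and not override:
        return "restricted"   # also trigger tactile/visual/audible reaction
    return "allowed"

zone = RestrictedZone([0, 0, 0], [10, 10, 10])
tool_surface = [[12.0, 5.0, 5.0], [9.0, 5.0, 5.0]]   # one point inside
print(judge_manipulation(zone, tool_surface))                  # restricted
print(judge_manipulation(zone, tool_surface, override=True))   # allowed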
The restricted zone coordinate information may be coordinate range information corresponding to a closed curve input on a touch-sensitive screen on which a video image is displayed.
The video image may be one or more of a video image acquired by an endoscope, a CT image, an MRI image, and a human body modeling image.
The restricted zone coordinate information may also be coordinate range information of a region not displayed in the video image acquired by the endoscope.
The endoscope may be one or more of a laparoscope, a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a proctoscope, a duodenoscope, a mediastinoscope, and a cardioscope.
The restricted zone coordinate information may also be coordinate range information of a figure formed by the positions designated as restricted zone setting information among the positions to which the controlled object has been moved by the manipulation information of the arm manipulation unit.
The controlled object is manipulated in three-dimensional space, and the designated positions may each be points designated so as to form the outline or outer surface of a figure in three-dimensional space.
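As a simplified illustration of forming a figure from such designated positions, the sketch below takes the axis-aligned bounding box of the marked 3D points as the zone's coordinate range; an actual system might instead fit the outline or outer surface through the points, and the function name is hypothetical.

import numpy as np

def zone_from_designated_points(points):
    """Form a coordinate range from the positions marked by point
    designation commands while the controlled object was moved.
    Simplification: the figure is taken as the bounding box of the points."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

designated = [[1, 2, 0], [4, 1, 3], [2, 5, 1]]   # operator-marked 3D points
low, high = zone_from_designated_points(designated)
print(low, high)   # stored as the restricted zone coordinate information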
The surgical robot may further include an arm manipulation setting unit that generates and stores manipulation setting information on whether manipulation of each controlled object by the arm manipulation unit is permitted. Here, the manipulation determination unit may further determine whether manipulation information for any given manipulation target object is valid manipulation information.
According to still another aspect of the present invention, there is provided a method of limiting the operation of a surgical robot, the method including: receiving restricted zone setting information on a zone in which manipulation of a controlled object is restricted and generating and storing restricted zone coordinate information; receiving manipulation information for manipulating the controlled object; determining, with reference to displacement information of the controlled object according to the manipulation information, whether the outer surface of the controlled object has come into contact with the restricted zone coordinate information; and controlling the controlled object to be manipulation-restricted when the outer surface of the controlled object comes into contact with the restricted zone coordinate information, wherein the controlled object is one or more of a robot arm, an instrument, and an endoscope.
The controlling step may include: determining whether the controlled object is to be manipulation-restricted when the outer surface of the controlled object comes into contact with the restricted zone coordinate information; and controlling the controlled object to be manipulation-restricted when it is so determined.
The determining step may decide whether the controlled object is to be manipulation-restricted by determining whether a restricted-zone override command has been input.
The controlling step may include controlling reaction information to be output when the outer surface of the controlled object comes into contact with the restricted zone coordinate information.
The operation limiting method of the surgical robot may further include: generating and storing manipulation setting information on whether manipulation of each controlled object by the arm manipulation unit is permitted; and determining whether manipulation information for any given manipulation target object is valid manipulation information.
If the manipulation information is invalid, controlling reaction information to be output may further be included.
According to still another aspect of the present invention, there is provided a surgical robot including: a display unit that displays a video image acquired and provided by an endoscope; an arm manipulation unit that receives manipulation information for manipulating a controlled object; and a control unit that controls a region image to be displayed overlaid on the video image, the region image corresponding to a region whose outline includes a plurality of positions, among the positions to which the controlled object has been moved, at which a point designation command was input, wherein the controlled object is one or more of a robot arm, an instrument, and an endoscope.
By input of a point designation command and a division command for two or more positions defining a connected boundary line that divides the region image, the region image may be divided into two or more individual regions.
By input of a point designation command and a folding command for two or more positions defining a connected boundary line that divides the region image, the region image may be displayed such that one or more individual regions are rotated about the boundary line toward one or more other individual regions so as to overlap them.
By input of a restoration command, the region image may be displayed with any individual region that was rotated about the boundary line and displayed in overlap returned to its original position.
By input of a point designation command and a deletion command for two or more positions defining a connected boundary line that divides the region image, one or more individual regions may be deleted.
The outline and outer surface of the region image are recognized as a series of points, and the region image may be deformed by a point designation and movement command for any one of those points.
The coordinate range extracted to correspond to the region image may be set as a restricted zone or as a permitted zone.
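The fold and deform operations on a region image may be illustrated as follows, treating the outline as a series of 2D points as described above; the reflection-based fold and all names used are illustrative assumptions, not the disclosed implementation.

import numpy as np

def deform(outline, index, new_xy):
    """A point designation and movement command: relocate one outline
    point of the region image (the outline is a series of points)."""
    out = outline.copy()
    out[index] = new_xy
    return out

def fold(outline, p0, p1):
    """A folding command: reflect every outline point lying to the left of
    the boundary line p0->p1 across that line, i.e. rotate one individual
    region onto the other."""
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = outline - p0
    side = d[0] * rel[:, 1] - d[1] * rel[:, 0]   # >0: left of the boundary
    foot = p0 + np.outer(rel @ d, d)             # perpendicular foot on line
    mirrored = 2 * foot - outline
    return np.where(side[:, None] > 0, mirrored, outline)

square = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
print(fold(square, np.array([2.0, 0.0]), np.array([2.0, 4.0])))  # left half folded
print(deform(square, 2, np.array([5.0, 5.0])))                   # one point moved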
When the coordinate range extracted to correspond to the region image is set as a restricted zone, a manipulation determination unit may further be included that determines, with reference to displacement information of the controlled object according to the manipulation information, whether the outer surface of the controlled object has come into contact with the coordinate range.
When the outer surface of the controlled object comes into contact with the coordinate range, the control unit may control the controlled object to be manipulation-restricted and may control reaction information to be output correspondingly.
An arm manipulation setting unit that generates and stores manipulation setting information on whether manipulation of each controlled object by the arm manipulation unit is permitted may further be included, and the manipulation determination unit may further determine whether manipulation information for any given controlled object is valid manipulation information.
According to still another aspect of the present invention, there is provided a region setting method of a surgical robot, the method including: displaying a video image acquired and provided by an endoscope; receiving manipulation information for manipulating a controlled object; and controlling a region image to be displayed overlaid on the video image, the region image corresponding to a region whose outline includes a plurality of positions, among the positions to which the controlled object has been moved, at which a point designation command was input, wherein the controlled object is one or more of a robot arm, an instrument, and an endoscope, and a coordinate range extracted to correspond to the region image is set as a restricted zone or as a permitted zone.
According to still another aspect of the present invention, there is provided a surgical robot including: a display unit that displays a video image acquired and provided by an endoscope; a storage unit that stores human body modeling information and modeling data corresponding thereto; and a control unit that extracts the modeling data corresponding to acquired organ selection information for one or more organs and controls a region image corresponding to the modeling data to be displayed overlaid on the video image, wherein the modeling data includes one or more items of information among the color, shape, and size of the organ.
The human body modeling information and the modeling data may be generated using a reference image that is one or more of a CT image and an MRI image.
The organ selection information may be selected and designated, from among the modeling data, as the organ corresponding to one or more of the shape and the color of an organ recognized by image recognition applied to the video image.
When an arm manipulation unit that receives manipulation information for manipulating a controlled object including one or more of a robot arm, an instrument, and an endoscope is further included, and one or more of a change in the position and a change in the shape of the organ occurs as a result of the manipulation information, the region image may be deformed to correspond to the outline of the organ.
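By way of illustration, the selection of organ modeling data and its overlay may be sketched as below, where nearest-reference-color matching stands in for the image recognition step; MODELING_DATA is a hypothetical store, and all identifiers are assumptions.

import numpy as np

MODELING_DATA = {  # hypothetical store: organ -> (reference color, shape mask)
    "liver":  {"color": (150, 60, 50), "mask": np.ones((4, 4), dtype=bool)},
    "kidney": {"color": (170, 90, 80), "mask": np.eye(4, dtype=bool)},
}

def select_organ(observed_rgb):
    """Pick the stored organ whose reference color is nearest to the color
    recognized in the video image (a stand-in for image recognition)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, observed_rgb))
    return min(MODELING_DATA, key=lambda k: dist(MODELING_DATA[k]["color"]))

def overlay(image, mask, color):
    """Overlay the organ's region image onto the endoscope video image."""
    out = image.copy()
    out[mask] = color
    return out

organ = select_organ((155, 65, 55))
frame = np.zeros((4, 4, 3), dtype=np.uint8)
print(organ)                                             # -> liver
print(overlay(frame, MODELING_DATA[organ]["mask"],
              MODELING_DATA[organ]["color"])[0, 0])      # overlaid pixel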
According to still another aspect of the present invention, there is provided a region setting method of a surgical robot, the method including: storing human body modeling information and modeling data corresponding thereto; displaying a video image acquired and provided by an endoscope; and extracting the modeling data corresponding to acquired organ selection information for one or more organs and controlling a region image corresponding to the modeling data to be displayed overlaid on the video image, wherein the modeling data includes one or more items of information among the color, shape, and size of the organ.
According to still another aspect of the present invention, there is provided a surgical robot that controls one or more of the position and the image input angle of a vision unit using a manipulation signal, the robot including: a contact unit that moves in a direction and by an amount corresponding to the direction and magnitude of movement of the operator's face placed in contact with it; a motion sensing unit that outputs sensing information corresponding to the direction and amount in which the contact unit moves; and a manipulation command generation unit that generates and outputs, using the sensing information, a manipulation command for one or more of the position and the image input angle of the vision unit.
When the manipulation command relates to one or more of a linear manipulation and a rotational manipulation of the vision unit, the direction of the manipulation handles of the surgical robot may be changed to correspond thereto.
The contact unit may be formed as part of a console panel of the surgical robot.
A support may be formed protruding at one or more locations on the contact unit in order to fix the position of the operator's face.
An eyepiece may be perforated in the contact unit so that the image acquired by the vision unit can be seen as visual information.
Alternatively, the contact unit may be formed of a light-transmitting material so that the image acquired by the vision unit can be seen as visual information.
The surgical robot may further include: a contact sensing unit that senses whether the operator's face is in contact with the contact unit or the support; and an original-state restoration unit that, when release of contact is recognized through the sensing of the contact sensing unit, processes the contact unit so that it returns to a reference state, that is, the position and state designated as the default.
The original-state restoration unit may process the contact unit so that it returns to the reference state by an inverse operation of the direction and amount of contact unit movement according to the sensing information.
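A minimal sketch of this sensing-to-command path and of the inverse-operation restoration follows, assuming the contact unit's motion is sensed as 3D displacement vectors; the mapping of axes to pan/tilt/zoom and all names are assumptions for illustration.

import numpy as np

class ContactUnit:
    def __init__(self):
        self.offset = np.zeros(3)   # the reference (default) state is zero
        self.history = []

    def sense(self, face_motion):
        """Motion sensing unit: record direction/magnitude of movement."""
        self.offset += face_motion
        self.history.append(np.array(face_motion))
        return face_motion

def make_command(sensing):
    """Manipulation command generation: map sensed motion to a vision-unit
    command (the axis mapping is an assumption)."""
    dx, dy, dz = sensing
    return {"pan": dx, "tilt": dy, "zoom": dz}

unit = ContactUnit()
print(make_command(unit.sense(np.array([2.0, -1.0, 0.5]))))

# On contact release: return to the reference state by replaying the
# sensed motions in reverse (the inverse-operation restoration).
for motion in reversed(unit.history):
    unit.offset -= motion
print(unit.offset)   # back at the reference state: [0. 0. 0.]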
The surgical robot may further include an eye tracker unit that compares sequentially generated image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and a gaze direction. A camera unit that captures images from the inside of the surgical robot toward the contact unit to generate the image data and a storage unit that stores the generated image data may also be included.
The manipulation command generation unit may determine whether the analysis information satisfies a change preset as a given manipulation command and, if so, output the corresponding manipulation command.
The vision unit may be either a microscope or an endoscope, and the endoscope may be one or more of a laparoscope, a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a proctoscope, a duodenoscope, a mediastinoscope, and a cardioscope.
The contact unit may be formed on the front of the console panel so as to be supported by an elastic body, and the elastic body may provide a restoring force so that the contact unit returns to its original position when the external force moving the contact unit is removed.
According to another aspect of the present invention, there is provided a surgical robot that controls one or more of the position and the image input angle of a vision unit using a manipulation signal, the robot including: an eyepiece for providing the image acquired by the vision unit as visual information; an eye tracker unit that generates analysis information interpreting one or more of a change in pupil position, a change in eye shape, and a gaze direction seen through the eyepiece; and a manipulation command generation unit that determines whether the analysis information satisfies a change preset as a given manipulation command and, if so, outputs a manipulation command for manipulating the vision unit.
The eye tracker unit may include: a camera unit that captures images from the inside of the surgical robot toward the eyepiece to generate image data; a storage unit that stores the generated image data; and an eye tracking unit that compares the sequentially generated image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and a gaze direction.
The eyepiece may be perforated in a contact unit formed as part of a console panel of the surgical robot.
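The chronological comparison performed by the eye tracker unit can be illustrated as follows, assuming pupil centers have already been extracted from the sequential image data; the threshold and command names are illustrative assumptions.

def gaze_command(pupil_positions, threshold=15.0):
    """Compare pupil positions in chronological order; a sustained gaze
    shift beyond the preset threshold is treated as a preset manipulation
    command for the vision unit.
    pupil_positions: [(x, y), ...] in capture order, oldest first."""
    (x0, y0), (x1, y1) = pupil_positions[0], pupil_positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < threshold and abs(dy) < threshold:
        return None                      # no preset change satisfied
    if abs(dx) >= abs(dy):
        return "pan_right" if dx > 0 else "pan_left"
    return "tilt_down" if dy > 0 else "tilt_up"

frames = [(100, 80), (108, 81), (121, 82)]   # pupil drifting to the right
print(gaze_command(frames))                  # -> pan_right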
According to another aspect of the present invention, there is provided a manipulation method by which a surgical robot controls one or more of the position and the image input angle of a vision unit, the method including: outputting sensing information corresponding to the direction and amount in which a contact unit moves; and generating and outputting, using the sensing information, a manipulation command for one or more of the position and the image input angle of the vision unit, wherein the contact unit is formed as part of a console panel of the surgical robot and is formed to move in a direction and by an amount corresponding to the direction and magnitude of movement of the operator's face placed in contact with it.
The vision unit manipulation method may further include: determining whether the operator's face is in contact with the contact unit; and, if so, controlling output of the sensing information to begin.
The vision unit manipulation method may further include: when contact has been released, determining whether the contact unit is in a reference state, that is, the position and state designated as the default; and, if it is not in the reference state, processing it to return to the reference state.
The return to the reference state may be performed by an inverse operation of the direction and amount of contact unit movement according to the sensing information.
The vision unit manipulation method may further include: generating and storing image data captured from the inside of the surgical robot toward the contact unit; and comparing the stored image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and a gaze direction.
The vision unit manipulation method may further include: determining whether the analysis information satisfies a change preset as a given manipulation command; and, if so, outputting the correspondingly preset manipulation command.
The contact unit may be formed on the front of the console panel so as to be supported by an elastic body, and the elastic body may provide a restoring force so that the contact unit returns to its original position when the external force moving the contact unit is removed.
According to still another aspect of the present invention, there is provided a surgical robot that controls one or more of the position and the image input angle of a vision unit using a manipulation signal, the robot including: a contact unit for providing the image acquired by the vision unit as visual information; an analysis processing unit that generates analysis information interpreting the movement of the face seen through the contact unit; and a manipulation command generation unit that determines whether the analysis information satisfies a change preset as a given manipulation command and, if so, outputs a manipulation command for manipulating the vision unit.
The analysis processing unit may include: a camera unit that captures images from the inside of the surgical robot toward the contact unit to generate image data; a storage unit that stores the generated image data; and an analysis unit that compares changes in the positions of predetermined feature points in the sequentially generated image data in chronological order to generate the analysis information on the movement of the face.
The contact unit may be formed as part of a console panel of the surgical robot and may be formed of a light-transmitting material so that the image acquired by the vision unit can be seen as visual information.
According to still another aspect of the present invention, there is provided a manipulation method by which a surgical robot controls one or more of the position and the image input angle of a vision unit, the method including: generating analysis information interpreting one or more of a change in pupil position, a change in eye shape, and a gaze direction seen through an eyepiece; determining whether the analysis information satisfies a change preset as a given manipulation command; and, if so, generating and outputting a manipulation command for one or more of the position and the image input angle of the vision unit.
The generating of the analysis information may include: generating and storing image data captured from the inside of the surgical robot toward the eyepiece; and comparing the stored image data in chronological order to generate analysis information interpreting one or more of a change in pupil position, a change in eye shape, and a gaze direction.
According to still another aspect of the present invention, there is provided an immersive surgical image processing apparatus including: an image input unit that receives an endoscope image provided from a surgical endoscope; a screen display unit that outputs the endoscope image in a specific region; and a screen display control unit that changes, corresponding to the viewpoint of the surgical endoscope, the specific region of the screen display unit in which the endoscope image is output.
Here, the surgical endoscope may be one or more of a laparoscope, a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a proctoscope, a duodenoscope, a mediastinoscope, and a cardioscope, and may also be a stereoscopic endoscope.
Here, the screen display control unit may include: an endoscope viewpoint tracking unit that tracks viewpoint information of the surgical endoscope corresponding to the movement and rotation of the surgical endoscope; an image movement information extraction unit that extracts movement information of the endoscope image using the viewpoint information of the surgical endoscope; and an image position setting unit that sets, using the movement information, the specific region of the screen display unit in which the endoscope image is output.
The screen display control unit may also move the center point of the endoscope image in correspondence with the coordinate change value of the viewpoint of the surgical endoscope.
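A minimal sketch of how such an image position setting unit might shift the display region follows, assuming the viewpoint change arrives as a 2D coordinate delta and applying a display gain; all names and the clamping behavior are illustrative assumptions.

def update_display_region(center, viewpoint_delta, gain=1.0,
                          screen=(1920, 1080)):
    """Shift the endoscope image's center point by the endoscope
    viewpoint's coordinate change (scaled by a display gain), clamped
    to the screen boundaries."""
    cx = min(max(center[0] + gain * viewpoint_delta[0], 0), screen[0])
    cy = min(max(center[1] + gain * viewpoint_delta[1], 0), screen[1])
    return (cx, cy)

center = (960, 540)                        # image initially centered
center = update_display_region(center, (40, -25))
print(center)                              # -> (1000.0, 515.0)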
According to another aspect of the present invention, there is provided an immersive surgical image processing apparatus including: an image input unit that receives a first endoscope image and a second endoscope image provided at different points in time from a surgical endoscope; a screen display unit that outputs the first endoscope image and the second endoscope image in different regions; an image storage unit that stores the first endoscope image and the second endoscope image; and a screen display control unit that controls the screen display unit to output the first endoscope image and the second endoscope image in different regions corresponding to the different viewpoints of the surgical endoscope.
Here, the image input unit may receive the first endoscope image before the second endoscope image, and the screen display unit may output the first endoscope image and the second endoscope image so that one or more of their saturation, brightness, color, and screen pattern differ.
The screen display control unit may further include a stored image display unit that, while the screen display unit outputs the second endoscope image received in real time, extracts the first endoscope image stored in the image storage unit and outputs it to the screen display unit.
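For illustration, outputting the stored first image alongside the live second image with differing brightness may be sketched as follows; the side-by-side layout and the dimming factor are assumptions, not the disclosed arrangement.

import numpy as np

def compose(live, stored, dim=0.5):
    """Place the live image on the left and the stored (earlier-viewpoint)
    image, reduced in brightness, on the right of one output frame."""
    dimmed = (stored.astype(np.float32) * dim).astype(live.dtype)
    return np.concatenate([live, dimmed], axis=1)

live = np.full((4, 4, 3), 200, dtype=np.uint8)    # current endoscope image
stored = np.full((4, 4, 3), 200, dtype=np.uint8)  # image saved earlier
frame = compose(live, stored)
print(frame.shape, frame[0, 0, 0], frame[0, -1, 0])   # (4, 8, 3) 200 100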
According to still another aspect of the present invention, there is provided an immersive surgical image processing apparatus including: an image input unit that receives an endoscope image provided from a surgical endoscope; a screen display unit that outputs the endoscope image in a specific region; an image storage unit that stores a modeling image of a surgical tool used to operate on the surgical target imaged by the surgical endoscope; an image matching unit that matches the endoscope image and the modeling image with each other to generate an output image; and a screen display control unit that changes, corresponding to the viewpoint of the surgical endoscope, the specific region of the screen display unit in which the endoscope image is output and outputs the matched endoscope image and modeling image to the screen display unit.
Here, the image matching unit may generate the output image by matching an actual surgical tool image included in the endoscope image with a modeled surgical tool image included in the modeling image.
The image matching unit may further include: a characteristic value operation unit that computes a characteristic value using one or more of the endoscope image and position coordinate information of the actual surgical tool coupled to one or more robot arms; and a modeling image implementation unit that implements a modeling image corresponding to the characteristic value computed by the characteristic value operation unit.
Here, the image matching unit may further include an overlapping image processing unit that removes, from the modeled surgical tool image, the region in which the modeled surgical tool image and the actual surgical tool image overlap, and the position of the modeled surgical tool image output in the modeling image may be set using manipulation information of the surgical tool.
According to still another aspect of the present invention, there is provided an immersive surgical image processing apparatus including: an image input unit that receives an endoscope image provided from a surgical endoscope; a screen display unit that outputs the endoscope image; a screen drive unit that rotates and moves the screen display unit; and a screen drive control unit that controls the screen drive unit so that it rotates and moves the screen display unit corresponding to the viewpoint of the surgical endoscope.
Here, the screen drive control unit may include: an endoscope viewpoint tracking unit that tracks viewpoint information of the surgical endoscope corresponding to the movement and rotation of the surgical endoscope; an image movement information extraction unit that extracts movement information of the endoscope image using the viewpoint information of the surgical endoscope; and a drive information generation unit that generates, using the movement information, screen drive information for the screen display unit.
Here, the screen display unit may include a dome-shaped screen and a projector that projects the endoscope image onto the dome-shaped screen.
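A sketch of such a drive information generation unit follows, under the assumption that viewpoint movement maps linearly to pan/tilt angles and a forward translation of the screen; the scale factor and all names are illustrative.

def drive_info(prev_view, curr_view, deg_per_unit=0.5):
    """prev_view/curr_view: (x, y, z) endoscope viewpoint positions.
    Return pan/tilt rotation angles and a forward translation with which
    the screen drive unit rotates and moves the screen display unit."""
    dx, dy, dz = (c - p for c, p in zip(curr_view, prev_view))
    return {"pan_deg": dx * deg_per_unit,
            "tilt_deg": dy * deg_per_unit,
            "advance": dz}

print(drive_info((0, 0, 0), (30, -10, 5)))
# -> {'pan_deg': 15.0, 'tilt_deg': -5.0, 'advance': 5}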
According to still another aspect of the present invention, there is provided an immersive surgical image processing method by which a surgical image processing apparatus outputs an endoscope image, the method including: receiving an endoscope image provided from a surgical endoscope; outputting the endoscope image in a specific region of a screen display unit; and changing, corresponding to the viewpoint of the surgical endoscope, the specific region of the screen display unit in which the endoscope image is output.
Here, the surgical endoscope may be one or more of a laparoscope, a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a proctoscope, a duodenoscope, a mediastinoscope, and a cardioscope, and may also be a stereoscopic endoscope.
The changing of the specific region of the screen display unit may include: tracking viewpoint information of the surgical endoscope corresponding to the movement and rotation of the surgical endoscope; extracting movement information of the endoscope image using the viewpoint information of the surgical endoscope; and setting, using the movement information, the specific region of the screen display unit in which the endoscope image is output.
Here, the changing of the specific region of the screen display unit may include moving the center point of the endoscope image in correspondence with the coordinate change value of the viewpoint of the surgical endoscope.
According to still another aspect of the present invention, there is provided an immersive surgical image processing method by which a surgical image processing apparatus outputs endoscope images, the method including: receiving a first endoscope image and a second endoscope image provided at different points in time from a surgical endoscope; outputting the first endoscope image and the second endoscope image in different regions of a screen display unit; storing the first endoscope image and the second endoscope image; and controlling the screen display unit to output the first endoscope image and the second endoscope image in different regions corresponding to the different viewpoints of the surgical endoscope.
Here, in the receiving of the endoscope images, the first endoscope image may be received before the second endoscope image, and the outputting step may output the first endoscope image and the second endoscope image so that one or more of their saturation, brightness, color, and screen pattern differ.
The controlling of the screen display unit may further include, while the screen display unit outputs the second endoscope image received in real time, extracting the first endoscope image stored in an image storage unit and outputting it to the screen display unit.
According to still another aspect of the present invention, there is provided an immersive surgical image processing method by which a surgical image processing apparatus outputs an endoscope image, the method including: receiving an endoscope image provided from a surgical endoscope; outputting the endoscope image in a specific region of a screen display unit; storing a modeling image of a surgical tool used to operate on the surgical target imaged by the surgical endoscope; generating an output image by matching the endoscope image and the modeling image with each other; and changing, corresponding to the viewpoint of the surgical endoscope, the specific region of the screen display unit in which the endoscope image is output while outputting the matched endoscope image and modeling image to the screen display unit.
Here, the generating of the output image may generate the output image by matching an actual surgical tool image included in the endoscope image with a modeled surgical tool image included in the modeling image.
The generating of the output image may further include: computing a characteristic value using one or more of the endoscope image and position coordinate information of the actual surgical tool coupled to one or more robot arms; and implementing a modeling image corresponding to the computed characteristic value.
Here, the generating of the output image may further include removing, from the modeled surgical tool image, the region in which the modeled surgical tool image and the actual surgical tool image overlap.
The position of the modeled surgical tool image output in the modeling image may be set using manipulation information of the surgical tool.
According to still another aspect of the present invention, there is provided an immersive surgical image processing method by which a surgical image processing apparatus outputs an endoscope image, the method including: receiving an endoscope image provided from a surgical endoscope; outputting the endoscope image on a screen display unit; and rotating and moving the screen display unit corresponding to the viewpoint of the surgical endoscope.
Here, the rotating and moving of the screen display unit may include: tracking viewpoint information of the surgical endoscope corresponding to the movement and rotation of the surgical endoscope; extracting movement information of the endoscope image using the viewpoint information of the surgical endoscope; and generating motion information for the screen display unit using the movement information.
Here, the screen display unit may include a dome-shaped screen and a projector that projects the endoscope image onto the dome-shaped screen.
According to still another aspect of the present invention, there is provided a recording medium in which a program of instructions executable by a digital processing apparatus to perform the above-described immersive surgical image processing method is tangibly embodied, the program being readable by the digital processing apparatus.
Aspects, features, and advantages other than those described above will become apparent from the following drawings, claims, and detailed description of the invention.
The surgical image processing apparatus and method according to the present invention enable smooth surgery by providing an actual image and a modeling image together during surgery.
The surgical image processing apparatus and method according to the present invention also allow the operator to operate while referring to images of not only the surgical site but also adjacent regions and the patient's external environment, moving beyond the conventional approach in which the operator views only the surgical site.
Because the surgical image processing apparatus and method according to the present invention can detect and warn of collisions between surgical tools in advance, the operator can recognize an impending collision and avoid it, thereby performing the surgery smoothly.
The surgical image processing apparatus and method according to the present invention also implement an interface through which the operator can freely select the type of image displayed on the monitor, so that the operator can use surgical images more conveniently.
According to an embodiment of the present invention, the instrument is controlled only in the manner intended by the operator, so that surgery proceeds normally.
Furthermore, erroneous manipulation of the robot arm and/or instrument is fundamentally prevented, which promotes the safety of the patient undergoing surgery.
According to an embodiment of the present invention, the position and image input angle of the laparoscope can be controlled merely by the operator's act of looking toward the desired surgical site.
In addition, no separate manipulation by the operator is required to operate the laparoscope, allowing the operator to concentrate solely on the surgical procedure.
The immersive surgical image processing apparatus and method according to the present invention change the output position of the endoscope image on the monitor in correspondence with the endoscope viewpoint, which varies with the movement of the surgical endoscope, so that the user perceives the actual surgical situation more realistically.
The immersive surgical image processing apparatus and method according to the present invention can also, at the current point in time, extract a previously received and stored endoscope image and output it on the screen display unit together with the current endoscope image, thereby informing the user of changes in the endoscope image.
The immersive surgical image processing apparatus and method according to the present invention can also output, to a monitor observable by the user, the endoscope image actually captured during surgery and the modeling image generated in advance for the surgical tool and stored in the image storage unit, either individually or matched with each other, with modifications such as size adjustment.
The immersive surgical image processing apparatus and method according to the present invention also rotate and move the monitor in correspondence with the variously changing viewpoint of the endoscope, so that the user experiences the surgery more vividly.
FIG. 1 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
FIG. 2 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
FIG. 3 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 4 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention.
FIG. 5 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
FIG. 6 is a configuration diagram of an output image according to a surgical image processing method according to an embodiment of the present invention.
FIG. 7 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 8 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention.
FIG. 9 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
FIG. 10 is a configuration diagram of an output image according to a surgical image processing method according to an embodiment of the present invention.
FIG. 11 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 12 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
FIG. 13 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
FIG. 14 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
FIG. 15 is a block diagram schematically showing the configurations of a master robot and a slave robot according to an embodiment of the present invention.
FIGS. 16 to 27 are diagrams illustrating region designation methods according to embodiments of the present invention.
FIG. 28 is a flowchart illustrating a restricted-area setting method according to an embodiment of the present invention.
FIGS. 29 and 30 are flowcharts illustrating an operation limiting method of a surgical robot system according to an embodiment of the present invention.
FIG. 31 is an exemplary view of a screen display for explaining an operation limiting method according to an embodiment of the present invention.
FIG. 32 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
FIG. 33 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
FIGS. 34 to 37 are diagrams illustrating movement forms of the contact portion according to an embodiment of the present invention.
FIG. 38 is a block diagram schematically showing the configuration of a telescopic display unit for generating laparoscopic manipulation commands according to an embodiment of the present invention.
FIG. 39 is a flowchart illustrating a laparoscopic manipulation command transmission method according to an embodiment of the present invention.
FIG. 40 is a block diagram schematically showing the configuration of a telescopic display unit for generating laparoscopic manipulation commands according to an embodiment of the present invention.
FIG. 41 is a flowchart illustrating a laparoscopic manipulation command transmission method according to an embodiment of the present invention.
FIG. 42 is a block diagram schematically showing the configuration of a telescopic display unit for generating laparoscopic manipulation commands according to an embodiment of the present invention.
FIG. 43 is a flowchart illustrating a laparoscopic manipulation command transmission method according to an embodiment of the present invention.
FIG. 44 is a flowchart illustrating a laparoscopic manipulation command transmission method according to an embodiment of the present invention.
FIG. 45 is a diagram illustrating image display forms produced by a telescopic display unit according to an embodiment of the present invention.
FIG. 46 is a flowchart illustrating a laparoscopic manipulation command transmission method according to an embodiment of the present invention.
FIG. 47 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention.
FIG. 48 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
FIG. 49 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 50 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
FIG. 51 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 52 is a configuration diagram of an output image according to an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 53 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 54 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
FIG. 55 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 56 is a configuration diagram of an output image according to an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 57 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 58 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
FIG. 59 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 60 is a configuration diagram of an output image according to an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 61 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
FIG. 62 is a block diagram of a surgical robot according to an embodiment of the present invention.
FIG. 63 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention.
FIG. 64 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention.
FIG. 65 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
As the present invention allows for various modifications and numerous embodiments, particular embodiments are illustrated in the drawings and described in detail in the written description. However, this is not intended to limit the present invention to specific embodiments; the present invention should be understood to include all modifications, equivalents, and substitutes falling within its spirit and technical scope. In describing the present invention, where a detailed description of related known technology is judged liable to obscure the gist of the present invention, that detailed description is omitted.
Terms such as "first" and "second" may be used to describe various components, but the components are not limited by these terms. The terms are used only for the purpose of distinguishing one component from another.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, terms such as "comprise" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should not be understood to exclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
In addition, terms such as "...unit", "...er", and "module" used in the specification denote units that process at least one function or operation, and may be implemented in hardware, in software, or in a combination of hardware and software.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the description with reference to the accompanying drawings, identical or corresponding components are given the same reference numerals, and redundant descriptions thereof are omitted.
In addition, in describing the various embodiments of the present invention, the embodiments are not to be interpreted or practiced independently of one another; the feature elements and/or technical ideas described in each embodiment may be interpreted or practiced in combination with other embodiments described separately.
In addition, the following description will make it clear that the present invention is a technical idea applicable universally to surgery or experiments in which a vision unit such as an endoscope or a microscope is used. The endoscope may be any of a laparoscope, thoracoscope, arthroscope, rhinoscope, cystoscope, proctoscope, duodenoscope, mediastinoscope, cardioscope, and the like. Hereinafter, however, for convenience of description and understanding, the case where the vision unit is a kind of endoscope, namely a laparoscope, is described as an example.
FIG. 1 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention, and FIG. 2 is a conceptual diagram showing the master interface of the surgical robot according to an embodiment of the present invention.
The present embodiment is characterized in that the endoscope image actually captured using the endoscope during surgery and the modeling image generated and stored in advance for the surgical tools can be modified, for example by matching them individually or with each other or by adjusting their sizes, and output to a monitor observable by the operator. In addition, the present embodiment outputs a camera image, generated by photographing the surgical subject (patient) lying on the operating table with a camera, in combination with the endoscope image and/or the modeling image, so that, departing from the conventional approach in which the operator performed surgery while viewing only the surgical site, the operator can operate while also referring to images of the adjacent regions and of the patient's external environment.
The endoscope according to the present embodiment may be not only a laparoscope but also any of various kinds of instruments used as imaging tools during surgery, such as a thoracoscope, arthroscope, rhinoscope, cystoscope, proctoscope, duodenoscope, mediastinoscope, or cardioscope. Furthermore, the surgical image processing apparatus according to the present embodiment is not necessarily limited to implementation in a surgical robot system as illustrated, and is applicable to any system that outputs an endoscope image during surgery and performs surgery using surgical tools. The following description centers on the case where the surgical image processing apparatus according to the present embodiment is applied to a surgical robot system.
Referring to FIGS. 1 and 2, the surgical robot system comprises a slave robot 2 that performs surgery on a patient lying on the operating table, and a master robot 1 with which the operator remotely controls the slave robot 2. The master robot 1 and the slave robot 2 need not be separated into physically independent devices; they may be integrated into a single unit, in which case the master interface 4 may correspond, for example, to the interface portion of the integrated robot.
The master interface 4 of the master robot 1 includes a monitor unit 6 and a master controller, and the slave robot 2 includes robot arms 3 and instruments 8. The instruments 8 are surgical tools such as an endoscope, for example a laparoscope, or surgical instruments that directly manipulate the affected area. The master interface 4 may further include a button for image selection, as described later. The image selection button may be implemented in the form of a clutch button or a pedal (not shown), but its implementation is not limited thereto; it may also be implemented, for example, as a function menu or mode selection menu displayed through the monitor unit 6.
The master interface 4 is provided with a master controller so that the operator can grip and manipulate it with both hands. As illustrated in FIGS. 1 and 2, the master controller may be implemented as two handles 10; a manipulation signal corresponding to the operator's manipulation of the handles 10 is transmitted to the slave robot 2, whereby the robot arms 3 are controlled. Positional movement, rotation, cutting operations, and the like of the robot arms 3 and/or the instruments 8 may be performed by the operator's manipulation of the handles 10.
For example, the handle 10 may consist of a main handle and a sub handle. The slave robot arm 3, the instrument 8, and so on may be operated with only one handle, or a sub handle may be added so that a plurality of surgical devices can be operated simultaneously in real time. The main handle and the sub handle may have various mechanical configurations depending on the manipulation method; for example, various input means for operating the robot arms 3 and/or other surgical equipment of the slave robot 2 may be used, such as a joystick type, a keypad, a trackball, or a touchscreen.
The master controller is not limited to the shape of the handles 10, and may be applied without limitation as long as it takes a form capable of controlling the operation of the robot arms 3 over a network.
On the monitor unit 6 of the master interface 4, the endoscope image, the camera image, and the modeling image input by means of the instruments 8 are displayed as picture images. The information displayed on the monitor unit 6 may vary according to the type of image selected.
The monitor unit 6 may consist of one or more monitors, and the information required during surgery may be displayed individually on each monitor. FIGS. 1 and 2 illustrate a case in which the monitor unit 6 includes three monitors; the number of monitors may be determined in various ways according to the type or kind of information requiring display.
The slave robot 2 and the master robot 1 may be coupled to each other through a wired or wireless communication network, so that manipulation signals, the endoscope image input through the instrument 8, and the like can be transmitted to the other party. If the two manipulation signals from the two handles 10 provided on the master interface 4 and/or the manipulation signal for adjusting the position of the instrument 8 need to be transmitted at the same time and/or at similar times, each manipulation signal may be transmitted to the slave robot 2 independently of the others. Here, each manipulation signal being transmitted "independently" means that the manipulation signals do not interfere with one another and that no one manipulation signal affects another. In order to transmit a plurality of manipulation signals independently of one another in this way, various schemes may be used, such as adding header information to each manipulation signal at the stage of generating it, transmitting each manipulation signal in its order of generation, or setting priorities in advance for the transmission order of the manipulation signals and transmitting them accordingly. In this case, the transmission path over which each manipulation signal travels may be provided independently, so that interference between manipulation signals is fundamentally prevented.
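Purely as an illustrative sketch, and not part of the original disclosure, the header-and-priority scheme described above can be pictured as follows in Python; the `ManipulationSignal` header fields, the priority values, and the `send` callback are all assumptions made for the example:

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class ManipulationSignal:
    # Header fields that keep concurrently generated signals independent:
    # a pre-agreed priority, a monotonic sequence number (generation order),
    # and the identifier of the source (a handle or the instrument positioner).
    priority: int
    seq: int
    source_id: str = field(compare=False)
    payload: bytes = field(compare=False)

class SignalDispatcher:
    """Queues signals from several sources and emits them one at a time,
    ordered by (priority, generation order), so that no signal interferes
    with or overwrites another."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()

    def submit(self, source_id, payload, priority=0):
        heapq.heappush(self._queue, ManipulationSignal(
            priority, next(self._counter), source_id, payload))

    def transmit_all(self, send):
        # `send` is an assumed callback, e.g. a network send to the slave robot.
        while self._queue:
            send(heapq.heappop(self._queue))
```

For example, `submit("left_handle", ...)` and `submit("scope_positioner", ...)` may be called at effectively the same time, and each signal is still delivered intact in a well-defined order.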
The robot arm 3 of the slave robot 2 may be implemented to be driven with multiple degrees of freedom. The robot arm 3 may comprise, for example, a surgical tool inserted into the surgical site of the patient; a yaw drive unit that rotates the surgical tool in the yaw direction according to the surgical position; a pitch drive unit that rotates the surgical tool in the pitch direction, orthogonal to the rotational drive of the yaw drive unit; a translation drive unit that moves the surgical tool in its longitudinal direction; a rotation drive unit that rotates the surgical tool; and a surgical tool drive unit installed at the tip of the surgical tool to incise or cut the surgical lesion. However, the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention. In addition, the actual control process by which the robot arm 3 rotates and moves in the corresponding direction as the operator manipulates the handles 10 is somewhat removed from the gist of the present invention, and its detailed description is therefore omitted.
One or more slave robots 2 may be used to operate on the patient, the instrument 8 for displaying the surgical site as a picture image through the monitor unit 6 may be implemented as an independent slave robot 2, and the master robot 1 may also be implemented integrally with the slave robot 2.
FIG. 3 is a block diagram schematically showing the configuration of a surgical robot according to an embodiment of the present invention. FIG. 3 shows a master robot 1, which includes an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, an image matching unit 350, an image storage unit 360, and a control unit 370, and a slave robot 2, which includes a robot arm 3 and a laparoscope 5.
The surgical image processing apparatus according to the present embodiment may be implemented as a module including the image input unit 310, the screen display unit 320, the image matching unit 350, and the image storage unit 360; such a module may, of course, also include the manipulation signal generation unit 340 and the control unit 370.
The image input unit 310 receives, through a wired or wireless communication network, the image input through the camera provided in the laparoscope 5 of the slave robot 2. The laparoscope 5 may also be regarded as one kind of surgical tool according to the present embodiment, and one or more laparoscopes may be provided.
The screen display unit 320 outputs a picture image corresponding to the image received through the image input unit 310 as visual information. The screen display unit 320 may output the endoscope image at its original size or zoomed in/out, may output the endoscope image and the modeling image matched with each other, or may output each as a separate image.
In addition, the screen display unit 320 may output the endoscope image simultaneously with, and/or matched to, an image showing the overall surgical situation, for example a camera image generated by a camera photographing the outside of the surgical subject as described later, thereby facilitating grasp of the situation during surgery. In addition, the screen display unit 320 may output a reduced image of the whole image (endoscope image, modeling image, camera image, etc.) in a portion of the output image or in a window created on a separate screen; when the operator selects or rotates a specific point on the reduced output image using the master controller described above, the whole output image moves or rotates accordingly, performing the so-called bird's-eye view function of a CAD program. Functions such as zooming in/out, movement, and rotation of the image output on the screen display unit 320 as described above may be controlled by the control unit 370 in accordance with manipulation of the master controller.
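As a rough illustration of how zoom in/out, movement, and rotation of the displayed image could be realized under control of the control unit 370, the following Python sketch, an assumption made for illustration rather than the original implementation, keeps a single 2D view transform that master-controller events update:

```python
import numpy as np

class Viewport:
    """Minimal 2D view transform: the displayed image is the source image
    rotated, scaled, and panned. Zoom/move/rotate requests from the master
    controller update this transform before each redraw."""
    def __init__(self):
        self.scale = 1.0           # zoom factor
        self.angle = 0.0           # rotation in radians
        self.offset = np.zeros(2)  # pan in screen pixels

    def zoom(self, factor, focus):
        # Keep the chosen focus point fixed on screen while zooming.
        focus = np.asarray(focus, dtype=float)
        self.offset = focus + factor * (self.offset - focus)
        self.scale *= factor

    def rotate(self, dtheta):
        self.angle += dtheta

    def pan(self, delta):
        self.offset += np.asarray(delta, dtype=float)

    def to_screen(self, p):
        # Map a source-image point p to screen coordinates.
        c, s = np.cos(self.angle), np.sin(self.angle)
        rotation = np.array([[c, -s], [s, c]])
        return self.scale * (rotation @ np.asarray(p, dtype=float)) + self.offset
```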
The screen display unit 320 may be implemented in the form of the monitor unit 6 or the like, and the image processing process for outputting the received image as a picture image through the screen display unit 320 may be performed by the control unit 370, by the image matching unit 350, or by a separate image processing unit (not shown). Alternatively, the screen display unit 320 may be a terminal for outputting images to the monitor unit 6; in this case, the present embodiment does not include a display such as the monitor unit 6, and the screen display unit 320 may be any means (hardware or software) that delivers the image signal to the monitor unit 6.
The arm manipulation unit 330 is a means by which the operator manipulates the position and function of the robot arms 3 of the slave robot 2. The arm manipulation unit 330 may be formed in the shape of the handles 10 as illustrated in FIG. 2, but its shape is not limited thereto and may be modified into various shapes serving the same purpose. For example, part of it may be formed in the shape of a handle and another part in a different shape such as a clutch button, and a finger insertion tube or ring into which the operator's fingers can be inserted and fixed may further be formed to facilitate manipulation of the surgical tools.
The manipulation signal generation unit 340 generates a corresponding manipulation signal when the operator manipulates the arm manipulation unit 330 in order to move the position of the robot arm 3 and/or the laparoscope 5 or to perform a surgical operation, and transmits it to the slave robot 2. As described above, the manipulation signal may be transmitted and received through a wired or wireless communication network.
The image matching unit 350 matches the endoscope image received through the image input unit 310 with the modeling image of the surgical tools stored in the image storage unit 360 to generate an output image, and outputs it to the screen display unit 320. The endoscope image, being an image of the inside of the patient's body captured using the endoscope and thus obtained by photographing only a limited region, includes an image of only part of the surgical tools.
The modeling image is an image generated by realizing the shape of the whole surgical tool as a 2D or 3D image. The modeling image may be an image of the surgical tools captured at a specific point before the start of surgery, for example in an initial setup state. Since the modeling image is an image of the surgical tools generated by computer simulation techniques, the image matching unit 350 can match the modeling image with the surgical tools shown in the actual endoscope image and output the result. The technique of obtaining an image by modeling a real object is somewhat removed from the gist of the present invention, and its detailed description is therefore omitted. The specific functions and various detailed configurations of the image matching unit 350 are described in detail later with reference to the relevant drawings.
The control unit 370 controls the operation of each component so that the functions described above can be performed. The control unit 370 may also perform the function of converting the image input through the image input unit 310 into a picture image to be displayed through the screen display unit 320. In addition, when manipulation information corresponding to manipulation of the arm manipulation unit 330 is input, the control unit 370 controls the image matching unit 350 so that the modeling image is output through the screen display unit 320 accordingly.
The actual surgical tool included in the endoscope image is the surgical tool included in the image input by the laparoscope 5 and transmitted to the master robot 1, and is the surgical tool that applies surgical action directly to the patient's body. In contrast, the modeled surgical tool included in the modeling image is mathematically modeled in advance for the whole surgical tool and stored in the image storage unit 360 as a 2D or 3D image. Both the surgical tool in the endoscope image and the modeled surgical tool in the modeling image can be controlled by the manipulation information recognized by the master robot 1 as the operator manipulates the arm manipulation unit 330 (that is, information on the movement, rotation, and so on of the surgical tools). The positions and manipulation shapes of the actual surgical tool and the modeled surgical tool can be determined by the manipulation information.
The manipulation signal generation unit 340 generates a manipulation signal using the manipulation information corresponding to the operator's manipulation of the arm manipulation unit 330, and transmits the generated manipulation signal to the slave robot 2, with the result that the actual surgical tool is manipulated in accordance with the manipulation information. In addition, the position and manipulation shape of the actual surgical tool manipulated by the manipulation signal can be confirmed by the operator from the image input by the laparoscope 5.
The modeling image may also include images reconstructed by modeling not only the surgical tools but also the patient's organs. That is, the modeling image may include 2D or 3D images of the surface of the patient's organs reconstructed with reference to images acquired from imaging equipment such as CT (computed tomography), MR (magnetic resonance), PET (positron emission tomography), SPECT (single photon emission computed tomography), or US (ultrasonography); in this case, matching the actual endoscope image with the computationally modeled image has the advantage of providing the operator with a whole image including the surgical site.
FIG. 4 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention. Referring to FIG. 4, the image matching unit 350 may include a characteristic value computation unit 351, a modeling image implementation unit 353, and an overlapping image processing unit 355.
The characteristic value computation unit 351 computes characteristic values using the image input and provided by the laparoscope 5 of the slave robot 2 and/or coordinate information on the positions of the actual surgical tools coupled to the robot arms 3. The positions of the actual surgical tools can be recognized by reference to the position values of the robot arms 3 of the slave robot 2, and information on those positions may also be provided from the slave robot 2 to the master robot 1.
The characteristic value computation unit 351 can compute, for example using the image from the laparoscope 5, characteristic values such as the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, and the type, direction, depth, degree of bending, and so on of the actual surgical tools. When computing characteristic values using the image from the laparoscope 5, image recognition techniques may be used for extracting the outline of a subject included in the image, recognizing its shape, recognizing its tilt angle, and the like. The type of the actual surgical tool and the like may also be input in advance, for example in the course of coupling the surgical tool to the robot arm 3.
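One conventional way to derive such characteristic values from the laparoscope image is outline extraction followed by principal-axis analysis. The sketch below is illustrative only; it assumes that a binary tool segmentation `mask` is already available, and estimates a tool's in-image position, direction, and rough extent using image moments:

```python
import numpy as np

def tool_pose_from_mask(mask):
    """Estimate a tool's in-image position and direction from a binary
    segmentation mask using image moments (principal-axis analysis).
    `mask` is an HxW array whose nonzero entries label tool pixels."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()          # centroid: tool position in image
    dx, dy = xs - cx, ys - cy
    cov = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                    [np.mean(dx * dy), np.mean(dy * dy)]])
    evals, evecs = np.linalg.eigh(cov)
    axis = evecs[:, np.argmax(evals)]      # principal axis = shaft direction
    angle = np.degrees(np.arctan2(axis[1], axis[0]))
    return {"centroid": (cx, cy), "angle_deg": angle,
            "extent_px": 4.0 * np.sqrt(evals.max())}  # rough shaft length
```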
The modeling image implementation unit 353 implements a modeling image corresponding to the characteristic values computed by the characteristic value computation unit 351. Data related to the modeling image can be extracted from the image storage unit 360. That is, the modeling image implementation unit 353 extracts the modeling image data for the surgical tools and so on corresponding to the characteristic values of the laparoscope 5 (field of view (FOV), magnification, viewpoint, viewing depth, etc., and the type, direction, depth, degree of bending, etc. of the actual surgical tools) and implements the modeling image so that it matches the surgical tools and the like in the endoscope image.
The method by which the modeling image implementation unit 353 extracts an image corresponding to the characteristic values computed by the characteristic value computation unit 351 may be implemented in various ways. For example, the modeling image implementation unit 353 may directly use the characteristic values of the laparoscope 5 to extract the corresponding modeling image. That is, the modeling image implementation unit 353 may refer to data such as the field of view and magnification of the laparoscope 5 described above, extract the corresponding 2D or 3D modeled surgical tool image, and match it with the endoscope image. Here, characteristic values such as the field of view and the magnification may be computed through comparison with a reference image according to the initial settings, or by comparing and analyzing sequentially generated images of the laparoscope 5 against one another.
According to another embodiment, the modeling image implementation unit 353 may extract the modeling image using the manipulation information that determines the positions and manipulation shapes of the laparoscope 5 and the robot arms 3. That is, since, as described above, the surgical tool in the endoscope image can be controlled by the manipulation information recognized by the master robot 1 as the operator manipulates the arm manipulation unit 330, the position and manipulation shape of the modeled surgical tool corresponding to the characteristic values of the endoscope image can be determined by the manipulation information.
Such manipulation information may be stored in a separate database in temporal order, and the modeling image implementation unit 353 may recognize the characteristic values of the actual surgical tools by referring to this database and extract information about the modeling image accordingly. That is, the position of the surgical tool output in the modeling image may be set using the accumulated data of the tool's position change signals. For example, if the manipulation information for a surgical instrument includes a 90-degree clockwise rotation and a 1 cm movement in the direction of extension, the modeling image implementation unit 353 may transform and extract the image of the surgical instrument included in the modeling image in accordance with that manipulation information.
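The accumulation of position-change signals can be illustrated as composing rigid transforms in the order they were logged. In the hedged Python sketch below, the event names `rotate_deg` and `advance_cm` and the choice of the z axis as the tool's extension direction are assumptions made for the example:

```python
import numpy as np

def rot_z(deg):
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    rotation = np.eye(4)
    rotation[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    return rotation

def translate(v):
    transform = np.eye(4)
    transform[:3, 3] = v
    return transform

class ModeledTool:
    """Accumulates logged manipulation events into a single 4x4 pose, which
    is then applied to the stored tool model before rendering."""
    def __init__(self):
        self.pose = np.eye(4)

    def apply(self, event):
        kind, value = event
        if kind == "rotate_deg":    # rotation about the tool axis (z here)
            self.pose = self.pose @ rot_z(value)
        elif kind == "advance_cm":  # movement along the extension direction
            self.pose = self.pose @ translate([0, 0, value])

# Replaying the example from the text: 90 degrees clockwise, then 1 cm forward
# (clockwise about +z in a right-handed frame is a negative angle).
tool = ModeledTool()
for event in [("rotate_deg", -90), ("advance_cm", 1.0)]:
    tool.apply(event)
```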
Here, the surgical instrument is mounted on the tip of a surgical robot arm provided with an actuator; driving wheels (not shown) provided in a drive unit (not shown) operate by receiving driving force from the actuator, and a manipulator 150 connected to the driving wheels and inserted into the body of the surgical patient performs predetermined operations, whereby surgery is performed. The driving wheel is formed in a disc shape and can be clutched to the actuator to receive the driving force. The number of driving wheels may be determined according to the number of objects to be controlled; since the technology of such driving wheels is obvious to those skilled in the art of surgical instruments, its detailed description is omitted.
The overlapping image processing unit 355 outputs only part of the modeling image so that the actually captured endoscope image and the modeling image do not overlap. That is, when the endoscope image includes part of the shape of a surgical tool and the modeling image implementation unit 353 outputs the corresponding modeled surgical tool, the overlapping image processing unit 355 identifies the overlapping region between the actual surgical tool image in the endoscope image and the modeled surgical tool image, and deletes the overlapping portion from the modeled surgical tool image so that the two images can be matched with each other. The overlapping image processing unit 355 can thus process the overlapping region by removing, from the modeled surgical tool image, the region in which it overlaps the actual surgical tool image.
For example, if the total length of the actual surgical tool is 20 cm and, taking the characteristic values into account (field of view (FOV), magnification, viewpoint, viewing depth, etc., and the type, direction, depth, degree of bending, etc. of the actual surgical tool), the length of the actual surgical tool image shown in the endoscope image is 3 cm, the overlapping image processing unit 355 uses the characteristic values to include in the modeling image, and output, only the portion of the modeled surgical tool image that is not shown in the endoscope image.
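In mask terms, the overlap processing described above amounts to clipping the rendered model by the endoscope's field of view. The following is a minimal sketch, offered only as an illustration and assuming boolean masks already registered in a common output frame:

```python
import numpy as np

def visible_model_region(model_mask, endo_fov_mask):
    """Keep only the part of the rendered model that falls outside the
    endoscope's field of view; inside the FOV the real image is shown.
    Both arguments are boolean HxW masks in the composed output frame."""
    return np.logical_and(model_mask, np.logical_not(endo_fov_mask))

def compose(output, model_render, model_mask, endo_image, endo_fov_mask):
    """Paint the clipped model render where the model is visible, then paint
    the endoscope image inside its FOV."""
    keep = visible_model_region(model_mask, endo_fov_mask)
    output[keep] = model_render[keep]
    output[endo_fov_mask] = endo_image[endo_fov_mask]
    return output
```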
FIG. 5 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
In step S510, a modeling image of the surgical subject and/or the surgical tools is generated and stored in advance. The modeling image may be modeled by computer simulation, and the present embodiment may also generate the modeling image using a separate modeling image generation apparatus.
In step S520, an endoscope image of the vicinity of the patient's affected area is generated using the endoscope. The endoscope image is an image capturing the patient's organs, the surgical tools, and so on, and includes partial images of them.
In step S530, the characteristic value computation unit 351 computes the characteristic values of the endoscope image. As described above, the characteristic value computation unit 351 computes the characteristic values using the image input and provided by the laparoscope 5 of the slave robot 2 and/or coordinate information on the positions of the actual surgical tools coupled to the robot arms 3; the characteristic values may be the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, and the type, direction, depth, degree of bending, and so on of the actual surgical tools.
In step S540, the image matching unit 350 extracts the modeling image corresponding to the endoscope image, processes the overlapping region, and matches the two images with each other for output.
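Tying steps S520 to S540 together, a skeleton of one processing pass might look like the following; `segment_tools` and `render_model` are hypothetical stand-ins introduced only for this sketch, while `tool_pose_from_mask` and `compose` refer to the earlier sketches:

```python
import numpy as np

def segment_tools(frame):
    # Hypothetical stand-in for the image-recognition stage (input to S530):
    # a toy threshold in place of a real tool segmentation.
    return frame[..., 0] > 128

def render_model(pose, shape):
    # Hypothetical stand-in for the modeling image implementation unit:
    # a real version would draw the stored 2D/3D tool model at `pose`.
    render = np.zeros(shape, dtype=np.uint8)
    mask = np.zeros(shape[:2], dtype=bool)
    return render, mask

def process_frame(endoscope_frame, canvas):
    # endoscope_frame is assumed already resampled into the canvas frame.
    mask = segment_tools(endoscope_frame)                  # S520: live image
    pose = tool_pose_from_mask(mask)                       # S530: characteristic values
    render, model_mask = render_model(pose, canvas.shape)  # S540: matching model
    h, w = canvas.shape[:2]
    fov = np.zeros((h, w), dtype=bool)
    fov[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True     # toy central FOV
    return compose(canvas, render, model_mask, endoscope_frame, fov)
```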
FIG. 6 is an exemplary view of an output image according to a surgical image processing method according to an embodiment of the present invention. Referring to FIG. 6, a modeling image 610, a first modeled surgical tool 612, a second modeled surgical tool 614, an endoscope image 620, and an actual surgical tool 622 are shown.
The endoscope image 620 is located in the central region of the modeling image 610 and the modeling image 610 is output in the region surrounding the endoscope image 620; however, the endoscope image 620 is not particularly limited to that position and may be output at various positions on the screen display unit 320, for example upper right, upper left, lower right, or lower left. The endoscope image 620 may be output on the full screen of the screen display unit 320, or may be output small, as a zoomed-out image, so as to appear set back from the screen as shown. In the latter case, the modeling image 610 may be matched with the endoscope image 620 and output together with it.
The first modeled surgical tool 612 is an image not output in the endoscope image 620 but represents a surgical tool located adjacent to the actual surgical site. The second modeled surgical tool 614 is a modeling image shown as an extension of the actual surgical tool 622 included in the endoscope image 620. The actual surgical tool 622 and the second modeled surgical tool 614 can be drawn using the characteristic values, as described above, so that they match each other and do not appear to overlap.
In addition, these images may be output to suit the operator's convenience through functions such as zoom in, zoom out, rotation, refresh, and focus movement. For example, when the overall size of the endoscope image 620 is reduced, since the size output on the whole screen display unit 320 is fixed, the sizes and exposed portions of the first modeled surgical tool 612 and the second modeled surgical tool 614 can be determined and output accordingly. In addition, the endoscope image 620 and the modeling image 610 may be implemented in 3D, so that when the operator rotates the image in a given direction, the modeling image is transformed and output in accordance with that direction of rotation.
Although the first modeled surgical tool 612 and the second modeled surgical tool 614 are shown as surgical instruments provided at one end with a manipulator capable of handling the affected area, the present invention is not limited thereto; for example, the first modeled surgical tool 612 and/or the second modeled surgical tool 614 may be a surgical tool such as a laparoscope.
FIG. 7 is a block diagram of a surgical robot according to an embodiment of the present invention. FIG. 7 shows a master robot 1, which includes an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, an image matching unit 350, an image storage unit 360, and a control unit 370, and a slave robot 2, which includes a robot arm 3, a camera 7, and a laparoscope 5. The description focuses on the differences from the foregoing.
The present embodiment is characterized in that a camera image of the patient captured from outside is output in combination with the endoscope image and modeling image described above, providing the operator with images not only of the patient's interior but also of the patient's exterior, so that surgery can be performed more effectively.
The camera 7 is provided outside the patient, for example on the operating room ceiling or at one side of the operating table, and photographs the surgical scene to generate an image. The image input unit 310 receives the camera image generated by the camera 7 and causes it to be displayed on the screen display unit 320.
As described above, the image matching unit 350 matches two or more of the endoscope image, the modeling image, and the camera image with one another and displays them on the screen display unit 320. Accordingly, the images output together according to the present embodiment may be various combinations, for example a combination of the endoscope image and the modeling image, a combination of the endoscope image and the camera image, or a combination of the modeling image and the camera image.
FIG. 8 is a block diagram of a surgical image processing apparatus according to an embodiment of the present invention. FIG. 8 shows the image matching unit 350, the characteristic value computation unit 351, the modeling image implementation unit 353, the overlapping image processing unit 355, a collision detection unit 357, and a warning information output unit 359. The description focuses on the differences from the foregoing.
The present embodiment is characterized in that, when there is a risk of collision between the surgical tools described above, this can be detected and the operator warned. That is, the surgical tools may collide with one another not only within the endoscope image but also outside it, and since the present embodiment can detect and warn of such collisions in advance, the operator can recognize the impending collision and avoid it, with the advantage that surgery can proceed smoothly.
The collision detection unit 357 detects whether the modeled surgical tool images included in the modeling image collide with one another. Since the positions and manipulation shapes of surgical tools such as the surgical instruments and the laparoscope 5 are determined in accordance with the manipulation information described above, when the operator manipulates the arm manipulation unit 330 the collision detection unit 357 analyzes the manipulation information to determine whether the surgical tools collide with one another, and generates a collision detection signal if the manipulation information would move the surgical tools into colliding positions.
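A simple geometric realization of this check treats each tool shaft, at the position predicted from the manipulation information, as a line segment and tests pairwise clearance. The sampling-based distance below is a coarse illustrative assumption, not the disclosed method:

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Approximate minimum distance between 3D segments p1-q1 and p2-q2,
    sampled coarsely; adequate for a threshold test in a sketch."""
    p1, q1, p2, q2 = (np.asarray(v, dtype=float) for v in (p1, q1, p2, q2))
    t = np.linspace(0.0, 1.0, 32)
    a = p1[None] + t[:, None] * (q1 - p1)[None]   # samples along segment 1
    b = p2[None] + t[:, None] * (q2 - p2)[None]   # samples along segment 2
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min()

def check_collision(tools, clearance_cm=0.5):
    """Each tool is a (base_xyz, tip_xyz) pair predicted from the manipulation
    information. Returns the index pairs whose shafts come closer than the
    clearance, i.e. the pairs for which a collision detection signal fires."""
    hits = []
    for i in range(len(tools)):
        for j in range(i + 1, len(tools)):
            if segment_distance(*tools[i], *tools[j]) < clearance_cm:
                hits.append((i, j))
    return hits
```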
경고 정보 출력부(359)는 충돌 감지부(357)로부터 충돌 감지 신호를 수신하여 소정의 경고 정보를 출력할 수 있다. 경고 정보는 소리 정보, 색상 정보, 떨림 정보 등과 같이 수술자가 인식가능한 정보가 될 수 있으며, 예를 들면, 소정의 스피커에서 송출하는 경보음, 화면 표시부(320) 상에 출력되는 텍스트, 아이콘, 테두리 색상, 바탕 색상 변화 정보, 암 조작부(330)의 떨림 신호 등이 될 수 있다. 실제 수술 도구가 충돌하는 경우 발생하는 마찰음이 미리 저장된 후 해당 충돌 감지시 상기 마찰음을 경고음으로 송출할 수도 있다. The warning information output unit 359 may receive a collision detection signal from the collision detection unit 357 and output predetermined warning information. The warning information may be information that can be recognized by the operator, such as sound information, color information, and vibration information. For example, an alarm sound transmitted from a predetermined speaker and text, an icon, and a border displayed on the screen display 320 may be used. It may be a color, background color change information, a shake signal of the arm manipulation unit 330, and the like. The friction sound generated when the actual surgical tool collides may be stored in advance, and then the friction sound may be sent as a warning sound when the collision is detected.
In addition, when the collision detection unit 357 determines that the surgical tools are in contact, it may, for example, prevent further manipulation of the arm manipulation unit 330 or cause force feedback to be generated through the arm manipulation unit 330. For force feedback, a virtual wall may be formed in the direction in which the collision would occur; when the operator manipulates the master manipulator and the surgical tools approach each other's virtual walls, this is detected and the tools are controlled to move along the virtual wall or in a different direction, so that an actual collision between them is avoided. A separate component for processing the force feedback may be included as a component of the master robot 1.
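The sketch below illustrates the virtual-wall behavior under stated assumptions: when a commanded tip position would cross a wall plane, the motion is projected onto the plane (the tool "slides along the wall") and a restoring force for the handle feedback is computed. The plane representation and the stiffness constant are assumptions, not part of the embodiment.

```python
import numpy as np

WALL_STIFFNESS = 0.5  # hypothetical spring constant for the feedback force

def constrain_to_wall(commanded, wall_point, wall_normal):
    """Returns (allowed_position, feedback_force). All arguments are 3-D
    numpy arrays; wall_normal points from the wall into the permitted side."""
    n = wall_normal / np.linalg.norm(wall_normal)
    penetration = np.dot(wall_point - commanded, n)
    if penetration <= 0.0:               # still on the permitted side
        return commanded, np.zeros(3)
    allowed = commanded + penetration * n      # slide along the wall plane
    force = WALL_STIFFNESS * penetration * n   # push back toward permitted side
    return allowed, force
```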
FIG. 9 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
In step S510, a modeling image of the surgical target and/or the surgical tools is generated and stored in advance, and in step S520, an endoscope image adjacent to the affected part of the patient is generated using the endoscope.
In step S530, the characteristic value calculation unit 351 calculates the characteristic values of the endoscope image, and in step S540, the image matching unit 350 extracts the modeling image corresponding to the endoscope image, processes the overlapping region, and registers and outputs the two images.
In step S550, the collision detection unit 357 analyzes the manipulation information to detect whether a surgical tool is in a colliding position; if it is, then in step S560 the warning information output unit 359 outputs warning information such as sound, color, or vibration information, as described above.
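Purely as an illustration of the control flow of FIG. 9, the sketch below strings steps S530 to S560 together; the component objects stand in for units 350, 351, 357, and 359, and all method names are assumptions.

```python
def process_frame(endoscope_image, manipulation_info,
                  calculator, matcher, detector, warner, model_store):
    traits = calculator.characteristic_values(endoscope_image)  # S530
    model = model_store.lookup(traits)                          # S540: pick model,
    composite = matcher.register(endoscope_image, model)        #       fuse images
    if detector.tools_in_collision(manipulation_info):          # S550
        warner.emit()                                           # S560
    return composite
```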
Although the present embodiment has been described here in the order of extracting and registering the modeling image corresponding to the endoscope image and then determining whether the surgical tools collide, the present invention is not limited thereto. For example, the step of determining whether the surgical tools collide may be performed immediately whenever the above-described manipulation information is generated, or immediately whenever an endoscope image is generated.
FIG. 10 illustrates the composition of an output image according to the surgical image processing method according to an embodiment of the present invention. Referring to FIG. 10, a first robot arm 121, a second robot arm 122, a first surgical tool 202, a second surgical tool 204, a first driving unit 231, a second driving unit 232, a modeling image 610, a first modeled surgical tool 612, a second modeled surgical tool 614, an endoscope image 620, an actual surgical tool 622, and a camera image 630 are shown. The following description focuses on the differences from what has been described above.
The camera image 630 is, as described above, an image captured and generated by the camera 7 installed in the patient's external environment. The camera image 630 may include images of the operating table, the patient, and the like. The first driving unit 231 and the second driving unit 232 are driving means for driving the first surgical tool 202 and the second surgical tool 204, respectively; since their details are somewhat removed from the gist of the present invention, a detailed description thereof is omitted.
Referring to FIG. 10, the endoscope image 620, the modeling image 610, and the camera image 630 are output while each occupying a specific position on the screen display unit 320. The characteristic values of the first modeled surgical tool 612 and the first surgical tool 202, and of the second modeled surgical tool 614 and the second surgical tool 204, are calculated and the corresponding images registered and output, as described above. According to another embodiment, the camera image 630 may be entirely replaced by the modeling image 610. That is, the modeling image 610 may be an image implemented by modeling the entire surgical system, including the first robot arm 121, the second robot arm 122, the first surgical tool 202, the second surgical tool 204, the first driving unit 231, and the second driving unit 232, and this image may be combined and/or registered with the endoscope image 620 and output.
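A minimal compositing sketch of the screen layout of FIG. 10 follows; the canvas size, slot positions, and the assumption that each input frame is pre-scaled to fit its slot are illustrative, not taken from the embodiment.

```python
import numpy as np

def compose_display(endoscope, modeling, camera, canvas_hw=(1080, 1920)):
    """Inputs are H x W x 3 uint8 arrays, pre-scaled to fit their slots;
    returns the composed display canvas."""
    canvas = np.zeros((*canvas_hw, 3), dtype=np.uint8)
    def paste(img, top, left):
        h, w = img.shape[:2]
        canvas[top:top + h, left:left + w] = img
    paste(endoscope, 0, 0)      # main endoscope view, upper-left
    paste(modeling, 0, 1280)    # modeling view, upper-right
    paste(camera, 640, 1280)    # external camera view, lower-right
    return canvas
```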
FIG. 11 is a block diagram of a surgical robot according to an embodiment of the present invention. Referring to FIG. 11, a master robot 1 including an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, an image matching unit 350, an image storage unit 360, a control unit 370, and an image selection unit 380, and a slave robot 2 including a robot arm 3, a camera 7, and a laparoscope 5, are shown. The following description focuses on the differences from what has been described above.
This embodiment is characterized in that, by implementing an interface through which the operator can select at will the type of image to be displayed on the screen display unit 320, the operator can use the surgical images more conveniently. That is, according to the present embodiment, any one or more of the endoscope image, the modeling image, and the camera image described above may be output according to the operator's selection.
The image selection unit 380 is a means for selecting, from among the endoscope image, the modeling image, and the camera image described above, the images to be registered with one another and output. The image selection unit 380 may be an image selection button implemented in the form of a clutch button or a pedal (not shown) as described above, or may be implemented as a function menu or mode selection menu displayed through the monitor unit 6.
In addition, the image selection unit 380 may include functions such as zooming in, zooming out, rotating, refreshing, panning, 2D/3D conversion, hiding or showing a specific image, and changing the viewpoint, for the endoscope image, the modeling image, and the camera image described above. That is, the image selection unit 380 may be an interface provided so that the operator can present the output images from various angles.
FIG. 12 is a flowchart of a surgical image processing method according to an embodiment of the present invention.
In step S510, a modeling image of the surgical target and/or the surgical tools is generated and stored in advance, and in step S520, an endoscope image adjacent to the affected part of the patient is generated using the endoscope.
In step S525, any one of the screen display unit 320, the image matching unit 350, and the control unit 370 receives an image selection signal from the image selection unit 380 and selects which of the endoscope image, the modeling image, and the camera image are to be output.
In step S530, the characteristic value calculation unit 351 calculates the characteristic values of the endoscope image, and in step S545, the image matching unit 350 processes the overlapping region corresponding to the selected images and registers and outputs the two images.
Detailed descriptions of other aspects of the surgical image processing apparatus according to embodiments of the present invention, such as common platform technologies including the specific embedded system and operating system, interface standardization technologies including communication protocols and I/O interfaces, and component standardization technologies for actuators, batteries, cameras, sensors, and the like, are omitted, since they are obvious to those of ordinary skill in the art to which the present invention pertains.
The surgical image processing method according to the present invention may be implemented in the form of program instructions executable by various computer means and recorded on a computer-readable medium. That is, the recording medium may be a computer-readable recording medium on which a program for causing a computer to execute the above-described steps is recorded.
The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention, or they may be of the kind well known and available to those skilled in computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
In the foregoing, the surgical image processing apparatus according to embodiments of the present invention has been described with the surgical tools and the robotic surgery system configured according to one embodiment, but the invention need not be limited thereto; even where the present invention is applied to a surgical image processing system other than robotic surgery, or models and outputs various other surgical tools, such other configurations may fall within the scope of the present invention as long as there is no difference in the overall operation and effects.
FIG. 13 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention, and FIG. 14 is a conceptual diagram showing the master interface of the surgical robot according to an embodiment of the present invention.
Referring to FIGS. 13 and 14, the robotic laparoscopic surgery system includes a slave robot 2 that actually performs surgery on a patient lying on the operating table, and a master robot 1 through which the operator remotely controls the slave robot 2. Although FIG. 13 illustrates a case in which the master robot 1 and the slave robot 2 are implemented separately, the master robot 1 and the slave robot 2 need not be separated into physically independent devices; they may be integrated into a single unit, in which case the master interface 4 may correspond, for example, to the interface portion of the integrated robot.
The master interface 4 of the master robot 1 includes a monitor unit 6 and a master manipulator, and the slave robot 2 includes a robot arm 3 and a laparoscope 5. The illustrated laparoscope 5 is one embodiment of a surgical endoscope; depending on the type of surgery, it may be replaced with one or more of a thoracoscope, arthroscope, rhinoscope, cystoscope, proctoscope, duodenoscope, mediastinoscope, cardioscope, and the like.
The master manipulator of the master interface 4 is implemented so that the operator can grip and manipulate it with both hands. As illustrated in FIGS. 13 and 14, the master manipulator may be implemented as two handles 10 or a greater number of handles 10; manipulation signals generated by the operator's handling of the handles 10 are transmitted to the slave robot 2 to control the robot arm 3 and/or an instrument (not shown). By manipulating the handles 10, the operator can perform surgical operations such as, for example, positional movement, rotation, and cutting with the robot arm 3. The master manipulator is not limited to the shape of the handles 10, and any form capable of controlling the operation of the robot arm 3 through a network may be applied without limitation.
For example, the handle 10 may be configured to include a main handle and a sub handle. The operator may operate the slave robot arm 3 or the laparoscope 5 with the main handle alone, or may operate the sub handle so that a plurality of surgical instruments are manipulated simultaneously in real time. The main handle and the sub handle may have various mechanical configurations depending on their manipulation scheme; for example, various input means for operating the robot arm 3 of the slave robot 2 and/or other surgical equipment, such as a joystick, keypad, trackball, or touchscreen, may be used.
The image input through the laparoscope 5 may be displayed as a picture image on the monitor unit 6 of the master interface 4. Of course, the manner in which the image input through the laparoscope 5 is displayed may vary beyond display through the monitor unit 6, but since this is somewhat removed from the gist of the present invention, a description thereof is omitted.
The monitor unit 6 may be composed of one or more monitors, and the information required during surgery may be displayed individually on each monitor. Although FIGS. 13 and 14 illustrate the case where the monitor unit 6 includes three monitors, the number of monitors may be determined in various ways according to the type or kind of information requiring display.
The monitor unit 6 may further output one or more items of biometric information about the patient. In this case, one or more indicators of the patient's condition, for example biometric information such as body temperature, pulse, respiration, and blood pressure, may be output through one or more monitors of the monitor unit 6, and each item of information may be output in its own display region. To provide such biometric information to the master robot 1, the slave robot 2 may include a biometric information measurement unit comprising one or more of a body temperature measurement module, a pulse measurement module, a respiration measurement module, a blood pressure measurement module, an electrocardiogram measurement module, and the like. The biometric information measured by each module may be transmitted from the slave robot 2 to the master robot 1 in the form of analog or digital signals, and the master robot 1 may display the received biometric information through the monitor unit 6.
The slave robot 2 and the master robot 1 are coupled to each other through a wired or wireless communication network, so that manipulation signals and the laparoscopic image input through the laparoscope 5 can be transmitted to the other side. If the two manipulation signals produced by the two handles 10 provided on the master interface 4 and/or the manipulation signal for adjusting the position of the laparoscope 5 need to be transmitted at the same time and/or at similar points in time, each manipulation signal can be transmitted to the slave robot 2 independently of the others. Here, saying that each manipulation signal is transmitted 'independently' means that the manipulation signals do not interfere with one another and that no one signal affects another. To transmit the plurality of manipulation signals independently of one another in this way, various schemes may be used, such as adding header information to each manipulation signal at the generation stage before transmission, transmitting each manipulation signal in the order in which it was generated, or predefining priorities for the transmission order of the manipulation signals and transmitting accordingly. In this case, the transmission path along which each manipulation signal is carried may also be provided independently, so that interference between the manipulation signals is fundamentally prevented.
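As a sketch of the header-based scheme above, each manipulation signal below carries its own source identifier, sequence number, and priority, so signals from the two handles and the laparoscope control can be ordered without affecting one another. The field names and queue structure are illustrative assumptions.

```python
from dataclasses import dataclass
import heapq
import itertools

@dataclass
class ManipulationSignal:
    source: str        # e.g. "handle_left", "handle_right", "laparoscope"
    sequence: int      # generation order within the source
    priority: int      # lower value = transmitted earlier when simultaneous
    payload: bytes

class SignalQueue:
    """Orders simultaneous signals by (priority, arrival) so that no
    source's signal overwrites or blocks another's."""
    def __init__(self):
        self._heap = []
        self._count = itertools.count()  # tie-breaker for equal priorities
    def push(self, sig: ManipulationSignal):
        heapq.heappush(self._heap, (sig.priority, next(self._count), sig))
    def pop(self) -> ManipulationSignal:
        return heapq.heappop(self._heap)[2]
```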
The robot arm 3 of the slave robot 2 may be implemented to be driven with multiple degrees of freedom. The robot arm 3 may be configured to include, for example, a surgical instrument inserted into the patient's surgical site; a yaw drive unit that rotates the surgical instrument in the yaw direction according to the surgical position; a pitch drive unit that rotates the surgical instrument in the pitch direction, orthogonal to the rotational drive of the yaw drive unit; a translation drive unit that moves the surgical instrument in the longitudinal direction; a rotation drive unit that rotates the surgical instrument; and a surgical instrument drive unit installed at the end of the surgical instrument to incise or cut the surgical lesion. However, the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention.
One or more slave robots 2 may be used to operate on the patient, and the laparoscope 5 for displaying the surgical site as a picture image through the monitor unit 6 or the like may be implemented as an independent slave robot 2. Also, as described above, embodiments of the present invention may be used universally in surgeries employing various surgical endoscopes other than the laparoscope (for example, a thoracoscope, arthroscope, rhinoscope, and the like).
FIG. 15 is a block diagram schematically showing the configuration of a master robot and a slave robot according to an embodiment of the present invention, and FIGS. 16 to 27 are diagrams illustrating region designation methods according to embodiments of the present invention. As described above, the master robot 1 and the slave robot 2 may also be implemented as a single unit.
Referring to FIG. 15, the master robot 1 may include an image input unit 310, a screen display unit 320, an arm manipulation setting unit 331, a storage unit 341, a restricted area setting unit 1350, an arm manipulation unit 330, a manipulation determination unit 371, a response information processing unit 1380, and a control unit 370. Although not shown, the master robot 1 may further include one or more of an input unit through which the operator enters control commands, a speaker unit for outputting response information as auditory information (for example, a warning sound or warning voice message), an LED unit for outputting response information as visual information, and the like.
The slave robot 2 may include a robot arm 3 and a laparoscope 5. Also, unless explicitly limited in this specification, the robot arm 3 may be interpreted as a concept that includes the instrument. Although not shown, the slave robot 2 may further include a position information providing unit that provides position information of the robot arm 3 and the laparoscope 5, a biometric information measurement unit for measuring and providing biometric information about the patient, and the like. The position information providing unit may be configured to provide the master robot 1 with information on, for example, the angle and extent to which the robot arm 3 or the laparoscope 5 has moved from its home position (for example, the rotation angles of the drive motors of the robot arm and the like).
The image input unit 310 receives, through a wired or wireless communication network, the image input through the camera provided on the laparoscope 5 of the slave robot 2.
The screen display unit 320 outputs a picture image corresponding to the image received through the image input unit 310 as visual information. The screen display unit 320 may also output response information as visual information (for example, screen flickering), or further output corresponding information when biometric information is input from the slave robot 2. The screen display unit 320 may be implemented, for example, in the form of the monitor unit 6.
The arm manipulation setting unit 331 receives an arm manipulation setting from the operator or a setter and generates manipulation setting information on whether the robot arm 3 and/or the instrument is to be manipulated in accordance with the operator's manipulation commands. The arm manipulation setting may be, for example, a setting as to whether a robot arm and/or instrument located in an out-of-view region, that is, a region not visible in the picture image acquired by the laparoscope 5, is to be allowed to be manipulated.
The operator may perform the arm manipulation setting in various ways; only a few such embodiments are presented below.
As a first embodiment, the operator may, with reference to the picture image input through the laparoscope 5, set only the robot arms and/or instruments visible on the screen as operable by means of their identification information, and set all other robot arms and instruments as inoperable.
As a second embodiment, after the master robot 1 receives from the slave robot 2 the position information of each robot arm and/or instrument together with the position and image input angle of the laparoscope 5, it may calculate the coordinate range of the picture image the laparoscope 5 is receiving, and then set as operable only those robot arms and instruments positioned so that part or all of them fall within that coordinate range. To calculate the coordinate range of the surgical site represented by the picture image acquired by the laparoscope 5, the laparoscope 5 may further be provided with a distance sensor for sensing the distance to the surface of the surgical site. Of course, robot arms and the like outside that coordinate range may also be set as operable, so that a robot arm or instrument to be used during surgery can be exchanged.
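A minimal sketch of this visibility test follows, assuming the laparoscope pose is reduced to a position and unit viewing direction and the imaged range is approximated as a cone; the field-of-view and depth values are illustrative assumptions.

```python
import numpy as np

def tool_is_visible(tool_tip, scope_pos, scope_dir,
                    half_fov_deg=35.0, max_depth_mm=200.0):
    """tool_tip, scope_pos: 3-D points; scope_dir: unit viewing direction."""
    v = tool_tip - scope_pos
    depth = np.dot(v, scope_dir)
    if depth <= 0.0 or depth > max_depth_mm:
        return False                       # behind the scope or out of range
    off_axis = np.degrees(np.arccos(np.clip(depth / np.linalg.norm(v), -1, 1)))
    return off_axis <= half_fov_deg

def operable_tools(tools, scope_pos, scope_dir):
    """tools: dict of name -> tip position. Returns the names set operable."""
    return {name for name, tip in tools.items()
            if tool_is_visible(tip, scope_pos, scope_dir)}
```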
The storage unit 341 stores the manipulation setting information for the robot arm 3 and/or the instrument generated by the arm manipulation setting unit 331, and the restricted area coordinate information set by the restricted area setting unit 1350. In addition, if a response information processing scheme is designated by the operator, information on the response information processing type (for example, one or more of a tactile information output scheme, an auditory information output scheme, a visual information output scheme, and the like) may be further stored.
When manipulation information for any robot arm 3 or instrument is input by the operator, if the manipulation performed via the operator's arm manipulation unit 330 falls within a designated restricted area, the control unit 370 may, with reference to the information on the response information processing type, control the response information processing unit 1380 to perform the corresponding processing. The control unit 370 may include the manipulation determination unit 371 described below.
To prevent body parts of the patient under surgery (for example, blood vessels, organs, and the like) from being damaged by causes such as erroneous manipulation during surgery, the restricted area setting unit 1350 generates and stores coordinate information for restricted areas that the robot arm 3 or the instrument may not approach or manipulate. Here, a restricted area may be designated, or its designation changed, by the operator before or during surgery.
There may be various ways for the operator to designate a restricted area and have the corresponding restricted area coordinate information generated. Although this specification focuses on methods in which the operator or another person sets a restricted area by, for example, designating an arbitrary region, a method in which only the region designated by the operator or another person is set as a permitted area and all other regions are set as restricted areas may naturally also be applied.
Only a few of the various embodiments in which the operator designates a restricted area are presented below. A region set as described below may be displayed overlaid on the picture image acquired by the laparoscope 5.
As a first embodiment, when the operator intends to perform surgery involving a restricted organ (for example, the liver), if the operator selects that organ using a menu item displayed on the monitor unit 6, the master robot 1 may generate region coordinate information (for example, restricted area coordinate information or permitted area coordinate information) by interpreting the video image acquired by the laparoscope 5 (for example, determining whether the selected organ is present through color analysis of the surgical site contained in the video image and, if present, interpreting its coordinate region).
As a second embodiment, only the coordinate range corresponding to the picture image acquired and displayed by the laparoscope 5 may be set as the permitted area, and all other regions may be set as restricted areas. In this case, only a robot arm 3 or instrument located partly or entirely within the permitted area will be operated by the operator's arm manipulation or the like.
As a third embodiment, the three-dimensional coordinate values corresponding to the region to be set (that is, a restricted area or a permitted area) may be established using the master manipulator and a controlled object that moves according to its manipulation (for example, one or more of a robot arm, an instrument, and an endoscope). The case in which the region is set as a restricted area is described below in detail with reference to the relevant drawings.
For example, as shown in FIG. 16, the instrument tip may be recognized as a three-dimensional mouse cursor, and when the operator performs operations of marking points in space (that is, designating points) while viewing the picture image acquired by the laparoscope 5, the marked points are connected and set as the restricted area.
That is, to set a region that includes S1 on the picture image acquired and displayed by the laparoscope 5 (the picture image contains organs, blood vessels, and the like, such as S1, S2, and S3), the operator positions the instrument tip at a first vertex and inputs a point designation command, then moves to a second vertex and inputs a further point designation command. After repeating this vertex input operation until the region to be set is delineated, the operator inputs a region delineation command, whereupon the region connecting the vertices set by the preceding point designation commands is established. The vertices may be connected by straight lines or by curves, and the region may be set as a planar region (for example, a triangle or circle) or as a volumetric region (for example, a polyhedral shape). The point designation command, region delineation command, and the like described above may be input using the master manipulator or the like provided on the master interface 4.
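The sketch below illustrates this vertex-based definition for the planar case: point designation commands append vertices, a region delineation command closes the polygon, and an even-odd ray-casting test decides whether a position lies inside the restricted area. Straight-line edges are assumed; the class and method names are illustrative.

```python
class PlanarRegion:
    def __init__(self):
        self.vertices = []
        self.closed = False
    def designate_point(self, x, y):        # 'point designation command'
        self.vertices.append((x, y))
    def delineate(self):                    # 'region delineation command'
        if len(self.vertices) >= 3:
            self.closed = True
    def contains(self, x, y):
        """Standard even-odd ray-casting point-in-polygon test."""
        if not self.closed:
            return False
        inside, n = False, len(self.vertices)
        for i in range(n):
            (x1, y1), (x2, y2) = self.vertices[i], self.vertices[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    inside = not inside
        return inside
```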
As shown in FIG. 17, this restricted area setting scheme may be extended to set the restricted area volumetrically by designating vertices in three-dimensional space.
By the method described above, not only can a volumetric restricted area be set by designating vertices and the like in three-dimensional space, but a planar restricted area, akin to erecting a wall, can also be set.
Furthermore, a region set by the operator may be modified by, for example, positioning the instrument tip at a point on the outline of the region and then moving that point outward from or inward into the region. Here, the outlines and outer surfaces of the region formed by the designated vertices may be interpreted in advance as connections of points and stored.
That is, to make the region include S2 in addition to S1, the operator positions the instrument tip at vertex P4 and performs a point move, as shown in FIG. 18. The operator likewise positions the instrument tip on the line segment connecting vertices P3 and P4 and performs a point move. By such point moves, the set region can be expanded so that S2 is included in it.
The modification of a set region described above is not limited to being performed only in a plane; as shown in FIG. 19, it may equally be performed when a volumetric region has been set. Also, although FIG. 16 illustrates the case where the region is modified by point moves using vertices, the region may naturally also be modified by dragging faces, edges, and the like in three-dimensional space.
A region set by the operator may also be partitioned into a plurality of regions and separated into individual regions. That is, as shown in FIGS. 20 and 21, the operator may separate a region set to include S1 and S2 into a first individual region including only S1 and a second individual region including only S2. To this end, the operator may designate two or more points (for example, Q1 and Q2) at the locations where separation is desired and then input a division command, causing the region to be separated into two or more individual regions.
Also, after a region set by the operator has been partitioned into a plurality of regions, an unnecessary region may be removed. That is, as shown in FIGS. 22 and 23, the operator may cause the region set to include S1 and S2 to be reduced to a region including only S2. To this end, the operator designates two or more points (for example, Q1 and Q2) at the boundary where removal is desired, positions the instrument tip in whichever of the separated regions is to be removed, and then inputs a delete command, causing that separated region to be deleted.
A region set by the operator may also be manipulated so as to be folded about the boundary that partitions it into a plurality of regions, or restored to its original position. That is, as shown in FIGS. 24 and 25, the operator may separate a region set to include S1 and S2 into a first individual region including only S1 and a second individual region including only S2, and then perform a folding operation so that each individual region functions as a restricted or permitted area. For reference, FIG. 24 assumes the case where a planar region is folded or restored within the plane, thereby modifying the restricted area planarly, and FIG. 25 assumes the case where a planar region is folded into a volumetric restricted area.
Referring to FIG. 24, when the operator designates two or more points (for example, Q1 and Q2) to function as the boundary and inputs a fold command, the individual region containing the point at which the instrument tip is located is rotated about the boundary, performing the fold. Thus, if the region had been set as a restricted area, S2 is released from the restricted area and instrument manipulation becomes possible there. Thereafter, when the folded regions are restored to their original state by a restore command, S2 is likewise reset as a restricted area.
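As a geometric sketch of the fold, the vertices of the individual region on one side of the boundary Q1-Q2 can be rotated about that axis using Rodrigues' rotation formula: 180 degrees reproduces the in-plane fold of FIG. 24, while other angles yield the volumetric fold of FIG. 25. The angle parameter is an assumption.

```python
import numpy as np

def fold(points, q1, q2, angle_deg=180.0):
    """points: N x 3 array of region vertices to fold about the axis q1->q2;
    q1, q2: 3-D points defining the boundary. Returns the folded vertices."""
    k = (q2 - q1) / np.linalg.norm(q2 - q1)  # unit rotation axis
    theta = np.radians(angle_deg)
    def rotate(v):
        # Rodrigues' rotation formula about unit axis k
        return (v * np.cos(theta)
                + np.cross(k, v) * np.sin(theta)
                + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return np.array([q1 + rotate(p - q1) for p in points])
```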
Similarly, as shown in FIG. 25, when the operator designates a plurality of points functioning as the boundary (for example, Q1-Q2 and Q3-Q4) and inputs a fold command, rotation and folding are executed in one direction (for example, the direction corresponding to the position of the instrument tip), so that the restricted area can be set in the shape of a solid figure. Of course, which individual region at which position is to be rotated by what angle about the boundary may be preset. Beyond this, the schemes for turning a planar region image into a solid figure by a fold command may be diverse.
In the region setting and modification methods described above, besides the method in which the operator first sets the region using an instrument, an arbitrary figure predefined as a template (for example, one or more of a circle, triangle, rectangle, sphere, hexahedron, tetrahedron, and the like) or the shape of an organ may be drawn in space, after which it may naturally be modified, divided, and so on using the three-dimensional mouse (or the instrument tip). For example, as shown in FIG. 26, when the heart region is to be set as a restricted area, the template corresponding to the heart may be selected and displayed, and the template may then be adapted to the characteristics of the surgical patient by specifying a scale or by dragging its outline or outer surface. In this case, control may also be performed so that the set restricted area is modified correspondingly by tracking the changes in the actual organ (for example, positional movement, shape change, and the like) deformed by the operator's manipulation of the controlled object. For this purpose, an image analysis technique such as edge detection may, for example, be applied to the video image of the actual organ.
As a fourth embodiment, on a reference image of the surgical patient displayed through a monitor implemented as a touch-sensitive input device (for example, the picture image acquired by the laparoscope 5, an MRI image of the patient, or the like) or on a generalized human body modeling image, if the operator traces a certain range with a finger in the form of a closed-curve figure, the corresponding region may be set as a restricted area. In this case, the coordinates of the range traced by the operator are mapped to coordinate information corresponding to the positions of organs and blood vessels inside the surgical patient's body about an arbitrary reference point (for example, the position of a characteristic organ, the position of the laparoscope 5, or the like); since such coordinate information mapping can be handled by conventional position recognition methods based on image analysis, a description thereof is omitted.
As a fifth embodiment, the region may be set automatically using modeling data on each organ, blood vessel, and the like from a human body modeling image generated using reference images of the surgical patient (for example, CT images, MRI images, and the like), or from a human body modeling image generated for a generic human body.
Referring to FIG. 27, in step P450 a human body modeling image is generated using reference images of the surgical patient (for example, CT images, MRI images, and the like), and in step P455 modeling data corresponding to the human body modeling image (for example, the color, shape, size, and the like of each organ and blood vessel) is generated and stored. When a human body modeling image and modeling data generated for a generic human body are used, steps P450 and P455 may be omitted.
In step P460, organ selection information is obtained. The organ selection information may, for example, be recognized automatically by interpreting which organs or blood vessels appear in the picture image acquired and displayed by the laparoscope 5 using image analysis techniques (for example, per-pixel color analysis, organ outline analysis, and the like) and comparing the result against the previously stored modeling data. Of course, methods such as the operator selecting an arbitrary organ from an organ list configured as a drop-down menu, or from the human body modeling image displayed through the monitor unit 6, may also be used.
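The sketch below illustrates the automatic recognition path only: a per-pixel color test against stored modeling data yields a mask, and an organ is declared present when the mask is large enough. The per-organ color ranges and the area threshold are hypothetical stand-ins for the modeling data, not values from the embodiment.

```python
import numpy as np

MODELING_DATA = {  # hypothetical per-organ RGB ranges from the model store
    "liver": ((90, 20, 20), (180, 80, 80)),
    "heart": ((120, 30, 40), (220, 90, 110)),
}

def recognize_organs(image, min_area_px=2000):
    """image: H x W x 3 uint8 endoscope frame. Returns detected organ names."""
    found = []
    for organ, (lo, hi) in MODELING_DATA.items():
        lo, hi = np.array(lo), np.array(hi)
        mask = np.all((image >= lo) & (image <= hi), axis=2)
        if mask.sum() >= min_area_px:   # enough matching pixels -> present
            found.append(organ)
    return found
```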
In step P465, the corresponding region is delineated with reference to the modeling data and displayed as a region image (see FIGS. 16 to 26) so that the operator can recognize it. The master robot 1 recognizes the position of the organ corresponding to the organ selection information, taking into account the surgical patient's height, lying posture, the typical position of the organ within the human body, and the like, then sets a region of the shape defined by the modeling data and processes it so that it is displayed overlaid on the monitor unit 6 and/or within the picture image acquired by the laparoscope 5. For example, when the heart is selected as the organ selection information, modeling data is extracted from the storage unit 341 or the like in order to display a region image corresponding to the shape of the heart, and the region image corresponding to the extracted modeling data may be processed so as to be displayed overlaid on the video image showing the actual organ.
In step P470, the operator checks whether the region image displayed overlaid on the video image matches the region of the actual organ and, if it does not, modifies the overlaid region image (see FIGS. 19 to 26). The region image modification of step P470 may be handled by the operator as described above; however, if the organ is present in the video image acquired and displayed by the laparoscope 5, the shape of the organ (including its size and so on) may of course be interpreted using image analysis techniques and the region image automatically modified to conform to that shape.
Beyond these, a method of setting a restricted area in advance and then using it may also be applied. For example, as a method using a three-dimensional image obtained by reconstructing CT or MRI images and the like, the restricted area may be set beforehand in the three-dimensional image and then registered with the picture image acquired by the laparoscope 5 (for example, the actual endoscope image) for use. Of course, the registration between the three-dimensional image and the real-time picture image may be omitted, and a method of matching only the few required points may be applied instead.
The arm manipulation unit 330 is a means by which the operator can manipulate the position, functions, and so on of the robot arm 3 of the slave robot 2. The arm manipulation unit 330 may be formed in the shape of the handle 10 as illustrated in FIG. 14, but the shape is not limited thereto and may be implemented in various shapes that achieve the same purpose. Also, for example, part of it may be formed in the shape of a handle and another part in a different shape such as a clutch button, and a finger insertion tube or loop into which the operator's fingers can be inserted and fixed may additionally be formed to facilitate manipulation of the surgical tool. Furthermore, if the laparoscope 5 for receiving images is not fixed at a specific position and its position and/or image input angle can be moved or changed under the operator's control, the clutch button 14 or the like may be set to function for adjusting the position and/or image input angle of the laparoscope 5.
The manipulation determination unit 371 determines whether the manipulation information produced by the operator's operation of the arm manipulation unit 330 is valid, with reference to one or more of the manipulation setting information and the restricted area coordinate information stored in the storage unit 341.
For example, the manipulation determination unit 371 may, with reference to the manipulation setting information, determine that the manipulation information is invalid when the manipulation of the arm manipulation unit 330 targets a robot arm set as inoperable.
Also, the manipulation determination unit 371 may, with reference to the restricted area coordinate information, determine that the manipulation information is invalid when it would cause the robot arm 3 to contact the restricted area. To this end, the manipulation determination unit 371 may refer to displacement information received from the position information providing unit regarding, for example, the angle and distance by which the robot arm 3 or the instrument has moved from its home position (for example, the rotation angles of the drive motors of the robot arm and the like).
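A minimal sketch of this validity test follows, assuming the commanded tip position has already been resolved from the drive-motor displacement information and that the stored restricted area coordinate information is simplified to axis-aligned boxes; the function and parameter names are illustrative.

```python
def manipulation_is_valid(arm_id, target_xyz, operable_arms, restricted_boxes):
    """arm_id: identifier of the commanded arm; target_xyz: commanded tip
    position; restricted_boxes: list of ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    if arm_id not in operable_arms:          # manipulation setting information
        return False
    for lo, hi in restricted_boxes:          # restricted area coordinate info
        if all(l <= c <= h for c, l, h in zip(target_xyz, lo, hi)):
            return False                     # would contact a restricted area
    return True
```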
When the manipulation determination unit 371 determines that the operator's manipulation information is invalid, the response information processing unit 1380 performs the processing of the response information designated by the operator and/or the setter. The response information may be processed in one or more of the following ways: outputting tactile information by applying force feedback to the handle 10 so that the operator perceives it; outputting visual information such as LED blinking or a warning message corresponding to the video image acquired by the laparoscope 5; outputting auditory information such as a warning sound; and the like.
The control unit 370 controls the operation of each component so that the functions described above can be performed. The control unit 370 may also perform the function of converting the image input through the image input unit 310 into a picture image to be displayed through the screen display unit 320.
FIG. 28 is a flowchart showing a restricted area setting method according to an embodiment of the present invention.
In the embodiment described below, the generation and storage of the arm manipulation setting information and the generation and storage of the restricted area setting information are illustrated as sequential steps, but the order of the steps may of course be changed, or the steps performed simultaneously.
Referring to FIG. 28, in step P510 the master robot 1 receives from the operator or the setter an arm manipulation setting as to whether robot arms 3 and/or instruments located in the out-of-view region are to be allowed to be manipulated.
단계 P520에서 마스터 로봇(1)은 입력된 암 조작 설정에 상응하는 암 조작 설정정보를 생성하여 저장한다.In step P520, the master robot 1 generates and stores arm operation setting information corresponding to the input arm operation setting.
단계 P530에서 마스터 로봇(1)은 수술자 또는 설정자로부터 가시 영역 및 가시외 영역에 위치하는 로봇 암(3) 및/또는 인스트루먼트가 동작되는 과정에서 진입하지 않도록 제한되는 구역 지정을 위한 제한구역 설정을 입력받는다.In step P530, the master robot 1 inputs a restricted area setting for designating a zone that is restricted from entering the process of operating the robot arm 3 and / or the instrument located in the visible and non-visible areas from the operator or setter. Receive.
단계 P540에서 마스터 로봇(1)은 입력된 제한구역 설정에 상응하는 제한구역 설정정보를 생성하여 저장한다.In step P540, the master robot 1 generates and stores restricted zone setting information corresponding to the entered restricted zone setting.
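A minimal sketch of the four steps P510 to P540, assuming a hypothetical master object with prompt_operator and storage helpers (neither is defined in this document):

    def restricted_zone_setup(master):
        # P510: receive the arm manipulation setting (allow out-of-view manipulation or not)
        allow = master.prompt_operator("Allow manipulating arms in the out-of-view region?")
        # P520: generate and store the corresponding arm manipulation setting information
        master.storage.save("arm_manipulation_setting", {"out_of_view_allowed": allow})
        # P530: receive the restricted-zone designation (e.g. a region marked on the image)
        zone = master.prompt_operator("Designate the restricted zone on the displayed image")
        # P540: generate and store the restricted-zone setting information
        master.storage.save("restricted_zone", zone)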
In this way, the operator and/or the setter may set a restricted zone before or during surgery using previously acquired image information (for example, one or more of patient images such as CT or MRI scans and virtual images modeled for generality) and/or organ items displayed by clicking a drop-down menu. Within a region set as a restricted zone, the robot arm 3 and/or the instrument is prevented from operating regardless of the user's manipulation, and any approach to or contact with the restricted zone is fed back to the operator as response information. Medical accidents caused by erroneous operation near critical sites that must not be damaged during surgery can thereby be prevented. The response information may be output to the operator in one or more of tactile, visual, and auditory forms.

For example, during surgery the operator may designate a specific region in the video image acquired by the laparoscope 5, using an external input device such as a touch screen or a mouse, and the designated region is then set as a restricted zone.

Alternatively, previously captured patient image information such as CT or MRI scans, or an image reconstructed from such patient image information, may be displayed on the operator's screen; when the operator designates a specific region in the displayed image, that region is set as a restricted zone.

In this case, as described above, a process of recognizing the coordinate range of the designated region may be performed by image analysis or a similar technique.

When designating the restricted zone in this manner, the operator may designate it with reference to the screen on which the corresponding image is displayed, or it may be designated in advance in the captured image; beyond these, the restricted zone may be designated in various other ways.
FIGS. 29 and 30 are flowcharts illustrating an operation limiting method of a surgical robot system according to an embodiment of the present invention, and FIG. 31 is an exemplary screen display for explaining the operation limiting method according to an embodiment of the present invention.
Referring to FIG. 29, in step P610 the master robot 1 receives from the operator arm manipulation information generated by handling the arm manipulation unit 330.

In step P620, the master robot 1 determines whether the arm manipulation information input in step P610 is directed at a robot arm 3 and/or an instrument located in the out-of-view region. Manipulation of a robot arm 3 located in the out-of-view region may be commanded, for example, through an operator's erroneous input, or when the operator intends to substitute a robot arm 3 located in the out-of-view region for one located in the visible region. As described above, unless explicitly limited otherwise in this specification, the term robot arm 3 may be interpreted as a concept that includes the instrument.

If the determination in step P620 shows that the command is not directed at a robot arm 3 or the like in the out-of-view region (that is, it concerns the visible region), the process proceeds to step P630, where the master robot 1 generates and outputs an arm manipulation command. Step P630 is followed by step P710 shown in FIG. 30. Referring to FIG. 31, the robot arms or instruments located in the visible region may be denoted 830a and 830b, and a robot arm or instrument located in the out-of-view region may be denoted 850.

If, however, the determination in step P620 shows that the command is directed at a robot arm 3 or the like in the out-of-view region, the process proceeds to step P640, where the master robot 1 determines whether manipulation setting information permitting the manipulation of robot arms 3 in the out-of-view region has been stored.

If the determination in step P640 shows that such permitting manipulation setting information has been stored, the process proceeds to step P630, and the master robot 1 generates and outputs an arm manipulation command. In this case, the command may move or manipulate robot arms 3 in the visible and out-of-view regions in the same manner; alternatively, the command may be generated so that a robot arm 3 in the out-of-view region moves or operates more slowly than one in the visible region.

If, however, the determination in step P640 shows that the stored manipulation setting information does not permit the manipulation of robot arms 3 in the out-of-view region, the process proceeds to step P650, where the master robot 1 outputs the response information so that the operator recognizes the situation.
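The branching of steps P610 to P650 might be sketched as follows; is_out_of_view, send_arm_command, the speed_scale parameter, and emit_response are illustrative assumptions rather than interfaces defined in this specification:

    def handle_arm_manipulation(master, manipulation):
        # P620: is the command directed at an arm in the out-of-view region?
        if not master.is_out_of_view(manipulation.arm_id):
            master.send_arm_command(manipulation)                  # P630
            return
        # P640: consult the stored arm manipulation setting information
        setting = master.storage.load("arm_manipulation_setting")
        if setting["out_of_view_allowed"]:
            # P630 variant: optionally drive out-of-view arms more slowly
            master.send_arm_command(manipulation, speed_scale=0.5)
        else:
            master.emit_response()                                 # P650: warn the operator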
Referring to FIG. 30, in step P710 the master robot 1 receives arm operation state information from the slave robot 2. The arm operation state information may include, for example, information on the angle and distance by which the robot arm 3 or the instrument has moved from its home position (for example, the rotation angles of the drive motors of the robot arm), and this information may be provided by the position information providing unit.

In step P720, the master robot 1 refers to the arm operation state information and determines whether the robot arm 3 or the instrument has come into contact with a previously set restricted zone.

As described above, the restricted zone may be designated by the operator with reference to the screen on which the corresponding image is displayed, may be designated in advance in the captured image, or may be set by various other methods.

The restricted zone set through the process described above may be rendered over the actual image acquired by the laparoscope 5 using an augmented reality technique, in one or more of various styles such as hatching (see 840 in FIG. 31), outlining in a specific color, or opaque shading. The operator is thereby continuously reminded that the corresponding region is a restricted zone. Naturally, this restricted-zone display may be updated in step with movement or magnification of the display screen.

If the determination in step P720 shows that the robot arm 3 or the instrument has not contacted the restricted zone, the process returns to step P610.

If, however, the determination in step P720 shows that the robot arm 3 or the instrument has contacted the restricted zone, the process proceeds to step P730, where the master robot 1 performs operation-limiting control of the robot arm and also outputs the response information so that the operator recognizes the situation.
The operation-limiting control of the robot arm 3 at the restricted zone may, for example, cause the set restricted zone to be perceived as if it were a virtual wall. That is, when the tip of the instrument or part of its intermediate shaft is about to enter the restricted region during robotic surgery, the instrument's motion is limited as if the instrument were blocked by a virtual wall, and the operator is made aware of this through the output of the response information.
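A minimal sketch of how such a virtual wall might be enforced for a box-shaped restricted zone, assuming the instrument tip is tracked as a 3-D point; the zone representation and the simple back-off search are simplifications, not the claimed method. On a hit, the returned flag would drive the locking and force-feedback handling described below.

    import numpy as np

    def clamp_to_virtual_wall(current, commanded, zone):
        """Stop the instrument tip at the boundary of a box-shaped restricted
        zone, as if blocked by a wall; returns (clamped target, hit flag)."""
        lo, hi = np.asarray(zone[0], float), np.asarray(zone[1], float)
        cur = np.asarray(current, float)
        target = np.asarray(commanded, float)
        if not np.all((lo <= target) & (target <= hi)):
            return target, False                 # commanded point lies outside the zone: allow it
        # Back the target off toward the current position until it exits the zone
        for t in np.linspace(1.0, 0.0, 101):
            probe = cur + t * (target - cur)
            if not np.all((lo <= probe) & (probe <= hi)):
                return probe, True               # True -> lock the arm / output response information
        return cur, True                         # current position already inside: do not move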
Of course, the instrument need not always be prevented from operating upon contact with the restricted zone; whether it operates may also be left to the operator's choice. For example, if bleeding occurs in a region set as a restricted zone (for example, a blood vessel) regardless of the operator's intention, enabling the instrument to operate there would allow prompt hemostasis. In such a case, a restricted-zone override command, which allows the instrument to be manipulated within the restricted zone after contact has occurred, may be input, for example, by the operator pressing a preset button or pedal.

Moreover, the restricted zone need not be set only within the region currently displayed on the screen (that is, the visible region 810); as illustrated in FIG. 31, a restricted zone 840 may naturally also be designated and set within a region not displayed on the screen (that is, the out-of-view region 820). Even when the restricted zone 840 is set in the out-of-view region 820, contact between the instrument and that zone could pose a serious risk to the patient's safety, so the instrument's movement may be limited to prevent this.

As a control method for limiting the operation of the instrument, for example, the instrument may be locked so that it cannot operate; furthermore, a reaction force may be fed back to the manipulation handle of the user operating the locked instrument, informing the user that the instrument's operation is limited. As described above, the response information may also be output in various other ways.

The operation limiting method of the surgical robot system described above may be implemented as a software program or the like. The codes and code segments constituting the program can be easily inferred by a computer programmer skilled in the art. The program is stored on computer-readable media and is read and executed by a computer to implement the method. The information storage media include magnetic recording media, optical recording media, and carrier wave media.
FIG. 32 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention, FIG. 33 is a conceptual diagram showing the master interface of the surgical robot according to an embodiment of the present invention, and FIGS. 34 to 37 are views illustrating movement forms of the contact portion according to an embodiment of the present invention.
Referring to FIGS. 32 and 33, the surgical robot system includes a slave robot 2 that performs surgery on a patient lying on an operating table and a master robot 1 through which the operator remotely controls the slave robot 2. The master robot 1 and the slave robot 2 need not be separated into physically independent devices; they may be integrated into a single body, in which case the master interface 4 may correspond, for example, to the interface portion of the integrated robot.

The master interface 4 of the master robot 1 includes a monitor unit 6, a telescopic display unit 20, and a master manipulator, and the slave robot 2 includes a robot arm 3 and a laparoscope 5.

The monitor unit 6 of the master interface 4 may consist of one or more monitors, and the information needed during surgery may be displayed individually on each monitor. FIGS. 32 and 33 illustrate a case in which the monitor unit 6 includes one monitor on each side of the telescopic display unit 20, but the number of monitors may be determined in various ways depending on the type or kind of information to be displayed.

The monitor unit 6 may output, for example, one or more items of biometric information about the patient. In this case, one or more indicators of the patient's condition, for example biometric information such as body temperature, pulse, respiration, and blood pressure, may be output through one or more monitors of the monitor unit 6; when multiple items of information are output, each may be displayed in its own screen region. To provide such biometric information to the master robot 1, the slave robot 2 may include a biometric information measuring unit comprising one or more of a body temperature measuring module, a pulse measuring module, a respiration measuring module, a blood pressure measuring module, an electrocardiogram measuring module, and the like. The biometric information measured by each module may be transmitted from the slave robot 2 to the master robot 1 in the form of an analog or digital signal, and the master robot 1 may display the received biometric information through the monitor unit 6.

The telescopic display unit 20 of the master interface 4 provides the operator with the image of the surgical site input through the laparoscope 5. The operator views the image through the eyepiece 220 formed in the contact portion 210 of the telescopic display unit 20 and operates on the surgical site by handling the master manipulator to manipulate the robot arm 3 and the end effector. FIG. 33 illustrates the contact portion 210 implemented as a panel, but the contact portion 210 may instead be formed as a recess facing the inside of the master interface 4. FIG. 33 also illustrates a case in which the eyepiece 220, through which the operator views the image acquired by the laparoscope 5, is formed in the contact portion 210; if the contact portion 210 is formed of a material through which the image behind it can be seen, the eyepiece 220 may be omitted. So that the image behind it can pass through to the operator, the contact portion 210 may be formed, for example, of a transparent material, coated with a polarizing film, or made of a light-transmitting material such as that used in the tinted glasses worn to view 3D IMAX films.
The telescopic display unit 20 is configured to function not only as a display device through which the operator checks the image from the laparoscope 5 via the eyepiece 220, but also as a control command input unit for controlling the position and image input angle of the laparoscope 5.

A plurality of supports 230 and 240 protrude from the contact portion 210 of the telescopic display unit 20 so that the operator's face contacts or comes close to it and the operator's facial movement can be recognized. For example, the support 230 formed at the top may be used to contact the operator's forehead so that the forehead position is fixed, and the supports 240 formed at the sides may contact the regions under the operator's eyes (for example, the cheekbone regions) so that the face position is fixed. The positions and number of supports illustrated in FIG. 33 are exemplary; the positions and shapes of the supports may vary, including, for example, a chin rest or face side supports 290, and the number of supports may also vary. The face side supports may be formed, for example, as bars or walls, so that when the face moves left or right the contact portion 210 is pushed in the corresponding direction.

With the operator's face position fixed by the supports 230 and 240 formed in this way, when the operator turns the face in some direction while viewing the laparoscope 5 image through the eyepiece 220, the resulting facial movement is sensed and can be used as input information for adjusting the position and/or the image input angle of the laparoscope 5. For example, if the operator wants to view a region to the left of the surgical site currently displayed (that is, a region located to the left on the display screen), merely turning the head so that the face points relatively leftward can cause the laparoscope 5 to be manipulated correspondingly and the image of that region to be output.

That is, the contact portion 210 of the telescopic display unit 20 is coupled to the master interface 4 so that its position and/or angle changes in concert with the operator's facial movement. To this end, the master interface 4 and the contact portion 210 of the telescopic display unit 20 may be coupled to each other by a movable portion 250. The movable portion 250 may be formed, for example, of an elastic body so that the position and/or angle of the telescopic display unit 20 can change easily and can return to its original state when the external force exerted by the operator's facial movement is removed. Even when the movable portion 250 is formed of an inelastic body, the telescopic display unit 20 may be restored to its original state by controlling an original-state restoring unit (see FIG. 40).

By means of the movable portion 250, the contact portion 210 can be translated in a straight line, or rotated in an arbitrary direction (for example, one or more of clockwise and counterclockwise), with respect to a virtual center point and coordinates in the three-dimensional space defined by the X, Y, and Z axes. Here, the virtual center point may be any point or axis within the contact portion 210, for example the center point of the contact portion 210.
FIGS. 34 to 37 illustrate movement forms of the contact portion 210.

When the direction of the operator's facial movement is parallel to the X, Y, or Z axis, the contact portion 210 is translated in the direction in which the force of the facial movement is applied, as illustrated in FIG. 34.

When the operator's facial movement rotates in the X-Y plane, the contact portion 210 is rotated in the direction in which the force of the facial movement is applied, as illustrated in FIG. 35. Depending on the direction of the applied force, the contact portion 210 may be rotated clockwise or counterclockwise.

When the operator's facial movement rotates about the X, Y, or Z axis, the contact portion 210 is rotated about the reference axis in the direction in which the force of the facial movement is applied, as illustrated in FIG. 36. Depending on the direction of the applied force, the contact portion 210 may be rotated clockwise or counterclockwise.

When the force of the operator's facial movement is applied about two of the X, Y, and Z axes, the contact portion 210 is rotated with respect to the virtual center point and the two axes about which the force is applied, as illustrated in FIG. 37.

In this way, the vertical/horizontal translation and the rotation of the contact portion 210 are determined by the direction of the force applied by the facial movement, and naturally one or more of the movement forms described above may appear in combination.
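Assuming the plate's pose is available as a homogeneous transform (an assumption; the text only requires that direction and magnitude be sensed), the combined motion could be split into its translational and rotational parts as follows:

    import numpy as np

    def decompose_plate_motion(prev_pose, new_pose):
        """Split the sensed motion of the contact portion into a translation
        and a rotation expressed relative to its previous pose (whose origin
        plays the role of the virtual center point)."""
        delta = np.linalg.inv(prev_pose) @ new_pose   # relative 4x4 transform
        translation = delta[:3, 3]                    # straight-line component
        rotation = delta[:3, :3]                      # may mix rotations about X, Y, Z
        return translation, rotation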
The method and configuration by which the telescopic display unit 20 senses the operator's facial movement and generates a corresponding manipulation command will be described in detail later with reference to the relevant drawings.
As illustrated in FIGS. 32 and 33, the master interface 4 is provided with a master manipulator that the operator can grip and handle with both hands. The master manipulator may be implemented as two handles 10, or as a greater number of handles 10; a manipulation signal generated by the operator's handling of the handles 10 is transmitted to the slave robot 2, and the robot arm 3 is controlled accordingly. Surgical operations such as positional movement, rotation, and cutting by the robot arm 3 can be performed through the operator's handling of the handles 10.

For example, the handle 10 may be configured to include a main handle and a sub handle. The operator may manipulate the slave robot arm 3, the laparoscope 5, and the like with the main handle alone, or may manipulate the sub handle so that multiple pieces of surgical equipment are operated simultaneously in real time. The main handle and the sub handle may have various mechanical configurations depending on their manner of operation; for example, various input means for operating the robot arm 3 of the slave robot 2 and/or other surgical equipment may be used, such as a joystick, a keypad, a trackball, and a touch screen.

The master manipulator is not limited to the shape of the handle 10 and may be applied without restriction as long as it has a form capable of controlling the operation of the robot arm 3 over a network.

The master robot 1 and the slave robot 2 may be coupled to each other through a wired or wireless communication network so that manipulation signals, the laparoscope image input through the laparoscope 5, and the like can be transmitted to the other party. If multiple manipulation signals from the plurality of handles 10 provided on the master interface 4 and/or a manipulation signal for adjusting the laparoscope 5 need to be transmitted at the same time and/or at similar times, each manipulation signal may be transmitted to the slave robot 2 independently of the others. Here, transmitting each manipulation signal 'independently' means that the signals do not interfere with one another and that no signal affects another. To transmit the multiple manipulation signals independently of one another, various schemes may be used: header information may be added to each manipulation signal when it is generated; the signals may be transmitted in the order of their generation; or priorities for the transmission order of the signals may be set in advance and followed. In such cases, the transmission path of each manipulation signal may also be provided independently so that interference between the signals is fundamentally prevented.

The robot arm 3 of the slave robot 2 may be implemented to be driven with multiple degrees of freedom. The robot arm 3 may comprise, for example, a surgical instrument inserted into the patient's surgical site, a yaw drive that rotates the surgical instrument in the yaw direction according to the surgical position, a pitch drive that rotates the surgical instrument in the pitch direction orthogonal to the rotational drive of the yaw drive, a transfer drive that moves the surgical instrument in the longitudinal direction, a rotation drive that rotates the surgical instrument, and a surgical instrument drive installed at the tip of the surgical instrument to incise or cut the surgical lesion. However, the configuration of the robot arm 3 is not limited to this, and it should be understood that this example does not limit the scope of the present invention. In addition, the actual control process by which the robot arm 3 rotates and moves in the corresponding direction as the operator handles the handle 10 is somewhat removed from the gist of the present invention, so a detailed description of it is omitted.

One or more slave robots 2 may be used to operate on the patient, and the laparoscope 5, which causes the surgical site to be displayed as an image (that is, a picture image) viewable through the eyepiece 220, may be implemented as an independent slave robot 2. In addition, as described above, the embodiments of the present invention may be used universally in surgeries employing various surgical endoscopes other than the laparoscope (for example, a thoracoscope, an arthroscope, a rhinoscope, and the like).
FIG. 38 is a block diagram schematically showing the configuration of a telescopic display unit for generating laparoscope manipulation commands according to an embodiment of the present invention, and FIG. 39 is a flowchart showing a laparoscope manipulation command transmission method according to an embodiment of the present invention.
Referring to FIG. 38, the telescopic display unit 20 includes a motion sensing unit 311, a manipulation command generation unit 321, and a transmission unit 332. The telescopic display unit 20 may further include components that allow the operator to visually recognize, through the eyepiece 220, the image of the surgical site input through the laparoscope 5, but as these are somewhat removed from the gist of the present invention their description is omitted.

The motion sensing unit 311 senses in which direction the operator moved the face while it was in contact with the supports 230 and/or 240 of the contact portion 210, and outputs sensing information. The motion sensing unit 311 may include sensing means for detecting the direction and the magnitude (for example, the distance) of the facial movement. Any sensing means capable of detecting in which direction and by how much the contact portion 210 has moved is sufficient; for example, it may be a sensor that detects in which direction and by what amount the elastic movable portion 250 supporting the contact portion 210 has been stretched, or a sensor provided inside the master robot 1 that detects how closely feature points formed on the inner surface of the contact portion 210 have approached and/or how far they have rotated.

The manipulation command generation unit 321 interprets the direction and magnitude of the operator's facial movement using the sensing information received from the motion sensing unit 311, and generates a manipulation command for controlling the position and image input angle of the laparoscope 5 according to the interpreted result.

The transmission unit 332 transmits the manipulation command generated by the manipulation command generation unit 321 to the slave robot 2 so that the position and image input angle of the laparoscope 5 are manipulated and the corresponding image is provided. The transmission unit 332 may be the transmission unit already provided in the master robot 1 for transmitting manipulation commands for operating the robot arm 3.
FIG. 39 shows a laparoscope manipulation command transmission method according to an embodiment of the present invention.

Referring to FIG. 39, in step 410 the telescopic display unit 20 senses the operator's facial movement, and in step 420 it generates a manipulation command for operating the laparoscope 5 using the sensing information produced by detecting the facial movement.

Then, in step 430, the manipulation command generated in step 420 is transmitted to the slave robot 2 to operate the laparoscope 5.

Here, the manipulation command generated for operating the laparoscope 5 may also cause a specific action to be performed on the master robot 1. For example, when facial rotation is sensed and the laparoscope 5 is to be rotated, the rotation command is transmitted to the slave robot 2 and, at the same time, the orientation of the manipulation handles of the master robot 1 is changed correspondingly, so that the operator's intuitiveness and surgical convenience are maintained. For example, when a rotation signal from the contact portion 210 is sensed, the generated manipulation signal rotates the laparoscope 5; at that moment the image displayed on the screen, and the positions of the surgical tools visible in it, may no longer match the current hand positions on the manipulation handles, so the positions of the manipulation handles may be moved to match the positions of the surgical tools displayed on the screen. This control of the handle orientation can be applied in the same way not only for rotational movement of the contact portion 210 but also for linear movement, whenever the position/orientation of the surgical tools displayed on the screen and the actual position/orientation of the manipulation handles do not match.
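A sketch of steps 410 to 430 together with the handle realignment just described; sensing, send_laparoscope_command, realign_handles, and tool_poses_on_screen are hypothetical interfaces, not ones defined in this specification:

    def on_face_motion(sensing, master, slave):
        # Steps 410-420: interpret the sensed facial movement
        direction, magnitude = sensing.direction, sensing.magnitude
        command = {"move": direction, "amount": magnitude}   # position / view-angle change
        # Step 430: transmit the manipulation command to the slave robot
        slave.send_laparoscope_command(command)
        # Keep the operator's hands consistent with the new view: re-orient the
        # manipulation handles to match the tool positions now shown on screen
        master.realign_handles(slave.tool_poses_on_screen())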
FIG. 40 is a block diagram schematically showing the configuration of a telescopic display unit for generating laparoscope manipulation commands according to an embodiment of the present invention, and FIG. 41 is a flowchart showing a laparoscope manipulation command transmission method according to an embodiment of the present invention.
Referring to FIG. 40, the telescopic display unit 20 may include a motion sensing unit 311, a manipulation command generation unit 321, a transmission unit 332, a contact sensing unit 510, and an original-state restoring unit 520.

The motion sensing unit 311, the manipulation command generation unit 321, and the transmission unit 332 shown here were described above with reference to FIG. 38, so their description is omitted. Note, however, that the motion sensing unit 311 may operate only while the sensing information from the contact sensing unit 510 indicates that the operator's face is in contact with the supports 230 and/or 240.

The contact sensing unit 510 senses whether the operator's face is in contact with the supports 230 and/or 240 and outputs sensing information. For this purpose, a touch sensor may be provided, for example, at the end of each support; naturally, various other sensing schemes capable of detecting facial contact may also be applied.

When the sensing information from the contact sensing unit 510 indicates that the operator's face is no longer in contact with the supports 230 and/or 240, the original-state restoring unit 520 controls the motor drive unit 530 so that the contact portion 210 returns to its original state. The original-state restoring unit 520 may include the motor drive unit 530 described below.

FIG. 40 illustrates the motor drive unit 530, which uses a motor, as the actuating means for returning the contact portion 210 to its original state, but the actuating means for achieving the same purpose is naturally not limited to this. For example, the contact portion 210 may be returned to its original state by various other methods, such as pneumatic or hydraulic actuation.

The original-state restoring unit 520 may return the contact portion 210 to its original position by controlling the motor drive unit 530 using, for example, information on the reference state (that is, position and/or angle) of the contact portion 210, or by controlling the motor drive unit 530 to actuate in the reverse of the direction and magnitude of the facial movement interpreted by the manipulation command generation unit 321.

For example, suppose the operator turns the face in some direction to view, or act on, a site other than the surgical site currently displayed (whereby the contact portion 210 is also translated or rotated), and the laparoscope 5 is manipulated correspondingly; when it is then sensed that facial contact with the contact portion 210 has ended, the original-state restoring unit 520 may control the motor drive unit 530 so that the contact portion 210 returns to the reference state designated as the default.
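The two restoration strategies mentioned above (returning to a stored reference pose, or replaying the interpreted motion in reverse) might look like this; restorer, motor, and motion_log are assumed names for illustration:

    def restore_contact_portion(restorer, motion_log):
        """Return the contact portion to its original state after face contact
        ends; note that no laparoscope manipulation command is generated here."""
        if restorer.reference_pose is not None:
            restorer.motor.move_to(restorer.reference_pose)      # default reference state
        else:
            for direction, magnitude in reversed(motion_log):    # undo interpreted motions
                restorer.motor.move_by(-direction, magnitude)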
The motor drive unit 530 may include a motor that rotates under the control of the original-state restoring unit 520, and the motor drive unit 530 and the contact portion 210 are coupled so that the state (that is, the position and/or angle) of the contact portion 210 is adjusted by the rotation of the motor. The motor drive unit 530 may be housed inside the master interface 4. The motor included in the motor drive unit 530 may be, for example, a spherical motor enabling motion with multiple degrees of freedom; to remove limits on the tilt angle, the support structure of the spherical motor may consist of a spherical bearing and a circular rotor, or of a frame structure with three degrees of freedom for supporting the circular rotor.

Even when the contact portion 210 is restored to its original state by the operation of the components described above, the manipulation command generation unit 321 does not generate or transmit a manipulation command for it, so the video image input and output by the laparoscope 5 does not change. Consistency is therefore maintained when the operator subsequently checks the laparoscope 5 image through the eyepiece 220 and proceeds with the surgery.

The foregoing description with reference to FIG. 40 covered the case in which the return of the contact portion 210 of the telescopic display unit 20 to its original state is achieved by operating the motor drive unit 530; naturally, however, the return may instead be achieved by an elastic movable portion 250, which restores the original state once the external force of the operator's facial movement is removed. Even when the contact portion 210 is returned to its original state by elastic force, no manipulation signal for operating the laparoscope 5 will be generated.
FIG. 41 shows a laparoscope manipulation command transmission method according to an embodiment of the present invention.

Referring to FIG. 41, in step 410 the telescopic display unit 20 senses the operator's facial movement, and in step 420 it generates a manipulation command for operating the laparoscope 5 using the sensing information produced by detecting the facial movement. Then, in step 430, the manipulation command generated in step 420 is transmitted to the slave robot 2 to operate the laparoscope 5.

Next, in step Q610, the telescopic display unit 20 determines whether the operator has released contact with the contact portion 210. If contact is maintained, the process returns to step 410; if contact has been released, the process proceeds to step Q620, in which the contact portion 210 is controlled to return to its original position.
FIG. 42 is a block diagram schematically showing the configuration of a telescopic display unit for generating laparoscope manipulation commands according to an embodiment of the present invention.
Referring to FIG. 42, the telescopic display unit 20 may include a contact sensing unit 510, a camera unit 710, a storage unit 720, an eye tracker unit 730, a manipulation command generation unit 321, a transmission unit 332, and a controller 740.

The contact sensing unit 510 senses whether the operator's face is in contact with the supports 230 and/or 240 formed to protrude from the contact portion 210, and outputs sensing information.

When the sensing information from the contact sensing unit 510 indicates that the operator's face has contacted the contact portion 210, the camera unit 710 captures images of the operator's eyes in real time. The camera unit 710 is arranged inside the master interface 4 so as to photograph the operator's eyes visible through the eyepiece 220. The images of the operator's eyes captured by the camera unit 710 are stored in the storage unit 720 for the eye tracking processing of the eye tracker unit 730.

Any image captured by the camera unit 710 that is in a form suitable for the eye tracking processing of the eye tracker unit 730 is sufficient, and the images may be stored in the storage unit 720 after any preprocessing required for the eye tracker unit 730 has been performed. The image generation method and the types of images generated for eye tracking processing are obvious to those skilled in the art, so their description is omitted.

The eye tracker unit 730 compares and analyzes the images stored in the storage unit 720 in chronological order, in real time or at a predetermined period, interprets the change in the operator's pupil position and the resulting gaze direction, and outputs interpretation information. In addition, the eye tracker unit 730 may further interpret the shape of the eyes (for example, blinking) and output interpretation information on it.

Referring to the interpretation information from the eye tracker unit 730, the manipulation command generation unit 321 generates a manipulation command for controlling the position and/or image input angle of the laparoscope 5 correspondingly when the operator's gaze direction changes. The manipulation command generation unit 321 may also generate a manipulation command when a change in eye shape corresponds to a predesignated command input. For example, designated commands for eye-shape changes may be specified in advance, such as moving the laparoscope 5 toward the surgical site when the right eye blinks twice in succession, and rotating it clockwise when the left eye blinks twice in succession.
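A sketch of how the interpretation information could be mapped to commands, with the two blink bindings mirroring the examples above; the event representation and command names are assumptions for illustration:

    def interpret_eye_events(gaze_delta, blinks):
        """Map eye-tracker interpretation information to laparoscope commands;
        the blink bindings mirror the examples given in the text."""
        commands = []
        if gaze_delta is not None:
            commands.append(("adjust_view", gaze_delta))       # follow the changed gaze direction
        if blinks == ("right", "right"):
            commands.append(("approach_surgical_site", None))  # right eye blinked twice
        elif blinks == ("left", "left"):
            commands.append(("rotate_clockwise", None))        # left eye blinked twice
        return commands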
The transmission unit 332 transmits the manipulation command generated by the manipulation command generation unit 321 to the slave robot 2 so that the position and image input angle of the laparoscope 5 are manipulated and the corresponding image is provided. The transmission unit 332 may be the transmission unit already provided in the master robot 1 for transmitting manipulation commands for operating the robot arm 3.

The controller 740 controls each of the components described above to perform its designated operation.

So far, the telescopic display unit 20 that recognizes and processes eye movement using eye tracking technology has been described with reference to FIG. 42. Without being limited to this, however, the telescopic display unit 20 may naturally also be implemented to sense, recognize, and process the movement of the operator's face itself. As an example, the camera unit 710 may capture a facial image, and an analysis processing unit replacing the eye tracker unit 730 may interpret the positions and changes of feature points in the captured image (for example, one or more of the positions of the two eyes, the position of the nose, and the position of the philtrum), whereupon the manipulation command generation unit 321 may generate a corresponding manipulation command.
FIG. 43 is a flowchart showing a laparoscope manipulation command transmission method according to an embodiment of the present invention.
Referring to FIG. 43, in step Q810, when contact by the operator's face is sensed by the contact sensing unit 510, the telescopic display unit 20 activates the camera unit 710 to generate digital image data of the operator's eyes visible through the eyepiece 220 and stores it in the storage unit 720.

In step Q820, the telescopic display unit 20 compares the digital image data stored in the storage unit 720 in real time or at a predetermined period and generates interpretation information on the changes in the operator's pupil position and gaze direction. In making the comparison, the telescopic display unit 20 may tolerate a certain error, treating positional changes within a certain range as the pupil position not having changed.

In step Q830, the telescopic display unit 20 determines whether the operator's changed gaze direction has been maintained for at least a preset threshold time.

If the changed gaze direction has been maintained for at least the threshold time, in step Q840 the telescopic display unit 20 generates a manipulation command for operating the laparoscope 5 (for example, moving it and/or changing its image input angle) so that it receives the image of the corresponding position, and transmits the command to the slave robot 2. Here, the threshold time may be set so that the laparoscope 5 is not manipulated by pupil movements due to the operator's eye tremor or a general survey of the surgical site; its value may be set experimentally or statistically, or by the operator or another party.
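The dwell-time check of steps Q820 to Q840 might be sketched as a simple polling loop; the 1.5-second threshold is purely illustrative, since the text leaves the value to experimental, statistical, or operator setting, and tracker and slave are hypothetical interfaces:

    import time

    THRESHOLD_S = 1.5   # illustrative only; the text leaves the value to experiment or setting

    def watch_gaze(tracker, slave):
        held_dir, held_since = None, 0.0
        while True:
            direction = tracker.current_gaze_direction()        # from the stored eye images (Q820)
            if direction != held_dir:
                held_dir, held_since = direction, time.time()   # gaze moved: restart the timer
            elif time.time() - held_since >= THRESHOLD_S:       # Q830
                slave.send_laparoscope_command({"move_toward": held_dir})   # Q840
                held_since = time.time()
            time.sleep(0.05)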
If, however, the changed gaze direction is not maintained for the threshold time, the process returns to step Q810.
FIG. 44 is a flowchart showing a laparoscope manipulation command transmission method according to an embodiment of the present invention, and FIG. 45 is a view illustrating an image display form produced by the telescopic display unit according to an embodiment of the present invention.

Referring to FIG. 44, in step Q810, when contact by the operator's face is sensed by the contact sensing unit 510, the telescopic display unit 20 activates the camera unit 710 to generate digital image data of the operator's eyes visible through the eyepiece 220 and stores it in the storage unit 720.

In step Q820, the telescopic display unit 20 compares the digital image data stored in the storage unit 720 in real time or at a predetermined period and generates interpretation information on the changes in the operator's pupil position and gaze direction.

In step Q910, the telescopic display unit 20 determines whether the operator's gaze position is a predesignated setting position.

FIG. 45 illustrates an image display form produced by the telescopic display unit 20.

As illustrated in FIG. 45, through the eyepiece 220 the operator can view the video image 1010 provided through the laparoscope 5, and the image may include the surgical site and an instrument 1020. The image produced by the telescopic display unit 20 may also display the operator's gaze position 1030 as an overlay, together with the setting positions.

The setting positions may include one or more of an outer border 1040, a first rotation indicator position 1050, and a second rotation indicator position 1060. For example, when the operator gazes at a side of the outer border 1040 in some direction for at least the threshold time, the laparoscope 5 may be controlled to move in that direction. That is, when the operator gazes at the left side of the outer border 1040 for at least the threshold time, the laparoscope 5 may be controlled to move leftward in order to capture the region to the left of the currently displayed position. Likewise, the laparoscope may be controlled to rotate counterclockwise when the operator gazes at the first rotation indicator position 1050 for at least the threshold time, and clockwise when the operator gazes at the second rotation indicator position 1060 for at least the threshold time.
Referring again to FIG. 44, if the operator's gaze position is not one of the above-described set positions, the process returns to step Q810.
If, however, the operator's gaze position is one of the above-described set positions, the process proceeds to step Q920, in which it is determined whether the operator's gaze is maintained for at least the preset threshold time.
If the operator's gaze on the set position is maintained for at least the threshold time, in step Q930 the telescopic display unit 20 generates a manipulation command so that the laparoscope 5 is manipulated according to the command designated for that set position, and transmits the command to the slave robot 2.
However, if the operator's gaze on the set position is not maintained for at least the threshold time, the process returns to step Q810.
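Purely as a hedged sketch of the set-position check of step Q910, the gaze coordinates might be mapped to the designated commands as follows; the screen geometry, region boundaries, and command names are assumptions, not part of the disclosure.

```python
# Hypothetical screen geometry (pixels) and set-position regions for FIG. 45.
SCREEN_W, SCREEN_H = 1920, 1080
BORDER = 40                                  # width of the outer border 1040 (assumed)
ROTATE_CCW_REGION = (1700, 40, 1880, 120)    # first rotation instruction position 1050 (assumed)
ROTATE_CW_REGION = (1700, 140, 1880, 220)    # second rotation instruction position 1060 (assumed)

def in_region(point, region):
    x, y = point
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def classify_set_position(gaze):
    """Return the command designated for the gazed set position, or None."""
    x, y = gaze
    if in_region(gaze, ROTATE_CCW_REGION):
        return {"type": "rotate", "direction": "ccw"}
    if in_region(gaze, ROTATE_CW_REGION):
        return {"type": "rotate", "direction": "cw"}
    # Sides of the outer border 1040: move the laparoscope toward that side.
    if x <= BORDER:
        return {"type": "move", "direction": "left"}
    if x >= SCREEN_W - BORDER:
        return {"type": "move", "direction": "right"}
    if y <= BORDER:
        return {"type": "move", "direction": "up"}
    if y >= SCREEN_H - BORDER:
        return {"type": "move", "direction": "down"}
    return None   # not a set position: FIG. 44 returns to step Q810
```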
FIG. 46 is a flowchart illustrating a laparoscope manipulation command transmission method according to an embodiment of the present invention.
Referring to FIG. 46, in step Q810, when contact of the operator's face is detected by the contact detection unit 510, the telescopic display unit 20 activates the camera unit 710 so that digital image data of the operator's eyes visible through the eyepiece 220 is generated and stored in the storage unit 720.
In step 1110, the telescopic display unit 20 compares the image information stored in the storage unit 720 in real time or at predetermined intervals and generates interpretation information on changes in the shape of the operator's eyes. For example, the interpretation information may indicate how many times the operator's eyes blinked during a given period and, if blinks occurred, which eye blinked.
In step 1120, the telescopic display unit 20 determines whether the interpretation information on the eye shape change satisfies a predesignated condition. The designated condition for an eye shape change may be set in advance as, for example, whether the right eye blinked twice in succession within a predetermined time, or whether the left eye blinked twice in succession within a predetermined time.
If the interpretation information on the eye shape change satisfies the predesignated condition, the process proceeds to step 1130, in which a manipulation command for manipulating the laparoscope 5 is generated as the command designated for that condition and transmitted to the slave robot 2. The command designated for an eye shape change may be specified in advance such that, for example, two successive blinks of the right eye move the laparoscope 5 toward the surgical site, and two successive blinks of the left eye rotate it clockwise.
However, if the interpretation information on the eye shape change does not satisfy the predesignated condition, the process proceeds to step Q910.
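As a minimal sketch of steps 1110 to 1130, blink events could be checked against the designated conditions as follows; the `BlinkDetector` class, the blink window, and the command names are assumptions made for illustration.

```python
import time
from collections import deque

BLINK_WINDOW = 1.0   # seconds within which two blinks count as "successive" (assumed value)

class BlinkDetector:
    """Sketch of steps 1110 to 1130: interpret eye-shape changes as blink
    patterns and map them to designated laparoscope commands."""
    def __init__(self):
        self.blinks = {"left": deque(), "right": deque()}

    def record_blink(self, eye):                 # eye: "left" or "right" (step 1110)
        now = time.monotonic()
        q = self.blinks[eye]
        q.append(now)
        while q and now - q[0] > BLINK_WINDOW:   # keep only recent blinks
            q.popleft()

    def designated_command(self):                # step 1120: check the conditions
        if len(self.blinks["right"]) >= 2:
            self.blinks["right"].clear()
            return {"type": "approach_surgical_site"}     # assumed command name
        if len(self.blinks["left"]) >= 2:
            self.blinks["left"].clear()
            return {"type": "rotate", "direction": "cw"}  # assumed command name
        return None                              # no condition met
```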
The above-described laparoscope manipulation method may also be implemented as a software program or the like. The codes and code segments constituting the program can be readily inferred by computer programmers in the art. The program is stored in computer-readable media and is read and executed by a computer to implement the method. Such information storage media include magnetic recording media, optical recording media, and carrier wave media.
FIG. 47 is a plan view showing the overall structure of a surgical robot according to an embodiment of the present invention, and FIG. 48 is a conceptual diagram showing the master interface of a surgical robot according to an embodiment of the present invention.
The present embodiment is characterized in that the output position of the endoscope image 9 on the monitor viewed by the user is changed in accordance with the viewpoint of the endoscope, which varies with the movement of the surgical endoscope, so that the user can perceive the actual surgical situation more realistically. That is, since the viewpoint of the endoscope can coincide with the viewpoint of the user performing the surgery, the present embodiment matches the viewpoint of the endoscope inside the abdominal cavity with the position and output direction of the monitor that outputs the endoscope image 9 at the external surgical site; the motion of the system at the external surgical site thereby reflects the motion of the endoscope actually moving inside the patient, providing a greater sense of realism.
The surgical endoscope according to the present embodiment may be any of various instruments used as imaging tools during surgery, such as a thoracoscope, arthroscope, rhinoscope, cystoscope, proctoscope, duodenoscope, mediastinoscope, or cardioscope, as well as a laparoscope. In addition, the surgical endoscope according to the present embodiment may be a stereoscopic endoscope. That is, it may be a stereoscopic endoscope that generates stereoscopic image information, and this stereoscopic image information may be generated by various techniques. For example, the surgical endoscope according to the present embodiment may acquire stereoscopic images in various ways, such as by being provided with a plurality of cameras to acquire a plurality of images carrying stereoscopic information, or by acquiring a plurality of images with a single camera. Besides these, the surgical endoscope according to the present embodiment may of course generate stereoscopic images by various other methods.
In addition, the immersive surgical image processing apparatus according to the present embodiment is not necessarily limited to implementation in the illustrated surgical robot system, and is applicable to any system that outputs an endoscope image 9 during surgery and performs surgery using surgical tools. The following description centers on the case in which the surgical image processing apparatus according to the present embodiment is applied to a surgical robot system.
Referring to FIGS. 47 and 48, the surgical robot system includes a slave robot 2 that performs surgery on a patient lying on the operating table and a master robot 1 through which the operator remotely controls the slave robot 2. The master robot 1 and the slave robot 2 need not necessarily be separated into physically independent devices; they may be integrated into a single unit, in which case the master interface 4 may correspond, for example, to the interface portion of the integrated robot.
The master interface 4 of the master robot 1 includes a monitor unit 6 and a master manipulator, and the slave robot 2 includes a robot arm 3 and an instrument 8. The instrument 8 is a surgical tool such as an endoscope, for example a laparoscope, or a surgical instrument that directly manipulates the affected area.
The master interface 4 is provided with a master manipulator that the operator can grip and manipulate with both hands. As illustrated in FIGS. 47 and 48, the master manipulator may be implemented as two handles 10, and the manipulation signal generated by the operator's manipulation of the handles 10 is transmitted to the slave robot 2 to control the robot arm 3. Positional movement, rotation, cutting operations, and the like of the robot arm 3 and/or the instrument 8 may be performed by the operator's manipulation of the handles 10.
For example, the handle 10 may consist of a main handle and a sub handle. The slave robot arm 3, the instrument 8, and the like may be operated with only one handle, or a sub handle may be added so that a plurality of pieces of surgical equipment can be operated simultaneously in real time. The main handle and the sub handle may have various mechanical configurations depending on their manipulation method; for example, various input means for operating the robot arm 3 of the slave robot 2 and/or other surgical equipment, such as a joystick, a keypad, a trackball, or a touch screen, may be used.
The master manipulator is not limited to the shape of the handle 10 and may be applied without limitation as long as it takes a form capable of controlling the operation of the robot arm 3 over a network.
On the monitor unit 6 of the master interface 4, the endoscope image 9 input via the instrument 8, a camera image, and a modeling image are displayed as picture images. The information displayed on the monitor unit 6 may vary depending on the type of image selected.
The monitor unit 6 may consist of one or more monitors, and the information required for surgery may be displayed individually on each monitor. FIGS. 47 and 48 illustrate the case in which the monitor unit 6 includes three monitors, but the number of monitors may be determined variously according to the type or kind of information to be displayed. In addition, when the monitor unit 6 includes a plurality of monitors, the screens may be linked with one another so as to be extended. That is, the endoscope image 9 may move freely across the monitors like a window displayed on a single monitor, and the full image may be output by displaying mutually connected partial images on the respective monitors.
The slave robot 2 and the master robot 1 are coupled to each other by wire or wirelessly, so that the master robot 1 transmits manipulation signals to the slave robot 2 and the slave robot 2 transmits the endoscope image 9 input via the instrument 8 to the master robot 1. If the two manipulation signals from the two handles 10 provided on the master interface 4 and/or a manipulation signal for adjusting the position of the instrument 8 need to be transmitted simultaneously and/or at similar points in time, each manipulation signal may be transmitted to the slave robot 2 independently of the others. Here, transmitting each manipulation signal "independently" means that the manipulation signals do not interfere with one another and that no one manipulation signal affects another. To transmit a plurality of manipulation signals independently of one another in this way, various methods may be used, such as adding header information for each manipulation signal at the stage of generating it, transmitting the manipulation signals in the order of their generation, or setting priorities in advance for the transmission order of the manipulation signals and transmitting them accordingly. In this case, the transmission path over which each manipulation signal is transmitted may be provided independently, so that interference between the manipulation signals is fundamentally prevented.
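One hedged way to realize such independent transmission is to tag each manipulation signal with header information at generation time and to order transmission by priority and generation sequence; the field names and priority values below are assumptions made for the sketch, not part of the disclosure.

```python
import itertools
import json

_seq = itertools.count()   # generation order shared by all signal sources

def make_operation_signal(source, payload):
    """Attach header information to a manipulation signal at generation time.
    The header fields and priority values are illustrative assumptions."""
    priority = {"handle_left": 1, "handle_right": 1, "instrument_position": 2}.get(source, 3)
    return {"header": {"source": source, "seq": next(_seq), "priority": priority},
            "payload": payload}

def transmit(signals, channel):
    # Send in priority order, then generation order, so that no signal
    # interferes with or is affected by another.
    for sig in sorted(signals, key=lambda s: (s["header"]["priority"], s["header"]["seq"])):
        channel.send(json.dumps(sig).encode())
```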
The robot arm 3 of the slave robot 2 may be implemented to be driven with multiple degrees of freedom. The robot arm 3 may comprise, for example, a surgical tool inserted into the surgical site of the patient, a yaw drive unit that rotates the surgical tool in the yaw direction according to the surgical position, a pitch drive unit that rotates the surgical tool in the pitch direction orthogonal to the rotational drive of the yaw drive unit, a transfer drive unit that moves the surgical tool in the lengthwise direction, a rotation drive unit that rotates the surgical tool, and a surgical tool drive unit installed at the distal end of the surgical tool to incise or cut the surgical lesion. However, the configuration of the robot arm 3 is not limited thereto, and it should be understood that this example does not limit the scope of the present invention. Furthermore, the actual control process by which the robot arm 3 rotates and moves in the corresponding direction as the operator manipulates the handle 10 is somewhat removed from the gist of the present invention, so a detailed description thereof is omitted.
One or more slave robots 2 may be used to operate on the patient; the instrument 8 for displaying the surgical site as a picture image through the monitor unit 6 may be implemented as an independent slave robot 2; and the master robot 1 may also be implemented integrally with the slave robot 2.
FIG. 49 is a block diagram of a surgical robot according to an embodiment of the present invention. Referring to FIG. 49, a master robot 1 including an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, a screen display control unit 3350, and a control unit 370, and a slave robot 2 including a robot arm 3 and an endoscope 5, are shown.
The immersive surgical image processing apparatus according to the present embodiment may be implemented as a module including the image input unit 310, the screen display unit 320, and the screen display control unit 3350; such a module may of course also include the arm manipulation unit 330, the manipulation signal generation unit 340, and the control unit 370.
The image input unit 310 receives the image input through the endoscope 5 of the slave robot 2 via wired or wireless transmission. The endoscope 5 may itself be one type of surgical tool according to the present embodiment, and there may be one or more of them.
The screen display unit 320 outputs a picture image corresponding to the image received through the image input unit 310 as visual information. The screen display unit 320 may output the endoscope image at its original size or zoomed in/out, may register the endoscope image with a modeling image described later, or may output each as a separate image.
In addition, the screen display unit 320 may output the endoscope image together with an image showing the overall surgical situation, for example a camera image generated by a camera photographing the outside of the surgical subject, simultaneously and/or registered with each other, thereby making it easier to grasp the situation during surgery.
In addition, the screen display unit 320 may output a reduced version of the entire image (endoscope image, modeling image, camera image, and the like) within a portion of the output image or in a window created on a separate screen, and may perform a function whereby, when the operator selects or rotates a specific point in the reduced image using the above-described master manipulator, the entire output image moves or rotates: the so-called bird's eye view function of CAD programs. Functions such as zooming in/out, moving, and rotating the image output on the screen display unit 320 may be controlled by the control unit 370 in accordance with the manipulation of the master manipulator.
The screen display unit 320 may be implemented in the form of the monitor unit 6 or the like, and the image processing for outputting the received image as a picture image through the screen display unit 320 may be performed by the control unit 370, the screen display control unit 3350, or a separate image processing unit (not shown). The screen display unit 320 according to the present embodiment may be a display implemented with various technologies, for example an ultra-high-resolution monitor such as a UHDTV (7680x4320). The screen display unit 320 according to the present embodiment may also be a 3D display. For example, it may allow the user to perceive left-eye and right-eye images separately using the principle of binocular disparity. Such 3D images may be implemented in various ways, such as glasses-based methods (for example, the red-cyan anaglyph method, the polarized passive-glasses method, and the shutter active-glasses method), the lenticular method, and the barrier method.
The screen display unit 320 outputs the input endoscope image to a specific region. Here, the specific region may be a region on the screen having a predetermined size and position. This specific region may be determined in accordance with the change in the viewpoint of the endoscope 5, as described above.
The screen display control unit 3350 may set this specific region in accordance with the viewpoint of the endoscope 5. That is, the screen display control unit 3350 tracks the viewpoint in accordance with the motion of the endoscope 5, such as rotation and movement, and, reflecting this, sets the specific region in which the screen display unit 320 outputs the endoscope image.
The arm manipulation unit 330 is a means by which the operator manipulates the position and function of the robot arm 3 of the slave robot 2. The arm manipulation unit 330 may be formed in the shape of the handle 10 as illustrated in FIG. 48, but its shape is not limited thereto and may be modified into various shapes serving the same purpose. For example, part of it may be formed in the shape of a handle and another part in a different shape, such as a clutch button, and a finger insertion tube or ring into which the operator's fingers can be inserted and fixed may further be formed to facilitate manipulation of the surgical tool.
The manipulation signal generation unit 340 generates a corresponding manipulation signal when the operator manipulates the arm manipulation unit 330 to move the position of the robot arm 3 and/or the endoscope 5 or to perform a manipulation for surgery, and transmits the signal to the slave robot 2. The manipulation signal may be transmitted and received over a wired or wireless communication network.
The manipulation signal generation unit 340 generates a manipulation signal using the manipulation information produced by the operator's manipulation of the arm manipulation unit 330 and transmits the generated signal to the slave robot 2, so that the actual surgical tool is manipulated in accordance with the manipulation information. Moreover, the position and manipulated shape of the actual surgical tool manipulated by the manipulation signal can be confirmed by the operator from the image input by the endoscope 5.
FIG. 50 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention. Referring to FIG. 50, the screen display control unit 3350 may include an endoscope viewpoint tracking unit 1351, an image movement information extraction unit 1353, and an image position setting unit 1355.
The endoscope viewpoint tracking unit 1351 tracks viewpoint information of the endoscope 5 in accordance with the movement and rotation of the endoscope 5. Here, the viewpoint information means the view point from which the endoscope 5 looks, and this information may be extracted from the signals that manipulate the endoscope 5 in the above-described surgical robot system. That is, the viewpoint information may be specified by the signals that command the movement and rotational motion of the endoscope 5. Since these endoscope 5 manipulation signals are generated in the surgical robot system and transmitted to the robot arm 3 that manipulates the endoscope 5, the direction in which the endoscope 5 is looking can be tracked using these signals.
The image movement information extraction unit 1353 extracts movement information of the endoscope image using the viewpoint information of the endoscope 5. That is, the viewpoint information of the endoscope 5 may include information on the amount of positional change of the subject captured in the acquired endoscope image, and the movement information of the endoscope image can be extracted from this information.
The image position setting unit 1355 sets, using the extracted movement information, the specific region of the screen display unit 320 in which the endoscope image is output. For example, if the viewpoint information of the endoscope 5 has changed by a certain vector A, the movement information of the endoscope image of the patient's internal organs is specified in accordance with that vector, and the specific region of the screen display unit 320 is set using this movement information. If the endoscope image has moved by a certain vector B, the specific region in which the endoscope image will actually be output on the screen display unit 320 can be set using this information together with the size, shape, and resolution of the screen display unit 320.
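A minimal sketch of this pipeline follows, assuming a fixed linear scale between endoscope motion and screen pixels and a simple clamping policy, neither of which is specified in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    width: int        # pixels
    height: int       # pixels
    px_per_mm: float  # assumed linear scale from endoscope motion to screen pixels

def set_image_region(center, viewpoint_delta, screen, image_size):
    """Map a viewpoint change (vector A, from the manipulation signals) to an
    image movement vector (vector B, in pixels) and return the new center of
    the specific region on the screen display unit."""
    dx_mm, dy_mm = viewpoint_delta
    bx, by = dx_mm * screen.px_per_mm, dy_mm * screen.px_per_mm
    cx, cy = center[0] + bx, center[1] + by
    w, h = image_size
    # One possible policy: clamp so the region stays fully on screen.
    cx = min(max(cx, w / 2), screen.width - w / 2)
    cy = min(max(cy, h / 2), screen.height - h / 2)
    return (cx, cy)
```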
FIG. 51 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention. Each of the steps described below may be executed with the screen display control unit 3350 as the acting entity, and the steps need not, of course, be executed in time series in the order described.
In step R511, the viewpoint information of the endoscope 5, that is, information about the viewpoint from which the endoscope 5 looks, is tracked in accordance with the movement and rotation of the endoscope 5. The viewpoint information is specified by the signals that command the movement and rotational motion of the endoscope 5, so the direction in which the endoscope 5 is looking can be tracked.
In step R513, the movement information of the endoscope image, corresponding to the amount of positional change of the subject captured in the endoscope image, is extracted using the viewpoint information of the endoscope 5.
In step R515, the specific region of the screen display unit 320 in which the endoscope image is to be output is set using the extracted movement information. That is, when the viewpoint information of the endoscope 5 and the movement information of the endoscope image have been specified as described above, the specific region in which the endoscope image will be output on the screen display unit 320 is set using this movement information.
In step R517, the endoscope image is output to the specific region set on the screen display unit 320.
FIG. 52 is a diagram of an output image according to the immersive surgical image processing method according to an embodiment of the present invention. The screen display unit 320 may occupy the full screen, and the endoscope image 620 acquired by the endoscope 5 may be output at a specific position on the screen display unit 320, for example at a position where the center point of the endoscope image 620 is placed at coordinates (X, Y). The coordinates (X, Y) may be set in accordance with the amount of change in the viewpoint of the endoscope 5. For example, if the viewpoint information of the endoscope 5 and the movement amount of the endoscope image change by +1 in the horizontal direction and -1 in the vertical direction, the center point of the endoscope image 620 may move to the position of coordinates (X+1, Y-1).
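Reusing the hypothetical Screen and set_image_region from the sketch above with an identity scale, the coordinate update of FIG. 52 works out as follows:

```python
screen = Screen(width=1920, height=1080, px_per_mm=1.0)  # identity scale: 1 unit -> 1 pixel
center = (960, 540)                                      # image 620 centered at (X, Y)
new_center = set_image_region(center, (+1, -1), screen, image_size=(640, 480))
print(new_center)   # (961.0, 539.0), i.e. the center moves to (X+1, Y-1)
```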
FIG. 53 is a block diagram of a surgical robot according to an embodiment of the present invention. Referring to FIG. 53, a master robot 1 including an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, a screen display control unit 3350, an image storage unit 360, and a control unit 370, and a slave robot 2 including a robot arm 3 and an endoscope 5, are shown. The description below centers on the differences from what has been described above.
The present embodiment is characterized in that, at the current point in time, a previously input and stored endoscope image is extracted and output on the screen display unit 320 together with the current endoscope image, so that the user can be informed of changes in the endoscope image.
The image input unit 310 receives a first endoscope image and a second endoscope image provided at different points in time from the surgical endoscope. Here, ordinals such as "first" and "second" serve as identifiers distinguishing different endoscope images, and the first endoscope image and the second endoscope image may be images captured by the endoscope 5 at different points in time and from different viewpoints. The image input unit 310 may receive the first endoscope image before the second endoscope image.
The image storage unit 360 stores the first endoscope image and the second endoscope image. The image storage unit 360 stores not only the image information constituting the actual content of the first and second endoscope images but also information on the specific regions in which they are to be output on the screen display unit 320.
The screen display unit 320 outputs the first endoscope image and the second endoscope image to different regions, and the screen display control unit 3350 may control the screen display unit 320 to output the first endoscope image and the second endoscope image to different regions corresponding to the different viewpoints of the endoscope 5.
Here, the screen display unit 320 may output the first endoscope image and the second endoscope image with one or more of saturation, brightness, color, and screen pattern differing between them. For example, the screen display unit 320 may output the currently input second endoscope image as a color image and the first endoscope image, a past image, as a black-and-white image or the like, so that the user can distinguish the images from each other. Referring to FIG. 56, an example is shown in which the second endoscope image 623, the currently input image, is output as a color image at coordinates (X1, Y1), and the first endoscope image 621, the previously input image, is output at coordinates (X2, Y2) with a screen pattern, that is, a hatched pattern, applied to it.
In addition, the first endoscope image, the previous image, may be output continuously or only for a preset time. In the latter case, the past image is output on the screen display unit 320 only for a predetermined time, so that new endoscope images can be continuously updated on the screen display unit 320.
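A hedged sketch of this dual display follows: the current image is drawn normally, while past images are drawn in a distinguishing style and expire after a preset time. The `display.draw` interface, the style names, and the time-to-live value are assumptions, not part of the disclosure.

```python
import time

PAST_IMAGE_TTL = 5.0   # seconds a past image remains on screen (assumed value)

def render_frames(display, current, past_entries):
    """Draw the current (second) endoscope image in color at its region and
    past (first) images in a distinguishing style, removing them once the
    preset display time has elapsed."""
    now = time.monotonic()
    for entry in list(past_entries):
        if now - entry["stored_at"] > PAST_IMAGE_TTL:
            past_entries.remove(entry)        # preset display time has elapsed
            continue
        display.draw(entry["image"], entry["region"], style="grayscale_hatched")
    display.draw(current["image"], current["region"], style="color")
```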
FIG. 54 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention. Referring to FIG. 54, the screen display control unit 3350 may include an endoscope viewpoint tracking unit 1351, an image movement information extraction unit 1353, an image position setting unit 1355, and a stored image display unit 1357.
The endoscope viewpoint tracking unit 1351 tracks the viewpoint information of the endoscope 5 in accordance with the movement and rotation of the endoscope 5, and the image movement information extraction unit 1353 extracts the movement information of the endoscope image using the viewpoint information of the endoscope 5, as described above.
The image position setting unit 1355 sets, using the extracted movement information, the specific region of the screen display unit 320 in which the endoscope image is output.
The stored image display unit 1357 extracts the first endoscope image stored in the image storage unit 360 and outputs it on the screen display unit 320 while the screen display unit 320 is outputting the second endoscope image input in real time. Since the first endoscope image and the second endoscope image differ in their output regions and image information, the stored image display unit 1357 extracts this information from the image storage unit 360 and causes the stored past image, the first endoscope image, to be output on the screen display unit 320.
FIG. 55 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention. Each of the steps described below may be executed with the screen display control unit 3350 as the acting entity; the method divides broadly into a stage of outputting the first endoscope image and a stage of outputting the second endoscope image, and, as described above, the first endoscope image may be output together with the second endoscope image.
In step R511, the viewpoint information of the endoscope 5, that is, information about the viewpoint from which the endoscope 5 looks, is tracked in accordance with the first movement and rotation information of the endoscope 5.
In step R513, the movement information of the first endoscope image is extracted; in step R515, the specific region of the screen display unit 320 in which the endoscope image is to be output is set using the extracted movement information; and in step R517, the first endoscope image is output at the set position.
In step R519, the information on the output first endoscope image and the first screen position are stored in the image storage unit 360.
In step R521, the viewpoint information of the endoscope 5, that is, information about the viewpoint from which the endoscope 5 looks, is tracked in accordance with the second movement and rotation information of the endoscope 5.
In step R522, the movement information of the second endoscope image is extracted; in step R523, the specific region of the screen display unit 320 in which the endoscope image is to be output is set using the extracted movement information; and in step R524, the second endoscope image is output at the set position.
In step R525, the information on the output second endoscope image and the second screen position are stored in the image storage unit 360. In step R526, the first endoscope image stored in the image storage unit 360 is output at the first screen position together with the second endoscope image. Here, the first endoscope image may be output with one or more of saturation, brightness, color, and screen pattern differing from the second endoscope image.
FIG. 57 is a block diagram of a surgical robot according to an embodiment of the present invention. Referring to FIG. 57, a master robot 1 including an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, a screen display control unit 3350, a control unit 370, and an image registration unit 450, and a slave robot 2 including a robot arm 3 and an endoscope 5, are shown. The description below centers on the differences from what has been described above.
The present embodiment is characterized in that the endoscope image actually captured using the endoscope during surgery and the modeling image generated in advance for the surgical tools and stored in the image storage unit 360 may be output, individually or registered with each other, on the screen display unit 320 observable by the user, with modifications such as size adjustment applied to the images.
The image registration unit 450 registers the endoscope image received through the image input unit 310 with the modeling image of the surgical tools stored in the above-described image storage unit 360 to generate an output image, and outputs it on the screen display unit 320. Since the endoscope image is an image of the inside of the patient's body captured using the endoscope, obtained by imaging only a limited region, it includes an image of only part of the surgical tools.
The modeling image is an image generated by realizing the shape of the entire surgical tool as a 2D or 3D image. The modeling image may be an image of the surgical tool captured at a specific point before the start of surgery, for example in an initially set state. Since the modeling image of the surgical tools is generated by computer simulation techniques, the image registration unit 450 can register the modeling image with the surgical tools shown in the actual endoscope image and output the result. The technique of obtaining an image by modeling a real object is somewhat removed from the gist of the present invention, so a detailed description thereof is omitted. The specific functions, various detailed configurations, and the like of the image registration unit 450 are described in detail below with reference to the relevant drawings.
The control unit 370 controls the operation of each component so that the above-described functions can be performed. The control unit 370 may also perform the function of converting the image input through the image input unit 310 into a picture image to be displayed through the screen display unit 320. In addition, when manipulation information produced by the manipulation of the arm manipulation unit 330 is input, the control unit 370 controls the image registration unit 450 so that the modeling image is output through the screen display unit 320 accordingly.
The actual surgical tool included in the endoscope image is the surgical tool contained in the image input by the endoscope 5 and transmitted to the master robot 1, and is the tool that applies direct surgical action to the patient's body. In contrast, the modeled surgical tool included in the modeling image is mathematically modeled in advance with respect to the entire surgical tool and stored in the image storage unit 360 as a 2D or 3D image. The surgical tool of the endoscope image and the modeled surgical tool of the modeling image may both be controlled by the manipulation information that the master robot 1 recognizes as the operator manipulates the arm manipulation unit 330 (that is, information on the movement, rotation, and the like of the surgical tool). The positions and manipulated shapes of the actual surgical tool and the modeled surgical tool may be determined by this manipulation information. Referring to FIG. 60, the endoscope image 620 is registered with the modeling image 610 and output at coordinates (X, Y) of the screen display unit 320.
In addition, the modeling image may include an image reconstructed by modeling not only the surgical tools but also the patient's organs. That is, the modeling image may include a 2D or 3D image of the surface of the patient's organs reconstructed with reference to images acquired from imaging equipment such as CT (computed tomography), MR (magnetic resonance), PET (positron emission tomography), SPECT (single photon emission computed tomography), and US (ultrasonography); in this case, registering the actual endoscope image with the computationally modeled image has the effect of providing the operator with a more complete image including the surgical site.
FIG. 58 is a block diagram of an immersive surgical image processing apparatus according to an embodiment of the present invention. Referring to FIG. 58, the image registration unit 450 may include a characteristic value calculation unit 451, a modeling image implementation unit 453, and an overlapping image processing unit 455.
The characteristic value calculation unit 451 calculates characteristic values using the image input and provided by the laparoscope 5 of the slave robot 2 and/or coordinate information on the position of the actual surgical tool coupled to the robot arm 3. The position of the actual surgical tool can be recognized by referring to the position value of the robot arm 3 of the slave robot 2, and information on that position may also be provided from the slave robot 2 to the master robot 1.
The characteristic value calculation unit 451 can calculate, for example using the image from the laparoscope 5, characteristic values such as the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, and the type, direction, depth, and degree of bending of the actual surgical tool. When calculating the characteristic values using the image from the laparoscope 5, image recognition techniques may be used to extract the outline of the subject included in the image and to recognize its shape, tilt angle, and so on. The type of the actual surgical tool and the like may also be input in advance, for example in the process of coupling the tool to the robot arm 3.
The modeling image implementation unit 453 implements a modeling image corresponding to the characteristic values calculated by the characteristic value calculation unit 451. The data related to the modeling image may be extracted from the image storage unit 360. That is, the modeling image implementation unit 453 extracts the modeling image data for the surgical tools and so on corresponding to the characteristic values of the laparoscope 5 (field of view (FOV), magnification, viewpoint, viewing depth, and the like, together with the type, direction, depth, degree of bending, and the like of the actual surgical tool) and implements the modeling image so that it is registered with the surgical tools shown in the endoscope image.
The way in which the modeling image implementation unit 453 extracts the image corresponding to the characteristic values calculated by the characteristic value calculation unit 451 may be implemented variously. For example, the modeling image implementation unit 453 may use the characteristic values of the laparoscope 5 directly to extract the corresponding modeling image. That is, the modeling image implementation unit 453 may extract the 2D or 3D modeled surgical tool image corresponding to data such as the above-described field of view and magnification of the laparoscope 5 and register it with the endoscope image. Here, characteristic values such as the field of view and the magnification may be calculated through comparison with a reference image according to the initial settings, or by comparing and analyzing sequentially generated images from the laparoscope 5 against one another.
According to another embodiment, the modeling image implementation unit 453 may extract the modeling image using the manipulation information that determines the positions and manipulated shapes of the laparoscope 5 and the robot arm 3. That is, since the surgical tool in the endoscope image may, as described above, be controlled by the manipulation information recognized by the master robot 1 as the operator manipulates the arm manipulation unit 330, the position and manipulated shape of the modeled surgical tool corresponding to the characteristic values of the endoscope image may be determined by the manipulation information.
Such manipulation information may be stored in a separate database in temporal order, and the modeling image implementation unit 453 can recognize the characteristic values of the actual surgical tool by referring to this database and extract the corresponding information on the modeling image. That is, the position of the surgical tool output in the modeling image may be set using the accumulated data of the position change signals of the surgical tool. For example, if the manipulation information for a surgical instrument, one of the surgical tools, includes the information that it has been rotated 90 degrees clockwise and moved 1 cm in the extension direction, the modeling image implementation unit 453 may transform and extract the image of the surgical instrument included in the modeling image in accordance with this manipulation information.
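As an illustration of setting the modeled tool's pose from accumulated position change signals, a sketch under an assumed record format follows; the rotation and extension fields are not from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ToolPose:
    angle_deg: float = 0.0     # rotation about the tool axis
    extension_cm: float = 0.0  # travel along the extension direction

def accumulate_pose(pose, manipulation_log):
    """Set the modeled tool's pose from the accumulated position change
    signals; the record format {'rotate_deg': ..., 'extend_cm': ...} is an
    assumption made for illustration."""
    for record in manipulation_log:   # records stored in temporal order
        pose.angle_deg = (pose.angle_deg + record.get("rotate_deg", 0.0)) % 360.0
        pose.extension_cm += record.get("extend_cm", 0.0)
    return pose

# The example from the text: rotate 90 degrees clockwise, then move 1 cm.
pose = accumulate_pose(ToolPose(), [{"rotate_deg": 90.0, "extend_cm": 1.0}])
# The modeled instrument would then be rendered at this transformed pose.
```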
Here, the surgical instrument is mounted on the distal end of a surgical robot arm equipped with an actuator; a driving wheel (not shown) provided in a driving unit (not shown) operates by receiving driving force from the actuator, and a manipulator connected to the driving wheel and inserted into the body of the surgical patient performs predetermined operations, whereby the surgery is carried out. The driving wheel is formed in a disc shape and may be clutched to the actuator to receive the driving force. In addition, the number of driving wheels may be determined according to the number of objects to be controlled; since such driving wheels are obvious to those skilled in the art of surgical instruments, a detailed description thereof will be omitted.
The superimposed image processing unit 455 outputs only part of the modeling image so that the actually captured endoscope image and the modeling image do not overlap. That is, when the endoscope image includes part of the shape of a surgical tool and the modeling image implementation unit 453 outputs the corresponding modeled surgical tool, the superimposed image processing unit 455 identifies the region where the actual surgical tool image of the endoscope image and the modeled surgical tool image overlap, and deletes the overlapping portion from the modeled surgical tool image so that the two images can be matched with each other. In other words, the superimposed image processing unit 455 may process the overlapping region by removing, from the modeled surgical tool image, the region in which the modeled surgical tool image and the actual surgical tool image overlap.
For example, if the total length of the actual surgical tool is 20 cm and, considering the characteristic values (field of view (FOV), magnification, viewpoint, viewing depth, etc., as well as the type, direction, depth, and degree of bending of the actual surgical tool), the actual surgical tool image in the endoscope image is 3 cm long, the superimposed image processing unit 455 uses the characteristic values to include in the modeling image, and output, the portion of the modeled surgical tool image that does not appear in the endoscope image.
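One way such overlap processing could be realized is sketched below in Python/NumPy: given boolean masks for the actual tool (from the endoscope image) and the modeled tool, already registered on a shared canvas, the overlap is deleted from the modeled tool before compositing, so only the portion of the tool outside the endoscope's view is drawn. The upstream registration and mask-extraction steps are assumed to exist; this is a sketch, not the disclosed implementation.

```python
import numpy as np

def composite_model_around_endoscope(endo_rgb, endo_tool_mask,
                                     model_rgb, model_tool_mask):
    """Delete the overlap between the modeled tool and the actual tool,
    then draw only the remaining modeled portion onto the canvas.
    Masks are boolean arrays of the same shape as the canvas."""
    overlap = model_tool_mask & endo_tool_mask
    keep = model_tool_mask & ~overlap   # modeled tool minus the overlap
    out = endo_rgb.copy()
    out[keep] = model_rgb[keep]         # e.g. the unseen 17 cm of the tool
    return out
```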
FIG. 59 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention. The description focuses on the differences from what has been described above.
In step R131, modeling images of the surgical target and/or the surgical tools are generated and stored in advance. The modeling images may be produced by computer simulation, and the present embodiment may also generate them using a separate modeling image generation apparatus.
In step R132, the characteristic value calculation unit 1351 calculates the characteristic values of the endoscope image. As described above, the characteristic value calculation unit 1351 calculates the characteristic values using the image input and provided by the laparoscope 5 of the slave robot 2 and/or coordinate information on the position of the actual surgical tool coupled to the robot arm 3. The characteristic values may include the field of view (FOV), magnification, viewpoint (for example, viewing direction), and viewing depth of the laparoscope 5, as well as the type, direction, depth, and degree of bending of the actual surgical tool.
In step R133, the image matching unit 450 extracts the modeling image corresponding to the endoscope image, processes the overlapping region, registers the two images with each other, and outputs the result to the screen display unit 320. Here, the output timing may of course be set in various ways: for example, the endoscope image and the modeling image may first be output at the same time, or the modeling image may be output together with the endoscope image after the latter has been output.
FIG. 61 is a conceptual diagram illustrating a master interface of a surgical robot according to an embodiment of the present invention. Referring to FIG. 61, the master interface 4 may include a monitor unit 6, handles 10, a monitor driving means 12, and a moving groove 13. The description focuses on the differences from what has been described above.
As described above, the present embodiment rotates and moves the monitor unit 6 of the master interface 4 in accordance with the variously changing viewpoint of the endoscope 5, thereby allowing the user to experience the surgery with a more vivid sense of reality.
The monitor driving means 12 has one end coupled to the monitor unit 6 and the other end coupled to the main body of the master interface 4, and rotates and moves the monitor unit 6 by applying driving force to it. Here, the rotation may include rotation about various axes (X, Y, Z), that is, rotation about the pitch, roll, and yaw axes. Referring to FIG. 61, rotation A about the yaw axis is shown.
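For reference, rotation about the pitch, roll, and yaw axes can be expressed as standard 3x3 rotation matrices. The axis-to-frame assignment below (pitch = x, yaw = y, roll = z) is an assumed convention for the monitor frame, not something specified in the disclosure.

```python
import numpy as np

def monitor_rotation(axis, deg):
    """3x3 rotation about an assumed monitor frame: pitch = x (horizontal),
    yaw = y (vertical), roll = z (viewing axis)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    if axis == "pitch":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "yaw":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    if axis == "roll":
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    raise ValueError(f"unknown axis: {axis}")

R = monitor_rotation("yaw", 15.0)  # e.g. a yaw rotation like A in FIG. 61
```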
In addition, the monitor driving means 12 can move the monitor unit 6 in accordance with the viewpoint of the endoscope 5 by traveling (direction B) along the moving groove 13 formed in the main body of the master interface 4 below the monitor unit 6. The moving groove 13 is formed concave toward the user, so that the front of the monitor unit 6 always faces the user as the monitor unit 6 moves along the moving groove 13.
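A minimal sketch of this groove geometry, under the assumption that the groove is a circular arc centered on the user: the monitor's position is a point on the arc, and its front direction is the unit vector back toward the user, so the front always faces the user as the text describes. The radius and user position are illustrative values only.

```python
import numpy as np

def monitor_pose_on_groove(theta_deg, radius_m=0.8, user_xy=(0.0, 0.0)):
    """Position on a circular groove concave toward the user; the returned
    'front' unit vector always points back at the user."""
    t = np.radians(theta_deg)
    ux, uy = user_xy
    pos = np.array([ux + radius_m * np.sin(t), uy + radius_m * np.cos(t)])
    front = np.array([ux, uy]) - pos
    front /= np.linalg.norm(front)   # unit vector from monitor to user
    return pos, front

pos, front = monitor_pose_on_groove(20.0)  # 20 degrees along the groove
```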
FIG. 62 is a block diagram of a surgical robot according to an embodiment of the present invention. Referring to FIG. 62, a master robot 1 including an image input unit 310, a screen display unit 320, an arm manipulation unit 330, a manipulation signal generation unit 340, a control unit 370, a screen drive control unit 2380, and a screen drive unit 390 is shown, together with a slave robot 2 including a robot arm 3 and an endoscope 5. The description focuses on the differences from what has been described above.
The screen drive unit 390 is a means for rotating and moving the screen display unit 320 and may include, for example, a motor and a support means for the monitor unit 6. The screen drive control unit 2380 may control the screen drive unit 390 so that it rotates and moves the screen display unit 320 in accordance with the viewpoint of the endoscope 5.
Referring to FIG. 63, the screen drive control unit 2380 may include an endoscope viewpoint tracking unit 381 that tracks the viewpoint information of the endoscope 5 in accordance with the movement and rotation of the endoscope 5, an image movement information extraction unit 383 that extracts movement information of the endoscope image using the viewpoint information of the endoscope 5, and a drive information generation unit 385 that generates motion information (screen drive information) for the screen display unit 320 using the movement information. The screen drive unit 390 may drive the screen display unit 320 as described above using the motion information generated by the drive information generation unit 385.
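The three-stage flow of FIG. 63 could be sketched as follows (illustrative Python only; the state and record formats are assumptions): viewpoint tracking (corresponding to 381) folds incremental endoscope motion into a viewpoint state, movement-information extraction (383) differences two viewpoint states, and drive-information generation (385) maps the result to screen drive commands.

```python
import numpy as np

def track_viewpoint(view, d_pos, d_yaw_deg):
    """381: fold an incremental endoscope motion into its viewpoint state."""
    return {"pos": view["pos"] + np.asarray(d_pos, dtype=float),
            "yaw_deg": view["yaw_deg"] + d_yaw_deg}

def extract_image_motion(prev_view, curr_view):
    """383: movement information of the endoscope image implied by the
    viewpoint change."""
    return {"shift": curr_view["pos"] - prev_view["pos"],
            "rotate_deg": curr_view["yaw_deg"] - prev_view["yaw_deg"]}

def generate_drive_info(motion, gain=1.0):
    """385: map image motion to drive commands for the screen display unit;
    gain is an assumed tuning parameter."""
    return {"move": gain * motion["shift"],
            "rotate_yaw_deg": gain * motion["rotate_deg"]}

prev = {"pos": np.zeros(3), "yaw_deg": 0.0}
curr = track_viewpoint(prev, d_pos=[0.01, 0.0, 0.0], d_yaw_deg=5.0)
drive = generate_drive_info(extract_image_motion(prev, curr))
```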
In addition, according to another embodiment, the screen drive unit 390 may be driven by a user's command. For example, the screen drive control unit 2380 may be replaced with a user interface, for example, a user-operable switch (e.g., a foot switch); in this case, the rotation and movement of the screen drive unit 390 may be controlled by the user's operation.
The motion of the screen drive unit 390 can also be controlled via a touch screen. For example, the screen display unit 320 may be implemented as a touch screen, and when the user touches the screen display unit 320 with a finger or the like and drags it in a predetermined direction, the screen display unit 320 may rotate and move accordingly. In addition, the motion of the screen display unit 320 may be controlled using a rotation/movement signal generated by tracking the user's eyes, a rotation/movement signal generated according to the movement direction of a face contact unit, or a rotation/movement signal generated according to a voice command.
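As an illustrative sketch of the touch-screen variant, a drag vector can be mapped linearly to rotation commands for the screen drive unit. The sensitivity constant deg_per_px and the sign convention are assumptions, not values from the disclosure.

```python
def drag_to_screen_motion(start_xy, end_xy, deg_per_px=0.1):
    """Map a touch-screen drag to a rotation command for the screen
    drive unit; deg_per_px is an illustrative sensitivity."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    return {"yaw_deg": dx * deg_per_px,      # horizontal drag turns the screen
            "pitch_deg": -dy * deg_per_px}   # vertical drag tilts it

cmd = drag_to_screen_motion((100, 200), (180, 170))
```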
FIG. 64 is a flowchart of an immersive surgical image processing method according to an embodiment of the present invention. Each of the steps described below may be executed with the screen drive control unit 2380 as the main agent.
In step R181, the viewpoint information of the endoscope 5, that is, information about the viewpoint from which the endoscope 5 looks, is tracked in accordance with the movement and rotation of the endoscope 5.
In step R182, movement information of the endoscope image, corresponding to the amount of positional change of the subject captured in the endoscope image, is extracted using the viewpoint information of the endoscope 5.
In step R183, the above-described screen drive information is generated using the viewpoint information of the endoscope 5 and/or the extracted movement information. That is, once the viewpoint information of the endoscope 5 and the movement information of the endoscope image have been specified as described above, information for moving and rotating the screen display unit 320 is generated from this movement information.
In step R184, the screen display unit 320 is moved and rotated in accordance with the screen drive information.
FIG. 65 is a conceptual diagram illustrating a master interface of a surgical robot according to an embodiment of the present invention. Referring to FIG. 65, a dome screen 191, a projector 192, a work table 193, a first endoscope image 621, and a second endoscope image 623 are shown.
As described above, the present embodiment implements the function of outputting an endoscope image to a specific region of the screen display unit 320 using the dome screen 191 and the projector 192, so that the user can check the surgical situation on a wide screen more quickly and conveniently.
The projector 192 projects the endoscope image onto the dome screen 191. Here, the endoscope image may be a spherical image, that is, one whose projected front surface is spherical. "Spherical" here does not mean only a shape that is mathematically a strict sphere; it may include various shapes such as an ellipsoid, a shape with a curved cross section, and a partial sphere.
The dome screen 191 has an open front end and a hemispherical inner dome surface that reflects the image projected from the projector 192. The dome screen 191 may be sized for comfortable viewing, for example, with a diameter of about 1 m to 2 m. The inner dome surface of the dome screen 191 may be faceted block by block or may have a hemispherical shape. In addition, the dome screen 191 may be formed axially symmetric about its central axis, and the user's line of sight may be located on the central axis of the dome screen 191.
The projector 192 may be located between the user performing the surgery and the dome screen 191 so that the projected image is not blocked by the user. In addition, to avoid blocking the projected image during the user's work and to secure space on the work table, the projector 192 may be attached to the underside of the work table 193. The inner dome surface may be formed of, or coated with, a material with high reflectivity.
When such a dome screen 191 and projector 192 are used, the first endoscope image 621 and the second endoscope image 623 can, as described above, be projected onto specific regions of the dome screen 191 corresponding to the various viewpoints of the endoscope 5.
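One plausible way to place the two endoscope images on the dome, assuming the projector uses an equidistant-fisheye projection aligned with the dome's central axis (a modeling assumption, not stated in the disclosure), is to map each target dome direction (azimuth, elevation) to projector pixel coordinates:

```python
import numpy as np

def dome_target_to_projector_px(azimuth_deg, elevation_deg,
                                img_size=(1920, 1080), fov_deg=180.0):
    """Map a desired dome direction to projector pixel coordinates under an
    equidistant-fisheye model; all parameter values are illustrative."""
    az, el = np.radians([azimuth_deg, elevation_deg])
    theta = np.arccos(np.cos(el) * np.cos(az))   # angle off the dome axis
    phi = np.arctan2(np.sin(el), np.sin(az) * np.cos(el))
    r = theta / np.radians(fov_deg / 2.0)        # normalized radial distance
    cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
    scale = min(cx, cy)
    return (cx + r * scale * np.cos(phi), cy - r * scale * np.sin(phi))

# Place the first and second endoscope images at two dome regions.
p1 = dome_target_to_projector_px(-30.0, 10.0)
p2 = dome_target_to_projector_px(+30.0, 10.0)
```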
Detailed descriptions of the specific specifications of the immersive surgical image processing apparatus according to embodiments of the present invention, of common platform technologies such as embedded systems and operating systems, of interface standardization technologies such as communication protocols and I/O interfaces, and of component standardization technologies for actuators, batteries, cameras, sensors, and the like are omitted, since they are obvious to those of ordinary skill in the art to which the present invention pertains.
The immersive surgical image processing method according to the present invention may be implemented in the form of program instructions executable by various computer means and recorded on a computer-readable medium. That is, the recording medium may be a computer-readable recording medium on which a program for causing a computer to execute each of the steps described above is recorded.
The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the present invention, or they may be of the kind well known and available to those skilled in the computer software arts. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
The medium may also be a transmission medium, such as an optical or metal line or a waveguide, including a carrier wave that transmits signals specifying program instructions, data structures, and the like. Examples of program instructions include not only machine code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules in order to perform the operations of the present invention.
Those of ordinary skill in the art will understand that the present invention may be variously modified and changed without departing from the spirit and scope of the present invention as set forth in the claims below.

Claims (10)

  1. A surgical image processing apparatus comprising:
    an image input unit that receives an endoscope image provided from a surgical endoscope;
    an image storage unit that stores a modeling image relating to a surgical tool for operating on a surgical target imaged by the surgical endoscope;
    an image matching unit that registers the endoscope image and the modeling image with each other to generate an output image; and
    a screen display unit that outputs the output image including the endoscope image and the modeling image.
  2. The surgical image processing apparatus of claim 1, wherein the surgical endoscope is at least one of a laparoscope, a thoracoscope, an arthroscope, a rhinoscope, a cystoscope, a proctoscope, a duodenoscope, a mediastinoscope, and a cardioscope.
  3. The surgical image processing apparatus of claim 1, wherein the image matching unit generates the output image by registering an actual surgical tool image included in the endoscope image with a modeled surgical tool image included in the modeling image.
  4. The surgical image processing apparatus of claim 3, wherein the image matching unit further comprises:
    a characteristic value calculation unit that calculates characteristic values using at least one of the endoscope image and position coordinate information of an actual surgical tool coupled to at least one robot arm; and
    a modeling image implementation unit that implements a modeling image corresponding to the characteristic values calculated by the characteristic value calculation unit.
  5. The surgical image processing apparatus of claim 3, further comprising a superimposed image processing unit that removes, from the modeled surgical tool image, an overlapping region between the modeled surgical tool image and the actual surgical tool image.
  6. The surgical image processing apparatus of claim 3, wherein the position of the modeled surgical tool image shown in the modeling image is set using manipulation information of the surgical tool.
  7. The surgical image processing apparatus of claim 1, wherein the screen display unit outputs the endoscope image to an arbitrary region of the output image and outputs the modeling image to a peripheral region of the output image.
  8. The surgical image processing apparatus of claim 1, wherein the modeling image is an image of the surgical tool captured at a specific point in time before the start of the surgery.
  9. The surgical image processing apparatus of claim 1, further comprising a camera that generates a camera image by imaging the outside of the surgical target during the surgery.
  10. The surgical image processing apparatus of claim 9, wherein the image matching unit generates the output image by registering the endoscope image, the modeling image, the camera image, and a combination thereof with one another.
PCT/KR2010/006662 2009-10-01 2010-09-30 Surgical image processing device, image-processing method, laparoscopic manipulation method, surgical robot system and an operation-limiting method therefor WO2011040769A2 (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
KR10-2009-0094124 2009-10-01
KR1020090094124A KR101598774B1 (en) 2009-10-01 2009-10-01 Apparatus and Method for processing surgical image
KR20090104379 2009-10-30
KR10-2009-0104379 2009-10-30
KR20090105861 2009-11-04
KR10-2009-0105861 2009-11-04
KR10-2009-0114651 2009-11-25
KR1020090114651A KR101683057B1 (en) 2009-10-30 2009-11-25 Surgical robot system and motion restriction control method thereof
KR10-2010-0034033 2010-04-13
KR20100034033 2010-04-13

Publications (2)

Publication Number Publication Date
WO2011040769A2 true WO2011040769A2 (en) 2011-04-07
WO2011040769A3 WO2011040769A3 (en) 2011-09-15

Family

ID=43826794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/006662 WO2011040769A2 (en) 2009-10-01 2010-09-30 Surgical image processing device, image-processing method, laparoscopic manipulation method, surgical robot system and an operation-limiting method therefor

Country Status (1)

Country Link
WO (1) WO2011040769A2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020035458A1 (en) * 2000-09-20 2002-03-21 Chang-Hun Kim Method and system for virtual surgery
US20070225553A1 (en) * 2003-10-21 2007-09-27 The Board Of Trustees Of The Leland Stanford Junio Systems and Methods for Intraoperative Targeting
KR20080068640A (en) * 2005-10-20 2008-07-23 인튜어티브 서지컬 인코포레이티드 Auxiliary image display and manipulation on a computer display in a medical robotic system
KR20090060908A (en) * 2007-12-10 2009-06-15 고려대학교 산학협력단 Display system for displaying interior of the body

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110279427B (en) * 2012-12-10 2024-01-16 直观外科手术操作公司 Collision avoidance during controlled movement of movable arm of image acquisition device and steerable device
CN110279427A (en) * 2012-12-10 2019-09-27 直观外科手术操作公司 Collision avoidance during controlled movement of an image capture device and an actuatable device movable arm
US11540888B2 (en) 2014-05-05 2023-01-03 Vicarious Surgical Inc. Virtual reality surgical device
US10285765B2 (en) 2014-05-05 2019-05-14 Vicarious Surgical Inc. Virtual reality surgical device
US11744660B2 (en) 2014-05-05 2023-09-05 Vicarious Surgical Inc. Virtual reality surgical device
US10842576B2 (en) 2014-05-05 2020-11-24 Vicarious Surgical Inc. Virtual reality surgical device
US11045269B2 (en) 2014-05-05 2021-06-29 Vicarious Surgical Inc. Virtual reality surgical device
WO2016060308A1 (en) * 2014-10-17 2016-04-21 서울대학교병원 Needle insertion type robot apparatus for interventional surgery
US10799308B2 (en) 2017-02-09 2020-10-13 Vicarious Surgical Inc. Virtual reality surgical tools system
US11690692B2 (en) 2017-02-09 2023-07-04 Vicarious Surgical Inc. Virtual reality surgical tools system
CN110740677A (en) * 2017-06-21 2020-01-31 索尼公司 Surgical system and surgical image capturing device
US11503980B2 (en) 2017-06-21 2022-11-22 Sony Corporation Surgical system and surgical imaging device
US11258964B2 (en) 2017-08-16 2022-02-22 Covidien Lp Synthesizing spatially-aware transitions between multiple camera viewpoints during minimally invasive surgery
US10834332B2 (en) 2017-08-16 2020-11-10 Covidien Lp Synthesizing spatially-aware transitions between multiple camera viewpoints during minimally invasive surgery
US11583342B2 (en) 2017-09-14 2023-02-21 Vicarious Surgical Inc. Virtual reality surgical camera system
US11911116B2 (en) 2017-09-14 2024-02-27 Vicarious Surgical Inc. Virtual reality surgical camera system
US11957415B2 (en) 2018-02-20 2024-04-16 Hutom Co., Ltd. Method and device for optimizing surgery
CN109498162B (en) * 2018-12-20 2023-11-03 深圳市精锋医疗科技股份有限公司 Main operation table for improving immersion sense and surgical robot
CN109498162A (en) * 2018-12-20 2019-03-22 深圳市精锋医疗科技有限公司 Promote the master operating station and operating robot of feeling of immersion

Also Published As

Publication number Publication date
WO2011040769A3 (en) 2011-09-15

Similar Documents

Publication Publication Date Title
WO2011040769A2 (en) Surgical image processing device, image-processing method, laparoscopic manipulation method, surgical robot system and an operation-limiting method therefor
WO2012060586A2 (en) Surgical robot system, and a laparoscope manipulation method and a body-sensing surgical image processing device and method therefor
WO2010110560A2 (en) Surgical robot system using augmented reality, and method for controlling same
WO2019164275A1 (en) Method and device for recognizing position of surgical instrument and camera
WO2010093152A2 (en) Surgical robot system, and method for controlling same
KR101374709B1 (en) Surgical tool position and identification indicator displayed in a boundary area of a computer display screen
KR102117273B1 (en) Surgical robot system and method for controlling the same
US7907166B2 (en) Stereo telestration for robotic surgery
JP4156606B2 (en) Remote operation method and system with a sense of reality
WO2016060475A1 (en) Method of providing information using plurality of displays and ultrasound apparatus therefor
WO2017069324A1 (en) Mobile terminal and control method therefor
WO2014200289A2 (en) Apparatus and method for providing medical information
EP3851024B1 (en) Medical observation system, medical observation device and medical observation method
WO2014208969A1 (en) Method and apparatus for providing information related to location of target object on medical apparatus
WO2016043411A1 (en) X-ray apparatus and method of scanning the same
US20230126611A1 (en) Information processing apparatus, information processing system, and information processing method
WO2014200265A1 (en) Method and apparatus for providing medical information
JP6773609B2 (en) Remote support system, information presentation system, display system and surgery support system
WO2023013832A1 (en) Surgical robot control system using headset-based contactless hand-tracking technology
WO2017169824A1 (en) Control device, method and surgery system
WO2023163572A1 (en) Surgical instrument and surgical robot comprising same
WO2022231337A1 (en) Multi-joint type surgical device
WO2023085879A1 (en) Surgical robot arm
WO2024204878A1 (en) Method and system for tracking marker in augmented reality
WO2024014910A1 (en) Motion compensation device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10820842

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N SENT ON 10.7.2012)

122 Ep: pct application non-entry in european phase

Ref document number: 10820842

Country of ref document: EP

Kind code of ref document: A2