CN115486939A - Method, device and system for an orthopedic surgical robot to intelligently sense anatomical structures - Google Patents
Method, device and system for an orthopedic surgical robot to intelligently sense anatomical structures
- Publication number
- CN115486939A (application CN202211057487.8A)
- Authority
- CN
- China
- Prior art keywords
- mechanical arm
- trolley
- soft tissue
- network model
- orthopedic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/70—Manipulators specially adapted for use in surgery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Biomedical Technology (AREA)
- Robotics (AREA)
- Veterinary Medicine (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Manipulator (AREA)
Abstract
The invention provides a method, a device and a system for an orthopedic surgical robot to intelligently sense an anatomical structure. The method comprises: after the mechanical arm trolley is driven to a target position away from the operating table and the mechanical arm of the trolley is adjusted to a target angle with respect to the operating table, controlling the camera device on the trolley to photograph the operating table to obtain a human joint image; inputting the human joint image into a trained soft tissue identification network model for soft tissue feature extraction, to obtain preliminary soft tissue features output by that model; and inputting the preliminary soft tissue features into a trained feature classification network model for classification, to obtain the soft tissue region output by that model. The method, device and system can automatically and accurately identify soft tissue and help surgeons accurately mark tissue blood vessels.
Description
Technical Field
The invention relates to the technical field of robots, and in particular to a method, a device and a system for an orthopedic surgical robot to intelligently sense an anatomical structure.
Background
In surgical practice, a surgeon needs to accurately mark tissue blood vessels and locate tissue edges in order to excise tissue accurately and completely. At present this process generally depends on the surgeon's subjective clinical experience, so tissue may not be excised accurately and completely, normal tissue may be cut, and iatrogenic injury may result, leaving the patient's interests unprotected. A method for automatically and accurately identifying soft tissue, to help surgeons accurately mark tissue blood vessels, is therefore needed.
Disclosure of Invention
The invention provides a method, a device and a system for an orthopedic surgical robot to intelligently sense an anatomical structure, which can automatically and accurately identify soft tissue and help surgeons accurately mark tissue blood vessels.
The invention provides a method for an orthopedic surgical robot to intelligently sense an anatomical structure, comprising the following steps:
after driving the mechanical arm trolley to a target position away from the operating table and adjusting the mechanical arm of the trolley to a target angle with respect to the operating table, controlling the camera device on the trolley to photograph the operating table to obtain a human joint image;
inputting the human joint image into a trained soft tissue identification network model for soft tissue feature extraction, to obtain preliminary soft tissue features output by the soft tissue identification network model;
and inputting the preliminary soft tissue features into a trained feature classification network model for classification, to obtain a soft tissue region output by the feature classification network model.
According to the method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention, the soft tissue identification network model is obtained by training a UNet network model on preset human joint image samples and their corresponding preliminary soft tissue features;
the feature classification network model is obtained by training a PointRend network model on preset preliminary soft tissue feature samples and the soft tissue regions corresponding to those samples.
The method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention further comprises the following steps:
before the mechanical arm trolley is driven to move to the target position and the mechanical arm is adjusted to the target angle, acquiring pose information between the mechanical arm trolley and the operating table;
determining the positioning parameters of the mechanical arm trolley according to the pose information and a preset pose threshold value;
and driving the mechanical arm trolley to move to the target position according to the positioning parameters of the mechanical arm trolley, and adjusting the angle of the mechanical arm to the target angle.
According to the method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention, acquiring the pose information between the mechanical arm trolley and the operating table comprises:
acquiring the pose information sent by the navigation trolley, wherein the pose information is collected by the navigation trolley by sensing a first reference device on the mechanical arm trolley and a second reference device on the operating table.
According to the method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention, determining the positioning parameters of the mechanical arm trolley according to the pose information and a preset pose threshold comprises:
determining the coordinates of the mechanical arm trolley and the coordinates of the operating table in the same coordinate system based on the pose information;
and determining the positioning parameters of the mechanical arm trolley based on the coordinate difference value between the mechanical arm trolley coordinate and the operating table coordinate and the preset pose threshold value.
The method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention further comprises:
and under the condition that the coordinate difference value is smaller than the preset pose threshold value, determining that the mechanical arm trolley is located at the target position and the mechanical arm is located at the target angle.
The method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention further comprises:
after determining that the mechanical arm trolley is located at the target position and the mechanical arm is at the target angle, and before controlling the camera device on the mechanical arm trolley to photograph the operating table, locking the mechanical arm trolley and the mechanical arm.
The invention also provides a device for an orthopedic surgical robot to intelligently sense an anatomical structure, comprising:
an acquisition module, configured to, after the mechanical arm trolley is driven to a target position away from the operating table and the mechanical arm of the trolley is adjusted to a target angle with respect to the operating table, control the camera device on the mechanical arm trolley to photograph the operating table so as to acquire a human joint image;
an extraction module, configured to input the human joint image into a trained soft tissue identification network model for soft tissue feature extraction, to obtain preliminary soft tissue features output by the soft tissue identification network model;
and a classification module, configured to input the preliminary soft tissue features into a trained feature classification network model for classification, to obtain a soft tissue region output by the feature classification network model.
The invention also provides a system for an orthopedic surgical robot to intelligently sense an anatomical structure, comprising a mechanical arm trolley and a master control trolley;
the master control trolley is communicatively connected with the mechanical arm trolley and is configured to execute the method for an orthopedic surgical robot to intelligently sense an anatomical structure described above.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for an orthopedic surgical robot to intelligently sense an anatomical structure described above.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for an orthopedic surgical robot to intelligently sense an anatomical structure described above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the method for an orthopedic surgical robot to intelligently sense an anatomical structure described above.
According to the method, device and system provided by the invention, the camera device on the mechanical arm trolley is controlled to photograph the operating table to obtain a human joint image; preliminary soft tissue features are then extracted from the human joint image by the soft tissue identification network model, and those features are classified by the feature classification network model to obtain the soft tissue region. The whole process, from acquiring the human joint image to extracting the soft tissue region, runs automatically, so soft tissue can be identified automatically and accurately, helping the surgeon accurately mark tissue blood vessels.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic flow chart of the method for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the present invention;
FIG. 2 is a schematic block diagram of the orthopedic robot provided by the present invention;
FIG. 3 is a schematic structural diagram of the orthopedic robot provided by the present invention;
FIG. 4 is a schematic view of the installation of a navigation module on the navigation trolley provided by the present invention;
FIG. 5 is a schematic representation of the soft tissue identification results provided by the present invention;
FIG. 6 is a schematic diagram of an image capturing apparatus provided by the present invention for capturing an image of a joint of a human body;
FIG. 7 is a flow chart of the processing of the human joint image provided by the present invention;
fig. 8 is a schematic structural diagram of the device for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention;
fig. 9 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method, device and system for an orthopedic surgical robot to intelligently sense an anatomical structure provided by the invention are described below with reference to figs. 1-9.
As shown in fig. 1, the present invention provides a method for an orthopedic surgical robot to intelligently sense an anatomical structure, comprising:
It can be understood that, as shown in figs. 2 and 3, the orthopedic robot comprises a master control trolley 210, a robotic arm trolley 220 and a navigation trolley 230, and the method provided by the invention may be performed by the master control trolley 210.
The master control trolley 210 contains a control processor, input/output communication cables, power lines and a display. The control processor runs the algorithm software that executes the method provided by the invention; it is communicatively connected with the robotic arm trolley 220 through the input/output communication cables, and the master control trolley 210 can show the positions of the robotic arm trolley 220 and the operating table 240 on the display.
The robotic arm trolley 220 is provided with a magnetic induction sensor for autonomously recognizing a route, and with an intelligent motor drive and steering mechanism.
In step 110, the camera device on the robotic arm trolley 220 photographs the operating table 240, and the resulting human joint image is sent to the master control trolley 210.
In step 120, the human joint image is input into the trained soft tissue identification network model for soft tissue feature extraction, to obtain the preliminary soft tissue features output by the soft tissue identification network model.
It can be understood that the soft tissue identification network model may be first trained by the cloud server and then sent to the main control trolley 210, and the main control trolley 210 identifies the human joint image and extracts the soft tissue features. Alternatively, the training of the soft tissue recognition network model is done locally at the master trolley 210.
In step 130, the preliminary soft tissue features are input into the trained feature classification network model for classification, to obtain the soft tissue region output by the feature classification network model.
It can be understood that the feature classification network model may be sent to the main control cart 210 after the cloud server training is completed, or the feature classification network model training may be completed locally on the main control cart 210.
In some embodiments, the soft tissue identification network model is obtained by training a UNet network model based on a preset human body joint image sample and a corresponding soft tissue preliminary feature;
the feature classification network model is obtained by training a PointRend network model based on a preset soft tissue preliminary feature sample and a soft tissue area corresponding to the soft tissue preliminary feature sample.
It can be understood that the UNet network model mainly comprises a feature extraction part and a feature restoration part. The feature extraction part consists of basic convolution modules and pooling operations, where each convolution module comprises convolution, batch normalization and activation; the feature restoration part consists of convolution modules and up-sampling operations, where the up-sampling uses bilinear interpolation to restore the feature map to its original size. The UNet network model may also be an Attention UNet network model.
The PointRend network model mainly comprises convolutional layers and fully connected layers. It re-classifies the boundary segmented by the UNet network model, producing a cleaner boundary segmentation so that the soft tissue region can be identified more accurately.
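The UNet stage described above can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the patent's actual architecture: the channel counts, depth and class count are assumptions, and the PointRend refinement stage is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Basic UNet module: convolution, batch normalization, activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class TinyUNet(nn.Module):
    """Two-level UNet: feature extraction (conv + pooling) followed by
    feature restoration (conv + bilinear up-sampling back to the input
    resolution), as outlined in the text."""
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, 16)
        self.enc2 = ConvBlock(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.dec = ConvBlock(32 + 16, 16)
        self.head = nn.Conv2d(16, n_classes, 1)
    def forward(self, x):
        f1 = self.enc1(x)                       # full resolution
        f2 = self.enc2(self.pool(f1))           # half resolution
        up = F.interpolate(f2, scale_factor=2,
                           mode="bilinear", align_corners=False)
        out = self.dec(torch.cat([up, f1], dim=1))  # skip connection
        return self.head(out)                   # coarse per-pixel logits

coarse = TinyUNet()(torch.randn(1, 3, 64, 64))
print(coarse.shape)  # torch.Size([1, 2, 64, 64])
```

The coarse logits produced here would be handed to the PointRend stage for boundary re-classification.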
In some embodiments, the method for an orthopedic surgical robot to intelligently sense an anatomical structure further comprises:
acquiring pose information between the robotic arm trolley 220 and the operating table 240 before driving the trolley to the target position and adjusting the robotic arm to the target angle;
determining the positioning parameters of the robotic arm trolley 220 according to the pose information and a preset pose threshold;
and driving the robotic arm trolley 220 to the target position according to its positioning parameters, and adjusting the robotic arm to the target angle.
It can be understood that the pose information between the robotic arm trolley 220 and the operating table 240 may include the distance between the trolley and the operating table, the orientation of the trolley relative to the operating table, the distance between the robotic arm of the trolley and the operating table, and the orientation of the robotic arm relative to the operating table.
The pose information is compared with the preset pose threshold, the difference between the two is determined, and the positioning parameters of the robotic arm trolley 220 are determined from that difference.
Further, the pose threshold includes a first distance threshold, a second distance threshold, a first orientation threshold, and a second orientation threshold.
The distance between the robotic arm trolley 220 and the operating table 240 is compared with the first distance threshold, and their difference is taken as a first distance difference; the distance between the robotic arm and the operating table 240 is compared with the second distance threshold, and their difference is taken as a second distance difference.
The orientation of the robotic arm trolley 220 relative to the operating table 240 is compared with the first orientation threshold, and their difference is taken as a first orientation difference; the orientation of the robotic arm relative to the operating table 240 is compared with the second orientation threshold, and their difference is taken as a second orientation difference.
A first positioning parameter may then be determined from the first distance difference and the first orientation difference; it controls the travel trajectory of the robotic arm trolley. A second positioning parameter is determined from the second distance difference and the second orientation difference; it controls the travel trajectory of the robotic arm.
The robotic arm trolley 220 is driven to the target position away from the operating table 240 based on the first positioning parameter, and after the trolley reaches the target position, the robotic arm is adjusted to the target angle with the operating table 240 based on the second positioning parameter.
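The threshold comparison just described can be sketched as follows. The representation of a pose as four scalars, and the field names and units, are illustrative assumptions; the patent does not specify a data format.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    cart_distance: float  # trolley -> operating table distance (m), assumed
    cart_angle: float     # trolley orientation relative to the table (deg)
    arm_distance: float   # robotic arm -> operating table distance (m)
    arm_angle: float      # arm orientation relative to the table (deg)

def positioning_parameters(measured: Pose, preset: Pose):
    """Compare each measured value with its preset threshold and return
    (first, second) positioning parameters as (distance, orientation)
    difference pairs: the first drives the trolley, the second the arm."""
    first = (measured.cart_distance - preset.cart_distance,
             measured.cart_angle - preset.cart_angle)
    second = (measured.arm_distance - preset.arm_distance,
              measured.arm_angle - preset.arm_angle)
    return first, second

first, second = positioning_parameters(
    Pose(1.5, 92.0, 0.8, 47.0), Pose(1.2, 90.0, 0.6, 45.0))
# first drives the trolley 0.3 m / 2 deg; second drives the arm 0.2 m / 2 deg
```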
In some embodiments of the method for an orthopedic surgical robot to intelligently sense an anatomical structure, acquiring the pose information between the robotic arm trolley 220 and the operating table 240 comprises:
acquiring the pose information sent by the navigation trolley 230, wherein the pose information is collected by the navigation trolley 230 by sensing a first reference device on the robotic arm trolley 220 and a second reference device 250 on the operating table 240.
It can be understood that the robotic arm trolley 220 and the navigation trolley 230 are both communicatively connected with the master control trolley 210; each connection may be made through a communication cable or wirelessly.
As shown in fig. 4, the navigation trolley 230 includes a structural frame 232 and a navigation module 231, the navigation module 231 is fixed on the structural frame 232, and the navigation module 231 is used for collecting pose information between the robotic trolley 220 and the operating table 240.
The navigation trolley 230 collects pose information between the robot trolley 220 and the operating bed 240 and then transmits the pose information to the main control trolley 210.
The navigation trolley 230 is provided with a tracking system, which collects the feedback signals of the first reference device and the second reference device 250 and determines the pose information between the robotic arm trolley 220 and the operating table 240 from those signals.
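One way the two tracked reference devices might be combined into a relative pose is sketched below. The 2-D simplification, the coordinate frame and the function name are assumptions for illustration only; the patent does not disclose the tracker's mathematics.

```python
import math

def relative_pose(ref1_xy, ref2_xy):
    """ref1: first reference device (on the trolley); ref2: second
    reference device (on the operating table). Both are positions
    reported by the navigation module in the tracker frame (assumed
    planar here). Returns the distance and bearing (degrees) of the
    trolley relative to the table."""
    dx = ref1_xy[0] - ref2_xy[0]
    dy = ref1_xy[1] - ref2_xy[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

dist, bearing = relative_pose((2.0, 1.0), (0.0, 1.0))
# dist == 2.0, bearing == 0.0
```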
In some embodiments, the first and second reference devices 250 are both optical feedback components.
It can be understood that the tracking system on the navigation trolley 230 is an optical tracking system. It feeds the pose information between the robotic arm trolley 220 and the operating table 240 back to the master control trolley 210 through a signal cable, and the master control trolley 210 repeats the comparison calculation until the robotic arm trolley 220 and the robotic arm coincide with the preset pose.
In some embodiments, determining the positioning parameters of the robotic arm trolley 220 according to the pose information and the preset pose threshold comprises:
determining the coordinates of the mechanical arm trolley 220 and the coordinates of the operating table 240 in the same coordinate system based on the pose information;
determining a positioning parameter of the mechanical arm trolley 220 based on a coordinate difference value between the coordinate of the mechanical arm trolley 220 and the coordinate of the operating table 240 and the preset pose threshold value.
It can be understood that the pose information between the robotic arm trolley 220 and the operating table 240 can be converted into trolley coordinates and operating table coordinates in the robotic arm coordinate system, whose origin may be at the center of the robotic arm, with the horizontal axis along the length direction of the robotic arm and the longitudinal axis perpendicular to that direction.
In the robotic arm coordinate system, the positioning parameter of the trolley along the horizontal axis is determined from the difference between the horizontal coordinate of the trolley and that of the operating table 240 and a first preset pose threshold; the trolley's motion along the horizontal axis is then controlled by this parameter until the difference between the two horizontal coordinates is smaller than the first preset pose threshold.
Likewise, the positioning parameter of the trolley along the longitudinal axis is determined from the difference between the longitudinal coordinate of the trolley and that of the operating table 240 and a second preset pose threshold; the trolley's motion along the longitudinal axis is controlled by this parameter until the difference between the two longitudinal coordinates is smaller than the second preset pose threshold.
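The per-axis control loop described above can be sketched as follows. The step size, coordinates and thresholds are illustrative assumptions; the step is chosen smaller than the threshold so each loop is guaranteed to terminate.

```python
def drive_to_target(cart_xy, table_xy, thresholds, step=0.01):
    """Step the trolley along each axis of the robotic-arm coordinate
    system until the coordinate difference to the operating table is
    smaller than the corresponding preset pose threshold."""
    x, y = cart_xy
    tx, ty = table_xy
    while abs(x - tx) >= thresholds[0]:   # horizontal-axis parameter
        x += step if x < tx else -step
    while abs(y - ty) >= thresholds[1]:   # longitudinal-axis parameter
        y += step if y < ty else -step
    return x, y

# Hypothetical start/target positions and (first, second) pose thresholds.
x, y = drive_to_target((0.0, 0.0), (1.0, 0.5), (0.02, 0.02))
```

When both loops exit, the coordinate differences are below their thresholds, which is exactly the condition under which the trolley is deemed to be at the target position.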
In some embodiments, the method for an orthopedic surgical robot to intelligently sense an anatomical structure further comprises:
determining, when the coordinate difference is smaller than the preset pose threshold, that the robotic arm trolley 220 is located at the target position and the robotic arm is at the target angle.
It can be understood that when the difference between the horizontal coordinates of the robotic arm trolley 220 and the operating table 240 is smaller than the first preset pose threshold, and the difference between their longitudinal coordinates is smaller than the second preset pose threshold, the robotic arm trolley 220 can be determined to be at the target position and the robotic arm to be at the target angle.
In some embodiments, the method for intelligent perception of anatomical structures by an orthopedic robotic surgical robot further comprises:
after the determination that the mechanical arm trolley 220 is located at the target position and the mechanical arm is located at the target angle, and before the camera shooting equipment on the mechanical table trolley is controlled to align with the operating table 240 for taking a picture, the mechanical arm trolley 220 and the mechanical arm are locked.
It can be understood that after determining that the mechanical arm trolley 220 is located at the target position and the mechanical arm is at the target angle, the mechanical arm trolley 220 and the mechanical arm are locked, that is, fixed in place, so as to prevent the mechanical arm from shaking during subsequent operation.
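The lock-before-capture ordering can be sketched as follows. The class and method names here are hypothetical, not part of the patent; the point is only that image capture is permitted solely while the lock is engaged.

```python
# Illustrative sequencing sketch: the trolley and arm are locked once in
# place, and capture is only allowed while the lock is engaged, so the
# arm cannot shake while the photograph is taken.

class ArmTrolley:
    def __init__(self):
        self.locked = False

    def lock(self):
        # fix the trolley and mechanical arm in place
        self.locked = True

    def capture(self):
        # refuse to photograph unless the trolley and arm are locked
        if not self.locked:
            raise RuntimeError("must lock trolley and arm before photographing")
        return "human-joint-image"

trolley = ArmTrolley()
trolley.lock()
image = trolley.capture()
```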
Fig. 5 shows a schematic diagram of a soft tissue identification result obtained by the method of the present invention, which includes a soft tissue region 510 and a bone region 520.
In other embodiments, as shown in fig. 6, the camera device 221 photographs the patient's leg region 610 on the operating bed 240 to acquire a human body joint image; the leg region 610 has been incised with a scalpel to expose bone and muscle tissue.
In other embodiments, after the human body joint image is acquired, it is processed by the UNet network model and the PointRend network model shown in fig. 7. The UNet network model shown in fig. 7 includes three convolutional layers of 258x128x128, 128x64x128, and 64x32x256. The human body joint image is coarsely segmented by the UNet network model to obtain preliminary soft tissue features, and pixel-level segmentation is then performed by the PointRend network model to obtain the soft tissue region.
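The coarse-then-refine idea behind the UNet + PointRend combination can be illustrated with a dependency-free sketch. This is not the patent's actual networks: the intensity-based score and the thresholds below are invented stand-ins for the coarse segmentation and the point-wise refinement of uncertain pixels.

```python
# Dependency-free sketch of coarse-to-fine segmentation: a coarse pass
# scores every pixel, then only pixels whose score is uncertain are
# re-decided with a sharper per-pixel rule, as PointRend does with its
# point-based refinement head.

def coarse_scores(image):
    """Stand-in for the UNet coarse pass: a per-pixel soft-tissue
    score in [0, 1] (here simply the normalized intensity)."""
    return [[px / 255.0 for px in row] for row in image]

def refine(image, scores, low=0.4, high=0.6):
    """Stand-in for PointRend refinement: confident pixels keep their
    coarse label; uncertain ones (score between low and high) are
    re-evaluated at full resolution."""
    mask = []
    for i, row in enumerate(scores):
        out = []
        for j, s in enumerate(row):
            if s >= high:
                out.append(1)  # confidently soft tissue
            elif s <= low:
                out.append(0)  # confidently background
            else:
                # uncertain point: re-decide from the raw pixel value
                out.append(1 if image[i][j] >= 128 else 0)
        mask.append(out)
    return mask

image = [[10, 120, 200], [130, 250, 90], [0, 140, 255]]
mask = refine(image, coarse_scores(image))
```

In the real pipeline the coarse scores come from the UNet decoder and the refinement rule is a small learned MLP over fine-grained features, but the control flow, label confident pixels cheaply and spend extra computation only on ambiguous ones, is the same.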
In summary, the method provided by the invention for an orthopedic surgical robot to intelligently sense anatomical structures comprises the following steps: after driving the mechanical arm trolley 220 to move to a target position away from the operating bed 240 and adjusting the mechanical arm of the mechanical arm trolley 220 to a target angle with the operating bed 240, controlling the camera device on the mechanical arm trolley 220 to photograph the operating bed 240, so as to acquire a human body joint image; inputting the human body joint image into a trained soft tissue identification network model to perform a soft tissue feature extraction operation, so as to obtain preliminary soft tissue features output by the soft tissue identification network model; and inputting the preliminary soft tissue features into a trained feature classification network model for a classification operation, so as to obtain a soft tissue region output by the feature classification network model.
In the method provided by the invention for an orthopedic surgical robot to intelligently sense anatomical structures, the camera device on the mechanical arm trolley 220 is controlled to photograph the operating bed 240 to obtain a human body joint image; preliminary soft tissue features are then extracted from the human body joint image by the soft tissue identification network model, and these features are classified by the feature classification network model to obtain the soft tissue region. The whole process, from obtaining the human body joint image to extracting the soft tissue region, is performed automatically, so that soft tissue can be identified automatically and accurately, helping the surgeon to precisely mark tissue and blood vessels.
The device provided by the invention for an orthopedic surgical robot to intelligently sense anatomical structures is described below; the device described below and the method described above may be referred to correspondingly.
As shown in fig. 8, the present invention also provides an apparatus 800 for intelligently sensing an anatomical structure by an orthopedic robotic surgical robot, comprising:
the obtaining module 810 is configured to control the camera device on the mechanical arm trolley 220 to photograph the operating bed 240 after the mechanical arm trolley 220 is driven to move to a target position away from the operating bed 240 and the mechanical arm of the mechanical arm trolley 220 is adjusted to a target angle with the operating bed 240, so as to obtain a human body joint image;
an extraction module 820, configured to input the human body joint image into a trained soft tissue identification network model to perform soft tissue feature extraction operation, so as to obtain a soft tissue preliminary feature output by the soft tissue identification network model;
and the classification module 830 is configured to input the preliminary soft tissue features to a trained feature classification network model for classification operation, so as to obtain a soft tissue region output by the feature classification network model.
The invention also provides a system for an orthopedic surgical robot to intelligently sense anatomical structures, which comprises a mechanical arm trolley 220 and a main control trolley 210.
The main control trolley 210 is in communication connection with the mechanical arm trolley 220 and is configured to execute any one of the above methods for an orthopedic surgical robot to intelligently sense anatomical structures.
The electronic device, the computer program product and the storage medium provided by the present invention are described below; the electronic device, the computer program product and the storage medium described below and the method for an orthopedic surgical robot to intelligently sense anatomical structures described above may be referred to correspondingly.
Fig. 9 illustrates a physical structure diagram of an electronic device, and as shown in fig. 9, the electronic device may include: a processor (processor) 910, a communication Interface (Communications Interface) 920, a memory (memory) 930, and a communication bus 940, wherein the processor 910, the communication Interface 920, and the memory 930 communicate with each other via the communication bus 940. Processor 910 may invoke logic instructions in memory 930 to perform a method of intelligent perception of an anatomical structure by an orthopedic robotic surgical robot, the method comprising:
after driving the mechanical arm trolley to move to a target position away from the operating bed and adjusting the mechanical arm of the mechanical arm trolley to a target angle with the operating bed, controlling the camera device on the mechanical arm trolley to photograph the operating bed, so as to obtain a human body joint image;
inputting the human body joint image into a trained soft tissue identification network model to perform soft tissue feature extraction operation, so as to obtain soft tissue preliminary features output by the soft tissue identification network model;
and inputting the preliminary soft tissue features into a trained feature classification network model for classification operation to obtain a soft tissue region output by the feature classification network model.
Furthermore, the logic instructions in the memory 930 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium; when executed by a processor, the computer program is capable of executing the method for an orthopedic surgical robot to intelligently sense anatomical structures provided above, the method comprising:
after driving the mechanical arm trolley to move to a target position away from the operating bed and adjusting the mechanical arm of the mechanical arm trolley to a target angle with the operating bed, controlling the camera device on the mechanical arm trolley to photograph the operating bed, so as to acquire a human body joint image;
inputting the human body joint image into a trained soft tissue identification network model to perform soft tissue feature extraction operation, so as to obtain soft tissue preliminary features output by the soft tissue identification network model;
and inputting the preliminary soft tissue features into a trained feature classification network model for classification operation to obtain a soft tissue region output by the feature classification network model.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for an orthopedic surgical robot to intelligently sense anatomical structures provided above, the method comprising:
after driving the mechanical arm trolley to move to a target position away from the operating bed and adjusting the mechanical arm of the mechanical arm trolley to a target angle with the operating bed, controlling the camera device on the mechanical arm trolley to photograph the operating bed, so as to acquire a human body joint image;
inputting the human body joint image into a trained soft tissue identification network model to perform soft tissue feature extraction operation, so as to obtain soft tissue preliminary features output by the soft tissue identification network model;
and inputting the preliminary features of the soft tissue into a trained feature classification network model for classification operation to obtain a soft tissue region output by the feature classification network model.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for an orthopedic surgical robot to intelligently perceive anatomical structures, comprising:
after driving the mechanical arm trolley to move to a target position away from the operating table and adjusting the mechanical arm of the mechanical arm trolley to a target angle with the operating table, controlling the camera device on the mechanical arm trolley to photograph the operating table, so as to obtain a human body joint image;
inputting the human body joint image into a trained soft tissue identification network model to perform soft tissue feature extraction operation, so as to obtain soft tissue preliminary features output by the soft tissue identification network model;
and inputting the preliminary features of the soft tissue into a trained feature classification network model for classification operation to obtain a soft tissue region output by the feature classification network model.
2. The method for an orthopedic surgical robot to intelligently perceive anatomical structures of claim 1, wherein
the soft tissue identification network model is obtained by training a UNet network model based on a preset human body joint image sample and corresponding soft tissue preliminary characteristics;
the feature classification network model is obtained by training a PointRend network model based on a preset soft tissue preliminary feature sample and a soft tissue region corresponding to the soft tissue preliminary feature sample.
3. The method for an orthopedic surgical robot to intelligently perceive anatomical structures of claim 1, further comprising:
before the mechanical arm trolley is driven to move to the target position and the mechanical arm is adjusted to the target angle, acquiring pose information between the mechanical arm trolley and the operating table;
determining the positioning parameters of the mechanical arm trolley according to the pose information and a preset pose threshold value;
and driving the mechanical arm trolley to move to the target position and adjusting the angle of the mechanical arm to the target angle according to the positioning parameters of the mechanical arm trolley.
4. The method for an orthopedic surgical robot to intelligently perceive anatomical structures according to claim 3, wherein the acquiring pose information between the mechanical arm trolley and the operating bed comprises:
and acquiring the pose information sent by a navigation trolley, wherein the pose information is collected by the navigation trolley by sensing a first reference device on the mechanical arm trolley and a second reference device on the operating table.
5. The method for an orthopedic surgical robot to intelligently perceive anatomical structures according to claim 3, wherein the determining the positioning parameters of the mechanical arm trolley according to the pose information and a preset pose threshold comprises:
determining the coordinates of the mechanical arm trolley and the coordinates of the operating table in the same coordinate system based on the pose information;
and determining the positioning parameters of the mechanical arm trolley based on the coordinate difference value between the mechanical arm trolley coordinate and the operating table coordinate and the preset pose threshold value.
6. The method for an orthopedic surgical robot to intelligently perceive anatomical structures of claim 5, further comprising:
and under the condition that the coordinate difference value is smaller than the preset pose threshold value, determining that the mechanical arm trolley is located at the target position and the mechanical arm is located at the target angle.
7. The method for an orthopedic surgical robot to intelligently perceive anatomical structures according to any one of claims 1-6, further comprising:
after determining that the mechanical arm trolley is located at the target position and the mechanical arm is located at the target angle, and before controlling the camera device on the mechanical arm trolley to photograph the operating table, locking the mechanical arm trolley and the mechanical arm.
8. An apparatus for an orthopedic surgical robot to intelligently perceive anatomical structures, comprising:
the acquisition module is used for controlling the camera device on the mechanical arm trolley to photograph the operating bed after driving the mechanical arm trolley to move to a target position away from the operating bed and adjusting the mechanical arm of the mechanical arm trolley to a target angle with the operating bed, so as to acquire a human body joint image;
the extraction module is used for inputting the human body joint image into a trained soft tissue identification network model to carry out soft tissue feature extraction operation so as to obtain soft tissue preliminary features output by the soft tissue identification network model;
and the classification module is used for inputting the preliminary soft tissue features into a trained feature classification network model for classification operation to obtain a soft tissue region output by the feature classification network model.
9. A system for an orthopedic surgical robot to intelligently perceive anatomical structures, characterized by comprising a mechanical arm trolley and a main control trolley;
the main control trolley is in communication connection with the mechanical arm trolley and is used for executing the method for an orthopedic surgical robot to intelligently perceive anatomical structures as claimed in any one of claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for an orthopedic surgical robot to intelligently perceive anatomical structures as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211057487.8A CN115486939A (en) | 2022-08-31 | 2022-08-31 | Method, device and system for intelligently sensing anatomical structure of orthopedic machine surgical robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115486939A true CN115486939A (en) | 2022-12-20 |
Family
ID=84468591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211057487.8A Pending CN115486939A (en) | 2022-08-31 | 2022-08-31 | Method, device and system for intelligently sensing anatomical structure of orthopedic machine surgical robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115486939A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104116517A (en) * | 2014-07-18 | 2014-10-29 | 北京航空航天大学 | Intraoperative X-ray image system based on cooperation of double mechanical arms |
CN110653821A (en) * | 2019-10-10 | 2020-01-07 | 上海电气集团股份有限公司 | Control method, system, medium and equipment for mechanical arm |
CN111640127A (en) * | 2020-05-29 | 2020-09-08 | 成都金盘电子科大多媒体技术有限公司 | Accurate clinical diagnosis navigation method for orthopedics department |
CN111973228A (en) * | 2020-06-17 | 2020-11-24 | 谈斯聪 | B-ultrasonic data acquisition, analysis and diagnosis integrated robot and platform |
CN112971991A (en) * | 2021-02-05 | 2021-06-18 | 上海电气集团股份有限公司 | Method and device for controlling movement of mechanical arm system |
CN113017829A (en) * | 2020-08-22 | 2021-06-25 | 张逸凌 | Preoperative planning method, system, medium and equipment for total knee replacement based on deep learning |
CN113797088A (en) * | 2021-08-31 | 2021-12-17 | 中科尚易健康科技(北京)有限公司 | Linkage control method and system for mechanical arm and bed |
CN113967071A (en) * | 2020-10-23 | 2022-01-25 | 李志强 | Control method and device for mechanical arm of surgical robot to move along with surgical bed |
CN114299072A (en) * | 2022-03-11 | 2022-04-08 | 四川大学华西医院 | Artificial intelligence-based anatomy variation identification prompting method and system |
CN114638852A (en) * | 2022-02-25 | 2022-06-17 | 汉斯夫(杭州)医学科技有限公司 | Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image |
CN114631961A (en) * | 2022-02-08 | 2022-06-17 | 查显进 | Multi-degree-of-freedom medical minimally invasive robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: 100176 2201, 22 / F, building 1, yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing Applicant after: Beijing Changmugu Medical Technology Co.,Ltd. Applicant after: Zhang Yiling Address before: 100176 2201, 22 / F, building 1, yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing Applicant before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd. Applicant before: Zhang Yiling |