CN117562678B - Auxiliary system for neurosurgery microscope - Google Patents
- Publication number
- CN117562678B (application CN202410022498.5A)
- Authority
- CN
- China
- Prior art keywords
- boundary
- voice
- server
- tissue
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/30—Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/20—Surgical microscopes characterised by non-optical aspects
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/30—Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
- A61B90/35—Supports therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/693—Acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
Abstract
An auxiliary system for a neurosurgical microscope, in the technical field of neurosurgical microscopes, comprises an acquisition device, a mobile display device, a voice interaction device, a visible light image acquisition device and a server. A data processing system arranged on the server identifies tissue in the uploaded microscopic images, recognizes received voice instructions, executes the corresponding instructions, and feeds voice back to the voice interaction device for playing; it further recognizes and executes the corresponding instructions according to images of the doctor's feet. The server is also used for processing received medical data, or data uploaded to the server, into text and/or graphics displayed on the mobile display device, the acquisition device and/or the AR glasses and/or the MR glasses.
Description
Technical Field
The invention relates to the technical field of neurosurgery microscopes, in particular to an auxiliary system for a neurosurgery microscope.
Background
Neurosurgery is a highly technical field of medicine that treats diseases of the nervous system such as brain tumors, cerebrovascular disease and traumatic brain injury. As medical technology advances, neurosurgical techniques continue to develop and improve, and the surgical microscope is one of the most important tools in neurosurgery. Through the surgical microscope, a doctor can magnify the operative field, observe the structures of the nervous system more accurately, and excise pathological tissue more precisely. However, the field of view of the microscope is relatively small, which increases the difficulty of the operation and therefore the demands placed on the operating doctor; because microscopic operation is difficult, the operation also takes relatively long, which increases the probability of misoperation when the doctor operates on tissue.
Disclosure of Invention
The embodiments of the invention provide an auxiliary system for a neurosurgical microscope, which addresses the problems that operating through a surgical microscope is difficult, places high demands on the doctor performing the operation, makes the operation relatively long, and increases the probability of the doctor mishandling tissue during the operation.
An assist system for a neurosurgical microscope, comprising:
the acquisition device is used for acquiring microscopic images of the operation microscope and uploading the microscopic images to the server;
the mobile display device is used for displaying the microscopic image processed by the server;
the voice interaction device is in communication connection with the server and is used for receiving or uploading voice data to the server;
the visible light image acquisition device is in communication connection with the server and is used for acquiring images of the feet of a doctor and uploading the images to the server;
the server is provided with a data processing system which is used for identifying the uploaded microscopic image, marking the identified tissue on the microscopic image and sending the marked image to the acquisition device and/or the AR glasses and/or the MR glasses for display, identifying the received voice command and feeding back voice to the voice interaction device for playing, and identifying and executing the corresponding command according to the image of the doctor's foot;
the server is further used for processing the received medical data, or the data uploaded to the server, into text and/or graphics displayed on the mobile display device, the acquisition device and/or the AR glasses and/or the MR glasses, and the layout of the display interfaces on the acquisition device, the mobile display device, the AR glasses and the MR glasses is kept consistent.
Further, the acquisition device comprises two outer cylinders arranged side by side and connected through a fixing frame, a controller is arranged inside the fixing frame, one end of each outer cylinder close to the eyepiece of the surgical microscope is provided with a connecting portion which is connected onto the eyepiece of the surgical microscope, and an image receiving part for capturing the microscopic image at the eyepiece of the surgical microscope, a display part for displaying images, and a lens are arranged in sequence inside each of the two outer cylinders.
Further, the signal output end of the controller is in communication connection with the signal input ends of the display part and the server, and the signal input end of the controller is in communication connection with the signal output ends of the image receiving part and the server.
Further, the mobile display device comprises a movable vertical stand and an image display installed on the movable vertical stand, and a signal input end of the image display is in communication connection with a signal output end of the server.
Further, the signal input ends of the voice interaction device, the AR glasses and the MR glasses are in communication connection with the signal output end of the server, and the signal output ends of the visible light image acquisition device and the voice interaction device are in communication connection with the signal input end of the server.
Further, the data processing system comprises a data acquisition module, a recognition module, a three-dimensional line marking module, an extremum boundary marking module and an auxiliary interaction module;
the data acquisition module is used for acquiring microscopic images, images of the feet of a doctor and voice data uploaded to the server by the acquisition device, the visible light image acquisition device and the voice interaction device;
the recognition module is used for recognizing the acquired data to obtain a tissue recognition result, an action recognition result and a voice recognition result;
the three-dimensional line marking module is used for marking each tissue according to the tissue recognition result;
the extremum boundary marking module is used for calculating the boundary risk extremum of each tissue according to the marking and the tissue recognition result and marking it with a boundary line;
the auxiliary interaction module is used for applying the marks of the three-dimensional line marking module and the extremum boundary marking module to the microscopic image and displaying them in the microscopic image display area of the display interface, and for annotating the tissues on the microscopic image with corresponding text or graphics according to the microscopic image recognition result, likewise displayed in the microscopic image display area of the display interface;
the auxiliary interaction module is further used for executing corresponding operation instructions according to the action recognition result;
the auxiliary interaction module is further used for executing corresponding operation instructions according to the voice recognition result;
the auxiliary interaction module is further used for displaying the medical monitoring data of a patient in the physiological data display area of the display interface according to the collected monitoring data;
the auxiliary interaction module is further used for displaying surgical procedure information in the surgical procedure display area of the display interface and displaying surgical reminders in the reminder display area of the display interface;
the auxiliary interaction module is further used for analyzing the surgical procedure information, the surgical reminders and the tissue recognition result, and displaying surgical advice as text in the surgical advice display area of the display interface or outputting it as voice through the voice interaction device.
Further, the data acquisition module comprises an image collector a, an image collector b, a voice collector and a monitoring data collector;
the image collector a is suitable for collecting microscopic images captured by the collecting device;
the image collector b is suitable for collecting the image of the foot of the doctor captured by the visible light image collecting device;
the voice collector is suitable for collecting voice output by a doctor;
the monitoring data collector is suitable for collecting medical monitoring data of patients.
Further, the recognition module comprises a tissue recognition unit, an action recognition unit and a semantic recognition unit;
the tissue recognition unit is adapted to perform feature extraction on the acquired microscopic images and identify the type and extent of the tissue;
the action recognition unit is adapted to extract action features from the acquired images of the doctor's feet and recognize the operation instruction corresponding to the action;
the semantic recognition unit is adapted to perform semantic analysis on the collected voice output by the doctor and recognize the operation instruction corresponding to the voice.
Further, the three-dimensional line marking module comprises a tissue boundary marking unit and a tumor boundary marking unit;
the tissue boundary marking unit is adapted to identify the edges of each tissue according to the identified tissue type and extent and to mark them with boundary line one;
the tumor boundary marking unit is adapted to identify the edge of the identified tumor according to the identified tissue type and extent and to mark it with boundary line two.
Further, the extremum boundary marking module comprises an extremum boundary calculating unit and an extremum boundary marking unit;
the extremum boundary calculating unit is adapted to calculate the boundary risk extremum of each tissue according to the identified tissue types and extents and boundary lines one and two;
the extremum boundary marking unit is adapted to mark the boundary of each tissue with boundary line three according to the calculated boundary risk extremum of each tissue.
The technical scheme provided by the embodiments of the invention has at least the following beneficial effects:
1. The microscopic images acquired from the surgical microscope are analyzed and then marked with text/graphics and boundary lines, which helps the doctor recognize each tissue, its extent and the safe operating distance during the operation; this reduces the demands on the doctor while helping to avoid misoperation.
2. The system can also analyze the surgical procedure information, surgical reminders and tissue recognition results, and display surgical advice as text or output it as voice through the voice interaction device; besides easing the doctor's burden during the operation, this further reduces the demands on the doctor.
3. The system can recognize the doctor's foot images or voice and execute the corresponding operation instructions, making it convenient for the doctor to interact with the auxiliary system during the operation.
4. The marked images can be displayed through the AR glasses, the MR glasses or the acquisition device, and a suitable interaction mode can be selected according to the actual situation, improving convenience of use.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram showing the structural components of an auxiliary system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a positional relationship structure between an acquisition device and a surgical microscope according to an embodiment of the present invention;
FIG. 3 is a schematic cross-sectional view of a collection device according to an embodiment of the present invention;
FIG. 4 is a communication block diagram of an auxiliary system disclosed in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a communication structure of an auxiliary system according to an embodiment of the present invention;
fig. 6 is a schematic diagram of microscopic image labeling according to an embodiment of the present invention.
Reference numerals:
1. a collection device; 11. an outer cylinder; 12. a connection part; 13. a fixing frame; 14. an image receiving section; 15. a display unit; 16. a lens; 17. a controller; 2. a mobile display device; 21. a movable stand; 22. an image display; 3. a server; 4. a data processing system; 41. a data acquisition module; 411. an image collector a; 412. an image collector b; 413. a voice collector; 414. monitoring a data collector; 42. an identification module; 421. a tissue identification unit; 422. an action recognition unit; 433. a semantic recognition unit; 43. a three-dimensional line marking module; 431. a tissue boundary marking unit; 432. a tumor boundary marking unit; 44. an extremum boundary marking module; 441. an extremum boundary calculating unit; 442. an extremum boundary marking unit; 45. an auxiliary interaction module; 451. a labeling unit; 452. a prompting unit; 46. displaying an interface; 461. a physiological data display area; 462. a surgical procedure display area; 463. a reminding item display area; 464. a surgical advice display area; 465. a microscopic image display area; 5. a voice interaction device; 6. a visible light image acquisition device; 7. a surgical microscope; 71. an eyepiece; 8. AR glasses; 9. MR glasses.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 to 5 are schematic structural diagrams of an auxiliary system disclosed in an embodiment of the present invention; the system is composed of an acquisition device 1, a mobile display device 2, a server 3, a voice interaction device 5, a visible light image acquisition device 6, AR glasses 8 and MR glasses 9.
The acquisition device 1 is used for acquiring microscopic images of the surgical microscope 7 and uploading them to the server 3; it is also used for displaying the microscopic images returned by the server 3 on which the identified tissue has been marked, and for displaying the text and/or graphics obtained after the server 3 processes received medical data or data uploaded to the server 3.
Meanwhile, when the AR glasses 8 and/or the MR glasses 9 are used, all of the above content can likewise be displayed on them.
It should be noted that the layout of the display interface 46 on the acquisition device 1, the mobile display device 2, the AR glasses 8 and the MR glasses 9 places a physiological data display area 461, a surgical procedure display area 462, a reminder display area 463 and a surgical advice display area 464 along the edges, with a microscopic image display area 465 in the middle of the display interface 46. The advantage of this layout is that the auxiliary information shown at the edges occupies only a small display area, while the microscopic image displayed in the middle is more convenient for the doctor to view.
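For illustration only (not part of the claimed subject matter), a minimal sketch of such a layout in normalized screen coordinates; the region names and reference numerals follow the embodiment above, while the rectangle geometry and the assignment of areas to the left or right edge are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str   # display area and reference numeral from the embodiment
    x: float    # left edge, as a fraction of screen width
    y: float    # top edge, as a fraction of screen height
    w: float    # width, as a fraction of screen width
    h: float    # height, as a fraction of screen height

# One layout shared by the acquisition device 1, mobile display device 2,
# AR glasses 8 and MR glasses 9: auxiliary areas at the edges, microscopic
# image in the middle.
DISPLAY_INTERFACE_46 = [
    Region("reminder_items_463",     0.00, 0.00, 0.15, 0.50),
    Region("surgical_advice_464",    0.00, 0.50, 0.15, 0.50),
    Region("physiological_data_461", 0.85, 0.00, 0.15, 0.50),
    Region("surgical_procedure_462", 0.85, 0.50, 0.15, 0.50),
    Region("microscopic_image_465",  0.15, 0.00, 0.70, 1.00),
]
```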
It should be noted that one or more of the acquisition device 1, the mobile display device 2, the AR glasses 8, and the MR glasses 9 may be selected for simultaneous use.
As shown in fig. 1 to 4, the acquisition device 1 comprises two outer cylinders 11 arranged side by side, the two outer cylinders 11 are connected through a fixing frame 13, inside which a controller 17 is arranged; a connecting portion 12 is arranged at the end of the outer cylinder 11 close to the eyepiece 71 of the surgical microscope 7 and is connected onto the eyepiece 71 of the surgical microscope 7; and an image receiving part 14 for capturing the microscopic image at the eyepiece 71 of the surgical microscope 7, a display part 15 for displaying images, and a lens 16 are arranged in sequence inside each of the two outer cylinders 11.
The signal output end of the controller 17 is in communication connection with the signal input ends of the display part 15 and the server 3, and the signal input end of the controller 17 is in communication connection with the signal output ends of the image receiving part 14 and the server 3.
When the above-mentioned acquisition device 1 is in operation, the microscopic image at the eyepiece 71 of the surgical microscope 7 is captured by the image receiving part 14; in this embodiment, the image receiving part 14 is any device capable of acquiring an image, such as a camera. The received microscopic image is uploaded to the server 3 through the controller 17, and the image returned by the server 3 is shown by the display part 15; in this embodiment, the display part 15 may be either an OLED or an LCD display screen. The image on the display part 15 passes through the lens 16 and then enters the doctor's eyes.
It should be noted that the lens 16 is a magnifying lens used to increase the apparent viewing distance. Since using a lens to increase the apparent viewing distance from a display screen to the human eye is prior art, the focal length of the lens 16 and its distance from the display part 15 are not described here; those skilled in the art can obtain all of the above parameters from the disclosed prior art.
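For reference, the standard thin-lens relations of general optics (prior art, not specific to this patent) connect these parameters; placing the display part 15 just inside the focal length yields a magnified virtual image at a comfortable viewing distance:

```latex
% Thin-lens equation (real-is-positive convention): object distance s_o
% from the lens 16 to the display part 15, image distance s_i, focal
% length f. With s_o < f, the image distance s_i is negative, i.e. the
% eye sees a magnified virtual image. Angular magnification for a
% relaxed eye, with near-point distance D = 250 mm:
\[
\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f},
\qquad
M \approx \frac{D}{f}, \quad D = 250\,\mathrm{mm}.
\]
```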
As shown in fig. 1 and 4, the mobile display device 2 includes a movable stand 21 and an image display 22 mounted on the movable stand 21; the signal input end of the image display 22 is in communication connection with the signal output end of the server 3. The image display 22 is used for displaying the microscopic images processed by the server 3, and also for displaying the text and/or graphics obtained after the server 3 processes received medical data or data uploaded to the server 3.
The images displayed by the mobile display device 2 allow doctors other than the lead surgeon to follow the surgical procedure.
As shown in fig. 1 and 4, the voice interaction device 5 is communicatively connected with the server 3, and is used for receiving or uploading voice data to the server 3;
In this embodiment, the voice interaction device 5 may be a Bluetooth headset or may consist of a microphone and a speaker.
As shown in fig. 1 and 4, the visible light image acquisition device 6 is in communication connection with the server 3 and is used for acquiring and uploading images of the feet of the doctor to the server 3;
in this embodiment, the visible light image capturing device 6 is a camera.
As shown in fig. 1 to 5, the data processing system 4 includes a data acquisition module 41, an identification module 42, a three-dimensional line marking module 43, an extremum boundary marking module 44, and an auxiliary interaction module 45.
As shown in fig. 5, the data acquisition module 41 is configured to acquire the microscopic images, the images of the doctor's feet, and the voice data uploaded to the server 3 by the acquisition device 1, the visible light image acquisition device 6 and the voice interaction device 5. The data acquisition module 41 includes an image collector a 411, an image collector b 412, a voice collector 413 and a monitoring data collector 414.
Further, the image collector a 411 is adapted to acquire the microscopic images captured by the acquisition device 1.
Further, the image collector b 412 is adapted to collect the images of the doctor's feet captured by the visible light image acquisition device 6.
Further, the voice collector 413 is adapted to collect voice output by the doctor.
Further, the monitoring data collector 414 is adapted to collect medical monitoring data of the patient.
As shown in fig. 5 to 6, the recognition module 42 is configured to recognize the collected data to obtain a tissue recognition result, an action recognition result and a voice recognition result. The recognition module 42 includes a tissue recognition unit 421, an action recognition unit 422 and a semantic recognition unit 433.
Further, the tissue recognition unit 421 is adapted to perform feature extraction on the acquired microscopic image and identify the type and extent of the tissue.
Further, the motion recognition unit 422 is adapted to perform motion feature extraction on the acquired image of the foot of the doctor and recognize the operation instruction corresponding to the motion.
It should be noted that, in the above embodiment, the motion recognition unit 422 is preset with a foot motion feature library, and each foot motion feature corresponds to a different operation instruction.
Further, the semantic recognition unit 433 is adapted to perform semantic analysis on the collected voice output by the doctor, and recognize an operation instruction corresponding to the voice.
It should be noted that, in the foregoing embodiment, a voice feature library is preset in the semantic recognition unit 433, each voice feature corresponds to a different operation instruction, and the corresponding operation instruction is obtained by performing semantic analysis and traversing the voice feature library according to the voice feature obtained by recognition.
The operation instructions are used for interacting with the auxiliary system: for example, when a doctor needs to mark a certain area or tissue, store images, ask the auxiliary system for help, or connect to an external network to search for data, the corresponding operation is executed once the foot action or voice is recognized.
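For illustration, a minimal sketch of this preset feature-library lookup; the action names, command phrases and handler bodies are hypothetical stand-ins, since the patent names the operations but not their implementations:

```python
from typing import Callable, Dict

# Hypothetical operation handlers; the embodiment names the operations
# (mark a region, store an image, request help, search external data)
# but not their implementations.
def mark_region() -> None: print("marking current region")
def store_image() -> None: print("storing current microscopic image")
def request_help() -> None: print("displaying help")
def search_external() -> None: print("searching external network")

# Preset feature libraries: each recognized foot-action or voice feature
# corresponds to a different operation instruction.
FOOT_ACTION_LIBRARY: Dict[str, Callable[[], None]] = {
    "double_tap": mark_region,
    "hold": store_image,
}
VOICE_LIBRARY: Dict[str, Callable[[], None]] = {
    "mark this area": mark_region,
    "save image": store_image,
    "help": request_help,
    "search": search_external,
}

def execute(feature: str) -> None:
    """Traverse both libraries and run the matching operation instruction."""
    handler = FOOT_ACTION_LIBRARY.get(feature) or VOICE_LIBRARY.get(feature)
    if handler is not None:
        handler()

execute("save image")  # -> storing current microscopic image
```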
As shown in fig. 5 to 6, the three-dimensional line marking module 43 is configured to mark each tissue according to the tissue recognition result, and the three-dimensional line marking module 43 includes a tissue boundary marking unit 431 and a tumor boundary marking unit 432.
Further, the tissue boundary marking unit 431 is adapted to identify the edges of each tissue according to the identified tissue type and extent and to mark them with boundary line one.
Further, the tumor boundary marking unit 432 is adapted to identify the edge of the identified tumor according to the identified tissue type and extent and to mark it with boundary line two.
As shown in fig. 5 to 6, the extremum boundary marking module 44 is configured to calculate the boundary risk extremum of each tissue according to the marking and the tissue recognition result and to mark it with a boundary line. The extremum boundary marking module 44 includes an extremum boundary calculating unit 441 and an extremum boundary marking unit 442.
Further, the extremum boundary calculating unit 441 is adapted to calculate the boundary risk extremum of each tissue based on the identified tissue type and extent and boundary lines one and two.
Further, the extremum boundary marking unit 442 is adapted to mark the boundary of each tissue with boundary line three according to the calculated boundary risk extremum of each tissue.
Boundary line one, boundary line two and boundary line three are marked in different colors. It should be noted that the region between boundary line one or boundary line two and boundary line three is a danger zone, and that boundary line three is located on the periphery of boundary line one or boundary line two. The range of boundary line three is determined from the marked extent of each tissue: tissue whose damage would cause severe injury receives the largest boundary-line-three range, while tissue whose damage would cause only slight injury receives the smallest. The injury considered here is damage caused by contact with a surgical instrument, and the severity of the injury is determined by simulation or from known published prior information.
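For illustration, one way boundary line three could be derived from the marked tissue extent; this is only a sketch under the assumption that the safety margin grows with injury severity, which is all the embodiment specifies (the boolean-mask representation and pixel scaling are assumptions):

```python
import numpy as np
from scipy import ndimage

def danger_region(tissue_mask: np.ndarray,
                  severity: float,
                  max_margin_px: int = 30) -> np.ndarray:
    """Dilate the marked tissue out to boundary line three.

    tissue_mask : bool array, True inside boundary line one or two.
    severity    : 0..1 severity of the injury caused if this tissue is
                  damaged; a larger value widens the margin (assumed scaling).
    Returns the filled region out to boundary line three, i.e. the tissue
    plus its safety margin.
    """
    margin = max(1, int(round(severity * max_margin_px)))
    return ndimage.binary_dilation(tissue_mask, iterations=margin)

def boundary_line_three(region: np.ndarray) -> np.ndarray:
    """Outer edge of the danger region: boundary line three itself."""
    return region & ~ndimage.binary_erosion(region)

# Severe-injury tissue gets the widest margin, slight-injury the narrowest.
tissue = np.zeros((100, 100), dtype=bool)
tissue[40:60, 40:60] = True
region = danger_region(tissue, severity=0.8)
line3 = boundary_line_three(region)
```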
Boundary lines one, two and three dynamically mark the tissue in the field of view as the surgical microscope 7 moves, and are displayed with a three-dimensional effect on the AR glasses 8, the MR glasses 9 or the acquisition device 1.
As shown in fig. 5 to 6, the auxiliary interaction module 45 is configured to apply the marks of the three-dimensional line marking module 43 and the extremum boundary marking module 44 to the microscopic image and display them in the microscopic image display area 465 of the display interface 46, and is further configured to annotate the tissues on the microscopic image with corresponding text or graphics according to the microscopic image recognition result, likewise displayed in the microscopic image display area 465 of the display interface 46;
further, the auxiliary interaction module 45 is further configured to execute a corresponding operation instruction according to the action recognition result;
further, the auxiliary interaction module 45 is further configured to execute a corresponding operation instruction according to the voice recognition result;
further, the auxiliary interaction module 45 is further configured to display medical monitoring data of the patient in the physiological data display area 461 of the display interface 46 according to the collected monitoring data;
further, the auxiliary interaction module 45 is further configured to display surgical procedure information in the surgical procedure display area 462 of the display interface 46 and to display surgical reminders in the reminder display area 463 of the display interface 46;
further, the auxiliary interaction module 45 is further configured to analyze the surgical procedure information, the surgical reminders and the tissue recognition result, and to display surgical advice as text in the surgical advice display area 464 of the display interface 46 or output it as voice through the voice interaction device 5.
As shown in fig. 5 to 6, the auxiliary interaction module 45 includes a labeling unit 451 and a prompting unit 452.
The labeling unit 451 is used for applying the corresponding marks to the microscopic image and for annotating the tissues on the microscopic image with corresponding text or graphics according to the microscopic image recognition result, displaying the result in the microscopic image display area 465 of the display interface 46.
The prompting unit 452 is configured to display the patient's medical monitoring data in the physiological data display area 461 of the display interface 46 according to the collected monitoring data, to display the surgical procedure information in the surgical procedure display area 462 and the surgical reminders in the reminder display area 463 of the display interface 46, and to analyze the surgical procedure information, the surgical reminders and the tissue recognition result, displaying surgical advice as text in the surgical advice display area 464 of the display interface 46 or outputting it as voice through the voice interaction device 5.
Specifically, the recognition module 42 imports the patient's head CT tomograms in slice order to generate a three-dimensional model of the patient's head. The tissue recognition unit 421 is preset with a feature database of brain tissue. After part of a tumor has been marked in the three-dimensional model of the patient's head, the recognition unit 421 recognizes the whole tumor in the model according to the features (color, size, shape, edge) of the tumor. Once a microscopic image is obtained, its features are extracted and compared with the three-dimensional model so that the tumor in the microscopic image is recognized, and the tumor boundary marking unit 432 marks the edge of the tumor with boundary line two according to the recognition result. The tissue recognition unit 421 likewise performs feature recognition on the obtained microscopic image according to the brain tissue feature database in combination with the three-dimensional model of the patient's head, and the tissue boundary marking unit 431 marks the recognized tissue with boundary line one.
As shown in fig. 6, in one example of the above embodiment, the patient has an astrocytoma. Before the operation, the patient's brain is scanned by tomography to obtain head CT tomograms, and the tomograms are imported into a three-dimensional modeling tool and stacked vertically to generate a three-dimensional model of the patient's head that contains the complete tissue structure of the head. A doctor marks part of the tumor in the three-dimensional head model through the recognition unit 421, and the recognition unit 421 identifies the whole tumor area by extending the color, size, shape and edge characteristics of the marked part outward along the marked region. The tumor boundary marking unit 432 marks the edge of the tumor with a red boundary line two according to the recognition result; the tissue recognition unit 421 identifies each tissue in the acquired microscopic image according to the brain tissue feature database in combination with the three-dimensional model of the patient's head; the tissue boundary marking unit 431 marks the identified tissues with an orange boundary line one; and the extremum boundary marking unit 442 marks the boundary of each tissue with a green boundary line three. The labeling unit 451 applies the corresponding marks to the microscopic image according to these recognition and marking results, annotates the tissues on the image with corresponding text or graphics, and displays the result in the microscopic image display area 465 of the display interface 46; as shown in fig. 6, the identified blood vessels, nerves and tumor are labeled and explained with text. After the doctor views the processed microscopic image through the display part 15, the AR glasses 8, the MR glasses 9 or the image display 22, boundary lines one to three assist the doctor in recognizing each tissue and its degree of danger, which facilitates the operation.
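For illustration, a minimal sketch of the slice stacking and seed-extension steps described above; growing by intensity similarity stands in for the color, size, shape and edge features, which the embodiment does not formalize, so the tolerance and connectivity choices are assumptions:

```python
import numpy as np
from collections import deque

def stack_slices(slices: list) -> np.ndarray:
    """Stack head CT tomograms in slice order into a 3-D intensity volume."""
    return np.stack(slices, axis=0)

def grow_from_seed(volume: np.ndarray, seed: tuple, tol: float = 40.0) -> np.ndarray:
    """Flood-fill outward from the doctor's marked voxel, accepting
    6-connected neighbors whose intensity is within `tol` of the seed."""
    mask = np.zeros(volume.shape, dtype=bool)
    ref = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n]
                    and abs(float(volume[n]) - ref) <= tol):
                mask[n] = True
                queue.append(n)
    return mask  # estimated extent of the whole tumor region
```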
The auxiliary interaction module 45 is further used for analyzing the microscopic image: when a surgical instrument enters the range of boundary line three, the prompting unit 452 reminds the doctor with text and graphics in the surgical advice display area 464, or outputs voice through the voice interaction device 5 to remind the doctor.
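For illustration, a minimal sketch of this alert logic, assuming the instrument tip has already been localized in image coordinates and that the filled region out to boundary line three is available as a boolean mask such as the one computed in the earlier sketch; the callback names are hypothetical:

```python
import numpy as np

def instrument_in_danger_zone(region: np.ndarray, tip_xy: tuple) -> bool:
    """True if the detected instrument tip lies within boundary line three."""
    x, y = tip_xy
    h, w = region.shape
    return 0 <= y < h and 0 <= x < w and bool(region[y, x])

def check_and_prompt(region: np.ndarray, tip_xy: tuple, show, speak) -> None:
    # `show` and `speak` are hypothetical callbacks standing in for the
    # surgical advice display area 464 and the voice interaction device 5.
    if instrument_in_danger_zone(region, tip_xy):
        show("Caution: instrument inside boundary line three")
        speak("Caution: approaching protected tissue")

# Example: check_and_prompt(region, (52, 47), print, print)
```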
The specific model specifications of the image receiving part 14, the display part 15, the controller 17, the image display 22, the server 3, the voice interaction device 5, the visible light image acquisition device 6, the AR glasses 8, the MR glasses 9 and the surgical microscope 7 need to be determined by model selection according to the actual specifications of the equipment; since the model selection method adopts the prior art in the field, it is not described in detail here.
The power supply and working principle of the image receiving part 14, the display part 15, the controller 17, the image display 22, the server 3, the voice interaction device 5, the visible light image acquisition device 6, the AR glasses 8, the MR glasses 9 and the surgical microscope 7 will be clear to those skilled in the art and are not described in detail here.
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. The processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. These software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "includes" is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or".
Claims (6)
1. An assist system for a neurosurgical microscope, comprising:
the acquisition device (1) is used for acquiring microscopic images of the operation microscope (7) and uploading the microscopic images to the server (3);
the acquisition device (1) comprises two outer cylinders (11) which are arranged side by side, the two outer cylinders (11) are connected through a fixing frame (13), a controller (17) is arranged in the fixing frame (13), one end, close to an eyepiece (71) of the operation microscope (7), of the outer cylinder (11) is provided with a connecting part (12), the connecting part (12) is connected to the eyepiece (71) of the operation microscope (7), and an image receiving part (14) for capturing microscopic images at the eyepiece (71) of the operation microscope (7), a display part (15) for displaying the images and a lens (16) are sequentially arranged in the two outer cylinders (11);
a mobile display device (2) for displaying the microscopic image processed by the server (3);
the voice interaction device (5) is in communication connection with the server (3) and is used for receiving or uploading voice data to the server (3);
the visible light image acquisition device (6) is in communication connection with the server (3) and is used for acquiring images of the feet of a doctor and uploading the images to the server (3);
the server (3) is provided with a data processing system (4) for identifying the uploaded microscopic image, marking the identified tissue on the microscopic image and sending the marked image to the acquisition device (1) and/or the AR glasses (8) and/or the MR glasses (9) for display, identifying the received voice command and feeding back voice to the voice interaction device (5) for playing, and identifying and executing the corresponding command according to the image of the doctor's foot;
the server (3) is further configured to process the received medical data, or the data uploaded to the server (3), into text and/or graphics displayed on the mobile display device (2), the acquisition device (1) and/or the AR glasses (8) and/or the MR glasses (9), wherein the layouts of the display interfaces (46) on the acquisition device (1), the mobile display device (2), the AR glasses (8) and the MR glasses (9) are kept consistent;
the data processing system (4) comprises a data acquisition module (41), a recognition module (42), a three-dimensional line marking module (43), an extremum boundary marking module (44) and an auxiliary interaction module (45);
the data acquisition module (41) is used for acquiring microscopic images, images of the feet of a doctor and voice data uploaded to the server (3) by the acquisition device (1), the visible light image acquisition device (6) and the voice interaction device (5);
the recognition module (42) is used for recognizing the collected data to obtain a tissue recognition result, an action recognition result and a voice recognition result;
the three-dimensional line marking module (43) is used for marking each tissue according to the tissue recognition result;
the extremum boundary marking module (44) is used for calculating the boundary risk extremum of each tissue according to the marking and the tissue recognition result and marking it with a boundary line;
the auxiliary interaction module (45) is used for applying the marks of the three-dimensional line marking module (43) and the extremum boundary marking module (44) to the microscopic image and displaying them in a microscopic image display area (465) of a display interface (46), and for annotating the tissues on the microscopic image with corresponding text or graphics according to the microscopic image recognition result, likewise displayed in the microscopic image display area (465) of the display interface (46);
the auxiliary interaction module (45) is also used for executing corresponding operation instructions according to the action recognition result;
the auxiliary interaction module (45) is also used for executing corresponding operation instructions according to the voice recognition result;
the auxiliary interaction module (45) is further used for displaying the medical monitoring data of a patient in a physiological data display area (461) of the display interface (46) according to the collected monitoring data;
the auxiliary interaction module (45) is further used for displaying surgical procedure information in a surgical procedure display area (462) of the display interface (46) and displaying surgical reminders in a reminder display area (463) of the display interface (46);
the auxiliary interaction module (45) is further used for analyzing the surgical procedure information, the surgical reminders and the tissue recognition result, and displaying surgical advice as text in a surgical advice display area (464) of the display interface (46) or outputting it as voice through the voice interaction device (5);
the three-dimensional line marking module (43) comprises a tissue boundary marking unit (431) and a tumor boundary marking unit (432);
the tissue boundary marking unit (431) is adapted to identify the edges of each tissue according to the identified tissue type and extent and to mark them with boundary line one;
the tumor boundary marking unit (432) is adapted to identify the edge of the identified tumor according to the identified tissue type and extent and to mark it with boundary line two;
the extremum boundary marking module (44) comprises an extremum boundary calculating unit (441) and an extremum boundary marking unit (442);
the extremum boundary calculating unit (441) is adapted to calculate the boundary risk extremum of each tissue based on the identified tissue type and extent and boundary lines one and two;
the extremum boundary marking unit (442) is adapted to mark the boundary of each tissue with boundary line three according to the calculated boundary risk extremum of each tissue;
boundary lines one, two and three are marked in different colors; the region between boundary line one or two and boundary line three is a danger zone; boundary line three is located on the periphery of boundary line one or boundary line two; and the range of boundary line three differs according to the tissue.
2. An auxiliary system for a neurosurgical microscope according to claim 1, characterized in that the signal output of the controller (17) is connected in communication with the signal inputs of the display (15) and the server (3), and the signal input of the controller (17) is connected in communication with the signal outputs of the image receiving part (14) and the server (3).
3. An auxiliary system for a neurosurgical microscope according to claim 1, characterized in that the mobile display device (2) comprises a mobile stand (21) and an image display (22) mounted on the mobile stand (21), the signal input of the image display (22) being connected in communication with the signal output of the server (3).
4. An auxiliary system for a neurosurgical microscope according to claim 1, characterized in that the signal inputs of the voice interaction means (5), the AR glasses (8) and the MR glasses (9) are connected in communication with the signal output of the server (3), and the signal outputs of the visible light image acquisition means (6) and the voice interaction means (5) are connected in communication with the signal input of the server (3).
5. An auxiliary system for a neurosurgical microscope according to claim 1, wherein the data acquisition module (41) comprises an image collector a (411), an image collector b (412), a voice collector (413) and a monitoring data collector (414);
the image collector a (411) is suitable for collecting microscopic images captured by the collecting device (1);
the image collector b (412) is suitable for collecting the image of the foot of the doctor captured by the visible light image collecting device (6);
the voice collector (413) is suitable for collecting voice output by a doctor;
the monitoring data collector (414) is adapted to collect medical monitoring data of a patient.
6. An auxiliary system for a neurosurgical microscope according to claim 1, characterized in that the recognition module (42) comprises a tissue recognition unit (421), an action recognition unit (422) and a semantic recognition unit (433);
the tissue identification unit (421) is adapted to perform feature extraction on the acquired microscopic image and identify the type and extent of tissue;
the action recognition unit (422) is suitable for extracting action characteristics of the acquired images of the feet of the doctor and recognizing operation instructions corresponding to the actions;
the semantic recognition unit (433) is suitable for performing semantic analysis on the collected voice output by the doctor, and recognizing an operation instruction corresponding to the voice.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410022498.5A CN117562678B (en) | 2024-01-08 | 2024-01-08 | Auxiliary system for neurosurgery microscope |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410022498.5A CN117562678B (en) | 2024-01-08 | 2024-01-08 | Auxiliary system for neurosurgery microscope |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117562678A (en) | 2024-02-20
CN117562678B (en) | 2024-04-12
Family
ID=89864522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410022498.5A Active CN117562678B (en) | 2024-01-08 | 2024-01-08 | Auxiliary system for neurosurgery microscope |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117562678B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108348295A (en) * | 2015-09-24 | 2018-07-31 | 圣纳普医疗(巴巴多斯)公司 | Motor-driven full visual field adaptability microscope |
KR20200065297A (en) * | 2018-11-30 | 2020-06-09 | 숭실대학교산학협력단 | Method for microscope based augmented reality navigation, device and computer readable medium for performing the method |
CN211484971U (en) * | 2019-12-31 | 2020-09-15 | 上海交通大学医学院附属第九人民医院 | Intelligent auxiliary system for comprehensive vision of operation |
CN114173693A (en) * | 2019-07-15 | 2022-03-11 | 外科手术公司 | Augmented reality system and method for remotely supervising surgical procedures |
CN114668497A (en) * | 2022-03-29 | 2022-06-28 | 四川大学华西医院 | Computer-aided liver surgery planning three-dimensional modeling system |
CN114903635A (en) * | 2021-02-10 | 2022-08-16 | 苏州速迈医学科技股份有限公司 | Dental microscopic diagnosis and treatment system |
CN114903590A (en) * | 2022-04-13 | 2022-08-16 | 中南大学湘雅医院 | Morse microsurgery marker information processing method, system and storage medium |
WO2023046630A1 (en) * | 2021-09-22 | 2023-03-30 | Leica Instruments (Singapore) Pte. Ltd. | Surgical microscope system and corresponding system, method and computer program for a surgical microscope system |
CN116829098A (en) * | 2021-02-01 | 2023-09-29 | B·布莱恩新风险投资有限责任公司 | Surgical assistance system with surgical microscope and camera and presentation method |
CN117323002A (en) * | 2023-11-30 | 2024-01-02 | 北京万特福医疗器械有限公司 | Neural endoscopic surgery visualization system based on mixed reality technology |
Also Published As
Publication number | Publication date |
---|---|
CN117562678A (en) | 2024-02-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |