CN118552793B - Postoperative incision healing state identification system based on artificial intelligence - Google Patents
- Publication number: CN118552793B
- Application number: CN202411000912.9A
- Authority: CN (China)
- Prior art keywords: scale, incision state, feature, state, surgical incision
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06V 10/764 — Image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
- G06N 3/0464 — Computing arrangements based on biological models; neural networks; convolutional networks [CNN, ConvNet]
- G06N 3/08 — Neural networks; learning methods
- G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
- G06V 10/40 — Extraction of image or video features
- G06V 10/806 — Fusion, i.e. combining data from various sources, of extracted features
- G06V 10/82 — Image or video recognition or understanding using neural networks
Abstract
The application discloses an artificial intelligence based postoperative incision healing state identification system in the field of intelligent recognition. An image of a patient's surgical incision state is acquired by a camera, and an artificial intelligence and deep learning based image processing and analysis algorithm is introduced at the back end to analyze the image, learning and identifying the hidden features and multi-scale fusion information in the image that relate to the patient's surgical incision state. On this basis, the healing state of the patient's surgical incision is identified and checked to determine whether an abnormality exists. In this way, artificial intelligence and machine learning techniques can provide more scientific and intelligent support for identifying the healing state of a patient's postoperative wound and detecting abnormalities.
Description
Technical Field
The application relates to the field of intelligent recognition, and more particularly relates to an artificial intelligence-based postoperative incision healing state recognition system.
Background
Post-operative incision healing status monitoring is a key element of patient care and recovery after surgery. Post-operative incision healing status identification refers to observing and evaluating a surgical incision to determine whether it is healing normally. This process plays a vital role in postoperative care and follow-up: it allows abnormal wound conditions to be discovered and treated in time, ensures that the patient's incision heals well, and reduces the risk of infection and complications.
However, conventional methods for monitoring the healing state of a postoperative incision generally rely on regular examinations by a doctor: during ward rounds, the appearance of the incision, such as redness, exudate, and swelling, is assessed through visual inspection and empirical judgment to determine how the patient's postoperative wound is healing. This is time-consuming and labor-intensive, and human factors may prevent the patient's wound from being checked accurately or in time, increasing the patient's risk of wound infection and complications.
Accordingly, an optimized post-operative incision healing status recognition system is desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems.
According to one aspect of the present application, there is provided an artificial intelligence based post-operative incision healing status recognition system, comprising:
The surgical incision state image acquisition module is used for acquiring a surgical incision state image captured by the camera;
The surgical incision state HOG feature extraction module is used for extracting HOG features from the surgical incision state image to obtain a surgical incision state HOG feature vector;
The surgical incision state feature boundary compensation module is used for performing boundary compensation based on the surgical incision state feature on the surgical incision state HOG feature vector to obtain a surgical incision state boundary compensation multi-scale feature map;
The surgical incision state characteristic multi-scale sensing module is used for inputting the surgical incision state boundary compensation multi-scale characteristic map into the characteristic multi-scale sensing strengthening module to obtain a strengthened surgical incision state boundary compensation multi-scale characteristic map;
The surgical incision state importance feature guiding and strengthening expression module is used for carrying out joint constraint expression based on cross-modal feature guiding on the strengthening surgical incision state boundary compensation multi-scale feature map based on the surgical incision state HOG feature vector so as to obtain a texture feature guiding surgical incision state multi-scale fusion feature map;
And the surgical incision state abnormality recognition module is used for guiding the surgical incision state multi-scale fusion feature map based on the texture features and determining a recognition result, wherein the recognition result is used for indicating whether abnormality exists or not.
In the above artificial intelligence based postoperative incision healing state identification system, the surgical incision state feature boundary compensation module is used for: inputting the surgical incision state image into an MBCNet comprising a backbone network and a boundary feature extraction branch to obtain the surgical incision state boundary compensation multi-scale feature map.
In the above-mentioned postoperative incision healing state identification system based on artificial intelligence, the operation incision state characteristic multiscale sensing module includes:
The channel perception enhancement unit is used for carrying out channel perception enhancement processing on the operation incision state boundary compensation multi-scale feature map on different branches so as to obtain a first operation incision state boundary compensation multi-scale channel local activation feature vector, a second operation incision state boundary compensation multi-scale channel compression feature map and a receptive field expansion operation incision state boundary compensation global multi-scale activation feature matrix;
The incision state boundary compensation global multi-scale activation unit is used for position-wise multiplying the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix with each feature matrix along the channel dimension of the second surgical incision state boundary compensation multi-scale channel compression feature map to obtain a second channel compression surgical incision state boundary compensation global multi-scale activation feature map;
The incision state boundary compensation multi-scale channel compression local activation unit is used for taking each position characteristic value in the first operation incision state boundary compensation multi-scale channel local activation characteristic vector as a weighting weight, and weighting each characteristic matrix of the second operation incision state boundary compensation multi-scale channel compression characteristic map along the channel dimension to obtain a second operation incision state boundary compensation multi-scale channel compression local activation characteristic map;
the incision state boundary compensation multi-scale fusion unit is used for performing position-wise addition of the second channel compression surgical incision state boundary compensation global multi-scale activation feature map and the second surgical incision state boundary compensation multi-scale channel compression local activation feature map to obtain a second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map;
and the reinforced surgical incision state boundary compensation multi-scale feature extraction unit is used for performing dilated (atrous) convolution encoding on the second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map to obtain the reinforced surgical incision state boundary compensation multi-scale feature map.
In the above artificial intelligence based postoperative incision healing state identification system, the channel perception enhancement unit is used for:
performing point convolution processing on the surgical incision state boundary compensation multi-scale feature map in a first branch to obtain a first surgical incision state boundary compensation multi-scale channel compression feature map; performing global average pooling on the first surgical incision state boundary compensation multi-scale channel compression feature map to obtain a first surgical incision state boundary compensation multi-scale channel compression feature vector; and performing nonlinear activation processing on the first surgical incision state boundary compensation multi-scale channel compression feature vector to obtain the first surgical incision state boundary compensation multi-scale channel local activation feature vector;
performing point convolution processing on the surgical incision state boundary compensation multi-scale feature map in a second branch to obtain the second surgical incision state boundary compensation multi-scale channel compression feature map;
in a third branch, performing dilated (atrous) convolution encoding on the surgical incision state boundary compensation multi-scale feature map to obtain a surgical incision state boundary compensation multi-scale receptive field expansion feature map; performing point convolution processing on the surgical incision state boundary compensation multi-scale receptive field expansion feature map to obtain a receptive field expansion surgical incision state boundary compensation global multi-scale feature matrix; and performing nonlinear activation on the receptive field expansion surgical incision state boundary compensation global multi-scale feature matrix to obtain the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix.
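The three-branch channel perception enhancement and the subsequent fusion described above can be illustrated with a minimal NumPy sketch. This is not the patent's implementation: all function names and weight shapes are hypothetical, the dilated convolution is a toy mean filter with circular padding, and sigmoid stands in for the unspecified nonlinear activation.

```python
import numpy as np

def point_conv(x, w):
    # 1x1 (point) convolution: mixes channels at each spatial position
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dilated_conv3x3(x, rate=2):
    # toy dilated (atrous) 3x3 mean filter per channel, circular padding
    out = np.zeros_like(x)
    for dy in (-rate, 0, rate):
        for dx in (-rate, 0, rate):
            out += np.roll(x, (dy, dx), axis=(1, 2))
    return out / 9.0

def channel_perception_enhance(F, w1, w2, w3):
    # Branch 1: point conv -> global average pooling -> nonlinear activation
    # => channel local activation feature vector
    m1 = point_conv(F, w1)                       # first compression map
    v1 = sigmoid(m1.mean(axis=(1, 2)))           # shape (C_out,)
    # Branch 2: point conv => second channel compression feature map
    m2 = point_conv(F, w2)                       # (C_out, H, W)
    # Branch 3: dilated conv -> point conv -> activation
    # => global multi-scale activation feature matrix (spatial gate)
    rf = dilated_conv3x3(F)                      # receptive-field expansion
    g = sigmoid(point_conv(rf, w3)[0])           # shape (H, W)
    # Fusion: position-wise spatial gating plus channel weighting,
    # combined by position-wise addition, then dilated-conv encoding
    fused = m2 * g[None, :, :] + m2 * v1[:, None, None]
    return dilated_conv3x3(fused)
```

Under these assumptions, the spatial gate `g` plays the role of the "global multi-scale activation feature matrix" and `v1` the "channel local activation feature vector"; both modulate the branch-2 compression map before the final dilated-convolution encoding.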
In the above artificial intelligence based postoperative incision healing state identification system, the surgical incision state importance feature guidance reinforcement expression module is used for: inputting the reinforced surgical incision state boundary compensation multi-scale feature map and the surgical incision state HOG feature vector into a MetaNet-based cross-modal joint constraint encoder to obtain the texture feature guided surgical incision state multi-scale fusion feature map.
In the above artificial intelligence based postoperative incision healing state identification system, the surgical incision state abnormality recognition module is used for: inputting the texture feature guided surgical incision state multi-scale fusion feature map into a classifier-based healing state recognition module to obtain the recognition result, wherein the recognition result is used for indicating whether an abnormality exists.
In the above artificial intelligence based postoperative incision healing state identification system, the surgical incision state abnormality recognition module includes: an unfolding unit, used for unfolding the texture feature guided surgical incision state multi-scale fusion feature map into a classification feature vector based on row vectors or column vectors; a fully-connected encoding unit, used for performing fully-connected encoding on the classification feature vector using a plurality of fully-connected layers of the classifier to obtain an encoded classification feature vector; and a classification result generation unit, used for passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the recognition result.
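The unfold → fully-connected encoding → Softmax pipeline of the abnormality recognition module can be sketched in a few lines of NumPy. Layer sizes, the two-class labeling, and all names are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())              # subtract max for numerical stability
    return e / e.sum()

def classify_fusion_map(feature_map, weights, biases):
    # 1) unfolding unit: flatten the (C, H, W) fusion feature map into a vector
    v = feature_map.reshape(-1)
    # 2) fully-connected encoding unit: dense layers with ReLU in between
    for W, b in zip(weights[:-1], biases[:-1]):
        v = np.maximum(W @ v + b, 0.0)
    # 3) classification result generation unit: Softmax over the classes
    #    (index 0 = healing normally, index 1 = abnormal -- illustrative labels)
    probs = softmax(weights[-1] @ v + biases[-1])
    return int(probs.argmax()), probs
```

For example, with a 3x4x4 fusion map, one 16-unit hidden layer, and a 2-way output layer, the function returns a class index and a probability vector that sums to one.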
Compared with the prior art, the artificial intelligence based postoperative incision healing state identification system provided by the application acquires an image of the patient's surgical incision state through a camera and introduces, at the back end, an artificial intelligence and deep learning based image processing and analysis algorithm to analyze the image, learning and identifying the hidden features and multi-scale fusion information in the image that relate to the patient's surgical incision state. On this basis, the healing state of the patient's surgical incision is identified and checked to determine whether an abnormality exists. In this way, artificial intelligence and machine learning techniques can provide more scientific and intelligent support for identifying the healing state of a patient's postoperative wound and detecting abnormalities.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application together with its embodiments and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a block diagram of an artificial intelligence based post-operative incision healing state identification system in accordance with an embodiment of the present application;
FIG. 2 is a system architecture diagram of an artificial intelligence based post-operative incision healing status recognition system according to an embodiment of the present application;
fig. 3 is a block diagram of a surgical incision state feature multi-scale sensing module in an artificial intelligence based post-operative incision healing state recognition system according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and the claims, the terms "a," "an," and "the" do not refer specifically to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Traditional methods for monitoring the healing state of a postoperative incision generally rely on regular examinations by a doctor: during ward rounds, the appearance of the incision, such as redness, swelling, and exudate, is assessed through visual inspection and empirical judgment to determine how the patient's postoperative wound is healing. This is time-consuming and labor-intensive, and human factors may prevent the patient's wound from being checked accurately or in time, increasing the patient's risk of wound infection and complications. Accordingly, an optimized postoperative incision healing status recognition system is desired.
With the rapid development of artificial intelligence technology, machine learning, particularly deep learning, has made remarkable progress in the fields of image recognition, pattern recognition and the like. The deep learning model can automatically learn features from a large amount of data, and has natural advantages for processing and analyzing complex images. Artificial intelligence is increasingly used in the medical field, and particularly in the aspect of postoperative management, the advantages of accuracy and efficiency are fully exhibited.
In the technical scheme of the application, an artificial intelligence-based postoperative incision healing state identification system is provided. FIG. 1 is a block diagram of an artificial intelligence based post-operative incision healing status recognition system in accordance with an embodiment of the present application. Fig. 2 is a system architecture diagram of an artificial intelligence based post-operative incision healing status recognition system according to an embodiment of the present application. As shown in fig. 1 and 2, an artificial intelligence based post-operative incision healing state identification system 300 according to an embodiment of the present application includes: a surgical incision state image acquisition module 310 for acquiring a surgical incision state image acquired by the camera; a surgical incision state HOG feature extraction module 320, configured to extract HOG features from the surgical incision state image to obtain a surgical incision state HOG feature vector; a surgical incision state feature boundary compensation module 330, configured to perform boundary compensation based on the surgical incision state feature on the surgical incision state HOG feature vector to obtain a surgical incision state boundary compensation multi-scale feature map; the surgical incision state feature multi-scale perception module 340 is configured to input the surgical incision state boundary compensation multi-scale feature map into a feature multi-scale perception enhancement module to obtain an enhanced surgical incision state boundary compensation multi-scale feature map; the surgical incision state importance feature guidance reinforcement expression module 350 is configured to perform cross-modal feature guidance-based joint constraint expression on the reinforcement surgical incision state boundary compensation multi-scale feature map based on the surgical incision state HOG feature vector to obtain a texture feature guidance 
surgical incision state multi-scale fusion feature map; the surgical incision state abnormality recognition module 360 is configured to guide the surgical incision state multi-scale fusion feature map based on the texture feature, and determine a recognition result, where the recognition result is used to indicate whether an abnormality exists.
In particular, the surgical incision state image acquisition module 310 is configured to acquire a surgical incision state image captured by a camera. It should be understood that a surgical incision state image is an image showing the state of a surgical incision, acquired through an imaging technique (such as medical imaging). These images can provide detailed information about the surgical incision and play a key role in identifying and detecting the healing state of a patient's incision after surgery. By analyzing the degree of incision closure, inflammation, tissue reconstruction, and other information in the images, medical staff can judge the healing state.
In particular, the surgical incision state HOG feature extraction module 320 is configured to extract HOG features from the surgical incision state image to obtain a surgical incision state HOG feature vector. It should be appreciated that the histogram of oriented gradients (HOG) is a feature descriptor widely used in computer vision. It builds feature vectors by accumulating the gradient directions and magnitudes of local regions in an image, effectively describing edge and texture information, and thus captures local features that later help to better represent the state of the surgical incision. In addition, HOG features have a degree of invariance to image transformations such as illumination change, scale change, and rotation; they can to some extent overcome the influence of these changes on feature extraction and improve the robustness and stability of the system. Based on this, in the technical solution of the present application, HOG features are extracted from the surgical incision state image to obtain a surgical incision state HOG feature vector.
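The HOG computation described above can be sketched with a minimal NumPy implementation. This is a deliberate simplification (per-cell orientation histograms with per-cell normalization, rather than the full block-normalization scheme of standard HOG); the function name and parameters are illustrative, not taken from the patent.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    # Gradients via central differences; np.gradient returns (d/dy, d/dx)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    H, W = img.shape
    feats = []
    for y in range(0, H - cell + 1, cell):
        for x in range(0, W - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            # magnitude-weighted orientation histogram for this cell
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            n = np.linalg.norm(hist) + 1e-6         # simplified normalization
            feats.append(hist / n)
    return np.concatenate(feats)
```

A 16x16 image with 8x8 cells yields 2x2 cells of 9 bins each, i.e. a 36-dimensional feature vector; in practice a library implementation (e.g. scikit-image's `hog`) would be used instead.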
In particular, the surgical incision state feature boundary compensation module 330 is configured to perform boundary compensation based on the surgical incision state feature on the surgical incision state HOG feature vector to obtain a surgical incision state boundary compensation multi-scale feature map. In one specific example of the application, the surgical incision state image is input into an MBCNet comprising a backbone network and a boundary feature extraction branch to obtain the surgical incision state boundary compensation multi-scale feature map. The surgical incision state image contains feature information at different aspects and scales, all of which is significant for identifying the incision state and detecting anomalies; feature extraction is therefore performed using a convolutional neural network model, which excels at extracting implicit image features. In particular, MBCNet is a deep convolutional neural network for image semantic segmentation and understanding that mainly addresses the loss of boundary information caused by repeated convolution and up-sampling; by adopting a multi-scale fused boundary feature extraction branch, it can improve the accuracy of semantic image understanding. Thus, the surgical incision state image is input into the MBCNet, which comprises two branches: a backbone network and a boundary feature extraction branch. The backbone network extracts global features related to the surgical incision state in the image, while the boundary feature extraction branch extracts boundary feature information of the surgical incision state, so that key features in the image are captured more comprehensively and accurately and the model's understanding of the surgical incision state is enhanced.
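MBCNet's exact architecture is not given in this excerpt, so the following toy NumPy sketch only conveys the two-branch idea: a backbone producing smooth contextual features and a boundary branch recovering the high-frequency edge detail that smoothing discards. All names and operations here are hypothetical stand-ins.

```python
import numpy as np

def mean3x3(x):
    # toy 3x3 mean filter (circular padding via np.roll), standing in
    # for the backbone's convolution and down/up-sampling
    out = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(x, (dy, dx), axis=(0, 1))
    return out / 9.0

def two_branch_feature_map(img):
    img = img.astype(float)
    smooth = mean3x3(img)             # backbone branch: global, contextual features
    boundary = np.abs(img - smooth)   # boundary branch: high-frequency edge response
    # "boundary compensation": stack both branches into one multi-channel map
    return np.stack([smooth, boundary])
```

On a step-edge image the boundary channel responds only near the edge, which is exactly the information lost by the smoothing branch; a real network would learn both branches rather than hard-code them.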
In particular, the surgical incision state feature multi-scale sensing module 340 is configured to input the surgical incision state boundary compensation multi-scale feature map into a feature multi-scale sensing reinforcement module to obtain a reinforced surgical incision state boundary compensation multi-scale feature map. The healing state of a surgical incision exhibits different characteristic information at different feature scales: for example, small texture variations may be more pronounced at smaller scales, while larger scales may be more helpful for identifying the overall healing trend. Moreover, the multi-scale characteristics of the incision state are of great significance for subsequent healing state perception and abnormal condition identification; through analysis and enhancement of the multi-scale features, both the fine and the macroscopic characteristics of the patient's post-operative wound healing state can be captured at the same time, helping the identification system understand the healing state of the incision more accurately. Based on the above, in the technical scheme of the application, the surgical incision state boundary compensation multi-scale feature map is further input into a feature multi-scale perception enhancement module to obtain an enhanced surgical incision state boundary compensation multi-scale feature map. Through the multi-scale feature analysis and strengthening processing of the feature multi-scale perception strengthening module, the system can attend to wound state feature information at different scales simultaneously, and learn the degree of association between wound state features of different scales and the subsequent wound healing recognition task, so that key features are strengthened and expressed.
Therefore, more comprehensive healing characteristics of the surgical incision state can be automatically learned and depicted, and the accuracy and robustness of wound healing state identification are improved. In one specific example of the present application, as shown in fig. 3, the surgical incision state feature multiscale sensing module 340 includes: a channel perception enhancement unit 341, configured to perform channel perception enhancement processing on the surgical incision state boundary compensation multi-scale feature map on different branches to obtain a first surgical incision state boundary compensation multi-scale channel local activation feature vector, a second surgical incision state boundary compensation multi-scale channel compression feature map, and a receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix; an incision state boundary compensation global multi-scale activation unit 342, configured to multiply the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix and each feature matrix of the second surgical incision state boundary compensation multi-scale channel compression feature map along the channel dimension by position points to obtain a second channel compression surgical incision state boundary compensation global multi-scale activation feature map; an incision state boundary compensation multi-scale channel compression local activation unit 343, configured to use each position feature value in the first surgical incision state boundary compensation multi-scale channel local activation feature vector as a weighting weight, and weight each feature matrix of the second surgical incision state boundary compensation multi-scale channel compression feature map along the channel dimension to obtain a second surgical incision state boundary compensation multi-scale channel compression local activation feature map; an incision state boundary compensation multi-scale fusion unit 344, configured to add the second channel compression surgical incision state boundary compensation global multi-scale activation feature map and the second surgical incision state boundary compensation multi-scale channel compression local activation feature map by position to obtain a second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map; and an enhanced surgical incision state boundary compensation multi-scale feature extraction unit 345, configured to perform hole convolution encoding on the second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map to obtain an enhanced surgical incision state boundary compensation multi-scale feature map.
Specifically, the channel perception enhancement unit 341 is configured to perform channel perception enhancement processing on the surgical incision state boundary compensation multi-scale feature map on different branches to obtain the first surgical incision state boundary compensation multi-scale channel local activation feature vector, the second surgical incision state boundary compensation multi-scale channel compression feature map, and the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix. In the embodiment of the application, in a first branch, point convolution processing is performed on the surgical incision state boundary compensation multi-scale feature map to obtain a first surgical incision state boundary compensation multi-scale channel compression feature map; global averaging is performed on the first surgical incision state boundary compensation multi-scale channel compression feature map to obtain a first surgical incision state boundary compensation multi-scale channel compression feature vector; and nonlinear activation processing is performed on the first surgical incision state boundary compensation multi-scale channel compression feature vector to obtain the first surgical incision state boundary compensation multi-scale channel local activation feature vector. In a second branch, point convolution processing is performed on the surgical incision state boundary compensation multi-scale feature map to obtain the second surgical incision state boundary compensation multi-scale channel compression feature map. In a third branch, hole convolution encoding is performed on the surgical incision state boundary compensation multi-scale feature map to obtain a surgical incision state boundary compensation multi-scale receptive field expansion feature map; point convolution processing is performed on the surgical incision state boundary compensation multi-scale receptive field expansion feature map to obtain a receptive field expansion surgical incision state boundary compensation global multi-scale feature matrix; and nonlinear activation is performed on the receptive field expansion surgical incision state boundary compensation global multi-scale feature matrix to obtain the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix.
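A minimal numerical sketch of the three branches of unit 341 (the sigmoid activation, the random weights, the single-matrix output of the third branch, and all tensor sizes are assumptions for illustration, not values taken from the application):

```python
import numpy as np

def point_conv(x, w):
    # 1x1 (point) convolution: mixes channels at each spatial position.
    # x: (C_in, H, W), w: (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

def hole_conv3x3(x, k, rate):
    # 3x3 hole (dilated) convolution with zero padding, applied per channel;
    # a shared (3, 3) kernel is used to keep the sketch short.
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (rate, rate), (rate, rate)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[:, i * rate:i * rate + H, j * rate:j * rate + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
F = rng.standard_normal((8, 16, 16))    # boundary compensation multi-scale feature map
W1 = rng.standard_normal((4, 8)) * 0.1  # branch-1 point conv weights (channel compression)
W2 = rng.standard_normal((4, 8)) * 0.1  # branch-2 point conv weights
W3 = rng.standard_normal((1, 8)) * 0.1  # branch-3 point conv weights (one output matrix)
k = rng.standard_normal((3, 3)) * 0.1

# Branch 1: point conv -> global average pooling -> nonlinear activation
a1 = sigmoid(point_conv(F, W1).mean(axis=(1, 2)))  # local activation vector, shape (4,)

# Branch 2: point conv -> second channel compression feature map
F2 = point_conv(F, W2)                             # shape (4, 16, 16)

# Branch 3: hole conv (receptive-field expansion) -> point conv -> activation
A = sigmoid(point_conv(hole_conv3x3(F, k, rate=3), W3))[0]  # global matrix (16, 16)
```

The design intent the sketch reflects: branch 1 summarizes each channel globally, branch 2 keeps a spatially resolved compressed copy, and branch 3 widens the receptive field before collapsing to a single spatial gate.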
Specifically, the incision state boundary compensation global multi-scale activation unit 342 is configured to multiply the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix and each feature matrix of the second surgical incision state boundary compensation multi-scale channel compression feature map along the channel dimension by position points to obtain the second channel compression surgical incision state boundary compensation global multi-scale activation feature map. It should be appreciated that multiplying the global multi-scale activation feature matrix by each channel of the compressed feature map by position points achieves information fusion across different layers and channels. This helps improve the model's recognition and compensation of the surgical incision state boundary, enables the model to capture the boundary information of the surgical incision more accurately, and improves its understanding and judgment of the surgical incision state.
Specifically, the incision state boundary compensation multi-scale channel compression local activation unit 343 and the incision state boundary compensation multi-scale fusion unit 344 are configured to use each position feature value in the first surgical incision state boundary compensation multi-scale channel local activation feature vector as a weighting weight and weight each feature matrix of the second surgical incision state boundary compensation multi-scale channel compression feature map along the channel dimension to obtain the second surgical incision state boundary compensation multi-scale channel compression local activation feature map; and to add the second channel compression surgical incision state boundary compensation global multi-scale activation feature map and the second surgical incision state boundary compensation multi-scale channel compression local activation feature map by position to obtain the second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map. In this way, the model's ability to understand and identify the surgical incision state can be improved, enabling it to accurately capture the features and boundary information of the surgical incision.
Specifically, the enhanced surgical incision state boundary compensation multi-scale feature extraction unit 345 is configured to perform hole convolution encoding on the second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map to obtain the enhanced surgical incision state boundary compensation multi-scale feature map. It will be appreciated that hole convolution encoding can help the model better capture the detailed features of the surgical incision state, especially in the boundary compensation multi-scale feature map. This helps strengthen the characterization of the surgical incision state in the feature map and improves the model's recognition and understanding of the surgical incision boundary. Here, hole convolution (also called dilated convolution) is a special convolution operation in convolutional neural networks: by introducing holes (defined by an expansion rate, or dilation rate) between the elements of the convolution kernel, the field of view of the kernel can be increased, thereby capturing wider context information.
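Continuing the toy example, units 342–345 can be sketched as follows, where `A`, `F2` and `a1` stand for the three branch outputs of unit 341 (the sigmoid-like values, the tensor sizes, and the dilation rate of 2 for the final hole convolution are assumptions of the sketch):

```python
import numpy as np

def hole_conv3x3(x, k, rate):
    # 3x3 hole (dilated) convolution with zero padding, applied per channel.
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (rate, rate), (rate, rate)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[:, i * rate:i * rate + H, j * rate:j * rate + W]
    return out

rng = np.random.default_rng(0)
A = 1 / (1 + np.exp(-rng.standard_normal((16, 16))))  # global multi-scale activation matrix
F2 = rng.standard_normal((4, 16, 16))                 # second channel compression feature map
a1 = 1 / (1 + np.exp(-rng.standard_normal(4)))        # local activation feature vector
k = rng.standard_normal((3, 3)) * 0.1

G = A[None, :, :] * F2            # unit 342: multiplication by position points per channel
L = a1[:, None, None] * F2        # unit 343: channel-wise weighting with the vector
S = G + L                         # unit 344: addition by position -> fusion activation map
out = hole_conv3x3(S, k, rate=2)  # unit 345: hole convolution encoding of the fusion map
```

Note how the two gating paths are complementary: `G` gates every channel by a shared spatial map, while `L` rescales each whole channel by a scalar before the two are summed.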
To sum up, in the above embodiment, inputting the surgical incision state boundary compensation multi-scale feature map into a feature multi-scale perception enhancement module to obtain an enhanced surgical incision state boundary compensation multi-scale feature map includes: inputting the surgical incision state boundary compensation multi-scale feature map into the feature multi-scale perception enhancement module for processing by the following multi-scale perception enhancement formula to obtain the enhanced surgical incision state boundary compensation multi-scale feature map; wherein the multi-scale perception enhancement formula is:

F′ = Conv(3×3, d=2)( [ σ(Conv(1×1)(Conv(3×3, d=3)(F))) ⊙ Conv(1×1)(F) ] ⊕ [ σ(GAP(Conv(1×1)(F))) ⊗ Conv(1×1)(F) ] )

wherein F is the surgical incision state boundary compensation multi-scale feature map, Conv(3×3, d=3) and Conv(3×3, d=2) are 3×3 hole convolution operations with dilation rates of 3 and 2 respectively, Conv(1×1) is a 1×1 convolution operation, GAP(·) denotes global mean pooling of each feature matrix of the feature map along the channel dimension, σ(·) denotes nonlinear activation processing, ⊙ represents multiplication by position points, ⊗ represents weighted multiplication of the feature map along the channel dimension with a feature vector, ⊕ represents addition by position, and F′ is the enhanced surgical incision state boundary compensation multi-scale feature map.
It should be noted that, in other specific examples of the present application, the surgical incision state boundary compensation multi-scale feature map may be input to the feature multi-scale perception enhancement module in other manners to obtain the enhanced surgical incision state boundary compensation multi-scale feature map, for example: inputting the surgical incision state boundary compensation multi-scale feature map; performing multi-scale feature extraction on the input feature map, which may involve convolution kernels of different sizes or pooling operations to capture feature information at different scales; fusing the extracted multi-scale features, for example by splicing (concatenation) or weighted summation, to integrate the feature information of different scales; performing enhancement processing on the fused features, such as applying an activation function or a normalization operation, to enhance the characterization capability of the features; and outputting the enhanced surgical incision state boundary compensation multi-scale feature map.
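The generic alternative just described can be sketched as follows (the window sizes, the stride-1 average pooling used as the "different scale" operator, the concatenation-based fusion, and the ReLU enhancement are all illustrative choices, not elements of the claimed method):

```python
import numpy as np

def avg_pool_same(x, size):
    # Stride-1 average pooling with zero padding; output keeps the spatial size.
    # Larger windows approximate coarser-scale feature extraction.
    H, W = x.shape
    p = size // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x)
    for i in range(size):
        for j in range(size):
            out += xp[i:i + H, j:j + W]
    return out / (size * size)

x = np.random.default_rng(0).random((16, 16))      # one input feature channel
scales = [avg_pool_same(x, s) for s in (1, 3, 5)]  # multi-scale feature extraction
fused = np.stack(scales)                           # fusion by splicing along channels
enhanced = np.maximum(fused, 0)                    # enhancement, e.g. ReLU activation
```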
In particular, the surgical incision state importance feature guidance reinforcement expression module 350 is configured to perform cross-modal feature guidance-based joint constraint expression on the enhanced surgical incision state boundary compensation multi-scale feature map based on the surgical incision state HOG feature vector to obtain a texture feature guided surgical incision state multi-scale fusion feature map. In one specific example of the present application, the enhanced surgical incision state boundary compensation multi-scale feature map and the surgical incision state HOG feature vector are input into a MetaNet-based cross-modal joint constraint encoder to obtain the texture feature guided surgical incision state multi-scale fusion feature map. It should be understood that the enhanced surgical incision state boundary compensation multi-scale feature map contains multi-scale enhanced features and semantic information related to the surgical incision state, while the surgical incision state HOG feature vector contains feature information such as the texture of the surgical incision state; the two are different feature expression forms of the patient's post-operative incision healing state, and both carry feature information related to the surgical incision state.
Based on this, in order to integrate post-operative incision state feature information from different analysis layers and methods, so that the system can comprehensively utilize wound state features of different categories and improve the expression capability and discriminability of the final wound healing state features, in the technical scheme of the application the enhanced surgical incision state boundary compensation multi-scale feature map and the surgical incision state HOG feature vector are further input into a MetaNet-based cross-modal joint constraint encoder to obtain the texture feature guided surgical incision state multi-scale fusion feature map. Through the processing of the MetaNet-based cross-modal joint constraint encoder, the representation of the enhanced surgical incision state boundary compensation multi-scale feature map can be constrained along the channel dimension by the HOG features of the surgical incision state, so that the implicit and key features of the patient's post-operative incision state become more prominent. That is, by introducing texture features, the system can better capture the texture information in the surgical incision state image; this texture information is significant for identifying the incision healing state, and its guidance of the generation of the surgical incision state multi-scale fusion feature map helps the system characterize the surgical incision state more comprehensively and accurately, improving the performance of the identification system.
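One way to picture the channel-dimension constraint is the following hypothetical simplification (this is not the actual MetaNet encoder: the projection matrix, the HOG vector length, and the sigmoid gating are all assumptions — a learned projection of the HOG vector simply re-weights the channels of the enhanced feature map):

```python
import numpy as np

rng = np.random.default_rng(0)
Fm = rng.standard_normal((8, 16, 16))  # enhanced boundary compensation multi-scale feature map
hog = rng.random(36)                   # surgical incision state HOG feature vector (toy length)

# Hypothetical learned projection of the HOG vector to one weight per channel,
# squashed to (0, 1); channels inconsistent with the texture evidence are damped.
P = rng.standard_normal((8, 36)) * 0.1
w = 1 / (1 + np.exp(-(P @ hog)))

guided = w[:, None, None] * Fm  # texture-feature-guided multi-scale fusion feature map
```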
In particular, the surgical incision state abnormality recognition module 360 is configured to determine a recognition result based on the texture feature guided surgical incision state multi-scale fusion feature map, where the recognition result is used to indicate whether an abnormality exists. In one specific example of the present application, the texture feature guided surgical incision state multi-scale fusion feature map is input into a classifier-based healing state recognition module to obtain the recognition result. That is, the fusion characterization information of the multi-scale feature expression of the surgical incision state, guided by the texture features of the patient's wound state, is classified, so that the healing state of the patient's post-operative incision is identified and detected to judge whether a healing abnormality exists. In this way, more scientific and intelligent support can be provided for the healing state identification and abnormality detection of a patient's post-operative wound based on artificial intelligence and machine learning techniques.
Specifically, the process of inputting the texture feature guided surgical incision state multi-scale fusion feature map into the classifier-based healing state recognition module to obtain the recognition result, which indicates whether an abnormality exists, comprises the following steps: firstly, the texture feature guided surgical incision state multi-scale fusion feature map is expanded into a classification feature vector based on row vectors or column vectors; full-connection encoding is then performed on the classification feature vector using a plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and finally the encoded classification feature vector is passed through a Softmax classification function of the classifier to obtain the recognition result.
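The classifier head just described — flatten, fully connected layers, Softmax — can be sketched numerically (the hidden width of 64, the random weights, and the two-class {normal, abnormal} output are assumptions of the sketch):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
fmap = rng.standard_normal((4, 8, 8))  # texture-feature-guided multi-scale fusion feature map
x = fmap.reshape(-1)                   # expand into a classification feature vector (row-major)

W1 = rng.standard_normal((64, x.size)) * 0.05  # first fully connected layer
b1 = np.zeros(64)
W2 = rng.standard_normal((2, 64)) * 0.05       # output layer over {normal, abnormal}
b2 = np.zeros(2)

h = np.maximum(W1 @ x + b1, 0.0)       # full-connection encoding with ReLU
probs = softmax(W2 @ h + b2)           # Softmax classification function
abnormal = bool(np.argmax(probs) == 1) # recognition result: abnormality present or not
```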
Preferably, inputting the texture feature-guided surgical incision state multi-scale fusion feature map into a classifier-based healing state recognition module to obtain a recognition result includes:
applying a probability activation function, such as a sigmoid or softmax function, to each feature value of the texture feature guided surgical incision state multi-scale fusion feature map to obtain a probabilistic texture feature guided surgical incision state multi-scale fusion feature map;
inputting the texture feature guided surgical incision state multi-scale fusion feature map into the classifier-based healing state identification module to obtain an abnormal probability value;
Determining a class cognitive symbol value based on a comparison of each feature value of the probabilistic texture feature-guided surgical incision state multi-scale fusion feature map with the abnormal probability value, wherein the class cognitive symbol value is equal to one, zero and negative one in response to the probabilistic texture feature-guided surgical incision state multi-scale fusion feature map feature values being greater than, equal to and less than the abnormal probability value, respectively;
calculating the mean of all feature values of the probabilistic texture feature guided surgical incision state multi-scale fusion feature map to obtain a quasi-integral phase shift value;
multiplying each feature value of the probabilistic texture feature guided surgical incision state multi-scale fusion feature map by the class cognitive symbol value and by the quasi-integral phase shift value respectively, performing a weighted difference calculation, and taking the absolute value to obtain the optimized feature values of the probabilistic texture feature guided surgical incision state multi-scale fusion feature map;
And inputting the optimized texture feature-guided surgical incision state multi-scale fusion feature map composed of the optimized feature values into a healing state identification module based on a classifier to obtain an identification result.
The concrete steps are represented by the following formulas:

s_i = sgn(p_i − p_a)

p′_i = | α · s_i · p_i − β · p̄ · p_i |

wherein p_i and p′_i are the feature values before and after optimization of the probabilistic texture feature guided surgical incision state multi-scale fusion feature map, p_a is the abnormal probability value, sgn(·) is the class-cognition sign function taking values one, zero and negative one, α and β are hyper-parameters, and p̄ is the mean of all the feature values of the probabilistic texture feature guided surgical incision state multi-scale fusion feature map, namely the quasi-integral phase shift value.
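The optimization steps above map directly onto array operations; in the following sketch the sigmoid activation, the abnormal probability value of 0.5, and the hyper-parameter values are assumptions chosen only to make the example concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal((4, 8, 8))  # texture-feature-guided fusion feature map
p = 1 / (1 + np.exp(-f))            # probability activation (sigmoid assumed)
p_a = 0.5                           # abnormal probability value from the classifier (assumed)
alpha, beta = 1.0, 0.5              # hyper-parameters (illustrative values)

s = np.sign(p - p_a)                # class-cognition symbol values in {-1, 0, 1}
p_bar = p.mean()                    # quasi-integral phase shift value (mean of all values)
p_opt = np.abs(alpha * s * p - beta * p_bar * p)  # weighted difference, absolute value
```

The optimized map `p_opt` would then be fed to the classifier-based healing state identification module in place of the raw probabilistic map.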
Here, the enhanced surgical incision state boundary compensation multi-scale feature map expresses a multi-scale perceptually enhanced skeleton-boundary image semantic feature distribution of the surgical incision state image. After the enhanced surgical incision state boundary compensation multi-scale feature map and the surgical incision state HOG feature vector are input into the MetaNet-based cross-modal joint constraint encoder, the channel distribution of the enhanced surgical incision state boundary compensation multi-scale feature map is constrained based on the HOG feature distribution of the surgical incision state image expressed by the surgical incision state HOG feature vector. This causes a prior-posterior probability causal association loss of the image semantic feature representation of the texture feature guided surgical incision state multi-scale fusion feature map relative to the enhanced surgical incision state boundary compensation multi-scale feature map, affecting the accuracy of the classification result.
Therefore, in the optimization process, the class cognitive phase transformation response of the texture feature guided surgical incision state multi-scale fusion feature map is obtained by comparing the probability amplitude of each feature value of the map with the class probability; an invariance transformation of the feature distribution order, based on a quasi-differential distribution expansion, is then applied to the phase shift response of the feature values relative to the class probability representation of the whole feature map. This realizes a causal constraint of the posterior class probability of the texture feature guided surgical incision state multi-scale fusion feature map on the prior feature distribution representation, and improves the accuracy of the recognition result obtained by the classifier-based healing state identification module. In this way, the healing state of the patient's post-operative incision can be identified and detected more accurately, so as to judge whether an abnormal healing condition exists.
As described above, an artificial intelligence based post-operative incision healing state recognition system 300 according to an embodiment of the present application may be implemented in various wireless terminals, such as a server or the like having an artificial intelligence based post-operative incision healing state recognition algorithm. In one possible implementation, an artificial intelligence based post-operative incision healing status identification system 300 according to embodiments of the present application may be integrated into a wireless terminal as a software module and/or hardware module. For example, the artificial intelligence based post-operative incision healing status recognition system 300 may be a software module in the operating system of the wireless terminal or may be an application developed for the wireless terminal; of course, the artificial intelligence based post-operative incision healing status recognition system 300 could equally be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the artificial intelligence based post-operative incision healing state identification system 300 and the wireless terminal may also be separate devices, and the artificial intelligence based post-operative incision healing state identification system 300 may be connected to the wireless terminal through a wired and/or wireless network and communicate interactive information in accordance with an agreed data format.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (5)
1. An artificial intelligence based post-operative incision healing state identification system, comprising:
the surgical incision state image acquisition module is used for acquiring a surgical incision state image acquired by the camera;
The surgical incision state HOG feature extraction module is used for extracting HOG features from the surgical incision state image to obtain a surgical incision state HOG feature vector;
The surgical incision state feature boundary compensation module is used for performing boundary compensation based on the surgical incision state feature on the surgical incision state HOG feature vector to obtain a surgical incision state boundary compensation multi-scale feature map;
The surgical incision state characteristic multi-scale sensing module is used for inputting the surgical incision state boundary compensation multi-scale characteristic map into the characteristic multi-scale sensing strengthening module to obtain a strengthened surgical incision state boundary compensation multi-scale characteristic map;
The surgical incision state importance feature guiding and strengthening expression module is used for carrying out joint constraint expression based on cross-modal feature guiding on the strengthening surgical incision state boundary compensation multi-scale feature map based on the surgical incision state HOG feature vector so as to obtain a texture feature guiding surgical incision state multi-scale fusion feature map;
The surgical incision state abnormality recognition module is used for guiding the surgical incision state multi-scale fusion feature map based on the texture features and determining a recognition result, wherein the recognition result is used for representing whether abnormality exists or not;
the surgical incision state characteristic multi-scale sensing module comprises:
The channel perception enhancement unit is used for carrying out channel perception enhancement processing on the operation incision state boundary compensation multi-scale feature map on different branches so as to obtain a first operation incision state boundary compensation multi-scale channel local activation feature vector, a second operation incision state boundary compensation multi-scale channel compression feature map and a receptive field expansion operation incision state boundary compensation global multi-scale activation feature matrix;
The incision state boundary compensation global multi-scale activation unit is used for multiplying the receptive field expansion operation incision state boundary compensation global multi-scale activation feature matrix and each feature matrix of the second operation incision state boundary compensation multi-scale channel compression feature map along the channel dimension according to position points to obtain a second channel compression operation incision state boundary compensation global multi-scale activation feature map;
The incision state boundary compensation multi-scale channel compression local activation unit is used for taking each position characteristic value in the first operation incision state boundary compensation multi-scale channel local activation characteristic vector as a weighting weight, and weighting each characteristic matrix of the second operation incision state boundary compensation multi-scale channel compression characteristic map along the channel dimension to obtain a second operation incision state boundary compensation multi-scale channel compression local activation characteristic map;
the incision state boundary compensation multi-scale fusion unit is used for carrying out position addition on the second channel compression operation incision state boundary compensation global multi-scale activation characteristic diagram and the second operation incision state boundary compensation multi-scale channel compression local activation characteristic diagram to obtain a second channel compression operation incision state boundary compensation multi-scale fusion activation characteristic diagram;
The reinforced surgical incision state boundary compensation multi-scale feature extraction unit is used for performing hole convolution encoding on the second channel compression surgical incision state boundary compensation multi-scale fusion activation feature map to obtain a reinforced surgical incision state boundary compensation multi-scale feature map;
The channel perception strengthening unit is used for:
performing point convolution processing on the surgical incision state boundary compensation multi-scale feature map in a first branch to obtain a first surgical incision state boundary compensation multi-scale channel compression feature map; performing global averaging on the first surgical incision state boundary compensation multi-scale channel compression feature map to obtain a first surgical incision state boundary compensation multi-scale channel compression feature vector; and performing nonlinear activation processing on the first surgical incision state boundary compensation multi-scale channel compression feature vector to obtain the first surgical incision state boundary compensation multi-scale channel local activation feature vector;
performing point convolution processing on the surgical incision state boundary compensation multi-scale feature map in a second branch to obtain the second surgical incision state boundary compensation multi-scale channel compression feature map;
in a third branch, performing hole convolution encoding on the surgical incision state boundary compensation multi-scale feature map to obtain a surgical incision state boundary compensation multi-scale receptive field expansion feature map; performing point convolution processing on the surgical incision state boundary compensation multi-scale receptive field expansion feature map to obtain a receptive field expansion surgical incision state boundary compensation global multi-scale feature matrix; and performing nonlinear activation on the receptive field expansion surgical incision state boundary compensation global multi-scale feature matrix to obtain the receptive field expansion surgical incision state boundary compensation global multi-scale activation feature matrix.
2. The artificial intelligence based postoperative incision healing state identification system according to claim 1, wherein the surgical incision state feature boundary compensation module is used for: inputting the surgical incision state image into an MBCNet-based network comprising a backbone network and a boundary feature extraction branch to obtain the surgical incision state boundary compensation multi-scale feature map.
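The claim does not disclose MBCNet's internals, so the following is only a hypothetical sketch of the backbone-plus-boundary-branch idea: a pooled backbone feature and a finite-difference gradient map standing in for the boundary feature extraction branch, concatenated along channels.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on (C, H, W); H and W must be even."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def boundary_branch(img):
    """Finite-difference gradient magnitude as a stand-in boundary feature."""
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return gx + gy

rng = np.random.default_rng(1)
image = rng.random((32, 32))  # toy grayscale incision-state image

backbone_feat = avg_pool2(image[None, :, :])               # (1, 16, 16) coarse features
edge_map = avg_pool2(boundary_branch(image)[None, :, :])   # (1, 16, 16) boundary cue

# Channel concatenation yields a boundary-compensated feature map
fused = np.concatenate([backbone_feat, edge_map], axis=0)  # (2, 16, 16)
```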
3. The artificial intelligence based postoperative incision healing state identification system according to claim 2, wherein the surgical incision state importance feature guided reinforced expression module is used for: inputting the reinforced surgical incision state boundary compensation multi-scale feature map and the surgical incision state HOG feature vector into a MetaNet-based cross-modal joint constraint encoder to obtain the texture feature guided surgical incision state multi-scale fusion feature map.
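The MetaNet encoder itself is not specified in the claim. As a hedged illustration of texture-guided fusion, the sketch below computes a toy HOG-style orientation histogram (real HOG uses cells and block normalization) and uses a hypothetical learned projection to gate the feature-map channels.

```python
import numpy as np

def hog_vector(img, bins=9):
    """Toy HOG descriptor: one global histogram of unsigned gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

rng = np.random.default_rng(2)
image = rng.random((32, 32))
feat = rng.standard_normal((8, 16, 16))  # reinforced multi-scale feature map

hog = hog_vector(image)                  # (9,) texture descriptor

# Hypothetical cross-modal constraint: project HOG to per-channel sigmoid gates
w_gate = 0.5 * rng.standard_normal((8, 9))
gates = 1.0 / (1.0 + np.exp(-(w_gate @ hog)))  # (8,)
guided = feat * gates[:, None, None]           # texture-guided fusion feature map
```

Gating channels by a texture descriptor is one common way to let a hand-crafted modality constrain a learned one; the actual joint-constraint mechanism in the patent may differ.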
4. The artificial intelligence based postoperative incision healing state identification system according to claim 3, wherein the surgical incision state abnormality identification module is configured to: input the texture feature guided surgical incision state multi-scale fusion feature map into a classifier-based healing state identification module to obtain an identification result, wherein the identification result is used for indicating whether an abnormality exists.
5. The artificial intelligence based postoperative incision healing state identification system according to claim 4, wherein the surgical incision state abnormality identification module comprises:
the unfolding unit, used for unfolding the texture feature guided surgical incision state multi-scale fusion feature map into a classification feature vector along its row vectors or column vectors;
the fully connected encoding unit, used for performing fully connected encoding on the classification feature vector with a plurality of fully connected layers of the classifier to obtain an encoded classification feature vector;
and the classification result generation unit, used for passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the identification result.
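The unfold, fully-connected-encode, and Softmax steps of claim 5 can be sketched end to end. This is a minimal NumPy sketch with hypothetical layer sizes and random weights; two output classes (normal/abnormal) are assumed from the abnormality-indication wording.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(3)
fused_map = rng.standard_normal((8, 16, 16))  # texture-guided fusion feature map

# Unfolding unit: row-major flattening into one classification feature vector
clf_vec = fused_map.reshape(-1)               # (2048,)

# Fully connected encoding unit: two FC layers with ReLU in between
w1, b1 = 0.01 * rng.standard_normal((64, clf_vec.size)), np.zeros(64)
w2, b2 = 0.01 * rng.standard_normal((2, 64)), np.zeros(2)
hidden = np.maximum(w1 @ clf_vec + b1, 0.0)

# Classification result generation unit: Softmax over the two classes
probs = softmax(w2 @ hidden + b2)             # [P(no abnormality), P(abnormality)]
is_abnormal = bool(probs[1] > 0.5)
```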
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411000912.9A CN118552793B (en) | 2024-07-25 | 2024-07-25 | Postoperative incision healing state identification system based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118552793A (en) | 2024-08-27
CN118552793B true CN118552793B (en) | 2024-09-24 |
Family
ID=92446503
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411000912.9A Active CN118552793B (en) | 2024-07-25 | 2024-07-25 | Postoperative incision healing state identification system based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118552793B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112289447A (en) * | 2020-10-30 | 2021-01-29 | 四川大学华西医院 | Surgical incision healing grade discrimination system |
CN115700759A (en) * | 2022-11-14 | 2023-02-07 | 上海微创医疗机器人(集团)股份有限公司 | Medical image display method, medical image processing method, and image display system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6753915B2 (en) * | 2018-11-28 | 2020-09-09 | 株式会社東芝 | Image processing equipment, image processing methods, image processing programs and image processing systems |
KR20230147967A (en) * | 2022-04-15 | 2023-10-24 | 충남대학교산학협력단 | Apparatus, system, method, computer-readable storage medium and computer program for guiding lesion incision |
CN117315235A (en) * | 2023-10-16 | 2023-12-29 | 四川大学华西医院 | Deep learning-based surgical incision target detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||