CN111493935B - Artificial intelligence-based automatic prediction and identification method and system for echocardiogram
- Publication number: CN111493935B (application CN202010353559.8A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B8/00 — Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/0883 — Detecting organic movements or changes for diagnosis of the heart
- A61B8/488 — Diagnostic techniques involving Doppler signals
- A61B8/52 — Devices using data or image processing specially adapted for diagnosis
- A61B8/5207 — Processing of raw data to produce diagnostic data, e.g. for generating an image
Abstract
The application discloses an artificial intelligence-based method and system for automatically predicting and identifying features in echocardiograms, wherein the method comprises the following steps: acquiring a color Doppler video of at least one section of an echocardiogram of a subject; extracting each video frame in the color Doppler video and inputting it into a trained convolutional neural network to obtain an N-dimensional feature vector for each frame; passing the N-dimensional feature vector of each frame through an attention module to generate the weight of that frame; computing the weighted sum of the N-dimensional feature vectors of all frames with these weights to obtain an overall feature representation of the color Doppler video; and, based on the overall feature representation, computing a predicted value that the video contains the pre-identified image features. With this method, whether the image features to be identified are present in an echocardiogram can be predicted accurately.
Description
Technical Field
The application relates to the technical field of intelligent identification of medical video images, and in particular to an artificial intelligence-based method and system for automatic prediction and identification in echocardiograms.
Background
Echocardiography is currently one of the most important methods for assessing cardiac structure and function. To identify heart valve regurgitation features in an echocardiogram, the physician typically looks for abnormal color flow by visual inspection. This judgment is sensitive to the velocity range and color gain settings, so it may over- or underestimate the severity of the feature and is not suitable for accurate assessment of valve regurgitation. In addition, methods such as the vena contracta method and continuous-wave Doppler can quantify the degree of valve regurgitation, but echocardiographic image acquisition, measurement, analysis and interpretation show marked inter-individual differences and depend heavily on the experience and skill of the operator, so accuracy and consistency are difficult to guarantee, which often complicates clinical identification.
Compared with manual operation by specialist physicians, artificial intelligence has great advantages in the automatic measurement, analysis and interpretation of cardiac ultrasound images. It can standardize data analysis and identification, eliminating interference from subjective factors, reducing inter- and intra-observer variability in cardiac ultrasound interpretation, and improving the accuracy and consistency of image interpretation. Artificial intelligence also makes cardiac ultrasound more efficient and better matched to clinical practice, greatly improving the efficiency of medical work and reducing medical costs and the burden on families and society.
Although processing medical images with artificial intelligence has become a research hotspot in recent years, most existing work uses traditional machine learning algorithms such as decision trees, clustering, Bayesian classification, support vector machines and expectation maximization (EM). These algorithms do not use the dynamic video of the echocardiogram: they classify randomly extracted single ultrasound images, losing the motion information of the heart. Color Doppler video also contains the flow direction of blood within the heart, which helps identify valve regurgitation; if this information is ignored, classification is not accurate enough and the practical effect is mediocre.
Disclosure of Invention
In view of the above defects or shortcomings in the prior art, the present application provides an artificial intelligence-based method and system for automatic prediction and identification in echocardiograms. The method uses a deep learning model designed specifically for echocardiograms, automatically predicts the pre-identified feature in the echocardiogram, and outputs the video frame most relevant to that feature, thereby greatly improving the accuracy of artificial intelligence processing of echocardiograms, meeting clinical requirements, and reducing the workload of medical staff.
The first aspect of the invention provides an artificial intelligence-based echocardiogram automatic prediction and identification method, comprising the following steps:
acquiring a color Doppler video of at least one section in an echocardiogram of a detected object;
extracting each video frame in the color Doppler video, and inputting each video frame into a trained convolutional neural network to obtain an N-dimensional feature vector corresponding to each video frame;
generating the weight corresponding to each video frame by the N-dimensional feature vector of each video frame through an attention module;
calculating a weighted sum of the N-dimensional feature vectors of each video frame by using the weights to obtain an overall feature representation of the color Doppler video;
and calculating a predicted value of the color Doppler video containing the pre-identified image characteristics based on the overall characteristic representation.
Further, the method also comprises the following steps: and outputting the frame with the maximum weight as a key frame.
Further, the at least one slice comprises an apical four-chamber cardiac slice.
Further, the pre-identified image feature is valve regurgitation.
Further, the method also comprises the following steps:
measuring the relative area of the left atrial region in the key frame;
measuring the relative area of the regurgitant jet within the left atrial region in the key frame;
calculating the ratio of the relative area of the regurgitant jet to the relative area of the left atrium.
Further, before inputting each video frame to the pre-trained convolutional neural network, the method further comprises: and inputting the color Doppler video of at least one section of the echocardiogram containing the pre-identified image characteristics into a pre-trained convolutional neural network as a training sample, and training the pre-trained convolutional neural network.
Further, the method also comprises the following steps:
stopping the training of the convolutional neural network when the loss function L no longer decreases, or when the value of the loss function L falls below a predetermined value;
wherein L = L_c + β·L_s, L_c represents a classification loss calculated at the video level, L_s represents the sparse loss used to adjust the weight of each video frame, and β is a normalization constant of L_s;
wherein L_c = −(1/N)·Σ_{n=1}^{N} [y_n·log(p_n) + (1 − y_n)·log(1 − p_n)], y_n indicates whether the nth color Doppler video contains the pre-identified image features, n ∈ (1, 2, …, N), N is the total number of color Doppler videos; if y_n = 0, the nth color Doppler video does not contain the pre-identified image features; if y_n = 1, the nth color Doppler video contains the pre-identified image features; p_n is the predicted value that the nth color Doppler video contains the pre-identified image features; F_n is the overall feature representation of the nth color Doppler video, and L_c is then the cross entropy of y_n and p_n;
wherein L_s = (1/N)·Σ_{n=1}^{N} Σ_{t=1}^{T} |a_t^n|, a_t^n represents the weight of the t-th video frame in the nth color Doppler video, t ∈ (1, 2, …, T), T is the number of frames in the video; n ∈ (1, 2, …, N), N is the total number of color Doppler videos.
The second aspect of the present invention also provides an echocardiogram automatic prediction and recognition system based on artificial intelligence, which comprises:
the video acquisition module is used for acquiring a color Doppler video of at least one section in an echocardiogram of the detected object;
the input extraction module is used for extracting each video frame in the color Doppler video and inputting each video frame to a trained convolutional neural network so as to obtain an N-dimensional feature vector corresponding to each video frame;
the weight generating module is used for generating the weight corresponding to each video frame by the N-dimensional characteristic vector of each video frame through the attention module;
the overall characteristic calculation module is used for calculating the weighted sum of the N-dimensional characteristic vectors of each video frame by using the weights so as to obtain the overall characteristic representation of the color Doppler video;
and the prediction output module is used for calculating a prediction value of the color Doppler video containing the pre-identified image characteristics based on the overall characteristic representation.
Further, the method also comprises the following steps: and the key frame output module is used for outputting the frame with the maximum weight as a key frame.
Further, the at least one section comprises an apical four-chamber section; the pre-identified image feature is valve regurgitation; the system further comprises: a pre-identified image feature measurement module for measuring the relative area of the left atrial region in the keyframe; measuring the relative area of the regurgitation stream within the left atrial region in said keyframe; calculating a ratio of the relative area of the back flow stream to the relative area of the left atrium.
In summary, the artificial intelligence-based echocardiogram automatic prediction and identification method and system of the invention identify important cardiac ultrasound image features from a video, learn, based on a new deep neural network, how to measure the importance of each frame in the video, and automatically select a sparse subset of representative frames to predict the video-level classification. The method and system can accurately identify mitral regurgitation image features in echocardiograms, and can equally be used to accurately identify cardiac ultrasound image features such as tricuspid regurgitation, aortic regurgitation, pulmonary regurgitation, atrial septal defect, ventricular septal defect, and patent ductus arteriosus.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of an artificial intelligence-based echocardiography automatic prediction identification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an echocardiogram automatic prediction identification method with a key frame output function according to another embodiment of the present invention;
FIG. 3 is a flowchart of an echocardiogram automatic prediction identification method with a saliency estimation function according to another embodiment of the present invention;
FIG. 4 is a flow chart of a method for automated predictive identification of an echocardiogram with a pre-training function according to another embodiment of the present invention;
FIG. 5 is a functional block diagram of an echocardiographic automatic predictive identification system according to another embodiment of the present invention;
FIG. 6 is a block diagram of an electronic device according to another embodiment of the present invention;
FIG. 7 is a component assembly diagram of an electronic device according to another embodiment of the invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The echocardiograms described herein are ultrasound images that examine the anatomical and functional state of the heart and great vessels using the special physical characteristics of ultrasound.
Referring to fig. 1, an artificial intelligence based echocardiography automatic prediction identification method is shown according to an embodiment of the present invention. In the present embodiment, the mitral regurgitation feature in the echocardiogram is predicted and identified as an example, but the method proposed in the present embodiment is also applicable to the prediction and identification of ultrasound images of other heart blood flow features such as tricuspid regurgitation, aortic regurgitation, pulmonary regurgitation, atrial septal defect, ventricular septal defect, patent ductus arteriosus, and the like.
Step S101, acquiring a color Doppler video of at least one section in an echocardiogram of a detected object.
The most common Doppler ultrasound techniques are pulsed-wave Doppler, continuous-wave Doppler, and color Doppler flow imaging. Color Doppler is an area-display visualization technique in which many beams are transmitted into, and received back from, the same region. By convention, blood flow toward the probe is displayed in red and blood flow away from the probe in blue. The orifices of the heart and great vessels normally carry only antegrade flow; once retrograde flow appears, an abnormality should be considered.
Specifically, during the ultrasound examination, original echocardiogram images containing a plurality of sequences can be acquired for different postures of the subject and different sections of the heart, each sequence corresponding to one section in the examination.
In this embodiment, a color Doppler video of at least one section of the echocardiogram (for example, the video of one section, or a combination of videos of multiple sections) is first acquired; the video consists of multiple video frames and serves as the input of the deep learning module. Video, rather than a single image, is used as input because video provides extra information in the time dimension, including frame-to-frame changes; a single image often loses cardiac motion information and therefore tends to produce inaccurate classification. In addition, color Doppler video shows the direction of blood flow through the valve, making the features more salient and further improving the classification accuracy of the algorithm.
Step S102, extracting each video frame in the color Doppler video, and inputting each video frame into a trained convolutional neural network to obtain an N-dimensional feature vector corresponding to each video frame.
Specifically, the color doppler video is composed of a group of video frames, each video frame can be understood as a frame of ultrasound image, all T video frames in a single video are extracted first, and then input into the trained convolutional neural network frame by frame. The present embodiment is not limited to the type of the convolutional neural network, and for example, an I3D network that is a combination of a dual-stream network and a 3D convolution may be used, or a ResNet residual network may be used.
After passing through the ResNet network, each video frame yields an N-dimensional feature vector f_t, where t is the index of the video frame, t ∈ (1, 2, …, T), and T is the number of frames in the video. The dimension N reflects the amount of information retained from the corresponding frame; N can be neither too small nor too large: if N is too small, the frame representation contains too little feature information, and if N is too large, it contains too much useless information and wastes computational resources. In this embodiment N is preferably 1024; in practice, an appropriate value of N can be found through repeated manual trials.
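As a rough sketch of this per-frame extraction step (the trained ResNet is replaced here by a fixed random linear projection; `extract_features`, the toy frame sizes, and `w_proj` are illustrative assumptions, not the patent's implementation), each of the T frames is mapped to an N = 1024-dimensional vector f_t:

```python
import numpy as np

def extract_features(frames: np.ndarray, w_proj: np.ndarray) -> np.ndarray:
    """Map each video frame to an N-dimensional feature vector.

    frames: (T, H, W, 3) video; in practice a trained CNN (e.g. ResNet)
    would produce the features -- here a linear projection stands in.
    Returns an array of shape (T, N), one feature vector per frame.
    """
    t, h, w, c = frames.shape
    flat = frames.reshape(t, h * w * c).astype(np.float64)
    return flat @ w_proj

rng = np.random.default_rng(0)
T, H, W, N = 16, 8, 8, 1024            # tiny toy sizes; N = 1024 as in the text
frames = rng.random((T, H, W, 3))
w_proj = rng.standard_normal((H * W * 3, N)) * 0.01
feats = extract_features(frames, w_proj)
print(feats.shape)
```

The point of the sketch is only the shape flow: a T-frame video becomes a (T, N) matrix of per-frame feature vectors that the attention module consumes next.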
Step S103, generating the weight corresponding to each video frame by the attention module according to the N-dimensional feature vector of each video frame.
In clinical examination, often only a few key frames in a color Doppler video carry the feature information indicating whether the sample contains regurgitation. Whether a video frame carries such feature information, and how much, determines its weight; the weight of the t-th frame is denoted a_t, where t ∈ (1, 2, …, T) and T is the number of frames in the video.
The weights are generated by feeding the N-dimensional feature vectors f_t output by the ResNet network into an attention module. The attention module is an attention model, widely used in deep learning, that mimics the human brain: when looking at a picture, although the whole picture is visible, close inspection focuses the eyes on only a small patch, and the brain attends mainly to that patch; in other words, the brain's attention over the whole picture is not uniform but is differentiated by weights. After the N-dimensional feature vector f_t passes through the attention module, the weight a_t of each video frame is obtained. Since concrete implementations of attention modules are widely used in deep learning models, the details are omitted in this embodiment.
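A minimal sketch of such an attention module, assuming a small per-frame scorer followed by a sigmoid so that each weight lies in (0, 1) independently (consistent with the sparsity discussion later in the text; the hidden size, parameter names, and random initialization are illustrative assumptions, not the patent's exact design):

```python
import numpy as np

def attention_weights(feats: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Per-frame attention: hidden layer + scalar score + sigmoid.

    feats: (T, N) frame feature vectors.
    Returns a (T,) array of weights a_t, each in (0, 1).
    """
    hidden = np.tanh(feats @ w1)            # (T, D) hidden representation
    scores = hidden @ w2                    # (T,) one scalar score per frame
    return 1.0 / (1.0 + np.exp(-scores))    # sigmoid keeps each a_t in (0, 1)

rng = np.random.default_rng(1)
T, N, D = 12, 1024, 64
feats = rng.standard_normal((T, N))
a = attention_weights(feats,
                      rng.standard_normal((N, D)) * 0.05,
                      rng.standard_normal(D) * 0.05)
print(a.shape)
```

A sigmoid rather than a softmax is used here so that the later L1 sparsity loss can actually push unimportant frames toward weight 0 while key frames stay near 1.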
And step S104, calculating the weighted sum of the N-dimensional feature vectors of each video frame by using the weights so as to obtain the overall feature representation of the color Doppler video.
Specifically, each video frame corresponds to a 1024-dimensional feature vector f_t representing that frame, and the overall feature representation of the video is computed as the weighted sum of the per-frame feature vectors with their weights:

F = Σ_{t=1}^{T} a_t · f_t

wherein F is the overall feature representation of the video, f_t is the feature representation of the t-th frame, a_t is the weight of the t-th frame, and T is the number of frames in the video.
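As a quick numeric check of this weighted pooling (the frame vectors and weights below are illustrative toy values; N is shrunk to 4 for readability):

```python
import numpy as np

# Per-frame feature vectors f_t for a 3-frame video (N = 4 here for readability)
f = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 3.0],
              [2.0, 2.0, 1.0, 1.0]])
a = np.array([0.9, 0.05, 0.05])          # frame weights a_t

F = a @ f                                # overall representation: sum_t a_t * f_t
print(F)
```

A dominant weight on the first frame makes the pooled vector F resemble that frame's features, which is exactly how a single salient key frame drives the video-level representation.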
Step S105, calculating a predicted value containing the pre-identified image features through an FC fully connected layer and a sigmoid activation function, based on the overall feature representation.

After the overall feature representation F of the video is obtained, the final predicted value p is produced by the two-layer operation of the FC layer and the sigmoid activation function; p lies between 0 and 1 and represents the probability that the mitral regurgitation feature exists in the sample video, namely:

p = sigmoid(FC(F))
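A hedged sketch of this prediction head (a single linear layer plus sigmoid with untrained, randomly initialized parameters; the text describes an FC layer and sigmoid, and the exact layer sizes are assumptions):

```python
import numpy as np

def predict(F: np.ndarray, w: np.ndarray, b: float) -> float:
    """FC layer followed by sigmoid: returns a probability p in (0, 1)."""
    logit = float(F @ w + b)               # linear (FC) layer with one output
    return 1.0 / (1.0 + np.exp(-logit))    # sigmoid activation

rng = np.random.default_rng(2)
N = 1024
F = rng.standard_normal(N)                 # overall video feature representation
w = rng.standard_normal(N) * 0.01          # illustrative, untrained parameters
p = predict(F, w, 0.0)
print(0.0 < p < 1.0)
```

Whatever the pooled feature F contains, the sigmoid guarantees the output is interpretable as the probability that the pre-identified feature is present.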
referring to fig. 2, it is preferable to further include on the basis of the embodiment shown in fig. 1:
Weight ofThe largest video frame means the video frame which shows the mitral regurgitation characteristic most obviously or contributes most, the key frame is automatically output and generated on the ultrasonic report, the operation step of selecting the screenshot by a doctor is saved, and the generation efficiency of the ultrasonic report is improved.
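Selecting the key frame then reduces to an argmax over the per-frame weights (the values below are illustrative):

```python
import numpy as np

a = np.array([0.10, 0.05, 0.92, 0.30])   # per-frame attention weights a_t
key_idx = int(np.argmax(a))              # index of the maximum-weight frame
print(key_idx)                           # this frame would go on the report
```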
Further, the at least one section in the above embodiments preferably comprises the apical four-chamber (A4C) view. The A4C view is a standard section in clinical echocardiography and the most widely used important section; image features such as mitral regurgitation can be identified more accurately through it.
Referring to fig. 3, preferably, on the basis that whether the pre-identified image features exist in the color doppler video of the detected object is identified according to the embodiment shown in fig. 1, the method further includes:
Step S107, estimating the degree of saliency of the pre-identified image features.
Taking the identification of mitral regurgitation features as an example, the specific method is to measure the ratio of regurgitation area to left atrium area on the key frame where mitral regurgitation is most evident, and output the ratio to an ultrasound report.
First, the left atrial area is measured. There are usually hundreds of delineated echocardiograms, carefully annotated in advance (a delineation is a border drawn manually around a key structure of the heart, such as the left atrium). These data are fed into the neural network for learning, so that the model performs automatic identification of the left atrium. The relative area of the left atrium is then obtained by counting the number of pixels in the delineated region.
Next, the area of the regurgitant jet is measured. Because color Doppler ultrasound is used, the regurgitant jet is usually displayed in blue (non-regurgitant blood flow is usually red), so the relative area of the jet can be obtained simply by counting the number of blue pixels within the left atrium.
Finally, the ratio of the relative area of the regurgitant jet to the relative area of the left atrium is calculated; this ratio gives the degree of saliency of the regurgitation image feature in the ultrasound video.
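A minimal sketch of this pixel-counting measurement on a synthetic frame (the blue test is a naive channel comparison; a real system would use the scanner's color map and calibration, and the mask, threshold, and image here are all made-up assumptions):

```python
import numpy as np

def regurgitation_ratio(frame: np.ndarray, la_mask: np.ndarray) -> float:
    """Ratio of regurgitant-jet area to left-atrium area on a key frame.

    frame: (H, W, 3) RGB image; la_mask: (H, W) boolean left-atrium region.
    A pixel counts as jet if its blue channel dominates red inside the LA.
    """
    r = frame[..., 0].astype(int)
    b = frame[..., 2].astype(int)
    jet = (b > r) & la_mask
    return jet.sum() / la_mask.sum()

# Synthetic 10x10 frame: LA is the left half; a 5x3 blue patch inside it.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[..., 0] = 60                        # faint red background
frame[2:7, 1:4, 2] = 200                  # blue "jet" patch (15 pixels)
la_mask = np.zeros((10, 10), dtype=bool)
la_mask[:, :5] = True                     # 50-pixel left-atrium region
print(regurgitation_ratio(frame, la_mask))
```

On this toy frame the jet covers 15 of the 50 left-atrium pixels, so the saliency ratio is 0.3.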
Referring to fig. 4, before the method shown in fig. 1 is performed, the method preferably further comprises
Step S100, inputting a color Doppler video of at least one section in an echocardiogram containing pre-recognition image characteristics into a pre-trained convolutional neural network as a training sample, and training the pre-trained convolutional neural network.
Training a model is essentially a process of continuously adjusting parameters so that the predictions improve; a loss function is usually used as the reference in this process, evaluating the difference between the model's predicted values and the true values. The neural network model here has two main tasks: accurately judging whether a sample echocardiogram contains the pre-identified image features, and outputting the video key frame that contributes most to that judgment. The corresponding loss function L is therefore designed in two parts, a classification loss L_c and a sparse loss L_s:

L = L_c + β·L_s

where β is a normalization constant.
The classification loss is the video-level cross entropy:

L_c = −(1/N)·Σ_{n=1}^{N} [y_n·log(p_n) + (1 − y_n)·log(1 − p_n)]

wherein y_n indicates whether the nth color Doppler video contains the pre-identified image features, n ∈ (1, 2, …, N), and N is the total number of color Doppler videos; y_n = 0 means the nth color Doppler video does not contain the pre-identified image features, and y_n = 1 means it does; p_n is the predicted value that the nth color Doppler video contains the pre-identified image features; F_n is the overall feature representation of the nth color Doppler video, and L_c is then the cross entropy of y_n and p_n.
So-called sparse lossesI.e. for adjusting the weight of a particular video frame in a videoBecause in clinical tests, often only a few critical frames represent the backflow of the sample, only a few frames of all frames are important, i.e. only a few frames correspond to approximately 1 and the rest of the frames correspond to approximately 0, in other words the importance of these framesThe method is sparse, in this embodiment, an L1 norm is used to measure sparsity of the whole data, according to a mathematical definition, the L1 norm is a sum of absolute values of each element in a vector, and the smaller L1 is, the more sparse the whole element is, and a specific form is as follows:
wherein λ_t^n denotes the weight of the t-th video frame in the n-th color Doppler video; t takes values in {1, 2, …, T}, where T is the number of frames in the video; n takes values in {1, 2, …, N}, where N is the total number of color Doppler videos.
In a specific training process, the samples used to train the neural network can be cycled through repeatedly. Training of the convolutional neural network can be stopped when the set number of cycles is reached, when the value of the loss function L falls below a predetermined value, or when the value of L no longer decreases.
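The two-part loss described above can be sketched in code. The function below is an illustrative NumPy implementation only; its name, the `beta` parameter, and the default weighting of the sparsity term are assumptions for the sketch, not the patent's reference implementation.

```python
import numpy as np

def two_part_loss(y_true, y_pred, frame_weights, beta=0.1):
    """Sketch of L = L_cls + beta * L_sparse (names are illustrative).

    y_true:        (N,)   ground-truth labels y_n in {0, 1}
    y_pred:        (N,)   predicted probabilities y_hat_n in (0, 1)
    frame_weights: (N, T) attention weights lambda_t^n, one per frame
    beta:          assumed weighting constant for the sparsity term
    """
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)  # guard log(0)
    # Classification loss: mean binary cross entropy at the video level.
    l_cls = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # Sparsity loss: mean L1 norm of the per-frame attention weights.
    l_sparse = np.mean(np.sum(np.abs(frame_weights), axis=1))
    return l_cls + beta * l_sparse
```

Driving the L1 term down pushes most frame weights toward zero, which matches the clinical observation that only a few key frames carry the regurgitation signal.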
It should be noted that while the operations of the method of the present invention are depicted in FIG. 1 in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowchart may be executed in a different order if desired.
Referring to fig. 5, an artificial intelligence based echocardiography automatic predictive identification system 200 is shown according to another embodiment of the invention, including:
a video obtaining module 201, configured to obtain a color doppler video of at least one slice in an echocardiogram of a detected object;
an input extraction module 202, configured to extract each video frame in the color doppler video, and input each video frame to a trained convolutional neural network to obtain an N-dimensional feature vector corresponding to each video frame;
a weight generating module 203, configured to generate, through an attention module, the weight corresponding to each video frame from the N-dimensional feature vector of that video frame;
an overall feature calculation module 204, configured to calculate a weighted sum of the N-dimensional feature vectors of the video frames using the weights, so as to obtain an overall feature representation of the color Doppler video;
and a prediction output module 205, configured to calculate, based on the overall feature representation, a predicted value that the video contains the pre-identified image features, through an FC fully connected layer and a sigmoid activation function.
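Modules 202 to 205 together describe an attention-weighted aggregation over per-frame feature vectors. A minimal sketch of that forward pass follows; the softmax scoring and all parameter names (`w_att`, `w_fc`, `b_fc`) are illustrative assumptions, since the patent does not fix the internal form of the attention module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_video(frame_features, w_att, w_fc, b_fc):
    """Sketch of modules 203-205: attention weights, weighted sum, FC + sigmoid.

    frame_features: (T, N) one N-dimensional feature vector per video frame
    w_att:          (N,)   assumed attention scoring vector
    w_fc:           (N,)   weights of the final fully connected layer
    b_fc:           float  bias of the final fully connected layer
    """
    scores = frame_features @ w_att                  # one scalar score per frame
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> frame weights
    overall = weights @ frame_features               # weighted sum (module 204)
    prediction = sigmoid(overall @ w_fc + b_fc)      # FC + sigmoid (module 205)
    return prediction, weights
```

The frame with the largest returned weight is the natural candidate for the key frame output described in the claims.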
Further, the echocardiographic automatic predictive identification system 200 includes: a pre-identified image feature measurement module 206, configured to measure the area of the left atrial region in the key frame; measure the area of the regurgitation stream within the left atrial region in the key frame; and calculate a ratio of the area of the regurgitation stream to the area of the left atrium.
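The three measurements of module 206 reduce to pixel counting once the two regions are available as binary masks. A minimal sketch under that assumption (the segmentation that produces the masks is outside the scope of this sketch, and the function name is hypothetical):

```python
import numpy as np

def regurgitation_ratio(atrium_mask, jet_mask):
    """Sketch of module 206: ratio of regurgitation-jet area to left-atrium area.

    atrium_mask: 2-D binary mask of the left-atrium region in the key frame
    jet_mask:    2-D binary mask of the regurgitation jet in the key frame
    """
    atrium = np.asarray(atrium_mask, dtype=bool)
    jet = np.asarray(jet_mask, dtype=bool)
    atrium_area = int(np.count_nonzero(atrium))     # pixel area of the left atrium
    jet_area = int(np.count_nonzero(jet & atrium))  # jet pixels inside the atrium
    if atrium_area == 0:
        raise ValueError("empty left-atrium mask")
    return jet_area / atrium_area
```

Because both areas come from the same frame, the pixel-to-physical scale cancels in the ratio, which is why relative areas suffice in the claims.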
It should be understood that the modules described in the echocardiographic automatic predictive identification system 200 in this embodiment correspond to the steps in the method described in fig. 1. Therefore, the operations and features described above for the method are also applicable to each module of the present embodiment, and are not described herein again. The system of this embodiment may be implemented in the electronic device in advance, or may be loaded into the electronic device by downloading or the like. The corresponding modules in the system of this embodiment may cooperate with units in the electronic device to implement the solution of this embodiment. In addition, the modules described in the present embodiment may be implemented by software or hardware. The names of these units or modules do not in some cases constitute a limitation on the units or modules themselves, e.g., the video acquisition module 201 may also be described as "module 201 for acquiring color doppler video of at least one slice in an echocardiogram of a test subject".
Referring to fig. 6, there is shown an electronic device 300 according to another embodiment of the invention, comprising:
at least one processor 301; and
a memory 302 communicatively coupled to the at least one processor 301; wherein,
the memory 302 stores instructions executable by the at least one processor 301 to enable the at least one processor 301 to perform the steps of the above-described method embodiments.
Referring to fig. 7, the electronic device in the embodiment shown in fig. 6 may be, for example, a B-mode ultrasound machine. The B-mode ultrasound machine may also comprise a computer system 700 including a Central Processing Unit (CPU) 701 which may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704. The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
As another aspect, the present application also provides a computer-readable storage medium, which may be a computer-readable storage medium included in the system or the electronic device described in the above embodiments; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods for automated predictive identification of echocardiograms described herein.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (10)
1. An ultrasonic cardiogram automatic prediction and identification method based on artificial intelligence is characterized by comprising the following steps:
acquiring a color Doppler video of at least one section in an echocardiogram of a detected object;
extracting each video frame in the color Doppler video, and inputting each video frame into a trained convolutional neural network to obtain an N-dimensional feature vector corresponding to each video frame;
generating, through an attention module, the weight corresponding to each video frame from the N-dimensional feature vector of that video frame;
calculating a weighted sum of the N-dimensional feature vectors of each video frame by using the weights to obtain an overall feature representation of the color Doppler video;
and calculating a predicted value of the color Doppler video containing the pre-identified image characteristics based on the overall characteristic representation.
2. The method according to claim 1, further comprising the steps of:
and outputting the frame with the maximum weight as a key frame.
3. The method according to claim 2, wherein the at least one section comprises an apical four-chamber section.
4. The method according to claim 3, wherein the pre-identified image feature is valve regurgitation.
5. The method according to claim 4, further comprising the steps of:
measuring the relative area of the left atrial region in the keyframe;
measuring the relative area of the regurgitation stream within the left atrial region in said keyframe;
calculating a ratio of the relative area of the regurgitation stream to the relative area of the left atrium.
6. The artificial intelligence based echocardiogram auto-predictive recognition method of claim 1, further comprising, prior to inputting each video frame to a pre-trained convolutional neural network:
and inputting the color Doppler video of at least one section of the echocardiogram containing the pre-identified image characteristics into a pre-trained convolutional neural network as a training sample, and training the pre-trained convolutional neural network.
7. The method according to claim 6, further comprising:
stopping the training of the convolutional neural network when the value of the loss function L no longer decreases, or when the value of L is lower than a predetermined value;
wherein L = L_cls + β · L_sparse; L_cls represents a classification loss calculated at the video level, L_sparse represents the sparsity loss used to adjust the weight of each video frame, and β is a normalization constant of L_sparse;
wherein L_cls is the cross entropy of y_n and ŷ_n:

L_cls = −(1/N) Σ_{n=1}^{N} [ y_n log ŷ_n + (1 − y_n) log(1 − ŷ_n) ]

where y_n indicates whether the n-th color Doppler video contains the pre-identified image features, n takes values in {1, 2, …, N}, and N is the total number of color Doppler videos used for training the convolutional neural network model; if y_n = 0, the n-th color Doppler video does not contain the pre-identified image features; if y_n = 1, the n-th color Doppler video contains the pre-identified image features; ŷ_n is the predicted value that the n-th color Doppler video contains the pre-identified image features, computed from f_n, the global characterization of the n-th color Doppler video.
8. An echocardiography automatic prediction and recognition system based on artificial intelligence, which is characterized by comprising:
the video acquisition module is used for acquiring a color Doppler video of at least one section in an echocardiogram of the detected object;
the input extraction module is used for extracting each video frame in the color Doppler video and inputting each video frame to a trained convolutional neural network so as to obtain an N-dimensional feature vector corresponding to each video frame;
the weight generating module is used for generating, through the attention module, the weight corresponding to each video frame from the N-dimensional feature vector of that video frame;
the overall characteristic calculation module is used for calculating the weighted sum of the N-dimensional characteristic vectors of each video frame by using the weights so as to obtain the overall characteristic representation of the color Doppler video;
and the prediction output module is used for calculating a prediction value of the color Doppler video containing the pre-identified image characteristics based on the overall characteristic representation.
9. The system according to claim 8, further comprising:
and the key frame output module is used for outputting the frame with the maximum weight as a key frame.
10. The system according to claim 9, wherein said system comprises:
the at least one section comprises an apical four-chamber section;
the pre-identified image feature is valve regurgitation;
the system further comprises:
a pre-identified image feature measurement module, configured to measure the relative area of the left atrial region in the key frame; measure the relative area of the regurgitation stream within the left atrial region in the key frame; and calculate a ratio of the relative area of the regurgitation stream to the relative area of the left atrium.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010353559.8A CN111493935B (en) | 2020-04-29 | 2020-04-29 | Artificial intelligence-based automatic prediction and identification method and system for echocardiogram |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010353559.8A CN111493935B (en) | 2020-04-29 | 2020-04-29 | Artificial intelligence-based automatic prediction and identification method and system for echocardiogram |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111493935A CN111493935A (en) | 2020-08-07 |
CN111493935B true CN111493935B (en) | 2021-01-15 |
Family
ID=71866649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010353559.8A Active CN111493935B (en) | 2020-04-29 | 2020-04-29 | Artificial intelligence-based automatic prediction and identification method and system for echocardiogram |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111493935B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112101413A (en) * | 2020-08-12 | 2020-12-18 | 海南大学 | Intelligent system for predicting cerebral apoplexy risk |
CN112258476B (en) * | 2020-10-22 | 2024-07-09 | 东软教育科技集团有限公司 | Method, system and storage medium for analyzing abnormal motion pattern of heart muscle of echocardiography |
CN112435247B (en) * | 2020-11-30 | 2022-03-25 | 中国科学院深圳先进技术研究院 | Patent foramen ovale detection method, system, terminal and storage medium |
CN112419313B (en) * | 2020-12-10 | 2023-07-28 | 清华大学 | Multi-section classification method based on heart disease ultrasound |
CN112489043B (en) * | 2020-12-21 | 2024-08-13 | 无锡祥生医疗科技股份有限公司 | Heart disease detection device, model training method, and storage medium |
CN113180737B (en) * | 2021-05-06 | 2022-02-08 | 中国人民解放军总医院 | Artificial intelligence-based oval hole closure detection method, system, equipment and medium |
CN113487665B (en) * | 2021-06-04 | 2022-03-11 | 中国人民解放军总医院 | Method, device, equipment and medium for measuring cavity gap |
CN114469176B (en) * | 2021-12-31 | 2024-07-05 | 深圳度影医疗科技有限公司 | Fetal heart ultrasonic image detection method and related device |
CN114666571B (en) * | 2022-03-07 | 2024-06-14 | 中国科学院自动化研究所 | Video sensitive content detection method and system |
CN114723710A (en) * | 2022-04-11 | 2022-07-08 | 安徽鲲隆康鑫医疗科技有限公司 | Method and device for detecting ultrasonic video key frame based on neural network |
CN115797330B (en) * | 2022-12-30 | 2024-04-05 | 北京百度网讯科技有限公司 | Algorithm correction method based on ultrasonic video, ultrasonic video generation method and equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10321892B2 (en) * | 2010-09-27 | 2019-06-18 | Siemens Medical Solutions Usa, Inc. | Computerized characterization of cardiac motion in medical diagnostic ultrasound |
BR112013015639A8 (en) * | 2010-12-23 | 2018-02-06 | Koninklijke Philips Nv | DIAGNOSTIC ULTRASOUND SYSTEM TO EVALUATE REGURGITANT FLOW |
CN103824284B (en) * | 2014-01-26 | 2017-05-10 | 中山大学 | Key frame extraction method based on visual attention model and system |
US10271817B2 (en) * | 2014-06-23 | 2019-04-30 | Siemens Medical Solutions Usa, Inc. | Valve regurgitant detection for echocardiography |
JP6727286B2 (en) * | 2015-04-02 | 2020-07-22 | カーディアウェイブ | Method and apparatus for treating pericardial disease |
CN105913084A (en) * | 2016-04-11 | 2016-08-31 | 福州大学 | Intensive track and DHOG-based ultrasonic heartbeat video image classifying method |
CN108171141B (en) * | 2017-12-25 | 2020-07-14 | 淮阴工学院 | Attention model-based cascaded multi-mode fusion video target tracking method |
- 2020-04-29: CN202010353559.8A (patent CN111493935B/en), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN111493935A (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111493935B (en) | Artificial intelligence-based automatic prediction and identification method and system for echocardiogram | |
KR101908680B1 (en) | A method and apparatus for machine learning based on weakly supervised learning | |
Liao et al. | On modelling label uncertainty in deep neural networks: Automatic estimation of intra-observer variability in 2d echocardiography quality assessment | |
US11508061B2 (en) | Medical image segmentation with uncertainty estimation | |
US11367001B2 (en) | Neural network image analysis | |
CN111612756B (en) | Coronary artery specificity calcification detection method and device | |
US11995823B2 (en) | Technique for quantifying a cardiac function from CMR images | |
Jafari et al. | Deep Bayesian image segmentation for a more robust ejection fraction estimation | |
CN114170478A (en) | Defect detection and positioning method and system based on cross-image local feature alignment | |
Lin et al. | Echocardiography-based AI detection of regional wall motion abnormalities and quantification of cardiac function in myocardial infarction | |
CN113011340B (en) | Cardiovascular operation index risk classification method and system based on retina image | |
US20240127432A1 (en) | Image sequence analysis | |
CN113222985B (en) | Image processing method, image processing device, computer equipment and medium | |
CN114010227B (en) | Right ventricle characteristic information identification method and device | |
Hatfaludi et al. | Deep learning based aortic valve detection and state classification on echocardiographies | |
US20220338816A1 (en) | Fully automated cardiac function and myocardium strain analyses using deep learning | |
US20230196557A1 (en) | Late Gadolinium Enhancement Analysis for Magnetic Resonance Imaging | |
Anton et al. | Automated quantification of myocardial tissue characteristics from native T1 mapping using neural networks with Bayesian inference for uncertainty-based quality-control | |
Zhang et al. | Image quality assessment for population cardiac magnetic resonance imaging | |
Van De Vyver et al. | Regional quality estimation for echocardiography using deep learning | |
Thennakoon et al. | Automatic classification of left ventricular function of the human heart using echocardiography | |
WO2024127209A1 (en) | Automated analysis of echocardiographic images | |
Tripathi et al. | Learning the Imaging Landmarks: Unsupervised Key point Detection in Lung Ultrasound Videos | |
CN118141419A (en) | Method and device for evaluating myocardial motion, electronic device and storage medium | |
Kumari et al. | Gestational age determination of ultrasound foetal images using artificial neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||