CN114041758A - Radial artery palpation positioning method and device, electronic equipment and storage medium - Google Patents
Radial artery palpation positioning method and device, electronic equipment and storage medium
- Publication number
- CN114041758A (application CN202210011181.2A)
- Authority
- CN
- China
- Prior art keywords
- video data
- radial artery
- neural network
- layer
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 210000002321 radial artery Anatomy 0.000 title claims abstract description 102
- 238000000034 method Methods 0.000 title claims abstract description 62
- 238000002559 palpation Methods 0.000 title claims abstract description 36
- 238000012545 processing Methods 0.000 claims abstract description 64
- 210000000707 wrist Anatomy 0.000 claims abstract description 49
- 238000010606 normalization Methods 0.000 claims abstract description 47
- 238000003062 neural network model Methods 0.000 claims abstract description 35
- 238000013527 convolutional neural network Methods 0.000 claims description 39
- 238000012549 training Methods 0.000 claims description 33
- 238000011176 pooling Methods 0.000 claims description 31
- 238000012360 testing method Methods 0.000 claims description 18
- 238000004590 computer program Methods 0.000 claims description 12
- 238000012795 verification Methods 0.000 claims description 10
- 238000000605 extraction Methods 0.000 claims description 6
- 238000002790 cross-validation Methods 0.000 claims description 4
- 238000011156 evaluation Methods 0.000 claims description 4
- 238000003745 diagnosis Methods 0.000 abstract description 10
- 238000013507 mapping Methods 0.000 abstract description 9
- 239000003814 drug Substances 0.000 abstract description 5
- 238000010009 beating Methods 0.000 abstract description 4
- 210000005242 cardiac chamber Anatomy 0.000 abstract description 2
- 230000008602 contraction Effects 0.000 abstract description 2
- 125000004122 cyclic group Chemical group 0.000 abstract description 2
- 230000000737 periodic effect Effects 0.000 abstract description 2
- 230000001020 rhythmical effect Effects 0.000 abstract description 2
- 230000008569 process Effects 0.000 description 11
- 239000000284 extract Substances 0.000 description 8
- 230000000694 effects Effects 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 230000008859 change Effects 0.000 description 5
- 230000003068 static effect Effects 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 238000004891 communication Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 230000004807 localization Effects 0.000 description 4
- 238000005070 sampling Methods 0.000 description 4
- 230000009471 action Effects 0.000 description 3
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 3
- 230000008034 disappearance Effects 0.000 description 3
- 238000004880 explosion Methods 0.000 description 3
- 238000001931 thermography Methods 0.000 description 3
- 230000001133 acceleration Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 2
- 230000007423 decrease Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000002708 enhancing effect Effects 0.000 description 2
- 210000000245 forearm Anatomy 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000005096 rolling process Methods 0.000 description 2
- 230000035945 sensitivity Effects 0.000 description 2
- 230000007306 turnover Effects 0.000 description 2
- 238000010200 validation analysis Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4854—Diagnosis based on concepts of traditional oriental medicine
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0075—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0247—Pressure sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- General Physics & Mathematics (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Evolutionary Computation (AREA)
- Pathology (AREA)
- Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Animal Behavior & Ethology (AREA)
- Heart & Thoracic Surgery (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Physiology (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Psychiatry (AREA)
- Fuzzy Systems (AREA)
- Evolutionary Biology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Cardiology (AREA)
- Alternative & Traditional Medicine (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Image Analysis (AREA)
Abstract
The application belongs to the technical field of traditional Chinese medicine pulse diagnosis and discloses a radial artery palpation positioning method and device, an electronic device, and a storage medium. Near-infrared video information of the wrist of a subject is acquired; the green channel is extracted from each frame of the near-infrared video to obtain single-channel video data; pixel-value normalization is applied to the single-channel video data to obtain the video data to be detected; and this video data is input into a pre-trained three-dimensional convolutional neural network model to obtain the subject's radial artery coordinates. Because contraction and relaxation of the subject's heart chambers produce rhythmic cyclic changes, the radial artery at the wrist exhibits a distinct periodic pulsation signal. The three-dimensional convolutional neural network model can extract temporal and spatial features from the wrist video and map the video data to spatial coordinates, achieving accurate, contact-free positioning of the radial artery.
Description
Technical Field
The application relates to the technical field of traditional Chinese medicine pulse diagnosis, and in particular to a radial artery palpation positioning method and device, an electronic device, and a storage medium.
Background
Diagnosis of the radial artery pulse is an indispensable part of the four diagnostic methods of traditional Chinese medicine. It provides abundant physiological information for assessing a patient's health and is regarded as an important tool in non-invasive diagnostic practice. During traditional Chinese medicine pulse diagnosis, a doctor senses the pulsation of the patient's radial artery with the fingers and diagnoses the patient's condition accordingly. However, because practitioners differ in accumulated experience and subtle tactile perception, traditional pulse diagnosis suffers in practice from low standardization and subjective pulse assessment.
Consequently, automatic pulse-taking devices that can acquire pulse-wave signals at the radial artery have appeared on the market. For such devices, the precision with which the radial artery is located during acquisition strongly affects the accuracy of the acquired pulse-wave signals. Existing automatic pulse-taking devices mainly use the following positioning methods:
1) Wrapping the wrist directly in an array of tactile or pressure sensors and determining the position from force feedback. This approach has low positioning precision, depends heavily on the size and consistency of the sensors, and requires direct contact with the patient, so it is unsuitable for patients with burns or other conditions that preclude direct contact, which greatly limits its usage scenarios;
2) Locating the radial artery by thermal imaging. This is a non-contact method whose usage scenarios are less restricted than the former, but its positioning accuracy depends heavily on the sensitivity of the infrared thermal imaging device, and individual differences in wrist shape inevitably make the localization prone to deviation.
Disclosure of Invention
The application aims to provide a radial artery palpation positioning method and device, an electronic device, and a storage medium that can accurately position the radial artery without direct contact with the wrist.
In a first aspect, the present application provides a radial artery palpation positioning method for locating the radial artery at the wrist of a subject, comprising the steps of:
A1. acquiring near-infrared video information of the wrist of a subject;
A2. extracting the green-channel image from each frame of the near-infrared video information to obtain single-channel video data;
A3. performing pixel-value normalization on the single-channel video data to obtain the video data to be detected;
A4. inputting the video data to be detected into a pre-trained three-dimensional convolutional neural network model to obtain the radial artery coordinates of the subject.
This radial artery palpation positioning method extracts temporal and spatial features from the wrist video through a three-dimensional convolutional neural network model to map the video data to spatial coordinates, thereby achieving accurate, contact-free positioning of the radial artery.
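Assuming the input video is available as an array of RGB frames, steps A1 to A4 can be sketched as follows. The function name and the brightest-pixel fallback that stands in for the trained 3-D CNN are illustrative assumptions, not part of the patent:

```python
import numpy as np

def locate_radial_artery(video_rgb, model=None):
    """Sketch of steps A1-A4: wrist video -> green channel ->
    per-frame normalization -> coordinate prediction.

    video_rgb: uint8 array of shape (frames, height, width, 3), RGB order.
    model: placeholder for the pre-trained 3-D CNN of step A4; when absent,
    a dummy brightest-pixel rule stands in so the sketch is runnable.
    """
    green = video_rgb[..., 1].astype(np.float32)        # A2: (T, H, W)
    mins = green.min(axis=(1, 2), keepdims=True)        # A3: per-frame min
    maxs = green.max(axis=(1, 2), keepdims=True)        # A3: per-frame max
    normalized = (green - mins) / (maxs - mins + 1e-8)
    if model is None:                                   # A4 stand-in
        mean_frame = normalized.mean(axis=0)
        y, x = np.unravel_index(mean_frame.argmax(), mean_frame.shape)
        return int(x), int(y)
    return model(normalized)

# 5 s at 30 fps would give 150 frames; a tiny random clip is used here
clip = np.random.default_rng(0).integers(0, 256, size=(8, 16, 16, 3),
                                         dtype=np.uint8)
x, y = locate_radial_artery(clip)
print(x, y)
```

With a real trained model, only the `model(normalized)` branch would be used; the dummy branch exists solely to make the preprocessing steps executable end to end.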
Preferably, step A1 includes:
acquiring near-infrared video information of a preset duration, collected at a preset frame rate; the preset frame rate is 28-32 fps and the preset duration is 4-6 s.
Preferably, step A3 includes:
carrying out pixel-value normalization on each frame of the single-channel video data according to the following formula:

x_i' = (x_i - x_min) / (x_max - x_min)

where x_i' is the normalized pixel value of the i-th pixel in the frame, x_i is the pixel value of the i-th pixel before normalization, x_min is the minimum pixel value in the frame before normalization, and x_max is the maximum pixel value in the frame before normalization.
Pixel-value normalization reduces the pixel-value range of each frame of the single-channel video data from [0, 255] to [0, 1], removing the unit limitation of the data and yielding the following effects: it reduces the computational load of the model (the three-dimensional convolutional neural network model) and speeds up its convergence; it reduces differences in data distribution, which can improve the model's precision; and it helps prevent gradient explosion or vanishing gradients during back-propagation.
Preferably, the resolution of the near-infrared video information is 2048 × 1088;
the three-dimensional convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a first full-connection layer, a second full-connection layer and an output layer which are connected in sequence;
the convolution kernels of the first, second and third convolutional layers are all three-dimensional kernels of size 15 x 3, with a window stride of 1 during convolution; the first convolutional layer has 32 kernels, the second 16, and the third 8;
the first, second and third pooling layers all use 2 x 2 max pooling with a window stride of 2;
the first full connection layer and the second full connection layer are both linear layers.
Through its three-dimensional convolution kernels, the three-dimensional convolutional neural network model effectively extracts temporal and spatial features from the video data to map it to spatial coordinates, while the pooling layers reduce the frame size of the input video, enabling fast and accurate localization of the radial artery.
Preferably, the three-dimensional convolutional neural network model is obtained by training through the following steps:
B1. collecting near-infrared video data samples of wrists of a plurality of different subjects; wherein a plurality of said near infrared video data samples are collected for each of said subjects;
B2. preprocessing the near-infrared video data samples and then dividing them into a training set, a test set and a verification set at a ratio of …:1:1;
B3. training the three-dimensional convolutional neural network model by using the training set based on a preset hyper-parameter;
B4. performing cross validation and evaluation on the trained three-dimensional convolutional neural network model by using the test set;
B5. adjusting the hyper-parameters multiple times and, after each adjustment, verifying the three-dimensional convolutional neural network model on the verification set, so as to obtain the optimal three-dimensional convolutional neural network model.
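The split of step B2 can be sketched as follows. The ratio term before ":1:1" is garbled in the source, so the leading 3 used here is an illustrative assumption only:

```python
import numpy as np

def split_samples(samples, ratios=(3, 1, 1), seed=42):
    """Shuffle the preprocessed samples and split them into training,
    test and verification sets. The source only preserves a ...:1:1
    ratio; the leading 3 is an assumed value for illustration."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    total = sum(ratios)
    n_train = len(samples) * ratios[0] // total
    n_test = len(samples) * ratios[1] // total
    train = [samples[i] for i in idx[:n_train]]
    test = [samples[i] for i in idx[n_train:n_train + n_test]]
    verify = [samples[i] for i in idx[n_train + n_test:]]
    return train, test, verify

train, test, verify = split_samples(list(range(100)))
print(len(train), len(test), len(verify))  # 60 20 20
```

Shuffling before splitting matters here because multiple clips are collected per subject; in practice one would split by subject rather than by clip to avoid leakage between sets.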
Preferably, step B2 includes:
extracting a green channel image from each frame of image of the near-infrared video data sample to obtain a single-channel video data sample;
calibrating a radial artery position in the single-channel video data sample;
and carrying out pixel value normalization processing on the calibrated single-channel video data sample.
Preferably, the preset hyper-parameter comprises:
a batch size, the batch size being 8 or 16;
a momentum parameter, the momentum parameter being 0.95;
a learning rate which is a dynamic learning rate and an initial value of which is 0.001;
and the iteration number is 100.
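The hyper-parameters above correspond to standard momentum SGD. The toy quadratic loss and the exponential decay schedule below are stand-ins, since the patent specifies neither the actual loss nor the exact "dynamic" schedule:

```python
import numpy as np

# Patent hyper-parameters: momentum 0.95, initial learning rate 0.001,
# 100 iterations (the batch size of 8 or 16 would apply to real mini-batches).
momentum, lr0, iterations = 0.95, 0.001, 100

w = np.array([5.0, -3.0])       # toy parameters standing in for CNN weights
velocity = np.zeros_like(w)
for t in range(iterations):
    lr = lr0 * (0.99 ** t)      # one possible "dynamic" decay schedule
    grad = 2.0 * w              # gradient of the toy loss ||w||^2
    velocity = momentum * velocity - lr * grad
    w = w + velocity            # momentum SGD update
print(np.linalg.norm(w))        # smaller than the initial norm (~5.83)
```

With momentum 0.95 each update accumulates roughly 20 past gradients, which is why such a small initial learning rate still makes steady progress.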
In a second aspect, the present application provides a radial artery palpation positioning device for locating the radial artery at the wrist of a subject, comprising:
a first acquisition module for acquiring near-infrared video information of the wrist of the subject;
a first extraction module for extracting the green-channel image from each frame of the near-infrared video information to obtain single-channel video data;
a normalization module for performing pixel-value normalization on the single-channel video data to obtain the video data to be detected;
and a positioning module for inputting the video data to be detected into a pre-trained three-dimensional convolutional neural network model to obtain the radial artery coordinates of the subject.
This radial artery palpation positioning device extracts temporal and spatial features from the wrist video through a three-dimensional convolutional neural network model to map the video data to spatial coordinates, thereby achieving accurate, contact-free positioning of the radial artery.
In a third aspect, the present application provides an electronic device, comprising a processor and a memory, where the memory stores a computer program executable by the processor, and the processor executes the computer program to perform the steps of the radial artery palpation positioning method as described above.
In a fourth aspect, the present application provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the radial artery palpation positioning method as described above.
Beneficial effects:
according to the radial artery palpation positioning method, the radial artery palpation positioning device, the electronic equipment and the storage medium, the near-infrared video information of the wrist of the measured person is obtained; extracting green channel images from each frame of image of the near-infrared video information to obtain single-channel video data; performing pixel value normalization processing on the single-channel video data to obtain video data to be detected; inputting the video data to be detected into a pre-trained three-dimensional convolution neural network model to obtain radial artery coordinates of the detected person; due to the fact that rhythmic cyclic changes can be generated by contraction and relaxation of the heart chambers of the tested person, obvious periodic beating signals can be further generated at the radial artery position of the wrist, time and space characteristics can be extracted from the video data of the wrist through the three-dimensional convolution neural network model, mapping from the video data to space coordinates is completed, and therefore accurate positioning of the radial artery without contact can be achieved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application.
Drawings
Fig. 1 is a flowchart of a radial artery palpation positioning method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a radial artery palpation positioning device provided in the embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a three-dimensional convolutional neural network model.
FIG. 5 is a graph of the output of each convolutional layer of an exemplary two-dimensional convolutional neural network model.
FIG. 6 is a graph of the output of each convolutional layer of an exemplary three-dimensional convolutional neural network model.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 shows a radial artery palpation positioning method in some embodiments of the present application for locating the radial artery at the wrist of a subject, comprising the steps of:
A1. acquiring near-infrared video information of the wrist of a subject;
A2. extracting the green-channel image from each frame of the near-infrared video information to obtain single-channel video data;
A3. performing pixel-value normalization on the single-channel video data to obtain the video data to be detected;
A4. inputting the video data to be detected into a pre-trained three-dimensional convolutional neural network model to obtain the radial artery coordinates of the subject.
This radial artery palpation positioning method extracts temporal and spatial features of the wrist video (including how the wrist's spatial features change over time) through a three-dimensional convolutional neural network model, mapping the video data to spatial coordinates. It thereby achieves accurate non-contact positioning of the radial artery, is less restricted in usage scenarios, and, compared with the prior-art thermal imaging positioning method, avoids the influence of device sensitivity on the positioning result, which also helps reduce equipment cost.
A prior-art method does exist that locates the pulse-taking position from a single wrist image using a deep-learning neural network model. However, with only a single image as input, the input data contain only the spatial features of the wrist and no temporal features, so they cannot reflect how the wrist's spatial features change over time. Locating the radial artery from the spatial features at a single instant is therefore less accurate than locating it from spatial features over time, as in the present application.
Specifically, step A1 includes:
acquiring near-infrared video information of a preset duration, collected at a preset frame rate; the preset frame rate is 28-32 fps and the preset duration is 4-6 s.
The near-infrared video information can be collected with an infrared camera; the subject's wrist must remain still during collection.
The acquisition frame rate (the preset frame rate) can be set as needed and is preferably 30 fps; the acquisition duration (the preset duration) can likewise be set as needed and is preferably 5 s.
In step A2, the green-channel image is extracted from each frame of the near-infrared video information, and the resulting green-channel images are arranged in the order of their corresponding frames in the original video to form a new image sequence, namely the single-channel video data. Extracting a green-channel image from an image is a known technique and is not detailed here.
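Assuming the frames arrive as RGB arrays (channel order is an assumption here; some camera SDKs deliver BGR instead), step A2 reduces to indexing out the green channel while keeping the frame order:

```python
import numpy as np

def extract_green_channel(video_rgb):
    """Step A2: take the green channel of every frame, preserving the
    original frame order, to form the single-channel video data.
    Assumes shape (frames, height, width, 3) with RGB channel order."""
    return video_rgb[..., 1]

video = np.zeros((4, 2, 2, 3), dtype=np.uint8)
video[..., 1] = 200                     # put test data in the green channel
single = extract_green_channel(video)
print(single.shape)                     # (4, 2, 2)
```

Because the indexing keeps the leading frame axis intact, the output is already the single-channel image sequence in the original temporal order.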
In some embodiments, step A3 includes:
carrying out pixel-value normalization on each frame of the single-channel video data according to the following formula:

x_i' = (x_i - x_min) / (x_max - x_min)

where x_i' is the normalized pixel value of the i-th pixel in the frame, x_i is the pixel value of the i-th pixel before normalization, x_min is the minimum pixel value in the frame before normalization (i.e. the smallest value among the frame's pixel values), and x_max is the maximum pixel value in the frame before normalization (i.e. the largest value among the frame's pixel values).
It should be noted that this step is performed separately for each frame of the single-channel video data; x_min and x_max can therefore differ from frame to frame.
Pixel-value normalization reduces the pixel-value range of each frame of the single-channel video data from [0, 255] to [0, 1], removing the unit limitation of the data and yielding the following effects: it reduces the computational load of the model (the three-dimensional convolutional neural network model) and speeds up its convergence; it reduces differences in data distribution, which can improve the model's precision; and it helps prevent gradient explosion or vanishing gradients during back-propagation.
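A minimal sketch of the per-frame normalization formula, with the minimum and maximum taken over each frame's own pixels so that every frame is rescaled to [0, 1] independently:

```python
import numpy as np

def normalize_frames(frames):
    """x' = (x - x_min) / (x_max - x_min), computed per frame:
    x_min and x_max are the smallest and largest pixel values of
    that frame, so each frame lands in [0, 1] on its own scale."""
    frames = frames.astype(np.float32)
    mins = frames.min(axis=(1, 2), keepdims=True)
    maxs = frames.max(axis=(1, 2), keepdims=True)
    return (frames - mins) / (maxs - mins)

clip = np.array([[[0, 100], [200, 255]],        # frame 1: min 0, max 255
                 [[50, 150], [100, 250]]],      # frame 2: min 50, max 250
                dtype=np.uint8)
out = normalize_frames(clip)
print(out.min(), out.max())  # 0.0 1.0
```

Note that `keepdims=True` lets the per-frame minima and maxima broadcast against the (frames, height, width) array without explicit reshaping.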
In the present embodiment, referring to fig. 4, the resolution of the near-infrared video information is 2048 × 1088;
the three-dimensional convolutional neural network model comprises an input layer 90, a first convolutional layer 91, a first pooling layer 92, a second convolutional layer 93, a second pooling layer 94, a third convolutional layer 95, a third pooling layer 96, a first full-connection layer 97, a second full-connection layer 98 and an output layer 99 which are connected in sequence;
the convolution kernels of the first convolution layer 91, the second convolution layer 93 and the third convolution layer 95 are all three-dimensional convolution kernels with the size of 15 × 3, the window moving step length during convolution processing is 1, the number of the convolution kernels of the first convolution layer 91 is 32, the number of the convolution kernels of the second convolution layer 93 is 16, and the number of the convolution kernels of the third convolution layer 95 is 8;
the pooling methods of the first, second and third pooling layers 92, 94 and 96 are all 2 × 2 maximum pooling methods, and the window moving step size during the pooling process is all 2;
the first full connection layer 97 and the second full connection layer 98 are both linear layers (linear layers).
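As a rough sanity check of how the three pooling stages shrink the 2048 × 1088 input, the spatial size after each conv + pool stage can be traced with a small sketch (this assumes, purely for illustration, "valid" convolutions with a 3-pixel spatial kernel extent and stride 1, followed by 2 × 2 max pooling with stride 2; the patent does not spell out padding or the full kernel dimensions):

```python
def conv_out(n: int, k: int = 3, stride: int = 1) -> int:
    """Spatial size after a 'valid' (no padding) convolution with kernel extent k."""
    return (n - k) // stride + 1

def pool_out(n: int, k: int = 2, stride: int = 2) -> int:
    """Spatial size after max pooling with window k and stride 2."""
    return (n - k) // stride + 1

def trace_sizes(h: int, w: int, stages: int = 3):
    """Apply conv + pool `stages` times, recording (height, width) after each stage."""
    sizes = []
    for _ in range(stages):
        h, w = conv_out(h), conv_out(w)
        h, w = pool_out(h), pool_out(w)
        sizes.append((h, w))
    return sizes

sizes = trace_sizes(2048, 1088)
```

Under these assumptions each stage roughly quarters the pixel count, which is what makes fast localization on high-resolution input feasible before the fully connected layers.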
Through its three-dimensional convolution kernels, the three-dimensional convolutional neural network model can effectively extract the temporal and spatial features of the video data to complete the mapping from video data to spatial coordinates, while the pooling layers reduce the frame size of the input video data, so the radial artery can be located quickly and accurately. Fig. 6 shows the image data output by each convolutional layer when radial artery localization is performed on exemplary video data to be detected using the three-dimensional convolutional neural network model of this embodiment; Fig. 5 shows the image data output by each convolutional layer when the same exemplary video data is processed by a two-dimensional convolutional neural network model under identical network conditions (its convolution kernels are two-dimensional; all other network conditions match the three-dimensional model of this embodiment). As can be seen from Figs. 5 and 6, under the same network conditions, the three-dimensional convolutional neural network model of this embodiment extracts data features comparatively better.
It should be noted that the resolution of the near-infrared video information is not limited to 2048 × 1088, and for near-infrared video information with other resolutions, the size of the convolution kernel in the three-dimensional convolution neural network model and the number of convolution kernels of each convolution layer need to be adjusted accordingly.
Preferably, the three-dimensional convolutional neural network model is obtained by training through the following steps:
B1. collecting near-infrared video data samples of wrists of a plurality of different subjects; wherein a plurality of near-infrared video data samples are acquired for each subject;
B2. after preprocessing the near-infrared video data samples, dividing them in a 4:1:1 ratio into a training set, a test set and a verification set;
B3. training a three-dimensional convolutional neural network model by using a training set based on a preset hyper-parameter;
B4. performing cross validation and evaluation on the trained three-dimensional convolutional neural network model by using a test set;
B5. adjusting the hyper-parameters multiple times, and verifying the three-dimensional convolutional neural network model on the verification set after each adjustment, so as to obtain the optimal three-dimensional convolutional neural network model.
In step B1, preferably, not less than 10 near-infrared video data samples are collected for each subject, and the posture of the wrist is adjusted after each sample is collected, the adjustment including at least one of position adjustment (front, back, left and right positions), azimuth angle adjustment (by swinging the forearm left and right), and roll angle adjustment (i.e., the angle in the circumferential direction of the wrist). Under existing experimental conditions it is difficult to acquire a large image data set, yet training a model with deep learning techniques requires a data set of a certain size; therefore the currently limited data can be expanded by collecting data from the same subject multiple times and applying operations such as inversion and rotation, thereby enriching the training set of the neural network model and enhancing the robustness of the model.
When the near-infrared video data samples are collected, collecting the near-infrared video data samples with preset duration at a preset frame rate; the preset frame rate is 28-32 fps, and the preset duration is 4-6 s.
The near-infrared video data samples can be collected through the infrared camera, and the wrists of the testee need to be kept static in the process of collecting each infrared video data sample.
The acquisition frame rate (namely the preset frame rate) can be set according to actual needs, and is preferably 30 fps; the sampling time (i.e. the preset time) can be set according to actual needs, and is preferably 5 s.
In some preferred embodiments, the collected near-infrared video data samples can be subjected to data enhancement processing to obtain more near-infrared video data samples; thus, the step of training the three-dimensional convolutional neural network model further comprises: and carrying out data enhancement processing on the collected near-infrared video data sample.
Different near-infrared video data samples can be obtained by performing at least one of operations such as rotation, flipping and scaling on the collected near-infrared video data samples, which can greatly expand the near-infrared video data sample set and enhance the generalization ability of the trained three-dimensional convolutional neural network model. The step of performing data enhancement processing on the collected near-infrared video data samples can be performed after step B1 and before step B2.
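The flip/rotation style of augmentation mentioned above can be sketched with NumPy; applying the same transform to every frame keeps the video temporally consistent (the function names are illustrative assumptions, not from the patent):

```python
import numpy as np

def flip_video_horizontal(video: np.ndarray) -> np.ndarray:
    """Flip every frame left-right; video has shape (frames, height, width)."""
    return video[:, :, ::-1]

def rotate_video_180(video: np.ndarray) -> np.ndarray:
    """Rotate every frame by 180 degrees (reverse rows and columns)."""
    return video[:, ::-1, ::-1]

# Note: when a sample is augmented, its calibrated radial-artery coordinates
# must be transformed the same way (e.g. x -> width - 1 - x for a horizontal flip).
video = np.arange(2 * 2 * 3).reshape(2, 2, 3)
flipped = flip_video_horizontal(video)
rotated = rotate_video_180(video)
```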
In this embodiment, step B2 includes:
extracting green channel images from each frame of image of the near-infrared video data sample to obtain a single-channel video data sample;
calibrating the radial artery position in the single-channel video data sample;
carrying out pixel value normalization processing on the calibrated single-channel video data sample;
and dividing the pixel-value-normalized single-channel video data samples in a 4:1:1 ratio into a training set, a test set and a validation set.
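The 4:1:1 split in step B2 can be sketched as a simple index partition (an illustrative sketch; in practice one might shuffle or split per subject, which the patent does not specify):

```python
def split_4_1_1(samples):
    """Divide samples into training/test/validation sets in a 4:1:1 ratio."""
    n = len(samples)
    n_train = n * 4 // 6   # 4 parts of 6
    n_test = n // 6        # 1 part of 6
    train = samples[:n_train]
    test = samples[n_train:n_train + n_test]
    val = samples[n_train + n_test:]  # remaining ~1 part
    return train, test, val

train, test, val = split_4_1_1(list(range(60)))
```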
When calibrating the radial artery position in the single-channel video data sample, each frame image in the single-channel video data sample can be manually calibrated (generally manually calibrated by an expert), or after some frame images are manually calibrated, other frame images in the single-channel video data sample are automatically calibrated according to a calibration result. For the latter approach, for example, the following steps may be included:
acquiring coordinate data of the calibration positions in each manually calibrated first image in the single-channel video data sample; wherein the first images are frame images distributed at equal intervals in the time sequence of the single-channel video data sample (i.e., during manual calibration, a calibration is performed once every preset number of frame images);
fitting a first curve of which the abscissa of the calibration position changes along with the sequence number of the images (the sequence number of the frame image in a single-channel video data sample) according to the coordinate data of the calibration position in each first image, and fitting a second curve of which the ordinate of the calibration position changes along with the sequence number of the images;
acquiring abscissa data of the position to be calibrated of each second image according to the sorting serial number and the first curve of each second image, and acquiring ordinate data of the position to be calibrated of each second image according to the sorting serial number and the second curve of each second image; the second image is a frame image except the first image in the single-channel video data sample;
and calibrating the corresponding position of each second image according to the abscissa data of the position to be calibrated of each second image and the ordinate data of the position to be calibrated.
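The curve-fitting auto-calibration described in the steps above can be sketched with NumPy polynomial fits over the manually calibrated keyframes (degree 3 is an arbitrary illustrative choice; the patent does not fix the curve model):

```python
import numpy as np

def auto_calibrate(key_indices, key_xs, key_ys, all_indices, degree=3):
    """Fit x(frame) and y(frame) curves from manually calibrated keyframes
    (the first images), then evaluate them at every frame index to
    calibrate the remaining (second) images."""
    x_curve = np.poly1d(np.polyfit(key_indices, key_xs, degree))
    y_curve = np.poly1d(np.polyfit(key_indices, key_ys, degree))
    return x_curve(all_indices), y_curve(all_indices)

# Toy example: keyframes every 10 frames with linearly drifting coordinates.
keys = np.array([0, 10, 20, 30])
xs_fit, ys_fit = auto_calibrate(keys, 100 + keys, 50 + 2 * keys, np.arange(31))
```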
Automatic calibration in this way can greatly reduce the workload of operators, lower labor intensity and improve working efficiency. Moreover, since the automatic calibration is realized by curve fitting, the radial artery position can be calibrated accurately even if the subject's wrist moves while the near-infrared video data sample is being acquired.
Preferably, after the radial artery position calibration and pixel value normalization of the single-channel video data samples are completed, the processed samples are subjected to data enhancement processing, and after the data enhancement is completed they are divided into a training set, a test set and a verification set. Compared with performing data enhancement after step B1 and before step B2, this approach does not require additional radial artery position calibration and pixel value normalization of the samples obtained through data enhancement, which greatly reduces the workload and improves working efficiency.
In some preferred embodiments, in step B3, the preset hyper-parameters include:
batch size (batch size), batch size 8 or 16;
momentum parameter (momentum), momentum parameter 0.95;
a learning rate (learning rate) which is a dynamic learning rate and whose initial value is 0.001;
the number of iterations (epoch), which is 100.
A large batch size helps the network converge faster, but a batch size that is too large easily exhausts memory; setting the batch size to 8 or 16 gives fast convergence with moderate demands on memory capacity.
The momentum parameter influences the speed at which the gradient approaches the optimal value; setting it to 0.95 ensures that the gradient approaches the optimal value quickly.
The learning rate determines the update speed of the weight, and in this application, the learning rate is set as a dynamic learning rate, the initial value of which is 0.001, and then dynamically decreases according to the change of the loss function. In this embodiment, in step B3, MSE (mean square error) is selected as the loss function during the training process.
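The dynamic learning-rate schedule described above (initial value 0.001, decreasing as the loss stops improving) can be sketched as a small reduce-on-plateau helper (the decay factor and patience values are illustrative assumptions; the patent only states that the rate decreases with the change of the loss function):

```python
class ReduceOnPlateau:
    """Halve the learning rate when the loss has not improved for `patience` epochs."""
    def __init__(self, lr=0.001, factor=0.5, patience=3):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        if loss < self.best:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

sched = ReduceOnPlateau()
losses = [1.0, 0.8, 0.8, 0.8, 0.8]  # improvement, then a plateau
lrs = [sched.step(l) for l in losses]
```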
In practical application, the radial artery positioning accuracy of the neural network model can be calculated by the following formula:
Acc = N_d / N

wherein x_i is the i-th test sample, y_i is the label coordinate of test sample x_i (i.e., the coordinates of the radial artery index point), ŷ_i is the radial artery location predicted by the model for test sample x_i, ε is the pixel threshold (which represents the required positioning accuracy), N is the number of test samples, N_d is the number of test samples for which the distance between the predicted point and the radial artery index point is smaller than the threshold, and Acc is the radial artery positioning accuracy.
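The positioning-accuracy formula can be sketched as follows (a minimal implementation; Euclidean pixel distance is assumed as the distance measure, which the patent does not state explicitly):

```python
import math

def positioning_accuracy(predictions, labels, threshold):
    """Fraction of test samples whose predicted radial-artery point lies
    within `threshold` pixels of the labelled index point."""
    hits = sum(
        1 for (px, py), (lx, ly) in zip(predictions, labels)
        if math.hypot(px - lx, py - ly) < threshold
    )
    return hits / len(labels)

preds = [(100, 100), (200, 260), (500, 500)]
labels = [(110, 100), (200, 200), (100, 100)]
acc = positioning_accuracy(preds, labels, threshold=50)  # only the first is a hit
```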
Based on the radial artery positioning accuracy Acc, the three-dimensional convolutional neural network model of the present application (the 3D CNN model) was compared with existing two-dimensional convolutional neural network models (the AlexNet, Vgg16 and Vgg19 models) on the same test set (where the resolution of the input images was 2048 × 1088); the comparison results are shown in the following table:
As can be seen from the above table, at an image resolution of 2048 × 1088: when the positioning accuracy is required to be within 50 pixels, the radial artery positioning accuracy of the three-dimensional convolutional neural network model reaches 87%, higher than that of the other models; within 40 pixels, it reaches 61%, higher than the other models; within 30 pixels, it reaches 45%, higher than the other models; and within 20 pixels, it reaches 27%, higher than the radial artery positioning accuracies of the other models. Therefore, the three-dimensional convolutional neural network model locates the radial artery much more effectively than other common two-dimensional convolutional neural network models, has greater application potential, can be applied in real wrist palpation positioning scenarios, and brings key convenience to pulse diagnosis.
According to the radial artery palpation positioning method, the near-infrared video information of the wrist of the tested person is obtained; extracting green channel images from each frame of image of the near-infrared video information to obtain single-channel video data; performing pixel value normalization processing on the single-channel video data to obtain video data to be detected; inputting video data to be detected into a pre-trained three-dimensional convolution neural network model to obtain radial artery coordinates of a detected person; thereby realizing the accurate positioning of the radial artery without contact.
Referring to fig. 2, the present application provides a radial artery palpation positioning device for positioning the radial artery position at the wrist of the subject, comprising:
the first acquisition module 1 is used for acquiring near-infrared video information of the wrist of a measured person;
the first extraction module 2 is used for extracting green channel images from each frame of image of the near-infrared video information to obtain single-channel video data;
the normalization processing module 3 is used for carrying out pixel value normalization processing on the single-channel video data to obtain video data to be detected;
and the positioning module 4 is used for inputting the video data to be detected into the pre-trained three-dimensional convolutional neural network model to obtain the radial artery coordinates of the detected person.
The radial artery palpation positioning device can extract time and space characteristics of the video data of the wrist through a three-dimensional convolution neural network model so as to complete mapping from the video data to space coordinates, thereby realizing non-contact accurate positioning of the radial artery.
Specifically, the first obtaining module 1 is configured to, when obtaining near-infrared video information of a wrist of a subject, perform:
acquiring near-infrared video information of preset duration acquired at a preset frame rate; the preset frame rate is 28-32 fps, and the preset duration is 4-6 s.
The near-infrared video information can be collected through the infrared camera, and the wrist of the testee needs to be kept static in the collection process.
The acquisition frame rate (namely the preset frame rate) can be set according to actual needs, and is preferably 30 fps; the sampling time (i.e. the preset time) can be set according to actual needs, and is preferably 5 s.
The first extraction module 2 extracts the green channel image from each frame of the near-infrared video information to obtain a plurality of green channel images, which are arranged according to the order of their corresponding original frames in the near-infrared video information to form a new image sequence, namely the single-channel video data. Extracting the green channel image from an image is prior art and is not described in detail here.
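The green-channel extraction performed by the first extraction module can be sketched as follows (assuming frames arrive as height × width × 3 arrays in RGB channel order; the channel layout is an assumption — OpenCV, for instance, delivers BGR):

```python
import numpy as np

def extract_green_channel(frames):
    """Take channel index 1 (green in RGB order) from each frame,
    preserving the original frame order to form single-channel video data."""
    return [frame[:, :, 1] for frame in frames]

# Two tiny 2x2 RGB frames with constant channels, for illustration.
frames = [np.dstack([np.full((2, 2), c) for c in (r, g, b)])
          for r, g, b in [(10, 20, 30), (40, 50, 60)]]
single_channel = extract_green_channel(frames)
```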
In some embodiments, the normalization processing module 3 is configured to, when performing pixel value normalization processing on single-channel video data to obtain video data to be detected, perform:
carrying out pixel value normalization processing on each frame of image of single-channel video data according to the following formula:
x_i' = (x_i − x_min) / (x_max − x_min)

wherein x_i' is the normalized pixel value of the i-th pixel point in the frame image, x_i is the pixel value of the i-th pixel point in the frame image before normalization processing, x_min is the minimum pixel value of the frame image before normalization processing (i.e., the minimum value among the pixel values of the respective pixel points of the frame image), and x_max is the maximum pixel value of the frame image before normalization processing (i.e., the maximum value among the pixel values of the respective pixel points of the frame image).
It should be noted that the above processing is performed separately for each frame image of the single-channel video data; for different frame images, the x_min and x_max values obtained in the above step can be different.
Pixel value normalization of the single-channel video data reduces the pixel value range of each frame image from [0,255] to [0,1], removing the unit limitation of the data and achieving the following effects: the computation load of the model (the three-dimensional convolutional neural network model) is reduced and its convergence speed is improved; differences in data distribution are reduced, which can improve model accuracy; and normalization helps prevent gradient explosion or vanishing gradients during back-propagation of the model.
In the present embodiment, referring to fig. 4, the resolution of the near-infrared video information is 2048 × 1088;
the three-dimensional convolutional neural network model comprises an input layer 90, a first convolutional layer 91, a first pooling layer 92, a second convolutional layer 93, a second pooling layer 94, a third convolutional layer 95, a third pooling layer 96, a first full-connection layer 97, a second full-connection layer 98 and an output layer 99 which are connected in sequence;
the convolution kernels of the first convolution layer 91, the second convolution layer 93 and the third convolution layer 95 are all three-dimensional convolution kernels with the size of 15 × 3, the window moving step length during convolution processing is 1, the number of the convolution kernels of the first convolution layer 91 is 32, the number of the convolution kernels of the second convolution layer 93 is 16, and the number of the convolution kernels of the third convolution layer 95 is 8;
the pooling methods of the first, second and third pooling layers 92, 94 and 96 are all 2 × 2 maximum pooling methods, and the window moving step size during the pooling process is all 2;
the first full connection layer 97 and the second full connection layer 98 are both linear layers (linear layers).
Through its three-dimensional convolution kernels, the three-dimensional convolutional neural network model can effectively extract the temporal and spatial features of the video data to complete the mapping from video data to spatial coordinates, while the pooling layers reduce the frame size of the input video data, so the radial artery can be located quickly and accurately. Fig. 6 shows the image data output by each convolutional layer when radial artery localization is performed on exemplary video data to be detected using the three-dimensional convolutional neural network model of this embodiment; Fig. 5 shows the image data output by each convolutional layer when the same exemplary video data is processed by a two-dimensional convolutional neural network model under identical network conditions (its convolution kernels are two-dimensional; all other network conditions match the three-dimensional model of this embodiment). As can be seen from Figs. 5 and 6, under the same network conditions, the three-dimensional convolutional neural network model of this embodiment extracts data features comparatively better.
It should be noted that the resolution of the near-infrared video information is not limited to 2048 × 1088, and for near-infrared video information with other resolutions, the size of the convolution kernel in the three-dimensional convolution neural network model and the number of convolution kernels of each convolution layer need to be adjusted accordingly.
Preferably, the three-dimensional convolutional neural network model is obtained by training through the following steps:
B1. collecting near-infrared video data samples of wrists of a plurality of different subjects; wherein a plurality of near-infrared video data samples are acquired for each subject;
B2. after preprocessing the near-infrared video data samples, dividing them in a 4:1:1 ratio into a training set, a test set and a verification set;
B3. training a three-dimensional convolutional neural network model by using a training set based on a preset hyper-parameter;
B4. performing cross validation and evaluation on the trained three-dimensional convolutional neural network model by using a test set;
B5. adjusting the hyper-parameters multiple times, and verifying the three-dimensional convolutional neural network model on the verification set after each adjustment, so as to obtain the optimal three-dimensional convolutional neural network model.
In step B1, preferably, not less than 10 near-infrared video data samples are collected for each subject, and the posture of the wrist is adjusted after each sample is collected, the adjustment including at least one of position adjustment (front, back, left and right positions), azimuth angle adjustment (by swinging the forearm left and right), and roll angle adjustment (i.e., the angle in the circumferential direction of the wrist). Under existing experimental conditions it is difficult to acquire a large image data set, yet training a model with deep learning techniques requires a data set of a certain size; therefore the currently limited data can be expanded by collecting data from the same subject multiple times and applying operations such as inversion and rotation, thereby enriching the training set of the neural network model and enhancing the robustness of the model.
When the near-infrared video data samples are collected, collecting the near-infrared video data samples with preset duration at a preset frame rate; the preset frame rate is 28-32 fps, and the preset duration is 4-6 s.
The near-infrared video data samples can be collected through the infrared camera, and the wrists of the testee need to be kept static in the process of collecting each infrared video data sample.
The acquisition frame rate (namely the preset frame rate) can be set according to actual needs, and is preferably 30 fps; the sampling time (i.e. the preset time) can be set according to actual needs, and is preferably 5 s.
In some preferred embodiments, the collected near-infrared video data samples can be subjected to data enhancement processing to obtain more near-infrared video data samples; thus, the step of training the three-dimensional convolutional neural network model further comprises: and carrying out data enhancement processing on the collected near-infrared video data sample.
Different near-infrared video data samples can be obtained by performing at least one of operations such as rotation, flipping and scaling on the collected near-infrared video data samples, which can greatly expand the near-infrared video data sample set and enhance the generalization ability of the trained three-dimensional convolutional neural network model. The step of performing data enhancement processing on the collected near-infrared video data samples can be performed after step B1 and before step B2.
In this embodiment, step B2 includes:
extracting green channel images from each frame of image of the near-infrared video data sample to obtain a single-channel video data sample;
calibrating the radial artery position in the single-channel video data sample;
carrying out pixel value normalization processing on the calibrated single-channel video data sample;
and dividing the pixel-value-normalized single-channel video data samples in a 4:1:1 ratio into a training set, a test set and a validation set.
When calibrating the radial artery position in the single-channel video data sample, each frame image in the single-channel video data sample can be manually calibrated (generally manually calibrated by an expert), or after some frame images are manually calibrated, other frame images in the single-channel video data sample are automatically calibrated according to a calibration result. For the latter approach, for example, the following steps may be included:
acquiring coordinate data of the calibration positions in each manually calibrated first image in the single-channel video data sample; wherein the first images are frame images distributed at equal intervals in the time sequence of the single-channel video data sample (i.e., during manual calibration, a calibration is performed once every preset number of frame images);
fitting a first curve of which the abscissa of the calibration position changes along with the sequence number of the images (the sequence number of the frame image in a single-channel video data sample) according to the coordinate data of the calibration position in each first image, and fitting a second curve of which the ordinate of the calibration position changes along with the sequence number of the images;
acquiring abscissa data of the position to be calibrated of each second image according to the sorting serial number and the first curve of each second image, and acquiring ordinate data of the position to be calibrated of each second image according to the sorting serial number and the second curve of each second image; the second image is a frame image except the first image in the single-channel video data sample;
and calibrating the corresponding position of each second image according to the abscissa data of the position to be calibrated of each second image and the ordinate data of the position to be calibrated.
Automatic calibration in this way can greatly reduce the workload of operators, lower labor intensity and improve working efficiency. Moreover, since the automatic calibration is realized by curve fitting, the radial artery position can be calibrated accurately even if the subject's wrist moves while the near-infrared video data sample is being acquired.
Preferably, after the radial artery position calibration and pixel value normalization of the single-channel video data samples are completed, the processed samples are subjected to data enhancement processing, and after the data enhancement is completed they are divided into a training set, a test set and a verification set. Compared with performing data enhancement after step B1 and before step B2, this approach does not require additional radial artery position calibration and pixel value normalization of the samples obtained through data enhancement, which greatly reduces the workload and improves working efficiency.
In some preferred embodiments, in step B3, the preset hyper-parameters include:
batch size (batch size), batch size 8 or 16;
momentum parameter (momentum), momentum parameter 0.95;
a learning rate (learning rate) which is a dynamic learning rate and whose initial value is 0.001;
the number of iterations (epoch), which is 100.
A large batch size helps the network converge faster, but a batch size that is too large easily exhausts memory; setting the batch size to 8 or 16 gives fast convergence with moderate demands on memory capacity.
The momentum parameter influences the speed at which the gradient approaches the optimal value; setting it to 0.95 ensures that the gradient approaches the optimal value quickly.
The learning rate determines the update speed of the weight, and in this application, the learning rate is set as a dynamic learning rate, the initial value of which is 0.001, and then dynamically decreases according to the change of the loss function. In this embodiment, in step B3, MSE (mean square error) is selected as the loss function during the training process.
According to the radial artery palpation positioning device, the near-infrared video information of the wrist of the tested person is obtained; extracting green channel images from each frame of image of the near-infrared video information to obtain single-channel video data; performing pixel value normalization processing on the single-channel video data to obtain video data to be detected; inputting video data to be detected into a pre-trained three-dimensional convolution neural network model to obtain radial artery coordinates of a detected person; thereby realizing the accurate positioning of the radial artery without contact.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the present disclosure provides an electronic device, including: the processor 301 and the memory 302, the processor 301 and the memory 302 being interconnected and communicating with each other via the communication bus 303 and/or other types of connection mechanisms (not shown), the memory 302 storing a computer program executable by the processor 301, the processor 301 executing the computer program when the electronic device is running to perform the radial artery palpation location method in any of the alternative implementations of the above embodiments to implement the following functions: acquiring near-infrared video information of the wrist of a tested person; extracting green channel images from each frame of image of the near-infrared video information to obtain single-channel video data; performing pixel value normalization processing on the single-channel video data to obtain video data to be detected; and inputting the video data to be detected into a pre-trained three-dimensional convolution neural network model to obtain the radial artery coordinates of the detected person.
The embodiment of the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the radial artery palpation positioning method in any optional implementation manner of the foregoing embodiment is executed, so as to implement the following functions: acquiring near-infrared video information of the wrist of a tested person; extracting green channel images from each frame of image of the near-infrared video information to obtain single-channel video data; performing pixel value normalization processing on the single-channel video data to obtain video data to be detected; and inputting the video data to be detected into a pre-trained three-dimensional convolution neural network model to obtain the radial artery coordinates of the detected person. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a logical division; other divisions are possible in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in another form.
In addition, units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit its scope; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within its protection scope.
Claims (10)
1. A radial artery palpation positioning method for locating the radial artery at the wrist of a subject, characterized by comprising the following steps:
A1. acquiring near-infrared video information of the wrist of a subject;
A2. extracting a green channel image from each frame of the near-infrared video information to obtain single-channel video data;
A3. performing pixel value normalization on the single-channel video data to obtain video data to be detected;
A4. and inputting the video data to be detected into a pre-trained three-dimensional convolutional neural network model to obtain the radial artery coordinates of the subject.
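The preprocessing recited in steps A2 and A3 can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: a synthetic random clip stands in for the near-infrared video, and the function names are illustrative.

```python
import numpy as np

def extract_green_channel(video_rgb: np.ndarray) -> np.ndarray:
    """Step A2: keep only the green channel, (frames, H, W, 3) -> (frames, H, W)."""
    return video_rgb[..., 1]

def normalize_frames(video: np.ndarray) -> np.ndarray:
    """Step A3: min-max normalize each frame independently to [0, 1]."""
    video = video.astype(np.float32)
    mins = video.min(axis=(1, 2), keepdims=True)
    maxs = video.max(axis=(1, 2), keepdims=True)
    return (video - mins) / (maxs - mins + 1e-8)  # epsilon guards constant frames

# Tiny synthetic clip standing in for the near-infrared wrist video.
clip = np.random.randint(0, 256, size=(150, 64, 64, 3), dtype=np.uint8)
single_channel = extract_green_channel(clip)   # step A2
to_detect = normalize_frames(single_channel)   # step A3
print(to_detect.shape)
```

The resulting `to_detect` array is what would be fed to the three-dimensional convolutional neural network model in step A4.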
2. The radial artery palpation positioning method of claim 1, wherein step A1 comprises:
acquiring the near-infrared video information of a preset duration collected at a preset frame rate, wherein the preset frame rate is 28-32 fps and the preset duration is 4-6 s.
3. The radial artery palpation positioning method of claim 1, wherein step A3 comprises:
performing pixel value normalization on each frame of the single-channel video data according to the following formula:
x_i' = (x_i - x_min) / (x_max - x_min)
wherein x_i' is the normalized pixel value of the i-th pixel in the frame, x_i is the pixel value of the i-th pixel before normalization, x_min is the minimum pixel value of the frame before normalization, and x_max is the maximum pixel value of the frame before normalization.
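The min-max normalization of claim 3 can be checked on a toy example; the 2x2 "frame" below is purely illustrative.

```python
import numpy as np

# Toy 2x2 frame: the formula maps the frame minimum to 0 and the maximum to 1.
frame = np.array([[10.0, 60.0],
                  [110.0, 210.0]])
norm = (frame - frame.min()) / (frame.max() - frame.min())
print(norm)
```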
4. The radial artery palpation positioning method of claim 1, wherein the resolution of the near-infrared video information is 2048 x 1088;
the three-dimensional convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a first fully connected layer, a second fully connected layer, and an output layer, connected in sequence;
the convolution kernels of the first, second, and third convolutional layers are all three-dimensional convolution kernels of size 15 x 3, the window stride during convolution is 1, and the numbers of convolution kernels of the first, second, and third convolutional layers are 32, 16, and 8, respectively;
the first, second, and third pooling layers all use 2 x 2 max pooling with a window stride of 2;
the first fully connected layer and the second fully connected layer are both linear layers.
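The shape arithmetic of the three conv + pool stages in claim 4 can be traced with a small sketch. Everything here is an assumption layered on the claim: 'valid' convolution is assumed (the claim does not state padding), the kernel is assumed to be 15 x 3 x 3 (the claim's "15 x 3" figure appears truncated), and the 150 x 64 x 64 input clip is hypothetical, not the claimed 2048 x 1088 resolution.

```python
def conv3d_out(shape, kernel, stride=1):
    """Output size of a 'valid' 3-D convolution (padding is not stated in the claim)."""
    return tuple((s - k) // stride + 1 for s, k in zip(shape, kernel))

def pool3d_out(shape, window=2, stride=2):
    """Output size of max pooling with the claimed 2 x 2 window and stride 2."""
    return tuple((s - window) // stride + 1 for s in shape)

# Hypothetical input clip: 150 frames at 64 x 64.
shape = (150, 64, 64)
for _ in range(3):  # three conv + pool stages, kernel assumed 15 x 3 x 3
    shape = conv3d_out(shape, (15, 3, 3))
    shape = pool3d_out(shape)
print(shape)
```

Tracing by hand under these assumptions: (150, 64, 64) → conv (136, 62, 62) → pool (68, 31, 31) → conv (54, 29, 29) → pool (27, 14, 14) → conv (13, 12, 12) → pool (6, 6, 6), which would then be flattened for the two fully connected layers.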
5. The radial artery palpation positioning method of claim 4, wherein the three-dimensional convolutional neural network model is trained by the following steps:
B1. collecting near-infrared video data samples of the wrists of a plurality of different subjects, wherein a plurality of the near-infrared video data samples are collected for each subject;
B2. after preprocessing the near-infrared video data samples, dividing them into a training set, a test set, and a verification set in a ratio of : 1: 1;
B3. training the three-dimensional convolutional neural network model with the training set based on preset hyper-parameters;
B4. performing cross-validation and evaluation on the trained three-dimensional convolutional neural network model with the test set;
B5. adjusting the hyper-parameters multiple times, and verifying the three-dimensional convolutional neural network model with the verification set after each adjustment, so as to obtain an optimal three-dimensional convolutional neural network model.
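Step B2's three-way split can be sketched in plain Python. The 8:1:1 ratio below is purely illustrative, since the leading figure of the claim's ratio is garbled in the source, and the function name is hypothetical.

```python
import random

def split_samples(samples, ratio=(8, 1, 1), seed=0):
    """Shuffle and split into (training, test, verification) sets by the given ratio."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    samples = samples[:]
    rng.shuffle(samples)
    total = sum(ratio)
    n_train = len(samples) * ratio[0] // total
    n_test = len(samples) * ratio[1] // total
    return (samples[:n_train],
            samples[n_train:n_train + n_test],
            samples[n_train + n_test:])

train, test, val = split_samples(list(range(100)))
print(len(train), len(test), len(val))
```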
6. The radial artery palpation positioning method of claim 5, wherein step B2 comprises:
extracting a green channel image from each frame of image of the near-infrared video data sample to obtain a single-channel video data sample;
calibrating a radial artery position in the single-channel video data sample;
and carrying out pixel value normalization processing on the calibrated single-channel video data sample.
7. The radial artery palpation positioning method of claim 5, wherein the preset hyper-parameters comprise:
a batch size, which is 8 or 16;
a momentum parameter, which is 0.95;
a learning rate, which is dynamic with an initial value of 0.001;
and a number of iterations, which is 100.
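These hyper-parameters can be collected into a configuration sketch. The step-decay schedule below is a hypothetical choice: the claim only says the learning rate is dynamic with an initial value of 0.001, not which schedule is used.

```python
# Hyper-parameters as listed in claim 7 (illustrative dictionary, not the
# patent's training code).
HYPERPARAMS = {
    "batch_size": 8,      # the claim allows 8 or 16
    "momentum": 0.95,
    "initial_lr": 0.001,
    "iterations": 100,
}

def dynamic_lr(epoch, initial_lr=0.001, decay=0.1, step=30):
    """Hypothetical step decay: divide the rate by 10 every `step` epochs."""
    return initial_lr * (decay ** (epoch // step))

print(dynamic_lr(0), dynamic_lr(30), dynamic_lr(60))
```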
8. A radial artery palpation positioning device for locating the radial artery at the wrist of a subject, characterized by comprising:
a first acquisition module, configured to acquire near-infrared video information of the wrist of a subject;
a first extraction module, configured to extract a green channel image from each frame of the near-infrared video information to obtain single-channel video data;
a normalization processing module, configured to perform pixel value normalization on the single-channel video data to obtain video data to be detected;
and a positioning module, configured to input the video data to be detected into a pre-trained three-dimensional convolutional neural network model to obtain the radial artery coordinates of the subject.
9. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program executable by the processor, and the processor executes the computer program to perform the steps of the radial artery palpation positioning method according to any one of claims 1 to 7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the radial artery palpation positioning method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210011181.2A CN114041758B (en) | 2022-01-06 | 2022-01-06 | Radial artery palpation positioning method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114041758A true CN114041758A (en) | 2022-02-15 |
CN114041758B CN114041758B (en) | 2022-05-03 |
Family
ID=80213456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210011181.2A Active CN114041758B (en) | 2022-01-06 | 2022-01-06 | Radial artery palpation positioning method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114041758B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117617910A (en) * | 2024-01-23 | 2024-03-01 | 季华实验室 | Pulse positioning model training method, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6580835B1 (en) * | 1999-06-02 | 2003-06-17 | Eastman Kodak Company | Method for enhancing the edge contrast of a digital image |
CN105595971A (en) * | 2015-12-15 | 2016-05-25 | 清华大学 | Pulse condition information collecting system based on video and collecting method thereof |
CN106682607A (en) * | 2016-12-23 | 2017-05-17 | 山东师范大学 | Offline face recognition system and offline face recognition method based on low-power-consumption embedded and infrared triggering |
CN106780569A (en) * | 2016-11-18 | 2017-05-31 | 深圳市唯特视科技有限公司 | A kind of human body attitude estimates behavior analysis method |
CN108829232A (en) * | 2018-04-26 | 2018-11-16 | 深圳市深晓科技有限公司 | The acquisition methods of skeleton artis three-dimensional coordinate based on deep learning |
CN112396565A (en) * | 2020-11-19 | 2021-02-23 | 同济大学 | Method and system for enhancing and segmenting blood vessels of images and videos of venipuncture robot |
CN113303771A (en) * | 2021-07-30 | 2021-08-27 | 天津慧医谷科技有限公司 | Pulse acquisition point determining method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114041758B (en) | 2022-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yani et al. | Application of transfer learning using convolutional neural network method for early detection of terry’s nail | |
CN109829880A (en) | A kind of CT image detecting method based on deep learning, device and control equipment | |
CN108830826A (en) | A kind of system and method detecting Lung neoplasm | |
CN110200601A (en) | A kind of pulse condition acquisition device and system | |
CN106909778A (en) | A kind of Multimodal medical image recognition methods and device based on deep learning | |
CN106651827A (en) | Fundus image registering method based on SIFT characteristics | |
CN107330949A (en) | A kind of artifact correction method and system | |
US11501441B2 (en) | Biomarker determination using optical flows | |
CN112465905A (en) | Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning | |
Loureiro et al. | Using a skeleton gait energy image for pathological gait classification | |
US11615508B2 (en) | Systems and methods for consistent presentation of medical images using deep neural networks | |
CN107280118A (en) | A kind of Human Height information acquisition method and the fitting cabinet system using this method | |
CN112001122A (en) | Non-contact physiological signal measuring method based on end-to-end generation countermeasure network | |
CN111588353A (en) | Body temperature measuring method | |
CN109919212A (en) | The multi-dimension testing method and device of tumour in digestive endoscope image | |
CN114241187A (en) | Muscle disease diagnosis system, device and medium based on ultrasonic bimodal images | |
CN112884759B (en) | Method and related device for detecting metastasis state of axillary lymph nodes of breast cancer | |
CN114041758B (en) | Radial artery palpation positioning method and device, electronic equipment and storage medium | |
CN112465773A (en) | Facial nerve paralysis disease detection method based on human face muscle movement characteristics | |
CN115578789A (en) | Scoliosis detection apparatus, system, and computer-readable storage medium | |
CN113313714B (en) | Coronary OCT (optical coherence tomography) image lesion plaque segmentation method based on improved U-Net network | |
CN117690583B (en) | Internet of things-based rehabilitation and nursing interactive management system and method | |
CN117770788A (en) | Electrical impedance imaging body movement interference suppression method based on signal structure characteristics | |
CN117883074A (en) | Parkinson's disease gait quantitative analysis method based on human body posture video | |
CN109480842B (en) | System and apparatus for diagnosing functional dyspepsia |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||