CN110276271A - Non-contact heart rate estimation method fusing IPPG and depth information against noise interference - Google Patents
Non-contact heart rate estimation method fusing IPPG and depth information against noise interference
- Publication number
- CN110276271A CN110276271A CN201910462756.0A CN201910462756A CN110276271A CN 110276271 A CN110276271 A CN 110276271A CN 201910462756 A CN201910462756 A CN 201910462756A CN 110276271 A CN110276271 A CN 110276271A
- Authority
- CN
- China
- Prior art keywords
- face
- image
- heart rate
- ippg
- depth information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02416—Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The present invention relates to a non-contact heart rate estimation method fusing IPPG and depth information against noise interference, comprising the following steps: 1) acquiring facial images of the subject and their corresponding depth information with an RGBD camera; 2) performing face detection and localization on the acquired facial images with a template-matching face detection algorithm to form a monitor region; 3) tracking the motion of the face across the monitor regions of the successively acquired facial images with a face tracking algorithm to obtain multiple facial images with motion changes; 4) eliminating the motion-artifact interference of the obtained facial images with motion changes by a weighted averaging method, realizing dynamic face recognition; 5) adaptively selecting the ROI region of the recognized facial image using the depth information; 6) extracting the source signal from the ROI region with the IPPG technique, denoising the signal with the empirical mode decomposition (EMD) method, and performing energy spectrum analysis with an ARMA model to obtain the subject's heart rate.
Description
Technical field
The present invention relates to the field of heart rate estimation techniques, and more particularly to a non-contact heart rate estimation method fusing image photoplethysmography (IPPG) and depth information that resists motion-artifact interference.
Background technique
According to World Health Organization data, more than 17.5 million people die of cardiovascular disease every year, making it the leading cause of non-infectious death. Heart rate is an important physiological parameter reflecting the state of the heart and a key indicator for cardiovascular disease prevention and clinical diagnosis.
Among traditional heart rate measurement methods, the electrocardiogram (ECG) is the standard method of clinical heart rate measurement. It requires medical staff to place the electrodes at the correct lead positions; the procedure is cumbersome and requires professional operation. A conductive medium must be applied between the electrodes and the skin, or pressure must be applied to fix them, which causes considerable discomfort to the subject, and the complicated wiring is inconvenient to operate. Existing heartbeat detection devices and related signal processing techniques mainly include the following schemes:
(1) Patch-type heart rate monitor: electrode patches are attached to specific positions on the subject's body and the heart rate is extracted from body signals. However, the adhesive electrodes inevitably cause discomfort during detection, the measurement environment is restricted, long-term real-time detection is impossible, and the procedure is complicated and inconvenient to operate.
(2) Capacitive heart rate measurement: based on the capacitive-coupling principle, non-contact heart rate detection is achieved. Capacitive non-contact ECG sensors offer good ECG measurement performance and high reliability and can be used for long-term measurement of the ECG signal. However, capacitive heart rate measurements are easily affected by motion artifacts, so the results obtained in this way are often unsatisfactory.
(3) Photoelectric pulse measurement: exploits the fact that blood is far more opaque than ordinary tissue, its light transmissivity being several tens of times lower, to detect the pulse signal in the blood. However, whether the reflective or the transmissive mode is used, photoelectric pulse measurement is likewise easily affected by motion artifacts, and the sensor must also be held tightly against the skin to reduce interference from ambient light.
(4) Photoplethysmography (PPG) heart rate measurement: the heart rate is computed from the light absorption of the skin tissue, recorded after the light has passed through the skin and tissue. Since there is no direct contact with the subject, this is a non-contact measurement with a good user experience. However, PPG heart rate detection is limited by the ambient light: once the illumination changes or is too weak, the measurement becomes inaccurate. The subject must also sit in a fixed position, since position changes degrade the result, and under motion the ROI selection also becomes inaccurate. Moreover, eliminating the noise in the signal caused by relative motion between the subject and the image acquisition device remains a technical difficulty.
(5) Image pulse measurement: the distortion of the blood vessels caused by the heartbeat is observed through the resulting deformation of the skin around the vessels. However, image pulse measurement is easily disturbed by motion artifacts and vibration and is affected by the illumination conditions of the subject.
(6) Signal processing: the main processing techniques at present are finite impulse response (FIR) filters, bandpass filters, and moving average filters.
In recent years, with the development of electronic technology and measurement methods, contactless heart rate measurement has been proposed and realized. Photoplethysmography is a non-invasive method that uses a photoelectric sensor to detect blood volume changes in living tissue. Since different human tissues absorb light differently, the light intensity received after transmission or reflection varies regularly. The regular contraction and relaxation of the heart is the main cause of this regular variation, so these fluctuation signals are directly related to the heartbeat. However, the signals acquired by the above measurement methods are easily disturbed by motion artifacts, resulting in low accuracy.
Image photoplethysmography (image photoplethysmography, IPPG) is a "remote" (beyond 0.5 m) contactless physiological signal measurement technique developed in recent years on the basis of traditional photoplethysmography. With its non-contact measurement, low cost, and ease of operation, IPPG can realize clinical and routine testing under specific conditions. However, the image photoplethysmography (IPPG) methods proposed so far generally use a fixed ROI region and require the subject to sit still in front of the image acquisition device, without considering subject movement. They cannot overcome the inaccuracy of the estimated heart rate signal caused by the noise that a fixed ROI region introduces when the subject moves.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the present invention is to provide a non-contact heart rate estimation method fusing image photoplethysmography (IPPG) and depth information that is reasonably designed, highly accurate, and resistant to motion-noise interference.
To achieve the above object, the invention adopts the following technical scheme:
A non-contact heart rate estimation method fusing IPPG and depth information against noise interference comprises the following steps:
1) Information collection: acquire facial images of the subject and their corresponding depth information with an RGBD camera;
2) Face detection: perform face detection and localization on the acquired facial images with a template-matching face detection algorithm to form a monitor region;
3) Face tracking: track the motion of the face across the monitor regions of the successively acquired facial images with a face tracking algorithm to obtain multiple facial images with motion changes;
4) Motion-artifact elimination: eliminate the motion-artifact interference of the obtained facial images with motion changes by a weighted averaging method, realizing dynamic face recognition;
5) ROI extraction: adaptively select the ROI region of the recognized facial image using the depth information;
6) Heart rate calculation with the IPPG technique: extract the source signal from the ROI region with the IPPG technique, denoise the signal with the empirical mode decomposition (EMD) method, and perform energy spectrum analysis with an ARMA model to obtain the subject's heart rate.
Preferably, the RGBD camera of step 1) comprises an RGB camera and a depth camera. The RGB camera captures the RGB image of the human body and the depth camera captures the depth image of the face; after calibration of the RGB camera and the depth camera, the depth image captured by the depth camera and the RGB image captured by the RGB camera are registered so that they coincide.
Preferably, the face detection of step 2) proceeds as follows: take the acquired facial image as the input image and search it for rectangular regions of every possible scale and position, i.e. the candidate windows to be detected. Each candidate window is processed as follows: first perform coarse screening by eye-template matching, then apply mean-variance normalization to the image in the window so that the intensity distribution of the face region is standardized and the influence of illumination variation is eliminated, and then perform face-template matching; if the matching error does not exceed the set threshold, the window is output as a candidate face.
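As an illustration of the candidate-window processing described above, the following sketch shows mean-variance normalization of a window followed by a normalized template-matching score. The function names and the correlation form of the score are our choices, not taken from the patent:

```python
import math

def normalize_window(win):
    # Mean-variance normalization: zero mean, unit standard deviation,
    # removing the gain/offset differences caused by illumination changes.
    n = len(win)
    mu = sum(win) / n
    var = sum((v - mu) ** 2 for v in win) / n
    sd = math.sqrt(var)
    if sd == 0.0:
        sd = 1.0  # constant window: avoid division by zero
    return [(v - mu) / sd for v in win]

def match_score(win, template):
    # Normalized correlation between a candidate window and a template
    # (both flattened grayscale patches of equal length); a score of 1.0
    # is a perfect match up to brightness and contrast.
    a = normalize_window(win)
    b = normalize_window(template)
    return sum(x * y for x, y in zip(a, b)) / len(a)
```

A detector would accept the window as a candidate face when this score clears (equivalently, when the matching error stays below) the set threshold.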
Preferably, the face tracking of step 3) proceeds as follows: assume that at most one face appears in each frame of the sequence and set the initial monitor region to R0. A face is first detected within the monitor region of each frame; if a face is detected, the new monitor region of the next frame is computed from its localization result according to the monitor-region formula, and each frame is processed in this way. If detection fails, the monitor region is kept unchanged and detection continues in subsequent frames, to prevent tracking failure caused by an occasional missed detection. When the number of missed-detection frames exceeds a certain count, i.e. the missed detections time out, the face is considered to have disappeared, and detection of any newly appearing face restarts in the initial monitor region in subsequent frames.
Assume the monitor region is expressed as a six-tuple LR = (xmin, xmax, ymin, ymax, chmin, chmax), where xmin, xmax is the range of the face center in the x direction, ymin, ymax is the range of the face center in the y direction, and chmin, chmax is the range of the face scale. The face region is approximated by a square and denoted by the triple L = (xcenter, ycenter, chface), whose elements successively describe the center and the size of the face. The formula for the monitor region is then as follows:
where a and b bound the change of the face's x and y positions between two frames, respectively, and c is the maximum change of the face scale.
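The monitor-region formula itself is given in the patent as an equation that is not reproduced in this text. The sketch below therefore assumes a symmetric expansion of the detected face triple by the bounds a, b, c, which is one plausible reading consistent with the six-tuple definition above:

```python
def update_monitor_region(face, a, b, c):
    """Compute the next frame's monitor region LR from the current face
    localization L = (xcenter, ycenter, chface).

    a, b: assumed maximum inter-frame motion of the face center in x and y;
    c: assumed maximum scale-change factor between frames.
    NOTE: the exact formula in the patent is an unreproduced equation; this
    symmetric expansion is an assumed form, not the patent's own.
    """
    xc, yc, ch = face
    return (xc - a, xc + a,   # xmin, xmax
            yc - b, yc + b,   # ymin, ymax
            ch / c, ch * c)   # chmin, chmax
```

On a detection failure the tuple is simply carried over unchanged, as the tracking procedure above specifies.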
Preferably, the motion-artifact elimination of step 4) proceeds as follows: according to the left-right swing angle of the face, define a weight for each face image and construct a weighted-average face by the proposed construction strategy; then, based on the pitch-angle variation of the face, divide the images into three levels (looking down, looking straight, and looking up), construct a weighted-mean face within each level, and form a weighted-mean-face matrix.
Assume the motion-changed grayscale face image is Ij(x, y). The weighted-mean face is then given by the following formula:
where ZM is the total number of motion images of the same person, j indexes the j-th changed image, x and y are the two-dimensional image plane coordinates, and ωj is the weight of the corresponding j-th facial image.
The determination of ωj is related to the left-right swing angle of the face, and ωj is computed as follows: assume the coordinates of the two eyes before the left-right swing are A(xA, yA) and B(xB, yB), and after the face swings to the right they are A'(xA', yA') and B'(xB', yB'); the center of the circle containing the face cross-section is O(xo, yo), and the rightward swing angle of the face is ∠δ. The weight ωK of the K-th facial image is then computed by the following formula:
where δk is the left-right swing angle of the corresponding k-th facial image; solving for the weight ωk thus reduces to computing the swing angle δk.
The eye coordinates A(xA, yA), B(xB, yB) and the post-swing eye coordinates A'(xA', yA'), B'(xB', yB') determine the positions. Let the radius of the circle on which the eyes rotate be R. The center O(xo, yo) is obtained by the following calculation: combining and simplifying the two equations (6) and (7) gives equation (8), and similarly equation (9). Combining (8) and (9) yields the center O(xo, yo); substituting the result into equation (6) gives the radius R, and the swing angle δ can then be solved from equation (5). The swing angle δ is therefore a function of the eye coordinate values, i.e.
δ = f(xA, xB, xA', xB')    (10)
The left-right swing angle δ and the pitch angle γ define the partition of the motion-change range of the face and the corresponding weighted-mean-face relation table. Given the motion-changed grayscale face image Ii(x, y), the weighted-mean face for a left-right swing angle within 0°-90° is defined as in formula (11):
where ZM is the total number of faces within a given pitch-angle and swing-angle range, j indexes the j-th face, x and y are the two-dimensional image plane coordinates, p and q index the mean face of the corresponding variation range, and F'p,q denotes the weighted-mean face for swing angles between -90° and 0°.
The weighted-mean faces thus calculated finally form the weighted-mean-face matrix of the face under multiple motion changes:
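The formula images for equations (5)-(11) are not reproduced in this text, so the sketch below substitutes assumed forms: the swing angle is estimated from the foreshortening of the horizontal eye spacing (the projected distance shrinks roughly by cos δ under a yaw rotation), and the weights ωj are taken proportional to cos δj so that near-frontal images dominate the weighted mean. Both choices are our assumptions, not the patent's exact formulas:

```python
import math

def yaw_angle(xA, xB, xA2, xB2):
    # Assumed stand-in for the circle geometry of (5)-(10): under a yaw of
    # delta degrees, the projected horizontal eye spacing shrinks from
    # |xB - xA| to roughly |xB - xA| * cos(delta).  Requires xA != xB.
    d0 = abs(xB - xA)
    d1 = abs(xB2 - xA2)
    ratio = max(-1.0, min(1.0, d1 / d0))
    return math.degrees(math.acos(ratio))

def weighted_mean_face(frames, angles):
    # frames: equal-sized 2-D grayscale images I_j(x, y)
    # angles: swing angle delta_j of each frame, in degrees
    # Weight choice omega_j proportional to cos(delta_j), normalized to
    # sum to 1, is an assumption in place of the patent's omega_k formula.
    ws = [math.cos(math.radians(a)) for a in angles]
    total = sum(ws)
    ws = [w / total for w in ws]
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(w * f[r][c] for w, f in zip(ws, frames))
             for c in range(cols)] for r in range(rows)]
```

In the full method one such weighted mean is built per pitch level (looking down, straight, up), and the results are stacked into the weighted-mean-face matrix.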
For the recognized facial image, outlined with a rectangle, let h denote the distance from the depth camera to the object surface, N the measured number of pixels of the face region at the image center, and S the real area of the object; let M denote the number of pixels in the entire field of view of the depth camera, S1 the area of the entire field of view, and α and β the horizontal and vertical fields of view of the depth camera. At the distance h, the ratio of the measured pixel count N of the face region to the real object size S equals the ratio of the total pixel count M in the field of view to the total field-of-view area S1, i.e.
N / S = M / S1    (13)
where N is the number of pixels of the face region counted from the image, and S1 can be found from the field of view of the depth camera by geometry, as shown below:
S1 = 2h·tan(α/2) × 2h·tan(β/2)    (14)
Using this principle, for an image acquisition system that has been set up, the face region area S can be found when S1, M and N are known:
S = S1·N / M    (15)
To determine the ROI region area, the present invention draws parallel lines across the region from 45% to 75% of the way down from the top edge of the recognized facial image, takes 30% of the total image height as the ROI height and 70% of the recognized image width as the ROI width, and obtains an ROI region containing only the nose and the left and right cheeks.
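Under the geometry stated above, the face-area and ROI computations can be sketched as follows. The function names are ours, and the field-of-view formula assumes the rectangular frustum of equation (14):

```python
import math

def fov_area(h, alpha_deg, beta_deg):
    # Equation (14): area S1 covered by the depth camera's full field of
    # view (horizontal alpha, vertical beta) at distance h.
    return (2 * h * math.tan(math.radians(alpha_deg) / 2)) * \
           (2 * h * math.tan(math.radians(beta_deg) / 2))

def face_area(h, N, M, alpha_deg, beta_deg):
    # Equations (13) and (15): from N / S = M / S1 follows S = S1 * N / M.
    return fov_area(h, alpha_deg, beta_deg) * N / M

def roi_from_face(x, y, w, hh):
    # The 45%-75% band from the top edge: ROI height is 30% of the face
    # height, ROI width is 70% of the face width, horizontally centred,
    # so that only the nose and the left and right cheeks are covered.
    roi_h = int(0.30 * hh)
    roi_w = int(0.70 * w)
    return (x + (w - roi_w) // 2, y + int(0.45 * hh), roi_w, roi_h)
```

For example, a 100 × 100 face box at the origin yields the ROI box (15, 45, 70, 30).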
Preferably, the heart rate calculation of step 6) using the IPPG technique proceeds as follows: extract the source signal from the ROI region by splitting the ROI image into the three channels R, G, B and taking the G channel, which has the least noise, as the source-signal channel; then apply spatial pixel averaging to the separated G-channel image:
Z(k) = (1/(g·d)) Σi=1..g Σj=1..d zi,j(k)    (16)
where k is the image frame number, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, zi,j(k) is the color intensity value of pixel (i, j) in the G channel, and g and d are the height and width of the image, respectively.
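Equation (16) amounts to producing one number per frame, the spatial mean of the G channel over the ROI; a minimal sketch:

```python
def spatial_average(g_channel):
    """Equation (16): average the G-channel intensities of one ROI frame
    into a single sample Z(k) of the one-dimensional source signal.
    g_channel is a g x d 2-D list of intensity values."""
    g = len(g_channel)
    d = len(g_channel[0])
    return sum(sum(row) for row in g_channel) / (g * d)
```

Applying this to every frame of the sequence yields the time series Z(1), ..., Z(K) that is then denoised and spectrum-analyzed.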
The signal is denoised with the empirical mode decomposition (EMD) method, whose steps are as follows:
(1) Take the envelope mean of the original signal u(t) to obtain the signal m1(t);
(2) Compute the first-order residue hp1(t) = u(t) - m1(t) and check whether hp1(t) satisfies the IMF conditions; if not, return to step (1), using hp1(t) as the original signal for the repeated sifting, that is:
hp2(t) = hp1(t) - m2(t)    (17)
The sifting is repeated k times:
hpk(t) = hpk-1(t) - mk(t)    (18)
until hpk(t) satisfies the IMF conditions, yielding the first IMF component IMF1, i.e.
IMF1 = hpk(t)    (19)
(3) Subtracting IMF1 from the original signal u(t) gives the residue r1(t), i.e.
r1(t) = u(t) - IMF1    (20)
(4) Let u1(t) = r1(t) and take u1(t) as the new original signal; repeating the above steps gives the second IMF component IMF2, and so on, n times in total;
(5) When the n-th residue rn(t) has become a monotonic function from which no further IMF can be decomposed, the entire EMD decomposition is complete, and the original signal u(t) can be expressed as the combination of n IMF components and a mean-trend component rn(t), that is:
u(t) = Σi=1..n IMFi + rn(t)    (21)
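The sifting loop above can be sketched as follows. A true EMD computes m(t) as the mean of cubic-spline envelopes through the local extrema; to keep this sketch dependency-free, a centred moving average stands in for that envelope mean, and a fixed number of sifting passes replaces the IMF condition test. Both substitutions are simplifying assumptions. The reconstruction identity of equation (21), u = ΣIMF + rn, holds by construction:

```python
def sift_mean(x, w=5):
    # Local-mean operator standing in for the envelope mean of true EMD
    # (which needs cubic-spline interpolation of the extrema); a centred
    # moving average of half-width w keeps the sketch dependency-free.
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def emd(u, n_imfs=3, n_sift=10):
    # Simplified EMD: repeatedly sift out oscillatory components (IMFs),
    # steps (1)-(4), leaving a smooth trend residual r_n, so that
    # u = sum(IMFs) + r_n as in equation (21).
    imfs, r = [], list(u)
    for _ in range(n_imfs):
        hp = list(r)
        for _ in range(n_sift):          # fixed sift count in place of
            m = sift_mean(hp)            # the IMF condition check
            hp = [a - b for a, b in zip(hp, m)]
        imfs.append(hp)
        r = [a - b for a, b in zip(r, hp)]  # step (3)/(4): peel off the IMF
    return imfs, r
```

The denoising then keeps only the IMFs whose dominant frequency falls in the heartbeat band before the spectral analysis.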
Energy spectrum analysis is then performed with the ARMA model on the EMD-decomposed components whose frequency lies in the heartbeat band of 0.75-2.0 Hz, corresponding to the normal human heart rate range of 45-120 beats/min. The frequency at the highest energy point is the heartbeat frequency fh, and the heart rate is:
XLV = 60fh    (22)
By adopting the above technical scheme, the invention further improves the accuracy of image photoplethysmography (IPPG) heart rate estimation. The invention proposes a non-contact heart rate estimation method fusing image photoplethysmography (IPPG) and depth information, using an RGBD camera to measure images in real time. By adding depth information and fusing it into the IPPG signal extraction, an adaptive ROI extraction method is designed, reducing the interference to the IPPG source signal from noise introduced by an ill-chosen ROI region. At the same time, considering that head turning of the subject causes up-down pitch changes and left-right swing changes of the facial image, the weighted averaging method reduces the motion-artifact interference caused by the subject's head movement and improves the accuracy of image photoplethysmography (IPPG) heart rate estimation. Compared with conventional methods, the heart rate acquisition method proposed by the present invention uses the RGBD camera to acquire facial image information and depth information without contact; no electrode patches are needed during measurement, the method is more convenient to use, it is not limited by ambient light or affected by motion artifacts, and the accuracy of heart rate measurement is improved.
Brief description of the drawings
The present invention is further elaborated below with reference to the accompanying drawings:
Fig. 1 is a flow diagram of the non-contact heart rate estimation method of the present invention fusing the IPPG technique and depth information;
Fig. 2 is a flow diagram of the face detection algorithm of the present invention;
Fig. 3 is a flow diagram of the face tracking algorithm of the present invention;
Fig. 4 is a cross-sectional view of the face image of the present invention;
Fig. 5 is the relation table between the motion-change range partition of the face of the present invention and the corresponding weighted-mean faces;
Fig. 6 is the field-of-view diagram of the depth camera of the present invention.
Specific embodiments
As shown in Figs. 1-6, the non-contact heart rate estimation method of the present invention, fusing the IPPG technique and depth information against noise interference, comprises the following steps:
1) information collection: the facial image of the subject and its corresponding depth information are acquired by an RGBD camera;
2) facial image detection: face detection and localization are performed on the collected facial image by a face detection algorithm using template matching, forming a monitor region;
3) face tracking: the monitor regions corresponding to the continuously acquired facial images are used by a face tracking algorithm to track the movement of the face, yielding multiple facial images with motion change;
4) motion artifact elimination: the motion artifact interference in the obtained facial images with motion change is eliminated with a weighted-mean method, realizing dynamic face recognition;
5) ROI extraction: for the facial image after face recognition, the depth information is used to adaptively select the ROI region;
6) heart rate calculation using IPPG technology: the source signal is extracted from the ROI region using IPPG technology, denoised with the empirical mode decomposition (EMD) method, and the heart rate of the subject is obtained by energy spectrum analysis using an ARMA model.
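The six steps above can be sketched end-to-end. This is an illustrative stand-in, not the patent's implementation: face detection and ROI selection are stubbed with fixed crops, and the spectral analysis uses a plain FFT periodogram in place of the ARMA model; all function names are hypothetical.

```python
import numpy as np

def detect_face(frame):
    """Step 2/3 stand-in: pretend the tracked face box is the centre half."""
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)   # (x, y, width, height)

def extract_roi(frame, box):
    """Step 5 stand-in: a fixed crop inside the face box."""
    x, y, w, h = box
    return frame[y + h // 3: y + 2 * h // 3, x + w // 4: x + 3 * w // 4]

def estimate_heart_rate(frames, fps):
    """Steps 4-6 stand-in: spatially average each ROI to one sample per
    frame, then take the spectral peak in the 0.75-2.0 Hz heartbeat band
    (45-120 beats/min) and convert it to beats per minute."""
    z = np.array([extract_roi(f, detect_face(f)).mean() for f in frames])
    z = z - z.mean()
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fps)
    power = np.abs(np.fft.rfft(z)) ** 2
    band = (freqs >= 0.75) & (freqs <= 2.0)
    fh = freqs[band][np.argmax(power[band])]  # heartbeat frequency
    return 60.0 * fh
```

Feeding the pipeline synthetic frames whose brightness oscillates at 1.2 Hz should return roughly 72 beats/min.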
Preferably, the RGBD camera described in step 1) includes an RGB camera and a depth camera; the RGB camera is used to capture the RGB image of the body and the depth camera is used to capture the depth image of the face. After the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera and the RGB image captured by the RGB camera coincide.
As shown in Fig. 2, the method of facial image detection in step 2) is as follows: taking the collected facial image as the input image, rectangular regions of all possible scales and positions in the input image are searched as candidate windows to be detected. Each candidate window is processed in the following steps: coarse screening is first performed by eye-template matching; then mean-square-deviation standardization is applied to the image in the window so that the intensity distribution of the face region is normalized, eliminating the influence of illumination variation; face-template matching is then performed, and if the match error does not exceed the set threshold, the window is output as a face candidate.
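A minimal sketch of this coarse-to-fine matching step, under assumptions the text does not fix: matching is scored here as the mean squared difference between mean-square-deviation-standardized windows (so "not above the threshold" means a small enough error), and the template, window size and threshold are purely illustrative.

```python
import numpy as np

def standardize(window):
    """Mean-square-deviation standardization: zero mean and unit variance,
    normalizing the intensity distribution against illumination change."""
    w = np.asarray(window, dtype=float)
    s = w.std()
    return (w - w.mean()) / s if s > 0 else w - w.mean()

def match_error(window, template):
    """Mean squared difference between the standardized window and template."""
    return float(np.mean((standardize(window) - standardize(template)) ** 2))

def detect_candidates(image, template, threshold=0.01):
    """Slide the template over every position; windows whose match error
    does not exceed the set threshold are output as face candidates."""
    th, tw = template.shape
    hits = []
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            if match_error(image[r:r + th, c:c + tw], template) <= threshold:
                hits.append((r, c))
    return hits
```

With a distinctive template planted at one location in an otherwise flat image, only that location survives the threshold.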
As shown in Fig. 3, the method of face tracking in step 3) is as follows: assuming that at most one face appears in each frame of the sequence, the initial monitor region is set to R0. A face is first detected in the monitor region of each frame; if a face is detected, the new monitor region in the next frame is calculated from its localization result according to the formula for solving the monitor region, and each frame is processed in this way. If detection fails, the monitor region is kept unchanged and detection continues in subsequent frames, preventing tracking failure caused by an occasional missed detection. When the number of missed frames exceeds a certain amount, i.e., the missed detection times out, the face is considered to have disappeared, and detection restarts in the initial monitor region to find a new face that may appear in subsequent frames;
The monitor region refers to the possible position and scale range of a face in a certain frame. Suppose the monitor region is expressed as a sextuple LR = (x_min, x_max, y_min, y_max, ch_min, ch_max), where x_min, x_max are the range of the face centre in the x direction, y_min, y_max are the range of the face centre in the y direction, and ch_min, ch_max are the range of the face scale. The face region is approximated by a square and expressed as a triple L = (x_center, y_center, ch_face), whose elements describe in turn the centre of the face and its size;
The task of face tracking is to follow the movement of the face through consecutive images. The tracking algorithm predicts, from the existing face localization result, a monitor region for each face in the new frame and detects the face within the range it constrains. Since human movement is random (for example, the direction of motion may change suddenly), predicting the new monitor region from the existing face motion trajectory is difficult without prior knowledge of the person's behaviour. A relatively simple method is therefore adopted: only the tracking result of the previous frame is used, the maximum change of the face between two frames is limited according to the demands of the practical application, and the new monitor region is then solved. Although processing speed is somewhat affected, this guarantees the robustness of tracking. The face tracking problem targets frontal face image sequences, in which the movement of the face mainly takes two forms, translation and scale change. The formula for solving the monitor region therefore bounds the face position to within a in x and b in y of the previous localization and bounds the change of the face scale by c, where a and b define the maximum change of the face position between two frames, c defines the maximum change of the face scale, and all three are related to factors such as the movement speed of the face and the relative position of the face and the camera.
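The solving formula itself appears in the patent only as an image; the following is a plausible reconstruction from the stated meaning of the constants (a and b bound the face-centre displacement in x and y between two frames, c bounds the scale change), not the patent's exact formula.

```python
def update_monitor_region(x_center, y_center, ch_face, a, b, c):
    """Hypothetical reconstruction of the monitor-region update: from the
    previous localization L = (x_center, y_center, ch_face), the new region
    LR = (x_min, x_max, y_min, y_max, ch_min, ch_max) allows the centre to
    move at most a in x and b in y, and the scale to change by a factor c."""
    return (x_center - a, x_center + a,
            y_center - b, y_center + b,
            ch_face / c, ch_face * c)
```

For example, a face last seen at (100, 80) with scale 40 and bounds a = 10, b = 8, c = 1.25 yields the search region (90, 110, 72, 88, 32, 50).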
The variation of a face in three-dimensional space can be divided into translation and rotation about the horizontal, vertical and optical axes. Part of the translation and rotation can be effectively overcome by geometric normalization, but for the up-down pitch and left-right yaw of the facial image, geometric normalization often fails. To address the difficulty of face detection under movement, a face recognition algorithm based on the weighted mean face is used.
Preferably, the motion artifact elimination method of step 4) is as follows: according to the variation of the left-right yaw angle of the face, a weight calculation method for each face image is defined and a construction strategy for the weighted mean face is proposed; then, based on the variation of the pitch angle of the face, the images are divided into three levels, looking down, looking straight and looking up, and a weighted mean face is constructed at each level, forming a weighted-mean-face matrix and realizing dynamic face recognition. The collected facial images with motion change are stacked to form a mean face; such an image contains the integrated information of several moving faces and can reflect the face under different motion changes.
Suppose the given motion-changed grey face images are I_j(x, y). Then the weighted mean face (WMF) formula is as follows:
F(x, y) = Σ_{j=1}^{ZM} ω_j I_j(x, y)
where ZM is the total number of images of the same person under different motions, j indexes the j-th changed image, x, y are the two-dimensional image plane coordinates, and ω_j is the weight of the corresponding j-th facial image;
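A sketch of the weighted-mean-face computation, assuming (the text does not say) that the weights are normalized to sum to one:

```python
import numpy as np

def weighted_mean_face(images, weights):
    """Stack ZM motion-varied grey face images I_j and combine them with
    per-image weights w_j (normalized here so that they sum to 1)."""
    imgs = np.asarray(images, dtype=float)      # shape (ZM, height, width)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # normalization assumption
    return np.tensordot(w, imgs, axes=1)        # F(x, y) = sum_j w_j I_j(x, y)
```

With two images of constant grey 0 and 4 and weights 1 and 3, every pixel of the result is 3.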
As shown in Fig. 4, the determination of ω_j is related to the left-right yaw angle of the face. The calculation of ω_j is as follows: suppose the coordinates of the two eyes before the face yaws are A(x_A, y_A) and B(x_B, y_B), the coordinates after the face yaws to the right are A'(x_A', y_A') and B'(x_B', y_B'), the centre of the circle in the cross-sectional view of the face is O(x_o, y_o), and the yaw angle of the face to the right is ∠δ. The weight ω_k of the k-th facial image is a function of δ_k, the yaw angle of the k-th facial image, so solving for the weight ω_k reduces to calculating the yaw angle δ_k;
The eye coordinates A(x_A, y_A), B(x_B, y_B) and the eye coordinates A'(x_A', y_A'), B'(x_B', y_B') after the yaw determine the position. Suppose the radius of the circle on which the eyes rotate is R. The coordinates of the centre point O(x_o, y_o) are obtained by the following calculation: combining equations (6) and (7) and simplifying gives equation (8), and similarly equation (9); combining equations (8) and (9) gives the centre point O(x_o, y_o); substituting the result back into equation (6) gives the radius R, after which the yaw angle δ can be solved from equation (5). The yaw angle δ is thus a function of the eye coordinate values, i.e.
δ = f(x_A, x_B, x_A', x_B') (10)
Given the left-right yaw angle δ and the pitch angle γ, a table relating the divided ranges of facial motion change to their corresponding weighted mean faces is constructed, as shown in Fig. 5. For given motion-changed grey face images I_i(x, y), the weighted mean face for yaw angles in 0°-90° is defined as in formula (11), where ZM is the total number of faces within a certain pitch-angle and yaw-angle range, j indexes the j-th face, x, y are the two-dimensional image plane coordinates, p, q index the mean face of that variation range, and F'_{p,q} denotes the weighted mean face for yaw angles between -90° and 0°. The weighted mean faces thus calculated finally form the weighted-mean-face matrix of the face under multiple motion changes.
As shown in Fig. 6, for the facial image after face recognition, the face is outlined with a rectangle; h denotes the distance from the depth camera to the body surface, N denotes the measured number of pixels in the face region at the image centre, S denotes the actual area of the face, M denotes the number of pixels in the entire field of view of the depth camera, S1 denotes the area of the entire field of view of the depth camera, and α and β denote the horizontal and vertical fields of view of the depth camera, respectively.
At a given distance, the ratio of the measured face pixel count N to the actual face size S equals the ratio of the total pixel count M of the depth camera to the area S1 of the entire field of view at that distance, i.e. N/M = S/S1 (13), where N is the number of pixels of the face region obtained from the image statistics. S1 can be found from the field of view of the depth camera by geometry (14). Since the field of view S1 contains M pixels in total, for an image acquisition system that has been set up, the face region area S can be found when S1, M and N are known: S = S1·N/M (15).
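Formulas (13)-(15) are reproduced in the source only as images; from the prose, N/M = S/S1, with S1 following from the field of view by geometry. A standard pinhole reconstruction (an assumption consistent with the description, not the patent's verbatim formula) gives S1 = (2h·tan(α/2))·(2h·tan(β/2)):

```python
import math

def face_area(h, alpha_deg, beta_deg, N, M):
    """Estimate the real face area S from the depth h, the camera's
    horizontal and vertical fields of view alpha and beta, the face pixel
    count N and the total pixel count M, using N/M = S/S1 with
    S1 = (2*h*tan(alpha/2)) * (2*h*tan(beta/2))."""
    S1 = (2 * h * math.tan(math.radians(alpha_deg) / 2)) * \
         (2 * h * math.tan(math.radians(beta_deg) / 2))
    return S1 * N / M
```

For example, at h = 1 m with 90° x 90° fields of view, S1 = 4 m²; a face covering 5% of the pixels then has an estimated area of 0.2 m².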
For the determination of the ROI region, the present invention draws parallel lines through the region from 45% to 75% of the way down from the top edge of the facial image after face recognition, taking 30% of the total image height as the height of the ROI; the width of the ROI is taken as 70% of the recognized image width, giving an ROI region that contains only the nose and the left and right cheeks.
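The ROI rule above written as a crop, assuming (since the text does not say) that the 70% width band is centred horizontally:

```python
def crop_roi(face_image):
    """Adaptive ROI from the recognized face image: rows from 45% to 75% of
    the image height (30% of the height) and the central 70% of the width,
    covering only the nose and both cheeks."""
    h, w = len(face_image), len(face_image[0])
    top, bottom = int(0.45 * h), int(0.75 * h)
    left = int(0.15 * w)          # centring assumption: 15% margin each side
    return [row[left:left + int(0.70 * w)] for row in face_image[top:bottom]]
```

On a 100 x 100 face image this yields a 30 x 70 ROI.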
Preferably, the method of heart rate calculation using IPPG technology in step 6) is as follows: the source signal is extracted from the ROI region by splitting the ROI image into the three channels R, G and B; the G channel, which has the least noise, is taken as the source-signal channel, and spatial pixel averaging is applied to the separated G-channel image:
Z(k) = (1/(g·d)) Σ_{i=1}^{g} Σ_{j=1}^{d} z_{i,j}(k) (16)
where k is the image frame number, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, z_{i,j}(k) is the G-channel colour intensity value at pixel (i, j), and g and d are the height and width of the image, respectively;
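The spatial averaging of formula (16) is a one-liner per frame; this sketch assumes frames are height x width x 3 arrays in R, G, B channel order:

```python
import numpy as np

def g_channel_signal(frames):
    """For each frame k, average the g x d G-channel intensities of the ROI
    to a single sample Z(k), giving a one-dimensional source signal."""
    return np.array([f[:, :, 1].mean() for f in frames])  # channel index 1 = G
```

Two frames whose G planes are uniformly 10 and 20 produce the signal [10, 20].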
Signal denoising is carried out by the empirical mode decomposition (EMD) method, whose steps are as follows:
(1) take the local mean of the original signal u(t) to obtain the signal m1(t);
(2) calculate the first-order remainder hp1(t) = u(t) - m1(t) and check whether hp1(t) satisfies the IMF conditions; if not, return to step (1) with hp1(t) as the signal to be screened again, i.e.:
hp2(t) = hp1(t) - m2(t) (17)
and repeat the screening k times:
hp_k(t) = hp_{k-1}(t) - m_k(t) (18)
until hp_k(t) satisfies the IMF conditions, giving the first IMF component IMF1, i.e.
IMF1 = hp_k(t) (19)
(3) subtract IMF1 from the original signal u(t) to obtain the remainder r1(t), i.e.
r1(t) = u(t) - IMF1 (20)
(4) let u1(t) = r1(t); taking u1(t) as the new original signal, repeat the above steps to obtain the second IMF component IMF2, and so on n times;
(5) when the n-th remainder r_n(t) has become a monotonic function from which no further IMF can be extracted, the whole EMD decomposition is complete. The original signal u(t) can then be expressed as the combination of n IMF components and an average trend component r_n(t), i.e.:
u(t) = Σ_{i=1}^{n} IMF_i + r_n(t) (21)
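The sifting loop above can be sketched as follows. This is an illustrative simplification, not a faithful EMD: a genuine EMD computes the local mean m(t) as the mean of cubic-spline upper and lower extrema envelopes and tests the IMF conditions, whereas here a moving average stands in for the local mean and a fixed number of sifting passes for the IMF test. The reconstruction identity u(t) = Σ IMF_i + r_n(t) holds by construction.

```python
import numpy as np

def sift(x, passes=10):
    """Extract one IMF by repeated sifting: subtract a local mean m(t) from
    the running remainder each pass (moving average used as a stand-in for
    the extrema-envelope mean of a full EMD)."""
    h = x.copy()
    for _ in range(passes):
        m = np.convolve(h, np.ones(5) / 5, mode="same")  # local-mean proxy
        h = h - m
    return h

def emd_sketch(u, n_imfs=3):
    """Decompose u(t) into IMF components and a residual trend r_n(t);
    by construction u == sum(imfs) + residual."""
    imfs, residual = [], u.astype(float).copy()
    for _ in range(n_imfs):
        imf = sift(residual)
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual
```

Decomposing a 10 Hz oscillation riding on a linear trend and summing the parts recovers the original signal exactly.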
Energy spectrum analysis is performed with an ARMA model on the components of the EMD decomposition whose frequency lies in the heartbeat band of 0.75-2.0 Hz, corresponding to the normal human heart rate range of 45-120 beats/min. The frequency at the highest energy point is the heartbeat frequency f_h, and the heart rate is:
HR = 60 f_h (22).
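A sketch of this final spectral step. The patent specifies an ARMA-model energy spectrum; as a stand-in, this fits an all-pole (AR) model via the Yule-Walker equations and reads the peak frequency f_h off the 0.75-2.0 Hz band, so the model order and frequency grid below are illustrative, not the patent's exact estimator.

```python
import numpy as np

def heart_rate_ar(z, fs, order=2):
    """Fit an AR(order) model to the denoised source signal z by solving the
    Yule-Walker equations on the biased autocorrelation, evaluate the
    all-pole spectrum 1/|1 - sum_k a_k e^(-2*pi*i*f*k/fs)|^2 over the
    0.75-2.0 Hz heartbeat band, and return 60 * f_h in beats/min."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    n = len(z)
    r = np.array([z[:n - k] @ z[k:] for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    freqs = np.arange(0.75, 2.0, 0.005)
    k = np.arange(1, order + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, k) / fs) @ a) ** 2
    fh = freqs[np.argmin(denom)]                    # spectrum peak = denom minimum
    return 60.0 * fh
```

A clean 1.2 Hz sinusoid sampled at 30 frames/s should come out close to 72 beats/min.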
With the above technical scheme, in order to further improve the accuracy of image photoplethysmography (IPPG) heart rate estimation, the present invention proposes a non-contact heart rate estimation method fusing image photoplethysmography (IPPG) and depth information. The image can be measured in real time with an RGBD camera; by adding depth information and fusing it into the IPPG signal extraction, an adaptive ROI extraction method is designed, reducing the interference to the IPPG source signal from noise introduced by an inappropriately chosen ROI region. At the same time, considering that yaw of the subject causes up-down pitch variation and left-right yaw variation of the facial image, a weighted-mean method is used to reduce the interference of motion artifacts caused by the subject's head movement, improving the accuracy of image photoplethysmography (IPPG) heart rate estimation. Compared with conventional methods, the heart rate acquisition method proposed by the present invention uses an RGBD camera to collect facial image information and depth information without contact; no electrode patches need to be attached during heart rate measurement, so it is more convenient to use, is not restricted by ambient light or affected by motion artifacts, and improves the accuracy of heart rate measurement.
The above description shall not limit the protection scope of the present invention in any way.
Claims (7)
1. A non-contact heart rate estimation method fusing IPPG and depth information for noise resistance, characterized in that it comprises the following steps:
1) information collection: the facial image of the subject and its corresponding depth information are acquired by an RGBD camera;
2) facial image detection: face detection and localization are performed on the collected facial image by a face detection algorithm using template matching, forming a monitor region;
3) face tracking: the monitor regions corresponding to the continuously acquired facial images are used by a face tracking algorithm to track the movement of the face, yielding multiple facial images with motion change;
4) motion artifact elimination: the motion artifact interference in the obtained facial images with motion change is eliminated with a weighted-mean method, realizing dynamic face recognition;
5) ROI extraction: for the facial image after face recognition, the depth information is used to adaptively select the ROI region;
6) heart rate calculation using IPPG technology: the source signal is extracted from the ROI region using IPPG technology, denoised with the empirical mode decomposition (EMD) method, and the heart rate of the subject is obtained by energy spectrum analysis using an ARMA model.
2. The non-contact heart rate estimation method fusing IPPG and depth information for noise resistance according to claim 1, characterized in that the RGBD camera described in step 1) includes an RGB camera and a depth camera; the RGB camera is used to capture the RGB image of the body and the depth camera is used to capture the depth image of the face; after the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera and the RGB image captured by the RGB camera coincide.
3. The non-contact heart rate estimation method fusing IPPG and depth information for noise resistance according to claim 1, characterized in that the method of facial image detection in step 2) is as follows: taking the collected facial image as the input image, rectangular regions of all possible scales and positions in the input image are searched as candidate windows to be detected; each candidate window is processed in the following steps: coarse screening is first performed by eye-template matching; then mean-square-deviation standardization is applied to the image in the window so that the intensity distribution of the face region is normalized, eliminating the influence of illumination variation; face-template matching is then performed, and if the match error does not exceed the set threshold, the window is output as a face candidate.
4. The non-contact heart rate estimation method fusing IPPG and depth information for noise resistance according to claim 1, characterized in that the method of face tracking in step 3) is as follows: assuming that at most one face appears in each frame of the sequence, the initial monitor region is set to R0; a face is first detected in the monitor region of each frame; if a face is detected, the new monitor region in the next frame is calculated from its localization result according to the formula for solving the monitor region, and each frame is processed in this way; if detection fails, the monitor region is kept unchanged and detection continues in subsequent frames, preventing tracking failure caused by an occasional missed detection; when the number of missed frames exceeds a certain amount, i.e., the missed detection times out, the face is considered to have disappeared, and detection restarts in the initial monitor region to find a new face that may appear in subsequent frames;
Suppose the monitor region is expressed as a sextuple LR = (x_min, x_max, y_min, y_max, ch_min, ch_max), where x_min, x_max are the range of the face centre in the x direction, y_min, y_max are the range of the face centre in the y direction, and ch_min, ch_max are the range of the face scale; the face region is approximated by a square and expressed as a triple L = (x_center, y_center, ch_face), whose elements describe in turn the centre of the face and its size; the formula for solving the monitor region then bounds the face position and scale, where a and b respectively define the maximum change of the face position in x and y between two frames and c defines the maximum change of the face scale.
5. The non-contact heart rate estimation method fusing IPPG and depth information for noise resistance according to claim 1, characterized in that the motion artifact elimination method of step 4) is as follows: according to the variation of the left-right yaw angle of the face, a weight calculation method for each face image is defined and a construction strategy for the weighted mean face is proposed; then, based on the variation of the pitch angle of the face, the images are divided into three levels, looking down, looking straight and looking up, and a weighted mean face is constructed at each level, forming a weighted-mean-face matrix;
Suppose the given motion-changed grey face images are I_j(x, y); then the weighted mean face formula is
F(x, y) = Σ_{j=1}^{ZM} ω_j I_j(x, y)
where ZM is the total number of images of the same person under different motions, j indexes the j-th changed image, x, y are the two-dimensional image plane coordinates, and ω_j is the weight of the corresponding j-th facial image;
The determination of ω_j is related to the left-right yaw angle of the face; the calculation of ω_j is as follows: suppose the coordinates of the two eyes before the face yaws are A(x_A, y_A) and B(x_B, y_B), the coordinates after the face yaws to the right are A'(x_A', y_A') and B'(x_B', y_B'), the centre of the circle in the cross-sectional view of the face is O(x_o, y_o), and the yaw angle of the face to the right is ∠δ; the weight ω_k of the k-th facial image is a function of its yaw angle δ_k, so solving for the weight ω_k reduces to calculating the yaw angle δ_k;
The eye coordinates A(x_A, y_A), B(x_B, y_B) and the eye coordinates A'(x_A', y_A'), B'(x_B', y_B') after the yaw determine the position; suppose the radius of the circle on which the eyes rotate is R; the coordinates of the centre point O(x_o, y_o) are obtained by the following calculation: combining equations (6) and (7) and simplifying gives equation (8), and similarly equation (9); combining equations (8) and (9) gives the centre point O(x_o, y_o); substituting the result back into equation (6) gives the radius R, after which the yaw angle δ can be solved from equation (5); the yaw angle δ is a function of the eye coordinate values, i.e.
δ = f(x_A, x_B, x_A', x_B') (10)
Given the left-right yaw angle δ and the pitch angle γ, a table relating the divided ranges of facial motion change to their corresponding weighted mean faces is constructed; for given motion-changed grey face images I_i(x, y), the weighted mean face for yaw angles in 0°-90° is defined as in formula (11), where ZM is the total number of faces within a certain pitch-angle and yaw-angle range, j indexes the j-th face, x, y are the two-dimensional image plane coordinates, p, q index the mean face of that variation range, and F'_{p,q} denotes the weighted mean face for yaw angles between -90° and 0°; the weighted mean faces thus calculated finally form the weighted-mean-face matrix of the face under multiple motion changes.
6. The non-contact heart rate estimation method fusing IPPG and depth information for noise resistance according to claim 1, characterized in that the method of ROI extraction in step 5) is as follows: parallel lines are drawn through the region from 45% to 75% of the way down from the top edge of the facial image after face recognition, taking 30% of the total image height as the height of the ROI; the width of the ROI is taken as 70% of the recognized image width, giving an ROI region that contains only the nose and the left and right cheeks.
7. The non-contact heart rate estimation method fusing IPPG and depth information for noise resistance according to claim 1, characterized in that the method of heart rate calculation using IPPG technology in step 6) is as follows: the source signal is extracted from the ROI region by splitting the ROI image into the three channels R, G and B; the G channel, which has the least noise, is taken as the source-signal channel, and spatial pixel averaging is applied to the separated G-channel image:
Z(k) = (1/(g·d)) Σ_{i=1}^{g} Σ_{j=1}^{d} z_{i,j}(k) (16)
where k is the image frame number, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, z_{i,j}(k) is the G-channel colour intensity value at pixel (i, j), and g and d are the height and width of the image, respectively;
Signal denoising is carried out by the empirical mode decomposition (EMD) method, whose steps are as follows:
(1) take the local mean of the original signal u(t) to obtain the signal m1(t);
(2) calculate the first-order remainder hp1(t) = u(t) - m1(t) and check whether hp1(t) satisfies the IMF conditions; if not, return to step (1) with hp1(t) as the signal to be screened again, i.e.:
hp2(t) = hp1(t) - m2(t) (17)
and repeat the screening k times:
hp_k(t) = hp_{k-1}(t) - m_k(t) (18)
until hp_k(t) satisfies the IMF conditions, giving the first IMF component IMF1, i.e.
IMF1 = hp_k(t) (19)
(3) subtract IMF1 from the original signal u(t) to obtain the remainder r1(t), i.e.
r1(t) = u(t) - IMF1 (20)
(4) let u1(t) = r1(t); taking u1(t) as the new original signal, repeat the above steps to obtain the second IMF component IMF2, and so on n times;
(5) when the n-th remainder r_n(t) has become a monotonic function from which no further IMF can be extracted, the whole EMD decomposition is complete; the original signal u(t) can then be expressed as the combination of n IMF components and an average trend component r_n(t), i.e.:
u(t) = Σ_{i=1}^{n} IMF_i + r_n(t) (21)
Energy spectrum analysis is performed with an ARMA model on the components of the EMD decomposition whose frequency lies in the heartbeat band of 0.75-2.0 Hz, corresponding to the normal human heart rate range of 45-120 beats/min; the frequency at the highest energy point is the heartbeat frequency f_h, and the heart rate is:
HR = 60 f_h (22).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910462756.0A CN110276271A (en) | 2019-05-30 | 2019-05-30 | Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110276271A true CN110276271A (en) | 2019-09-24 |
Family
ID=67961224
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910462756.0A Pending CN110276271A (en) | 2019-05-30 | 2019-05-30 | Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110276271A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678780A (en) * | 2016-01-14 | 2016-06-15 | 合肥工业大学智能制造技术研究院 | Video heart rate detection method removing interference of ambient light variation |
CN106778695A (en) * | 2017-01-19 | 2017-05-31 | 北京理工大学 | A kind of many people's examing heartbeat fastly methods based on video |
CN106886216A (en) * | 2017-01-16 | 2017-06-23 | 深圳前海勇艺达机器人有限公司 | Robot automatic tracking method and system based on RGBD Face datections |
US9750420B1 (en) * | 2014-12-10 | 2017-09-05 | Amazon Technologies, Inc. | Facial feature selection for heart rate detection |
CN107358220A (en) * | 2017-07-31 | 2017-11-17 | 江西中医药大学 | A kind of human heart rate and the contactless measurement of breathing |
Non-Patent Citations (3)
Title |
---|
Liu Yi et al.: "Non-contact heart rate measurement method based on face video", Nanotechnology and Precision Engineering * |
Liang Luhong et al.: "Face tracking algorithm based on face detection", Computer Engineering and Applications * |
Zou Guofeng et al.: "Multi-pose face recognition based on the weighted mean face", Application Research of Computers * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110866498B (en) * | 2019-11-15 | 2021-07-13 | 北京华宇信息技术有限公司 | Heart rate monitoring method |
CN110866498A (en) * | 2019-11-15 | 2020-03-06 | 北京华宇信息技术有限公司 | Portable heart rate monitoring device and heart rate monitoring method thereof |
WO2021184620A1 (en) * | 2020-03-19 | 2021-09-23 | 南京昊眼晶睛智能科技有限公司 | Camera-based non-contact heart rate and body temperature measurement method |
CN112155540A (en) * | 2020-09-30 | 2021-01-01 | 重庆仙桃前沿消费行为大数据有限公司 | Acquisition imaging system and method for complex object |
WO2022101785A1 (en) * | 2020-11-14 | 2022-05-19 | Facense Ltd. | Improvements in acquisition and analysis of imaging photoplethysmogram signals |
CN112733650A (en) * | 2020-12-29 | 2021-04-30 | 深圳云天励飞技术股份有限公司 | Target face detection method and device, terminal equipment and storage medium |
CN112733650B (en) * | 2020-12-29 | 2024-05-07 | 深圳云天励飞技术股份有限公司 | Target face detection method and device, terminal equipment and storage medium |
CN114120770A (en) * | 2021-03-24 | 2022-03-01 | 张银合 | Barrier-free communication method for hearing-impaired people |
CN113876311B (en) * | 2021-09-02 | 2023-09-15 | 天津大学 | Non-contact type multi-player heart rate efficient extraction device capable of adaptively selecting |
CN113876311A (en) * | 2021-09-02 | 2022-01-04 | 天津大学 | Self-adaptively-selected non-contact multi-player heart rate efficient extraction device |
CN114387479A (en) * | 2022-01-07 | 2022-04-22 | 河北工业大学 | Non-contact heart rate measurement method and system based on face video |
CN114431849A (en) * | 2022-01-10 | 2022-05-06 | 厦门大学 | Aquatic animal heart rate detection method based on video image processing |
CN114431849B (en) * | 2022-01-10 | 2023-08-11 | 厦门大学 | Aquatic animal heart rate detection method based on video image processing |
CN114403838A (en) * | 2022-01-24 | 2022-04-29 | 佛山科学技术学院 | Portable raspberry pi-based remote heart rate detection device and method |
CN118411751A (en) * | 2024-07-03 | 2024-07-30 | 宁波星巡智能科技有限公司 | Heart rate measurement stability augmentation method, device and equipment based on facial image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190924 |