
CN110276271A - Non-contact heart-rate estimation method fusing IPPG and depth information against motion-artifact noise - Google Patents

Non-contact heart-rate estimation method fusing IPPG and depth information against motion-artifact noise

Info

Publication number
CN110276271A
CN110276271A
Authority
CN
China
Prior art keywords
face
image
heart rate
ippg
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910462756.0A
Other languages
Chinese (zh)
Inventor
罗堪
刘肖
李建兴
都可钦
邹复民
马莹
陈炜
黄炳法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian University of Technology
Original Assignee
Fujian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian University of Technology filed Critical Fujian University of Technology
Priority to CN201910462756.0A priority Critical patent/CN110276271A/en
Publication of CN110276271A publication Critical patent/CN110276271A/en
Pending legal-status Critical Current


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 - Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/024 - Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/02416 - Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/172 - Classification, e.g. identification
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 - Preprocessing
    • G06F 2218/04 - Denoising
    • G06F 2218/12 - Classification; Matching
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30048 - Heart; Cardiac
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Cardiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Physiology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention relates to a non-contact heart-rate estimation method that fuses IPPG with depth information to resist motion-artifact noise, comprising the following steps: 1) acquire the subject's face images and the corresponding depth information with an RGB-D camera; 2) apply a template-matching face-detection algorithm to the acquired face images to detect and locate the face, forming a monitoring region; 3) apply a face-tracking algorithm to the monitoring regions of the successively acquired face images to follow the motion of the face, obtaining a sequence of face images with motion variation; 4) suppress the motion-artifact interference in these motion-varying face images with a weighted-mean method, realizing dynamic face recognition; 5) adaptively select the ROI on the recognized face image using the depth information; 6) extract the source signal from the ROI with the IPPG technique, denoise it with empirical mode decomposition (EMD), and perform an energy-spectrum analysis with an ARMA model to obtain the subject's heart-rate value.

Description

Non-contact heart-rate estimation method fusing IPPG and depth information against motion-artifact noise
Technical field
The present invention relates to the field of heart-rate estimation, and in particular to a non-contact heart-rate estimation method that fuses image photoplethysmography (IPPG) with depth information to resist motion-artifact interference.
Background art
According to World Health Organization data, more than 17.5 million people die of cardiovascular disease each year, making it the leading non-infectious cause of death. Heart rate is an important physiological parameter reflecting the state of the heart and a key indicator for the prevention and clinical diagnosis of cardiovascular disease.
Among traditional heart-rate measurement methods, the electrocardiogram (ECG) is the clinical standard. It requires medical staff to place electrodes at the correct lead positions; the procedure is cumbersome and requires professional operation, and during use a conductive medium must be applied between electrode and skin, or pressure applied to fix the electrodes, which causes considerable discomfort to the subject, while the tangle of leads is inconvenient to handle. Existing heartbeat-detection devices and associated signal-processing techniques mainly fall into the following schemes:
(1) Adhesive-patch heart-rate monitors: electrode patches are stuck to specific body sites and the heart rate is extracted from the body-surface signal. The adhesive electrodes inevitably irritate the subject during detection, the test environment is restricted, long-term real-time monitoring is impossible, and the procedure is complex and inconvenient.
(2) Capacitive heart-rate measurement: based on the capacitive-coupling principle, it achieves non-contact ECG detection. Capacitive non-contact ECG sensors have good measurement performance and high reliability and can be used for long-term ECG recording. However, their heart-rate readings are easily corrupted by motion artifacts, so the results obtained this way are often unsatisfactory.
(3) Photoelectric pulse measurement: exploits the fact that blood is far more opaque than ordinary tissue, through which light penetrates tens of times more readily, to detect the pulse signal in the blood. Whether a reflective or a transmissive arrangement is used, this method is likewise susceptible to motion artifacts, and to reduce ambient-light noise the sensor must be pressed tightly against the skin.
(4) Photoplethysmographic (PPG) heart-rate measurement: the heart rate is computed from the absorption of light by skin and tissue, recorded after the light passes through or is reflected from the skin. Because it does not touch the subject directly it counts as non-contact measurement and offers a better user experience. However, it is limited by ambient light: once the illumination changes or is too weak, the heart-rate measurement becomes inaccurate; the subject must sit in a fixed position, and any change of position also degrades the result; under motion, the ROI selection likewise becomes inaccurate; and removing the noise generated by relative motion between the subject and the image-acquisition device remains a technical difficulty.
(5) Image-based pulse measurement: the distortion of blood vessels caused by the heartbeat can be observed through the resulting deformation of the skin surrounding the vessels. However, this measurement is easily disturbed by motion artifacts and vibration and is affected by the lighting conditions of the subject.
(6) On the signal-processing side, the main techniques at present are finite impulse response (FIR) filters, band-pass filters, and moving-average filters.
In recent years, with advances in electronics and measurement methods, contactless heart-rate measurement has been proposed and realized. Photoplethysmography is a non-invasive method that uses a photoelectric sensor to detect blood-volume changes in living tissue. Because different human tissues absorb light differently, the light intensity received after transmission or reflection varies periodically; the regular contraction and relaxation of the heart is the main cause of this periodic variation, so these fluctuating signals are directly related to the heartbeat. However, the signals acquired by the above measurement methods are easily disturbed by motion artifacts, leading to low accuracy.
Image photoplethysmography (image photoplethysmography, IPPG) has been developed in recent years from traditional photoplethysmography; it enables "remote" (greater than 0.5 m) contactless physiological-signal measurement. IPPG is non-contact, low-cost, and easy to operate, and its non-contact nature in particular makes clinical and routine testing feasible under certain specific conditions. Existing IPPG approaches, however, generally use a fixed ROI and require the subject to sit still in front of the image-acquisition device; they do not account for subject movement, and therefore cannot overcome the noise introduced by a fixed ROI or the inaccurate heart-rate estimates caused by the subject moving during measurement.
Summary of the invention
In view of the above deficiencies of the prior art, it is an object of the present invention to provide a well-designed, highly accurate, motion-noise-tolerant non-contact heart-rate estimation method fusing image photoplethysmography (IPPG) and depth information.
To achieve the above object, the invention adopts the following technical scheme:
The non-contact heart-rate estimation method fusing IPPG and depth information against motion-artifact noise comprises the following steps:
1) Information acquisition: acquire the subject's face images and the corresponding depth information with an RGB-D camera;
2) Face-image detection: apply a template-matching face-detection algorithm to the acquired face images to detect and locate the face, forming a monitoring region;
3) Face tracking: apply a face-tracking algorithm to the monitoring regions of the successively acquired face images to follow the motion of the face, obtaining a sequence of face images with motion variation;
4) Motion-artifact elimination: suppress the motion-artifact interference in the motion-varying face images obtained above with a weighted-mean method, realizing dynamic face recognition;
5) ROI extraction: adaptively select the ROI on the recognized face image using the depth information;
6) Heart-rate computation with the IPPG technique: extract the source signal from the ROI with the IPPG technique, denoise it with empirical mode decomposition (EMD), and perform an energy-spectrum analysis with an ARMA model to obtain the subject's heart-rate value.
Preferably, the RGB-D camera of step 1) comprises an RGB camera and a depth camera; the RGB camera captures the RGB image of the body and the depth camera captures the depth image of the face. After the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera and the RGB image captured by the RGB camera coincide.
Preferably, the face-image detection method of step 2) is as follows: take the acquired face image as the input image and search it over all plausible scales and positions for rectangular regions, i.e. candidate windows to be examined. Each candidate window is processed as follows: first a coarse screening by eye-template matching; then mean/variance normalization of the image inside the window, so that the intensity distribution of the face region is standardized and the influence of illumination changes is removed; then face-template matching; windows whose matching error does not exceed the set threshold are output as candidate faces.
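As an illustration, the window screening described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the function names are invented, the threshold value is arbitrary, and a real detector would also scan over multiple scales and use a separate eye template for the coarse stage.

```python
import numpy as np

def normalize(window):
    """Mean/variance (z-score) normalization, which suppresses illumination changes."""
    w = window.astype(float)
    std = w.std()
    return (w - w.mean()) / std if std > 0 else w - w.mean()

def match_score(window, template):
    """Mean squared difference between normalized window and template (lower = better)."""
    return float(np.mean((normalize(window) - normalize(template)) ** 2))

def detect_candidates(image, template, threshold=0.5):
    """Slide the template over the image; windows whose matching error stays below
    the threshold are returned as candidate face positions (top-left corners)."""
    th, tw = template.shape
    hits = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            if match_score(image[y:y + th, x:x + tw], template) < threshold:
                hits.append((y, x))
    return hits
```

Because both window and template are normalized before comparison, a brightened or dimmed copy of the face still scores as a perfect match, which is exactly the illumination invariance the normalization step is meant to provide.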
Preferably, the face-tracking method of step 3) is as follows: assume at most one face appears in each frame of the sequence and set the initial monitoring region to R0. A face is first detected inside the monitoring region of each frame; if a face is detected, the new monitoring region in the next frame is computed from its localization result by the monitoring-region formula, and each frame is processed in this way. If detection fails, the monitoring region is kept unchanged and detection continues in subsequent frames, to prevent losing the track because of an occasional missed detection. When the number of missed frames exceeds a given limit, i.e. the misses time out, the face is considered to have disappeared, and detection of any newly appearing face restarts in the initial monitoring region in subsequent frames.
Here the monitoring region is represented by a six-tuple LR = (x_min, x_max, y_min, y_max, ch_min, ch_max), where x_min, x_max bound the face-center position in the x direction, y_min, y_max bound it in the y direction, and ch_min, ch_max bound the face scale. The face region is approximated by a square and denoted by a triple L = (x_center, y_center, ch_face), whose elements give in turn the center and the size of the face. The formula for the new monitoring region is then:

LR = (x_center − a, x_center + a, y_center − b, y_center + b, ch_face − c, ch_face + c)
where a and b bound the change of the face's x and y position between two frames, and c denotes the maximum variation of the face scale.
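A minimal sketch of the monitoring-region update under these definitions; the bound values a, b, c are illustrative and the helper names are invented:

```python
def next_monitor_region(face, a=20, b=20, c=10):
    """Compute the next frame's monitoring region LR from the current face triple
    L = (x_center, y_center, ch_face). a, b bound the per-frame displacement in
    x and y; c bounds the per-frame change of the face scale."""
    x, y, ch = face
    return (x - a, x + a, y - b, y + b, ch - c, ch + c)

def inside(face, region):
    """Check whether a newly detected face falls inside a monitoring region."""
    x, y, ch = face
    x_min, x_max, y_min, y_max, ch_min, ch_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max and ch_min <= ch <= ch_max
```

A tracker would call `next_monitor_region` on every successful detection and restrict the next frame's search to the returned bounds, which is what keeps the per-frame search cheap and robust.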
Preferably, the motion-artifact elimination method of step 4) is as follows: according to the variation of the left-right swing angle of the face, define the weight of every varied face image and construct the weighted-average face; then, based on the pitch-angle variation of the face, divide the images into three levels (looking down, looking straight ahead, and looking up), construct a weighted-mean face within each level, and assemble them into the weighted-mean-face matrix.
Suppose the motion-varying grayscale face images are I_j(x, y). The weighted-mean face is then given by:

F(x, y) = Σ_{j=1}^{ZM} ω_j · I_j(x, y)
where ZM is the total number of motion images of the same person, j indexes the j-th varied image, x, y are the two-dimensional image-plane coordinates, and ω_j is the weight of the corresponding j-th face image.
The weight ω_j is determined by the left-right swing angle of the face. Suppose the eye coordinates before the swing are A(x_A, y_A) and B(x_B, y_B); after a rightward swing the eye coordinates are A′(x_A′, y_A′) and B′(x_B′, y_B′); the center of the circle in the cross-section of the face is O(x_o, y_o); and the rightward swing angle of the face is ∠δ. The weight ω_K of the K-th face image is then computed as follows:
There, δ_k denotes the left-right swing angle of the corresponding k-th face image, so solving for the weight ω_k reduces to computing the swing angle δ_k.
The eye coordinates A(x_A, y_A), B(x_B, y_B) and the post-swing eye coordinates A′(x_A′, y_A′), B′(x_B′, y_B′) fix the positions. Let R be the radius of the circle along which the eyes rotate; then
the center O(x_o, y_o) is obtained by the following derivation: writing the circle equation for the eye positions before and after the swing gives equations (6) and (7); combining the two and simplifying yields equation (8), and similarly equation (9). Solving equations (8) and (9) simultaneously gives the center O(x_o, y_o); substituting the result into equation (6) yields the radius R, after which the swing angle δ can be solved from equation (5). The swing angle δ is thus a function of the eye coordinate values, i.e.
δ = f(x_A, x_B, x_A′, x_B′)    (10)
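One concrete way the mapping δ = f(x_A, x_B, x_A′, x_B′) can be realized, under the simplifying assumption (ours, not necessarily the patent's exact geometry) that the eyes rotate about a vertical axis midway between them, so the projected inter-eye distance shrinks by the factor cos δ:

```python
import math

def swing_angle(xA, xB, xA2, xB2):
    """Estimate the left-right swing angle δ (degrees) from the eye x-coordinates
    before (xA, xB) and after (xA2, xB2) the swing. Under rotation about a
    vertical axis midway between the eyes, the projected inter-eye distance
    shrinks by cos δ, so δ = arccos(|xB2 - xA2| / |xB - xA|)."""
    d0 = abs(xB - xA)   # inter-eye distance before the swing
    d1 = abs(xB2 - xA2) # projected inter-eye distance after the swing
    ratio = max(-1.0, min(1.0, d1 / d0))  # clamp against rounding noise
    return math.degrees(math.acos(ratio))
```

The clamp on the ratio guards against floating-point values marginally above 1, which would otherwise make `acos` raise a domain error.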
A table relates the left-right swing angle δ and the pitch angle γ to the partition of the face motion range and the corresponding weighted-mean faces. Given the motion-varying grayscale face images I_i(x, y), the weighted-mean face for left-right swing angles in 0°-90° is defined by formula (11):
In formula (11), ZM represents the total number of faces within a given pitch-angle and swing-angle range, j indexes the j-th face, x, y are the two-dimensional image-plane coordinates, p, q index the mean face of that variation range, and F′_p,q denotes the weighted-mean face for swing angles between −90° and 0°.
The weighted-mean faces so computed finally form the weighted-mean-face matrix covering the multiple motion-variation cases.
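A sketch of the weighted-mean-face construction. The choice of weights here (cosine of the swing angle, renormalized to sum to 1) is an illustrative assumption, not necessarily the patent's weight formula:

```python
import numpy as np

def weighted_mean_face(images, angles):
    """Construct the weighted-mean face F(x, y) = sum_j w_j * I_j(x, y).
    `images` is a sequence of same-shaped grayscale arrays I_j; `angles` gives
    each image's swing angle δ_j in degrees. Weights decrease with δ_j via
    cos δ_j and are renormalized to sum to 1 (an illustrative choice)."""
    w = np.cos(np.radians(np.asarray(angles, dtype=float)))
    w /= w.sum()
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return np.tensordot(w, stack, axes=1)  # weighted sum over the image axis
```

With equal angles the result reduces to the plain mean face, so the ordinary averaging described in the text is recovered as a special case.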
For the recognized face image, outlined with a rectangle, let h denote the distance from the depth camera to the object surface, N the measured number of pixels of the face region at the image center, and S the real area of the object; let M denote the number of pixels in the entire field of view of the depth camera, S_1 the area of the entire field of view, and α and β the horizontal and vertical fields of view of the depth camera. At distance h, the ratio of the measured pixel count N of the object to its real size S equals the ratio of the total pixel count M in the field of view to the total field-of-view area S_1, i.e.

N / S = M / S_1
where N is the pixel count of the face region obtained from image statistics, and S_1 is found from the field of view of the depth camera by the geometric relation:

S_1 = 2h·tan(α/2) · 2h·tan(β/2)
Since the field of view S_1 contains M pixels in total, by this principle, for an assembled image-acquisition system with S_1, M and N known, the face-region area S can be found:

S = S_1 · N / M
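The area estimate follows directly from these two relations; the field-of-view angles used as defaults below are illustrative placeholders, not values from the patent:

```python
import math

def face_area(h, N, M, alpha_deg=58.0, beta_deg=45.0):
    """Estimate the real face area S from the depth h, the face pixel count N
    and the total pixel count M: first S1 = (2h tan(alpha/2)) * (2h tan(beta/2))
    gives the area covered by the whole field of view at distance h, then
    S = S1 * N / M scales it by the face's share of the pixels."""
    s1 = (2.0 * h * math.tan(math.radians(alpha_deg) / 2.0)) * \
         (2.0 * h * math.tan(math.radians(beta_deg) / 2.0))
    return s1 * N / M
```

Because S_1 grows with h² while the face's pixel share shrinks correspondingly, the estimated physical area stays consistent as the subject moves toward or away from the camera, which is what makes the ROI selection adaptive to depth.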
To determine the ROI area, the invention draws parallel lines through the band of the recognized face image running from 45% to 75% of its height measured down from the top edge, takes 30% of the full image height as the ROI height and 70% of the recognized image width as the ROI width, obtaining an ROI that contains only the nose and the left and right cheeks.
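A sketch of this fixed-ratio ROI rule, applied to a single recognized face image stored as a NumPy array:

```python
import numpy as np

def select_roi(face_img):
    """Crop the ROI as described above: rows from 45% to 75% of the image height
    (a 30%-height band) and the central 70% of the image width, which keeps
    only the nose and the left and right cheeks."""
    H, W = face_img.shape[:2]
    y0, y1 = int(0.45 * H), int(0.75 * H)   # vertical band
    x0, x1 = int(0.15 * W), int(0.85 * W)   # central 70% of the width
    return face_img[y0:y1, x0:x1]
```
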
Preferably, the method of step 6) for computing the heart rate with the IPPG technique is as follows: extract the source signal from the ROI by splitting the ROI image into its R, G and B channels and taking the G channel, which carries the least noise, as the source-signal channel; then apply spatial pixel averaging to the separated G-channel image:

Z(k) = (1/(g·d)) Σ_{i=1}^{g} Σ_{j=1}^{d} z_{i,j}(k)
where k is the frame index, K is the total number of frames, Z(k) is the one-dimensional G-channel source signal, z_{i,j}(k) is the G-channel color-intensity value of pixel (i, j), and g and d are the height and width of the image, respectively.
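The spatial averaging of the G channel can be sketched as:

```python
import numpy as np

def green_channel_signal(frames):
    """Form the one-dimensional IPPG source signal Z(k) by spatially averaging
    the G channel of each ROI frame: Z(k) = (1/(g*d)) * sum_{i,j} z_ij(k).
    `frames` is a sequence of H x W x 3 RGB arrays (channel index 1 = green)."""
    return np.array([frame[:, :, 1].mean() for frame in frames])
```
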
Signal denoising is performed with the empirical mode decomposition (EMD) method; its steps are as follows:
(1) take the mean of the upper and lower envelopes of the original signal u(t), obtaining the signal m_1(t);
(2) compute the first residue hp_1(t) = u(t) − m_1(t) and check whether hp_1(t) satisfies the IMF conditions; if not, return to step (1) with hp_1(t) as the signal to be sifted further, that is:
hp_2(t) = hp_1(t) − m_2(t)    (17)

Repeating the sifting k times:

hp_k(t) = hp_{k−1}(t) − m_k(t)    (18)

until hp_k(t) satisfies the IMF conditions yields the first IMF component IMF_1, i.e.

IMF_1 = hp_k(t)    (19)
(3) subtracting IMF_1 from the original signal u(t) gives the residue r_1(t), i.e.

r_1(t) = u(t) − IMF_1    (20)
(4) let u_1(t) = r_1(t) and take u_1(t) as the new original signal; repeating the above steps gives the second IMF component IMF_2, and so on n times;
(5) when the n-th residue r_n(t) has become a monotonic function from which no further IMF can be extracted, the whole EMD decomposition is complete; the original signal u(t) can then be expressed as the combination of the n IMF components and the mean-trend component r_n(t), that is:

u(t) = Σ_{i=1}^{n} IMF_i + r_n(t)    (21)
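The sifting procedure of steps (1)-(5) can be sketched as follows. This is a simplified illustration: the envelopes are built by linear interpolation through the local extrema and the sifting count is fixed, whereas practical EMD implementations use cubic-spline envelopes and a data-driven stopping criterion:

```python
import numpy as np

def _mean_envelope(x):
    """Mean of the upper and lower envelopes, interpolated through local extrema.
    Returns None when the signal has too few extrema, i.e. is (near) monotonic."""
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i] >= x[i - 1] and x[i] >= x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i] <= x[i - 1] and x[i] <= x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return None
    upper = np.interp(t, maxima, x[maxima])
    lower = np.interp(t, minima, x[minima])
    return (upper + lower) / 2.0

def emd(u, max_imfs=6, sift_iters=10):
    """EMD following steps (1)-(5): repeatedly sift hp_k = hp_{k-1} - m_k,
    accept the result as an IMF, subtract it from the signal, and stop when the
    residue is monotonic. Returns (imfs, residue) with u = sum(imfs) + residue
    holding by construction."""
    residue = np.asarray(u, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        hp = residue.copy()
        for _ in range(sift_iters):       # fixed sifting count, a common shortcut
            m = _mean_envelope(hp)
            if m is None:
                break
            hp = hp - m
        if _mean_envelope(hp) is None and not imfs:
            break                          # the input itself is monotonic
        imfs.append(hp)
        residue = residue - hp
        if _mean_envelope(residue) is None:
            break                          # residue is monotonic: decomposition done
    return imfs, residue
```

For IPPG denoising, only the IMFs whose dominant frequency falls in the heartbeat band would be kept before the spectral analysis of the next paragraph.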
An energy-spectrum analysis with the ARMA model is applied to the component of the EMD-decomposed signal whose frequency lies in the heartbeat band 0.75-2.0 Hz, corresponding to the normal human heart-rate range of 45-120 beats/min; the frequency at the energy peak is the heartbeat frequency f_h, and the heart rate is:
HR = 60·f_h    (22).
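A sketch of this final step; note that a plain FFT periodogram is used here as a simple stand-in for the ARMA-model energy spectrum named in the text:

```python
import numpy as np

def heart_rate_bpm(signal, fs, f_lo=0.75, f_hi=2.0):
    """Locate the energy peak of the denoised IPPG signal inside the heartbeat
    band 0.75-2.0 Hz (45-120 beats/min) and return HR = 60 * f_h."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                             # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # one-sided frequency grid
    power = np.abs(np.fft.rfft(x)) ** 2          # periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_h = freqs[band][np.argmax(power[band])]    # peak inside the heartbeat band
    return 60.0 * f_h
```

Restricting the peak search to the 0.75-2.0 Hz band is what rejects residual respiration and motion components that survive the denoising.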
With the above technical scheme, to further improve the accuracy of image-photoplethysmography (IPPG) heart-rate estimation, the invention proposes a non-contact heart-rate estimation method fusing IPPG and depth information. Using images measured in real time by the RGB-D camera, depth information is added and fused into the IPPG signal extraction, and an adaptive ROI-extraction method is designed, reducing the interference with the IPPG source signal caused by the noise that an inappropriately chosen ROI introduces. At the same time, considering that head turns by the subject cause up-down pitch and left-right swing variation of the face image, a weighted-average method reduces the motion-artifact interference caused by the subject's head movement, improving the accuracy of IPPG heart-rate estimation. Compared with conventional methods, the proposed heart-rate acquisition method uses an RGB-D camera to capture the face-image information and depth information without contact: no electrode patches are needed during measurement, it is more convenient to use, it is not limited by ambient light or affected by motion artifacts, and the accuracy of heart-rate measurement is improved.
Brief description of the drawings
The invention is further described below with reference to the accompanying drawings:
Fig. 1 is the flow diagram of the non-contact heart-rate estimation method of the invention fusing the IPPG technique and depth information;
Fig. 2 is the flow diagram of the face-detection algorithm of the invention;
Fig. 3 is the flow diagram of the face-tracking algorithm of the invention;
Fig. 4 is the cross-sectional view of the face image of the invention;
Fig. 5 is the table relating the partition of the face motion range of the invention to the corresponding weighted-mean faces;
Fig. 6 is the field-of-view diagram of the depth camera of the invention.
Specific embodiment
As shown in Figs. 1-6, the non-contact heart-rate estimation method of the invention, fusing the IPPG technique and depth information against motion-artifact noise, comprises the following steps:
1) Information acquisition: acquire the subject's face images and the corresponding depth information with an RGB-D camera;
2) Face-image detection: apply a template-matching face-detection algorithm to the acquired face images to detect and locate the face, forming a monitoring region;
3) Face tracking: apply a face-tracking algorithm to the monitoring regions of the successively acquired face images to follow the motion of the face, obtaining a sequence of face images with motion variation;
4) Motion-artifact elimination: suppress the motion-artifact interference in the motion-varying face images obtained above with a weighted-mean method, realizing dynamic face recognition;
5) ROI extraction: adaptively select the ROI on the recognized face image using the depth information;
6) Heart-rate computation with the IPPG technique: extract the source signal from the ROI with the IPPG technique, denoise it with empirical mode decomposition (EMD), and perform an energy-spectrum analysis with an ARMA model to obtain the subject's heart-rate value.
Preferably, the RGB-D camera of step 1) comprises an RGB camera and a depth camera; the RGB camera captures the RGB image of the body and the depth camera captures the depth image of the face. After the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera and the RGB image captured by the RGB camera coincide.
As shown in Fig. 2, the face-image detection method of step 2) is as follows: take the acquired face image as the input image and search it over all plausible scales and positions for rectangular regions, i.e. candidate windows to be examined. Each candidate window is processed as follows: first a coarse screening by eye-template matching; then mean/variance normalization of the image inside the window, so that the intensity distribution of the face region is standardized and the influence of illumination changes is removed; then face-template matching; windows whose matching error does not exceed the set threshold are output as candidate faces.
As shown in Fig. 3, the face-tracking method of step 3) is as follows: assume at most one face appears in each frame of the sequence and set the initial monitoring region to R0. A face is first detected inside the monitoring region of each frame; if a face is detected, the new monitoring region in the next frame is computed from its localization result by the monitoring-region formula, and each frame is processed in this way. If detection fails, the monitoring region is kept unchanged and detection continues in subsequent frames, to prevent losing the track because of an occasional missed detection. When the number of missed frames exceeds a given limit, i.e. the misses time out, the face is considered to have disappeared, and detection of any newly appearing face restarts in the initial monitoring region in subsequent frames.
The monitoring region refers to the possible position and scale range of a face in a given frame. It is represented by a six-tuple LR = (x_min, x_max, y_min, y_max, ch_min, ch_max), where x_min, x_max bound the face-center position in the x direction, y_min, y_max bound it in the y direction, and ch_min, ch_max bound the face scale. The face region is approximated by a square and denoted by a triple L = (x_center, y_center, ch_face), whose elements give in turn the center and the size of the face.
The task of face tracking is to follow the movement of a face through consecutive images. From the existing face localization results, the tracking algorithm predicts a monitor region for each face and detects the face within that region in the new frame. Because human motion is partly random (for example, the direction of motion may change abruptly), predicting a new monitor region from the existing face trajectory is difficult without prior knowledge of the subject's behavior. A relatively simple method is therefore used: only the tracking result of the previous frame is kept, the maximum change of the face between two frames is bounded according to the requirements of the application, and the new monitor region is then solved. Although this somewhat affects processing speed, it guarantees the robustness of tracking. Face tracking here targets frontal face image sequences, in which face motion consists mainly of translation and scale change. The new monitor region is therefore solved as
LR = (xcenter − a, xcenter + a, ycenter − b, ycenter + b, chface − c, chface + c)
where a and b bound the change of the face x and y position between two frames, c bounds the maximum change of the face scale, and all three depend on factors such as the motion speed of the face and the relative position of the face and the camera.
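Assuming the new monitor region is simply the previous face location widened by the maximum inter-frame changes a, b, c (the patent's formula image is not reproduced in the text, so this reading is inferred from the definitions of a, b, and c), the per-frame update can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Face:
    x: float   # face center, x coordinate
    y: float   # face center, y coordinate
    ch: float  # face scale (side length of the square face region)

def next_monitor_region(face, a, b, c):
    """Monitor region for the next frame: the previous face location widened
    by the maximum inter-frame change (a, b) in position and c in scale."""
    return (face.x - a, face.x + a,
            face.y - b, face.y + b,
            face.ch - c, face.ch + c)

def inside(region, face):
    """True if a detected face falls within the six-tuple monitor region."""
    xmin, xmax, ymin, ymax, chmin, chmax = region
    return (xmin <= face.x <= xmax and
            ymin <= face.y <= ymax and
            chmin <= face.ch <= chmax)
```

On a detection success the region is recomputed from the new face; on a miss it is simply reused for the next frame, matching the fallback behavior described above.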
Face variation in three-dimensional space comprises translation along, and rotation about, the horizontal, vertical, and depth axes. Part of the translation and rotation can be effectively compensated by geometric normalization, but for up-down pitch and left-right swing of the face image, geometric normalization is usually insufficient. To address the difficulty of face detection under motion, a face recognition algorithm based on the weighted mean face is adopted.
Preferably, the motion artifact elimination of step 4) proceeds as follows: according to the variation of the left-right swing angle of the face, a weight is defined for each face image, and a construction strategy for the weighted mean face is proposed. Based on the pitch angle of the face, the images are then divided into three levels: looking down, looking straight ahead, and looking up; a weighted mean face is constructed at each level, forming a weighted-mean-face matrix that enables dynamic face recognition. The captured face images with motion variation are superposed to form a mean face; such an image contains the integrated information of several moving faces and reflects the different motion states of the face.
Assume the given motion-varied gray face images are Ij(x, y). The weighted mean face (WMF) is then
F(x, y) = Σ_{j=1}^{ZM} ωj · Ij(x, y)
where ZM is the total number of images of the same person under different motions, j indexes the j-th varied image, x and y are the two-dimensional image plane coordinates, and ωj is the weight of the j-th face image;
As shown in Fig. 4, ωj is determined by the left-right swing angle of the face and is computed as follows: let the eye coordinates before the swing be A(xA, yA) and B(xB, yB), the eye coordinates after the face swings to the right be A'(xA', yA') and B'(xB', yB'), the center of the circle of the horizontal face cross-section be O(xo, yo), and the rightward swing angle of the face be ∠δ; the weight ωK of the K-th face image is then computed by the following formula:
where δk is the left-right swing angle of the k-th face image, so that solving for the weight ωk reduces to computing the swing angle δk.
The eye coordinates A(xA, yA), B(xB, yB) and the post-swing eye coordinates A'(xA', yA'), B'(xB', yB') are at known positions. Let the radius of the circle on which the eyes rotate be R; then
the center O(xo, yo) is obtained by the following derivation:
Combining equations (6) and (7) and simplifying gives:
and similarly:
Combining equations (8) and (9) yields the center O(xo, yo); substituting the result into equation (6) gives the radius R, and the swing angle δ then follows from equation (5). The swing angle δ is thus a function of the eye coordinate values, i.e.,
δ=f (xA,xB,xA',xB') (10)
The relation among the left-right swing angle δ, the pitch angle γ, the resulting partition of the face motion range, and the corresponding weighted mean faces is tabulated as shown in Fig. 5. For a given motion-varied gray face image Ii(x, y), the weighted mean face for left-right swing angles between 0° and 90° is defined by formula (11):
where ZM is the total number of faces within a given pitch-angle and swing-angle range, j indexes the j-th face, x and y are the two-dimensional image plane coordinates, p and q index the mean face of that variation range, and F'p,q denotes the weighted mean face for swing angles between −90° and 0°.
The computed weighted mean faces finally form the weighted-mean-face matrix of the face under multiple motion variations:
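A weighted mean face for one pitch level can be sketched as below. The cosine weighting normalized to sum to one is an assumption for illustration, since the patent's weight formula is not reproduced in the text:

```python
import numpy as np

def weighted_mean_face(images, yaw_angles):
    """Build a weighted mean face from ZM gray face images of one person.
    Larger yaw gets a smaller weight; the cosine weighting normalized to
    sum to 1 is an assumed stand-in for the patent's weight formula."""
    w = np.cos(np.radians(np.asarray(yaw_angles, dtype=np.float64)))
    w = w / w.sum()                      # weights sum to 1
    stack = np.stack([img.astype(np.float64) for img in images])
    return np.tensordot(w, stack, axes=1)  # Σ_j ω_j · I_j(x, y)
```

Repeating this per pitch level (looking down / straight ahead / up) and per swing-angle range yields the weighted-mean-face matrix described above.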
As shown in Fig. 6, the recognized face image is outlined with a rectangle. Let h denote the distance from the depth camera to the object surface, N the measured number of pixels in the outlined face region, S the actual area of the object, M the number of pixels in the entire field of view of the depth camera, S1 the area covered by the entire field of view, and α and β the horizontal and vertical fields of view of the depth camera, respectively.
At a given distance, the ratio of the face-region pixel count N to the actual object area S equals the ratio of the total pixel count M of the depth camera to the area S1 of the entire field of view at that distance, i.e.,
N / S = M / S1
where N is the pixel count of the face region obtained from image statistics, and S1 follows from the field of view of the depth camera by geometry:
S1 = 4h² · tan(α/2) · tan(β/2)
Since the field of view S1 contains M pixels in total, for a fixed image acquisition setup the face region area S can be obtained once S1, M, and N are known:
S = S1 · N / M
For the ROI, the invention draws parallel lines over the band from 45% to 75% of the recognized face image measured down from its top edge, takes 30% of the full image height as the ROI height and 70% of the recognized image width as the ROI width, and thereby obtains an ROI containing only the nose and the left and right cheeks.
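The area relation S = S1 · N / M with S1 = 4h² tan(α/2) tan(β/2), together with the fixed 45%-75% / 70% ROI rule, can be sketched as follows; `roi_from_face_box` and its box convention (x, y = top-left corner) are illustrative assumptions:

```python
import math

def face_area(h, N, M, alpha_deg, beta_deg):
    """Physical face area S from depth h, pixel counts, and camera FOV:
    S1 = 4 h^2 tan(alpha/2) tan(beta/2), then S = S1 * N / M."""
    S1 = (4.0 * h * h
          * math.tan(math.radians(alpha_deg) / 2.0)
          * math.tan(math.radians(beta_deg) / 2.0))
    return S1 * N / M

def roi_from_face_box(x, y, w, h_box):
    """ROI covering the nose and both cheeks: vertically the 45%-75% band of
    the face box (30% of its height), horizontally the central 70% of its
    width. Returns (roi_x, roi_y, roi_w, roi_h)."""
    roi_y = y + int(0.45 * h_box)
    roi_h = int(0.30 * h_box)
    roi_w = int(0.70 * w)
    roi_x = x + (w - roi_w) // 2
    return roi_x, roi_y, roi_w, roi_h
```

With the depth-derived face area fixing the physical scale of the box, the same percentage rule yields an ROI of consistent physical size as the subject moves toward or away from the camera.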
Preferably, the heart rate calculation of step 6) using the IPPG technique proceeds as follows: the source signal is extracted from the ROI by splitting the ROI image into the R, G, and B channels; the G channel, which has the lowest noise, is taken as the source-signal channel, and spatial pixel averaging is applied to the separated G-channel image:
Z(k) = (1 / (g·d)) · Σi Σj zi,j(k),  k = 1, …, K
where k is the frame index, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, zi,j(k) is the G-channel intensity of pixel (i, j), and g and d are the height and width of the image, respectively;
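The spatial pixel averaging of the G channel can be sketched over a (K, g, d, 3) RGB frame stack; the array layout is an assumption:

```python
import numpy as np

def g_channel_signal(frames):
    """Spatial pixel averaging of the G channel over the ROI for each frame:
    Z(k) = (1 / (g*d)) * Σ_i Σ_j z_ij(k).
    frames: array of shape (K, g, d, 3) holding K RGB ROI images."""
    g = frames[..., 1].astype(np.float64)          # G channel: lowest noise
    return g.reshape(g.shape[0], -1).mean(axis=1)  # one sample per frame
```

The result is the one-dimensional source signal Z(k) that the EMD denoising step below operates on.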
Signal denoising is performed with the empirical mode decomposition (EMD) method, whose steps are as follows:
(1) Compute the mean m1(t) of the upper and lower envelopes of the original signal u(t);
(2) Compute the first-order residual hp1(t) = u(t) − m1(t) and check whether hp1(t) satisfies the IMF conditions; if not, return to step (1) with hp1(t) as the signal to be sifted, i.e.,
hp2(t) = hp1(t) − m2(t)   (17)
Repeat the sifting k times:
hpk(t) = hpk−1(t) − mk(t)   (18)
until hpk(t) satisfies the IMF conditions, which yields the first IMF component IMF1, i.e.,
IMF1 = hpk(t)   (19)
(3) Subtract IMF1 from the original signal u(t) to obtain the residue r1(t), i.e.,
r1(t) = u(t) − IMF1   (20)
(4) Let u1(t) = r1(t) and take u1(t) as the new original signal; repeat the above steps to obtain the second IMF component IMF2, and so on n times:
ri(t) = ri−1(t) − IMFi,  i = 2, …, n   (21)
(5) When the n-th residue rn(t) has become a monotonic function from which no further IMF can be extracted, the whole EMD decomposition is complete; the original signal u(t) can then be expressed as the combination of the n IMF components and the mean trend component rn(t), i.e.,
u(t) = Σ_{i=1}^{n} IMFi + rn(t)
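Steps (1)-(5) can be sketched as a minimal EMD. Linear-interpolation envelopes and a fixed sifting count replace the cubic splines and formal IMF stopping criterion of classical EMD, so this illustrates the structure rather than being a faithful implementation:

```python
import numpy as np

def _envelope_mean(x):
    """Mean of the upper and lower envelopes through the local extrema.
    Linear interpolation is used for simplicity; classical EMD uses cubic
    splines. Returns None when too few extrema remain."""
    n = len(x)
    t = np.arange(n)
    maxima = [i for i in range(1, n - 1) if x[i] >= x[i-1] and x[i] >= x[i+1]]
    minima = [i for i in range(1, n - 1) if x[i] <= x[i-1] and x[i] <= x[i+1]]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # residue is (close to) monotonic
    up = np.interp(t, [0] + maxima + [n-1], [x[0]] + [x[i] for i in maxima] + [x[-1]])
    lo = np.interp(t, [0] + minima + [n-1], [x[0]] + [x[i] for i in minima] + [x[-1]])
    return (up + lo) / 2.0

def emd(u, max_imfs=6, sift_iters=10):
    """Steps (1)-(5): repeatedly sift out IMFs until the residue is monotonic,
    so that u(t) = sum of IMFs + residue. A fixed sifting count stands in for
    the formal IMF conditions."""
    imfs, r = [], np.asarray(u, dtype=np.float64).copy()
    for _ in range(max_imfs):
        h = r.copy()
        for _ in range(sift_iters):
            m = _envelope_mean(h)
            if m is None:
                return imfs, r           # nothing left to sift
            h = h - m                    # hp_k(t) = hp_{k-1}(t) - m_k(t)
        imfs.append(h)                   # next IMF component
        r = r - h                        # r_i(t) = r_{i-1}(t) - IMF_i
    return imfs, r
```

By construction the decomposition is exact: summing the IMFs and the residue reproduces the input signal.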
Energy spectrum analysis with an AR/ARMA model is applied to the EMD-decomposed signal whose frequency lies in the heartbeat band of 0.75-2.0 Hz, corresponding to the normal human heart rate range of 45-120 beats/min; the frequency at the highest energy peak is the heartbeat frequency fh, and the heart rate is:
HR = 60 · fh   (22).
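The final band-limited peak search can be sketched as follows; an FFT periodogram stands in for the AR/ARMA power spectrum named in the text:

```python
import numpy as np

def heart_rate_bpm(z, fs):
    """Heart rate from the dominant spectral peak in the 0.75-2.0 Hz band
    (45-120 beats/min). An FFT periodogram is used here as a stand-in for
    the patent's AR/ARMA energy spectrum."""
    z = np.asarray(z, dtype=np.float64)
    z = z - z.mean()                          # remove DC before the FFT
    spec = np.abs(np.fft.rfft(z)) ** 2        # periodogram
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
    band = (freqs >= 0.75) & (freqs <= 2.0)   # heartbeat band only
    fh = freqs[band][np.argmax(spec[band])]   # peak frequency f_h
    return 60.0 * fh                          # HR = 60 * f_h
```

For example, 10 s of a 1.2 Hz pulse sampled at 30 fps yields 72 beats/min.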
By adopting the above technical scheme, the invention further improves the accuracy of imaging photoplethysmography (IPPG) heart rate estimation. The invention proposes a non-contact heart rate estimation method fusing imaging photoplethysmography (IPPG) with depth information. Using images measured in real time by an RGBD camera, depth information is added and fused into the IPPG source-signal extraction, and an adaptive ROI extraction method is designed, reducing the interference introduced into the IPPG source signal by an inappropriately chosen ROI. At the same time, considering that head yaw of the subject causes up-down pitch and left-right swing variations of the face image, the weighted-mean method is used to reduce the motion artifacts caused by head movement, improving the accuracy of imaging photoplethysmography (IPPG) heart rate estimation. Compared with conventional methods, the heart rate acquisition method of the invention collects face images and depth information contactlessly with an RGBD camera, requires no electrode patches during measurement, is more convenient to use, is not limited by ambient light or affected by motion artifacts, and improves the accuracy of heart rate measurement.
The above description shall not be construed as limiting the protection scope of the present invention in any way.

Claims (7)

1. A non-contact heart rate estimation method fusing IPPG and depth information against noise interference, characterized in that it comprises the following steps:
1) information collection: collecting a face image of the subject and its corresponding depth information with an RGBD camera;
2) face image detection: performing face detection and localization on the collected face image by template matching using a face detection algorithm, and forming a monitor region;
3) face tracking: tracking the movement of the face with a face tracking algorithm through the monitor regions corresponding to the continuously acquired face images, obtaining multiple face images with motion variation;
4) motion artifact elimination: eliminating the motion artifact interference of the obtained face images with motion variation by the weighted-mean method, realizing dynamic face recognition;
5) ROI extraction: adaptively selecting an ROI from the recognized face image using the depth information;
6) heart rate calculation using the IPPG technique: extracting the source signal from the ROI with the IPPG technique, denoising the signal with the empirical mode decomposition (EMD) method, and performing energy spectrum analysis with an AR/ARMA model to obtain the heart rate value of the subject.
2. The non-contact heart rate estimation method fusing IPPG and depth information against noise interference according to claim 1, characterized in that: the RGBD camera of step 1) comprises an RGB camera and a depth camera, the RGB camera being used to capture an RGB image of the human body and the depth camera being used to capture a depth image of the face; after the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera and the RGB image captured by the RGB camera are registered so that they coincide.
3. The non-contact heart rate estimation method fusing IPPG and depth information against noise interference according to claim 1, characterized in that: the face image detection of step 2) proceeds as follows: the captured face image is used as the input image, and rectangular regions of every possible scale and position in the input image are searched as candidate windows to be detected; each candidate window is processed in the following steps: first, coarse screening is performed by eye template matching; next, the image inside the window is normalized to zero mean and unit variance so that the gray-level distribution of the face region is standardized and the influence of illumination changes is eliminated; finally, face template matching is performed, and if the matching error does not exceed the set threshold, the window is output as a candidate face.
4. The non-contact heart rate estimation method fusing IPPG and depth information against noise interference according to claim 1, characterized in that: the face tracking of step 3) proceeds as follows: assuming that at most one face appears in each frame of the sequence, the initial monitor region is set to R0; a face is first detected within the monitor region of each frame; if a face is detected, the monitor region for the next frame is computed from its localization result according to the monitor-region formula, and each frame is processed in this way; if detection fails, the monitor region is kept unchanged and detection continues in subsequent frames, preventing tracking from failing because of an occasional missed detection; when the number of consecutive missed frames exceeds a set limit, i.e., the missed detection times out, the face is considered to have disappeared, and detection restarts in the initial monitor region to find any new face that may appear in subsequent frames;
Assume the monitor region is expressed as a six-tuple LR = (xmin, xmax, ymin, ymax, chmin, chmax), where xmin, xmax bound the face center in the x direction, ymin, ymax bound the face center in the y direction, and chmin, chmax bound the face scale; the face region is approximated by a square represented by a triple L = (xcenter, ycenter, chface), whose elements give the center and size of the face; the new monitor region is then solved as
LR = (xcenter − a, xcenter + a, ycenter − b, ycenter + b, chface − c, chface + c)
where a and b bound the change of the face x and y position between two frames, and c bounds the maximum change of the face scale.
5. The non-contact heart rate estimation method fusing IPPG and depth information against noise interference according to claim 1, characterized in that: the motion artifact elimination of step 4) proceeds as follows: according to the variation of the left-right swing angle of the face, a weight is defined for each face image and a construction strategy for the weighted mean face is proposed; based on the pitch angle of the face, the images are divided into three levels of looking down, looking straight ahead, and looking up, and a weighted mean face is constructed at each level, forming a weighted-mean-face matrix;
Assume the given motion-varied gray face images are Ij(x, y); the weighted mean face is then
F(x, y) = Σ_{j=1}^{ZM} ωj · Ij(x, y)
where ZM is the total number of images of the same person under different motions, j indexes the j-th varied image, x and y are the two-dimensional image plane coordinates, and ωj is the weight of the j-th face image;
ωj is determined by the left-right swing angle of the face and is computed as follows: let the eye coordinates before the swing be A(xA, yA) and B(xB, yB), the eye coordinates after the face swings to the right be A'(xA', yA') and B'(xB', yB'), the center of the circle of the horizontal face cross-section be O(xo, yo), and the rightward swing angle of the face be ∠δ; the weight ωK of the K-th face image is then computed by the following formula:
where δk is the left-right swing angle of the k-th face image, so that solving for the weight ωk reduces to computing the swing angle δk;
The eye coordinates A(xA, yA), B(xB, yB) and the post-swing eye coordinates A'(xA', yA'), B'(xB', yB') are at known positions; let the radius of the circle on which the eyes rotate be R; then
the center O(xo, yo) is obtained by the following derivation:
Combining equations (6) and (7) and simplifying gives:
and similarly:
Combining equations (8) and (9) yields the center O(xo, yo); substituting the result into equation (6) gives the radius R, and the swing angle δ then follows from equation (5); the swing angle δ is thus a function of the eye coordinate values, i.e.,
δ = f(xA, xB, xA', xB')   (10)
The relation among the left-right swing angle δ, the pitch angle γ, the resulting partition of the face motion range, and the corresponding weighted mean faces is tabulated; for a given motion-varied gray face image Ii(x, y), the weighted mean face for left-right swing angles between 0° and 90° is defined by formula (11):
where ZM is the total number of faces within a given pitch-angle and swing-angle range, j indexes the j-th face, x and y are the two-dimensional image plane coordinates, p and q index the mean face of that variation range, and F'p,q denotes the weighted mean face for swing angles between −90° and 0°;
The computed weighted mean faces finally form the weighted-mean-face matrix of the face under multiple motion variations:
6. The non-contact heart rate estimation method fusing IPPG and depth information against noise interference according to claim 1, characterized in that: the ROI extraction of step 5) proceeds as follows: parallel lines are drawn over the band from 45% to 75% of the recognized face image measured down from its top edge, 30% of the full image height is taken as the ROI height and 70% of the recognized image width as the ROI width, obtaining an ROI containing only the nose and the left and right cheeks.
7. The non-contact heart rate estimation method fusing IPPG and depth information against noise interference according to claim 1, characterized in that: the heart rate calculation of step 6) using the IPPG technique proceeds as follows: the source signal is extracted from the ROI by splitting the ROI image into the R, G, and B channels; the G channel, which has the lowest noise, is taken as the source-signal channel, and spatial pixel averaging is applied to the separated G-channel image:
Z(k) = (1 / (g·d)) · Σi Σj zi,j(k),  k = 1, …, K   (16)
where k is the frame index, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, zi,j(k) is the G-channel intensity of pixel (i, j), and g and d are the height and width of the image, respectively;
Signal denoising is performed with the empirical mode decomposition (EMD) method, whose steps are as follows:
(1) Compute the mean m1(t) of the upper and lower envelopes of the original signal u(t);
(2) Compute the first-order residual hp1(t) = u(t) − m1(t) and check whether hp1(t) satisfies the IMF conditions; if not, return to step (1) with hp1(t) as the signal to be sifted, i.e.,
hp2(t) = hp1(t) − m2(t)   (17)
Repeat the sifting k times:
hpk(t) = hpk−1(t) − mk(t)   (18)
until hpk(t) satisfies the IMF conditions, which yields the first IMF component IMF1, i.e.,
IMF1 = hpk(t)   (19)
(3) Subtract IMF1 from the original signal u(t) to obtain the residue r1(t), i.e.,
r1(t) = u(t) − IMF1   (20)
(4) Let u1(t) = r1(t) and take u1(t) as the new original signal; repeat the above steps to obtain the second IMF component IMF2, and so on n times:
ri(t) = ri−1(t) − IMFi,  i = 2, …, n   (21)
(5) When the n-th residue rn(t) has become a monotonic function from which no further IMF can be extracted, the whole EMD decomposition is complete; the original signal u(t) can then be expressed as the combination of the n IMF components and the mean trend component rn(t), i.e.,
u(t) = Σ_{i=1}^{n} IMFi + rn(t)
Energy spectrum analysis with an AR/ARMA model is applied to the EMD-decomposed signal whose frequency lies in the heartbeat band of 0.75-2.0 Hz, corresponding to the normal human heart rate range of 45-120 beats/min; the frequency at the highest energy peak is the heartbeat frequency fh, and the heart rate is:
HR = 60 · fh   (22).
CN201910462756.0A 2019-05-30 2019-05-30 Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming Pending CN110276271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910462756.0A CN110276271A (en) 2019-05-30 2019-05-30 Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming

Publications (1)

Publication Number Publication Date
CN110276271A true CN110276271A (en) 2019-09-24

Family

ID=67961224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910462756.0A Pending CN110276271A (en) 2019-05-30 2019-05-30 Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming

Country Status (1)

Country Link
CN (1) CN110276271A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866498A (en) * 2019-11-15 2020-03-06 北京华宇信息技术有限公司 Portable heart rate monitoring device and heart rate monitoring method thereof
CN112155540A (en) * 2020-09-30 2021-01-01 重庆仙桃前沿消费行为大数据有限公司 Acquisition imaging system and method for complex object
CN112733650A (en) * 2020-12-29 2021-04-30 深圳云天励飞技术股份有限公司 Target face detection method and device, terminal equipment and storage medium
WO2021184620A1 (en) * 2020-03-19 2021-09-23 南京昊眼晶睛智能科技有限公司 Camera-based non-contact heart rate and body temperature measurement method
CN113876311A (en) * 2021-09-02 2022-01-04 天津大学 Self-adaptively-selected non-contact multi-player heart rate efficient extraction device
CN114120770A (en) * 2021-03-24 2022-03-01 张银合 Barrier-free communication method for hearing-impaired people
CN114387479A (en) * 2022-01-07 2022-04-22 河北工业大学 Non-contact heart rate measurement method and system based on face video
CN114403838A (en) * 2022-01-24 2022-04-29 佛山科学技术学院 Portable raspberry pi-based remote heart rate detection device and method
CN114431849A (en) * 2022-01-10 2022-05-06 厦门大学 Aquatic animal heart rate detection method based on video image processing
WO2022101785A1 (en) * 2020-11-14 2022-05-19 Facense Ltd. Improvements in acquisition and analysis of imaging photoplethysmogram signals
CN118411751A (en) * 2024-07-03 2024-07-30 宁波星巡智能科技有限公司 Heart rate measurement stability augmentation method, device and equipment based on facial image processing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678780A (en) * 2016-01-14 2016-06-15 合肥工业大学智能制造技术研究院 Video heart rate detection method removing interference of ambient light variation
CN106778695A (en) * 2017-01-19 2017-05-31 北京理工大学 A kind of many people's examing heartbeat fastly methods based on video
CN106886216A (en) * 2017-01-16 2017-06-23 深圳前海勇艺达机器人有限公司 Robot automatic tracking method and system based on RGBD Face datections
US9750420B1 (en) * 2014-12-10 2017-09-05 Amazon Technologies, Inc. Facial feature selection for heart rate detection
CN107358220A (en) * 2017-07-31 2017-11-17 江西中医药大学 A kind of human heart rate and the contactless measurement of breathing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Yi et al.: "Non-contact heart rate measurement method based on facial video", Nanotechnology and Precision Engineering *
LIANG Luhong et al.: "Face tracking algorithm based on face detection", Computer Engineering and Applications *
ZOU Guofeng et al.: "Multi-pose face recognition based on weighted mean face", Application Research of Computers *


Similar Documents

Publication Publication Date Title
CN110276271A (en) Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming
US11771381B2 (en) Device, system and method for measuring and processing physiological signals of a subject
Sikdar et al. Computer-vision-guided human pulse rate estimation: a review
CN105636505B (en) For obtaining the device and method of the vital sign of object
CN102499664B (en) Video-image-based method and system for detecting non-contact vital sign
CN105869144B (en) A kind of contactless monitoring of respiration method based on depth image data
JP2019503824A (en) Observational heart failure monitoring system
CN106264568A (en) Contactless emotion detection method and device
CN102309318A (en) Method for detecting human body physiological parameters on basis of infrared sequence image
CN106073742A (en) A kind of blood pressure measuring system and method
CN106236049A (en) Blood pressure measuring method based on video image
Blöcher et al. An online PPGI approach for camera based heart rate monitoring using beat-to-beat detection
WO2011127487A2 (en) Method and system for measurement of physiological parameters
Zhou et al. The noninvasive blood pressure measurement based on facial images processing
CN108186000A (en) Real-time blood pressure monitor system and method based on heart impact signal and photosignal
CN112233813A (en) Non-contact non-invasive heart rate and respiration measurement method and system based on PPG
Allen et al. Photoplethysmography (PPG): state-of-the-art methods and applications
Przybyło A deep learning approach for remote heart rate estimation
CN109009052A (en) The embedded heart rate measurement system and its measurement method of view-based access control model
CN107115103A (en) A kind of pulse condition width detection designed based on sensor array and 3D arteries and veins figure construction methods
CN113456042A (en) Non-contact facial blood pressure measuring method based on 3D CNN
TWI595860B (en) Featured points identification methods for mechanocardiography spectrum
CN114898882A (en) Method and system for ultrasound-based assessment of right heart function
TWI603712B (en) Cardiac Physiological Measurement System
CN109602412A (en) The method for realizing heart rate detection using facial video

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190924