CN111353425A - Sleeping posture monitoring method based on feature fusion and artificial neural network - Google Patents
- Publication number
- CN111353425A CN111353425A CN202010126488.8A CN202010126488A CN111353425A CN 111353425 A CN111353425 A CN 111353425A CN 202010126488 A CN202010126488 A CN 202010126488A CN 111353425 A CN111353425 A CN 111353425A
- Authority
- CN
- China
- Prior art keywords
- sleeping
- image
- area
- sleeping posture
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1116—Determining posture transitions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4809—Sleep detection, i.e. determining whether a subject is asleep or not
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4812—Detecting sleep stages or cycles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4815—Sleep quality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
Abstract
The present invention is a sleeping posture monitoring method based on feature fusion and an artificial neural network. Targeting the characteristics of six sleeping posture types, the method performs histogram analysis on sleeping posture images and applies a combination of image processing techniques for targeted preprocessing, which removes noise and improves image quality while effectively retaining as much useful information as possible, preparing for subsequent feature extraction and overall monitoring accuracy; the targeted preprocessing yields more complete sleeping posture images with more distinct features. Combining multi-feature fusion with an artificial neural network achieves a higher recognition accuracy of 99.17% and better real-time performance; experiments show that recognizing 180 images takes only 0.13 s. The invention directly collects the pressure data between the human body and the mattress to generate sleeping posture images; the short data processing time improves the real-time performance of sleeping posture recognition and facilitates later construction of a model relating sleeping posture changes to dynamic pressure.
Description
Technical Field
The invention relates to the field of physiological information monitoring, and in particular to an unconstrained, interference-free sleeping posture monitoring method based on feature information fusion and an artificial neural network.
Background
Sleep is an important part of our lives, and sleep state is directly related to people's psychological and physical health. In sleep state monitoring, sleeping posture is one of the keys to objectively evaluating sleep quality. Effective monitoring of sleeping posture in the home environment can enable early diagnosis and early prevention of diseases such as respiratory diseases and pressure ulcers.
At present, unconstrained, interference-free sleeping posture monitoring methods have gradually become the main research direction. Ye Yinqiu et al. (Ye Yinqiu, Jiang Taiping, Zhang Lei. Human sleeping posture recognition based on the level set method and neural networks [J]. Industrial Control Computer (05): 91-93.) obtained static sleeping postures and dynamic video of the human body with a camera and used a BP-neural-network-based algorithm to identify four sleeping postures, with an average recognition accuracy of 73%; the accuracy is low and the approach raises privacy concerns. Zhang Yichao et al. (Zhang Yichao, Yuan Zhenming, Sun Xiaoyan. Sleeping posture recognition based on ballistocardiogram signals [J]. Computer Engineering and Applications (1): 135-140.) used piezoelectric thin-film sensors to collect tiny local vibration signals of the human body in a non-contact way and applied a BP-neural-network-based algorithm for sleeping posture recognition, with an average correct recognition rate of 93%; however, only four sleeping postures were recognized and the accuracy remains too low for practical application.
Summary of the Invention
The invention aims to overcome the deficiencies and shortcomings of the above sleeping posture monitoring methods by providing an interference-free, unconstrained sleeping posture monitoring method based on the fusion of sleeping posture features and an artificial neural network, improving the accuracy of unconstrained sleeping posture recognition, providing technical support for sleeping posture recognition and monitoring in the home environment, and enabling early diagnosis and early prevention of diseases such as respiratory diseases and pressure ulcers.
The technical scheme adopted by the present invention to solve the above problems is as follows:
A sleeping posture monitoring method based on feature fusion and an artificial neural network, the steps of which are:
Step 1: Acquisition of sleeping posture images
Human body pressure data for six sleeping postures (supine, prone, right trunk type, right fetal type, left trunk type, left fetal type) are collected using a sheet-type large-area pressure sensor array and, after reorganization and sorting, converted into a two-dimensional sleeping posture image. The coordinates of each pressure value in the two-dimensional sleeping posture image correspond one-to-one to the positions of the sensor units, faithfully reproducing the direction and position in which the human body lies on the sensor;
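This conversion can be sketched as follows (an illustrative Python sketch, not from the patent; the 64×32 grid size and the row-major scan order of the samples are assumptions made for the example):

```python
import numpy as np

ROWS, COLS = 64, 32  # assumed sensor grid, matching the 64x32 posture image


def frame_to_image(samples):
    """Reorder a flat scan of pressure samples into a 2-D posture image.

    `samples` is assumed to arrive row by row in scan order; each entry
    keeps a one-to-one mapping to a sensor cell, so pixel (y, x) reflects
    the pressure at that physical location on the mattress.
    """
    samples = np.asarray(samples, dtype=np.float32)
    if samples.size != ROWS * COLS:
        raise ValueError("expected one sample per sensor cell")
    return samples.reshape(ROWS, COLS)


flat = np.arange(64 * 32)  # dummy frame standing in for real readings
img = frame_to_image(flat)
```

In a real system the scan order would come from the acquisition device's protocol; the point is only that the reshape preserves the cell-to-pixel mapping described above.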
Step 2: Preprocessing of sleeping posture images
The sleeping posture image is differenced with the pressure data output by the sensor under no load to eliminate the noise introduced by the sensor itself, yielding the actual sleeping posture pressure image. Histogram analysis is performed on this image, and inversion, local equalization, sleeping posture segmentation and morphological denoising are applied in turn to obtain a preprocessed sleeping posture image in which the limbs are clearly distinguished;
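The preprocessing chain can be sketched as below (illustrative Python, not the patent's implementation; the gray interval 160-240 and the simple contrast stretch standing in for local equalization are assumptions based on the worked example later in the description):

```python
import numpy as np


def preprocess(raw, no_load):
    """Sketch of the preprocessing chain on one pressure frame.

    Steps: subtract the no-load sensor output, invert so high pressure
    reads dark on a light background, then stretch the occupied gray
    range. A real system would read the active range off the histogram.
    """
    p = np.clip(raw.astype(np.int16) - no_load, 0, 255)  # remove sensor bias
    inv = 255 - p                                        # inversion
    lo, hi = 160, 240                                    # assumed active range
    body = (inv >= lo) & (inv <= hi)
    eq = np.full(inv.shape, 255, dtype=np.uint8)         # background -> white
    if body.any():
        # contrast stretch of the occupied range (stand-in for equalization)
        eq[body] = ((inv[body] - lo) * (255.0 / (hi - lo))).astype(np.uint8)
    return eq


raw = np.full((64, 32), 20, dtype=np.uint8)
raw[20:40, 10:22] = 70                                   # fake body pressure
out = preprocess(raw, np.full((64, 32), 5, dtype=np.uint8))
```

Segmentation and morphological denoising would follow on `out`; they are omitted here to keep the sketch short.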
Step 3: Feature extraction and fusion of images
Feature extraction is performed on the sleeping posture image preprocessed in the second step. First the HOG features of the image are extracted: the image is divided into cells of m×m pixels and the gradient magnitude and direction are computed for each; the cells are then grouped into blocks of n×n cells, and a block-sized window is slid across the image to obtain the HOG feature F_hog;
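A minimal sketch of this HOG computation follows (illustrative Python; the 9 orientation bins and L2 block normalization are conventional HOG choices assumed here, since the patent does not specify them; cell and block sizes follow the m = n = 2, one-cell-stride values given later):

```python
import numpy as np


def hog_features(img, cell=2, block=2, bins=9):
    """Minimal HOG sketch: per-cell gradient-orientation histograms,
    L2-normalised per sliding block of cells, flattened into F_hog."""
    img = img.astype(np.float32)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned orientation

    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins), dtype=np.float32)
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            hist[y // cell, x // cell, bin_idx[y, x]] += mag[y, x]

    feats = []
    for by in range(ch - block + 1):                      # 1-cell sliding step
        for bx in range(cw - block + 1):
            v = hist[by:by + block, bx:bx + block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))  # block normalisation
    return np.concatenate(feats)


img = np.zeros((64, 32), dtype=np.uint8)
img[20:40, 10:22] = 200                                   # fake posture blob
f_hog = hog_features(img)
```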
Next, the GLCM features of the sleeping posture image preprocessed in the second step are extracted. The 256-gray-level image is converted to 8 gray levels, gray-level co-occurrence matrices are computed for different directions θ and different distances d, and the values of four features are derived from them: contrast CON, correlation COOR, angular second moment ASM and inverse difference moment HOM. For each feature, the mean M and variance S over the different angles at the same distance are taken as the final GLCM features. The formulas for the four features are:

CON = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − j)² p(i,j,d,θ) (1)

COOR = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − μ_x)(j − μ_y) p(i,j,d,θ) / (σ_x σ_y) (2)

ASM = Σ_{i=1}^{L} Σ_{j=1}^{L} p(i,j,d,θ)² (3)

HOM = Σ_{i=1}^{L} Σ_{j=1}^{L} p(i,j,d,θ) / [1 + (i − j)²] (4)

where:

i and j are pixel gray levels; p(i,j,d,θ) is the frequency with which a pair of pixels separated by distance d in direction θ take gray values i and j respectively; μ_x, μ_y, σ_x and σ_y are the means and standard deviations of the row and column marginal distributions of p, i.e.

μ_x = Σ_i i Σ_j p(i,j,d,θ), σ_x² = Σ_i (i − μ_x)² Σ_j p(i,j,d,θ), and similarly for μ_y and σ_y (5)

L is the number of gray levels of the image; there are four directions in total, and d takes k values;
When d = 1, let M_CON1 denote the mean of the contrast over the four directions and S_CON1 its variance; their specific formulas are:

M_CON1 = (1/4) Σ_{θ} CON(1, θ) (6)

S_CON1 = (1/4) Σ_{θ} [CON(1, θ) − M_CON1]² (7)
The remaining three features are computed in the same way. Finally, when d = 1, the vector set of the four features is:
M1 = {M_CON1, S_CON1, M_COOR1, S_COOR1, M_ASM1, S_ASM1, M_HOM1, S_HOM1} (8)
Varying the value of the distance d then yields the GLCM feature vector F_glcm, whose expression is:
F_glcm = {M1, M2, ..., Mk} (9)
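The computation for one distance d can be sketched as follows (illustrative Python; the 8-level quantization and the four directions follow the text, while the co-occurrence normalization and the direction-to-offset mapping are conventional choices assumed here):

```python
import numpy as np

OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}  # theta -> step


def glcm(img8, d, theta):
    """Normalised gray-level co-occurrence matrix for an image already
    quantised to 8 levels (values 0..7), distance d, direction theta."""
    dy, dx = OFFSETS[theta][0] * d, OFFSETS[theta][1] * d
    P = np.zeros((8, 8), dtype=np.float64)
    H, W = img8.shape
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W:
                P[img8[y, x], img8[y2, x2]] += 1
    s = P.sum()
    return P / s if s else P


def glcm_stats(img8, d):
    """CON, COOR, ASM, HOM over the four directions: returns the mean M
    and variance S of each feature at distance d, i.e. the set M_d."""
    i, j = np.indices((8, 8))
    vals = {"CON": [], "COOR": [], "ASM": [], "HOM": []}
    for theta in OFFSETS:
        P = glcm(img8, d, theta)
        mu_x, mu_y = (i * P).sum(), (j * P).sum()
        sd_x = np.sqrt(((i - mu_x) ** 2 * P).sum())
        sd_y = np.sqrt(((j - mu_y) ** 2 * P).sum())
        vals["CON"].append(((i - j) ** 2 * P).sum())
        vals["COOR"].append(((i - mu_x) * (j - mu_y) * P).sum()
                            / (sd_x * sd_y + 1e-12))
        vals["ASM"].append((P ** 2).sum())
        vals["HOM"].append((P / (1.0 + (i - j) ** 2)).sum())
    out = []
    for k in ("CON", "COOR", "ASM", "HOM"):
        v = np.array(vals[k])
        out += [v.mean(), v.var()]
    return np.array(out)                                   # 8 values = M_d


img8 = ((np.arange(64 * 32).reshape(64, 32) // 256) % 8).astype(int)
M1 = glcm_stats(img8, d=1)
```

F_glcm would then concatenate `glcm_stats(img8, d)` for d = 1..k.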
Next, local features are extracted from the sleeping posture image preprocessed in the second step. The local features are the area A of the sleeping posture region, the number N of leg regions, the center coordinate point H of the head region, and the angle α between the horizontal direction and the line connecting H with the center coordinate point H1 of the leg region;
The area A of the sleeping posture region is the number of pixels in that region. The preprocessed sleeping posture image is 64×32 and the gray value of the background region is 255, so A can be computed from the number of non-255 pixels in the image;
Edge extraction is then performed on the image to obtain the sleeping posture contour. Taking the lower-left corner of the preprocessed sleeping posture image as the coordinate origin, the bed length direction as the Y direction (with positive Y defined as pointing from the legs to the head) and the bed width direction as the X direction, a coordinate system is established in which the X axis indexes columns and the Y axis indexes rows. The region Y∈[57,64], X∈[0,32] is designated the head region S_head, and the center coordinate point of the head region is computed as:

H = ((x_min + x_max)/2, (y_min + y_max)/2) (10)
where x_min and x_max are the minimum and maximum of the abscissa of the head contour, and y_min and y_max are the minimum and maximum of its ordinate;
The region Y∈[0,16], X∈[0,32] is designated the leg region S_leg, and the number of connected components within it is taken as the number N of leg regions. The coordinate H1 of the pressure center point of the leg region is computed as:

H1 = ((x_min1 + x_max1)/2, (y_min1 + y_max1)/2) (11)
where x_min1 and x_max1 are the minimum and maximum of the abscissa of the leg region, and y_min1 and y_max1 the minimum and maximum of its ordinate.
From the coordinates of H = (x_H, y_H) and H1 = (x_H1, y_H1) obtained above, the included angle α is computed as:

α = arctan[(y_H − y_H1) / (x_H − x_H1)] (12)
After the above features are extracted, the local feature set F_s = {A, N, H, α} is generated, and the local feature set is fused with the HOG and GLCM features into a fused feature vector F representing the six static-pressure sleeping posture images, whose expression is:
F = {F_hog, F_glcm, F_s} (13)
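The local features can be sketched as below (illustrative Python; the flood-fill component counter, the row-slice choices for the head and leg regions and the degree-valued angle are stand-ins for routines the patent leaves unspecified; fusing F would then concatenate F_hog, F_glcm and these values into one vector):

```python
import numpy as np


def count_components(mask):
    """4-connected component count via flood fill (stand-in for a
    connected-component labelling routine)."""
    seen = np.zeros_like(mask, dtype=bool)
    n, (H, W) = 0, mask.shape
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and not seen[sy, sx]:
                n += 1
                stack = [(sy, sx)]
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    for y2, x2 in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= y2 < H and 0 <= x2 < W and mask[y2, x2] \
                                and not seen[y2, x2]:
                            seen[y2, x2] = True
                            stack.append((y2, x2))
    return n


def local_features(img):
    """Sketch of F_s = {A, N, H, alpha} for a 64x32 preprocessed image
    whose background is 255. Row 0 is taken as the foot end, so the head
    region is the top 8 rows and the leg region the bottom 16 rows."""
    body = img != 255
    A = int(body.sum())                                   # posture area
    head, legs = body[-8:, :], body[:16, :]
    ys, xs = np.nonzero(head)
    Hc = ((xs.min() + xs.max()) / 2.0, 56 + (ys.min() + ys.max()) / 2.0)
    ys, xs = np.nonzero(legs)
    H1 = ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)
    N = count_components(legs)
    alpha = np.degrees(np.arctan2(Hc[1] - H1[1], Hc[0] - H1[0]))
    return A, N, Hc, alpha


img = np.full((64, 32), 255, dtype=np.uint8)
img[58:63, 14:18] = 0        # head blob
img[2:10, 8:12] = 0          # leg blob 1
img[2:10, 20:24] = 0         # leg blob 2
A, N, Hc, alpha = local_features(img)
```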
Step 4: Training the sleeping posture classification model
An artificial-neural-network-based algorithm is used to train on the fused feature vectors of the six static-pressure sleeping posture images to obtain classification models for the different sleeping postures;
Step 5: Real-time monitoring and recognition of sleeping posture
The pressure data collected in real time are displayed on the front-end interface of the system, and the first three steps are repeated to obtain the fused feature vector of the current sleeping posture image. The classification model trained in the fourth step is then used to continuously identify the sleeping posture type from this fused feature vector, and the recognition results are written to a log, enabling long-term monitoring of the human sleeping posture.
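The monitoring loop of this step can be sketched as follows (illustrative Python; `frames`, `featurize` and `classify` are placeholders for the acquisition stream, the steps 1-3 pipeline and the trained step-4 classifier, and the JSON-lines log format is an assumption):

```python
import json
import time


def monitor(frames, featurize, classify, log_path="sleep_log.jsonl"):
    """Sketch of the step-5 loop: for each incoming pressure frame,
    rebuild the fused feature vector, classify it, and append the
    result to a log file for long-term monitoring."""
    labels = ["supine", "prone", "right trunk", "right fetal",
              "left trunk", "left fetal"]
    with open(log_path, "a") as log:
        for frame in frames:
            posture = labels[classify(featurize(frame))]
            log.write(json.dumps({"t": time.time(), "posture": posture}) + "\n")


# demo wiring with stand-ins: two fake frames and a constant classifier
monitor(frames=[object(), object()],
        featurize=lambda f: f,
        classify=lambda v: 0,
        log_path="sleep_log_demo.jsonl")
```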
The specific process of the fourth step is as follows: first, an artificial-neural-network structure is built; then the fused feature vectors extracted in the third step are used as the sleeping posture data set, with the labels of the six sleeping postures set to y_i = {0, 1, 2, 3, 4, 5}. Combining the feature vectors with their corresponding labels yields the sleeping posture training set, which serves as the input of the neural network. After training, classification models for the different sleeping postures are obtained with a recognition accuracy above 99%; the model is saved as a classification operator and used directly for classification and recognition of sleeping postures.
The artificial neural network comprises one input layer, one hidden layer and one output layer; the activation function of the hidden layer is the sigmoid function and that of the output layer is softmax; the hidden layer has 100 nodes.
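The forward pass of this structure can be sketched as below (illustrative Python; the random weights are placeholders that training, e.g. backpropagation, would fit to the posture data set, and the input size of 32 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)


def init_net(n_in, n_hidden=100, n_out=6):
    """One sigmoid hidden layer (100 nodes) and a softmax output layer,
    matching the structure in the text; weights are random placeholders."""
    return {"W1": rng.normal(0, 0.1, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.1, (n_hidden, n_out)),
            "b2": np.zeros(n_out)}


def forward(net, f):
    h = 1.0 / (1.0 + np.exp(-(f @ net["W1"] + net["b1"])))  # sigmoid hidden
    z = h @ net["W2"] + net["b2"]
    e = np.exp(z - z.max())
    return e / e.sum()                                      # softmax output


net = init_net(n_in=32)
probs = forward(net, rng.normal(size=32))
pred = int(np.argmax(probs))      # predicted posture label 0..5
```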
For HOG feature extraction, m = 2, n = 2 and the window sliding step is one cell; for GLCM feature extraction, k = 10 and the four directions are θ1 = 0°, θ2 = 45°, θ3 = 90°, θ4 = 135°.
The invention also protects an application system of the above monitoring method. The system comprises a large-area pressure sensor, a data acquisition device and a host computer terminal. The pressure sensor is laid directly on the mattress, its coverage exceeds the user's width and height in both the transverse and longitudinal directions, and the user lies centered on it; the pressure sensor is connected to the acquisition device by a ribbon cable, and the data acquisition device is connected to the host computer terminal through a USB interface; the host computer terminal is loaded with the trained classification model and the above image preprocessing and feature fusion methods.
Compared with the prior art, the advantages of the present invention are mainly reflected in the following:
(1) Targeting the characteristics of the six sleeping posture types, the invention performs histogram analysis on the sleeping posture images and applies a combination of image processing techniques for targeted preprocessing. This removes noise and improves image quality while effectively retaining as much useful information as possible, preparing for subsequent feature extraction and overall monitoring accuracy; the targeted preprocessing yields more complete sleeping posture images with more distinct features.
(2) The invention extracts multiple kinds of features from the sleeping posture image. The feature vector generated by fusing statistical features (HOG + GLCM) with local features (posture region area, number of leg regions, angle α, etc.) captures both the overall differences between the sleeping posture types and their differences in detail. Strengthening the distinctions between postures effectively improves the performance of the classification model and the recognition accuracy during training, and the additional extraction methods do not introduce overfitting or information interference.
(3) Combining multi-feature fusion with an artificial neural network achieves a higher recognition accuracy of 99.17% and better real-time performance; experiments show that recognizing 180 images takes only 0.13 s. The invention directly collects the pressure data between the human body and the mattress to generate sleeping posture images; the short data processing time improves the real-time performance of posture recognition and facilitates later construction of a model relating posture changes to dynamic pressure.
(4) The invention uses a large-area pressure sensor to capture the human sleeping posture, so the complete posture can be collected without restraining or disturbing the body. This improves the practicality of the method for monitoring sleeping posture in the home environment and overcomes the drawbacks of existing methods, which require strapping sensors onto the body or using a camera with its attendant privacy risks.
Description of the Drawings
Figure 1 is a composition diagram of the sleeping posture monitoring and recognition system; reference numeral 1 denotes the pressure sensor, 2 the bed frame, 3 and 5 the data acquisition devices, 4 the host computer terminal, 6 an ordinary mattress and 7 the user.
Figure 2(a) shows the original left-trunk-type sleeping posture image (left: the converted two-dimensional sleeping posture image) and its histogram (right).
Figure 2(b) shows the actual sleeping posture pressure image of the left trunk type obtained by subtracting the static pressure, and its histogram.
Figure 2(c) shows the inverted left-trunk-type sleeping posture image and its histogram.
Figure 2(d) shows the left-trunk-type sleeping posture image after local equalization, and its histogram.
Figure 3(a) shows the binarized left-trunk-type sleeping posture image.
Figure 3(b) shows the left-trunk-type sleeping posture image after segmentation.
Figure 3(c) shows the sleeping posture image after morphological filtering.
Figure 4(a) is a schematic diagram of cell and block division of the image in HOG feature extraction.
Figure 4(b) is the extracted HOG feature map.
Figure 5 is a flowchart of training the sleeping posture classification model with the artificial neural network.
Figure 6 is a flowchart of real-time sleeping posture monitoring and recognition.
Detailed Description
To clarify how the present invention realizes the above functions, it is further described below with reference to the accompanying drawings and embodiments. The embodiments of the present invention include the following examples, which are not intended to limit the protection scope of the present application.
The sleeping posture monitoring method based on feature fusion and an artificial neural network of the present invention uses a pressure sensor to collect a sleeping posture data set and trains a classification model with a neural network algorithm to realize real-time monitoring and recognition of the sleeping posture. It includes acquisition and preprocessing of sleeping posture images, feature extraction and fusion of the pressure images, and recognition and classification of the sleeping posture. The main steps are as follows:
Step 1: Acquisition of sleeping posture images
Human body pressure data for six sleeping postures (supine, prone, right trunk type, right fetal type, left trunk type, left fetal type) are collected using a sheet-type large-area pressure sensor array and, after reorganization and sorting, converted into a two-dimensional sleeping posture image. As shown in Figure 2(a), the left side is an example left-trunk-type sleeping posture image and the right side is its histogram, with pixel values on the abscissa and their counts on the ordinate. The coordinates of each pressure value in the image correspond one-to-one to the positions of the sensor units, faithfully reproducing the direction and position in which the human body lies on the sensor.
Step 2: Preprocessing of sleeping posture images
The sleeping posture image is differenced with the pressure data output by the pressure sensor under no load to eliminate the noise introduced by the sensor itself, yielding the actual sleeping posture pressure image. Histogram analysis is performed on this image, and preprocessing methods such as inversion, local equalization, sleeping posture segmentation and morphological denoising are applied to obtain a sleeping posture image with distinct features.
Taking the preprocessing of the left trunk type collected in the first step as an example: let the pressure value be g(x,y) when the sensor is loaded and f(x,y) when it is unloaded; the actual human body pressure value is then p(x,y) = g(x,y) − f(x,y). Differencing the sleeping posture image with the no-load pressure data of the sensor eliminates the noise of the sensor itself and yields the actual sleeping posture pressure image (see Figure 2(b)). The image is then inverted to bring out the gray and white details (see Figure 2(c)). From the histogram of the inverted image, the effective gray interval of the sleeping posture region is judged to be 160-240, and the pixel values within this interval are locally equalized to enhance the contrast between the posture region and the background (see Figure 2(d)), according to:

r_v = 160 + 80 · [Σ_{u=0}^{v} P_r(r_u)] / N (with N the total number of pixels within the interval)
where Pr(ru) is the number of pixels in the original image with gray value ru (r0 through r80 corresponding one-to-one to 160 through 240), and rv is the pixel value after processing. The image is then binarized by the maximum between-class-variance (Otsu) method, the posture region is segmented to remove background noise, and morphological operations bridge cracks in the image and eliminate isolated noise, yielding a posture image in which the limbs are clearly distinguished (see Figure 3(c)).
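The subtraction, inversion, and maximum between-class-variance binarization described above can be sketched as follows. This is a minimal sketch: the local equalization of the 160–240 band and the morphological clean-up are omitted for brevity, and the function names are illustrative.

```python
import numpy as np

def otsu_binarize(img):
    """Threshold a uint8 image by maximum between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    w = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = w[:t].sum(), w[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * w[:t]).sum() / w0
        mu1 = (levels[t:] * w[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return (img >= best_t).astype(np.uint8) * 255

def preprocess(loaded, unloaded):
    """Preprocessing sketch: no-load subtraction, inversion, Otsu binarization.

    `loaded` g(x,y) and `unloaded` f(x,y) are uint8-range arrays of equal shape.
    """
    p = np.clip(loaded.astype(int) - unloaded.astype(int), 0, 255)  # p = g - f
    inv = 255 - p                       # inversion brings out light details
    return otsu_binarize(inv.astype(np.uint8))
```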
Step 3: Feature extraction and fusion
Features are extracted from the posture image preprocessed in Step 2. First, the HOG feature is extracted. The posture image is 64×32 pixels; it is divided into 2×2-pixel cells, the gradient magnitude and orientation are computed, the cells are grouped into 2×2-cell blocks, and the block window is slid across the image with a stride of one cell (see Figure 4(a)). Scanning the image in this way yields the HOG feature Fhog (see Figure 4(b)).
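A simplified sketch of the HOG layout described above (2×2-pixel cells, 2×2-cell blocks, one-cell stride). A production system would typically use a library implementation such as scikit-image's `hog`, which adds orientation and spatial interpolation omitted here; the 9-bin count is an assumption, since the text does not state it.

```python
import numpy as np

def hog_feature(img, cell=2, block=2, nbins=9):
    """Simplified HOG: per-cell orientation histograms, L2-normalized per block.

    Unsigned gradients are binned into `nbins` orientation bins over [0, 180).
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
    bins = np.minimum((ang / (180.0 / nbins)).astype(int), nbins - 1)
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hists = np.zeros((ch, cw, nbins))
    for r in range(ch):
        for c in range(cw):
            m = mag[r*cell:(r+1)*cell, c*cell:(c+1)*cell].ravel()
            b = bins[r*cell:(r+1)*cell, c*cell:(c+1)*cell].ravel()
            hists[r, c] = np.bincount(b, weights=m, minlength=nbins)
    feats = []
    for r in range(ch - block + 1):       # slide the block window, stride 1 cell
        for c in range(cw - block + 1):
            v = hists[r:r+block, c:c+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))  # L2 block normalization
    return np.concatenate(feats)
```

For a 64×32 image this produces 31×15 blocks of 36 values each.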
Next, the GLCM features are extracted. The posture image preprocessed in Step 2 is requantized from 256 gray levels to 8 (0–255 mapped to 1–8). Gray-level co-occurrence matrices are computed for the directions θ1 = 0°, θ2 = 45°, θ3 = 90°, θ4 = 135° and for different distances d, and four descriptors are derived: contrast CON, correlation COOR, angular second moment ASM, and inverse difference moment HOM. For each distance, the mean M and variance S of the four directional values of each descriptor are taken as the final feature parameters. The descriptors are computed by the following formulas:
where:
i and j are pixel gray levels, p(i,j,d,θ) is the frequency with which a pair of pixels separated by distance d in direction θ takes gray values i and j respectively, and L is the number of gray levels in the image. Scanning the preprocessed image with distance d = 1 yields the GLCM features in the four directions shown in Table 1 below.
Table 1
For d = 1, let MCON1 be the mean of the contrast and SCON1 its variance, computed as:
The remaining three descriptors are computed in the same way, so for d = 1 the set of the four descriptors is:
M1 = {MCON1, SCON1, MCOOR1, SCOOR1, MASM1, SASM1, MHOM1, SHOM1}  (8)
Varying the distance d over the range [1, 10] finally yields an 80-dimensional GLCM feature vector Fglcm:
Fglcm = {M1, M2, ..., M9, M10}  (9)
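The GLCM construction and the four descriptors can be sketched as below. The patent's own equations are not reproduced in this text, so the formulas here are the conventional textbook definitions of contrast, correlation, angular second moment, and homogeneity; they may differ in detail from the originals.

```python
import numpy as np

def glcm(img, d=1, theta=0, levels=8):
    """Gray-level co-occurrence matrix for distance d and angle theta (degrees),
    on an image already quantized to `levels` gray levels (0-based integers)."""
    dr, dc = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}[theta]
    P = np.zeros((levels, levels))
    H, W = img.shape
    for r in range(H):
        for c in range(W):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < H and 0 <= c2 < W:
                P[img[r, c], img[r2, c2]] += 1
    return P / max(P.sum(), 1)            # normalize to joint frequencies

def glcm_descriptors(P):
    """Contrast, correlation, angular second moment, homogeneity of a GLCM."""
    L = P.shape[0]
    i, j = np.indices((L, L))
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    con = ((i - j) ** 2 * P).sum()
    coor = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12)
    asm = (P ** 2).sum()
    hom = (P / (1.0 + (i - j) ** 2)).sum()
    return con, coor, asm, hom
```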
Next, local features are computed from the posture image: the posture-region area A, the number of leg regions N, the pressure-center coordinate H of the head region, and the angle α between the horizontal and the line joining H to the leg-region pressure center H1.
First, the posture-region area is computed. The preprocessed image is 64×32 and the background gray value is 255, so the area A is the number of non-255 pixels in the image. Edge extraction then yields the posture contour. A coordinate system is set up with the lower-left corner of the preprocessed image as the origin: the bed-length direction is Y, with positive Y defined as pointing from the legs to the head, and the bed-width direction is X; the X axis indexes columns and the Y axis rows. The region Y∈[57,64], X∈[0,32] is designated the head region Shead, and the head-region pressure center is computed as:
where xmin and xmax are the minimum and maximum abscissae of the head contour, and ymin and ymax the minimum and maximum ordinates. Similarly, the region Y∈[0,16], X∈[0,32] is designated the leg region Sleg. The number of connected components within this region gives the leg-region count N, and the leg-region pressure center H1 is computed as:
where xmin1 and xmax1 are the minimum and maximum abscissae of the leg region, and ymin1 and ymax1 the minimum and maximum ordinates. Each of these regions is made larger than the feature region it contains, so that the corresponding body part always lies within the region regardless of sleeping posture.
From the coordinates of H and H1 obtained above, the angle α is computed as:
Extracting the features above yields the local feature set Fs = {A, N, H, α}.
This set is fused with the HOG and GLCM features into a fused feature vector F that characterizes the six static-pressure posture images:
F = {Fhog, Fglcm, Fs}  (13)
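The local features can be sketched as follows, assuming the binarized image keeps the background at 255 and is stored with row 0 at the foot of the bed (so the row index equals the Y coordinate); region centers are taken as bounding-box midpoints, matching the (xmin+xmax)/2 form of the equations above. Function names are illustrative.

```python
import math
from collections import deque

import numpy as np

def count_components(mask):
    """4-connected component count in a boolean mask (BFS labeling)."""
    seen = np.zeros_like(mask, dtype=bool)
    n = 0
    R, C = mask.shape
    for r in range(R):
        for c in range(C):
            if mask[r, c] and not seen[r, c]:
                n += 1
                q = deque([(r, c)]); seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    for y2, x2 in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= y2 < R and 0 <= x2 < C and mask[y2, x2] and not seen[y2, x2]:
                            seen[y2, x2] = True; q.append((y2, x2))
    return n

def local_features(img):
    """Local feature set Fs = {A, N, H, alpha} from a binarized 64x32 image."""
    body = img != 255                      # background gray value is 255
    A = int(body.sum())                    # posture-region area
    head = body[57:64, :]                  # head region: Y in [57,64]
    leg = body[0:16, :]                    # leg region:  Y in [0,16]
    N = count_components(leg)              # number of leg regions

    def bbox_center(region, y_offset):
        ys, xs = np.nonzero(region)
        if ys.size == 0:
            return None
        return ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0 + y_offset)

    H = bbox_center(head, 57)              # head pressure center
    H1 = bbox_center(leg, 0)               # leg pressure center
    alpha = None
    if H is not None and H1 is not None:   # angle of the H-H1 line vs horizontal
        alpha = math.degrees(math.atan2(H[1] - H1[1], H[0] - H1[0]))
    return A, N, H, alpha
```

Fusion into F is then a plain concatenation of Fhog, Fglcm, and the flattened Fs.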
The fused features capture both the overall differences between posture types and their finer distinctions. Emphasizing what distinguishes the postures effectively improves the classification model's performance during training and raises recognition accuracy.
Step 4: Training the sleeping-posture classification model
Before training, a large number of samples of the six postures is collected as in Step 1 and processed as in Steps 2 and 3, giving a dataset of fused feature vectors for the different posture types. The labels of the six postures (supine, prone, right trunk, right fetal, left trunk, left fetal) are set to yi = {0, 1, 2, 3, 4, 5} respectively. Part of the feature vectors, combined with their labels, forms the posture-sample training set; the remaining samples serve as the test set for evaluating network performance. Training yields the function parameters used to classify postures.
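The labeling and train/test split can be sketched as below. The 80/20 ratio and the shuffling seed are assumptions; the patent does not state how the data were split.

```python
import numpy as np

# Label assignment yi = {0..5} for the six postures, as described in the text.
LABELS = {"supine": 0, "prone": 1, "right_trunk": 2,
          "right_fetal": 3, "left_trunk": 4, "left_fetal": 5}

def split_dataset(features, labels, train_frac=0.8, seed=0):
    """Shuffle fused feature vectors and split them into train and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    cut = int(train_frac * len(features))
    tr, te = idx[:cut], idx[cut:]
    return features[tr], labels[tr], features[te], labels[te]
```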
A network structure based on an artificial neural network is then built, comprising one input layer, one hidden layer, and one output layer; the hidden layer uses the sigmoid activation function and the output layer uses softmax. The number of hidden-layer nodes was optimized through testing, with 100 nodes giving the best results. With the parameters set, training proceeds on the dataset obtained above.
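A minimal NumPy sketch of the described network: one sigmoid hidden layer of 100 nodes and a 6-way softmax output, trained by gradient descent on the cross-entropy loss. The input dimension, learning rate, and initialization are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

class PostureNet:
    """One sigmoid hidden layer, softmax output, cross-entropy training."""
    def __init__(self, n_in, n_hidden=100, n_out=6):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden)); self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out)); self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = 1.0 / (1.0 + np.exp(-(X @ self.W1 + self.b1)))   # sigmoid
        z = self.h @ self.W2 + self.b2
        e = np.exp(z - z.max(axis=1, keepdims=True))              # stable softmax
        return e / e.sum(axis=1, keepdims=True)

    def train_step(self, X, y, lr=0.1):
        """One gradient-descent step; returns the pre-update cross-entropy loss."""
        p = self.forward(X)
        n = len(X)
        d2 = p.copy(); d2[np.arange(n), y] -= 1.0; d2 /= n    # dL/dz for softmax+CE
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)         # back through sigmoid
        self.W2 -= lr * self.h.T @ d2; self.b2 -= lr * d2.sum(0)
        self.W1 -= lr * X.T @ d1;      self.b1 -= lr * d1.sum(0)
        return -np.log(p[np.arange(n), y] + 1e-12).mean()
```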
During training, samples from the training set are fed to the neural network, and cross-entropy serves as the loss function for iteratively optimizing the network's weights and thresholds, while test-set recognition accuracy gauges the classification model's performance. Repeated training and optimization finally yield a well-performing posture classification model with a recognition accuracy of 99.17%, needing only 0.13 s to recognize 180 images. The model is saved as a classification operator and used directly for posture classification and recognition (see Figure 5).
Step 5: Real-time sleeping-posture monitoring and recognition
The pressure data collected in real time are displayed on the system's front-end interface. The first three steps are repeated to obtain the fused feature vector of the current posture image, and the classification model trained in Step 4 continuously identifies the posture type from that vector. Recognition results are written to a log, enabling long-term monitoring of the user's sleeping posture (see Figure 6).
The system used by this method (see Figure 1) comprises a large-area pressure sensor 1, data acquisition devices 3 and 5, and a host computer terminal 4. In use, pressure sensor 1 is laid directly on the mattress 6 and the user 7 lies on it, centered, with the sensor's coverage exceeding the user's lateral and longitudinal extent. The large-area pressure sensor used in this application may be a single 64×32 flexible pressure sensor array; it is read out by the data acquisition devices, which transmit the data to the host computer terminal. The invention is intended mainly for adults, especially the elderly; the experimenters were about 1.6–1.8 m tall. When the user lies down, the head falls within the region Y∈[57,64], X∈[0,32] of the sensor array, and the legs within Y∈[0,16], X∈[0,32].
To capture the complete posture of the body at a high acquisition rate with low noise, the pressure sensor is assembled from two 32×32 flexible pressure sensors, read synchronously by two data acquisition devices. The pressure data are transmitted to the host computer terminal for processing and recognition; the terminal is loaded with the trained classification operator and the image preprocessing and feature fusion methods described above.
The basic principles of the feature-fusion and artificial-neural-network sleeping-posture monitoring method of the present invention have been described above with reference to specific embodiments. The sleeping-posture recognition scheme of the invention exhibits good accuracy and real-time performance and achieves effective sleeping-posture monitoring.
Matters not described herein follow the prior art.
The implementation of the present invention has been described in detail above; this is only a preferred embodiment and is not to be taken as limiting the protection scope of the claims of this application. All equivalent changes and improvements made within the scope of the claims of this application shall fall within its protection scope.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010126488.8A CN111353425A (en) | 2020-02-28 | 2020-02-28 | Sleeping posture monitoring method based on feature fusion and artificial neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111353425A true CN111353425A (en) | 2020-06-30 |
Family
ID=71194169
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287783A (en) * | 2020-10-19 | 2021-01-29 | 燕山大学 | Intelligent ward nursing identification method and system based on vision and pressure sensing |
CN113269096A (en) * | 2021-05-18 | 2021-08-17 | 浙江理工大学 | Head pressure data deep learning processing method for sleeping posture recognition |
CN113261951A (en) * | 2021-04-29 | 2021-08-17 | 北京邮电大学 | Sleeping posture identification method and device based on piezoelectric ceramic sensor |
CN113273998A (en) * | 2021-07-08 | 2021-08-20 | 南京大学 | Human body sleep information acquisition method and device based on RFID label matrix |
CN113688720A (en) * | 2021-08-23 | 2021-11-23 | 安徽农业大学 | A method for predicting sleeping position based on neural network recognition |
CN113688718A (en) * | 2021-08-23 | 2021-11-23 | 安徽农业大学 | An interference-free adaptive sleeping posture recognition method based on the finite element analysis of pillows |
CN113867167A (en) * | 2021-10-28 | 2021-12-31 | 中央司法警官学院 | Household environment intelligent monitoring method and system based on artificial neural network |
CN113921108A (en) * | 2021-10-08 | 2022-01-11 | 重庆邮电大学 | An automatic segmentation method for elastic band resistance training force data |
CN115137315A (en) * | 2022-09-06 | 2022-10-04 | 深圳市心流科技有限公司 | Sleep environment scoring method, device, terminal and storage medium |
CN116563887A (en) * | 2023-04-21 | 2023-08-08 | 华北理工大学 | A Sleeping Position Monitoring Method Based on Lightweight Convolutional Neural Network |
WO2023178462A1 (en) * | 2022-03-21 | 2023-09-28 | Super Rich Moulders Limited | Smart mattress |
CN117574056A (en) * | 2023-11-21 | 2024-02-20 | 中南大学 | Wide-area electromagnetic data denoising method and system based on hybrid neural network model |
WO2024125566A1 (en) * | 2022-12-14 | 2024-06-20 | 深圳市三分之一睡眠科技有限公司 | Sleeping posture recognition method and system based on deep neural network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102789672A (en) * | 2012-07-03 | 2012-11-21 | 北京大学深圳研究生院 | Intelligent identification method and device for baby sleep position |
CN106097360A (en) * | 2016-06-17 | 2016-11-09 | 中南大学 | A kind of strip steel surface defect identification method and device |
CN107330352A (en) * | 2016-08-18 | 2017-11-07 | 河北工业大学 | Sleeping position pressure image-recognizing method based on HOG features and machine learning |
Non-Patent Citations (2)
Title |
---|
JASON J. LIU ET AL.: "Sleep posture analysis using a dense pressure sensitive bedsheet", Pervasive and Mobile Computing *
YE YINQIU: "Research on a human sleeping-posture recognition system based on computer vision", China Master's Theses Full-text Database, Information Science and Technology *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111353425A (en) | Sleeping posture monitoring method based on feature fusion and artificial neural network | |
CN101807245B (en) | Artificial neural network-based multi-source gait feature extraction and identification method | |
CN107862249B (en) | A method and device for identifying bifurcated palm prints | |
CN112766165B (en) | Falling pre-judging method based on deep neural network and panoramic segmentation | |
CN113269096B (en) | Deep learning processing method of head pressure data for sleeping posture recognition | |
CN107330352A (en) | Sleeping position pressure image-recognizing method based on HOG features and machine learning | |
CN102117404A (en) | Reflective finger vein feature acquisition device and personal identity authentication method thereof | |
CN111126240A (en) | A three-channel feature fusion face recognition method | |
CN111839519A (en) | Non-contact respiratory rate monitoring method and system | |
CN108764123A (en) | Intelligent recognition human body sleep posture method based on neural network algorithm | |
CN116563887B (en) | A sleeping posture monitoring method based on lightweight convolutional neural network | |
CN111488850A (en) | Neural network-based old people falling detection method | |
CN102289670B (en) | Image characteristic extraction method with illumination robustness | |
CN111967363A (en) | Emotion prediction method based on micro-expression recognition and eye movement tracking | |
CN103020602A (en) | Face recognition method based on neural network | |
CN108875645A (en) | A kind of face identification method under the conditions of underground coal mine complex illumination | |
CN114628020A (en) | Model construction, detection method, device and application of remote plethysmographic signal detection | |
CN109522865A (en) | A kind of characteristic weighing fusion face identification method based on deep neural network | |
CN115985464B (en) | A method and system for classifying muscle fatigue based on multimodal data fusion | |
CN111666813B (en) | Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information | |
CN113486712A (en) | Multi-face recognition method, system and medium based on deep learning | |
JP2023143768A (en) | Person identification method of mapping skeleton information to image | |
CN106407975B (en) | Multi-scale Hierarchical Target Detection Method Based on Spatial-Spectral Structure Constraints | |
CN115581435A (en) | Sleep monitoring method and device based on multiple sensors | |
CN115187855A (en) | Seabed substrate sonar image classification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20200630 |