CN111353425A - Sleeping posture monitoring method based on feature fusion and artificial neural network - Google Patents
- Publication number
- CN111353425A (application CN202010126488.8A)
- Authority
- CN
- China
- Prior art keywords
- sleeping posture
- image
- sleeping
- posture
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 35
- 238000012544 monitoring process Methods 0.000 title claims abstract description 34
- 230000004927 fusion Effects 0.000 title claims abstract description 28
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 25
- 238000007781 pre-processing Methods 0.000 claims abstract description 15
- 238000000605 extraction Methods 0.000 claims abstract description 13
- 238000004458 analytical method Methods 0.000 claims abstract description 5
- 230000036544 posture Effects 0.000 claims description 160
- 239000013598 vector Substances 0.000 claims description 23
- 238000012549 training Methods 0.000 claims description 21
- 238000013145 classification model Methods 0.000 claims description 20
- 238000004364 calculation method Methods 0.000 claims description 12
- 210000003754 fetus Anatomy 0.000 claims description 8
- 230000003068 static effect Effects 0.000 claims description 7
- 239000011159 matrix material Substances 0.000 claims description 4
- 230000000877 morphologic effect Effects 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 230000008569 process Effects 0.000 claims description 3
- 230000011218 segmentation Effects 0.000 claims description 3
- 238000002156 mixing Methods 0.000 claims description 2
- 239000004576 sand Substances 0.000 claims description 2
- 238000012545 processing Methods 0.000 abstract description 5
- 238000005516 engineering process Methods 0.000 abstract description 3
- 230000009286 beneficial effect Effects 0.000 abstract description 2
- 238000002474 experimental method Methods 0.000 abstract description 2
- 238000002360 preparation method Methods 0.000 abstract description 2
- 230000009466 transformation Effects 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 5
- 238000012360 testing method Methods 0.000 description 4
- 241001270131 Agaricus moelleri Species 0.000 description 2
- 206010011985 Decubitus ulcer Diseases 0.000 description 2
- 208000004210 Pressure Ulcer Diseases 0.000 description 2
- 238000013399 early diagnosis Methods 0.000 description 2
- 208000023504 respiratory system disease Diseases 0.000 description 2
- 241000234314 Zingiber Species 0.000 description 1
- 235000006886 Zingiber officinale Nutrition 0.000 description 1
- 230000000747 cardiac effect Effects 0.000 description 1
- 230000032823 cell division Effects 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 201000010099 disease Diseases 0.000 description 1
- 230000006806 disease prevention Effects 0.000 description 1
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 235000008397 ginger Nutrition 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000007500 overflow downdraw method Methods 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000005215 recombination Methods 0.000 description 1
- 230000006798 recombination Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012163 sequencing technique Methods 0.000 description 1
- 230000035939 shock Effects 0.000 description 1
- 230000003860 sleep quality Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1116—Determining posture transitions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4809—Sleep detection, i.e. determining whether a subject is asleep or not
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4812—Detecting sleep stages or cycles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4815—Sleep quality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Surgery (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Physiology (AREA)
- Dentistry (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- Anesthesiology (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a sleeping posture monitoring method based on feature fusion and an artificial neural network. Targeting the characteristics of six sleeping posture types, the method performs histogram analysis on sleeping posture images and applies targeted preprocessing that combines several image processing techniques, removing noise and improving image quality while effectively retaining as much useful information as possible; this targeted preprocessing yields more complete sleeping posture images with more distinct features and lays the groundwork for subsequent feature extraction and overall monitoring accuracy. Combining multi-feature fusion with an artificial neural network achieves a high recognition accuracy of 99.17% together with good real-time performance: experiments show that 180 images can be recognized in only 0.13 s. The invention directly collects the pressure data between the human body and the mattress to generate sleeping posture images, keeps data processing time short, improves the real-time performance of sleeping posture recognition, and facilitates later construction of a model relating sleeping posture transitions to dynamic pressure.
Description
Technical Field
The invention relates to the field of physiological information monitoring, in particular to a non-binding, non-interfering sleeping posture monitoring method, and more particularly to a sleeping posture monitoring method based on feature information fusion and an artificial neural network.
Background
Sleep is an important part of our lives, and sleep state is directly related to people's psychological and physiological health. In sleep state monitoring, the sleeping posture is one of the keys to objectively evaluating sleep quality. Effective monitoring of sleeping posture in a household environment enables early diagnosis and early prevention of diseases such as respiratory diseases and pressure ulcers.
At present, non-binding, non-interfering sleeping posture monitoring has gradually become a main research direction. Ye Yinqiu et al. (Ye Yinqiu, Jiang Taiping, et al. Human body sleeping posture recognition based on a level set method and a neural network [J]. Industrial Control Computer (05):91-93.) acquired static sleeping postures and dynamic videos of the human body with a camera and recognized four sleeping postures using an algorithm based on a BP (back propagation) neural network; the average recognition accuracy was 73%, which is low, and the camera easily raises privacy concerns. Zhang Yichao et al. (Zhang Yichao, Yuan Zheng, Sun Xiaoyan. Sleeping posture recognition based on ballistocardiogram signals [J]. Computer Engineering and Applications (1): 135-.) recognized sleeping postures from ballistocardiogram signals.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of existing sleeping posture monitoring methods by providing a non-interfering, non-binding sleeping posture monitoring method based on sleeping posture feature fusion and an artificial neural network. The method improves the accuracy of non-binding sleeping posture recognition, provides technical support for sleeping posture recognition and monitoring in a home environment, and enables early diagnosis and early prevention of respiratory diseases, pressure ulcers and other diseases.
The technical scheme adopted by the invention for solving the problems is as follows:
a sleeping posture monitoring method based on feature fusion and an artificial neural network comprises the following steps:
The first step: acquisition of the sleeping posture image
A bed-sheet-sized large-area pressure sensor array is used to collect human body pressure data for six sleeping postures (supine, prone, right trunk, right fetal, left trunk and left fetal); after reordering and rearrangement, the data are converted into a two-dimensional sleeping posture image in which the coordinates of the pressure values correspond one-to-one to the positions of the sensor units, faithfully restoring the lying direction and position of the human body on the sensor;
The second step: preprocessing of the sleeping posture image
The pressure data output by the sensor under no load are subtracted from the sleeping posture image, eliminating the noise introduced by the sensor and yielding the actual sleeping posture pressure image; histogram analysis is performed on the actual sleeping posture pressure image, and inversion, local equalization, sleeping posture segmentation and morphological denoising yield a preprocessed sleeping posture image with clearly separated limbs and more distinct features;
The third step: feature extraction and fusion of the image
Feature extraction is performed on the sleeping posture image preprocessed in the second step. First, the HOG features of the image are extracted: the image is divided into cells of m × m pixels, the gradient magnitude and direction of the image are calculated, the cells are grouped into blocks of n × n cells, and the image is scanned with a sliding block window to obtain the HOG feature F_hog;
GLCM features are then extracted from the sleeping posture image preprocessed in the second step: the image is converted from 256 gray levels to 8 gray levels, gray-level co-occurrence matrices are calculated for different directions θ and different distances d, and four feature values are obtained: contrast (CON), correlation (COOR), angular second moment (ASM) and inverse difference moment (HOM); the mean M and variance S over the different angles at the same distance are taken as the final GLCM features, the feature calculation formulas being, respectively:

CON = Σ_i Σ_j (i − j)^2 · p(i, j, d, θ) (1)

COOR = [Σ_i Σ_j (i · j) · p(i, j, d, θ) − μ_x μ_y] / (σ_x σ_y) (2)

ASM = Σ_i Σ_j p(i, j, d, θ)^2 (3)

HOM = Σ_i Σ_j p(i, j, d, θ) / (1 + (i − j)^2) (4)

M = (1/4) Σ_θ X(d, θ) (5)

S = (1/4) Σ_θ [X(d, θ) − M]^2 (6)

wherein: i and j are pixel gray levels, and the sums over i and j run from 1 to L; p(i, j, d, θ) is the frequency with which a pair of pixels separated by distance d in direction θ have gray values i and j, respectively; L is the number of gray levels of the image; μ_x, μ_y and σ_x, σ_y are the means and standard deviations of the row and column marginal distributions of p; X stands for any of the four feature values; four directions are set in total, and d takes k values;

When d = 1, M_CON1 is the mean of the contrast and S_CON1 is the variance of the contrast; the specific formulas are:

M_CON1 = (1/4) Σ_{u=1}^{4} CON(1, θ_u), S_CON1 = (1/4) Σ_{u=1}^{4} [CON(1, θ_u) − M_CON1]^2 (7)

The other three features are obtained by the same formulas; finally, for d = 1, the vector set of the four features is:
M_1 = {M_CON1, S_CON1, M_COOR1, S_COOR1, M_ASM1, S_ASM1, M_HOM1, S_HOM1} (8)
The value of the distance d is varied to finally obtain the GLCM feature vector F_glcm, whose expression is:

F_glcm = {M_1, M_2, ..., M_k} (9)
Local features of the sleeping posture image preprocessed in the second step are then extracted: the sleeping posture area A, the number of leg regions N, and the included angle α between the horizontal direction and the line connecting the head region center coordinate point H and the leg region center coordinate point H_1;
The sleeping posture area A is the number of pixels in the sleeping posture region: the preprocessed sleeping posture image is 64 × 32 pixels and the gray value of the background region is 255, so A can be calculated as the number of pixels in the image that are not 255;
Edge extraction is then performed on the image to obtain the sleeping posture contour. A coordinate system is established with the lower-left corner of the preprocessed sleeping posture image as the origin of coordinates, the length direction of the bed as the Y direction (the positive Y direction defined as pointing from the legs to the head) and the width direction of the bed as the X direction, the X axis indexing columns and the Y axis indexing rows. The region Y ∈ [57, 64], X ∈ [0, 32] is taken as the head region S_head, and the center coordinate point of the head region is calculated as:

H = ((x_min + x_max)/2, (y_min + y_max)/2) (10)

wherein x_min, x_max are the minimum and maximum of the head contour abscissa, and y_min, y_max are the minimum and maximum of the head contour ordinate;
The region Y ∈ [0, 16], X ∈ [0, 32] is taken as the leg region S_leg, the number of connected domains in this region is taken as the number of leg regions N, and the leg region pressure center point coordinate H_1 is calculated as:

H_1 = ((x_min1 + x_max1)/2, (y_min1 + y_max1)/2) (11)

wherein x_min1, x_max1 are the minimum and maximum of the leg region abscissa, and y_min1, y_max1 are the minimum and maximum of the leg region ordinate;
From H and H_1 obtained above, the angle α is calculated as:

α = arctan[(y_H − y_H1) / (x_H − x_H1)] (12)

wherein (x_H, y_H) and (x_H1, y_H1) are the coordinates of H and H_1, respectively;
The above features are extracted to generate the local feature set F_s, and the local feature set, the HOG features and the GLCM features are fused into a fused feature vector F reflecting the six static pressure sleeping posture images, with the expression:
F = {F_hog, F_glcm, F_s} (13)
The fourth step: training of the sleeping posture classification model
The fused feature vectors of the six static pressure sleeping posture images are trained with an artificial-neural-network-based algorithm to obtain classification models for the different sleeping postures;
The fifth step: real-time sleeping posture monitoring and recognition
The pressure data acquired in real time are displayed on the front-end interface of the system in real time; the previous three steps are repeated to obtain the fused feature vector of the current sleeping posture image; the classification model obtained in the fourth step continuously identifies the sleeping posture type from this fused feature vector; and a log of the recognition results is generated, realizing long-term monitoring of the human sleeping posture.
The specific process of the fourth step is as follows: first, a network structure based on an artificial neural network is constructed; then the fused feature vectors extracted in the third step are used as the sleeping posture data set, labels y_i are set for the six sleeping postures, and the feature vectors are combined with their corresponding labels to form a sleeping posture sample training set used as the input of the neural network; after training, classification models for the different sleeping postures are obtained with a recognition accuracy above 99%, and they are saved as classification operators used directly for sleeping posture classification and recognition.
The network structure of the artificial neural network comprises an input layer, a hidden layer and an output layer; the activation function of the hidden layer is the sigmoid function and the output layer function is softmax; the number of hidden layer nodes is 100.
When the HOG features are extracted, m = 2, n = 2 and the window sliding step is 1 cell; when the GLCM features are extracted, k = 10 and the four directions are θ_1 = 0°, θ_2 = 45°, θ_3 = 90°, θ_4 = 135°.
The invention also protects a system applying the monitoring method, comprising: a large-area pressure sensor, data acquisition equipment and an upper computer terminal; the pressure sensor is laid directly on the mattress, its coverage exceeds the transverse and longitudinal width and height of the user, and the user lies centered on it; the pressure sensor is connected to the acquisition equipment through a flat cable, and the data acquisition equipment is connected to the upper computer terminal through a USB interface; the trained classification model, image preprocessing and feature fusion modules are loaded on the upper computer terminal.
Compared with the prior art, the invention has the advantages that:
(1) Targeting the characteristics of the six sleeping posture types, the method performs histogram analysis on the sleeping posture image and applies targeted preprocessing that combines several image processing techniques, removing noise and improving image quality while effectively retaining as much useful information as possible; this prepares for subsequent feature extraction and overall monitoring accuracy, and the targeted preprocessing yields more complete sleeping posture images with more distinct features.
(2) The invention extracts multiple features from the sleeping posture image and fuses statistical features (HOG + GLCM) with local features (sleeping posture area, number of leg regions, angle α, etc.) into one feature vector. The fused vector captures both the overall differences between sleeping posture types and their differences in detail, strengthening the distinctions between different sleeping postures; it effectively improves classification model performance during training and raises recognition accuracy, without causing algorithm overfitting or information interference as the number of extraction methods increases.
(3) Combining multi-feature fusion with an artificial neural network achieves a high recognition accuracy of 99.17% and good real-time performance: experiments show that recognizing 180 images takes only 0.13 s. The invention directly collects the pressure data between the human body and the mattress to generate sleeping posture images, keeps data processing time short, improves the real-time performance of sleeping posture recognition, and facilitates later construction of a model relating sleeping posture transitions to dynamic pressure.
(4) The invention uses a large-area pressure sensor to capture the human sleeping posture, collecting the complete posture without restraining or disturbing the user. This improves the practicality of sleeping posture monitoring in a home environment and overcomes the problems of existing methods, which either require sensors to be strapped to the body or raise privacy concerns when a camera is used.
Drawings
Fig. 1 is a block diagram of a sleep posture monitoring and recognition system; the reference numeral 1 is a pressure sensor, 2 is a bed body, 3 and 5 are data acquisition equipment, 4 is an upper computer terminal, 6 is a common mattress, and 7 is a user.
Fig. 2(a) is a left trunk-type original sleeping posture diagram and a histogram thereof, and the left side is the original sleeping posture diagram, i.e. a converted two-dimensional sleeping posture image; the right side is its histogram.
Fig. 2(b) is an actual sleeping posture pressure diagram obtained by subtracting static pressure from the left trunk type and a histogram thereof.
Fig. 2(c) is the left trunk type inverted sleeping posture image and its histogram.
Fig. 2(d) is a left trunk type post-local equalization sleeping posture image and a histogram thereof.
FIG. 3(a) is a left trunk type binarized sleeping posture image,
FIG. 3(b) is a sleeping posture image of the left trunk type after being divided,
fig. 3(c) is a morphologically filtered sleep posture image.
FIG. 4(a) is a schematic diagram of cell division and block division for an image in HOG feature extraction,
fig. 4(b) is an extracted HOG feature map.
FIG. 5 is a flow chart of training the sleeping posture classification model with the artificial neural network.
Fig. 6 is a flow chart of real-time sleeping posture monitoring and recognition.
Detailed Description
To achieve the above functions, the invention is further described below with reference to the drawings and examples. The embodiments of the invention include the following examples, which should not be construed as limiting the scope of the invention.
The invention discloses a sleeping posture monitoring method based on feature fusion and an artificial neural network: a pressure sensor collects a sleeping posture data set, a classification model is trained with a neural network algorithm, and real-time sleeping posture monitoring and recognition are realized. The main steps are acquisition and preprocessing of the sleeping posture image, feature extraction and fusion of the pressure image, and sleeping posture recognition and classification:
The first step: acquisition of the sleeping posture image
A bed-sheet-sized large-area pressure sensor array collects human body pressure data for six sleeping postures (supine, prone, right trunk, right fetal, left trunk and left fetal); after reordering and rearrangement, the data are converted into two-dimensional sleeping posture images. As shown in Fig. 2(a), the left side is an example left-trunk sleeping posture image and the right side is its histogram, with pixel value on the horizontal axis and pixel count on the vertical axis. The coordinates of the pressure values in the image correspond one-to-one to the positions of the sensor units, faithfully restoring the lying direction and position of the human body on the sensor.
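A minimal sketch of this conversion in Python (the 64 × 32 array size follows the text, while the `reorder` index map is a hypothetical stand-in for the acquisition hardware's wiring order):

```python
import numpy as np

ROWS, COLS = 64, 32  # sensor array size used in the text

def frame_to_image(raw: np.ndarray, reorder: np.ndarray) -> np.ndarray:
    """Reorder one flat frame of pressure readings and reshape it to 64 x 32,
    so that pixel coordinates match the physical sensor-unit positions."""
    assert raw.size == ROWS * COLS
    return raw[reorder].reshape(ROWS, COLS)
```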
The second step: preprocessing of the sleeping posture image
The pressure data output by the pressure sensor under no load are subtracted from the sleeping posture image, eliminating the noise introduced by the pressure sensor and yielding the actual sleeping posture pressure image. Histogram analysis is performed on the actual sleeping posture pressure image, and preprocessing methods such as inversion, local equalization, sleeping posture segmentation and morphological denoising produce a sleeping posture image with distinct features.
Taking the preprocessing of the left-trunk posture collected in the first step as an example: let the pressure value be g(x, y) when the sensor is loaded and f(x, y) when it is unloaded; the actual human body pressure value is then p(x, y) = g(x, y) − f(x, y). Differencing the sleeping posture image against the pressure sensor's no-load data eliminates the noise introduced by the sensor and yields the actual sleeping posture pressure image (see Fig. 2(b)). The image is then inverted so that the gray and white details become visible (see Fig. 2(c)). From the histogram of the inverted image, the effective gray-level interval of the sleeping posture region is determined to be 160-240, and the pixel values within this interval are locally equalized to enhance the contrast between the sleeping posture region and the background (see Fig. 2(d)); the formula is:

r_v = 160 + 80 · Σ_{w=0}^{u} P_r(r_w)

wherein P_r(r_u) is the probability that a gray value of the original image equals r_u (r_0 to r_80 correspond one-to-one to gray values 160 to 240), and r_v is the processed pixel value. Binarization is then performed with the maximum between-class variance (Otsu) method, the sleeping posture is segmented and background noise removed, and morphological operations close cracks in the image and eliminate isolated noise points, yielding a sleeping posture image with clearly separated limbs and more distinct posture features (see Fig. 3(c)).
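A sketch of this preprocessing chain (assuming `g` is a loaded frame and `f` the no-load frame, both 64 × 32 uint8 images; the effective interval 160-240 follows the text, while the OpenCV calls and the 3 × 3 structuring element are illustrative choices, not the patent's exact operations):

```python
import numpy as np
import cv2

def preprocess(g: np.ndarray, f: np.ndarray) -> np.ndarray:
    p = cv2.subtract(g, f)        # p(x,y) = g(x,y) - f(x,y): remove no-load sensor noise
    inv = 255 - p                 # inversion brings out the gray/white detail
    # local equalization restricted to the effective interval [160, 240]
    mask = (inv >= 160) & (inv <= 240)
    hist, _ = np.histogram(inv[mask], bins=81, range=(160, 241))
    cdf = np.cumsum(hist) / max(hist.sum(), 1)
    eq = inv.copy()
    eq[mask] = (160 + 80 * cdf[inv[mask] - 160]).astype(np.uint8)
    # Otsu binarization, then morphological close/open to seal cracks
    # and remove isolated noise points; the background stays at 255
    _, bw = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
    return bw
```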
The third step: feature extraction and fusion of the image
Feature extraction is performed on the sleeping posture image preprocessed in the second step. First the HOG features of the image are extracted. The sleeping posture image is 64 × 32 pixels; it is divided into cells of 2 × 2 pixels and the gradient magnitude and direction of the image are calculated, the cells are then grouped into blocks of 2 × 2 cells, and the image is scanned with a sliding block window with a step of 1 cell (see Fig. 4(a)) to obtain the HOG feature F_hog (see Fig. 4(b)).
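With scikit-image, this HOG step could be sketched as follows (the 2 × 2 pixel cells, 2 × 2 cell blocks and 1-cell block stride match the text, the stride being scikit-image's fixed behavior; the 9 orientation bins are an assumption, since the text does not state a bin count):

```python
from skimage.feature import hog

def extract_hog(img):
    # img: the 64 x 32 preprocessed sleeping posture image
    return hog(img,
               orientations=9,          # bin count assumed, not given in the text
               pixels_per_cell=(2, 2),  # m = 2
               cells_per_block=(2, 2),  # n = 2
               feature_vector=True)
```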
The GLCM features of the image are then extracted. The sleeping posture image preprocessed in the second step is converted from 256 gray levels to 8 gray levels (0-255 corresponding to 1-8), and gray-level co-occurrence matrices are calculated for the four directions θ_1 = 0°, θ_2 = 45°, θ_3 = 90°, θ_4 = 135° and different distances d. Four feature values are obtained: contrast (CON), correlation (COOR), angular second moment (ASM) and inverse difference moment (HOM); the mean M and variance S of the four feature values over the different angles at the same distance are taken as the final feature parameters. The feature calculation formulas are, respectively:

CON = Σ_i Σ_j (i − j)^2 · p(i, j, d, θ) (1)

COOR = [Σ_i Σ_j (i · j) · p(i, j, d, θ) − μ_x μ_y] / (σ_x σ_y) (2)

ASM = Σ_i Σ_j p(i, j, d, θ)^2 (3)

HOM = Σ_i Σ_j p(i, j, d, θ) / (1 + (i − j)^2) (4)

M = (1/4) Σ_θ X(d, θ) (5)

S = (1/4) Σ_θ [X(d, θ) − M]^2 (6)

wherein: i and j are pixel gray levels, and the sums over i and j run from 1 to L; p(i, j, d, θ) is the frequency with which a pair of pixels separated by distance d in direction θ have gray values i and j, respectively; L is the number of gray levels of the image; μ_x, μ_y and σ_x, σ_y are the means and standard deviations of the row and column marginal distributions of p; X stands for any of the four feature values. When the preprocessed image is scanned with distance d = 1, the GLCM features in the four directions are as shown in Table 1 below.
TABLE 1
Angle | Contrast | Correlation | Angular second moment | Inverse difference moment
---|---|---|---|---
θ=0° | 1.2087 | 0.8910 | 0.5976 | 0.9076
θ=45° | 1.4424 | 0.8707 | 0.5881 | 0.9049
θ=90° | 0.5392 | 0.9505 | 0.6218 | 0.9346
θ=135° | 1.4619 | 0.8690 | 0.5903 | 0.9023
When d = 1, M_CON1 is the mean of the contrast and S_CON1 is the variance of the contrast, expressed as:

M_CON1 = (1/4) Σ_{u=1}^{4} CON(1, θ_u), S_CON1 = (1/4) Σ_{u=1}^{4} [CON(1, θ_u) − M_CON1]^2 (7)

The other three features are obtained by the same formulas; finally, for d = 1, the set of the four features is:
M_1 = {M_CON1, S_CON1, M_COOR1, S_COOR1, M_ASM1, S_ASM1, M_HOM1, S_HOM1} (8)
The value of the distance d is varied over the range [1, 10], finally yielding the 80-dimensional GLCM feature vector F_glcm, whose expression is:

F_glcm = {M_1, M_2, ..., M_9, M_10} (9)
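The GLCM feature vector could be sketched with scikit-image as follows (the 8 gray levels, four directions, d ∈ [1, 10] and per-distance mean/variance over directions follow the text; scikit-image's 'contrast', 'correlation', 'ASM' and 'homogeneity' properties are used as stand-ins for CON, COOR, ASM and HOM, and the `graycomatrix` naming requires scikit-image ≥ 0.19):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def extract_glcm(img: np.ndarray) -> np.ndarray:
    img8 = (img // 32).astype(np.uint8)                # 256 gray levels -> 8 (0..7)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    distances = list(range(1, 11))                     # d = 1..10 (k = 10)
    glcm = graycomatrix(img8, distances, angles, levels=8, normed=True)
    feats = []
    for prop in ("contrast", "correlation", "ASM", "homogeneity"):
        vals = graycoprops(glcm, prop)                 # shape (10 distances, 4 angles)
        feats.append(vals.mean(axis=1))                # mean M over the four directions
        feats.append(vals.var(axis=1))                 # variance S over the four directions
    # rows = distances, columns = the 8 statistics of M_d, flattened to 80 dimensions
    return np.stack(feats, axis=1).reshape(-1)
```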
the following is the calculation of local features in the sleeping posture image: sleeping posture area A, number of leg areas N, head area pressure center coordinate point H and leg area pressure center point H1The line of (a) makes an angle α with the horizontal.
Firstly, calculating the area of a sleeping posture area, wherein the size of a preprocessed image is 64 × 32, the gray value of a background area is 255, then calculating the area A of the sleeping posture area according to the number of pixels which are not 255 in the image, then carrying out edge extraction on the image, obtaining the sleeping posture outline, taking the lower left corner of the preprocessed sleeping posture image as a coordinate origin, the length direction of a bed as a Y direction, defining the positive Y direction as the direction from legs to heads, establishing a coordinate system by taking the width direction of the bed as the X direction, taking the X axis as the column number and the Y axis as the row number, and carrying out Y ∈ [5764] on the basis of Y ∈ (the X axis is the column number, the],X∈[032]Is divided into a head region SheadThen, the head region pressure center coordinate point calculation formula is as follows:
wherein xmin,xmaxRespectively minimum and maximum of the abscissa of the head contour, ymin,ymaxThen the minimum and maximum of the head profile ordinate, similarly, Y ∈ [016],X∈[032]Is divided into leg regions SlegTaking the number of connected domains in the region as the number N of the leg regions, the coordinate H of the pressure center point of the leg region1The calculation formula of (2) is as follows:
wherein xmin1,xmax1Respectively minimum and maximum of the leg region abscissa, ymin1,ymax1The minimum and maximum values of the ordinate of the leg region; when the region is divided, the size of the region is larger than that of the characteristic region, namely, the corresponding part of the human body is always positioned in the region under different sleeping postures;
h and H obtained according to the above1The formula of the angle α can be obtained as follows:
extracting the above features to generate a local feature set Fs={A,N,H,α},
The fusion feature vector F is fused with the HOG feature and the GLCM feature to reflect six static pressure sleeping posture images, and the expression is as follows:
F = {F_hog, F_glcm, F_s} (13)
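A sketch of the local features and the final fusion (assuming `bw` is the 64 × 32 preprocessed image with background value 255, stored so that row index 0 lies at the foot of the bed; the head rows 57-64 and leg rows 0-16 follow the text, translated to 0-based slices):

```python
import numpy as np
from skimage.measure import label

def extract_local(bw: np.ndarray) -> np.ndarray:
    fg = bw != 255                                     # sleeping posture region
    A = int(fg.sum())                                  # area: pixels that are not 255

    head = fg[56:64, :]                                # head region S_head (rows 57-64)
    hy, hx = np.nonzero(head)
    H = ((hx.min() + hx.max()) / 2, 56 + (hy.min() + hy.max()) / 2)

    legs = fg[0:16, :]                                 # leg region S_leg (rows 0-16)
    N = int(label(legs).max())                         # connected domains -> leg count
    ly, lx = np.nonzero(legs)
    H1 = ((lx.min() + lx.max()) / 2, (ly.min() + ly.max()) / 2)

    alpha = np.arctan2(H[1] - H1[1], H[0] - H1[0])     # angle of line H-H1 to the horizontal
    return np.array([A, N, H[0], H[1], alpha])

def fuse(f_hog, f_glcm, f_s):
    return np.concatenate([f_hog, f_glcm, f_s])        # F = {F_hog, F_glcm, F_s}
```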
the fused features not only contain the overall difference of different sleeping posture types, but also reflect the difference in details. The characteristics of different sleeping postures are strengthened, the performance of the classification model can be effectively improved during training, and the identification accuracy is improved.
The fourth step: training of sleep posture classification model
Before training, firstly, collecting a large number of samples of six sleeping postures in a first step, and carrying out second-step and third-step processing on the samples to obtain fusion characteristic vector data sets of different sleeping posture types; then the labels of six sleeping positions (supine, prone, right trunk type, right fetus type, left trunk type and left fetus type) are respectively set as y i0,1,2,3,4, 5. Combining a part of feature vectors with corresponding labels to generate a sleeping posture sample training set, and using the rest samples asThe test set tests the network performance. And obtaining function parameters for classifying the sleeping postures through training.
A network structure based on an artificial neural network is then constructed, comprising an input layer, a hidden layer and an output layer; the activation function of the hidden layer is the sigmoid function and the output layer uses softmax. The number of hidden layer nodes was optimized through practical testing, with the best results at 100 nodes; after the parameters are set, training starts with the previously acquired data set.
During training, the samples of the training set are used as the input of the neural network, and cross entropy is used as the loss function to continuously optimize the weights and thresholds of the network. The performance of the classification model is judged by the recognition accuracy on the test set, and training and optimization continue until a well-performing sleeping posture classification model is obtained: the recognition accuracy reaches 99.17%, and recognizing 180 images takes only 0.13 s. The model is saved as a classification operator used directly for sleeping posture classification and recognition (see Fig. 5).
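The training step could be sketched with scikit-learn's MLPClassifier, which matches the described network where the library allows (one hidden layer of 100 nodes, logistic/sigmoid activation, built-in cross-entropy loss and softmax output; the 80/20 train/test split is an assumption, since the text does not give a ratio):

```python
import joblib
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_classifier(X, y):
    # X: matrix of fused feature vectors, y: labels 0..5 for the six postures
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(100,),     # 100 hidden nodes
                        activation="logistic",         # sigmoid hidden activation
                        max_iter=1000)                 # cross-entropy loss is built in
    clf.fit(X_tr, y_tr)
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    joblib.dump(clf, "sleep_posture_model.joblib")     # save as a classification operator
    return clf
```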
The fifth step: sleeping posture real-time monitoring and identifying
And (3) displaying the pressure data acquired in real time on a front-end interface of the system in real time, repeating the previous three steps to obtain a fusion feature vector of the current sleeping posture image, continuously identifying the sleeping posture type of the fusion feature vector of the current sleeping posture image by using the classification model obtained by the fourth step, generating a log record of the identification result, and realizing long-time monitoring of the sleeping posture of the human body (see fig. 6).
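The real-time monitoring loop could be sketched as follows, reusing the sketches above (`read_frame` is a hypothetical stand-in for the acquisition-device driver, `f_noload` a previously captured no-load frame; the log format and 1 s polling interval are assumptions):

```python
import time
import joblib
import numpy as np

POSTURES = ["supine", "prone", "right trunk", "right fetal",
            "left trunk", "left fetal"]

def monitor(read_frame, reorder, f_noload, log_path="sleep_log.txt"):
    clf = joblib.load("sleep_posture_model.joblib")
    with open(log_path, "a") as log:
        while True:
            img = frame_to_image(read_frame(), reorder).astype(np.uint8)
            bw = preprocess(img, f_noload)
            feats = fuse(extract_hog(bw), extract_glcm(bw), extract_local(bw))
            posture = POSTURES[int(clf.predict(feats.reshape(1, -1))[0])]
            log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t{posture}\n")
            log.flush()                                # keep the log record current
            time.sleep(1)
```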
The system applied by the method (see Fig. 1) comprises a large-area pressure sensor 1, data acquisition devices 3 and 5, and an upper computer terminal 4. In use, the pressure sensor 1 is laid directly on a mattress 6 and the user 7 lies directly on it; the coverage of the pressure sensor exceeds the transverse and longitudinal width and height of the user, and the user lies centered on it. The large-area pressure sensor used in the method can be a single 64 × 32 flexible pressure sensor array; its data are collected by the data acquisition devices and transmitted to the upper computer terminal. The invention is mainly intended for adults, especially the elderly; the experimenters' heights were about 1.6-1.8 m. When the user lies down, the head area falls within the region Y ∈ [57, 64], X ∈ [0, 32] of the sensor array, and the leg area within Y ∈ [0, 16], X ∈ [0, 32].
To collect the complete sleeping posture of the human body with a high acquisition rate and low noise, the pressure sensor is spliced from two 32 × 32 flexible pressure sensors, and two data acquisition devices sample synchronously; the pressure data are transmitted to the upper computer terminal for processing and recognition, and the trained classification operator, image preprocessing and feature fusion methods are loaded on the upper computer terminal.
The basic principle of the sleeping posture monitoring method based on feature fusion and an artificial neural network has been described above with a specific embodiment; the sleeping posture recognition scheme of the invention offers good accuracy and real-time performance and realizes sleeping posture monitoring well.
Anything not described in detail in this specification belongs to the prior art.
The above detailed description of the embodiments is merely a preferred embodiment of the invention and should not be considered as limiting the scope of the claims of this application. All equivalent changes and modifications made within the scope of the claims of this application shall fall within its protection scope.
Claims (5)
1. A sleeping posture monitoring method based on feature fusion and an artificial neural network comprises the following steps:
The first step: acquisition of the sleeping posture image
A bed-sheet-sized large-area pressure sensor array is used to collect human body pressure data for six sleeping postures (supine, prone, right trunk, right fetal, left trunk and left fetal); after reordering and rearrangement, the data are converted into a two-dimensional sleeping posture image in which the coordinates of the pressure values correspond one-to-one to the positions of the sensor units, faithfully restoring the lying direction and position of the human body on the sensor;
The second step: preprocessing of the sleeping posture image
The pressure data output by the sensor under no load are subtracted from the sleeping posture image, eliminating the noise introduced by the sensor and yielding the actual sleeping posture pressure image; histogram analysis is performed on the actual sleeping posture pressure image, and inversion, local equalization, sleeping posture segmentation and morphological denoising yield a preprocessed sleeping posture image with clearly separated limbs and more distinct features;
The third step: feature extraction and fusion of the image
Feature extraction is performed on the sleeping posture image preprocessed in the second step. First, the HOG features of the image are extracted: the image is divided into cells of m × m pixels, the gradient magnitude and direction of the image are calculated, the cells are grouped into blocks of n × n cells, and the image is scanned with a sliding block window to obtain the HOG feature F_hog;
GLCM features are then extracted from the sleeping posture image preprocessed in the second step: the image is converted from 256 gray levels to 8 gray levels, gray-level co-occurrence matrices are calculated for different directions θ and different distances d, and four feature values are obtained: contrast (CON), correlation (COOR), angular second moment (ASM) and inverse difference moment (HOM); the mean M and variance S over the different angles at the same distance are taken as the final GLCM features, the feature calculation formulas being, respectively:

CON = Σ_i Σ_j (i − j)^2 · p(i, j, d, θ) (1)

COOR = [Σ_i Σ_j (i · j) · p(i, j, d, θ) − μ_x μ_y] / (σ_x σ_y) (2)

ASM = Σ_i Σ_j p(i, j, d, θ)^2 (3)

HOM = Σ_i Σ_j p(i, j, d, θ) / (1 + (i − j)^2) (4)

M = (1/4) Σ_θ X(d, θ) (5)

S = (1/4) Σ_θ [X(d, θ) − M]^2 (6)

wherein: i and j are pixel gray levels, and the sums over i and j run from 1 to L; p(i, j, d, θ) is the frequency with which a pair of pixels separated by distance d in direction θ have gray values i and j, respectively; L is the number of gray levels of the image; μ_x, μ_y and σ_x, σ_y are the means and standard deviations of the row and column marginal distributions of p; X stands for any of the four feature values; four directions are set in total, and d takes k values;

When d = 1, M_CON1 is the mean of the contrast and S_CON1 is the variance of the contrast; the specific formulas are:

M_CON1 = (1/4) Σ_{u=1}^{4} CON(1, θ_u), S_CON1 = (1/4) Σ_{u=1}^{4} [CON(1, θ_u) − M_CON1]^2 (7)

The other three features are obtained by the same formulas; finally, for d = 1, the vector set of the four features is:
M_1 = {M_CON1, S_CON1, M_COOR1, S_COOR1, M_ASM1, S_ASM1, M_HOM1, S_HOM1} (8)
The value of the distance d is varied to finally obtain the GLCM feature vector F_glcm, whose expression is:

F_glcm = {M_1, M_2, ..., M_k} (9)
Local features of the sleeping posture image preprocessed in the second step are then extracted: the sleeping posture area A, the number of leg regions N, and the included angle α between the horizontal direction and the line connecting the head region center coordinate point H and the leg region center coordinate point H_1;
The sleeping posture area A is the number of pixels in the sleeping posture region: the preprocessed sleeping posture image is 64 × 32 pixels and the gray value of the background region is 255, so A can be calculated as the number of pixels in the image that are not 255;
Edge extraction is then performed on the image to obtain the sleeping posture contour. A coordinate system is established with the lower-left corner of the preprocessed sleeping posture image as the origin of coordinates, the length direction of the bed as the Y direction (the positive Y direction defined as pointing from the legs to the head) and the width direction of the bed as the X direction, the X axis indexing columns and the Y axis indexing rows. The region Y ∈ [57, 64], X ∈ [0, 32] is taken as the head region S_head, and the center coordinate point of the head region is calculated as:

H = ((x_min + x_max)/2, (y_min + y_max)/2) (10)

wherein x_min, x_max are the minimum and maximum of the head contour abscissa, and y_min, y_max are the minimum and maximum of the head contour ordinate;
The region Y ∈ [0, 16], X ∈ [0, 32] is taken as the leg region S_leg, the number of connected domains in this region is taken as the number of leg regions N, and the leg region pressure center point coordinate H_1 is calculated as:

H_1 = ((x_min1 + x_max1)/2, (y_min1 + y_max1)/2) (11)

wherein x_min1, x_max1 are the minimum and maximum of the leg region abscissa, and y_min1, y_max1 are the minimum and maximum of the leg region ordinate;
From H and H_1 obtained above, the angle α is calculated as:

α = arctan[(y_H − y_H1) / (x_H − x_H1)] (12)

wherein (x_H, y_H) and (x_H1, y_H1) are the coordinates of H and H_1, respectively;
The above features are extracted to generate the local feature set F_s, and the local feature set, the HOG features and the GLCM features are fused into a fused feature vector F reflecting the six static pressure sleeping posture images, with the expression:
F = {F_hog, F_glcm, F_s} (13)
The fourth step: training of the sleeping posture classification model
The fused feature vectors of the six static pressure sleeping posture images are trained with an artificial-neural-network-based algorithm to obtain classification models for the different sleeping postures;
The fifth step: real-time sleeping posture monitoring and recognition
The pressure data acquired in real time are displayed on the front-end interface of the system in real time; the previous three steps are repeated to obtain the fused feature vector of the current sleeping posture image; the classification model obtained in the fourth step continuously identifies the sleeping posture type from this fused feature vector; and a log of the recognition results is generated, realizing long-term monitoring of the human sleeping posture.
2. The monitoring method according to claim 1, wherein the specific process of the fourth step is: first, a network structure based on an artificial neural network is constructed; then the fused feature vectors extracted in the third step are used as the sleeping posture data set, labels y_i are set for the six sleeping postures, and the feature vectors are combined with their corresponding labels to form a sleeping posture sample training set used as the input of the neural network; after training, classification models for the different sleeping postures are obtained with a recognition accuracy above 99%, and they are saved as classification operators used directly for sleeping posture classification and recognition.
3. The monitoring method according to claim 2, wherein the network structure of the artificial neural network comprises an input layer, a hidden layer and an output layer; the activation function of the hidden layer is the sigmoid function and the output layer function is softmax; the number of hidden layer nodes is 100.
4. The monitoring method according to claim 1, wherein in the HOG feature extraction, m = 2, n = 2 and the window sliding step is 1 cell; in the GLCM feature extraction, k = 10 and the four directions are θ_1 = 0°, θ_2 = 45°, θ_3 = 90°, θ_4 = 135°.
5. A system applying the monitoring method according to any one of claims 1 to 4, the system comprising: a large-area pressure sensor, data acquisition equipment and an upper computer terminal; the pressure sensor is laid directly on the mattress, the coverage of the pressure sensor exceeds the transverse and longitudinal width and height of the user, and the user lies centered on the pressure sensor; the pressure sensor is connected to the acquisition equipment through a flat cable, and the data acquisition equipment is connected to the upper computer terminal through a USB interface; the trained classification model, image preprocessing and feature fusion modules are loaded on the upper computer terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010126488.8A CN111353425A (en) | 2020-02-28 | 2020-02-28 | Sleeping posture monitoring method based on feature fusion and artificial neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010126488.8A CN111353425A (en) | 2020-02-28 | 2020-02-28 | Sleeping posture monitoring method based on feature fusion and artificial neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111353425A true CN111353425A (en) | 2020-06-30 |
Family
ID=71194169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010126488.8A Pending CN111353425A (en) | 2020-02-28 | 2020-02-28 | Sleeping posture monitoring method based on feature fusion and artificial neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111353425A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287783A (en) * | 2020-10-19 | 2021-01-29 | 燕山大学 | Intelligent ward nursing identification method and system based on vision and pressure sensing |
CN113261951A (en) * | 2021-04-29 | 2021-08-17 | 北京邮电大学 | Sleeping posture identification method and device based on piezoelectric ceramic sensor |
CN113269096A (en) * | 2021-05-18 | 2021-08-17 | 浙江理工大学 | Head pressure data deep learning processing method for sleeping posture recognition |
CN113273998A (en) * | 2021-07-08 | 2021-08-20 | 南京大学 | Human body sleep information acquisition method and device based on RFID label matrix |
CN113688720A (en) * | 2021-08-23 | 2021-11-23 | 安徽农业大学 | Neural network recognition-based sleeping posture prediction method |
CN113688718A (en) * | 2021-08-23 | 2021-11-23 | 安徽农业大学 | Non-interference self-adaptive sleeping posture identification method based on pillow finite element analysis |
CN113867167A (en) * | 2021-10-28 | 2021-12-31 | 中央司法警官学院 | Household environment intelligent monitoring method and system based on artificial neural network |
CN113921108A (en) * | 2021-10-08 | 2022-01-11 | 重庆邮电大学 | Automatic segmentation method for elastic band resistance training force data |
CN115137315A (en) * | 2022-09-06 | 2022-10-04 | 深圳市心流科技有限公司 | Sleep environment scoring method, device, terminal and storage medium |
CN116563887A (en) * | 2023-04-21 | 2023-08-08 | 华北理工大学 | Sleeping posture monitoring method based on lightweight convolutional neural network |
WO2023178462A1 (en) * | 2022-03-21 | 2023-09-28 | Super Rich Moulders Limited | Smart mattress |
CN117574056A (en) * | 2023-11-21 | 2024-02-20 | 中南大学 | Wide-area electromagnetic data denoising method and system based on hybrid neural network model |
WO2024125566A1 (en) * | 2022-12-14 | 2024-06-20 | 深圳市三分之一睡眠科技有限公司 | Sleeping posture recognition method and system based on deep neural network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102789672A (en) * | 2012-07-03 | 2012-11-21 | 北京大学深圳研究生院 | Intelligent identification method and device for baby sleep position |
CN106097360A (en) * | 2016-06-17 | 2016-11-09 | 中南大学 | A kind of strip steel surface defect identification method and device |
CN107330352A (en) * | 2016-08-18 | 2017-11-07 | 河北工业大学 | Sleeping position pressure image-recognizing method based on HOG features and machine learning |
2020
- 2020-02-28 CN CN202010126488.8A patent/CN111353425A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102789672A (en) * | 2012-07-03 | 2012-11-21 | 北京大学深圳研究生院 | Intelligent identification method and device for baby sleep position |
CN106097360A (en) * | 2016-06-17 | 2016-11-09 | 中南大学 | A kind of strip steel surface defect identification method and device |
CN107330352A (en) * | 2016-08-18 | 2017-11-07 | 河北工业大学 | Sleeping position pressure image-recognizing method based on HOG features and machine learning |
Non-Patent Citations (2)
Title |
---|
JASON J. LIU ET AL.: "Sleep posture analysis using a dense pressure sensitive bedsheet", 《PERVASIVE AND MOBILE COMPUTING》 * |
YE YINQIU: "Research on a human body sleeping posture recognition system based on computer vision", 《China Excellent Master's Theses Full-text Database (Master), Information Science and Technology》 *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287783A (en) * | 2020-10-19 | 2021-01-29 | 燕山大学 | Intelligent ward nursing identification method and system based on vision and pressure sensing |
CN113261951A (en) * | 2021-04-29 | 2021-08-17 | 北京邮电大学 | Sleeping posture identification method and device based on piezoelectric ceramic sensor |
CN113269096B (en) * | 2021-05-18 | 2024-05-24 | 浙江理工大学 | Head pressure data deep learning processing method for sleeping gesture recognition |
CN113269096A (en) * | 2021-05-18 | 2021-08-17 | 浙江理工大学 | Head pressure data deep learning processing method for sleeping posture recognition |
CN113273998A (en) * | 2021-07-08 | 2021-08-20 | 南京大学 | Human body sleep information acquisition method and device based on RFID label matrix |
CN113688720A (en) * | 2021-08-23 | 2021-11-23 | 安徽农业大学 | Neural network recognition-based sleeping posture prediction method |
CN113688718A (en) * | 2021-08-23 | 2021-11-23 | 安徽农业大学 | Non-interference self-adaptive sleeping posture identification method based on pillow finite element analysis |
CN113688718B (en) * | 2021-08-23 | 2024-06-07 | 安徽农业大学 | Non-interference self-adaptive sleeping posture recognition method based on pillow finite element analysis |
CN113688720B (en) * | 2021-08-23 | 2024-05-31 | 安徽农业大学 | Method for predicting sleeping gesture based on neural network recognition |
CN113921108A (en) * | 2021-10-08 | 2022-01-11 | 重庆邮电大学 | Automatic segmentation method for elastic band resistance training force data |
CN113867167A (en) * | 2021-10-28 | 2021-12-31 | 中央司法警官学院 | Household environment intelligent monitoring method and system based on artificial neural network |
WO2023178462A1 (en) * | 2022-03-21 | 2023-09-28 | Super Rich Moulders Limited | Smart mattress |
CN115137315A (en) * | 2022-09-06 | 2022-10-04 | 深圳市心流科技有限公司 | Sleep environment scoring method, device, terminal and storage medium |
WO2024125566A1 (en) * | 2022-12-14 | 2024-06-20 | 深圳市三分之一睡眠科技有限公司 | Sleeping posture recognition method and system based on deep neural network |
CN116563887B (en) * | 2023-04-21 | 2024-03-12 | 华北理工大学 | Sleeping posture monitoring method based on lightweight convolutional neural network |
CN116563887A (en) * | 2023-04-21 | 2023-08-08 | 华北理工大学 | Sleeping posture monitoring method based on lightweight convolutional neural network |
CN117574056B (en) * | 2023-11-21 | 2024-05-10 | 中南大学 | Wide-area electromagnetic data denoising method and system based on hybrid neural network model |
CN117574056A (en) * | 2023-11-21 | 2024-02-20 | 中南大学 | Wide-area electromagnetic data denoising method and system based on hybrid neural network model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111353425A (en) | Sleeping posture monitoring method based on feature fusion and artificial neural network | |
CN112733950A (en) | Power equipment fault diagnosis method based on combination of image fusion and target detection | |
CN114565761B (en) | Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image | |
CN111539331B (en) | Visual image reconstruction system based on brain-computer interface | |
CN107767935A (en) | Medical image specification processing system and method based on artificial intelligence | |
CN103049892A (en) | Non-local image denoising method based on similar block matrix rank minimization | |
CN109636766B (en) | Edge information enhancement-based polarization difference and light intensity image multi-scale fusion method | |
SG190730A1 (en) | Method and an apparatus for determining vein patterns from a colour image | |
CN111967363B (en) | Emotion prediction method based on micro-expression recognition and eye movement tracking | |
CN113837974B (en) | NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm | |
CN112288645B (en) | Skull face restoration model construction method and restoration method and system | |
CN107862249A (en) | A kind of bifurcated palm grain identification method and device | |
CN109242812A (en) | Image interfusion method and device based on conspicuousness detection and singular value decomposition | |
CN106709967A (en) | Endoscopic imaging algorithm and control system | |
CN112766165B (en) | Falling pre-judging method based on deep neural network and panoramic segmentation | |
CN111062936B (en) | Quantitative index evaluation method for facial deformation diagnosis and treatment effect | |
CN107330352A (en) | Sleeping position pressure image-recognizing method based on HOG features and machine learning | |
CN116664462B (en) | Infrared and visible light image fusion method based on MS-DSC and I_CBAM | |
CN116563887B (en) | Sleeping posture monitoring method based on lightweight convolutional neural network | |
CN111582276B (en) | Recognition method and system for parasite eggs based on multi-feature fusion | |
CN109522865A (en) | A kind of characteristic weighing fusion face identification method based on deep neural network | |
CN113008380A (en) | Intelligent AI body temperature early warning method, system and storage medium | |
CN106407975B (en) | Multiple dimensioned layering object detection method based on space-optical spectrum structural constraint | |
CN111402249A (en) | Image evolution analysis method based on deep learning | |
CN113222879A (en) | Generation countermeasure network for fusion of infrared and visible light images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200630 |
RJ01 | Rejection of invention patent application after publication | |