
CN115381396A - Method and apparatus for assessing sleep breathing function - Google Patents

Method and apparatus for assessing sleep breathing function

Info

Publication number
CN115381396A
CN115381396A
Authority
CN
China
Prior art keywords
user
sleep
snore
state
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110567956.XA
Other languages
Chinese (zh)
Inventor
许德省
李靖
许培达
沈东崎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110567956.XA priority Critical patent/CN115381396A/en
Priority to PCT/CN2022/092419 priority patent/WO2022247649A1/en
Publication of CN115381396A publication Critical patent/CN115381396A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/48 Other medical applications
    • A61B 5/4806 Sleep evaluation
    • A61B 5/4809 Sleep detection, i.e. determining whether a subject is asleep or not
    • A61B 5/4812 Detecting sleep stages or cycles
    • A61B 5/4815 Sleep quality
    • A61B 5/4818 Sleep apnoea
    • A61B 5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6801 Arrangements of detecting, measuring or recording means specially adapted to be attached to or worn on the body surface
    • A61B 5/6802 Sensor mounted on worn items
    • A61B 5/681 Wristwatch-type devices

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Anesthesiology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The method automatically starts or stops the evaluation of the user's sleep breathing functions, including sleep cycle staging, hypopnea, sleep apnea, snoring level risk, and the like, by adopting different detection modes in different detection scenarios. Because the evaluation can be started or stopped without the user manually marking the start or end of sleep, the user experience is more friendly and the evaluation accuracy is improved.

Description

Method and apparatus for assessing sleep breathing function
Technical Field
The present application relates to the field of computer technology, and more particularly, to a method and apparatus for evaluating sleep breathing function.
Background
As the pressures of daily life increase, sleep quality gradually declines, and people pay growing attention to sleep quality and to the detection of sleep-related respiratory diseases. Sleep-disordered breathing is a major factor affecting sleep quality, and simple snoring, hypopnea, and sleep apnea are its most common causes. Because respiratory disorders during sleep can be fatal, effective real-time detection and early warning of abnormal sleep stages, simple snoring, hypopnea, sleep apnea, and the like are particularly important for personal health and family care.
At present, technologies for acquiring respiratory signals through ultrasound to perform sleep staging and sleep onset/offset detection, and for segmenting recorded data into snore, breath-sound, and non-target segments, are gradually maturing. For example, polysomnography monitors on the market analyze the sleep condition of a subject and the severity of disorders such as snoring, hypopnea, and sleep apnea by recording many signals during sleep, including brain waves, electromyography, electrocardiogram, oronasal airflow, chest and abdominal respiratory movement, and sound. However, a polysomnograph must be operated by professionals in a professional setting, is uncomfortable and expensive, and can hardly meet the demand for convenience and efficiency in daily family life. In addition, applications (apps) for sleep breathing evaluation are also available; most of them set a threshold for the environment in which a user (or subject) sleeps, or use a neural network to identify sounds and breath sounds during sleep, so as to evaluate the user's sleep breathing quality over the whole night. This kind of evaluation requires the user to manually mark the start and end of sleep, which makes for an unfriendly user experience.
Disclosure of Invention
The application provides a method and a device for evaluating sleep respiratory function, which can conveniently evaluate the sleep respiratory function of a user and improve user experience.
In a first aspect, a method of assessing sleep breathing function is provided. The method comprises: determining a current detection scenario, where the current detection scenario belongs to one of preset detection scenarios, the preset detection scenarios comprising a first detection scenario, in which the user wears a smart wearable device, and a second detection scenario, in which the user does not wear the smart wearable device;
selecting a detection mode according to the current detection scenario;
judging the state of the user using the selected detection mode, where the state of the user is either a sleep state or a non-sleep state;
and turning the evaluation of the user's sleep breathing function on or off according to the state of the user,
wherein the evaluation of sleep breathing function comprises evaluation of one or more of: sleep cycle staging, hypopnea, sleep apnea, and snoring level risk.
In the technical solution of this application, the current detection scenario is determined and a corresponding detection mode is selected based on it. Different detection modes use different means to judge whether the user is in a sleep state. If the user is in a sleep state, the evaluation of sleep breathing function is started automatically; if the user is in a non-sleep state, it is not started.
Therefore, compared with the prior art, in which the user must manually tap the start or end of sleep on a sleep monitoring device to turn its sleep breathing function on or off, the technical solution of this application obtains the user's state through different detection modes in different detection scenarios, so the evaluation of sleep breathing function can be started or stopped automatically and the user experience is more friendly.
With reference to the first aspect, in some implementations of the first aspect, selecting a detection mode according to the current detection scenario comprises:
if the current detection scenario is the first detection scenario, selecting a first detection mode, in which the state of the user is judged through the smart wearable device; or,
if the current detection scenario is the second detection scenario, selecting a second detection mode, in which the state of the user is judged from the state of the smart wearable device and the user's historical sleep information;
where the state of the smart wearable device comprises one or more of the following: the duration for which the device is stationary within a specified time period is greater than or equal to a first duration threshold, and the duration for which the device is in a large-motion state is greater than or equal to a second duration threshold;
and the user's historical sleep information comprises one or more of the following: the user's historical sleep periods, a sleep time preset by the user, and an alarm time preset by the user.
In this implementation, when the user wears the smart wearable device, the device judges the user's state in real time; specifically, it detects when the user falls asleep and wakes up. When the user does not wear the device, the user's state is judged by combining the device's state with the user's historical sleep information. Thus, whether or not the user wears the smart wearable device, the user's state can be judged through a suitable detection mode, the evaluation can be started or stopped automatically, and the user experience is improved.
With reference to the first aspect, in certain implementations of the first aspect, after initiating the evaluation of the sleep breathing function of the user, the method further comprises:
acquiring available signals in the first detection scenario or the second detection scenario;
evaluating the user's sleep breathing function according to the available signals, where the available signals include one or more of:
one or more breathing indicators, the breathing indicators comprising breathing rate and/or respiratory-wave decline;
one or more snore indicators, the snore indicators comprising snore loudness; and
a motion indicator of the user, the motion indicator comprising the amplitude and/or frequency of the user's gross movements.
With reference to the first aspect, in certain implementations of the first aspect, the evaluating sleep breathing function of the user according to the available signals includes:
and predicting on the available signals with a trained light gradient boosting machine (LightGBM) model so as to evaluate the user's sleep breathing function.
In this implementation, a trained LightGBM model is used to predict on the available signals acquired in the current detection scenario. The LightGBM model is trained in advance on a large amount of data, and the training data retain only the frequency bands related to signals such as breathing and snoring, so the sensitivity is high. This provides a good basis for the subsequent sleep breathing evaluation and improves its accuracy.
With reference to the first aspect, in certain implementations of the first aspect, before predicting on the available signals with the trained LightGBM prediction model, the method further includes:
acquiring audio data and ultrasonic data of the user;
preprocessing the audio data and the ultrasonic data, and performing feature extraction on the preprocessed data to obtain extracted data, where the feature extraction comprises extracting raw features from the preprocessed audio and ultrasonic data and aggregating statistical features;
and training the LightGBM prediction model with the extracted data to obtain the trained LightGBM prediction model.
In this implementation, the signals obtained in a detection scenario, such as audio data and ultrasonic data, are preprocessed and subjected to feature extraction, and the resulting extracted data are used to train the LightGBM model. This improves the model's predictive performance; compared with existing sleep breathing evaluation schemes, the results of this technical solution are more accurate and more sensitive.
With reference to the first aspect, in certain implementations of the first aspect, predicting on the available signals with the trained LightGBM prediction model to evaluate the user's sleep breathing function comprises:
if the snore loudness is greater than or equal to a first loudness threshold and its duration is greater than or equal to a first duration threshold, judging the user's snoring level risk to be high;
if the snore loudness is greater than or equal to the first loudness threshold and its duration is less than the first duration threshold, judging the user's snoring level risk to be medium;
if the snore loudness is greater than or equal to a second loudness threshold and its duration is greater than or equal to a second duration threshold, judging the user's snoring level risk to be low, where the second loudness threshold is less than the first loudness threshold and the second duration threshold is less than the first duration threshold;
otherwise, judging the user's snoring level risk to be normal.
With reference to the first aspect, in certain implementations of the first aspect, predicting on the available signals with the trained LightGBM prediction model to evaluate the user's sleep breathing function comprises:
if the breathing rate is greater than a first rate and its slope is greater than a first value, judging the sleep stage to be rapid eye movement (REM) sleep;
if the breathing rate is less than or equal to the first rate and greater than or equal to a second rate, judging the sleep stage to be light sleep;
and if the breathing rate is less than or equal to the second rate, judging the sleep stage to be deep sleep.
With reference to the first aspect, in certain implementations of the first aspect, predicting on the available signals with the trained LightGBM prediction model to evaluate the user's sleep breathing function comprises:
if the drop in the amplitude of the respiratory wave is greater than a first percentage, judging hypopnea;
and if the drop in the kurtosis of the respiratory wave is greater than a second percentage, judging sleep apnea.
In each implementation, setting suitable judgment rules and thresholds, durations, or percentages can improve the accuracy of the sleep breathing evaluation.
In a second aspect, a communication device is provided, which has the functions of implementing the method in the first aspect or any possible implementation manner thereof, and the functions can be implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the above functions.
In a third aspect, a communication device is provided, comprising a processor and a communication interface, wherein the communication interface is configured to receive data and/or information and transmit the received data and/or information to the processor, and the processor processes the data and/or information to enable the communication device to perform the method according to the first aspect or any possible implementation manner thereof.
Alternatively, the processor may be a processing circuit.
Alternatively, the communication interface may be an interface circuit.
Optionally, the communication interface may comprise an input interface and an output interface. The input interface is used for receiving data and/or information to be processed, and the output interface is used for outputting the processed data and/or information.
In a fourth aspect, the present application provides a communication device, comprising at least one processor coupled to at least one memory for storing a computer program or instructions, the at least one processor being configured to invoke and execute the computer program or instructions from the at least one memory, such that the communication device performs the method of the first aspect or any possible implementation thereof.
Optionally, the at least one processor is integrated with the at least one memory.
Alternatively, the communication apparatus of the second to fourth aspects above may be a chip or a chip system, for example a system on a chip (SoC).
In a fifth aspect, the present application provides a computer-readable storage medium having stored thereon computer instructions which, when run on a computer, cause the method as in the first aspect or any possible implementation thereof to be performed.
In a sixth aspect, the present application provides a computer program product comprising computer program code to, when run on a computer, cause the method as in the first aspect or any possible implementation thereof to be performed.
In a seventh aspect, the present application provides an intelligent wearable device, including the communication apparatus according to the second aspect.
Drawings
Fig. 1 is a schematic flow chart of a method of evaluating sleep breathing function provided herein.
Fig. 2 is a schematic flow chart of a method of evaluating sleep breathing function provided herein.
Fig. 3 is a schematic block diagram of the evaluation device performing preprocessing and feature extraction.
Fig. 4 is an exemplary flow chart of a method of evaluating sleep breathing function provided herein.
Fig. 5 is a schematic diagram of an evaluation apparatus 500 for evaluating sleep breathing function provided in the present application.
Fig. 6 is a schematic block diagram of an evaluation device provided in the present application.
Fig. 7 is a schematic configuration diagram of an evaluation device provided in the present application.
Fig. 8 is a schematic block diagram of an intelligent wearable device provided herein.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The technical solution of this application can be applied to a smart wearable device. It can automatically evaluate the user's sleep breathing function (for example, sleep staging and sleep-breathing-related disorders such as sleep apnea, snoring detection, detection of where the snore is produced, and hypopnea) without the user manually starting or stopping the evaluation, which improves user experience. In addition, the accuracy of the evaluation is improved.
Illustratively, the smart wearable devices mentioned in this application include but are not limited to smart electronic products such as smart bands, smart watches, smartphones, and tablets such as the iPad.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for evaluating sleep breathing function provided herein. As shown in fig. 1, the method 100 may be performed by an evaluation device. Illustratively, the evaluation device may be a smart wearable device, or a chip (or chip system) built into a smart wearable device. Method 100 generally includes steps 110 to 140.
110. The evaluation device determines the current detection scenario, where the current detection scenario belongs to one of the preset detection scenarios.
For example, the preset detection scenarios may include a first detection scenario and a second detection scenario.
In the first detection scenario the user wears the smart wearable device; in the second detection scenario the user does not wear it.
120. The evaluation device selects a detection mode according to the current detection scenario.
In this application, the evaluation device can select a detection mode suited to the current detection scenario, so as to further judge whether the user is in a sleep state.
Specifically, if the current detection scenario is the first detection scenario, the evaluation device selects the first detection mode, in which the evaluation device judges the user's state through the smart wearable device.
If the current detection scenario is the second detection scenario, the evaluation device selects the second detection mode, in which the evaluation device further needs to obtain the state of the smart wearable device and the user's historical sleep information, and judges the user's state from these two.
Optionally, the state of the smart wearable device includes one or more of the following:
the duration for which the smart wearable device is stationary within a specified time period is greater than or equal to a first duration threshold, and the duration for which the smart wearable device is in a large-motion state is greater than or equal to a second duration threshold.
Whether the smart wearable device is in a large-motion state can be judged from its motion amplitude. For example, a motion-amplitude threshold is set; if the device's motion amplitude exceeds the threshold, the device is in a large-motion state, otherwise it is not.
Optionally, the user's historical sleep information includes one or more of: the user's historical sleep periods, a sleep time preset by the user, an alarm time preset by the user, and the like.
Illustratively, the sleep time preset by the user may be a preset falling-asleep time, a preset getting-up time (i.e., a waking time), or the like.
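By way of illustration only, the following Python sketch shows one way the scenario-to-mode mapping and the mode-2 judgment described above could be combined. The threshold values, the `DeviceState` fields, and the `wearable_sleep_detector` stub are all assumptions, not details disclosed by this application.

```python
from dataclasses import dataclass

# Assumed values; the application leaves the concrete thresholds open.
FIRST_DURATION_THRESHOLD_S = 30 * 60    # stationary duration: likely asleep
SECOND_DURATION_THRESHOLD_S = 5 * 60    # large-motion duration: likely awake

@dataclass
class DeviceState:
    worn: bool                  # wear-detection result (hypothetical field)
    stationary_seconds: float   # time at rest within the specified period
    big_motion_seconds: float   # time spent in the large-motion state

def wearable_sleep_detector(state: DeviceState) -> bool:
    # Placeholder for detection mode 1: a real wearable would fuse motion,
    # heart rate, etc.; here we only reuse the stationary-duration rule.
    return state.stationary_seconds >= FIRST_DURATION_THRESHOLD_S

def select_detection_mode(state: DeviceState) -> int:
    """Step 120: map the detection scenario to a detection mode."""
    return 1 if state.worn else 2

def user_is_asleep(state: DeviceState, in_historical_sleep_window: bool) -> bool:
    """Step 130: judge the user's state under the selected mode."""
    if select_detection_mode(state) == 1:
        return wearable_sleep_detector(state)
    # Mode 2: device state combined with historical sleep information.
    return (state.stationary_seconds >= FIRST_DURATION_THRESHOLD_S
            and in_historical_sleep_window)
```

The design point is only that mode 2 never relies on on-body sensing: it pairs the device's rest/motion durations with the historical sleep window, exactly as the preceding paragraphs describe.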
130. The evaluation device judges the user's state using the selected detection mode.
The user's state is either a sleep state or a non-sleep state.
140. The evaluation device turns the evaluation of the user's sleep breathing function on or off according to the user's state.
Specifically, if the user is in a sleep state, the evaluation of the user's sleep breathing function is started; if the user is in a non-sleep state, the evaluation is not started, or a running evaluation is stopped.
It should be understood that stopping the evaluation of sleep breathing function means that, after the evaluation has been started, the user's state is judged in real time and the evaluation is stopped once the user wakes up.
Optionally, the assessment of sleep respiratory function may include, but is not limited to, assessment of one or more of: sleep cycle staging, hypopnea, sleep apnea, and snoring level risk.
In this application, based on the current detection scenario, the evaluation device can judge whether the user is in a sleep state and then automatically start or stop the evaluation of sleep breathing function. Specifically, the evaluation device automatically starts the evaluation if it determines that the user is asleep; conversely, if it determines that the user is in a non-sleep state, the evaluation is not started. Alternatively, after the evaluation has been started automatically, the evaluation device judges the user's state in real time and automatically stops the evaluation once it determines that the user has woken up.
It can be seen that, compared with some existing schemes, in which the user must manually tap the start or end of sleep on a sleep monitoring device to turn its sleep breathing function on or off, the technical solution of this application lets the evaluation device obtain the user's state through different detection modes in different detection scenarios, so the evaluation of sleep breathing function can be started or stopped automatically and the user experience is more friendly.
The following describes, with reference to fig. 2, how the evaluation device performs the evaluation of sleep breathing function after it determines that the user is in a sleep state and starts the evaluation.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for evaluating sleep breathing function provided herein. Like method 100, method 200 may be performed by an evaluation device. Referring to fig. 2, method 200 generally includes steps 210 to 240.
210. The evaluation device determines that the current detection scenario is the first detection scenario, that is, the user is wearing the smart wearable device.
220. The evaluation device selects the first detection mode corresponding to the first detection scenario.
230. The evaluation device judges the user's state through the smart wearable device and obtains a judgment result.
240. If the user is in a sleep state, the evaluation device starts the evaluation of sleep breathing function.
After the evaluation of the sleep breathing function is started, the evaluation device continues to perform the following steps.
250. The evaluation device acquires the available signals in the current detection scenario through the smart wearable device.
Optionally, the available signals include one or more of:
one or more breathing indicators, the breathing indicators comprising breathing rate and/or respiratory-wave decline;
one or more snore indicators, the snore indicators comprising snore loudness; and
a motion indicator of the user, the motion indicator comprising the amplitude and/or frequency of the user's gross movements.
Illustratively, the evaluation device acquires the available signals in the current detection scenario through the smart wearable device's microphone, ultrasonic transmitting and/or receiving sensors, and the like.
260. The evaluation device predicts on the acquired available signals with a machine-learning prediction model trained in advance and obtains a prediction result.
270. The prediction result is output.
Specifically, as one example, the machine-learning model in this application is a light gradient boosting machine (LightGBM) prediction model.
It will be appreciated that the LightGBM prediction model requires extensive training to improve the accuracy and sensitivity of its predictions before it can be used for prediction.
Illustratively, the evaluation device acquires the user's audio data and ultrasonic data and preprocesses them. The evaluation device then performs feature extraction on the preprocessed audio and ultrasonic data; the feature extraction mainly comprises extracting raw features and aggregating statistical features. For convenience, the data obtained by feature extraction are referred to below as the extracted data.
The evaluation device trains the LightGBM prediction model with the extracted data to obtain the trained model, which provides a good guarantee for the accuracy and sensitivity of the subsequent sleep breathing evaluation.
For example, the detailed flow of the preprocessing and feature extraction performed by the evaluation device may be as shown in fig. 3.
Referring to fig. 3, fig. 3 is a schematic block diagram of the preprocessing and feature extraction performed by the evaluation device. As shown in fig. 3, the evaluation device obtains the target audio data and/or ultrasonic data in the current detection scenario through silence detection, discrimination between voiced and unvoiced segments, and acquisition of audio and ultrasonic data. The evaluation device then preprocesses the acquired target audio data and/or ultrasonic data to eliminate the influence of the data's own magnitude and local fluctuations.
Illustratively, the preprocessing may include amplitude normalization, median filtering, and band-pass filtering of the target audio data and/or ultrasonic data, so as to preserve the frequency bands of the breathing and snore signals as much as possible and improve the sensitivity of the evaluation.
The evaluation device then performs feature extraction on the preprocessed signals to obtain the extracted data. Illustratively, the feature extraction comprises extracting raw features of the target audio data and/or ultrasonic data and aggregating statistical features.
Exemplary raw features include, but are not limited to, Mel-frequency cepstral coefficients, difference (delta) features, and spectral flatness. Statistical features include, but are not limited to, mean, variance, peak, skewness, and range. A sketch of such a pipeline is given below.
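As a concrete illustration, a preprocessing and feature-extraction pipeline of the kind just described might look as follows in Python, using librosa and scipy. The filter order, the cut-off frequencies, and the MFCC settings are assumptions, since the description does not disclose them; a sampling rate of, say, 16 kHz is assumed so the band-pass filter is valid.

```python
import numpy as np
import librosa
from scipy.signal import butter, medfilt, sosfiltfilt
from scipy.stats import skew

def preprocess(x: np.ndarray, sr: int) -> np.ndarray:
    """Amplitude normalization, median filtering, band-pass filtering."""
    x = x / (np.max(np.abs(x)) + 1e-8)       # remove the influence of magnitude
    x = medfilt(x, kernel_size=5)            # suppress local fluctuations
    # Keep a band that plausibly covers breathing and snore energy;
    # the exact cut-offs are not disclosed, so these are placeholders.
    sos = butter(4, [20.0, 1500.0], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, x)

def extract_features(x: np.ndarray, sr: int) -> np.ndarray:
    """Raw features (MFCC, deltas, spectral flatness) plus aggregated stats."""
    mfcc = librosa.feature.mfcc(y=x, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)              # difference features
    flatness = librosa.feature.spectral_flatness(y=x)
    raw = np.vstack([mfcc, delta, flatness])         # (n_features, n_frames)
    # Aggregate over frames: mean, variance, peak, skewness, range.
    return np.concatenate([
        raw.mean(axis=1), raw.var(axis=1), raw.max(axis=1),
        skew(raw, axis=1), raw.max(axis=1) - raw.min(axis=1),
    ])
```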
The machine-learning model is then trained with the extracted data. Illustratively, the model may be a light gradient boosting machine (LightGBM) model. During training, the LightGBM model analyzes the input signals to obtain the probabilities and labels of the breathing, snore, and non-snore signals they contain, and accumulates and caches these for subsequent prediction. A minimal training sketch follows.
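The sketch below assumes one aggregated feature vector per audio/ultrasound segment (as in `extract_features` above) and three classes; the file names, label encoding, and hyperparameters are placeholders, not values disclosed by this application.

```python
import numpy as np
import lightgbm as lgb

# Assumed data layout: one feature vector per segment and an integer label,
# e.g. 0 = breath, 1 = snore, 2 = non-snore. The files are hypothetical.
X = np.load("segment_features.npy")
y = np.load("segment_labels.npy")

params = {
    "objective": "multiclass",
    "num_class": 3,
    "learning_rate": 0.05,
    "num_leaves": 31,     # small trees keep the model light for on-device use
    "verbosity": -1,
}
model = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=200)

# Per-class probabilities for each segment, which can be cached and used
# for the subsequent prediction described above.
probabilities = model.predict(X)
model.save_model("lightgbm_sleep.txt")
```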
When the evaluation device obtains the available signals in the current detection scenario, it feeds them into the LightGBM model, obtains the prediction result, and completes the evaluation of the user's sleep breathing function.
Some examples of predictive results are given below.
Illustratively, the snoring risk is evaluated from the user's breathing indicators, snore indicators, motion indicators, and the like, as in the sketch after this list.
For example, if the snore loudness is greater than or equal to a first loudness threshold (denoted DB1) and its duration is greater than or equal to a first duration threshold (denoted T1), the user's snoring level risk is judged to be high;
if the snore loudness is greater than or equal to the first loudness threshold and its duration is less than the first duration threshold, the user's snoring level risk is judged to be medium;
if the snore loudness is greater than or equal to a second loudness threshold (denoted DB2) and its duration is greater than or equal to a second duration threshold (denoted T2), the user's snoring level risk is judged to be low;
otherwise, the user's snoring level risk is judged to be normal;
where the second loudness threshold is less than the first loudness threshold, and the second duration threshold is less than the first duration threshold.
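Expressed as code, the rule set reads as below; the numeric defaults for DB1, T1, DB2, and T2 are illustrative placeholders only, since the application does not disclose them.

```python
def snore_level_risk(loudness_db: float, duration_s: float,
                     db1: float = 65.0, t1: float = 120.0,
                     db2: float = 50.0, t2: float = 60.0) -> str:
    """Snoring level risk per the rules above; DB1 > DB2 and T1 > T2."""
    if loudness_db >= db1 and duration_s >= t1:
        return "high"
    if loudness_db >= db1 and duration_s < t1:
        return "medium"
    if loudness_db >= db2 and duration_s >= t2:
        return "low"
    return "normal"
```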
Illustratively, the user's sleep stage is identified from the user's breathing-rate characteristics, as in the sketch after this list.
For example, if the breathing rate is greater than a first rate (e.g., X1) and its slope is greater than a first value (e.g., Y1), the user's sleep stage is judged to be rapid eye movement (REM) sleep;
if the breathing rate is less than or equal to the first rate (X1) and greater than or equal to a second rate (X2), the user's sleep stage is judged to be light sleep;
if the breathing rate is less than or equal to the second rate (X2), the user's sleep stage is judged to be deep sleep;
where X2 is less than X1.
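The staging rules can be written down directly. X1 and X2 (breaths per minute) and Y1 are illustrative defaults, not disclosed values, and an input not covered by the published rules is reported as unclassified here.

```python
def sleep_stage(breathing_rate: float, slope: float,
                x1: float = 16.0, x2: float = 12.0, y1: float = 0.5) -> str:
    """Sleep stage per the rules above, with X2 < X1."""
    if breathing_rate > x1 and slope > y1:
        return "REM"
    if x2 <= breathing_rate <= x1:
        return "light sleep"
    if breathing_rate <= x2:
        return "deep sleep"
    return "unclassified"   # e.g. rate > X1 but slope <= Y1
```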
Illustratively, the user's sleep apnea or hypopnea is determined from the magnitude of the decline of the user's respiratory wave, as in the sketch after this list.
For example, if the drop in the amplitude of the user's respiratory wave is greater than a first percentage (e.g., X%), hypopnea is judged;
if the drop in the kurtosis of the respiratory wave is greater than a second percentage (e.g., Y%), sleep apnea is judged.
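Likewise for the respiratory events. X% and Y% are placeholders, and checking the kurtosis rule first, so that apnea takes precedence when both conditions hold, is an assumption, since the description does not order the two rules.

```python
def respiratory_event(amplitude_drop_pct: float, kurtosis_drop_pct: float,
                      x_pct: float = 30.0, y_pct: float = 50.0) -> str:
    """Hypopnea / sleep-apnea judgment per the rules above."""
    if kurtosis_drop_pct > y_pct:
        return "sleep apnea"
    if amplitude_drop_pct > x_pct:
        return "hypopnea"
    return "normal"
```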
An example of the workflow of the evaluation device is described below with reference to fig. 4.
Referring to fig. 4, fig. 4 is an exemplary flow chart of a method 400 for evaluating sleep breathing function provided herein. It should be understood that the method 400 may be performed by an evaluation device, which may be implemented as the smart wearable device or as a module with corresponding functions in the smart wearable device (e.g., a chip or chip system), without limitation. Illustratively, the chip may be a system on a chip (SoC).
401. Judge whether the user is wearing the smart wearable device.
It should be understood that judging whether the user wears the smart wearable device determines whether the current detection scenario is specifically the first or the second detection scenario.
If yes, proceed to step 402. If no, proceed to step 403.
402. Judge, through the smart wearable device, whether the user has fallen asleep.
That is, determine whether the user is in a sleep state or a non-sleep state.
If no, step 404 is performed.
If yes, step 405 is performed.
404. Evaluation of sleep breathing function is not initiated.
405. The evaluation of the sleep breathing function is automatically started.
403. Judge whether the smart wearable device has been stationary for a duration greater than or equal to the first duration threshold.
If not, step 406 is performed.
If so, step 407 is performed.
406. Evaluation of sleep breathing function is not initiated.
407. An evaluation of sleep breathing function is automatically initiated.
After the evaluation of sleep breathing function is started in step 405, step 408 is performed.
408. Collect and analyze the breathing and snore indicators in real time.
Optionally, there may be one or more breathing indicators, for example the breathing rate.
Optionally, there may be one or more snore indicators, for example the snore loudness.
409. While the sleep breathing evaluation is running, judge through the smart wearable device whether the user has woken up.
If so, step 410 is performed.
If not, execution continues with step 408.
410. The sleep breathing assessment function is automatically turned off.
After the evaluation of sleep breathing function is started automatically in step 407, step 411 is performed.
411. Collect and analyze the breathing and snore indicators in real time.
412. While the sleep breathing evaluation is running, obtain the state of the smart wearable device in real time and judge whether the duration of its large-motion state exceeds the second duration threshold.
If so, step 413 is performed.
If not, execution continues with step 411.
413. The sleep breathing assessment function is automatically turned off.
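For orientation, the two branches of fig. 4 can be summarized as the following loop. Every method on the hypothetical `device` object stands in for behavior this application describes but does not expose as an API, and the thresholds are the same assumed values as in the earlier sketch.

```python
FIRST_DURATION_THRESHOLD_S = 30 * 60    # assumed, as above
SECOND_DURATION_THRESHOLD_S = 5 * 60    # assumed, as above

def run_sleep_breathing_assessment(device) -> None:
    """One pass through the fig. 4 workflow (step numbers in comments)."""
    if device.is_worn():                                      # step 401, yes
        if not device.user_asleep():                          # step 402
            return                                            # step 404
        device.start_assessment()                             # step 405
        while not device.user_woke_up():                      # step 409
            device.update_breath_and_snore_stats()            # step 408
        device.stop_assessment()                              # step 410
    else:                                                     # step 401, no
        if device.stationary_seconds() < FIRST_DURATION_THRESHOLD_S:   # 403
            return                                            # step 406
        device.start_assessment()                             # step 407
        while device.big_motion_seconds() < SECOND_DURATION_THRESHOLD_S:  # 412
            device.update_breath_and_snore_stats()            # step 411
        device.stop_assessment()                              # step 413
```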
A schematic block diagram of the evaluation device provided in the present application is given below.
Referring to fig. 5, fig. 5 shows an evaluation apparatus 500 for evaluating sleep breathing function provided by this application. The evaluation apparatus 500 may include an automatic detection module 510, a data collection and preprocessing module 520, a feature extraction and statistics module 530, and a sleep breathing function evaluation module 540.
The automatic detection module 510 is mainly used to automatically start or stop the microphone, the ultrasonic receiving device, and/or the ultrasonic transmitting device, and to start or stop the evaluation of sleep breathing function in the different detection scenarios.
The data collection and preprocessing module 520 is mainly used to collect audio data and/or ultrasonic data, collect the available signals, and preprocess voiced and silent segments. Illustratively, the audio data include ordinary sound recordings.
The feature extraction and statistics module 530 is used to extract and aggregate features of the user's movements, breathing, and snoring.
The sleep breathing function evaluation module 540 is used to evaluate the user's sleep breathing function according to the data extracted and aggregated by the feature extraction and statistics module 530.
The method of evaluating sleep breathing function provided by this application has been described in detail above; the evaluation device provided by this application is described below.
Referring to fig. 6, fig. 6 is a schematic block diagram of an evaluation device provided in this application. As shown in fig. 6, the evaluation device 1000 includes a processing unit 1100, a receiving unit 1200, and a sending unit 1300.
The processing unit 1100 is configured to determine the current detection scenario, where the current detection scenario belongs to one of preset detection scenarios comprising a first detection scenario, in which the user wears the smart wearable device, and a second detection scenario, in which the user does not wear the smart wearable device;
select a detection mode according to the current detection scenario;
judge the user's state using the selected detection mode, where the user's state is either a sleep state or a non-sleep state;
and turn the evaluation of the user's sleep breathing function on or off according to the user's state,
where the evaluation of sleep breathing function comprises evaluation of one or more of: sleep cycle staging, hypopnea, sleep apnea, and snoring level risk.
The sending unit 1300 is configured to output the evaluation result.
Optionally, as an embodiment, the processing unit 1100 is further configured to:
judge the current detection scenario and, if it is the first detection scenario, select the first detection mode, in which the user's state is judged through the smart wearable device; or,
if it is the second detection scenario, select the second detection mode, in which the user's state is judged from the state of the smart wearable device and the user's historical sleep information;
where the state of the smart wearable device comprises one or more of the following: the duration for which the device is stationary within a specified time period is greater than or equal to a first duration threshold, and the duration for which the device is in a large-motion state is greater than or equal to a second duration threshold;
and the user's historical sleep information comprises one or more of: the user's historical sleep periods, a sleep time preset by the user, and an alarm time preset by the user.
Optionally, as an embodiment, the receiving unit 1200 is configured to acquire the available signals in the first detection scenario or the second detection scenario;
and the processing unit 1100 is configured to evaluate the user's sleep breathing function according to the available signals, where the available signals include one or more of:
one or more breathing indicators, the breathing indicators comprising breathing rate and/or respiratory-wave decline;
one or more snore indicators, the snore indicators comprising snore loudness; and
a motion indicator of the user, the motion indicator comprising the amplitude and/or frequency of the user's gross movements.
Optionally, as an embodiment, the processing unit 1100 is further configured to predict on the available signals with a trained light gradient boosting machine (LightGBM) model to evaluate the user's sleep breathing function.
Optionally, as an embodiment, the receiving unit 1200 is configured to obtain the user's audio data and ultrasonic data;
the processing unit 1100 is further configured to preprocess the audio data and the ultrasonic data and perform feature extraction on the preprocessed data to obtain extracted data, where the feature extraction comprises extracting raw features of the preprocessed audio and ultrasonic data and aggregating statistical features;
and to train the LightGBM prediction model with the extracted data to obtain the trained LightGBM prediction model.
Optionally, as an embodiment, the processing unit 1100 is specifically configured to:
judge the user's snoring level risk to be high if the snore loudness is greater than or equal to a first loudness threshold and its duration is greater than or equal to a first duration threshold;
judge the user's snoring level risk to be medium if the snore loudness is greater than or equal to the first loudness threshold and its duration is less than the first duration threshold;
judge the user's snoring level risk to be low if the snore loudness is greater than or equal to a second loudness threshold and its duration is greater than or equal to a second duration threshold, where the second loudness threshold is less than the first loudness threshold and the second duration threshold is less than the first duration threshold;
otherwise, judge the user's snoring level risk to be normal.
Optionally, as an embodiment, the processing unit 1100 judges the sleep stage to be rapid eye movement (REM) sleep if the breathing rate is greater than a first rate and its slope is greater than a first value;
light sleep if the breathing rate is less than or equal to the first rate and greater than or equal to a second rate;
and deep sleep if the breathing rate is less than or equal to the second rate.
Optionally, as an embodiment, predicting on the available signals with the trained LightGBM prediction model to evaluate the user's sleep breathing function comprises:
judging hypopnea if the drop in the amplitude of the respiratory wave is greater than a first percentage;
and judging sleep apnea if the drop in the kurtosis of the respiratory wave is greater than a second percentage.
In the above embodiments, the receiving unit 1200 and the sending unit 1300 may be integrated into one transceiving unit that has both receiving and sending functions, which is not limited here.
In addition, in the various embodiments, the processing unit 1100 performs the processing and/or operations implemented internally by the evaluation device 1000, apart from the sending and receiving actions; the receiving unit 1200 performs the receiving operations and the sending unit 1300 performs the sending operations.
Illustratively, in one implementation, the functions of the automatic detection module 510, the data collection and preprocessing module 520, the feature extraction and statistics module 530, and the sleep breathing function evaluation module 540 shown in fig. 5 may be integrated into the processing unit 1100 in fig. 6.
Optionally, in one implementation, the evaluation device 1000 may further include a display unit 1400 for displaying (i.e., presenting) the evaluation result to the user.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an evaluation device provided in this application. As shown in fig. 7, the communication device 10 includes one or more processors 11, one or more memories 12, and one or more communication interfaces 13. The processor 11 is configured to control the communication interface 13 to send and receive signals, the memory 12 is configured to store a computer program, and the processor 11 is configured to call and run the computer program from the memory 12, so that the communication device 10 performs the processing and/or operations performed by the evaluation device in the method embodiments of this application.
For example, the processor 11 may have the functions of the processing unit 1100 shown in fig. 6, and the communication interface 13 may have the functions of the receiving unit 1200 and/or the sending unit 1300 shown in fig. 6. In particular, the processor 11 may perform the processing and/or operations performed internally by the evaluation device in the respective method embodiments, and the communication interface 13 may perform the sending and/or receiving actions performed by the evaluation device in the respective method embodiments.
Illustratively, in the method 100 shown in fig. 1, the processor 11 is configured to perform steps 110 to 140. In fig. 2, the processor 11 is configured to perform steps 210 to 240 and step 260, and the communication interface 13 is configured to perform steps 250 and 270. In fig. 4, the processor 11 is configured to perform steps 401 to 413.
Additionally, the communication device 10 may also include one or more memories 14, which may be used to store the available signals obtained in the detection scenario, the data of the LightGBM model, intermediate processing results, and the like.
In one implementation, the communication device 10 may be a smart wearable device.
In another implementation, the communication device 10 may be a chip or a chip system installed in the smart wearable device. In such an implementation, the communication interface 13 may be an interface circuit or an input/output interface.
The dashed boxes behind the devices (e.g., the processor, memory, and communication interface) in fig. 7 indicate that there may be more than one of each.
Fig. 8 is a schematic block diagram of a smart wearable device provided herein. Referring to fig. 8, the smart wearable device 30 may include a processor 310, a memory 320, a wireless communication module 330, a display screen 340, a camera 350, an audio module 360, a sensor module 370, and the like. The audio module 360 may include a speaker 360A, a receiver 360B, a microphone 360C, and the like. Optionally, there may be one or more of each of the above devices.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the smart wearable device 30. In other embodiments of the present application, the smart wearable device 30 may include more or fewer components than those shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Illustratively, the processor 310 may correspond to the processing unit 1100 in fig. 6 and execute the steps executed by the processing unit 1100. Alternatively, the processor 310 has the functions of the processor 11 in fig. 7.
Alternatively, the processor 310 may include one or more processing units. For example, the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated into one or more processors.
The memory 320 may be used to store computer-executable program code, which includes instructions. The processor 310 implements the various functional applications and data processing of the smart wearable device 30 by executing the instructions stored in the memory 320. The memory 320 may include a program storage area and a data storage area. The program storage area may store the operating system and the applications required by at least one function (e.g., a sound playing function or an image playing function). The data storage area may store data created during use of the smart wearable device 30 (e.g., audio data, a phone book), and the like. In addition, the memory 320 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS). Illustratively, the memory 320 has the functions of the memory 12 in fig. 7.
The wireless communication module 330 may provide solutions for wireless communication applied to the smart wearable device 30, including wireless local area networks (WLANs) such as wireless fidelity (Wi-Fi) networks, Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 330 may be one or more devices integrating at least one communication processing module.
Illustratively, the wireless communication module 330 may interact with other devices, modules or apparatuses through the communication interface 13 as in fig. 7.
The display screen 340 is used to display images, videos, text messages, and the like. Illustratively, the display screen 340 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. Optionally, the smart wearable device 30 may include one or more display screens 340.
Illustratively, the display screen 340 is used to display the result of the sleep breathing evaluation, and may also display a prompt that the evaluation of sleep breathing function is ongoing or turned off.
The camera 350 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the smart wearable device 30 may include one or more cameras 350.
In addition, the smart wearable device 30 may implement audio functions, such as sound recording, through the audio module 360, the speaker 360A, the receiver 360B, the microphone 360C, and the application processor.
In some embodiments, the audio module 360 may be disposed in the processor 310, or some functional modules of the audio module 360 may be disposed in the processor 310.
Illustratively, the microphone 360C on the smart wearable device 30 is used to collect sound signals, reduce noise, and optionally implement a directional recording function, so as to collect the sound signals in the detection scenario.
The sensor module 370 may include a variety of sensors, such as pressure, gyroscope, barometric-pressure, magnetic, acceleration, distance, proximity-light, ambient-light, fingerprint, temperature, touch, and bone-conduction sensors. Some or all of these sensors can be used in the solution of this application to help the evaluation device judge the detection scenario and acquire signals during the sleep breathing evaluation. In addition, the smart wearable device 30 may include other sensors, without limitation.
Optionally, the memory and the processor in the foregoing device embodiments may be physically separate units, or the memory and the processor may be integrated together, which is not limited herein.
Furthermore, this application also provides a computer-readable storage medium storing computer instructions that, when run on a computer, cause the operations and/or processing performed by the evaluation device in the method embodiments of this application to be performed.
Furthermore, this application also provides a computer program product comprising computer program code or instructions that, when run on a computer, cause the operations and/or processing performed by the evaluation device in the method embodiments of this application to be performed.
Furthermore, this application also provides a chip comprising a processor. A memory for storing a computer program is provided separately from the chip, and the processor is configured to execute the computer program stored in the memory, so that a device in which the chip is installed performs the operations and/or processing performed by the evaluation device in any one of the method embodiments.
Further, the chip may also include a communication interface. The communication interface may be an input/output interface, an interface circuit, or the like. Further, the chip may further include the memory.
Optionally, the number of processors may be one or more, and the number of memories may be one or more.
Alternatively, the processor may be a processing circuit or the like.
Furthermore, the present application also provides a communication device (for example, a chip or a chip system) including a processor and a communication interface. The communication interface is configured to receive (input) data and/or information and transmit the received data and/or information to the processor; the processor processes the data and/or information; and the communication interface is further configured to output the data and/or information processed by the processor, so that the operations and/or processes performed by the evaluation device in any one of the method embodiments are performed.
Furthermore, the present application also provides a communication device, comprising at least one processor coupled with at least one memory, the at least one processor being configured to execute computer programs or instructions stored in the at least one memory, so that the communication device performs the operations and/or processes performed by the evaluation device in any one of the method embodiments.
In addition, the present application also provides a communication device including a processor and a memory, and optionally a transceiver. The memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory and to control the transceiver to transmit and receive signals, so that the communication device performs the operations and/or processes performed by the evaluation device in any one of the method embodiments.
The memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A method of assessing sleep breathing function, comprising:
determining a current detection scene, wherein the current detection scene is one of preset detection scenes, the preset detection scenes comprise a first detection scene and a second detection scene, the first detection scene is a scene in which a user wears a smart wearable device, and the second detection scene is a scene in which the user does not wear the smart wearable device;
selecting a detection mode according to the current detection scene;
judging a state of the user by using the selected detection mode, wherein the state of the user is that the user is in a sleep state or that the user is in a non-sleep state;
turning on or off an evaluation of the user's sleep breathing function according to the state of the user,
wherein the evaluation of sleep breathing function comprises an evaluation of one or more of the following: sleep cycle staging, hypopnea, sleep apnea, and snore level risk.
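For illustration only, and not as part of the claimed subject matter, the control flow of claim 1 can be sketched in a few lines of Python; the function names, signal fields, and numeric thresholds below are hypothetical assumptions, since the claim does not prescribe a concrete implementation:

from enum import Enum

class Scene(Enum):
    WORN = 1      # first detection scene: user wears the device
    NOT_WORN = 2  # second detection scene: user does not wear the device

def judge_user_state(scene: Scene, signals: dict) -> str:
    """Hypothetical stand-in for the two detection modes; returns
    'sleep' or 'non-sleep'."""
    if scene is Scene.WORN:
        # first detection mode: judge directly from on-body signals
        return "sleep" if signals.get("heart_rate", 70) < 55 else "non-sleep"
    # second detection mode: judge from device state + historical sleep info
    return "sleep" if signals.get("still_minutes", 0) >= 30 else "non-sleep"

def set_assessment(scene: Scene, signals: dict) -> bool:
    """Turns the sleep-breathing assessment on (True) or off (False)."""
    return judge_user_state(scene, signals) == "sleep"

print(set_assessment(Scene.NOT_WORN, {"still_minutes": 45}))  # True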
2. The method of claim 1, wherein selecting a detection mode based on the current detection scenario comprises:
if the current detection scene is the first detection scene, selecting a first detection mode, wherein the first detection mode is used for judging the state of the user through the smart wearable device; or,
if the current detection scene is the second detection scene, selecting a second detection mode, wherein the second detection mode is used for judging the state of the user according to a state of the smart wearable device and historical sleep information of the user;
wherein the state of the smart wearable device comprises one or more of the following: the duration of the smart wearable device in a static state in a specified time period is greater than or equal to a first duration threshold, and the duration of the smart wearable device in a gross-movement state is greater than or equal to a second duration threshold;
the historical sleep information of the user comprises one or more of the following items: the historical sleep time period of the user, the preset sleep time of the user and the preset alarm clock time of the user.
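As a minimal sketch of the second detection mode described in claim 2, assuming a stillness threshold and a habitual sleep window, neither of which is disclosed with concrete values in the patent:

from datetime import datetime, time

def asleep_when_not_worn(still_minutes: float, now: datetime,
                         sleep_start: time = time(23, 0),
                         sleep_end: time = time(7, 0),
                         still_threshold: float = 30.0) -> bool:
    """The user is judged asleep when the device has been static long
    enough AND the current time falls in the user's historical sleep
    period. The default window and threshold are placeholders."""
    long_still = still_minutes >= still_threshold
    t = now.time()
    if sleep_start <= sleep_end:
        in_window = sleep_start <= t <= sleep_end
    else:  # window crosses midnight, e.g. 23:00-07:00
        in_window = t >= sleep_start or t <= sleep_end
    return long_still and in_window

print(asleep_when_not_worn(45, datetime(2021, 5, 24, 1, 30)))  # True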
3. The method of claim 1 or 2, wherein after the evaluation of the user's sleep breathing function is turned on, the method further comprises:
acquiring available signals in the first detection scene or the second detection scene;
evaluating the user's sleep breathing function based on the available signals,
wherein the available signals comprise one or more of the following:
one or more breathing indicators, the breathing indicators comprising a respiratory frequency and/or a decline in respiratory wave amplitude;
one or more snore indicators, the snore indicators comprising snore loudness; and
an action index of the user, the action index comprising an amplitude of action and/or a frequency of gross movement of the user.
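For reference, the three signal groups enumerated in claim 3 could be carried in a simple container such as the following; the field names and units are assumptions, not terminology from the patent:

from dataclasses import dataclass
from typing import Optional

@dataclass
class AvailableSignals:
    """Mirrors the claim's three signal groups; every field is optional
    because only a subset may be observable in a given detection scene."""
    respiratory_frequency: Optional[float] = None        # breaths per minute
    resp_wave_amplitude_decline: Optional[float] = None  # ratio in [0, 1]
    snore_loudness: Optional[float] = None               # dB
    motion_amplitude: Optional[float] = None             # action indicator
    gross_movement_frequency: Optional[float] = None     # events per hour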
4. The method of claim 3, wherein said evaluating sleep breathing function of said user based on said available signals comprises:
and predicting the available signals by using a trained light gradient boosting machine (light GBM) model, so as to evaluate the sleep breathing function of the user.
5. The method of claim 4, wherein before predicting the available signals using the trained light GBM prediction model, the method further comprises:
acquiring audio data and ultrasonic data of the user;
preprocessing the audio data and the ultrasonic data, and performing feature extraction processing on the preprocessed audio data and the preprocessed ultrasonic data to obtain extracted data, wherein the feature extraction processing comprises the steps of extracting original features of the preprocessed audio data and the preprocessed ultrasonic data and aggregating statistical features;
and training the light GBM prediction model by using the extracted data to obtain the trained light GBM prediction model.
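A minimal sketch of the training pipeline of claim 5, using the open-source lightgbm library as one possible light GBM implementation; the framing, the statistical features, and the synthetic data are placeholder assumptions, as the patent does not disclose the exact preprocessing or feature set:

import numpy as np
import lightgbm as lgb

def extract_features(audio: np.ndarray, ultra: np.ndarray,
                     frame: int = 100) -> np.ndarray:
    """Toy version of the claimed pipeline: frame the preprocessed audio
    and ultrasonic streams, then aggregate per-frame statistics."""
    def stats(x: np.ndarray) -> np.ndarray:
        f = x[: len(x) // frame * frame].reshape(-1, frame)
        return np.stack([f.mean(1), f.std(1), f.min(1), f.max(1)], axis=1)
    return np.hstack([stats(audio), stats(ultra)])

rng = np.random.default_rng(0)
X = extract_features(rng.normal(size=100_000), rng.normal(size=100_000))
y = rng.integers(0, 2, size=len(X))  # placeholder labels for the sketch
model = lgb.LGBMClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X, y)  # yields the "trained light GBM prediction model"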
6. The method of claim 4 or 5, wherein predicting the available signals using the trained light GBM prediction model to evaluate the sleep breathing function of the user comprises:
if the snore loudness is greater than or equal to a first loudness threshold and its duration is greater than or equal to a first duration threshold, determining that the snore level risk of the user is high risk;
if the snore loudness is greater than or equal to the first loudness threshold and the duration is less than the first duration threshold, determining that the snore level risk of the user is medium risk;
if the snore loudness is greater than or equal to a second loudness threshold and the duration is greater than or equal to a second duration threshold, determining that the snore level risk of the user is low risk, wherein the second loudness threshold is less than the first loudness threshold and the second duration threshold is less than the first duration threshold;
otherwise, determining that the snore level risk of the user is normal.
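The threshold comparison of claim 6 reduces to a short decision function; the numeric defaults below are illustrative placeholders only (the claim requires the second thresholds to be smaller than the first but discloses no values):

def snore_level_risk(loudness: float, duration: float,
                     loud1: float = 60.0, dur1: float = 10.0,
                     loud2: float = 50.0, dur2: float = 5.0) -> str:
    """Maps snore loudness (dB) and duration (s) to a risk level."""
    if loudness >= loud1 and duration >= dur1:
        return "high risk"
    if loudness >= loud1:          # loud but short-lived
        return "medium risk"
    if loudness >= loud2 and duration >= dur2:
        return "low risk"
    return "normal"

print(snore_level_risk(65.0, 12.0))  # high risk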
7. The method of any one of claims 4-6, wherein predicting the available signals using the trained light GBM prediction model to evaluate the sleep breathing function of the user comprises:
if the respiratory frequency is greater than a first frequency and the slope of the respiratory wave is greater than a first value, judging that the sleep stage is rapid eye movement (REM) sleep;
if the respiratory frequency is less than or equal to the first frequency and greater than or equal to a second frequency, judging that the sleep stage is light sleep;
and if the respiratory frequency is less than or equal to the second frequency, judging that the sleep stage is deep sleep.
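Likewise, a sketch of the staging rules of claim 7, with placeholder values standing in for the undisclosed first frequency, second frequency, and first value:

def sleep_stage(respiratory_frequency: float, slope: float,
                f1: float = 16.0, f2: float = 12.0,
                first_value: float = 0.5) -> str:
    """Maps respiratory frequency (breaths/min) and respiratory-wave
    slope to a sleep stage per the claim's rules."""
    if respiratory_frequency > f1 and slope > first_value:
        return "REM"          # rapid eye movement sleep
    if f2 <= respiratory_frequency <= f1:
        return "light sleep"
    if respiratory_frequency <= f2:
        return "deep sleep"
    return "undetermined"     # high frequency but low slope

print(sleep_stage(18.0, 0.8))  # REM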
8. The method of any one of claims 4-7, wherein predicting the available signals using the trained light GBM prediction model to evaluate the sleep breathing function of the user comprises:
if the decline ratio of the amplitude of the respiratory wave is greater than a first percentage, determining hypopnea;
and if the decline ratio of the kurtosis of the respiratory wave is greater than a second percentage, determining sleep apnea.
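And a sketch of the event rules of claim 8, again with placeholder percentages (p1 and p2 are not disclosed by the patent):

def breathing_events(amplitude_decline: float, kurtosis_decline: float,
                     p1: float = 0.30, p2: float = 0.90) -> list:
    """Flags hypopnea when respiratory-wave amplitude declines by more
    than a first percentage, and sleep apnea when the wave kurtosis
    declines by more than a second percentage."""
    events = []
    if amplitude_decline > p1:
        events.append("hypopnea")
    if kurtosis_decline > p2:
        events.append("sleep apnea")
    return events

print(breathing_events(0.5, 0.95))  # ['hypopnea', 'sleep apnea']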
9. An apparatus for assessing sleep respiratory function, comprising:
a processing unit, configured to determine a current detection scene, wherein the current detection scene is one of preset detection scenes, the preset detection scenes comprise a first detection scene and a second detection scene, the first detection scene is a scene in which a user wears a smart wearable device, and the second detection scene is a scene in which the user does not wear the smart wearable device;
select a detection mode according to the current detection scene;
judge a state of the user by using the selected detection mode, wherein the state of the user is that the user is in a sleep state or that the user is in a non-sleep state;
and, according to the state of the user, turn on or off an evaluation of the user's sleep breathing function,
wherein the evaluation of sleep breathing function comprises an evaluation of one or more of the following: sleep cycle staging, hypopnea, sleep apnea, and snore level risk.
10. The apparatus according to claim 9, wherein the processing unit is specifically configured to:
if the current detection scene is the first detection scene, select a first detection mode, wherein the first detection mode is used for judging the state of the user through the smart wearable device; or,
if the current detection scene is the second detection scene, select a second detection mode, wherein the second detection mode is used for judging the state of the user according to a state of the smart wearable device and historical sleep information of the user;
wherein the state of the smart wearable device comprises one or more of the following: the duration of the smart wearable device in a static state in a specified time period is greater than or equal to a first duration threshold, and the duration of the smart wearable device in a gross-movement state is greater than or equal to a second duration threshold; and the historical sleep information of the user comprises one or more of the following: a historical sleep time period of the user, a preset sleep time of the user, and a preset alarm clock time of the user.
11. The apparatus of claim 9 or 10, further comprising:
a communication interface, configured to acquire available signals in the first detection scene or the second detection scene;
the processing unit is further configured to evaluate the sleep breathing function of the user according to the available signals,
wherein the available signals comprise one or more of the following:
one or more breathing indicators, the breathing indicators comprising a respiratory frequency and/or a decline in respiratory wave amplitude;
one or more snore indicators, the snore indicators comprising snore loudness; and
an action index of the user, the action index comprising an amplitude of action and/or a frequency of gross movement of the user.
12. The apparatus according to claim 11, wherein the processing unit is specifically configured to:
predict the available signals using a trained light GBM model, so as to evaluate the sleep breathing function of the user.
13. The apparatus of claim 12, wherein the communication interface is further configured to:
acquiring audio data and ultrasonic data of the user;
and the processing unit is further configured to:
preprocessing the audio data and the ultrasonic data, and performing feature extraction processing on the preprocessed audio data and the preprocessed ultrasonic data to obtain extracted data, wherein the feature extraction processing comprises the steps of extracting original features of the preprocessed audio data and the preprocessed ultrasonic data and aggregating statistical features;
and training the light GBM prediction model by using the extracted data to obtain the trained light GBM prediction model.
14. The apparatus according to claim 12 or 13, wherein the processing unit is specifically configured to:
if the snore loudness is greater than or equal to a first loudness threshold and its duration is greater than or equal to a first duration threshold, determine that the snore level risk of the user is high risk;
if the snore loudness is greater than or equal to the first loudness threshold and the duration is less than the first duration threshold, determine that the snore level risk of the user is medium risk;
if the snore loudness is greater than or equal to a second loudness threshold and the duration is greater than or equal to a second duration threshold, determine that the snore level risk of the user is low risk, wherein the second loudness threshold is less than the first loudness threshold and the second duration threshold is less than the first duration threshold;
otherwise, determine that the snore level risk of the user is normal.
15. The apparatus according to any one of claims 12 to 14, wherein the processing unit is specifically configured to:
if the respiratory frequency is greater than a first frequency and the slope of the respiratory wave is greater than a first value, judge that the sleep stage is rapid eye movement (REM) sleep;
if the respiratory frequency is less than or equal to the first frequency and greater than or equal to a second frequency, judge that the sleep stage is light sleep;
and if the respiratory frequency is less than or equal to the second frequency, judge that the sleep stage is deep sleep.
16. The apparatus according to any one of claims 12 to 15, wherein the processing unit is specifically configured to:
if the decline ratio of the amplitude of the respiratory wave is greater than a first percentage, determine hypopnea;
and if the decline ratio of the kurtosis of the respiratory wave is greater than a second percentage, determine sleep apnea.
17. A communications apparatus, comprising at least one processor coupled with at least one memory, the at least one processor to execute a computer program or instructions stored in the at least one memory to cause the communications apparatus to perform the method of any of claims 1-8.
18. A chip comprising a processor and a communication interface for receiving data and/or information and transmitting the received data and/or information to the processor, the processor processing the data and/or information to perform the method of any one of claims 1-8.
19. A computer-readable storage medium having stored thereon computer instructions which, when run on a computer, cause the method of any one of claims 1-8 to be implemented.
20. A computer program product, characterized in that it comprises computer program code which, when run on a computer, causes the method according to any one of claims 1-8 to be implemented.
CN202110567956.XA 2021-05-24 2021-05-24 Method and apparatus for assessing sleep breathing function Pending CN115381396A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110567956.XA CN115381396A (en) 2021-05-24 2021-05-24 Method and apparatus for assessing sleep breathing function
PCT/CN2022/092419 WO2022247649A1 (en) 2021-05-24 2022-05-12 Method and apparatus for evaluating respiratory function during sleep

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110567956.XA CN115381396A (en) 2021-05-24 2021-05-24 Method and apparatus for assessing sleep breathing function

Publications (1)

Publication Number Publication Date
CN115381396A (en) 2022-11-25

Family

ID=84114248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110567956.XA Pending CN115381396A (en) 2021-05-24 2021-05-24 Method and apparatus for assessing sleep breathing function

Country Status (2)

Country Link
CN (1) CN115381396A (en)
WO (1) WO2022247649A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117077812B (en) * 2023-09-13 2024-03-08 荣耀终端有限公司 Network training method, sleep state evaluation method and related equipment
CN117771508B (en) * 2024-02-26 2024-05-10 深圳智尚未来科技有限公司 Sleep quality intervention method, device and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120071741A1 (en) * 2010-09-21 2012-03-22 Zahra Moussavi Sleep apnea monitoring and diagnosis based on pulse oximetery and tracheal sound signals
CN111467644B (en) * 2013-07-08 2023-04-11 瑞思迈传感器技术有限公司 Method and system for sleep management
US10111615B2 (en) * 2017-03-11 2018-10-30 Fitbit, Inc. Sleep scoring based on physiological information
CN109528214A (en) * 2018-11-07 2019-03-29 深圳市新元素医疗技术开发有限公司 A kind of Multifunctional wrist BOLD contrast
CN109620158B (en) * 2018-12-28 2021-10-15 惠州Tcl移动通信有限公司 Sleep monitoring method, intelligent terminal and storage device
CN109745011B (en) * 2019-02-20 2021-12-03 华为终端有限公司 User sleep respiration risk monitoring method, terminal and computer readable medium
CN110074778A (en) * 2019-05-29 2019-08-02 北京脑陆科技有限公司 A kind of extensive brain electrosleep monitoring system based on EEG equipment
CN111657948B (en) * 2020-05-25 2024-04-05 深圳市云中飞电子有限公司 Sleep breathing state detection method, device and equipment

Also Published As

Publication number Publication date
WO2022247649A1 (en) 2022-12-01

Similar Documents

Publication Publication Date Title
CN113520340B (en) Sleep report generation method, device, terminal and storage medium
US10092219B2 (en) Automatic detection of user's periods of sleep and sleep stage
US20160345832A1 (en) System and method for monitoring biological status through contactless sensing
US20180008191A1 (en) Pain management wearable device
US20070004969A1 (en) Health monitor
WO2022247649A1 (en) Method and apparatus for evaluating respiratory function during sleep
US20150342519A1 (en) System and method for diagnosing medical condition
US9408562B2 (en) Pet medical checkup device, pet medical checkup method, and non-transitory computer readable recording medium storing program
CN106897569A (en) A kind of method of work of intelligent health terminal system
US20220104725A9 (en) Screening of individuals for a respiratory disease using artificial intelligence
WO2021238460A1 (en) Risk pre-warning method, risk behavior information acquisition method, and electronic device
CN114269241A (en) System and method for detecting object falls using wearable sensors
KR20200104758A (en) Method and apparatus for determining a dangerous situation and managing the safety of the user
US20220008030A1 (en) Detection of agonal breathing using a smart device
KR20200104759A (en) System for determining a dangerous situation and managing the safety of the user
WO2023025037A1 (en) Health management method and system, and electronic device
CN110881987A (en) Old person emotion monitoring system based on wearable equipment
CN110638419A (en) Method and device for identifying chewed food and/or drunk beverages
CN110755091A (en) Personal mental health monitoring system and method
JP2019051129A (en) Deglutition function analysis system and program
KR20210060246A (en) The arraprus for obtaining biometiric data and method thereof
KR102188076B1 (en) method and apparatus for using IoT technology to monitor elderly caregiver
KR102171742B1 (en) Senior care system and method therof
CN115336968A (en) Sleep state detection method and electronic equipment
JP5669302B2 (en) Behavior information collection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination