
EP3672274A1 - Adaptive audio control device and method based on scenario identification - Google Patents


Info

Publication number
EP3672274A1
Authority
EP
European Patent Office
Prior art keywords
ambient sound
sound signal
signal
user
pressure level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19741628.2A
Other languages
German (de)
French (fr)
Other versions
EP3672274A4 (en)
Inventor
Jian Zhao
Jiandan LIU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaoniao Tingting Technology Co Ltd
Original Assignee
Beijing Xiaoniao Tingting Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaoniao Tingting Technology Co Ltd
Publication of EP3672274A1
Publication of EP3672274A4
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1783Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1783Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions
    • G10K11/17837Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions by retaining part of the ambient acoustic environment, e.g. speech or alarm signals that the user needs to hear
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17879General system configurations using both a reference signal and an error signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/22Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only 
    • H04R1/222Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only  for microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1083Reduction of ambient noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/22Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2410/00Microphones
    • H04R2410/05Noise reduction with a separate noise microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01Aspects of volume control, not necessarily automatic, in sound systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03Synergistic effects of band splitting and sub-band processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01Hearing devices using active noise cancellation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/07Use of position data from wide-area or local-area positioning systems in hearing devices, e.g. program or information selection

Definitions

  • the application relates to an electroacoustic transformation technology, and in particular to an adaptive audio control device and method based on scenario identification.
  • audio playback devices with passive noise cancellation and/or active noise cancellation functions have appeared, for example noise-canceling headphones, so as to reduce the effect of noise on the user.
  • the inventor finds that eliminating noise alone cannot satisfy the user's requirements for playback quality; the user expects the audio playback device to be more intelligent and to automatically adjust the playback effects to the current playback environment.
  • the equivalent continuous A-weighted sound level is usually used to evaluate environmental noise.
  • when the environmental noise is less than 50dBA, people consider the environment relatively quiet; when the noise is greater than 80dBA, people feel the environment is noisy; when the noise reaches 120dBA, people find it unbearable.
  • when the noise is greater than 90dBA, the possibility of hearing impairment is significantly higher.
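  • for reference, the equivalent continuous A-weighted sound level over a measurement period $T$ is defined (standard acoustics definition, not specific to this disclosure) as
    $$L_{Aeq,T} = 10\log_{10}\!\left(\frac{1}{T}\int_{0}^{T}\frac{p_A^2(t)}{p_0^2}\,dt\right)\ \mathrm{dBA},\qquad p_0 = 20\,\mu\mathrm{Pa},$$
    where $p_A(t)$ is the A-weighted instantaneous sound pressure.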
  • the disclosure aims to provide an adaptive audio control method based on scenario identification, so as to automatically regulate the playback effects according to a usage scenario of a user.
  • an adaptive audio control device based on scenario identification, which includes: an ambient sound acquisition microphone, an acceleration sensor, a location module, a control module, an audio signal volume adjustment module, an active noise cancellation module, and an ambient sound adjustment module. Output ends of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module are connected with a speaker respectively.
  • the control module includes a memory and a processor.
  • the memory stores a computer program. When executed by the processor, the computer program implements the following steps:
  • the ambient sound adjustment module includes any one or a combination of the following submodules: a wind noise suppression submodule, a voice enhancement submodule, a dynamic range control submodule, and an EQ processing submodule.
  • the operation that the usage scenario of the user is analyzed according to the acceleration data output by the acceleration sensor and the geographic location data output by the location module includes that:
  • the environment types include an indoor environment and a road environment;
  • the motion modes include any one of: a stationary mode, a walking mode, or a transportation mode.
  • if the movement speed is less than a first speed threshold and the cadence value is less than a first cadence value threshold, the user is in the stationary mode; if the movement speed is in a walking speed interval and the cadence value is in a walking cadence value interval, the user is in the walking mode; if the movement speed is greater than a second speed threshold, the user is in the transportation mode.
  • the device further includes a bone conduction microphone or an infrared proximity sensor.
  • the usage scenario of the user also includes a talking state of the user.
  • when executed by the processor, the computer program further implements the following step: it is determined, according to a signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • there are a plurality of ambient sound acquisition microphones, including microphones for acquiring the ambient sound at the real-time location of the user and microphones for acquiring the ambient sound heard at the ear of the user.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal includes that:
  • that the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that:
  • performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on a voice signal band and a honk signal band in the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, a noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: the wind noise suppression submodule and the dynamic range control submodule are controlled to be disabled.
  • the device is a headphone.
  • an adaptive audio control method based on scenario identification includes the following steps:
  • the ambient sound adjustment module includes any one or a combination of the following submodules: the wind noise suppression submodule, the voice enhancement submodule, the dynamic range control submodule, and the EQ processing submodule.
  • the operation that the usage scenario of the user is analyzed according to the acceleration data and the geographic location data includes that:
  • the environment types include the indoor environment and the road environment; the motion modes include any one of the following: the stationary mode, the walking mode, and the transportation mode.
  • if the movement speed is less than the first speed threshold and the cadence value is less than the first cadence value threshold, the user is in the stationary mode; if the movement speed is in the walking speed interval and the cadence value is in the walking cadence value interval, the user is in the walking mode; if the movement speed is greater than the second speed threshold, the user is in the transportation mode.
  • the audio playback device further includes the bone conduction microphone or the infrared proximity sensor.
  • the usage scenario of the user also includes the talking state of the user.
  • the method further includes the following step: it is determined, according to the signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • the operation that the ambient sound signal of the surrounding environment of the user is acquired includes that: the ambient sound signal at the real-time location of the user is acquired, and the ambient sound signal heard at the ear of the user is acquired.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • that the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that:
  • performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on the voice signal band and the honk signal band in the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, the noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: the wind noise suppression submodule and the dynamic range control submodule are controlled to be disabled.
  • the audio playback device is a headphone.
  • a method for controlling an audio playback device includes the following steps: the acceleration data of the user is acquired, and the usage scenario of the user is analyzed according to the acceleration data; the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device are controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the method further includes the following steps: the geographic location data of the user is acquired; and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  • a system for controlling an audio playback device which includes: one or more processors; and a memory which is coupled to at least one of the one or more processors. There are computer program instructions stored in the memory. When the computer program instructions are executed by the at least one processor, the system performs the method for controlling an audio playback device.
  • the method includes the following operations: the acceleration data of the user is acquired, and the usage scenario of the user is analyzed according to the acceleration data; the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device are controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • when executed by a processor, a computer program product may implement the method for controlling an audio playback device described in the third aspect of the disclosure.
  • the adaptive audio control device and method based on scenario identification provided by the disclosure may analyze the usage scenario of the user, and automatically adjust the playback effects according to the usage scenario.
  • the disclosure presents an adaptive audio control device based on scenario identification.
  • the device may be a headphone, a loudspeaker box, or other electronic devices capable of playing an audio signal.
  • the device may carry out wired communication or wireless communication with terminal devices like a cell phone and a computer, so as to play the audio signal of the terminal devices.
  • the device may also store the audio signal, for example music, and the device may play the audio signal stored in it.
  • the device may also be set in the terminal device, as a part of the terminal device.
  • the adaptive audio control device based on scenario identification provided by the first embodiment of the disclosure includes: an ambient sound acquisition microphone 13, an acceleration sensor 11, a location module 12, a control module 21, an audio signal volume adjustment module 22, an active noise cancellation module 23, and an ambient sound adjustment module 24.
  • the acceleration sensor 11 is configured to acquire acceleration data of a user, and output the acceleration data to the control module 21.
  • the location module 12 is configured to acquire geographic location data of the user, and output the geographic location data to the control module 21.
  • the audio signal volume adjustment module 22 adjusts the volume of the audio signal, and the adjusted audio signal is input to the speaker 30 for playback.
  • the ambient sound acquisition microphone 13 is configured to pick up an ambient sound signal, and feed the picked-up ambient sound signal to the control module 21, the active noise cancellation module 23 and the ambient sound adjustment module 24 respectively. Output ends of the active noise cancellation module 23 and the ambient sound adjustment module 24 are connected with the speaker 30 respectively.
  • the control module 21 is connected with the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 respectively, so as to control their working; for example, the control module 21 enables/disables a certain module or submodule, or adjusts parameters of a certain module or submodule.
  • the active noise cancellation module 23 is configured to generate a corresponding noise cancellation signal aiming at the ambient sound signal, and output the noise cancellation signal to the speaker 30.
  • the noise cancellation signal and the ambient sound signal cancel each other in the ear canal of the user, so as to reduce the impact of the ambient sound on the user listening to the audio signal.
  • the active noise cancellation module 23 may use a feedback noise cancellation manner, a feed-forward noise cancellation manner, or a combined feed-forward and feedback noise cancellation manner.
  • the active noise cancellation module 23 is enabled only when the sound pressure level of the ambient sound reaches 60dBA.
  • the active noise cancellation module 23 may be set with various noise cancellation levels; for example, ambient sound pressure levels of 60dBA, 70dBA, 80dBA and 90dBA each correspond to a different noise cancellation level. The higher the sound pressure level of the ambient sound, the higher the noise cancellation level.
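  • as an illustrative sketch of that mapping (the 60/70/80/90dBA thresholds are the example values above; the function name and the level numbering are hypothetical, not part of the disclosure):
```python
def select_anc_level(spl_dba: float) -> int:
    """Map the ambient sound pressure level (dBA) to a noise cancellation
    level; 0 means the active noise cancellation module stays disabled."""
    thresholds = (60.0, 70.0, 80.0, 90.0)  # example thresholds from the description
    return sum(spl_dba >= t for t in thresholds)  # 0 = off, 1..4 = increasing strength
```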
  • the ambient sound adjustment module 24 is configured to adjust the ambient sound signal, and output the adjusted ambient sound signal to the speaker 30.
  • the ambient sound adjustment module 24 includes the following submodules: a wind noise suppression submodule 241, a voice enhancement submodule 242, a dynamic range control submodule 243, and an EQ processing submodule 244.
  • the wind noise suppression submodule 241 is mainly configured to filter the wind noise in the ambient sound signal.
  • wind noise is mainly concentrated in very low frequency bands. Once strong wind noise is detected, different filters may be applied to reduce the impact of the wind noise on the user listening to the audio signal.
  • the wind noise suppression submodule 241 may be disabled.
  • the voice enhancement submodule 242 is mainly configured to enhance the voice part in the ambient sound signal, suppress and reduce a noise interference, and improve a signal-to-noise ratio of the voice part, so that the user can hear the outside voice more clearly.
  • when the user is in a talking state, the voice enhancement submodule 242 is enabled.
  • the voice enhancement submodule 242 may perform enhancement processing to a voice signal in the ambient sound signal and perform suppression processing to an ambient noise, thereby realizing a voice enhancement function.
  • the dynamic range control submodule 243 is mainly configured to perform dynamic range adjustment on the ambient sound signal; for example, some impulse sounds may first be compressed and then fed to the headphone, so as to avoid severe distortion at the headphone.
  • the dynamic range control submodule 243 is always in an enabled state in all circumstances, so as to prevent a burst sound from startling and damaging the user.
  • when the user is in the outdoor environment, the dynamic range control submodule 243 must be enabled; when the user is in the indoor environment, because there are relatively few burst sounds indoors, the dynamic range control submodule 243 may be disabled.
  • the EQ processing submodule 244 is mainly configured to enhance or attenuate the ambient sound in different frequency bands, so as to optimize the listening experience of the ambient sound. In a specific example, if parts of the ambient sound need to be heard, the EQ processing submodule 244 is enabled to perform compensation enhancement on the ambient sound in some frequency bands.
  • FIG. 2 illustrates an adaptive audio control device based on scenario identification provided by another embodiment of the disclosure.
  • the embodiment of FIG. 2 has all the structures and functions provided by the embodiment of FIG. 1 .
  • the main difference is that the device in the embodiment of FIG. 2 further includes a bone conduction microphone 14.
  • the output end of the bone conduction microphone 14 is connected with the control module 21.
  • FIG. 3 illustrates an adaptive audio control device based on scenario identification provided by yet another embodiment of the disclosure.
  • the embodiment of FIG. 3 has all the structures and functions provided by the embodiment of FIG. 1 .
  • the main difference is that the device in the embodiment of FIG. 3 further includes an infrared proximity sensor 15 towards the front of the user.
  • the output end of the infrared proximity sensor 15 is connected with the control module 21.
  • the ambient sound adjustment module 24 includes the following submodules: the wind noise suppression submodule 241, the voice enhancement submodule 242, the dynamic range control submodule 243, and the EQ processing submodule 244. In other embodiments, the ambient sound adjustment module 24 may include any one or a combination of the above submodules, or other submodules.
  • the device may also be equipped with a passive noise cancellation structure made of a sound insulating material.
  • the passive noise cancellation is physical noise cancellation: a shell or an earmuff insulates the ear canal from outside noise. This passive noise cancellation method has a relatively good effect on noise above the medium-high frequency of about 1kHz.
  • the device may also be equipped with a manual volume adjustment device, a manual noise cancellation mode switch device, a manual ambient sound adjustment device, and other structures, so as to provide more selection modes for the user.
  • the left and right headphones are respectively equipped with the ambient sound acquisition microphones.
  • for example, only the left headphone is equipped with the ambient sound acquisition microphone.
  • only the right headphone is equipped with the ambient sound acquisition microphone.
  • a microphone is set on the headphone shell and configured to acquire the sound of surrounding environment of the user.
  • a microphone is set in the headphone and configured to acquire the ambient sound heard at an ear of the user.
  • the control module 21 includes a memory and a processor.
  • the memory stores a computer program. When executed by the processor, the computer program implements the following steps.
  • the usage scenario of the user may be analyzed according to acceleration data output by an acceleration sensor.
  • geographic location data acquired by the location module 12 may also be acquired, and the usage scenario of the user is analyzed by means of the acceleration data and the geographic location data of the user.
  • a sound pressure level of the ambient sound signal acquired by the ambient sound acquisition microphone 13 is calculated, and an energy distribution and a spectral distribution of the ambient sound signal are analyzed.
  • Components of the ambient sound may be obtained by analyzing the energy distribution and the spectral distribution of the ambient sound signal, for example, whether the ambient sound contains a voice component, a warning sound component like an alarm honk, a wind noise component, and so on, and the energy of these components.
  • the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the control module 21 may automatically adjust a noise cancellation parameter of the active noise cancellation module 23 according to the usage scenario.
  • the active noise cancellation module 23 is set with a plurality of noise cancellation modes in advance, and each noise cancellation mode corresponds to different noise cancellation parameters.
  • the control module 21 automatically adjusts the noise cancellation mode of the active noise cancellation module 23 according to the usage scenario, so as to achieve different noise cancellation levels or effects.
  • the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal. That is, the control module 21 comprehensively considers the sound pressure level of the ambient sound signal, the components of the ambient sound, and the energy of each component, and controls the three modules accordingly, so that the audio control adapts to the usage scenario of the user; adaptive audio control according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal is thereby realized.
  • that the usage scenario of the user is analyzed at S101 includes the following operation.
  • an environment type of the user is determined according to the geographic location data.
  • the environment types include an indoor environment and a road environment.
  • a movement speed of the user is calculated according to the geographic location data, and a cadence value of the user is calculated according to the acceleration data.
  • a motion mode of the user is determined according to the movement speed and the cadence value.
  • the motion modes may include any one of the following: a stationary mode, a mode of walking on road, and a transportation mode.
  • the motion modes may include a fitness mode.
  • the fitness mode contains running, cycling and other fitness methods.
  • the usage scenario of the user may be analyzed only according to the acquired acceleration data of the user instead of by acquiring the geographic location data.
  • the motion mode of the user may be determined only according to the cadence value of the user.
  • it is also possible to calculate the movement speed of the user from the geographic location data only, without using the geographic location data to determine the environment type.
  • the usage scenario of the user may also include a talking state of the user.
  • the control module 21 determines whether the user is in the talking mode according to a signal output by the bone conduction microphone 14 or the infrared proximity sensor 15.
  • the usage scenario referred to in the embodiments of the disclosure at least includes the current motion mode of the user. Furthermore, the usage scenario may also include the environment type of the user and/or the talking state of the user, that is, whether the user is in the talking mode.
  • the control module 21 may determine the environment type of the user according to the geographic location data.
  • the environment types include the indoor environment and the road environment.
  • the location module 12 may include a GPS module or a Beidou module, for example.
  • the location module first acquires information of the specific real-time location of the user, and then determines the environment type of the user according to the information of the specific real-time location.
  • the environment types may also be divided in more detail, so as to achieve a more flexible and intelligent audio control effect.
  • the outdoor environment type is divided into a road environment type and a non-road outdoor environment type.
  • the non-road outdoor environment type is divided into an open-air trade and catering fair type, an outdoor park and green space type, and so on.
  • the environment types may be divided into the following types:
  • the environment types in the embodiments of the disclosure may be divided into “indoor” and “outdoor”.
  • the "outdoor” environment type may further be subdivided into "outdoor sports ground”, “outdoor park and green space” and “outdoor fair”, for example.
  • the environment type of the user may be determined according to the selection of the user.
  • the environment type of the user may also be determined according to the geographic location data in combination with the energy distribution and the spectral distribution of the ambient sound signal; for example, after determining, according to the energy distribution and the spectral distribution of the ambient sound signal, that the ambient sound signal contains a very strong wind noise component, it can be accurately determined, in combination with the geographic location data, that the user is in the outdoor environment.
  • the control module 21 may calculate the movement speed of the user according to the geographic location data, and calculate the cadence value of the user according to the acceleration data.
  • the motion mode of the user is determined according to the movement speed and the cadence value.
  • the running speeds of cars, ships, trains and other transports are usually greater than 30km/h.
  • the second speed threshold may be set to 30km/h. For example, if the monitored movement speed of the user is about 60km/h, it can be determined that the user is in the transportation mode.
  • the intervals of the movement speed and the cadence value may also be divided in more detail, so as to determine the movement state of the user in detail.
  • the motion mode of the user may also be divided in more detail; for example, the motion modes may also be divided into a stationary mode, a taking-a-walk mode, a fast-walking mode, a running mode, a cycling mode, and so on.
  • for example, if the cadence value of the user is in the interval of 2.5 steps/s to 5 steps/s, it is determined that the user is in the running mode.
  • the motion modes in the present embodiment may be divided into “motion” and “non-motion”.
  • the "motion” mode may also be further subdivided into “running”, “swimming” and “cycling” for example.
  • the specific motion mode of the user may be determined according to the selection of the user or the output of the related sensor.
  • the usage scenario of the user may be analyzed only according to the acquired acceleration data of the user instead of by acquiring the geographic location data.
  • the motion mode of the user may be determined only according to the cadence value of the user.
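  • a minimal sketch of such a motion-mode classifier is shown below; the 30km/h transport threshold and the walking (0.5-2.5 steps/s) and running (2.5-5 steps/s) cadence intervals come from the examples in this description, while the function name, the 1km/h stationary speed threshold, and the returned labels are assumptions for illustration:
```python
from typing import Optional

def classify_motion_mode(speed_kmh: Optional[float], cadence_sps: float) -> str:
    """Estimate the motion mode from movement speed (km/h, may be unavailable
    indoors) and cadence (steps per second)."""
    if speed_kmh is not None and speed_kmh > 30.0:   # second speed threshold: transport
        return "transportation"
    if 2.5 <= cadence_sps <= 5.0:                    # running cadence interval
        return "running"
    if 0.5 <= cadence_sps < 2.5:                     # walking cadence interval
        return "walking"
    if speed_kmh is None or speed_kmh < 1.0:         # assumed first speed threshold
        return "stationary"
    return "unknown"
```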
  • the control module 21 may determine whether the user is in the talking state according to whether the bone conduction microphone 14 picks up a voice signal.
  • the control module 21 may determine, according to the signal output by the infrared proximity sensor 15, whether there are other people within a certain distance range in front of the user; if so, it is determined that the user is in the talking state. Alternatively, if other people are detected, the control module 21 may determine whether the user is in the talking state by also considering the environment type and the motion mode, for example, when the user is at an open-air catering fair.
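  • one possible heuristic combining these cues is sketched below; the 1.5m proximity range, the function name, and the label strings are assumptions, not values given by the disclosure:
```python
from typing import Optional

def is_talking(bone_mic_voice: bool, ir_distance_m: Optional[float],
               environment: str = "", motion_mode: str = "") -> bool:
    """Infer the talking state: a voice picked up by the bone conduction
    microphone counts as direct evidence; otherwise a person detected close
    in front of the user (infrared proximity) plus a plausible context
    (e.g. stationary at an open-air fair) is used as a fallback heuristic."""
    if bone_mic_voice:
        return True
    person_in_front = ir_distance_m is not None and ir_distance_m < 1.5  # assumed range
    return person_in_front and environment == "open-air fair" and motion_mode == "stationary"
```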
  • the "usage scenario” referred to in the embodiments of the disclosure is a composite scenario.
  • the "usage scenario" at least includes the environment type of the user and the current motion mode of the user, and may further include the talking state of the user. For example, if the environment type of the user is "outdoor" and the motion mode is "motion", the "usage scenario" of the user is "outdoor motion". If the environment type of the user is "indoor" and the motion mode is "motion", the "usage scenario" of the user is "indoor motion". If the environment type of the user is "indoor", the motion mode is "static", and the talking state is "in the talking mode", the "usage scenario" of the user is "indoor static talking".
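  • as a trivial illustration of how such a composite label could be assembled (the function and label names are placeholders):
```python
def usage_scenario(environment: str, motion_mode: str, talking: bool) -> str:
    """Combine environment type, motion mode, and talking state into a
    composite usage scenario label such as "indoor static talking"."""
    parts = [environment, motion_mode]
    if talking:
        parts.append("talking")
    return " ".join(parts)

# e.g. usage_scenario("indoor", "static", True) -> "indoor static talking"
```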
  • the usage scenario of the user may be estimated according to the acceleration data. For example, if the cadence value of the user is in the interval of 2.5steps/s to 5steps/s, it is determined that the user is in the running mode and in the road environment.
  • the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the sound pressure level of the ambient sound signal here may use the equivalent continuous A sound level.
  • the usage scenario includes the current environment type of the user and the current motion mode of the user.
  • the function F(t) is defined to describe the energy distribution and the spectral distribution of the 20Hz-20kHz ambient sound at the position of the user at a certain moment.
  • F(t) further includes F0(t) and Q(t).
  • F0(t) represents the frequency point corresponding to the current maximum noise peak, and Q(t) represents the quality factor of the current ambient sound.
  • the greater the value of Q, the more concentrated the energy distribution of the ambient sound and the narrower its frequency content; this corresponds to non-steady or burst impulse noise in the environment, such as a honk, a knocking noise, an impact sound, or other burst sounds.
  • the smaller the value of Q, the more widely the ambient sound is spread across frequency bands and the more uniform the noise energy distribution; this corresponds to relatively stable steady-state noise, for example a restaurant at dinner time, where the background noise mainly comes from talking and the light collision of tableware and its dominant frequency F0 is mainly between 200Hz and 300Hz.
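  • one way to estimate F0(t) and Q(t) from a short frame of the ambient sound signal is sketched below (a simple FFT-peak approach; the window, the -3dB-bandwidth definition of Q, and the 48kHz sample rate are assumptions for illustration):
```python
import numpy as np

def estimate_f0_and_q(frame: np.ndarray, sample_rate: int = 48000):
    """Estimate the dominant noise frequency F0 and a quality factor Q,
    taken here as F0 divided by the -3 dB bandwidth around the spectral peak."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak = int(np.argmax(spectrum[1:]) + 1)            # skip the DC bin
    f0 = freqs[peak]
    above = np.where(spectrum >= spectrum[peak] / np.sqrt(2.0))[0]
    bandwidth = max(freqs[above[-1]] - freqs[above[0]], freqs[1])
    return f0, f0 / bandwidth                          # large Q -> narrow, burst-like noise
```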
  • the control module 21 determines the usage scenario of the user and the level of the ambient sound according to the thresholds and the queried or received real-time values of various sensor modules (not limited to the ambient sound acquisition microphone 13, the acceleration sensor 11, the location module 12, the bone conduction microphone 14/the infrared proximity sensor 15, and so on), and acquires the energy distribution and the spectral distribution situation of the ambient sound, namely obtaining P(t), M(t), L(t), F(t) and S(t).
  • the control module 21 queries the function Action(t) in real time, automatically generates control instructions according to the variables of Action(t), and sends the corresponding instructions to the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 respectively, so that each module makes a response matching the current scenario and ambient sound signal, thereby automatically adjusting the playback effects to adapt to the current playback environment.
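  • a schematic of this control loop might look as follows; the state fields mirror the variables named above (P(t), M(t), L(t), F(t) split into F0/Q, and S(t)), while the class, field, and key names are hypothetical:
```python
from dataclasses import dataclass

@dataclass
class SceneState:
    """Snapshot of the quantities queried by the control module."""
    spl_dba: float      # P(t): sound pressure level of the ambient sound
    motion_mode: str    # M(t): stationary / walking / transportation ...
    environment: str    # L(t): indoor / road / other outdoor
    f0_hz: float        # F0(t): dominant noise frequency
    q_factor: float     # Q(t): quality factor of the ambient sound
    talking: bool       # S(t): whether the user is in the talking mode

def action(state: SceneState) -> dict:
    """Generate illustrative control instructions for the volume adjustment,
    active noise cancellation, and ambient sound adjustment modules."""
    return {
        # count how many of the example 60/70/80/90 dBA thresholds are exceeded
        "anc_level": sum(state.spl_dba >= t for t in (60.0, 70.0, 80.0, 90.0)),
        "voice_enhancement": state.talking,
        "wind_noise_suppression": state.environment != "indoor",
        "dynamic_range_control": state.environment != "indoor",
    }
```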
  • the intervals to which the sound pressure level of the ambient sound signal belongs may include the following:
  • the intervals to which the sound pressure level of the ambient sound signal belongs may also be subdivided according to practical applications, and are not completely limited to this definition.
  • the intervals to which the movement speed of the user belongs may include the following, for example:
  • the intervals to which the movement speed of the user belongs may be divided in more detail, so as to help the control module 21 to determine accurately the usage scenario of the user in combination with the cadence value and the environment type of the user.
  • the intervals of the cadence value of the user may include the following:
  • the intervals to which the cadence value belongs may also be subdivided according to practical applications, and are not completely limited to this definition.
  • the motion mode of the user (including the transport used) may be comprehensively determined according to the specific environment type of the user, the interval of the movement speed of the user, and the interval of the cadence value of the user; if the energy distribution and the spectral distribution of the ambient sound signal are further considered, the motion mode of the user may be determined more accurately.
  • the above also shows that the embodiments of the disclosure may comprehensively analyze the geographic location data, the movement speed and the cadence value of the user, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, thereby automatically adjusting the playback effects according to the usage scenario.
  • the first usage scenario is that the user is in the road environment and in the walking mode. It is easy to understand that in this usage scenario, the motion mode of the user may be estimated from the acquired acceleration data alone, and the usage scenario of the user is then estimated. For example, if the acceleration data shows that the current cadence value of the user is in the interval of 0.5 steps/s to 2.5 steps/s, it is determined that the user is in the walking mode and in the road environment.
  • in this scenario, the ambient sounds are mainly road traffic noise and low-frequency ambient noise such as wind noise of varying intensity.
  • F0 of the ambient sound signal is usually near 100Hz, and the value of Q is relatively small, that is, the low-frequency noise is spread over a relatively wide band.
  • the sound level intensities vary depending on traffic conditions of different periods of time.
  • the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including: the wind noise suppression submodule 241 is controlled to perform suppressive filtering to a wind noise signal in the ambient sound signal.
  • the wind noise suppression submodule 241 may be, for example, a second-order high-pass filter whose cut-off frequency f0 is 300Hz.
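  • such a filter could be realized, for example, as a digital second-order Butterworth high-pass; the sketch below uses SciPy and assumes a 48kHz sample rate (both the library choice and the sample rate are assumptions):
```python
import numpy as np
from scipy.signal import butter, lfilter

def suppress_wind_noise(ambient: np.ndarray, sample_rate: int = 48000) -> np.ndarray:
    """Attenuate low-frequency wind noise with a 2nd-order high-pass filter
    whose cut-off frequency is 300 Hz, as in the example above."""
    b, a = butter(N=2, Wn=300.0, btype="highpass", fs=sample_rate)
    return lfilter(b, a, ambient)
```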
  • it is monitored whether the ambient sound signal contains a voice signal; if it does, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal. That is, in the first usage scenario, the voice enhancement submodule 242 is in a standby mode, and may be woken by a voice signal detected by the control module 21 in real time.
  • the dynamic range control submodule 243 is controlled to perform dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal.
  • when the sound pressure level of the ambient sound signal is less than or equal to 40dBA, the ambient sound basically does not include useful information, and light amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 40dBA and less than or equal to 50dBA, selective amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 50dBA and less than or equal to 60dBA, amplification and reduction processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 60dBA, it is determined that the environment is relatively noisy, and attenuation processing is performed on the ambient sound signal.
  • in this way, the user can enjoy music while maintaining a certain ability to monitor and sense the outside environment while moving.
  • the division (≤40dBA, 40dBA-50dBA, 50dBA-60dBA, >60dBA) of the interval of the sound pressure level of the ambient sound signal is just an example.
  • the division of the interval may be adjusted according to the actual condition.
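  • a sketch of this level-dependent processing is given below; the 40/50/60dBA boundaries are those in the example, while the function name and the returned labels are descriptive placeholders rather than prescribed processing steps:
```python
def dynamic_range_action(spl_dba: float) -> str:
    """Choose the dynamic range processing for the ambient sound signal
    according to its sound pressure level (walking-on-road scenario)."""
    if spl_dba <= 40.0:
        return "light_amplification"          # little useful information in the ambient sound
    if spl_dba <= 50.0:
        return "selective_amplification"
    if spl_dba <= 60.0:
        return "amplification_and_reduction"
    return "attenuation"                      # relatively noisy environment
```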
  • the EQ processing submodule 244 is controlled to perform the EQ compensation processing on the ambient sound signal, and output the ambient sound signal to the speaker 30 for playback.
  • the EQ compensation processing is performed to the voice signal band and the honk signal band in the ambient sound signal.
  • a noise cancellation level of the active noise cancellation module 23 is automatically adjusted according to the sound pressure level of the ambient sound signal.
  • the greater the sound pressure level of the ambient sound signal the higher the noise cancellation level of the active noise cancellation module 23, and the greater the degree of the active noise cancellation.
  • a noise cancellation signal generated by the active noise cancellation module 23 is output to the speaker 30.
  • the control module 21 may analyze, according to the energy distribution and the spectral distribution of the ambient sound signal, whether the ambient sound signal contains a certain warning prompt. For example, the ambient sound acquisition microphone 13 picks up ambient noise from t to t1. If frequency-domain analysis finds that pulse signals with frequencies of 500Hz-1500Hz and a quality factor Q much greater than 1 appear continuously or intermittently during this period, and that the average energy of the pulse signals is more than 10dB higher than that of the previous period, it is determined that the ambient sound signal contains a warning sound that the user needs to be aware of.
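  • the warning-sound check described above could be sketched as follows; the frame-based analysis, the 0.6 band-energy ratio used as a crude stand-in for "Q much greater than 1", and the way the previous-period energy is passed in are all assumptions for illustration:
```python
import numpy as np

def contains_warning_sound(frame: np.ndarray, prev_energy_db: float,
                           sample_rate: int = 48000) -> bool:
    """Detect honk-like warning sounds: energy concentrated in the
    500-1500 Hz band and at least 10 dB above the previous period."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = (freqs >= 500.0) & (freqs <= 1500.0)
    band_energy = spectrum[band].sum()
    band_energy_db = 10.0 * np.log10(band_energy + 1e-12)
    narrow_band = band_energy / (spectrum.sum() + 1e-12) > 0.6  # proxy for Q >> 1
    return narrow_band and (band_energy_db - prev_energy_db) > 10.0
```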
  • the control module 21 controls the active noise cancellation module 23 to perform active noise cancellation on the part of the ambient sound signal other than the warning prompt, and controls the dynamic range control submodule 243 to perform amplification processing on the warning prompt in the ambient sound signal, so as to ensure the safety and alertness of the user.
  • the working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker 30, so that the sound level intensities of the audio signal reaching the speaker 30 and the ambient sound signal reaching the speaker 30 keep the preset proportion.
  • the audio signal volume may be automatically controlled to become high, that is, when the outside environment is relatively noisy, the audio signal volume is turned up.
  • the audio signal volume may be automatically controlled to become low, that is, when the outside environment is relatively quiet, the audio signal volume is turned down, so as to ensure the hearing of the user.
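  • one way to keep the audio signal a fixed amount above the ambient sound reaching the speaker is sketched below; the 10dB offset and the clamping range are arbitrary example values, not ones given by the disclosure:
```python
def target_audio_level(ambient_spl_dba: float, preset_offset_db: float = 10.0,
                       min_dba: float = 55.0, max_dba: float = 85.0) -> float:
    """Return the target playback level so that the audio signal stays a
    preset number of dB above the ambient sound reaching the speaker,
    clamped to a safe listening range (all values illustrative)."""
    return min(max_dba, max(min_dba, ambient_spl_dba + preset_offset_db))
```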
  • the second usage scenario is that the user is in the road environment and in the transportation mode. In this scenario, the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including: it is monitored whether the ambient sound signal contains a voice signal; if it does, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal, and the EQ processing submodule 244 is triggered to perform EQ compensation processing on the voice signal in the ambient sound signal. That is, in the second usage scenario, the voice enhancement submodule 242 and the EQ processing submodule 244 are in the standby mode, and may be woken by a voice signal detected by the control module 21 in real time.
  • the active noise cancellation module 23 is controlled to perform the active noise cancellation processing on the ambient sound signal at the highest noise cancellation level. Alternatively, whether to enable the active noise cancellation module 23 is determined according to the sound pressure level of the ambient sound signal, and if it is enabled, its noise cancellation level is automatically adjusted according to the sound pressure level of the ambient sound signal.
  • the working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker 30, so that the sound level intensities of the audio signal reaching the speaker 30 and the ambient sound signal reaching the speaker 30 keep the preset proportion.
  • the audio signal volume may be automatically controlled to become high, that is, when the outside environment is relatively noisy, the audio signal volume is turned up.
  • the audio signal volume may be automatically controlled to become low, that is, when the outside environment is relatively quiet, the audio signal volume is turned down, so as to ensure the hearing of the user.
  • the wind noise suppression submodule 241 is controlled to be disabled.
  • when the user is in the road environment and in the transportation mode, it is monitored whether the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level or less than the preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit, the dynamic range control submodule 243 is triggered to perform the attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit, the dynamic range control submodule 243 is triggered to perform the amplification processing on the ambient sound signal.
  • the upper limit of the sound pressure level is 60dBA, for example.
  • the lower limit of the sound pressure level is 40dBA, for example.
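  • a simple decision rule matching this behaviour, using the example limits of 60dBA and 40dBA, is sketched below; the 6dB gain step is an assumption.

```python
def ambient_drc_gain_db(ambient_spl_dba, upper_dba=60.0, lower_dba=40.0, step_db=6.0):
    """Attenuate above the upper limit, amplify below the lower limit."""
    if ambient_spl_dba > upper_dba:
        return -step_db          # attenuation processing
    if ambient_spl_dba < lower_dba:
        return +step_db          # amplification processing
    return 0.0                   # leave the ambient sound unchanged
```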
  • when it is determined that the user is in the transportation mode, it is possible to further determine which kind of transport the user is taking. For example, it is possible to determine whether the user is cycling, taking a flight, taking a train, or taking a car according to the environment type, the height data in the geographic location data, the movement speed, and the cadence value. For example, if the movement speed of the user reaches 250km/h and the user is on a trunk railway, it is possible to determine that the user is taking a high-speed train.
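  • one hypothetical way to subdivide the transportation mode from these inputs is sketched below; apart from the 250km/h high-speed-train example, the thresholds and the on_railway and env_type inputs are illustrative assumptions.

```python
def classify_transport(speed_kmh, altitude_m, cadence_spm, on_railway, env_type):
    """Hypothetical subdivision of the transportation mode."""
    if on_railway and speed_kmh >= 250.0:
        return "high-speed train"          # the 250 km/h example from the text
    if on_railway:
        return "train"
    if altitude_m > 3000.0 and speed_kmh > 400.0:
        return "flight"
    if env_type == "road" and 15.0 <= speed_kmh <= 45.0 and cadence_spm > 40.0:
        return "cycling"                   # pedalling cadence separates cycling from a car
    return "car"
```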
  • the control module 21 may set a specific control strategy for the active noise cancellation module 23 and the ambient sound adjustment module 24 according to the features of the ambient sounds corresponding to the subdivided transports; for example, there are many honks when taking a car, while it is relatively quiet in a high-speed train, so the noise cancellation level of the active noise cancellation module 23 may be set to a relatively low level when the user is taking the high-speed train.
  • that is, the control module 21 may set the specific control strategy of the active noise cancellation module 23 and the ambient sound adjustment module 24 according to the features of the ambient sounds corresponding to the subdivided transports.
  • the user may talk with a companion, and there may be an external voice reminder, for example, a voice reminder about danger or about the arrival of a vehicle in the second usage scenario; therefore, in the two usage scenarios, the voice enhancement submodule 242 may be triggered to work by the voice signal detected in the ambient sound signal in real time.
  • in the third usage scenario, the user is in the indoor environment (for example, a residence, administrative offices of education, medical treatment or research institutions, or indoor areas of the catering trade and business) and in the stationary mode and the talking mode. The control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including the following:
  • the voice enhancement submodule 242 is controlled to perform the enhancement processing on the voice signal in the ambient sound signal.
  • the EQ processing submodule 244 is controlled to perform the EQ compensation processing on the voice signal band in the ambient sound signal, and output the ambient sound signal to the speaker 30 for playback.
  • the wind noise suppression submodule 241 and the dynamic range control submodule 243 are controlled to be disabled.
  • the active noise cancellation module 23 is controlled to be disabled or perform the active noise cancellation processing on the ambient sound signal.
  • the audio signal volume adjustment module 22 is controlled to turn down the volume or stop playing the audio signal.
  • the adaptive audio control device in the embodiments of the disclosure may have a plurality of ambient sound acquisition microphones 13, including a microphone which is set on the headphone shell and configured to acquire the sound of the surrounding environment of the user, and a microphone which is set in the headphone and configured to acquire the ambient sound heard at an ear of the user.
  • setting a plurality of microphones allows the ambient sound to be acquired more accurately, and reflects the ambient sound actually heard at an ear of the user.
  • this manner may be applied to the active noise cancellation function, and is beneficial for locating an ambient sound source and for regulating the proportion of the voice to the ambient sound.
  • this manner may also better optimize the amount of noise cancellation, which is beneficial for more intelligent adaptive audio control.
  • the control module 21 may also analyze the ambient sound signal acquired by the ambient sound acquisition microphone 13 to obtain the sound pressure level, the energy distribution, and the spectral distribution of the ambient sound signal, and realize a richer scenario analysis with reference to the data acquired by the acceleration sensor, the location module, and other sensors, so as to control the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 more finely and provide a better experience for the user.
  • the adaptive audio control device based on scenario identification may be realized by hardware, software or a combination of hardware and software.
  • the adaptive audio control method based on scenario identification includes the following steps.
  • the usage scenario of the user is analyzed.
  • the acceleration data of the user may be acquired, and the usage scenario of the user is analyzed according to the acceleration data.
  • the geographic location data of the user may also be acquired, and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  • the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed.
  • the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module of the audio playback device is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the audio playback device is a headphone.
  • the ambient sound adjustment module 24 includes any one or a combination of the following submodules: the wind noise suppression submodule 241, the voice enhancement submodule 242, the dynamic range control submodule 243, and the EQ processing submodule 244.
  • that the usage scenario of the user is analyzed at S401 includes that: the environment type of the user is determined according to the geographic location data; the movement speed of the user is calculated according to the geographic location data; the cadence value of the user is calculated according to the acceleration data; and the motion mode of the user is determined according to the movement speed and the cadence value of the user.
  • the environment types include the indoor environment and the road environment;
  • the motion modes include any one of the following: the stationary mode, the walking mode, and the transportation mode.
  • if the movement speed is less than the first speed threshold and the cadence value is less than the first cadence value threshold, the user is in the stationary mode; if the movement speed is in the walking speed interval and the cadence value is in the walking cadence value interval, the user is in the walking mode; and if the movement speed is greater than the second speed threshold, the user is in the transportation mode.
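  • a minimal sketch of this decision logic, together with a simple threshold-crossing cadence estimate, is given below; the concrete speed thresholds and cadence intervals are assumptions, since only the structure of the comparison is fixed here.

```python
import numpy as np

def cadence_spm(accel_magnitude_g, fs_hz, threshold_g=1.2):
    """Steps per minute estimated by counting upward crossings of a threshold
    on the acceleration magnitude (expressed in g)."""
    above = accel_magnitude_g > threshold_g
    steps = np.count_nonzero(above[1:] & ~above[:-1])
    duration_s = len(accel_magnitude_g) / fs_hz
    return 60.0 * steps / duration_s

def motion_mode(speed_kmh, cadence,
                first_speed=1.0, first_cadence=20.0,
                walk_speed=(2.0, 7.0), walk_cadence=(60.0, 140.0),
                second_speed=10.0):
    if speed_kmh < first_speed and cadence < first_cadence:
        return "stationary"
    if walk_speed[0] <= speed_kmh <= walk_speed[1] and \
            walk_cadence[0] <= cadence <= walk_cadence[1]:
        return "walking"
    if speed_kmh > second_speed:
        return "transportation"
    return "undetermined"
```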
  • the audio playback device further includes the bone conduction microphone or the infrared proximity sensor.
  • the usage scenario of the user also includes the talking state of the user.
  • the method further includes the following step of determining, according to the signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • that the ambient sound signal of the surrounding environment of the user is acquired includes that: the ambient sound signal of the real-time location of the user is acquired, and the ambient sound signal heard at an ear of the user is acquired.
  • the operation that the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • that the dynamic range control submodule 243 performs the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that: when the sound pressure level of the ambient sound signal is greater than 40dBA and less than or equal to 50dBA, the amplification processing is performed to the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 60dBA, the attenuation processing is performed to the ambient sound signal.
  • performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on the voice signal band and the honk signal band in the ambient sound signal.
  • the noise cancellation level of the active noise cancellation module 23 is adjusted according to the sound pressure level of the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • when the user is in the road environment and in the transportation mode, the wind noise suppression submodule 241 is controlled to be disabled.
  • when the user is in the road environment and in the transportation mode, it is monitored whether the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level or less than the preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit, the dynamic range control submodule 243 is triggered to perform the attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit, the dynamic range control submodule 243 is triggered to perform the amplification processing on the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the wind noise suppression submodule 241 and the dynamic range control submodule 243 are controlled to be disabled.
  • each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, the module, the program segment or portion of the code includes one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two consecutive blocks may be executed substantially in parallel, and may sometimes be executed in a reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of the blocks in the block diagrams and/or flowcharts may be implemented with a dedicated hardware-based device that performs the specified function or action, or it may be implemented by a combination of the dedicated hardware and a computer instruction.
  • the computer program product provided by the embodiment of the disclosure includes a computer readable storage medium storing the program code, and the program code includes instructions for executing the method described in the above method embodiment. For specific implementation, reference may be made to the method embodiment, and details will not be repeated herein.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division and may be other division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
  • when the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the disclosure essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product.
  • the software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the disclosure.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.


Abstract

The present application discloses an adaptive audio control method and system based on scenario identification. The method comprises: acquiring acceleration data of a user, and analyzing a use scenario of the user according to the acceleration data; acquiring an ambient sound signal of the surrounding environment of the user, calculating a sound pressure level of the ambient sound signal, and analyzing an energy distribution and a spectral distribution of the ambient sound signal; and controlling, according to the use scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, an audio signal volume, an active noise cancellation level, and regulation of the ambient sound signal of an audio playback device. The method further enables acquisition of geographic location data of the user and uses it, together with the acceleration data, to analyze the use scenario of the user. By adoption of the above solution, more flexible and convenient audio playback control can be realized.

Description

    TECHNICAL FIELD
  • The application relates to an electroacoustic transformation technology, and in particular to an adaptive audio control device and method based on scenario identification.
  • BACKGROUND
  • In the conventional art, a user sometimes uses an audio playback device in a noise environment. In order to solve the noise problem, an audio playback device with a function of passive noise cancellation/active noise cancellation appears, for example, a noise-canceling headphone, so as to eliminate the effect of noises on the user. The inventor finds that eliminating only the noise cannot satisfy the user's requirements for playback effects; the user hopes that the audio playback device is more intelligent, and can automatically regulate the playback effects to adapt to the current playback environment.
  • In the acoustic field, in order to reflect the subjective auditory feeling of human ears to external noise loudness well, the equivalent continuous A sound level is usually used to evaluate environmental noises. When the environmental noise is less than 50dBA, people think that the environment is relatively quiet; when the noise is greater than 80dBA, people feel the environment is noisy; when the noise reaches 120dBA, people will find it is unbearable. When people are in a noise environment where the noise is greater than 90dBA, the possibility of hearing impairment is significantly higher.
  • So, it is necessary to provide an adaptive audio playback control solution.
  • SUMMARY
  • The disclosure aims to provide an adaptive audio control method based on scenario identification, so as to automatically regulate the playback effects according to a usage scenario of a user.
  • According to the first aspect of the disclosure, an adaptive audio control device based on scenario identification is provided, which includes: an ambient sound acquisition microphone, an acceleration sensor, a location module, a control module, an audio signal volume adjustment module, an active noise cancellation module, and an ambient sound adjustment module. Output ends of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module are connected with a speaker respectively.
  • The control module includes a memory and a processor. The memory stores a computer program. When executed by the processor, the computer program implements the following steps:
    • a usage scenario of a user is analyzed according to acceleration data output by the acceleration sensor and geographic location data output by the location module;
    • a sound pressure level (SPL) of an ambient sound signal acquired by the ambient sound acquisition microphone is calculated, and an energy distribution and a spectral distribution of the ambient sound signal are analyzed;
    • the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • In an implementation mode, the ambient sound adjustment module includes any one or a combination of the following submodules: a wind noise suppression submodule, a voice enhancement submodule, a dynamic range control submodule, and an EQ processing submodule.
  • In an implementation mode, the operation that the usage scenario of the user is analyzed according to the acceleration data output by the acceleration sensor and the geographic location data output by the location module includes that:
    • an environment type of the user is determined according to the geographic location data;
    • a movement speed of the user is calculated according to the geographic location data;
    • a cadence value of the user is calculated according to the acceleration data; and
    • a motion mode of the user is determined according to the movement speed and the cadence value.
  • In an implementation mode, the environment types include an indoor environment and a road environment; the motion modes include any one of: a stationary mode, a walking mode, or a transportation mode.
  • In an implementation mode, if the movement speed is less than a first speed threshold and the cadence value is less than a first cadence value threshold, the user is in the stationary mode;
    if the movement speed is in a walking speed interval and the cadence value is in a walking cadence value interval, the user is in the walking mode;
    if the movement speed is greater than a second speed threshold, the user is in the transportation mode.
  • In an implementation mode, the device further includes a bone conduction microphone or an infrared proximity sensor. The usage scenario of the user also includes a talking state of the user.
  • When executed by the processor, the computer program implements the following steps:
    it is determined, according to a signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • In an implementation mode, there are a plurality of ambient sound acquisition microphones, including the microphones for acquiring the ambient sound of a real-time location of the user and the microphones for acquiring the ambient sound heard at an ear of the user.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the walking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal includes that:
    • the wind noise suppression submodule is controlled to perform suppressive filtering to a wind noise signal in the ambient sound signal;
    • it is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule is triggered to perform enhancement processing on the voice signal in the ambient sound signal;
    • the dynamic range control submodule is controlled to perform dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal;
    • the EQ processing submodule is controlled to perform EQ compensation processing on the ambient sound signal; working parameters of the audio signal volume adjustment module are controlled according to the sound pressure level of the ambient sound signal reaching a speaker, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep a preset proportion.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the walking mode, that the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that:
    • when the sound pressure level of the ambient sound signal is greater than 40dBA and less than or equal to 50dBA, amplification processing is performed to the ambient sound signal; and
    • when the sound pressure level of the ambient sound signal is greater than 60dBA, attenuation processing is performed to the ambient sound signal.
  • In an implementation mode, performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on a voice signal band and a honk signal band in the ambient sound signal.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the walking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, a noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the transportation mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • it is monitored whether the ambient sound signal contains the voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule is triggered to perform enhancement processing on the voice signal in the ambient sound signal, and the EQ processing submodule is triggered to perform EQ compensation processing on the voice signal in the ambient sound signal;
    • the active noise cancellation module is controlled to perform active noise cancellation processing according to a highest noise cancellation level; or, it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, the noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal; and
    • the working parameters of the audio signal volume adjustment module are controlled according to the sound pressure level of the ambient sound signal reaching the speaker, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep the preset proportion.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the transportation mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the wind noise suppression submodule is controlled to be disabled; and/or
    • it is monitored whether the sound pressure level of the ambient sound signal is greater than a preset upper limit of the sound pressure level or less than a preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level, the dynamic range control submodule is triggered to perform attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit of the sound pressure level, the dynamic range control submodule is triggered to perform amplification processing on the ambient sound signal.
  • In an implementation mode, if the usage scenario is that the user is in the indoor environment and in the stationary mode and the talking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the voice enhancement submodule is controlled to perform the enhancement processing on the voice signal in the ambient sound signal;
    • the EQ processing submodule is controlled to perform the EQ compensation processing on the voice signal band in the ambient sound signal;
    • the active noise cancellation module is controlled to be disabled or perform the active noise cancellation processing on the ambient sound signal; and
    • the audio signal volume adjustment module is controlled to turn down the volume or stop playing the audio signal.
  • In an implementation mode, if the usage scenario is that the user is in the indoor environment and in the stationary mode and the talking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    the wind noise suppression submodule and the dynamic range control submodule are controlled to be disabled.
  • In an implementation mode, the device is a headphone.
  • According to the second aspect of the disclosure, an adaptive audio control method based on scenario identification is provided, which includes the following steps:
    • the acceleration data and the geographic location data of the user are acquired, and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data;
    • the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and
    • the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module of the audio playback device is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • In an implementation mode, the ambient sound adjustment module includes any one or a combination of the following submodules: the wind noise suppression submodule, the voice enhancement submodule, the dynamic range control submodule, and the EQ processing submodule.
  • In an implementation mode, the operation that the usage scenario of the user is analyzed according to the acceleration data and the geographic location data includes that:
    • the environment type of the user is determined according to the geographic location data;
    • the movement speed of the user is calculated according to the geographic location data;
    • the cadence value of the user is calculated according to the acceleration data; and
    • the motion mode of the user is determined according to the movement speed and the cadence value.
  • In an implementation mode, the environment types include the indoor environment and the road environment; the motion modes include any one of the following: the stationary mode, the walking mode, and the transportation mode.
  • In an implementation mode, if the movement speed is less than the first speed threshold and the cadence value is less than the first cadence value threshold, the user is in the stationary mode;
    if the movement speed is in the walking speed interval and the cadence value is in the walking cadence value interval, the user is in the walking mode;
    if the movement speed is greater than the second speed threshold, the user is in the transportation mode.
  • In an implementation mode, the audio playback device further includes the bone conduction microphone or the infrared proximity sensor. The usage scenario of the user also includes the talking state of the user. The method further includes the following step:
    it is determined, according to the signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • In an implementation mode, that the ambient sound signal of the surrounding environment of the user is acquired includes that: the ambient sound signal of the real-time location of the user is acquired, and the ambient sound signal heard at an ear of the user is acquired.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the walking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the wind noise suppression submodule is controlled to perform the suppressive filtering to the wind noise signal in the ambient sound signal;
    • it is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule is triggered to perform the enhancement processing on the voice signal in the ambient sound signal;
    • the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal;
    • the EQ processing submodule is controlled to perform EQ compensation processing on the ambient sound signal; the working parameters of the audio signal volume adjustment module are controlled according to the sound pressure level of the ambient sound signal reaching the speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep the preset proportion.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the walking mode, that the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that:
    • when the sound pressure level of the ambient sound signal is greater than 40dBA and less than or equal to 50dBA, the amplification processing is performed to the ambient sound signal;
    • when the sound pressure level of the ambient sound signal is greater than 60dBA, the attenuation processing is performed to the ambient sound signal.
  • In an implementation mode, performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on the voice signal band and the honk signal band in the ambient sound signal.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the walking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, the noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the transportation mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • it is monitored whether the ambient sound signal contains the voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule is triggered to perform enhancement processing on the voice signal in the ambient sound signal, and the EQ processing submodule is triggered to perform EQ compensation processing on the voice signal in the ambient sound signal;
    • the active noise cancellation module is controlled to perform active noise cancellation processing according to a highest noise cancellation level; or, it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, the noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal;
    • the working parameters of the audio signal volume adjustment module are controlled according to the sound pressure level of the ambient sound signal reaching the speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep the preset proportion.
  • In an implementation mode, if the usage scenario is that the user is in the road environment and in the transportation mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the wind noise suppression submodule is controlled to be disabled; and/or
    • it is monitored whether the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level or less than the preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level, the dynamic range control submodule is triggered to perform the attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit of the sound pressure level, the dynamic range control submodule is triggered to perform the amplification processing on the ambient sound signal.
  • In an implementation mode, if the usage scenario is that the user is in the indoor environment and in the stationary mode and the talking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the voice enhancement submodule is controlled to perform the enhancement processing on the voice signal in the ambient sound signal;
    • the EQ processing submodule is controlled to perform the EQ compensation processing on the voice signal band in the ambient sound signal;
    • the active noise cancellation module is controlled to be disabled or perform the active noise cancellation processing on the ambient sound signal;
    • the audio signal volume adjustment module is controlled to turn down the volume or stop playing the audio signal.
  • In an implementation mode, if the usage scenario is that the user is in the indoor environment and in the stationary mode and the talking mode, the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    the wind noise suppression submodule and the dynamic range control submodule are controlled to be disabled.
  • In an implementation mode, the audio playback device is a headphone.
  • According to the third aspect of the disclosure, a method for controlling an audio playback device is presented, which includes the following steps: the acceleration data of the user is acquired, and the usage scenario of the user is analyzed according to the acceleration data; the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device are controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • In an implementation mode, the method further includes the following steps: the geographic location data of the user is acquired; and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  • According to the fourth aspect of the disclosure, a system for controlling an audio playback device is presented, which includes: one or more processors; and a memory which is coupled to at least one of the one or more processors. Computer program instructions are stored in the memory. When the computer program instructions are executed by the at least one processor, the system performs the method for controlling an audio playback device. The method includes the following operations: the acceleration data of the user is acquired, and the usage scenario of the user is analyzed according to the acceleration data; the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device are controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • According to the fifth aspect of the disclosure, a computer program product is presented. When executed by the processor, the computer program product may realize the method for controlling an audio playback device described in the third aspect of the disclosure.
  • The adaptive audio control device and method based on scenario identification provided by the disclosure may analyze the usage scenario of the user, and automatically regulate the playback effects according to the usage scenario.
  • To make the abovementioned purposes, features and advantages of the disclosure clearer and easier to understand, preferred embodiments will be described below in combination with the drawings in detail. The detailed description is as follows.
  • BRIEF DESCRIPTION OF DRAWINGS
  • In order to describe the technical solutions in the embodiments of the disclosure more clearly, the drawings required to be used in descriptions about the embodiments will be simply introduced below. It is to be noted that the drawings in the following descriptions are only some embodiments of the disclosure, and shall not be understood as limits to a scope of the disclosure. Those of ordinary skill in the art may further obtain other drawings according to these drawings without creative work.
    • FIG. 1 illustrates a block diagram of an adaptive audio control device based on scenario identification provided by an embodiment of the disclosure.
    • FIG. 2 illustrates a block diagram of an adaptive audio control device based on scenario identification provided by another embodiment of the disclosure.
    • FIG. 3 illustrates a block diagram of an adaptive audio control device based on scenario identification provided by yet another embodiment of the disclosure.
    • FIG. 4 illustrates a flowchart of an adaptive audio control method based on scenario identification provided by an embodiment of the disclosure.
    DETAILED DESCRIPTION
  • Different exemplary embodiments of the disclosure will be described below in combination with the drawings in detail. It is to be noted that unless otherwise specified, relative placements, numerical expressions and numerical values of the parts and the steps described in the embodiments do not form any limit to the scope of the disclosure.
  • The following description of at least one exemplary embodiment is merely illustrative, and shall in no way be construed as any limitation on the disclosure or its application or use.
  • Techniques, methods and apparatus known to those of ordinary skill in the conventional art may not be discussed in detail, but under appropriate circumstances, the techniques, methods and apparatus should be considered as part of the description.
  • In all of the examples shown and discussed herein, any specific values are to be construed as illustrative only and not as a limitation. Thus, other examples of the exemplary embodiments may have different values.
  • It should be noted that similar reference numerals and letters indicate similar items in the following figures. Therefore, once an item is defined in one figure, it is not required to be further discussed in the subsequent figures.
  • The disclosure presents an adaptive audio control device based on scenario identification. The device may be a headphone, a loudspeaker box, or other electronic devices capable of playing an audio signal. The device may carry out wired communication or wireless communication with terminal devices like a cell phone and a computer, so as to play the audio signal of the terminal devices. The device may also store the audio signal, for example music, and the device may play the audio signal stored in it. The device may also be set in the terminal device, as a part of the terminal device.
  • Referring to FIG. 1, the adaptive audio control device based on scenario identification provided by the first embodiment of the disclosure includes: an ambient sound acquisition microphone 13, an acceleration sensor 11, a location module 12, a control module 21, an audio signal volume adjustment module 22, an active noise cancellation module 23, and an ambient sound adjustment module 24.
  • The acceleration sensor 11 is configured to acquire acceleration data of a user, and output the acceleration data to the control module 21.
  • The location module 12 is configured to acquire geographic location data of the user, and output the geographic location data to the control module 21.
  • After the audio signal volume adjustment module 22 adjusts the volume of the audio signal, the audio signal is input in a speaker 30 for playback.
  • The ambient sound acquisition microphone 13 is configured to pick up an ambient sound signal, and feed the picked-up ambient sound signal to the control module 21, the active noise cancellation module 23 and the ambient sound adjustment module 24 respectively. Output ends of the active noise cancellation module 23 and the ambient sound adjustment module 24 are connected with the speaker 30 respectively.
  • The control module 21 is connected with the audio signal volume adjustment module 22, the active noise cancellation module 23 and the ambient sound adjustment module 24 respectively, so as to control their working; for example, the control module 21 enables/disables a certain module or submodule, or adjusts parameters of a certain module or submodule.
  • The active noise cancellation module 23 is configured to generate a corresponding noise cancellation signal aiming at the ambient sound signal, and output the noise cancellation signal to the speaker 30. The noise cancellation signal and the ambient sound signal cancel each other in the ear canal of the user, so as to reduce the impact of the ambient sound on the user listening to the audio signal. The active noise cancellation module 23 may adopt a feedback noise cancellation manner, a feed-forward noise cancellation manner, or a noise cancellation manner combining feed-forward with feedback. In a specific example, the active noise cancellation module 23 is enabled only when the sound pressure level of the ambient sound reaches 60dBA. The active noise cancellation module 23 may be set with various noise cancellation levels; for example, when the sound pressure levels of the ambient sound reach 60dBA, 70dBA, 80dBA and 90dBA respectively, each of them corresponds to a noise cancellation level. The higher the sound pressure level of the ambient sound, the higher the noise cancellation level.
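  • A sketch of this level mapping is given below; labelling the levels 1 to 4 and capping the mapping at the 90dBA step are choices made for the example.

```python
def anc_level(ambient_spl_dba):
    """None while ANC stays disabled (< 60 dBA), otherwise a level from
    1 (lowest, 60 dBA) to 4 (highest, 90 dBA and above)."""
    if ambient_spl_dba < 60.0:
        return None
    return min(4, 1 + int((ambient_spl_dba - 60.0) // 10))
```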
  • The ambient sound adjustment module 24 is configured to adjust the ambient sound signal, and output the adjusted ambient sound signal to the speaker 30. The ambient sound adjustment module 24 includes the following submodules: a wind noise suppression submodule 241, a voice enhancement submodule 242, a dynamic range control submodule 243, and an EQ processing submodule 244.
  • The wind noise suppression submodule 241 is mainly configured to filter the wind noise in the ambient sound signal. The wind noises mainly concentrate in very low frequency bands. Once strong wind noises are detected, different filters may be set to deal with them, so as to reduce the impact of the wind noise on the user listening to the audio signal. In a specific example, when the user is in an outdoor environment, it can be determined, according to an energy distribution and a spectral distribution of the wind noises, whether it is needed to enable the wind noise suppression submodule 241. When the user is in an indoor environment, the wind noise suppression submodule 241 may be disabled.
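  • One possible realization consistent with this description is a plain high-pass filter that removes the very low frequency band where wind noise concentrates; in the sketch below, the 120Hz cutoff and the second-order Butterworth design are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def suppress_wind_noise(ambient_samples, fs_hz, cutoff_hz=120.0, order=2):
    """High-pass the ambient signal to remove the low-frequency wind-noise band."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0), btype="high")
    return lfilter(b, a, ambient_samples)
```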
  • The voice enhancement submodule 242 is mainly configured to enhance the voice part in the ambient sound signal, suppress and reduce noise interference, and improve the signal-to-noise ratio of the voice part, so that the user can hear the outside voice more clearly. In a specific example, when the user is in a talking state, the voice enhancement submodule 242 is enabled. In a specific example, when the user is in a state where it is necessary to hear an outside prompt voice, the voice enhancement submodule 242 is enabled. The voice enhancement submodule 242 may perform enhancement processing on a voice signal in the ambient sound signal and perform suppression processing on an ambient noise, thereby realizing a voice enhancement function.
  • The dynamic range control submodule 243 is mainly configured to perform dynamic range adjustment on the ambient sound signal; for example, it is possible to first compress some impulse sounds and then feed them to the headphone, so as to avoid severe distortion at the headphone. In a specific example, the dynamic range control submodule 243 is always in an enabled state in all circumstances, so as to prevent a burst sound from startling and damaging the user. In another specific example, when the user is in the outdoor environment, the dynamic range control submodule 243 must be enabled; when the user is in the indoor environment, because there are relatively few burst sounds in the indoor environment, the dynamic range control submodule 243 may be disabled.
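  • A minimal limiter of the kind described here, which compresses impulse sounds before they reach the speaker, could look like the following sketch; the -6dBFS ceiling and the absence of attack and release smoothing are simplifications.

```python
import numpy as np

def limit_bursts(samples, ceiling_dbfs=-6.0):
    """Scale down any sample whose magnitude exceeds the ceiling."""
    ceiling = 10.0 ** (ceiling_dbfs / 20.0)
    peaks = np.abs(samples)
    gain = np.where(peaks > ceiling, ceiling / (peaks + 1e-12), 1.0)
    return samples * gain
```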
  • The EQ processing submodule 244 is mainly configured to enhance or attenuate different frequency bands of the ambient sound, so as to optimize the listening experience of the ambient sound. In a specific example, if parts of the ambient sounds need to be heard, the EQ processing submodule 244 is enabled, so as to perform compensation enhancement on the ambient sounds in a part of the frequency bands.
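  • The band-wise compensation could, for example, be sketched in the frequency domain as follows; the voice band of 300Hz-3400Hz, the honk band of 350Hz-550Hz and the gain values are assumptions, since only the principle of enhancing or attenuating selected bands is stated here.

```python
import numpy as np

def eq_compensate(frame, fs_hz, bands=((300.0, 3400.0, 6.0), (350.0, 550.0, 4.0))):
    """Boost (or cut, with a negative gain) each (low_hz, high_hz, gain_db) band."""
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs_hz)
    for low_hz, high_hz, gain_db in bands:
        mask = (freqs >= low_hz) & (freqs <= high_hz)
        spec[mask] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n=len(frame))
```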
  • FIG. 2 illustrates an adaptive audio control device based on scenario identification provided by another embodiment of the disclosure. The embodiment of FIG. 2 has all the structures and functions provided by the embodiment of FIG. 1. The main difference is that the device in the embodiment of FIG. 2 further includes a bone conduction microphone 14. The output end of the bone conduction microphone 14 is connected with the control module 21.
  • FIG. 3 illustrates an adaptive audio control device based on scenario identification provided by yet another embodiment of the disclosure. The embodiment of FIG. 3 has all the structures and functions provided by the embodiment of FIG. 1. The main difference is that the device in the embodiment of FIG. 3 further includes an infrared proximity sensor 15 towards the front of the user. The output end of the infrared proximity sensor 15 is connected with the control module 21.
  • In the embodiments of FIG. 1, FIG. 2 and FIG. 3, the ambient sound adjustment module 24 includes the following submodules: the wind noise suppression submodule 241, the voice enhancement submodule 242, the dynamic range control submodule 243, and the EQ processing submodule 244. In other embodiments, the ambient sound adjustment module 24 may also include any one or a combination of the above submodules, or include other submodules.
  • In an embodiment, the device may also be equipped with a passive noise cancellation structure made of a sound insulating material. Passive noise cancellation is physical noise cancellation, which blocks outside noises from entering the ear canal by means of a shell or an earmuff. This passive noise cancellation method has a relatively good effect on medium-high frequency noises above 1kHz.
  • In an embodiment, the device may also be equipped with a manual volume adjustment device, a manual noise cancellation mode switch device, a manual ambient sound adjustment device, and other structures, so as to provide more selection modes for the user.
  • In the embodiments of FIG. 1, FIG. 2 and FIG. 3, there can be one or more ambient sound acquisition microphones 13. For example, the left and right headphones are respectively equipped with the ambient sound acquisition microphones. For example, only the left headphone is equipped with the ambient sound acquisition microphone. For example, only the right headphone is equipped with the ambient sound acquisition microphone. For example, a microphone is set on the headphone shell and configured to acquire the sound of surrounding environment of the user. For example, a microphone is set in the headphone and configured to acquire the ambient sound heard at an ear of the user. In an embodiment, there are a plurality of ambient sound acquisition microphones 13, including the microphones for acquiring the ambient sound of a real-time location of the user and the microphones for acquiring the ambient sound heard at an ear of the user.
  • In the adaptive audio control device based on scenario identification provided by the embodiment of the disclosure, the control module 21 includes a memory and a processor. The memory stores a computer program. When executed by the processor, the computer program implements the following steps.
  • At S101, a usage scenario of a user is analyzed.
  • According to some embodiments of the disclosure, the usage scenario of the user may be analyzed according to acceleration data output by an acceleration sensor.
  • According to other embodiments of the disclosure, at this step, geographic location data acquired by the location module 12 may also be obtained, and the usage scenario of the user is analyzed by means of the acceleration data and the geographic location data of the user.
  • At S102, a sound pressure level of an ambient sound signal acquired by the ambient sound acquisition microphone 13 is calculated, and an energy distribution and a spectral distribution of the ambient sound signal are analyzed. Components of the ambient sound may be obtained by analyzing the energy distribution and the spectral distribution of the ambient sound signal, for example, whether the ambient sound contains a voice component, a warning sound component such as an alarm or a honk, a wind noise component, and so on, as well as the energy of these components.
  • At S103, the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • The control module 21 may automatically adjust a noise cancellation parameter of the active noise cancellation module 23 according to the usage scenario. Alternatively, the active noise cancellation module 23 is set with a plurality of noise cancellation modes in advance, and each noise cancellation mode corresponds to different noise cancellation parameters. The control module 21 automatically adjusts the noise cancellation mode of the active noise cancellation module 23 according to the usage scenario, so as to achieve different noise cancellation levels or effects.
  • The control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal. That is, the control module 21 comprehensively considers the sound pressure level of the ambient sound signal, the components of the ambient sound, and the energy of each component, and controls the working of the three modules accordingly, so that the audio control adapts to the usage scenario of the user; in this way, adaptive audio control based on the usage scenario of the user, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal is realized.
  • According to some embodiments of the disclosure, that the usage scenario of the user is analyzed at S101 includes the following operations.
  • At S1011, an environment type of the user is determined according to the geographic location data. The environment types include an indoor environment and a road environment.
  • At S1012, a movement speed of the user is calculated according to the geographic location data, and a cadence value of the user is calculated according to the acceleration data. A motion mode of the user is determined according to the movement speed and the cadence value. The motion modes may include any one of the following: a stationary mode, a mode of walking on road, and a transportation mode. In another embodiment, the motion modes may include a fitness mode. The fitness mode covers running, cycling, and other fitness activities.
  • According to some other embodiments of the disclosure, the usage scenario of the user may be analyzed only according to the acquired acceleration data of the user, without acquiring the geographic location data. For example, the motion mode of the user may be determined only according to the cadence value of the user.
  • According to some other embodiments of the disclosure, the geographic location data may be used only to calculate the movement speed of the user, without being used to determine the environment type.
  • In the embodiment of FIG. 2 and the embodiment of FIG. 3, the usage scenario of the user may also include a talking state of the user. Specifically, the control module 21 determines whether the user is in the talking mode according to a signal output by the bone conduction microphone 14 or the infrared proximity sensor 15.
  • <About the usage scenario>
  • The usage scenario referred to in the embodiments of the disclosure at least includes the current motion mode of the user. Furthermore, the usage scenario may also include the environment type of the user and/or the talking state of the user, that is, whether the user is in the talking mode.
  • <Environment type>
  • At S1011, the control module 21 may determine the environment type of the user according to the geographic location data. The environment types include the indoor environment and the road environment.
  • The location module 12 may include a GPS module or a Beidou module, for example. When the user enables an adaptive scenario adjustment and playback function of the device, the location module first acquires information of the specific real-time location of the user, and then determines the environment type of the user according to the information of the specific real-time location.
  • In other embodiments, the environment types may also be divided in more detail, so as to achieve a more flexible and intelligent audio control effect. For example, the outdoor environment type is divided into a road environment type and a non-road outdoor environment type. The non-road outdoor environment type is divided into an open-air trade and catering fair type, an outdoor park and green space type, and so on.
  • In a specific example, the environment types may be divided into the following types:
    • environment type P1: urban arterial and sub-arterial roads, intercity and urban expressways, inland waterways and both sides thereof;
    • environment type P2: main lines of railway;
    • environment type P3: industrial production and warehouse logistics area;
    • environment type P4: industrial and commercial markets and mixed areas of catering trade;
    • environment type P5: residence, and administrative offices for education, medical treatment and research;
    • environment type P6: outdoor park and green space;
    • environment type P7: rehabilitation and recuperation areas.
  • The environment types in the embodiments of the disclosure may be divided into "indoor" and "outdoor". The "outdoor" environment type may further be subdivided into "outdoor sports ground", "outdoor park and green space" and "outdoor fair", for example. In the embodiments of the disclosure, the environment type of the user may be determined according to the selection of the user. In the embodiments of the disclosure, the specific environment type of the user may also be determined according to the geographic location data in combination with the energy distribution and the spectral distribution of the ambient sound signal; for example, if it is determined, according to the energy distribution and the spectral distribution of the ambient sound signal, that the ambient sound signal contains a very strong wind noise signal, it can be accurately determined, in combination with the geographic location data, that the user is in the outdoor environment.
  • <Motion mode>
  • At S1012, the control module 21 may calculate the movement speed of the user according to the geographic location data, and calculate the cadence value of the user according to the acceleration data. The motion mode of the user is determined according to the movement speed and the cadence value.
    (a) If the movement speed is less than a first speed threshold and the cadence value is less than a first cadence value threshold, the user is in the stationary mode.
      In an embodiment, the first cadence value threshold may be set to 0.5 steps/s, and the first speed threshold may be set to 0.2 m/s; that is, if the movement speed of the user is less than 0.2 m/s and the cadence value is less than 0.5 steps/s, the user is in the stationary mode.
    (b) If the movement speed is in a walking speed interval and the cadence value is in a walking cadence value interval, the user is in the walking mode.
      A normal walking speed interval of people is 1 m/s to 1.7 m/s, and a normal walking cadence value interval is 1.0 steps/s to 2.5 steps/s. In an embodiment, the walking speed interval may be set to 1 m/s to 1.7 m/s, and the walking cadence value interval may be set to 1.0 steps/s to 2.5 steps/s.
    (c) If the movement speed is greater than a second speed threshold, the user is in the transportation mode.
  • The running speeds of cars, ships, trains and other transports are usually greater than 30 km/h. In an embodiment, the second speed threshold may be set to 30 km/h. For example, if the monitored movement speed of the user is about 60 km/h, it can be determined that the user is taking a transport.
  • In other embodiments, the intervals of the movement speed and the cadence value may also be divided in more detail, so as to determine the movement state of the user in detail.
  • In other embodiments, the motion mode of the user may also be divided in more detail; for example, the motion modes may also be divided into a stationary mode, a taking-a-walk mode, a fast-walking mode, a running mode, a cycling mode, and so on.
  • In an embodiment, if the cadence value of the user is in the interval of 2.5 steps/s to 5 steps/s, it is determined that the user is in the running mode.
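  • As a non-limiting illustration only, the threshold logic described above may be expressed in the following Python-style sketch; the function name is hypothetical, and the thresholds simply mirror the example values given in this embodiment (0.2 m/s, 0.5 steps/s, 30 km/h, and the walking and running intervals).

    # Illustrative sketch of the motion-mode decision; thresholds follow the
    # example values of this embodiment and are not limiting.
    def classify_motion_mode(speed_mps: float, cadence_sps: float) -> str:
        if speed_mps < 0.2 and cadence_sps < 0.5:
            return "stationary"
        if speed_mps > 30 / 3.6:                  # 30 km/h expressed in m/s
            return "transportation"
        if 2.5 <= cadence_sps <= 5.0:
            return "running"
        if 1.0 <= speed_mps <= 1.7 and 1.0 <= cadence_sps <= 2.5:
            return "walking"
        return "undetermined"

    print(classify_motion_mode(1.3, 1.8))         # -> "walking"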
  • The motion modes in the present embodiment may be divided into "motion" and "non-motion". The "motion" mode may also be further subdivided into "running", "swimming" and "cycling" for example. In the embodiments of the disclosure, the specific motion mode of the user may be determined according to the selection of the user or the output of the related sensor.
  • It is easy to understand that in some embodiments, the usage scenario of the user may be analyzed only according to the acquired acceleration data of the user, without acquiring the geographic location data. For example, the motion mode of the user may be determined only according to the cadence value of the user.
  • <Talking state>
  • In an embodiment, the control module 21 may determine whether the user is in the talking state according to whether the bone conduction microphone 14 picks up a voice signal.
  • In another embodiment, the control module 21 may determine, according to the signal output by the infrared proximity sensor 15, whether there are other people within a certain distance range in front of the user; if there are other people within that range, it is determined that the user is in the talking state. Alternatively, if there are other people, the control module 21 may comprehensively determine whether the user is in the talking state based on the determined environment type and motion mode, for example, when the user is at an open-air catering fair.
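  • A minimal sketch of the talking-state decision is given below; the embodiment only requires that the bone conduction microphone picks up a voice signal or that the infrared proximity sensor reports a person in front of the user, so the frame format and the -40 dBFS energy threshold used here are assumptions made for illustration.

    # Illustrative only: decide the talking state from bone-conduction samples
    # (full-scale values in [-1, 1]) or an infrared proximity reading.
    import math
    def is_talking(bone_mic_frame, proximity_detected: bool) -> bool:
        rms = math.sqrt(sum(x * x for x in bone_mic_frame) / max(len(bone_mic_frame), 1))
        level_dbfs = 20 * math.log10(max(rms, 1e-9))
        voice_picked_up = level_dbfs > -40.0      # assumed voice-activity threshold
        return voice_picked_up or proximity_detected

    print(is_talking([0.05, -0.04, 0.06, -0.05], proximity_detected=False))   # -> True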
  • According to an embodiment of the disclosure, the "usage scenario" referred to in the embodiments of the disclosure is a composite scenario. The "usage scenario" at least includes the environment type of the user and the current motion mode of the user, and further may include the talking state of the user. For example, if the environment type of the user is "outdoor", and the motion mode is "motion", the "usage scenario" of the user is "outdoor motion". For example, if the environment type of the user is "indoor", and the motion mode is "motion", the "usage scenario" of the user is "indoor motion". For example, if the environment type of the user is "indoor", the motion mode is "static", and the talking state is "in the talking mode", the "usage scenario" of the user is "indoor static talking".
  • According to another embodiment of the disclosure, it is easy to understand that the usage scenario of the user may be estimated according to the acceleration data. For example, if the cadence value of the user is in the interval of 2.5 steps/s to 5 steps/s, it is determined that the user is in the running mode and in the road environment.
  • <Adaptive audio control based on scenario identification>
  • After determining the usage scenario of the user, the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal. The sound pressure level of the ambient sound signal here may be expressed as the equivalent continuous A-weighted sound level.
  • In an embodiment, the usage scenario includes the current environment type of the user and the current motion mode of the user. The usage scenario that the user may be in at a certain moment, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal may be described by defining a function Action(t):
    Action(t)=(P(t),M(t),L(t),F(t))--function 1
    where t is time, P(t) is the environment type of the user at present, M(t) is the motion mode of the user at present, L(t) is the sound pressure level of the ambient sound signal or the interval to which the sound pressure level of the ambient sound signal belongs, and F(t) is the energy distribution and the spectral distribution situation of the ambient sound signal.
  • The function F(t) is defined to describe the energy distribution and the spectral distribution situation of the 20 Hz-20 kHz ambient sound at the position where the user is at a certain moment.
  • F(t) further includes F0(t) and Q(t). F0(t) represents the frequency point corresponding to the maximum noise peak at present, and Q(t) represents a quality factor of the ambient sound at present.
  • Generally speaking, the greater the value of Q, the more concentrated the energy distribution of the ambient sound and the more single the frequency; correspondingly, the noise environment contains a non-steady noise or a burst impulse noise, such as a honk, a knocking noise, an impact sound, or another burst sound. The smaller the value of Q, the wider the ambient sound is spread over the frequency bands, and the more uniform the energy distribution of the noise across the bands; at this point, the corresponding noise environment consists of relatively stable steady-state noises; for example, in a restaurant environment at dinner time, there are mainly background noises made by talking or light collision of tableware, and the background noise F0 is mainly between 200 Hz and 300 Hz.
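  • One possible way to obtain F0(t) and Q(t) from a frame of ambient sound is sketched below; the FFT-based peak search and the -3 dB bandwidth definition of Q are assumptions chosen for illustration and are not the only realization.

    # Illustrative estimate of F0 (frequency of the strongest peak) and Q
    # (peak frequency divided by its -3 dB bandwidth).
    import numpy as np
    def spectral_peak_features(frame: np.ndarray, sample_rate: float):
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        peak = int(np.argmax(spectrum[1:])) + 1            # skip the DC bin
        half_power = spectrum[peak] / np.sqrt(2.0)         # -3 dB point
        above = np.where(spectrum >= half_power)[0]
        bandwidth = max(freqs[above[-1]] - freqs[above[0]], freqs[1])
        return freqs[peak], freqs[peak] / bandwidth        # (F0, Q)

    sr = 16000
    t = np.arange(1024) / sr
    f0, q = spectral_peak_features(np.sin(2 * np.pi * 1000 * t), sr)
    print(round(float(f0)), round(float(q), 1))            # roughly 1000 and a Q well above 1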
  • In another embodiment, the function 1 may also be adjusted to
    Action(t)=(P(t),V(t),f(t),L(t),F(t))--function 2,
    where V(t) is the current movement speed of the user or the interval to which the movement speed belongs, and f(t) is the current cadence value of the user or the interval to which the cadence value belongs.
  • In another embodiment, the usage scenario includes the talking state of the user, so the function Action(t) is:
    Action(t)=(P(t),M(t),L(t),F(t),S(t))--function 3,
    where S(t) is used to represent whether the user is in the talking mode at present.
  • The control module 21 determines the usage scenario of the user and the level of the ambient sound according to the thresholds and the queried or received real-time values of various sensor modules (including but not limited to the ambient sound acquisition microphone 13, the acceleration sensor 11, the location module 12, the bone conduction microphone 14/the infrared proximity sensor 15, and so on), and acquires the energy distribution and the spectral distribution situation of the ambient sound, namely obtaining P(t), M(t), L(t), F(t) and S(t).
  • The control module 21 queries the function Action(t) in real time, automatically generates control instructions according to the variables of Action(t), and sends the corresponding instructions to the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 respectively, so that each module makes a response matching the current scenario and ambient sound signal, namely implementing automatic adjustment of the playback effects to adapt to the current playback environment.
  • Similarly, in another embodiment, the function 3 may also be adjusted to
    Action(t)=(P(t),V(t),f(t),L(t),F(t),S(t))--function 4.
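  • For illustration only, the per-moment state described by function 4 may be carried in a simple record such as the one below; the field names are hypothetical and merely mirror P(t), V(t), f(t), L(t), F(t) and S(t).

    # Hypothetical container for the Action(t) state of function 4.
    from dataclasses import dataclass
    @dataclass
    class Action:
        environment_type: str      # P(t), e.g. "road" or "indoor"
        speed_interval: str        # V(t), interval of the movement speed
        cadence_interval: str      # f(t), interval of the cadence value
        spl_interval: str          # L(t), interval of the ambient sound pressure level
        peak_frequency_hz: float   # F0(t), frequency of the strongest noise peak
        quality_factor: float      # Q(t), concentration of the ambient sound energy
        talking: bool              # S(t), whether the user is in the talking mode

    state = Action("road", "walking", "walking", "60-80 dBA", 100.0, 0.7, False)
    print(state.environment_type, state.spl_interval)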
  • The intervals to which the sound pressure level of the ambient sound signal belongs may include the following:
    (1) 0 dBA-40 dBA is the first sound pressure level interval, representing a very quiet environment;
    (2) 40 dBA-60 dBA is the second sound pressure level interval, representing a relatively quiet environment;
    (3) 60 dBA-80 dBA is the third sound pressure level interval, representing a relatively noisy environment; and
    (4) 80 dBA-120 dBA is the fourth sound pressure level interval, representing an unbearably noisy environment.
  • The intervals to which the sound pressure level of the ambient sound signal belongs may also be subdivided according to practical applications, and are not completely limited to this definition.
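  • A minimal sketch of mapping the equivalent continuous A-weighted level to the four example intervals listed above is given below; the boundaries are the example values and, as noted, may be subdivided in practice.

    # Illustrative mapping of the ambient sound pressure level (dBA) to the
    # four example intervals defined above.
    def spl_interval(level_dba: float) -> str:
        if level_dba <= 40:
            return "first interval: very quiet (0-40 dBA)"
        if level_dba <= 60:
            return "second interval: relatively quiet (40-60 dBA)"
        if level_dba <= 80:
            return "third interval: relatively noisy (60-80 dBA)"
        return "fourth interval: unbearably noisy (80-120 dBA)"

    print(spl_interval(72))   # -> "third interval: relatively noisy (60-80 dBA)"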
  • The intervals to which the movement speed of the user belongs may include the following, for example:
    (1) 0 m/s-0.2 m/s is the first movement speed interval, representing being static;
    (2) 0.2 m/s-1.7 m/s is the second movement speed interval, representing walking; and
    (3) above 500 km/h is the third movement speed interval, representing flight.
  • The intervals to which the movement speed of the user belongs may be divided in more detail, so as to help the control module 21 to determine accurately the usage scenario of the user in combination with the cadence value and the environment type of the user.
  • Most often, the highest cadence of people is not higher than 5 steps/s, and the lowest cadence is not lower than 0.5 steps/s, so the intervals to which the cadence value of the user belongs may include the following:
    (1) 0 steps/s-0.5 steps/s is the first cadence value interval, representing being static;
    (2) 0.5 steps/s-2.5 steps/s is the second cadence value interval, representing walking; and
    (3) 2.5 steps/s-5 steps/s is the third cadence value interval, representing running.
  • The intervals to which the cadence value belongs may also be subdivided according to practical applications, and not completely limited to this definition.
  • It is to be noted that although situations such as "static", "walking", "running" and "flight" are considered when the above intervals are divided, the motion mode of the user (including the transport used) may be determined comprehensively according to the specific environment type of the user, the interval of the movement speed of the user, and the interval of the cadence value of the user; if the energy distribution and the spectral distribution of the ambient sound signal are further considered, the motion mode of the user may be determined more accurately.
  • In addition, the above functions also show that the embodiments of the disclosure may comprehensively analyze the geographic location data, the movement speed and the cadence value of the user, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, thereby realizing automatic adjustment of the playback effects according to the usage scenario.
  • The working processes of the adaptive audio device, provided by the embodiments of the disclosure, in several scenarios are illustrated below through several specific examples.
  • <First usage scenario>
  • The first usage scenario is that the user is in the road environment and in the walking mode. It is easy to understand that in this usage scenario, the motion mode of the user may be estimated only through the acquired acceleration data, and then the usage scenario of the user is estimated. For example, if the acceleration data shows that the current cadence value of the user is in the interval of 0.5 steps/s-2.5 steps/s, it is determined that the user is in the walking mode and in the road environment. In the first usage scenario, the ambient sounds are mainly road traffic noises and low-frequency ambient noises such as wind noises of different intensities. F0 of the ambient sound signal is usually near 100 Hz, and the value of Q is relatively small, that is, the low-frequency noises are distributed over a relatively wide band. The sound level intensity varies with the traffic conditions at different periods of time.
  • In the first usage scenario, the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including:
    the wind noise suppression submodule 241 is controlled to perform suppressive filtering on a wind noise signal in the ambient sound signal. The wind noise suppression submodule 241 may be, for example, a second order high-pass filter whose cut-off frequency f0 is 300 Hz.
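  • As a hedged illustration of such a filter (not the claimed implementation), the 300 Hz second-order high-pass could be realized as a standard audio biquad; the coefficient formulas and the Q value of 0.707 below are assumed design choices.

    # Assumed realization of a 300 Hz second-order high-pass for wind noise
    # suppression, written as a Direct Form I biquad.
    import math
    def highpass_300hz(samples, sample_rate=48000.0, cutoff_hz=300.0, q=0.7071):
        w0 = 2 * math.pi * cutoff_hz / sample_rate
        alpha = math.sin(w0) / (2 * q)
        cosw = math.cos(w0)
        b0, b1, b2 = (1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2
        a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
        x1 = x2 = y1 = y2 = 0.0
        out = []
        for x in samples:
            y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
            x2, x1, y2, y1 = x1, x, y1, y
            out.append(y)
        return out

    print(highpass_300hz([1.0, 0.0, 0.0, 0.0]))   # head of the impulse response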
  • It is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal. That is, in the first usage scenario, the voice enhancement submodule 242 is in a standby mode, and may be awakened by a voice signal detected by the control module 21 in real time.
  • The dynamic range control submodule 243 is controlled to perform dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal. In an embodiment, when the sound pressure level of the ambient sound signal is less than or equal to 40 dBA, it is determined that the outside environment is a quiet environment and the ambient sound basically does not include useful information, and light amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 40 dBA and less than or equal to 50 dBA, selective amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 50 dBA and less than or equal to 60 dBA, amplification and reduction processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 60 dBA, it is determined that the environment is relatively noisy, and attenuation processing is performed on the ambient sound signal. Through the dynamic range control, the user can enjoy music while retaining a certain ability to monitor and sense the outside environment while moving. It is to be noted that this division (≤ 40 dBA, 40 dBA-50 dBA, 50 dBA-60 dBA, > 60 dBA) of the intervals of the sound pressure level of the ambient sound signal is just an example. The division of the intervals may be adjusted according to the actual condition.
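  • As an illustration of this dynamic range control, the band boundaries below (40/50/60 dBA) come from the example above, while the concrete dB gain values are assumptions added only for the sketch.

    # Illustrative ambient-sound gain per sound pressure level band for the
    # walking-on-road scenario; gain values are assumed, boundaries are the
    # example values of this embodiment.
    def ambient_gain_db(level_dba: float) -> float:
        if level_dba <= 40:
            return 3.0    # quiet: light amplification
        if level_dba <= 50:
            return 6.0    # selective amplification of useful content
        if level_dba <= 60:
            return 0.0    # amplify or reduce around unity gain
        return -9.0       # noisy: attenuation

    print(ambient_gain_db(45))   # -> 6.0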
  • The EQ processing submodule 244 is controlled to perform the EQ compensation processing on the ambient sound signal, and output the ambient sound signal to the speaker 30 for playback. For example, the EQ compensation processing is performed on the voice signal band and the honk signal band in the ambient sound signal.
  • It is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module 23, and if the active noise cancellation module 23 is enabled, a noise cancellation level of the active noise cancellation module 23 is automatically adjusted according to the sound pressure level of the ambient sound signal. The greater the sound pressure level of the ambient sound signal, the higher the noise cancellation level of the active noise cancellation module 23, and the greater the degree of the active noise cancellation. In addition, when the wind noise intensity is relatively high, the effect of feedback noise cancellation may be enhanced, and the effect of feed-forward noise cancellation on the low frequency band may be weakened appropriately. A noise cancellation signal generated by the active noise cancellation module 23 is output to the speaker 30.
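  • The mapping from ambient level to noise cancellation level can be sketched as below; the enable threshold and the number of levels are assumed values, since the embodiment only requires that a higher sound pressure level leads to a higher noise cancellation level.

    # Illustrative mapping from the ambient sound pressure level to an active
    # noise cancellation level; thresholds and level count are assumed.
    def anc_level(level_dba: float) -> int:
        if level_dba < 45:
            return 0      # quiet environment: cancellation disabled
        if level_dba < 60:
            return 1      # light noise cancellation
        if level_dba < 80:
            return 2      # medium noise cancellation
        return 3          # strongest noise cancellation

    print(anc_level(70))   # -> 2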
  • In an embodiment, the control module 21 may analyze whether there is a certain warning prompt in the ambient sound signal according to the energy distribution and the spectral distribution of the ambient sound signal. For example, the ambient sound acquisition microphone 13 picks up ambient noises from t to t1. If it is found, through frequency domain analysis, that pulse signals with frequencies of 500 Hz-1500 Hz and a quality factor Q much greater than 1 appear discontinuously or continuously in this period of time, and the average energy of the pulse signals is more than 10 dB higher than that of the previous period of time, it is determined that there is a certain warning sound that the user needs to be aware of in the ambient sound signal. If there is a certain warning prompt in the ambient sound signal, the control module 21 controls the active noise cancellation module 23 to perform active noise cancellation on the part of the ambient sound signal other than the warning prompt, and controls the dynamic range control submodule 243 to perform the amplification processing on the warning prompt in the ambient sound signal, so as to ensure the safety and alertness of the user.
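  • The warning-sound test of this example can be summarized as below; the 500 Hz-1500 Hz band and the 10 dB margin come from the example above, while the concrete threshold standing in for "much greater than 1" is an assumption.

    # Illustrative warning-sound detection following the example above.
    def is_warning_sound(peak_hz: float, quality_factor: float,
                         pulse_db: float, recent_average_db: float) -> bool:
        in_band = 500.0 <= peak_hz <= 1500.0
        impulsive = quality_factor > 5.0                 # assumed "much greater than 1"
        loud_enough = pulse_db >= recent_average_db + 10.0
        return in_band and impulsive and loud_enough

    print(is_warning_sound(900.0, 8.0, 78.0, 65.0))      # -> True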
  • The working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker 30, so that the sound level intensities of the audio signal reaching the speaker 30 and the ambient sound signal reaching the speaker 30 keep a preset proportion. When the sound pressure level of the ambient sound becomes high, the audio signal volume may be automatically controlled to become high, that is, when the outside environment is relatively noisy, the audio signal volume is turned up. Conversely, when the sound pressure level of the ambient sound becomes low, the audio signal volume may be automatically controlled to become low, that is, when the outside environment is relatively quiet, the audio signal volume is turned down, so as to protect the hearing of the user.
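  • One simple way to keep the playback level in a preset proportion to the ambient level is to track it with a fixed dB offset, as sketched below; the 6 dB offset and the clamping range are assumed values used only for illustration.

    # Illustrative volume tracking: keep the playback level a fixed number of
    # decibels above the ambient level reaching the speaker, within limits.
    def target_playback_level_db(ambient_at_speaker_db: float,
                                 offset_db: float = 6.0,
                                 min_db: float = 50.0,
                                 max_db: float = 85.0) -> float:
        return min(max(ambient_at_speaker_db + offset_db, min_db), max_db)

    print(target_playback_level_db(62.0))   # -> 68.0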
  • <Second usage scenario>
  • The second usage scenario is that the user is in the road environment and in the transportation mode. In this scenario, the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including:
    it is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal, and the EQ processing submodule 244 is triggered to perform EQ compensation processing on the voice signal in the ambient sound signal. That is, in the second usage scenario, the voice enhancement submodule 242 and the EQ processing submodule 244 are in the standby mode, and may be awakened by a voice signal detected by the control module 21 in real time.
  • The active noise cancellation module 23 is controlled to perform the active noise cancellation processing on the ambient sound signal according to a highest noise cancellation level. Alternatively, it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module 23, and if the active noise cancellation module 23 is enabled, the noise cancellation level of the active noise cancellation module 23 is automatically adjusted according to the sound pressure level of the ambient sound signal.
  • The working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker 30, so that the sound level intensities of the audio signal reaching the speaker 30 and the ambient sound signal reaching the speaker 30 keep a preset proportion. When the sound pressure level of the ambient sound becomes high, the audio signal volume may be automatically controlled to become high, that is, when the outside environment is relatively noisy, the audio signal volume is turned up. Conversely, when the sound pressure level of the ambient sound becomes low, the audio signal volume may be automatically controlled to become low, that is, when the outside environment is relatively quiet, the audio signal volume is turned down, so as to protect the hearing of the user.
  • In an example, when the user is in the road environment and in the transportation mode, the wind noise suppression submodule 241 is controlled to be disabled.
  • In an example, when the user is in the road environment and in the transportation mode, it is monitored whether the sound pressure level of the ambient sound signal is greater than a preset upper limit of the sound pressure level or less than a preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level, the dynamic range control submodule 243 is triggered to perform the attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit of the sound pressure level, the dynamic range control submodule 243 is triggered to perform the amplification processing on the ambient sound signal. The upper limit of the sound pressure level is, for example, 60 dBA. The lower limit of the sound pressure level is, for example, 40 dBA.
  • In an example, when it is determined that the user is in the transportation mode, it is possible to further determine which transport the user is taking. For example, it is possible to determine that the user is cycling, taking a flight, taking a train, or taking a car according to the environment type, the altitude data in the geographic location data, the movement speed, and the cadence value. For example, if the movement speed of the user reaches 250 km/h and the user is on a trunk railway, it is possible to determine that the user is taking a high-speed train.
  • The control module 21 may set a specific control manner for the active noise cancellation module 23 and the ambient sound adjustment module 24 according to the features of the ambient sounds corresponding to the subdivided transports (for example, there are many honks when taking a car, while it is relatively quiet in a high-speed train); for example, the noise cancellation level of the active noise cancellation module 23 may be set to a relatively low level when the user is taking a high-speed train.
  • In an example, when it is determined that the user is in the transportation mode, it is also possible to determine which transport the user is taking according to the sound pressure level of the ambient sound signal and the energy distribution and the spectral distribution of the ambient sound signal. In such a manner, the control module 21 may set the specific control manner of the active noise cancellation module 23 and the ambient sound adjustment module 24 according to the features of the ambient sounds corresponding to the subdivided transports.
  • In the first usage scenario and the second usage scenario, the user may talk with a partner, and there may be an external voice reminder, for example, a voice reminder about danger or an announcement of vehicle arrival in the second usage scenario; therefore, in the two usage scenarios, the voice enhancement submodule 242 may be triggered to work by the voice signal in the ambient sound signal detected in real time.
  • <Third usage scenario>
  • The third usage scenario is that the user is in the indoor environment (for example, a residence, administrative offices for education, medical treatment or research, or indoor areas of catering and business) and in the stationary mode and the talking mode. In this scenario, the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including:
    The voice enhancement submodule 242 is controlled to perform the enhancement processing on the voice signal in the ambient sound signal.
  • The EQ processing submodule 244 is controlled to perform the EQ compensation processing on the voice signal band in the ambient sound signal, and output the ambient sound signal to the speaker 30 for playback.
  • The wind noise suppression submodule 241 and the dynamic range control submodule 243 are controlled to be disabled.
  • The active noise cancellation module 23 is controlled to be disabled or perform the active noise cancellation processing on the ambient sound signal.
  • The audio signal volume adjustment module 22 is controlled to turn down the volume or stop playing the audio signal.
  • The adaptive audio control device in the embodiments of the disclosure may have a plurality of ambient sound acquisition microphones 13, including the microphone which is set on the headphone shell and configured to acquire the sound of the surrounding environment of the user, and the microphone which is set in the headphone and configured to acquire the ambient sound heard at an ear of the user. Setting a plurality of microphones makes it possible to acquire the ambient sound more accurately and to reflect the ambient sound actually heard at an ear of the user. This arrangement may be applied to the active noise cancellation function, and is beneficial for locating an ambient sound source and regulating the proportion of the voice to the ambient sound. It may also better optimize the amount of noise cancellation, which is beneficial for more intelligent adaptive audio control.
  • In other embodiments, the control module 21 may also analyze the ambient sound signal acquired by the ambient sound acquisition microphone 13 to obtain the sound pressure level and the energy distribution and the spectral distribution of the ambient sound signal, and realize a more abundant scenario analysis with reference to the data acquired by the acceleration sensor, the location module, and other sensors, so as to control the audio signal volume adjustment module 22, the active noise cancellation module 23 and the ambient sound adjustment module 24 more delicately, and provide a better experience effect to the user.
  • For those skilled in the art, the adaptive audio control device based on scenario identification may be realized by hardware, software or a combination of hardware and software. Based on the same inventive concept, referring to FIG. 4, the adaptive audio control method based on scenario identification provided by the embodiments of the disclosure includes the following steps.
  • At S401, the usage scenario of the user is analyzed.
  • Specifically, in S401, the acceleration data of the user may be acquired, and the usage scenario of the user is analyzed according to the acceleration data. It is easy to understand that the geographic location data of the user may also be acquired, and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  • At S402, the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed.
  • At S403, the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module of the audio playback device is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • In an implementation mode, the audio playback device is a headphone.
  • In another implementation mode, the ambient sound adjustment module 24 includes any one or a combination of the following submodules: the wind noise suppression submodule 241, the voice enhancement submodule 242, the dynamic range control submodule 243, and the EQ processing submodule 244.
  • In another implementation mode, that the usage scenario of the user is analyzed at S401 includes that: the environment type of the user is determined according to the geographic location data; the movement speed of the user is calculated according to the geographic location data; the cadence value of the user is calculated according to the acceleration data; and the motion mode of the user is determined according to the movement speed and the cadence value of the user.
  • In another implementation mode, the environment types include the indoor environment and the road environment; the motion modes include any one of the following: the stationary mode, the mode of walking on road, and the transportation mode.
  • In another implementation mode, if the movement speed is less than the first speed threshold and the cadence value is less than the first cadence value threshold, the user is in the stationary mode; if the movement speed is in the walking speed interval and the cadence value is in the walking cadence value interval, the user is in the walking mode; and if the movement speed is greater than the second speed threshold, the user is in the transportation mode.
  • In another implementation mode, the audio playback device further includes the bone conduction microphone or the infrared proximity sensor. The usage scenario of the user also includes the talking state of the user. The method further includes the following step of determining, according to the signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • In another implementation mode, that the ambient sound signal of surrounding environment of the user is acquired includes that: the ambient sound signal of the real-time location of the user is acquired, and the ambient sound signal heard at an ear of the user is acquired.
  • <First usage scenario>
  • If the usage scenario is that the user is in the road environment and in the walking mode, the operation that the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the wind noise suppression submodule 241 is controlled to perform suppressive filtering to a wind noise signal in the ambient sound signal;
    • it is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal;
    • the dynamic range control submodule 243 is controlled to perform dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal;
    • the EQ processing submodule 244 is controlled to perform the EQ compensation processing on the ambient sound signal;
    • the working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep the preset proportion.
  • In an implementation mode, that the dynamic range control submodule 243 performs the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that: when the sound pressure level of the ambient sound signal is greater than 40 dBA and less than or equal to 50 dBA, the amplification processing is performed on the ambient sound signal; and when the sound pressure level of the ambient sound signal is greater than 60 dBA, the attenuation processing is performed on the ambient sound signal.
  • In another implementation mode, performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on the voice signal band and the honk signal band in the ambient sound signal.
  • In another implementation mode, it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module 23, and if the active noise cancellation module 23 is enabled, the noise cancellation level of the active noise cancellation module 23 is adjusted according to the sound pressure level of the ambient sound signal.
  • <Second usage scenario>
  • If the usage scenario is that the user is in the road environment and in the transportation mode, the operation that the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • it is monitored whether the ambient sound signal contains the voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal, and the EQ processing submodule 244 is triggered to perform EQ compensation processing on the voice signal in the ambient sound signal.
    • the active noise cancellation module 23 is controlled to perform active noise cancellation processing according to the strongest noise cancellation level; or, it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module 23, and if the active noise cancellation module 23 is enabled, the noise cancellation level of the active noise cancellation module 23 is adjusted according to the sound pressure level of the ambient sound signal;
    • the working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep the preset proportion.
  • In an implementation mode, when the user is in the road environment and in the transportation mode, the wind noise suppression submodule 241 is controlled to be disabled.
  • In another implementation, when the user is in the road environment and in the transportation mode, it is monitored whether the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level or less than the preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level, the dynamic range control submodule (243) is triggered to perform the attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit of the sound pressure level, the dynamic range control submodule (243) is triggered to perform the amplification processing on the ambient sound signal.
  • <Third usage scenario>
  • If the usage scenario is that the user is in the indoor environment and in the stationary mode and the talking mode, the operation that the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
    • the voice enhancement submodule 242 is controlled to perform the enhancement processing on the voice signal in the ambient sound signal;
    • the EQ processing submodule 244 is controlled to perform the EQ compensation processing on the voice signal band in the ambient sound signal;
    • the active noise cancellation module 23 is controlled to be disabled or perform the active noise cancellation processing on the ambient sound signal;
    • the audio signal volume adjustment module 22 is controlled to turn down the volume or stop playing the audio signal.
  • In an implementation mode, the wind noise suppression submodule 241 and the dynamic range control submodule 243 are controlled to be disabled.
  • It should be noted that each embodiment in the description is described in a progressive manner. Each embodiment focuses on differences from other embodiments, and the same and similar parts between the embodiments may be referred to each other. However, those skilled in the art should understand that the above embodiments may be used alone or in combination with each other as needed. The device embodiments described above are merely illustrative, and the modules illustrated as separate components may or may not be physically separate.
  • In addition, the flowcharts and block diagrams in the drawings illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the disclosure. In this regard, each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, the module, the program segment or portion of the code includes one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two consecutive blocks may be executed substantially in parallel, and may sometimes be executed in a reverse order, depending upon the functionality involved. It is also to be noted that each block of the block diagrams and/or flowcharts, and combinations of the blocks in the block diagrams and/or flowcharts, may be implemented with a dedicated hardware-based device that performs the specified function or action, or it may be implemented by a combination of the dedicated hardware and a computer instruction.
  • The computer program product provided by the embodiment of the disclosure includes a computer readable storage medium storing the program code, and the program code includes instructions for executing the method described in the above method embodiment. Specific implementation may be referred to the method embodiment, and will not be repeated herein.
  • It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to a corresponding process in the foregoing method embodiments, and details are not described herein again.
  • In the several embodiments provided in the disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. The described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For another example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • In addition, functional units in the embodiments of the disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
  • When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the disclosure essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the disclosure. The foregoing storage medium includes any medium that can store program code, such as a U disk, a removable hard disk, an ROM, an RAM, a magnetic disk, or an optical disc.
  • It should be noted that, in the description, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Furthermore, the term "include" or "comprise" or any other variations thereof is intended to encompass a non-exclusive inclusion, such that a process, a method, an item, or a device that includes a plurality of elements includes not only those elements but also other elements that are not explicitly listed, or elements that are inherent to such process, method, item, or device. In the case of no more restrictions, an element that is defined by the phrase "including a ..." does not exclude the presence of additional equivalent elements in the process, method, item, or device that includes the element.
  • The above descriptions are only preferred embodiments of the disclosure, and are not intended to limit the application; for those skilled in the art, various changes and modifications may be made to the application. Any modifications, equivalent replacements, improvements and the like within the spirit and principle of the application should fall within the protection scope of the claims of the application. It should be noted that similar reference numerals and letters indicate similar items in the following figures. Therefore, once an item is defined in one figure, it does not need to be further defined and explained in the subsequent figures.
  • Although some of the specific embodiments of the application have been described in detail by way of example, those skilled in the art should understand that the above examples are only for the purpose of illustration and not intended to limit the scope of the application. Those skilled in the art will appreciate that the above embodiments may be modified without departing from the scope of the application. The scope of the application is defined by the appended claims.

Claims (21)

  1. A method for controlling an audio playback device, comprising:
    acquiring acceleration data of a user, and analyzing a usage scenario of the user according to the acceleration data;
    acquiring an ambient sound signal of surrounding environment of the user, calculating a sound pressure level (SPL) of the ambient sound signal, and analyzing an energy distribution and a spectral distribution of the ambient sound signal; and
    controlling an audio signal volume of the audio playback device, an active noise cancellation level of the audio playback device, and adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  2. The method of claim 1, further comprising:
    acquiring geographic location data of the user; wherein
    the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  3. The method of claim 2, wherein the adjustment of the ambient sound signal comprises any one of wind noise suppression, voice enhancement, dynamic range adjustment or equalization (EQ) compensation, or any combination thereof.
  4. The method of claim 3, wherein analyzing the usage scenario of the user according to the acceleration data and the geographic location data comprises:
    determining an environment type of the user according to the geographic location data;
    calculating a movement speed of the user according to the geographic location data;
    calculating a cadence value of the user according to the acceleration data; and
    determining a motion mode of the user according to the movement speed and the cadence value.
  5. The method of claim 4, wherein the environment type comprises an indoor environment and a road environment; and the motion mode comprises any one of: a stationary mode, a walking mode, or a transportation mode.
  6. The method of claim 5, further comprising: determining whether the user is in a talking mode.
  7. The method of claim 1, wherein acquiring the ambient sound signal of surrounding environment of the user comprises: acquiring the ambient sound signal of a real-time location of the user, and acquiring the ambient sound signal heard at an ear of the user.
  8. The method of claim 5, wherein if the usage scenario is that the user is in the road environment and in the walking mode, controlling the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal comprises one or more of:
    performing suppressive filtering to a wind noise signal in the ambient sound signal;
    monitoring whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, performing enhancement processing on the voice signal in the ambient sound signal;
    performing dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal;
    performing EQ compensation processing on the ambient sound signal;
    controlling the audio signal volume of the audio playback device according to the sound pressure level of the ambient sound signal reaching a speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep a preset proportion; and
    determining, according to the sound pressure level of the ambient sound signal, whether to perform active noise cancellation, and if the active noise cancellation is to be performed, adjusting a noise cancellation level of the active noise cancellation according to the sound pressure level of the ambient sound signal.
  9. The method of claim 5, wherein if the usage scenario is that the user is in the road environment and in the transportation mode, controlling the audio signal volume of the audio playback device, the active noise cancellation level of the audio playback device, and the adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal comprises one or more of:
    monitoring whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, performing an enhancement processing on the voice signal in the ambient sound signal, and performing an EQ compensation processing on the voice signal in the ambient sound signal;
    setting the active noise cancellation level to a highest noise cancellation level, or determining, according to the sound pressure level of the ambient sound signal, whether to perform the active noise cancellation, and if the active noise cancellation is to be performed, adjusting the noise cancellation level of the active noise cancellation according to the sound pressure level of the ambient sound signal;
    controlling the audio signal volume of the audio playback device according to the sound pressure level of the ambient sound signal reaching the speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep a preset proportion;
    not performing wind noise suppression; and
    monitoring whether the sound pressure level of the ambient sound signal is greater than a preset upper limit of the sound pressure level or less than a preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level, performing attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit of the sound pressure level, performing amplification processing on the ambient sound signal.
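The last item of this transportation-mode branch, attenuating the monitored ambient signal above a preset upper sound pressure level and amplifying it below a preset lower level, is sketched below; the 45 dB and 80 dB limits and the function name are assumptions of this sketch.

```python
# Illustrative sketch only; the lower and upper limits are assumed values.
def ambient_passthrough_gain_db(ambient_spl_db: float,
                                lower_limit_db: float = 45.0,
                                upper_limit_db: float = 80.0) -> float:
    """Gain in dB applied to the ambient signal before it is played back to the ear."""
    if ambient_spl_db > upper_limit_db:
        return upper_limit_db - ambient_spl_db   # attenuate loud environments
    if ambient_spl_db < lower_limit_db:
        return lower_limit_db - ambient_spl_db   # amplify very quiet environments
    return 0.0                                   # otherwise pass the ambient signal unchanged
```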
  10. The method of claim 5, wherein if the usage scenario is that the user is in the indoor environment, in the stationary mode and the talking mode, controlling the audio signal volume of the audio playback device, the active noise cancellation level of the audio playback device, and the adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal comprises one or more of:
    performing an enhancement processing on the voice signal in the ambient sound signal;
    performing an EQ compensation processing on a voice signal band in the ambient sound signal;
    not performing the active noise cancellation, or performing the active noise cancellation processing on the ambient sound signal;
    turning down the audio signal volume or stopping playing the audio signal; and
    not performing wind noise suppression and dynamic range adjustment.
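Read together, the items of this indoor talking-mode branch amount to a fixed processing policy: enhance and equalize speech, relax or disable noise cancellation and the other ambient adjustments, and duck or pause playback. A minimal sketch of such a policy object follows; the field names and the use of a dataclass are assumptions of this sketch, not terms from the claims.

```python
# Illustrative sketch only; field names are assumptions, not terms from the claims.
from dataclasses import dataclass

@dataclass
class ProcessingPolicy:
    voice_enhancement: bool
    voice_band_eq: bool
    active_noise_cancellation: bool   # the claim allows either keeping or dropping ANC
    wind_noise_suppression: bool
    dynamic_range_adjustment: bool
    playback_action: str              # "duck" (turn the volume down) or "pause"

def indoor_talking_policy(pause_playback: bool = False) -> ProcessingPolicy:
    """Assumed configuration for the indoor, stationary, talking-mode scenario."""
    return ProcessingPolicy(
        voice_enhancement=True,
        voice_band_eq=True,
        active_noise_cancellation=False,
        wind_noise_suppression=False,
        dynamic_range_adjustment=False,
        playback_action="pause" if pause_playback else "duck",
    )
```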
  11. A system for controlling an audio playback device, comprising:
    one or more processors;
    a memory which is coupled to at least one of the one or more processors;
    wherein the memory is configured to store computer program instructions which, when executed by the at least one processor, cause the system to perform a method for controlling an audio playback device, the method comprising:
    acquiring acceleration data of a user, and analyzing a usage scenario of the user according to the acceleration data;
    acquiring an ambient sound signal of a surrounding environment of the user, calculating a sound pressure level of the ambient sound signal, and analyzing an energy distribution and a spectral distribution of the ambient sound signal; and
    controlling an audio signal volume of the audio playback device, an active noise cancellation level of the audio playback device, and adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  12. The system of claim 11, wherein when the computer program instructions are executed by the at least one processor, the system performs the method for controlling an audio playback device; the method further comprises:
    acquiring geographic location data of the user; wherein
    the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  13. The system of claim 12, wherein the adjustment of the ambient sound signal comprises any one of wind noise suppression, voice enhancement, dynamic range adjustment or equalization (EQ) compensation, or any combination thereof.
  14. The system of claim 13, wherein when executing the instruction of analyzing the usage scenario of the user according to the acceleration data and the geographic location data, the at least one processor performs operations of:
    determining an environment type of the user according to the geographic location data;
    calculating a movement speed of the user according to the geographic location data;
    calculating a cadence value of the user according to the acceleration data; and
    determining a motion mode of the user according to the movement speed and the cadence value.
  15. The system of claim 14, wherein the environment type comprises an indoor environment and a road environment; the motion mode comprises any one of: a stationary mode, a walking mode, or a transportation mode.
  16. The system of claim 15, wherein when the computer program instructions are executed by the at least one processor, the system performs the method for controlling an audio playback device; the method further comprises: determining whether the user is in a talking mode.
  17. The system of claim 11, wherein when executing the instruction of acquiring the ambient sound signal of the surrounding environment of the user, the at least one processor performs operations of:
    acquiring an ambient sound signal of a real-time location of the user and acquiring an ambient sound signal heard at an ear of the user.
  18. The system of claim 15, wherein if the usage scenario is that the user is in the road environment and in the walking mode, when executing the instruction of controlling the audio signal volume of the audio playback device, the active noise cancellation level of the audio playback device, and the adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, the at least one processor performs one or more of:
    performing suppressive filtering on a wind noise signal in the ambient sound signal;
    monitoring whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, performing enhancement processing on the voice signal in the ambient sound signal;
    performing dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal;
    performing EQ compensation processing on the ambient sound signal;
    controlling the audio signal volume of the audio playback device according to the sound pressure level of the ambient sound signal reaching a speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep a preset proportion; and
    determining, according to the sound pressure level of the ambient sound signal, whether to perform active noise cancellation, and if the active noise cancellation is to be performed, adjusting a noise cancellation level of the active noise cancellation according to the sound pressure level of the ambient sound signal.
  19. The system of claim 15, wherein if the usage scenario is that the user is in the road environment and in the transportation mode, when executing the instruction of controlling the audio signal volume of the audio playback device, the active noise cancellation level of the audio playback device, and the adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, the at least one processor performs one or more of:
    monitoring whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, performing the enhancement processing on the voice signal in the ambient sound signal, and performing the EQ compensation processing on the voice signal in the ambient sound signal;
    setting the active noise cancellation level to a highest noise cancellation level, or determining, according to the sound pressure level of the ambient sound signal, whether to perform the active noise cancellation, and if the active noise cancellation is to be performed, adjusting the noise cancellation level of the active noise cancellation according to the sound pressure level of the ambient sound signal;
    controlling the audio signal volume of the audio playback device according to the sound pressure level of the ambient sound signal reaching the speaker of the audio playback device, so that the sound pressure level of the audio signal reaching the speaker and the sound pressure level of the ambient sound signal reaching the speaker keep the preset proportion;
    not performing the wind noise suppression; and
    monitoring whether the sound pressure level of the ambient sound signal is greater than a preset upper limit of the sound pressure level or less than a preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level, performing attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit of the sound pressure level, performing amplification processing on the ambient sound signal.
  20. The system of claim 15, wherein if the usage scenario is that the user is in the indoor environment and in the stationary mode and the talking mode, when executing the instruction of controlling the audio signal volume of the audio playback device, the active noise cancellation level of the audio playback device, and the adjustment of the ambient sound signal according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, the at least one processor performs one or more of:
    performing the enhancement processing on the voice signal in the ambient sound signal;
    performing the EQ compensation processing on a voice signal band in the ambient sound signal;
    not performing the active noise cancellation, or performing the active noise cancellation processing on the ambient sound signal;
    turning down the audio signal volume or stopping playing the audio signal; and
    not performing the wind noise suppression and the dynamic range adjustment.
  21. A computer program product which, when executed by a processor, implements the method for controlling an audio playback device of any one of claims 1 to 10.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810043127.XA CN110049403A (en) 2018-01-17 2018-01-17 A kind of adaptive audio control device and method based on scene Recognition
PCT/CN2019/070657 WO2019141102A1 (en) 2018-01-17 2019-01-07 Adaptive audio control device and method based on scenario identification

Publications (2)

Publication Number Publication Date
EP3672274A1 true EP3672274A1 (en) 2020-06-24
EP3672274A4 EP3672274A4 (en) 2021-05-05

Family

ID=67273101

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19741628.2A Withdrawn EP3672274A4 (en) 2018-01-17 2019-01-07 Adaptive audio control device and method based on scenario identification

Country Status (3)

Country Link
EP (1) EP3672274A4 (en)
CN (1) CN110049403A (en)
WO (1) WO2019141102A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3627492A1 (en) * 2018-09-21 2020-03-25 Panasonic Intellectual Property Management Co., Ltd. Noise reduction device, noise reduction system, and sound field controlling method
EP3869821A1 (en) * 2020-02-20 2021-08-25 Beijing Xiaoniao Tingting Technology Co., Ltd Signal processing method and device for earphone, and earphone
WO2022132721A1 (en) * 2020-12-15 2022-06-23 Google Llc Ambient detector for dual mode anc

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830862A (en) * 2019-10-10 2020-02-21 广东思派康电子科技有限公司 Self-adaptive noise-reduction earphone
CN110996205A (en) * 2019-11-28 2020-04-10 歌尔股份有限公司 Earphone control method, earphone and readable storage medium
CN111179984B (en) * 2019-12-31 2022-02-08 Oppo广东移动通信有限公司 Audio data processing method and device and terminal equipment
CN113129917A (en) * 2020-01-15 2021-07-16 荣耀终端有限公司 Speech processing method based on scene recognition, and apparatus, medium, and system thereof
CN111294691B (en) * 2020-03-31 2021-10-26 歌尔股份有限公司 Earphone, noise reduction method thereof and computer readable storage medium
CN111447523B (en) * 2020-03-31 2022-02-18 歌尔科技有限公司 Earphone, noise reduction method thereof and computer readable storage medium
CN111586522B (en) * 2020-05-20 2022-04-15 歌尔科技有限公司 Earphone noise reduction method, earphone noise reduction device, earphone and storage medium
CN111698602A (en) * 2020-06-19 2020-09-22 青岛歌尔智能传感器有限公司 Earphone, earphone control method and device and readable storage medium
CN113873379B (en) * 2020-06-30 2023-05-02 华为技术有限公司 Mode control method and device and terminal equipment
WO2022022585A1 (en) * 2020-07-31 2022-02-03 华为技术有限公司 Electronic device and audio noise reduction method and medium therefor
CN114079838B (en) * 2020-08-21 2024-04-09 华为技术有限公司 Audio control method, equipment and system
CN111935584A (en) * 2020-08-26 2020-11-13 恒玄科技(上海)股份有限公司 Wind noise processing method and device for wireless earphone assembly and earphone
CN112312257B (en) * 2020-09-08 2022-11-29 深圳市逸音科技有限公司 Intelligent 3D earphone of making an uproar falls in initiative digit
CN112185409A (en) * 2020-10-15 2021-01-05 福建瑞恒信息科技股份有限公司 Double-microphone noise reduction method and storage device
CN112767908B (en) * 2020-12-29 2024-05-21 安克创新科技股份有限公司 Active noise reduction method based on key voice recognition, electronic equipment and storage medium
CN112765395B (en) * 2021-01-22 2023-09-19 咪咕音乐有限公司 Audio playing method, electronic device and storage medium
CN112954532A (en) * 2021-03-06 2021-06-11 深圳市尊特数码有限公司 Method, system, terminal and storage medium for adjusting noise reduction level of Bluetooth headset
CN113504889A (en) * 2021-06-25 2021-10-15 和美(深圳)信息技术股份有限公司 Automatic robot volume adjusting method and device, electronic equipment and storage medium
CN113505441B (en) * 2021-07-29 2023-03-14 中国第一汽车股份有限公司 Method, device, equipment and storage medium for evaluating wind noise and sound insulation performance of vehicle
CN113990282A (en) * 2021-10-19 2022-01-28 广州番禺巨大汽车音响设备有限公司 Active noise reduction conference sound control method and device
CN114121033B (en) * 2022-01-27 2022-04-26 深圳市北海轨道交通技术有限公司 Train broadcast voice enhancement method and system based on deep learning
CN114554346B (en) * 2022-02-24 2022-11-22 潍坊歌尔电子有限公司 Adaptive adjustment method and device of ANC parameters and storage medium
CN114280571B (en) * 2022-03-04 2022-07-19 北京海兰信数据科技股份有限公司 Method, device and equipment for processing rain clutter signals
WO2024119396A1 (en) * 2022-12-07 2024-06-13 深圳市韶音科技有限公司 Open-ear wearable acoustic device and active noise cancellation method thereof
CN115778046B (en) * 2023-02-09 2023-06-09 深圳豪成通讯科技有限公司 Intelligent safety helmet adjusting control method and system based on data analysis
CN117041803B (en) * 2023-08-30 2024-03-22 江西瑞声电子有限公司 Earphone playing control method, electronic equipment and storage medium
CN116961806B (en) * 2023-09-21 2024-02-09 广东保伦电子股份有限公司 Satellite time service-based flower and car audio synchronous broadcasting system, device and method
CN117238322B (en) * 2023-11-10 2024-01-30 深圳市齐奥通信技术有限公司 Self-adaptive voice regulation and control method and system based on intelligent perception

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9113240B2 (en) * 2008-03-18 2015-08-18 Qualcomm Incorporated Speech enhancement using multiple microphones on multiple devices
CN101458931A (en) * 2009-01-08 2009-06-17 无敌科技(西安)有限公司 Method for eliminating environmental noise from voice signal
US8391524B2 (en) * 2009-06-02 2013-03-05 Panasonic Corporation Hearing aid, hearing aid system, walking detection method, and hearing aid method
US9025782B2 (en) * 2010-07-26 2015-05-05 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for multi-microphone location-selective processing
US10218327B2 (en) * 2011-01-10 2019-02-26 Zhinian Jing Dynamic enhancement of audio (DAE) in headset systems
US9055367B2 (en) * 2011-04-08 2015-06-09 Qualcomm Incorporated Integrated psychoacoustic bass enhancement (PBE) for improved audio
JP2013102370A (en) * 2011-11-09 2013-05-23 Sony Corp Headphone device, terminal device, information transmission method, program, and headphone system
US9769556B2 (en) * 2012-02-22 2017-09-19 Snik Llc Magnetic earphones holder including receiving external ambient audio and transmitting to the earphones
JP5949061B2 (en) * 2012-03-30 2016-07-06 ソニー株式会社 Information processing apparatus, information processing method, and program
JP2015173369A (en) * 2014-03-12 2015-10-01 ソニー株式会社 Signal processor, signal processing method and program
CN103945062B (en) * 2014-04-16 2017-01-18 华为技术有限公司 User terminal volume adjusting method, device and terminal
CN104158506A (en) * 2014-07-29 2014-11-19 腾讯科技(深圳)有限公司 Method, device and terminal for adjusting volume
CN104618829A (en) * 2014-12-29 2015-05-13 歌尔声学股份有限公司 Adjusting method of earphone environmental sound and earphone
CN106285083B (en) * 2015-05-12 2019-02-15 国网浙江省电力公司 A kind of substation's noise-reduction method
WO2017027397A2 (en) * 2015-08-07 2017-02-16 Cirrus Logic International Semiconductor, Ltd. Event detection for playback management in an audio device
CN105530581A (en) * 2015-12-10 2016-04-27 安徽海聚信息科技有限责任公司 Smart wearable device based on voice recognition and control method thereof
CN105611443B (en) * 2015-12-29 2019-07-19 歌尔股份有限公司 A kind of control method of earphone, control system and earphone
CN106678552B (en) * 2017-01-05 2019-03-26 北京埃德尔黛威新技术有限公司 A kind of novel leakage method for early warning
CN106792315B (en) * 2017-01-05 2023-11-21 歌尔科技有限公司 Method and device for counteracting environmental noise and active noise reduction earphone
CN107105359B (en) * 2017-06-02 2019-10-18 歌尔科技有限公司 A kind of switching earphone operating mode method and a kind of earphone
CN107484058A (en) * 2017-09-26 2017-12-15 联想(北京)有限公司 Headphone device and control method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3627492A1 (en) * 2018-09-21 2020-03-25 Panasonic Intellectual Property Management Co., Ltd. Noise reduction device, noise reduction system, and sound field controlling method
EP3869821A1 (en) * 2020-02-20 2021-08-25 Beijing Xiaoniao Tingting Technology Co., Ltd Signal processing method and device for earphone, and earphone
US11302298B2 (en) 2020-02-20 2022-04-12 Beijing Xiaoniao Tingting Technology Co., LTD. Signal processing method and device for earphone, and earphone
EP3869821B1 (en) * 2020-02-20 2022-10-26 Beijing Xiaoniao Tingting Technology Co., Ltd Signal processing method and device for earphone, and earphone
WO2022132721A1 (en) * 2020-12-15 2022-06-23 Google Llc Ambient detector for dual mode anc
US11468875B2 (en) 2020-12-15 2022-10-11 Google Llc Ambient detector for dual mode ANC
US11887576B2 (en) 2020-12-15 2024-01-30 Google Llc Ambient detector for dual mode ANC

Also Published As

Publication number Publication date
WO2019141102A1 (en) 2019-07-25
CN110049403A (en) 2019-07-23
EP3672274A4 (en) 2021-05-05

Similar Documents

Publication Publication Date Title
EP3672274A1 (en) Adaptive audio control device and method based on scenario identification
US10979814B2 (en) Adaptive audio control device and method based on scenario identification
KR102354215B1 (en) Ambient sound enhancement and acoustic noise cancellation based on context
CN102124758B (en) Hearing aid, hearing assistance system, walking detection method, and hearing assistance method
KR101542027B1 (en) Headset communication method under a strong-noise environment and headset
CN103959813B (en) Earhole Wearable sound collection device, signal handling equipment and sound collection method
CN106507258B (en) Hearing device and operation method thereof
CN109348327B (en) Active noise reduction system
CN105530580A (en) Hearing system
EP2617127B1 (en) Method and system for providing hearing assistance to a user
CN109310525A (en) Media compensation passes through and pattern switching
CN104754462A (en) Automatic regulating device and method for volume and earphone
US10643597B2 (en) Method and device for generating and providing an audio signal for enhancing a hearing impression at live events
EP4165883A1 (en) Synchronized mode transition
JP2016110050A (en) Voice processor, voice clearing device, and voice processing method
US20230169948A1 (en) Signal processing device, signal processing program, and signal processing method
US20220303687A1 (en) Headset capable of compensating for wind noise
WO2015044000A1 (en) Device and method for superimposing a sound signal
WO2022230275A1 (en) Information processing device, information processing method, and program
EP4007300B1 (en) Controlling audio output
WO2024170321A1 (en) Adaptive dynamic range control
JP2024146441A (en) Information processing device, method, program and system
CN117425812A (en) Audio signal processing method and device, storage medium and vehicle

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200317

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20210406

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 1/10 20060101AFI20210329BHEP

Ipc: H04R 3/00 20060101ALI20210329BHEP

Ipc: H04R 5/04 20060101ALI20210329BHEP

Ipc: H04R 1/22 20060101ALI20210329BHEP

Ipc: G10K 11/178 20060101ALN20210329BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20210823