
CN112019974B - Media system and method for adapting to hearing loss - Google Patents


Info

Publication number
CN112019974B
CN112019974B (application CN202010482726.9A)
Authority
CN
China
Prior art keywords
audio
personal
gain
level
hearing loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010482726.9A
Other languages
Chinese (zh)
Other versions
CN112019974A (en)
Inventor
J·伍德鲁夫
Y·阿茨米
I·M·费思齐
夏静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/872,068 external-priority patent/US11418894B2/en
Application filed by Apple Inc filed Critical Apple Inc
Priority to CN202210104034.XA priority Critical patent/CN114422934A/en
Publication of CN112019974A publication Critical patent/CN112019974A/en
Application granted granted Critical
Publication of CN112019974B publication Critical patent/CN112019974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G5/00 Tone control or bandwidth control in amplifiers
    • H03G5/16 Automatic control
    • H03G5/165 Equalizers; Volume or gain control in limited frequency bands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/307 Frequency adjustment, e.g. tone control
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/04 Time compression or expansion
    • G10L21/057 Time compression or expansion for improving intelligibility
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/12 Audiometering
    • A61B5/121 Audiometering evaluating hearing capacity
    • A61B5/123 Audiometering evaluating hearing capacity subjective methods
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/46 Volume control
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G5/00 Tone control or bandwidth control in amplifiers
    • H03G5/005 Tone control or bandwidth control in amplifiers of digital signals
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G5/00 Tone control or bandwidth control in amplifiers
    • H03G5/02 Manually-operated control
    • H03G5/025 Equalizers; Volume or gain control in limited frequency bands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/60 Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
    • H04N5/602 Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for digital sound signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/04 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception comprising pocket amplifiers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008 Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/046 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for differentiation between music and non-music signals, based on the identification of musical parameters, e.g. based on tempo detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00 Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/041 Adaptation of stereophonic signal reproduction for the hearing impaired
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/11 Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Multimedia (AREA)
  • Neurosurgery (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a media system and a method for adapting to hearing loss, and in particular to a media system and a method of using the media system to adapt to a user's hearing loss. The method includes selecting a personal level and frequency dependent audio filter corresponding to a hearing loss curve of the user. The personal level and frequency dependent audio filter may be one of several level and frequency dependent audio filters having respective average gain levels and respective gain profiles. An adapted audio output signal may be generated by applying the personal level and frequency dependent audio filter to an audio input signal to enhance the signal based on its input level and input frequency. The audio output signal may be played by an audio output device to deliver speech or music that the user perceives clearly despite the user's hearing loss. Other aspects are described and claimed.

Description

Media system and method for adapting to hearing loss
Technical Field
Aspects related to media systems having audio capabilities are disclosed. More particularly, aspects related to a media system for playing audio content to a user are disclosed.
Background
An audio-enabled device, such as a laptop, tablet, or other mobile device, can deliver audio content to a user. For example, a user may listen to audio content played by the device. The audio content may be pre-stored content played to the user through a speaker, such as music files, podcasts, or virtual assistant messages. Alternatively, the audio content may be real-time content, such as audio from a telephone call or video conference.
Noise exposure, aging, or other factors can cause an individual to experience hearing loss. Hearing loss curves vary widely among individuals and may be present even in persons who have never been diagnosed with a hearing impairment. That is, each individual may have some frequency-dependent loudness perception that differs from normal. Such differences vary widely throughout the population and correspond to a spectrum of hearing loss curves across the population. Given that each individual hears sounds differently, each individual may have a different experience of audio content that is rendered to several people in the same manner. For example, a person with substantial hearing loss at a particular frequency may perceive playback of audio content containing substantial components at that frequency as muffled. In contrast, a person without hearing loss at that frequency may perceive playback of the same audio content as clear.
An individual may adjust an audio-enabled device to modify playback of audio content in order to enhance the user's experience. For example, a person with substantial hearing loss at a particular frequency may adjust the overall level of audio signal volume to increase the loudness of the reproduced audio. Such adjustments may be made in the hope that the modified playback will compensate for the person's hearing loss.
Disclosure of Invention
Adjusting the playback volume as described above may not compensate for hearing loss in a personalized manner. For example, increasing the overall level of the audio signal increases loudness, but it does so across the entire audible frequency range, regardless of whether the user experiences hearing loss across that entire range. The result of such broadband level adjustment can be an unpleasantly loud and disconcerting listening experience for the user.
A media system and a method of using the media system to accommodate a hearing loss of a user are described. In one aspect, the media system performs the method by: selecting an audio filter (e.g., a level and frequency dependent audio filter) from a number of audio filters (e.g., a number of level and frequency dependent audio filters); and applying the audio filter to the audio input signal to generate an audio output signal that can be played back to the user. The audio filter may be a personal audio filter, for example a personal level and frequency dependent audio filter corresponding to a hearing loss curve of the user.
The media system may select the personal level and frequency dependent audio filter from among level and frequency dependent audio filters corresponding to respective preset hearing loss curves. Each such filter compensates for its preset hearing loss curve because the filter's average gain level and gain profile correspond to the average loss level and loss profile of that curve. The personal level and frequency dependent audio filter may amplify the audio input signal based on the signal's input level and input frequency, and thus the user may perceive sound from the reproduced audio output signal normally (rather than as muffled, as would be the case if an uncorrected audio input signal were played).
The selection of the personal level and frequency dependent audio filter can be made through a short and straightforward enrollment process. In an aspect, a first audio signal is output during a first stage of the enrollment process using one or more predetermined gain levels, or using a first set of level and frequency dependent audio filters having different average gain levels. The first audio signal may be played back so that the user experiences audio content (e.g., speech) at different loudnesses, and the user may select an audible or preferred loudness. More specifically, the media system receives a selection of a personal average gain level in response to outputting the first audio signal. The selection may indicate that the first audio signal (e.g., a speech signal) is output at a level audible to the user, or at a preferred loudness. The media system may select the personal level and frequency dependent audio filter based in part on that filter having the personal average gain level. For example, the average gain level of the personal level and frequency dependent audio filter may be equal to the personal average gain level.
In an aspect, a second set of level and frequency dependent audio filters having different gain profiles is used to output a second audio signal during a second stage of the enrollment process. The second set may be selected for exploration based on the user's selections during the first stage. For example, each level and frequency dependent audio filter in the second set may have the personal average gain level corresponding to the audibility selection made during the first stage. The second audio signal may be played back so that the user experiences audio content (e.g., music) at different timbre or tone settings and selects a preferred setting. More specifically, the media system receives a selection of a personal gain profile in response to outputting the second audio signal. The media system may select the personal level and frequency dependent audio filter based in part on that filter having the personal gain profile. For example, the gain profile of the personal level and frequency dependent audio filter may be equal to the personal gain profile.
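The two-stage enrollment flow described above can be sketched as a simple selection procedure. The filter names, gain values, and the `user_prefers()` callback below are illustrative assumptions, not the actual implementation:

```python
# Hypothetical sketch of the two-stage enrollment flow. Filter names,
# gain values, and the user_prefers() callback are illustrative only.

# Preset filters indexed by (average gain level in dB, gain profile name).
PRESET_FILTERS = {
    (10, "flat"):   "filter_A",
    (10, "sloped"): "filter_B",
    (20, "flat"):   "filter_C",
    (20, "sloped"): "filter_D",
}

def enroll(user_prefers):
    """Run the two-stage enrollment.

    user_prefers(option_a, option_b) -> True if the user picks option_a
    in a binary A/B comparison of the two playback variants.
    """
    # Stage 1: play a speech sample at different average gain levels and
    # keep the one the user finds audible or preferred.
    levels = [10, 20]
    personal_level = levels[0] if user_prefers(levels[0], levels[1]) else levels[1]

    # Stage 2: explore only filters that share the chosen average gain
    # level, varying the gain profile (timbre/tone).
    profiles = ["flat", "sloped"]
    personal_profile = (profiles[0]
                        if user_prefers(profiles[0], profiles[1])
                        else profiles[1])

    # The personal filter has both the personal average gain level and
    # the personal gain profile.
    return PRESET_FILTERS[(personal_level, personal_profile)]

# Example: a user who always picks the second option of each pair.
print(enroll(lambda a, b: False))  # -> filter_D
```

The key design point the sketch illustrates is that stage 2 only searches among filters that already have the stage-1 average gain level, so each stage is a small set of binary comparisons rather than a full audiometric test.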
In an aspect, the enrollment process may modify the first and second audio signals for playback using a level and frequency dependent audio filter corresponding to a preset hearing loss curve. For example, an audio filter corresponding to the most common hearing loss curve in a population of people may be used. The audio filter may alternatively correspond to a hearing loss curve from a population of people that is closely related to the user's audiogram. For example, the media system may receive a personal audiogram of a user and may determine, based on the personal audiogram, a number of preset hearing loss curves encompassing the hearing loss curve of the user represented by the audiogram. The media system may then determine level and frequency dependent audio filters corresponding to the determined hearing loss profile, and may use those audio filters during the presentation of audio in the first or second stage of the enrollment process.
Alternatively, the media system may select the personal level and frequency dependent audio filter directly from the user's audiogram, without the enrollment process. For example, the media system may receive a personal audiogram of the user and may select, based on the personal audiogram, the preset hearing loss curve that most closely matches the user's hearing loss curve represented by the audiogram. For example, the personal audiogram may indicate that the user has a particular average hearing loss level and loss profile, and the media system may select a preset hearing loss curve that fits the audiogram. The media system may then determine the level and frequency dependent audio filter corresponding to that hearing loss curve, e.g., a filter having an average gain level corresponding to the average hearing loss level of the audiogram and/or a gain profile corresponding to the loss profile. The media system may use this audio filter as the personal level and frequency dependent audio filter to enhance the audio input signal and compensate for the user's hearing loss during playback.
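One plausible way to match an audiogram to a preset hearing loss curve is a nearest-neighbor comparison over each curve's average loss level and high-frequency slope. The preset values, the slope metric, and the distance measure below are assumptions for illustration; the patent does not specify this exact math:

```python
import math

# Audiogram: hearing loss in dB HL at standard test frequencies (Hz).
# The values here are an illustrative mild sloping loss.
audiogram = {250: 15, 500: 20, 1000: 25, 2000: 35, 4000: 45, 8000: 50}

# Hypothetical preset hearing loss curves characterized by
# (average loss in dB, high-frequency slope in dB per octave).
PRESETS = {
    "mild_flat":     (20, 0),
    "mild_sloping":  (30, 10),
    "moderate_flat": (45, 0),
}

def characterize(gram):
    """Reduce an audiogram to (average loss, dB-per-octave slope)."""
    freqs = sorted(gram)
    avg = sum(gram.values()) / len(gram)
    # Rough slope: dB change per octave across the measured range.
    octaves = math.log2(freqs[-1] / freqs[0])
    slope = (gram[freqs[-1]] - gram[freqs[0]]) / octaves
    return avg, slope

def best_preset(gram):
    """Pick the preset curve closest in (average, slope) space."""
    avg, slope = characterize(gram)
    return min(PRESETS,
               key=lambda name: (PRESETS[name][0] - avg) ** 2
                              + (PRESETS[name][1] - slope) ** 2)

print(best_preset(audiogram))  # -> mild_sloping
```

The corresponding level and frequency dependent audio filter would then be the one whose average gain level and gain profile compensate the selected preset curve.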
The above summary does not include an exhaustive list of all aspects of the present invention. It is contemplated that the invention includes all systems and methods that can be practiced from all suitable combinations of the various aspects summarized above, as well as those disclosed in the detailed description below and particularly pointed out in the claims filed with this patent application. Such combinations have particular advantages not specifically recited in the above summary.
Drawings
Fig. 1 is an illustration of a media system in accordance with an aspect.
Fig. 2 is a graph of a loudness curve for an individual with sensorineural hearing loss according to an aspect.
Fig. 3 is a graph of amplification required to normalize loudness perceived by individuals with different hearing loss curves, according to an aspect.
Fig. 4 is an illustration of a personal level and frequency dependent audio filter applied to an audio input signal to accommodate a hearing loss of a user, according to an aspect.
FIG. 5 is an illustration of an audiogram of a user according to an aspect.
Fig. 6-8 are diagrams of hearing loss curves according to an aspect.
Fig. 9 is a multi-band compression gain representation of a level and frequency dependent audio filter corresponding to a hearing loss curve, according to an aspect.
Fig. 10 is a flow diagram of a method of enhancing an audio input signal to accommodate hearing loss according to an aspect.
FIG. 11 is an illustration of a user interface for controlling output of a first audio signal, according to an aspect.
Fig. 12 is an illustration of selection of a level and frequency dependent audio filter bank for exploration in a second stage of an enrollment process, according to an aspect.
FIG. 13 is an illustration of a user interface for controlling output of a second audio signal in accordance with an aspect.
Fig. 14A-14B are diagrams of selection of level and frequency dependent audio filters with different gain profiles according to an aspect.
Fig. 15 is a flow diagram of a method of selecting a personal level and frequency dependent audio filter having a personal average gain level and a personal gain profile, according to an aspect.
FIG. 16 is an illustration of a user interface for controlling output of a first audio signal, according to an aspect.
Fig. 17A-17B are diagrams of selection of level and frequency dependent audio filters with different average gain levels according to an aspect.
FIG. 18 is an illustration of a user interface for controlling output of a second audio signal, according to an aspect.
Fig. 19A-19B are diagrams of selection of level and frequency dependent audio filters with different gain profiles according to an aspect.
Fig. 20 is a flow diagram of a method of selecting a personal level and frequency dependent audio filter having a personal average gain level and a personal gain profile, according to an aspect.
Fig. 21A-21B are a flow diagram and a diagram, respectively, of a method of determining several hearing loss curves based on a personal audiogram according to an aspect.
Fig. 22A-22B are a flow chart and a diagram, respectively, of a method of determining a hearing loss profile of an individual based on an audiogram of the individual according to an aspect.
Fig. 23 is a block diagram of a media system in accordance with an aspect.
Detailed Description
This application claims priority to U.S. provisional patent application No. 62/855,951, filed on June 1, 2019, which is incorporated herein by reference.
Aspects describe a media system and a method of using the media system to accommodate a hearing loss of a user. The media system may include a mobile device (such as a smartphone) and an audio output device (such as a headset). However, the mobile device may be another device for presenting audio to a user, such as a desktop computer, laptop computer, tablet computer, or smart watch, and the audio output device may be another type of device, such as headphones, earphones, or computer speakers, to name a few possible applications.
In various aspects, the description makes reference to the accompanying drawings. However, certain aspects may be practiced without one or more of these specific details or in combination with other known methods and configurations. In the following description, numerous specific details are set forth, such as specific configurations, dimensions, and procedures, in order to provide a thorough understanding of the aspects. In other instances, well-known processes and manufacturing techniques have not been described in particular detail in order not to unnecessarily obscure the description. Reference throughout this specification to "one aspect," "an aspect," or the like, means that a particular feature, structure, configuration, or characteristic described is included in at least one aspect. Thus, the appearances of the phrases "in one aspect," "in an aspect," and the like, in various places throughout this specification are not necessarily all referring to the same aspect. Furthermore, the particular features, structures, configurations, or characteristics may be combined in any suitable manner in one or more aspects.
The use of relative terms throughout the description may refer to relative positions or directions. For example, "in front of" may indicate a first direction away from a reference point. Similarly, "behind" may indicate a position in a second direction away from the reference point and opposite the first direction. However, such terms are provided to establish a relative frame of reference, and are not intended to limit the use or orientation of the media system to the particular configurations described in the various aspects below.
In one aspect, a media system is used to accommodate a user's hearing loss. The media system may compensate for the user's mild or moderate hearing loss profile. Furthermore, the compensation may be personalized, meaning that it adjusts the audio input signal in a level-dependent and frequency-dependent manner based on the individual's unique hearing preferences, rather than merely adjusting the balance or overall level of the audio input signal. The media system may personalize the audio tuning based on selections made during a brief and straightforward enrollment process. During the enrollment process, the user may experience sound from several audio signals filtered in different ways, and may make binary selections to choose a personal audio setting based on a subjective assessment or comparison of those experiences. The personal audio setting includes the average gain level and gain profile of the preferred audio filter. Once the user has selected the personal audio setting, the media system may generate an audio output signal by applying a personal level and frequency dependent audio filter having the personal audio setting to amplify the audio input signal based on its input level and input frequency. Playback of the audio output signal may deliver speech or music that is clear to the user despite the user's hearing loss.
Referring to FIG. 1, a diagram of a media system is shown, according to an aspect. The media system 100 may be used to deliver audio to a user. The media system 100 may include an audio signal device 102 that outputs and/or transmits an audio output signal and an audio output device 104 that converts the audio output signal (or a signal derived from the audio output signal) into sound. In an aspect, the audio signal device 102 is a smartphone. However, the audio signal device 102 may include other types of audio-enabled devices, such as laptop computers, tablet computers, smart watches, and televisions. In one aspect, the audio output device 104 is a headset (wired or wireless). However, the audio output device 104 may include other types of devices, such as headphones or loudspeakers. The audio output device 104 may also be an internal or external speaker of the audio signal device 102, such as a speaker of a smartphone, laptop computer, tablet computer, smart watch, or television. In any case, the media system 100 may include hardware, such as one or more processors and memory, that enables the media system 100 to perform the method of enhancing an audio input signal to accommodate a hearing loss of a user. More specifically, the media system 100 may provide personalized media enhancement by applying the user's personalized audio filter to the audio input signal, enabling playback of audio content that is tailored to the user's hearing preferences and/or hearing abilities.
Referring to fig. 2, a graph of a loudness curve of an individual with sensorineural hearing loss according to an aspect is shown. Sensorineural hearing loss is the most common type of hearing loss; however, other types exist, such as conductive hearing loss. Individuals with sensorineural hearing loss have a higher audibility threshold than normal-hearing listeners but experience uncomfortably loud levels at similar sound pressure levels. The loudness curves of individuals with conductive hearing loss differ: such individuals have both a higher audibility threshold and a higher uncomfortable loudness level than their normal-hearing counterparts. The loudness level curve 200 is used by way of example.
The hearing preference and/or hearing ability of the user is frequency dependent and level dependent. Higher sound pressure levels are required in the ears of individuals with hearing impairment to reach the same perceived loudness as individuals with less hearing loss. The graph shows a loudness level curve 200 that describes perceived loudness (PHON) as a function of Sound Pressure Level (SPL) for several individuals at a particular frequency (e.g., 1 kHz). The curve 202 has a slope of 1:1 and has an origin of zero because the loudness unit (e.g., 50PHON) is defined as the loudness of the 1kHz tone of the corresponding SPL (e.g., 50dB SPL) as perceived by a normal-hearing listener. In contrast, an individual with impaired hearing 204 does not perceive loudness until the sound pressure level reaches a threshold level. For example, when an individual has a 60dB hearing loss, the individual will not perceive loudness until the sound pressure level reaches 60 dB.
Referring to fig. 3, a graph illustrating the amplification required to normalize loudness perceived by individuals with different hearing loss curves according to one aspect is shown. To compensate for an individual's hearing loss, a gain may be applied to the input signal to raise the sound pressure level in the ear of the individual with the hearing loss. The graph shows a gain curve 302 that describes the gain required to match normal-hearing loudness as a function of sound pressure level, for an individual having the loudness level curve of fig. 2. At a particular frequency, an individual 202 with normal hearing needs no amplification, since that individual already perceives normal loudness at all sound pressure levels. In contrast, an individual with impaired hearing 204 requires substantial amplification at low sound pressure levels in order to perceive sounds presented below the threshold level of fig. 2 (e.g., below 60 dB).
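A minimal model consistent with Figs. 2-3 assumes the uncomfortable loudness level U is the same for the impaired and normal-hearing listener, and maps the normal input range [0, U] dB SPL linearly onto the impaired audible range [H, U], where H is the threshold shift in dB at the frequency of interest. The linear mapping and the 100 dB value for U are illustrative assumptions, not the patent's exact gain rule:

```python
# Sketch of the gain curve of Fig. 3 under a linear loudness-mapping
# assumption: input SPL range [0, U] is mapped onto [H, U], where H is
# the hearing loss (threshold shift) in dB. All values are illustrative.

def required_gain(input_spl, loss_db, uncomfortable_db=100.0):
    """Gain (dB) to add so the impaired listener perceives normal loudness."""
    if loss_db <= 0:
        return 0.0                       # normal hearing: no amplification
    input_spl = min(max(input_spl, 0.0), uncomfortable_db)
    target = loss_db + input_spl * (uncomfortable_db - loss_db) / uncomfortable_db
    return target - input_spl            # shrinks to 0 dB as input nears U

# A 60 dB loss needs large gain for quiet sounds and none for loud ones.
for spl in (0, 30, 60, 100):
    print(spl, round(required_gain(spl, loss_db=60.0), 1))
```

The sketch reproduces the qualitative shape of gain curve 302: maximum gain (equal to the loss) near threshold, decreasing monotonically to zero at the uncomfortable level.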
The amount of amplification required to compensate for an individual's hearing loss decreases as the sound pressure level increases. More specifically, the amount of amplification required depends on both the frequency and the input signal level. That is, when the input signal level of the audio input signal produces a higher sound level at a given frequency, less amplification is required to compensate for the hearing loss at that frequency. Similarly, the hearing loss of an individual is frequency dependent, so the loudness level curve and the gain curve may differ at another frequency (e.g., 2 kHz). By way of example, if the gain curve of an individual with impaired hearing shifts upward (more hearing loss at 2 kHz than at 1 kHz), more amplification is required to perceive sound normally at that frequency. Therefore, when the audio input signal has a component at that frequency (2 kHz), greater amplification is required to compensate for the hearing loss at that frequency. The method of adjusting an audio input signal by amplifying it based on both the input level and the input frequency of the audio input signal may be referred to herein as multi-band upward compression.
Multi-band upward compression can achieve the desired enhancement of audio content by bringing sounds that are imperceptible, or perceived as too quiet, into the audible range without adjusting sounds that are already perceived at a full or normal loudness. In other words, multi-band upward compression may enhance the audio input signal in a level-dependent and frequency-dependent manner to allow the hearing-impaired individual to perceive sound normally. Normalizing the loudness level curve of a hearing-impaired individual avoids over- or under-amplification at certain levels or frequencies, and thus avoids the problems associated with simply turning up the volume and amplifying the audio input signal across the entire audible frequency range.
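A minimal sketch of multi-band upward compression, assuming an FFT-based band split and a caller-supplied gain rule (the disclosure does not specify a particular filterbank; the function names and the gain rule are illustrative):

```python
import numpy as np

def band_level_db(band):
    """Estimate a band's level from its RMS (illustrative; any level
    detector could be used)."""
    rms = np.sqrt(np.mean(band ** 2)) + 1e-12
    return 20.0 * np.log10(rms)

def multiband_upward_compress(x, sr, bands, gain_db_for):
    """Split x into frequency bands, apply a level- and frequency-dependent
    gain to each band, and recombine. bands: list of (lo_hz, hi_hz);
    gain_db_for(level_db, band_index) returns the gain in dB, which should
    be larger for quiet bands (upward compression)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    y = np.zeros_like(x)
    for i, (lo, hi) in enumerate(bands):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, X, 0.0), n=len(x))
        y += band * 10.0 ** (gain_db_for(band_level_db(band), i) / 20.0)
    return y
```

A quiet 1 kHz tone processed with a rule that adds 6 dB to bands below a threshold level comes out roughly twice as large, while loud bands would pass through unchanged.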
Referring to fig. 4, a diagram of a personal level and frequency dependent audio filter applied to an audio input signal to accommodate a hearing loss of a user according to an aspect is shown. In view of the above discussion, it should be appreciated that the media system 100 can accommodate the hearing loss of an individual by applying the personal level and frequency dependent audio filter 402 to the audio input signal 404. The personal level and frequency dependent audio filter 402 may convert an audio input signal 404 into an audio output signal 406 that is perceived normally by the individual. By way of example, the audio input signal 404 may represent speech in a telephone call, music in a music track, speech from a virtual assistant, or other audio content. When reproduced without multi-band upward compression, certain frequencies of sound may be perceived normally (indicated by the solid lead lines) while other frequencies may be perceived as too quiet or not at all (indicated by the dashed and dotted lead lines of different densities). In contrast, after applying the personal level and frequency dependent audio filter 402 to the audio input signal 404, the generated audio output signal 406 may contain sounds that are perceived normally at those frequencies (indicated by the solid lead lines). Thus, the personal level and frequency dependent audio filter 402 may restore details of speech, music, and other audio content to enhance the sound played back to the user by the audio output device 104.
Referring to fig. 5, a diagram of an audiogram of a user according to an aspect is shown. To understand how the personal level and frequency dependent audio filter 402 may be selected or determined for enhancing the audio input signal 404, it is helpful to understand how the user's hearing loss profile may be identified and mapped to a user-specific multi-band compression filter. In an aspect, the user's personal audiogram 500 may include one or more audiogram curves representing audible thresholds as a function of frequency. For example, the first audiogram curve 502a may represent an audible threshold for a right ear of the user, and the second audiogram curve 502b may represent an audible threshold for a left ear of the user. The personal audiogram 500 may be determined using known techniques. In an aspect, an average hearing loss 504 may be determined from one or both of the audiogram curves 502a, 502b. For example, in the illustrated example, the average hearing loss 504 of the two curves may be 30 dB. Thus, the personal audiogram 500 indicates both an average hearing loss and a frequency-dependent hearing loss for the user across a person's primary audible range (e.g., between 500 Hz and 8000 Hz). It should be noted that the primary audible range mentioned herein may be narrower than the full human audible range (commonly taken as 20 Hz to 20 kHz).
Figs. 6 to 8 comprise diagrams of hearing loss curves of a population. As described below, each hearing loss curve may have a combination of level and profile parameters. The level parameter of a hearing loss curve may indicate the average hearing loss as determined by pure tone audiometry. The profile parameter may indicate the variation of hearing loss over the audible frequency range, e.g., whether the hearing loss is more pronounced at certain frequencies. The hearing loss curves shown in figs. 6 to 8 may be grouped according to level parameters and profile parameters. In an aspect, the hearing loss curves are the most common curves of hearing loss present in a population, based on an analysis of real audiograms. More specifically, each hearing loss curve may represent a common audiogram, with unique level and profile parameters, within a three-dimensional space of audiograms.
Fig. 6 shows a first set 602 of hearing loss curves. The hearing loss curves in the first set 602 may have level parameters corresponding to listeners with mild hearing loss. For example, the average hearing loss 604 of the first set 602 of curves may be 20 dB. More specifically, each of the hearing loss curves contained within the first set 602 may have the same average hearing loss 604. However, the hearing loss curves may differ in shape.
In an aspect, the first set 602 may include hearing loss curves having different profile parameters. The profile parameters may include a flat loss profile 606, a notched loss profile 608, and a sloped loss profile 610. Each profile shape may exhibit its most significant hearing loss at different frequencies. For example, the flat loss profile 606 may have more hearing loss at low-band frequencies (e.g., at 500 Hz) than the notched loss profile 608 or the sloped loss profile 610. In contrast, the notched loss profile 608 may have more hearing loss at mid-band frequencies (e.g., at 4 kHz) than the flat loss profile 606 or the sloped loss profile 610. The sloped loss profile 610 may have more hearing loss at high-band frequencies (e.g., at 8 kHz) than the flat loss profile 606 or the notched loss profile 608.
Other general differences between the hearing loss curve shapes are possible. For example, the flat loss profile 606 may have the least variation in hearing loss across frequency, as compared to the notched loss profile 608 and the sloped loss profile 610. That is, the flat loss profile 606 exhibits a more consistent hearing loss at each frequency. In addition, within the same curve, the notched loss profile 608 may have more hearing loss at mid-band frequencies than at other frequencies.
Fig. 7 shows a diagram of a second set 702 of hearing loss curves. The average hearing loss of the sets of hearing loss curves may increase sequentially from fig. 6 to fig. 8. More specifically, the hearing loss curves in the second set 702 may have level parameters corresponding to listeners with mild-to-moderate hearing loss. For example, the average hearing loss 704 of the second set 702 may be 35 dB. However, the hearing loss curves of the second set 702 may have different profile parameters, such as a flat loss profile 706, a notched loss profile 708, and a sloped loss profile 710. Due to the regularity of hearing loss across the population, the profiles within each level group may be related in shape to those of the other groups. More specifically, the loss profiles 706-710 may share the general shape differences described above with respect to the loss profiles 606-610, although the shapes may not be scaled identically. For example, the notched loss profile 708 may have the greatest loss at mid-band frequencies compared to the other loss profiles of fig. 7, yet the maximum loss of the notched loss profile 708 itself may occur at a high-band frequency (compared to a mid-band frequency in fig. 6). Thus, the hearing loss curves of fig. 7 may represent the most common hearing loss curves for persons with mild-to-moderate hearing loss in a population.
Fig. 8 shows a diagram of a third set 802 of hearing loss curves. The average hearing loss 804 of the third set 802 may be higher than the average hearing loss 704 of the second set 702. The average hearing loss of the third set 802 may represent moderate hearing loss. For example, the average hearing loss 804 may be 50 dB. As with the other sets, the hearing loss curves of the third set 802 may differ in shape and may include a flat loss profile 806, a notched loss profile 808, and a sloped loss profile 810. The loss profiles 806-810 may share the general shape differences described above with respect to the loss profiles 606-610 and 706-710. Thus, the hearing loss curves of fig. 8 may represent the most common hearing loss curves for persons with moderate hearing loss in a population.
The hearing loss curves shown in figs. 6 to 8 represent nine presets of hearing loss curves stored by the media system 100. More specifically, the media system 100 may store any number of hearing loss curve presets drawn from the three-dimensional audiogram space described above. Each preset may have a combination of level and profile parameters that can be compared to the personal audiogram 500. One of the nine presets of the sets 602, 702, and 802 may be similar to the personal audiogram 500. For example, upon visual inspection, it is apparent that the personal audiogram 500 of fig. 5 has an average hearing loss level closest to that of the second set 702 (30 dB compared to 35 dB) and exhibits a shape closely related to the flat loss profile 706. Accordingly, the flat loss profile 706 may be identified as the personal hearing loss curve of a user having the personal audiogram 500.
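The visual matching described above can be approximated programmatically, e.g., by a nearest-neighbor search over the stored presets. The nine preset curves below are illustrative placeholders (the disclosure does not publish the actual preset values), sampled at 500, 1000, 2000, 4000, and 8000 Hz:

```python
import numpy as np

# Hypothetical presets: (level group, profile shape) -> hearing loss in dB
# at [500, 1000, 2000, 4000, 8000] Hz. Values are illustrative only.
PRESETS = {
    ("mild", "flat"):             [20, 20, 20, 20, 20],
    ("mild", "notched"):          [12, 15, 22, 35, 16],
    ("mild", "sloped"):           [ 8, 12, 20, 28, 32],
    ("mild-moderate", "flat"):    [35, 35, 35, 35, 35],
    ("mild-moderate", "notched"): [25, 30, 38, 50, 32],
    ("mild-moderate", "sloped"):  [20, 26, 35, 44, 50],
    ("moderate", "flat"):         [50, 50, 50, 50, 50],
    ("moderate", "notched"):      [40, 45, 52, 65, 48],
    ("moderate", "sloped"):       [34, 40, 50, 60, 66],
}

def match_preset(audiogram_db):
    """Return the preset hearing loss curve closest to a personal audiogram
    by Euclidean distance over the primary audible range."""
    a = np.asarray(audiogram_db, dtype=float)
    return min(PRESETS, key=lambda k: np.linalg.norm(a - np.asarray(PRESETS[k])))
```

A roughly flat audiogram averaging about 30 dB, as in fig. 5, lands on the mild-to-moderate flat preset, mirroring the selection of the flat loss profile 706 above.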
The comparison between the audiogram and the hearing loss curves described above is introduced by way of example and will be referred to again below with respect to figs. 21 to 22. At this point, the example illustrates the concept that each individual may have an actual hearing loss (as represented by an audiogram) that closely matches a common hearing loss curve (as determined from a population and stored as a preset within the media system 100). To compensate for the actual hearing loss, the media system 100 may apply a personal level and frequency dependent audio filter 402 that corresponds to, and compensates for, the closely matched hearing loss curve.
Referring to fig. 9, a diagram representing a multi-band compression gain table corresponding to a level and frequency dependent audio filter of a hearing loss curve according to an aspect is shown. Each hearing loss curve may be mapped to a corresponding level and frequency dependent audio filter. For example, whichever of the hearing loss curves of the sets 602, 702, and 802 most closely matches the personal audiogram 500 may be mapped to the level and frequency dependent audio filter that serves as the personal level and frequency dependent audio filter 402. Thus, the media system 100 may store, for example in memory, a number of preset hearing loss curves and a number of level and frequency dependent audio filters corresponding to those hearing loss curves.
In an aspect, the personal level and frequency dependent audio filter 402 may be a multi-band compression gain table. The multi-band compression gain table may be user-specific to compensate for an individual's hearing loss to provide personalized media enhancement. In an aspect, the audio input signal 404 is amplified based on the input level 902 and the input frequency 904 using the personal level and frequency dependent audio filter 402. The input level 902 of the audio input signal 404 may be determined in a range spanning from a low sound pressure level to a high sound pressure level. By way of example, the audio input signal 404 may have a sound pressure level shown on the left side of the gain table, which may be, for example, 20 dB. The input frequency 904 of the audio input signal 404 may be determined within an audible frequency range. By way of example, the audio input signal 404 may have a frequency at the top of the gain table, which may be, for example, 8 kHz. Based on the input level 902 and the input frequency 904 of the audio input signal 404, the media system 100 may determine to apply a particular gain level (e.g., 30dB) to the audio input signal 404 to generate the audio output signal 406. It will be appreciated that this example is consistent with the hearing loss and gain curves of figures 2 to 3.
The gain table example of fig. 9 shows that for each hearing loss curve of a user, a corresponding level and frequency dependent audio filter may be determined or selected to compensate for the hearing loss of the user. The level and frequency dependent audio filters may define a gain level at each input frequency that inversely corresponds to the hearing loss of the individual at these frequencies. By way of example, a user having a personal audiogram 500 that matches a flat loss profile 706 within the second set 702 may have a personal level and frequency dependent audio filter 402 that amplifies the audio input signal 404 more at 8kHz than at 500 Hz. The gain applied across audible frequencies by the gain table may negate the hearing loss represented by the loss profile.
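The gain table lookup of fig. 9 can be sketched as a bilinear interpolation over input level and frequency. The table values below are illustrative, chosen only so that the example of a 20 dB input at 8 kHz yields 30 dB of gain:

```python
import numpy as np

LEVELS_DB = [20, 40, 60, 80]              # input sound pressure level rows
FREQS_HZ = [500, 1000, 2000, 4000, 8000]  # frequency band columns
GAIN_DB = np.array([                      # hypothetical gains, in dB
    [18, 20, 24, 28, 30],  # quiet inputs receive the most amplification
    [12, 14, 17, 20, 22],
    [ 6,  7,  9, 11, 12],
    [ 0,  0,  1,  2,  2],  # loud inputs need little or no amplification
])

def table_gain(level_db, freq_hz):
    """Look up the level- and frequency-dependent gain by interpolating
    first along the level axis, then along the frequency axis."""
    col = [np.interp(level_db, LEVELS_DB, GAIN_DB[:, j])
           for j in range(len(FREQS_HZ))]
    return float(np.interp(freq_hz, FREQS_HZ, col))
```

Each row of the table gives less gain than the row above it, so the mapping compresses quiet inputs upward while leaving loud inputs nearly untouched, consistent with the gain curve of fig. 3.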
Referring to fig. 10, a flow diagram of a method of enhancing an audio input signal to accommodate hearing loss according to an aspect is shown. The media system 100 may perform the method to provide personalized enhancement of audio content. At operation 1002, the one or more processors of the media system 100 may select a personal level and frequency dependent audio filter 402 from a number of level and frequency dependent audio filters corresponding to respective hearing loss curves. This selection process may be performed in various ways. For example, as mentioned above and as discussed further below with respect to fig. 22, the selection may include matching the user's personal audiogram to a preset hearing loss curve. However, it is contemplated that some users of the media system 100 may not have an existing audiogram available for matching. Furthermore, even when such an audiogram is available, there may be suprathreshold differences in the loudness perception of different users. For example, two users with similar audiograms may still experience a sound pressure level at a given frequency differently, e.g., a first user may find the sound pressure level comfortable while a second user finds it uncomfortable. Thus, it may be beneficial to personalize the audio filter selection to the user rather than relying solely on audiogram data. More specifically, the user may have preferences that the audiogram data does not fully capture, and therefore it may be beneficial to allow the user to select from different level and frequency dependent audio filters that do not necessarily exactly match the personal audiogram.
In an aspect, a convenient and noise-robust registration procedure may be used to drive the selection of a personal level and frequency dependent audio filter adapted to the hearing preferences of the user. The registration procedure may play back one or more audio signals modified by one or more predetermined gain levels and/or by one or more level and frequency dependent audio filters corresponding to predetermined most common hearing loss curves of a population. The user may make selections during the registration procedure, e.g., selecting one or more of the level and frequency dependent audio filters, and from the user selections the media system 100 may determine and/or select an appropriate personal level and frequency dependent audio filter to apply to the user's audio input signal. Several implementations of the registration procedure are described below. The registration procedure may incorporate several stages, and one or more stages may differ between implementations. For example, figs. 11-15 describe a registration procedure that includes a first stage in which a user's selection indicates whether the played-back audio signal is audible, and figs. 16-20 describe a registration procedure that includes a first stage in which a user's selection indicates a preferred audio filter from a set of audio filters having different average gain levels.
Referring to fig. 11, a diagram of a user interface for controlling output of a first audio signal according to an aspect is shown. During the registration process, the media system 100 may output the first audio signal using one or more predetermined gain levels. A predetermined gain level may be a scalar gain (a wideband, frequency-independent gain) applied so that the audio signal is played back at different loudnesses for listening by the user. For example, the media system may generate the first audio signal for playback to the user by a speaker. The first audio signal may represent speech (e.g., a voice file) that includes recorded greetings spoken in languages from around the world. Speech provides good contrast between gain levels (compared to music) and therefore can facilitate selection of an appropriate average gain level during the first stage of the enrollment process.
During the first stage, the audio input signal 404 may be reproduced for the user at a first predetermined gain level. For example, the voice signal may be output at a low level (e.g., 40 dB or less). The first predetermined gain level may correspond to one of the different average hearing loss levels (e.g., level 604, 704, or 804). For example, a level of 40 dB or less is expected to be audible to a demographic having the average hearing loss level 604, but possibly not to those having the hearing loss levels 704 and 804.
During playback of the first audio signal at the first predetermined gain level, the user may select either the audibility selection element 1102 or the inaudibility selection element 1104 of a graphical user interface displayed on the audio signal device 102 of the media system 100. More specifically, after listening to the first setting, the user may make a selection indicating whether the output audio signal has a loudness audible to the user. The user may select the audibility selection element 1102 to indicate that the output level is audible. In contrast, the user may select the inaudibility selection element 1104 to indicate that the output level is inaudible.
After selecting either the audibility selection element 1102 or the inaudibility selection element 1104, the user may select the selection element 1106 to provide the selection to the system. When the system receives a selection of the audibility selection element 1102, the system can determine the personal average gain level of the user based on the selection indicating whether the output audio signal is audible to the user. For example, when the system receives a selection of the audibility selection element 1102 during the first phase of the first stage, the system may determine that the user's personal average gain level corresponds to the average hearing loss level 604 of the mild hearing loss curve set. This set of hearing loss curves can be used as a basis for further exploring the level and frequency dependent audio filters in the second stage of the registration procedure. In contrast, selection of the inaudibility selection element 1104 during the first phase may cause the registration procedure to proceed to a second phase of the first stage.
In the second phase of the first stage, the first audio signal may be played at a second amplification level. For example, the voice signal may be output at a higher level (e.g., 55 dB). After hearing the second setting, the user may select either the audibility selection element 1102 or the inaudibility selection element 1104 to indicate whether the speech signal is audible.
After selecting either the audibility selection element 1102 or the inaudibility selection element 1104, the user may select the selection element 1106 to provide the selection to the system. The system may determine the personal average gain level based on the selection indicating whether the output audio signal is audible to the user. For example, when the system receives a selection of the audibility selection element 1102 during the second phase of the first stage, the system may determine that the user's personal average gain level corresponds to the average hearing loss level 704 of the mild-to-moderate hearing loss curve set. In contrast, when the system receives a selection of the inaudibility selection element 1104 during the second phase, the system may determine that the user's personal average gain level corresponds to the average hearing loss level 804 of the moderate hearing loss curve set. In either case, the determined set of hearing loss curves can be used as a basis for further exploring the level and frequency dependent audio filters in the second stage of the registration procedure.
The first audio signal may be generated and/or output during the first stage in order of increasing gain, using the one or more predetermined gain levels. For example, as described above, as the user progresses through the first stage of the registration procedure, the first audio signal may be output at 40 dB during the first phase and then at 55 dB during the second phase. Playback of the speech signal at increasing predetermined gain levels may continue until the personal average gain level is determined. The personal average gain level may be determined from a selection of the audibility selection element 1102 or of the inaudibility selection element 1104. For example, if the user selects the audibility selection element 1102 when the voice signal is output at 55 dB, a personal average gain level corresponding to the mild-to-moderate hearing loss curves is determined. In contrast, if the user selects the inaudibility selection element 1104 after the voice signal is output at 55 dB, a personal average gain level corresponding to the moderate hearing loss curves is determined.
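The first-stage logic described above amounts to a short staircase over fixed presentation levels; a sketch (the function and label names are illustrative):

```python
def first_stage_average_gain_group(audible_at_40_db, audible_at_55_db=None):
    """Map the user's audibility selections to a hearing loss curve set.
    Speech is played at 40 dB; only if that is inaudible is it replayed
    at 55 dB (so audible_at_55_db may be None when unused)."""
    if audible_at_40_db:
        return "mild"            # average hearing loss level 604 (fig. 6)
    if audible_at_55_db:
        return "mild-moderate"   # average hearing loss level 704 (fig. 7)
    return "moderate"            # average hearing loss level 804 (fig. 8)
```

The returned group then selects which filter bank the second stage explores.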
The first audio signal may be played back at a calibrated level, and therefore volume adjustment may not be allowed during the first stage of the enrollment process. More specifically, the one or more processors of the media system 100 may disable volume adjustment of the media system 100 during output of the first audio signal. By locking the volume control of the media system 100 during the first stage of the enrollment process, the gain level that compensates for hearing loss can be held at the predetermined gain level corresponding to the common hearing loss curve being tested. Thus, these levels can be explored using predetermined levels of speech stimulation that are fixed during the assessment.
Referring to fig. 12, a diagram of selection of a level and frequency dependent audio filter bank for exploration in a second stage of a registration procedure according to an aspect is shown. The selections made during the first stage of the registration procedure determine the bank of level and frequency dependent audio filters that is available to be explored during the second stage of the registration procedure.
When the speech signal is presented at a first level (e.g., 40 dB) during the first phase of the first stage of the registration procedure, the user makes a selection to indicate whether the output audio signal is audible. Selection of the audibility selection element 1102 indicates that the first level is audible and may be referred to as a first stage audibility selection 1200. Based on the first stage audibility selection 1200, the system may determine that a zero-gain audio filter and/or a first set of level and frequency dependent audio filters (1F, 1N, and 1S) have respective average gain levels equal to the user's personal average gain level. More specifically, in response to the first stage audibility selection 1200, the system may determine that the user's personal average gain level is one of the average gain levels of the zero-gain audio filter or of the first set of level and frequency dependent audio filters (1F, 1N, and 1S). For example, the zero-gain audio filter may have an average gain level of zero, and the first set of filters may have average gain levels corresponding to the first set 602 of hearing loss curves. One or more of the audio filters may be explored during the second stage of the registration procedure to further narrow the determination, as described below.
When the speech signal is presented at a second level (e.g., 55dB) during the second phase of the first stage of the enrollment procedure, the user makes a selection to indicate whether the output audio signal is audible. Selection of the audibility selection element 1102 indicates that the second level is audible and may be referred to as a second stage audibility selection 1204. The system may determine, based on the second stage audibility selection 1204, that the second set of level and frequency-dependent audio filters (2F, 2N, and 2S) have average gain levels equal to the user' S personal average gain level. More specifically, the personal average gain level of the user may be determined as the average gain level of the second group. For example, the second set of filters may have average gain levels corresponding to the second set 702 of hearing loss curves. One or more of the second set of audio filters may be explored during the second stage of the registration procedure, as described below.
Selection of the inaudibility selection element 1104 during presentation of the speech signal at the second level indicates that the second level is not audible and may be referred to as a second stage inaudibility selection 1206. Based on the second stage inaudibility selection 1206, the system may determine that the third set of level and frequency dependent audio filters (3F, 3N, and 3S) have average gain levels equal to the user's personal average gain level. More specifically, the personal average gain level of the user may be determined as the average gain level of the third set. For example, the third set of filters may have average gain levels corresponding to the third set 802 of hearing loss curves. One or more of the third set of audio filters may be explored during the second stage of the registration procedure, as described below.
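Taken together, the three first-stage outcomes of fig. 12 determine the filter bank explored in the second stage. A sketch of that mapping (the dictionary keys and the "zero-gain" label are illustrative):

```python
# Which level and frequency dependent audio filters become candidates for
# the second stage, keyed by the first-stage selection outcome.
SECOND_STAGE_CANDIDATES = {
    "first_stage_audible":    ["zero-gain", "1F", "1N", "1S"],  # selection 1200
    "second_stage_audible":   ["2F", "2N", "2S"],               # selection 1204
    "second_stage_inaudible": ["3F", "3N", "3S"],               # selection 1206
}

def candidate_filters(outcome):
    """Return the audio filter labels whose average gain level matches the
    personal average gain level implied by the first-stage outcome."""
    return SECOND_STAGE_CANDIDATES[outcome]
```

Only the first-stage-audible branch includes the zero-gain filter, since a user who hears the low-level speech may need no amplification at all.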
In a second stage of the enrollment process, the user may explore the determined set of level and frequency dependent audio filters to select a personal gain profile. The personal gain profile may correspond to a user preferred gain profile (flat profile, notched profile, or sloped profile) that adjusts the tonal characteristics of the audio input signal according to user preferences.
Referring to fig. 13, a diagram of a user interface for controlling output of a second audio signal is shown, according to an aspect. During the registration process, the media system 100 may output a second audio signal using a set of level and frequency dependent audio filters. The second audio signal may represent music, such as a music file containing recorded music. Music provides good contrast between timbres (compared to speech) and thus may facilitate the selection of an appropriate gain profile during the second stage of the enrollment process. More specifically, playing music rather than speech during the second stage allows for an accurate determination of the user's timbre or tonal preferences.
During the second stage, the audio input signal 404 may be sequentially reproduced for the user at different tone enhancement settings. More specifically, the second audio signal is output using the level and frequency dependent audio filter bank determined in response to the first stage audibility selection 1200, the second stage audibility selection 1204, or the second stage inaudibility selection 1206. Each member of the bank may have a different gain profile. For example, each bank (other than the zero-gain audio filter) may include a flat audio filter corresponding to a flat loss profile of the common hearing loss curves, a notched audio filter corresponding to a notched loss profile of the common hearing loss curves, and a sloped audio filter corresponding to a sloped loss profile of the common hearing loss curves. Given the inverse relationship between a loss profile and its corresponding gain profile described above, it should be understood that the gain profile of the flat audio filter has its highest gain in the low frequency band, the gain profile of the notched audio filter has its highest gain in the mid frequency band, and the gain profile of the sloped audio filter has its highest gain in the high frequency band. An audio filter is applied to the second audio signal to play back the audio signal such that the emphasis of the different frequencies corresponding to the different hearing loss profiles becomes apparent.
The user may select the current tuning element 1304 to play the second audio signal with a first playback setting. For example, when the first stage audibility selection 1200 was made in fig. 12, the second audio signal can be played back without audio filtering (the zero-gain filter) as the current tuning. The user may select the altered tuning element 1306 to play the second audio signal with a second audio filter having a gain profile different from that of the first setting. For example, the altered tuning may play the second audio signal using the (1F) audio filter. When the user has identified a preferred setting (e.g., a tuning that allows the user to hear the music of the second audio signal better), the user may select the selection element 1106. Alternatively, the user may make the selection through a physical switch, such as by tapping a button on the audio signal device 102 or the audio output device 104.
Referring to fig. 14A, a diagram of selection of level and frequency dependent audio filters having different gain profiles according to an aspect is shown. During the second stage of the registration process, the user is presented with different enhancement settings and asked to select a preferred setting. The enhancement settings comprise the level and frequency dependent audio filter bank, applied to the second audio signal, that was determined based on the selections made during the first stage of the enrollment process. The audio filters in the bank may correspond to hearing loss curves having different loss profiles.
In the illustrated example, the second stage audibility selection 1204 was made in fig. 12. Thus, the system may select the second set of level and frequency dependent audio filters to explore. Selection of the current tuning element 1304 plays back the second audio signal using the flat gain profile (2F) audio filter corresponding to the flat loss profile 706 of fig. 7. In contrast, selection of the altered tuning element 1306 plays back the second audio signal using the notched gain profile (2N) audio filter corresponding to the notched loss profile 708 of fig. 7. The user may select the preferred setting and then select the selection element 1106 to proceed to the next operation in the second stage. For example, the user may (as shown) select the current tuning element 1304 to select the filter corresponding to the flat loss profile and proceed to the next operation.
The second stage of the registration process may need to present all gain profile settings in the vertical direction across the grid of fig. 14A. More specifically, even when the user selects the current tuning (e.g., (2F) audio filter) during the second stage, the registration process may provide additional comparisons between the current tuning and subsequent tuning. Subsequent tuning that may be applied to the second audio signal is shown in the columns of the grid of fig. 14A. More specifically, the additional modified tuning may correspond to a tilt loss profile for each of the possible average gain level settings.
Referring to fig. 14B, a diagram of selection of level and frequency dependent audio filters with different gain profiles according to an aspect is shown. At the next operation in the second stage of the enrollment process, the second audio signal may be modified by the (2F) level and frequency dependent audio filter corresponding to the previously selected gain profile setting and by the next gain profile setting (2S). In an aspect, all of the tunings applied to the second audio signal during the second stage of enrollment have the same average gain level. More specifically, the flat gain profile (2F), notch gain profile (2N), and tilt gain profile (2S) filters applied to the second audio signal for tonal comparison may all have the personal average gain level determined during the first stage of enrollment. The personal average gain level may correspond to the average loss level 704 of, for example, a mild to moderate hearing loss group curve. When the user has heard the second audio signal altered by all filters, the user may select a preferred tuning (e.g., altered tuning 1306). Media system 100 may receive the user selection as a selection of personal gain profile 1402. For example, the personal gain profile 1402 may be the tilt gain profile (2S).
In contrast to the first stage of the enrollment process, volume adjustment of the media system 100 may be enabled during output of the second audio signal. Allowing volume adjustments may help to distinguish between tonal characteristics of different audio signal adjustments. More specifically, allowing the user to adjust the volume of the media system 100 using the volume control 1302 (fig. 13) may allow the user to hear the difference between each tone setting. Thus, the second stage of the enrollment process allows the user to explore the gain profiles using musical stimuli that stimulate all frequencies within the audible frequency range, and encourages volume changes to allow the user to differentiate the tonal characteristics of the altered musical stimuli.
The sequence of presentation of the filtered audio signals allows the user to step through the enrollment process to first determine the personal average gain level and then the personal gain profile. More specifically, the user may first select the personal average gain level by selecting the audible setting of the first audio signal, and then select the personal gain profile 1402 by stepping through the grid in a vertical direction along the shape axis. Each square of the grid represents a level and frequency dependent audio filter with a corresponding average gain level and gain profile, and thus the illustrated example (a 3 x 3 grid) assumes that the personal level and frequency dependent audio filter 402 resulting from the enrollment process will be one of 9 level and frequency dependent audio filters corresponding to 9 common hearing loss curves. This level of granularity (e.g., three level groups and three profile groups) has been shown to reliably guide the user to a consistently preferred preset, regardless of whether the selected preset exactly matches the user's hearing loss profile. However, it should be understood that the number of presets used in the enrollment process may vary. For example, the first stage of the enrollment process may allow the user to step through four or more predetermined gain levels to drive the selection of an audio filter bank having the personal average gain level. Similarly, more or fewer gain profiles may be represented across the shape axis of the grid to allow the user to evaluate different tonal enhancements.
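The grid described above can be sketched as a small lookup structure. The labels (e.g., "2S"), level groups, and shape codes follow the text's naming convention, but this Python representation itself is only an illustrative assumption, not the patented implementation:

```python
# Illustrative model of the 3 x 3 audio filter grid. Levels increase
# left to right (columns); shapes run down the shape axis (rows).
LEVEL_GROUPS = [1, 2, 3]          # columns: increasing average gain level
SHAPES = ["F", "N", "S"]          # rows: flat, notch, slope (tilt)

FILTER_GRID = {f"{level}{shape}": (level, shape)
               for level in LEVEL_GROUPS for shape in SHAPES}

def personal_filter(level_group: int, shape: str) -> str:
    """Combine the stage-one level choice with the stage-two shape choice."""
    label = f"{level_group}{shape}"
    if label not in FILTER_GRID:
        raise ValueError(f"unknown preset: {label}")
    return label
```

Under this sketch, the enrollment result is one of the nine grid labels, e.g. `personal_filter(2, "S")` for a user who selected the second gain level and the tilt profile.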
Referring to fig. 15, a flow diagram of a method of selecting a personal level and frequency dependent audio filter having a personal average gain level and a personal gain profile according to an aspect is shown. The flow chart shows the stage of the registration process for selecting a level and frequency dependent audio filter from a grid of audio filters having columns and rows.
As described above, the enrollment process allows the user to first explore the levels to determine the correct column within the audio filter grid before further exploring the contours. At operation 1502, in a first stage of the enrollment process, the user listens to an audio signal at a predetermined level (e.g., a 40dB level). The predetermined level is a presentation level resulting from a predetermined gain level applied to the speech audio signal. At operation 1504, the media system 100 determines whether the user can hear the current presentation level. For example, if the user can hear the 40dB level produced by the predetermined gain level audio filter, the user selects audibility selection element 1102 to identify the current level as corresponding to the personal average gain level. In this case, the system determines the personal average gain level to be the average gain level of the zero-gain filter or the (1F, 1N, 1S) audio filter bank. However, if the user selects audibility selection element 1104, then at operation 1506, the first decision sequence iterates to the next predetermined level (e.g., a 55dB level). The next predetermined level is the presentation level resulting from the next predetermined gain level applied to the speech audio signal. The audio signal may be presented at operation 1502 at the next predetermined level. At operation 1504, the media system 100 determines whether the user can hear the current level. If the user can hear the current level, the user selects audibility selection element 1102 to identify the current level as corresponding to the personal average gain level. In this case, the system determines that the personal average gain level is the average gain level of the (2F, 2N, 2S) audio filter bank. However, if the user selects audibility selection element 1104, the system determines that the personal average gain level is the average gain level of the (3F, 3N, 3S) audio filter bank.
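The audibility walk of operations 1502-1506 can be sketched as follows. The two predetermined presentation levels (40dB and 55dB) and the bank labels come from the text; the `can_hear` callback is a hypothetical stand-in modeling the user's audibility selections 1102/1104:

```python
# Sketch of the stage-one audibility walk (operations 1502-1506).
# Banks are ordered from least to most gain, mirroring the text.
PRESENTATION_LEVELS_DB = [40, 55]
FILTER_BANKS = [
    ("1F", "1N", "1S"),   # also reachable as the zero-gain case
    ("2F", "2N", "2S"),
    ("3F", "3N", "3S"),
]

def select_filter_bank(can_hear):
    """can_hear(level_db) -> bool models the user's audibility selection."""
    for level_db, bank in zip(PRESENTATION_LEVELS_DB, FILTER_BANKS):
        if can_hear(level_db):
            # Current level is audible: its bank holds the personal
            # average gain level for stage-two exploration.
            return bank
    # Neither predetermined level was audible: highest-gain bank.
    return FILTER_BANKS[-1]
```

A user who hears the 40dB level lands in the first bank; one who hears only the 55dB level lands in the second; one who hears neither lands in the third.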
Which level the user selects as audible during the iteration can be used to drive the determination of the personal average gain level. When the user selects an audible level, the system may determine an audio filter bank for further exploration having an average gain level corresponding to the selected predetermined gain level. More specifically, the personal average gain level may be determined based on audibility choices, and the enrollment process may continue to the second stage.
As described above, the registration process allows the user to explore the gain profiles within the selected audio filter bank to determine the correct rows within the audio filter grid, and thus arrive at squares within the grid representing the personal level and frequency dependent audio filters 402. At operation 1508, in a second stage of the enrollment process, the user compares several shape audio signals.
In a special case, the user makes the first stage audibility selection 1200 and the system determines that the zero-gain audio filter or the (1F, 1N, 1S) audio filter bank corresponds to the user's personal average gain level. In this case, the music file is played at decision sequence 1508. At decision sequence 1508, a comparison may be made between a zero-gain audio filter (or no filter) applied to the music audio signal and a low-gain flat audio filter (1F) applied to the music audio signal. If the zero-gain audio filter is again selected, e.g., via the current tuning element 1304, the process may iterate to compare the zero-gain audio filter to the low-gain notch audio filter (1N). If the zero-gain audio filter is again selected, e.g., via the current tuning element 1304, the enrollment process may end and no audio filter is applied to the audio input signal 404. More specifically, when the flow chart advances through a sequence in which the user repeatedly selects the zero-gain audio filter over the audio filters associated with the common hearing loss curves, the media system 100 determines that the user has normal hearing and does not adjust the default audio settings of the system. This outcome may also be construed as selecting a personal level and frequency dependent audio filter having a personal average gain level of zero and an unadjusted personal gain profile.
However, in the case where the user selects a non-zero personal average gain level, e.g., the second stage audibility selection 1204 or the second stage audibility selection 1206 is made during the first stage, or the (1F) or (1N) audio filter is selected at an initial operation 1508 of the second stage, the shape audio signal comparison at operation 1508 is performed between non-zero gain audio filters applied to the music audio signal. For example, if the second stage audibility selection 1204 drives the selection of the (2F, 2N, 2S) audio filter bank for further exploration, then at operation 1508, the (2F) audio filter may be applied to the music audio signal as the current tuning and the notch audio filter (2N) may be applied to the music audio signal as the altered tuning. The filtered audio signals may be presented to the user as corresponding shape audio signals. At operation 1510, the media system 100 determines whether the user has selected the personal gain profile 1402. The personal gain profile 1402 is selected after the user has listened to all shape audio signals and selected the preferred shape audio signal. For example, if the user selects the (2F) audio filter instead of the (2N) audio filter at operation 1508, the (2F) audio filter is a candidate for the personal gain profile 1402. At operation 1512, the second stage iterates to the next shape audio signal comparison. For example, the (2F) audio filter selected during the previous iteration may be applied to the music audio signal, and the tilt audio filter (2S) may be applied to the music audio signal. At operation 1508, the filtered audio signals may be presented to the user as corresponding shape audio signals, and the user may select a preferred shape audio signal. At operation 1510, the media system 100 determines whether the user has selected the personal gain profile 1402.
For example, if the user selects the (2S) audio filter, the media system 100 identifies the selection as the personal gain profile 1402, provided that all shape audio signals have been presented to the user for selection.
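The comparison loop of operations 1508-1512 amounts to holding a current tuning and challenging it with each remaining shape in the bank. In this minimal sketch, the bank labels follow the text and the `prefers_altered` callback is a hypothetical stand-in for the user's tuning choice:

```python
def select_gain_profile(bank, prefers_altered):
    """Walk the shape comparisons of operations 1508-1512.

    bank: shape-ordered labels for one grid column, e.g. ["2F", "2N", "2S"].
    prefers_altered(current, altered) -> bool models the user's choice
    between the current tuning and the altered tuning.
    """
    current = bank[0]
    for altered in bank[1:]:
        if prefers_altered(current, altered):
            current = altered   # the altered tuning becomes the new current
    # All shape audio signals have been presented; the surviving tuning
    # is the candidate personal gain profile.
    return current
```

A user who prefers the tilt shape in every comparison ends with "2S"; a user who always keeps the current tuning ends with "2F".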
After exploring the level and contour settings, the media system selects the personal level and frequency dependent audio filter 402 at operation 1002. More specifically, the user identifies a particular square in the grid, based in part on the personal level and frequency dependent audio filter 402 having the personal average gain level determined from the first stage, and based in part on the personal level and frequency dependent audio filter 402 having the personal gain profile 1402 determined from the second stage. The process may use the selected filter having the personal average gain level and the personal gain profile 1402 in a verification operation. In the verification operation, audio signals (e.g., music audio signals) may be output and played back by the media system 100 using the personal level and frequency dependent audio filter 402 identified during the enrollment process. The verification operation allows the user to toggle between playback with the selected preset and normal playback (no adjustment) so that the user can confirm that the adjustment is actually an improvement. When the user agrees that the personal level and frequency dependent audio filter improves the listening experience, the user may select an element (e.g., "done") to complete the enrollment process.
At the end of the enrollment process, the personal level and frequency dependent audio filter 402 is identified as the audio filter having the user's preferred personal average gain and/or personal gain profile 1402. Thus, at operation 1002, the media system 100 may select the personal level and frequency dependent audio filter 402 based in part on the personal level and frequency dependent audio filter 402 having the personal average gain level and based in part on the personal level and frequency dependent audio filter 402 having the personal gain profile 1402, as determined by the enrollment process.
In alternative embodiments, the enrollment process may differ from the process described above with respect to figs. 11-15. Alternative embodiments are described below with respect to figs. 16-20. Similar to the embodiments of figs. 11-15, the embodiments of figs. 16-20 allow a user to select one or more of the level and frequency dependent audio filters, and through the user selections, the media system 100 may determine and/or select an appropriate personal level and frequency dependent audio filter to apply to the user's audio input signal. Referring to fig. 16, a diagram of a user interface for controlling output of a first audio signal is shown, according to an aspect. During the enrollment process, the media system 100 may output the first audio signal using a first set of level and frequency dependent audio filters. For example, the first audio signal may represent speech (e.g., a voice file) containing recorded greetings spoken in languages from around the world. Speech provides good contrast between gain levels (compared to music) and therefore can facilitate selection of an appropriate average gain level during the first stage of the enrollment process. During the first stage, the audio input signal 404 may be sequentially reproduced for the user at different enhancement settings. More specifically, level and frequency dependent audio filters having different average gain levels may be applied to the first audio signal to play back the audio signal at different average gain levels corresponding to different average hearing loss levels (e.g., levels 604, 704, or 804).
The user may select a current tuning element 1602 of a graphical user interface displayed on the audio signal device 102 of the media system 100 to play the first audio signal at a first amplification level. After listening to the first setting, the user may select the modified tuning element 1604 of the graphical user interface to play the first audio signal at a second amplification level that is higher than the first amplification level. When the user has identified a preferred setting (e.g., a tuning of speech that allows the user to better listen to the first audio signal), the user may select a selection element 1606 of the graphical user interface. Alternatively, the user may make the selection through a physical switch, such as by tapping a button on the audio signal device 102 or the audio output device 104. If the user selects selection element 1606 when the current tuning element 1602 is enabled, the selection may establish the personal average gain level 1702. More specifically, when the user decides to continue the enrollment process using the current tuning, the personal average gain level 1702 may be the average gain level applied to the first audio signal. Alternatively, the user may choose to continue enrollment with the modified tuning element 1604 enabled. In this case, the selection advances the enrollment process to the next operation in the first stage. In the next operation, the first audio signal may be reproduced using another pair of level and frequency dependent audio filters.
Referring to fig. 17A, a diagram of selection of level and frequency dependent audio filters with different average gain levels according to an aspect is shown. During the first stage of the enrollment process, the listener is presented with different enhancement settings and is asked to select a preferred setting. The enhancement settings include a first set of level and frequency dependent audio filters applied to the first audio signal, and these filters may correspond to hearing loss curves having different average gain levels. For example, the current tuning may initially be a zero average gain level (no gain applied to the input signal, or "off"). The altered tuning may be the level and frequency dependent audio filter (1F) corresponding to one of the loss profiles (first level, flat profile) in the first set 602 of fig. 6. Subsequent tunings that may be applied to the first audio signal are shown in the top row of the grid of fig. 17A. More specifically, the additional modified tunings (2F) and (3F) correspond to the loss profile of the second set 702 of fig. 7 (second level, flat profile) and the loss profile of the third set 802 of fig. 8 (third level, flat profile). In the first stage shown in fig. 17A, the user may listen to the first audio signal with the current tuning and the modified tuning applied, and select the modified tuning, thereby indicating that the user prefers more gain applied to the first audio signal. Referring to fig. 17B, a diagram of selection of level and frequency dependent audio filters with different average gain levels according to an aspect is shown. At the next operation in the first stage of enrollment, the first audio signal may be modified as the current tuning by the (1F) level and frequency dependent audio filter. The first audio signal may also be modified as the modified tuning by the (2F) level and frequency dependent audio filter.
In an aspect, all of the tunings applied to the first audio signal during the first stage of registration have the same gain profile. For example, the tuning may be a filter corresponding to the flat loss profile shown in fig. 6-8, and thus may all have a flat gain profile (inversely related to the flat loss profile). Thus, the current tuning in fig. 17B may have an average gain level corresponding to the average loss level 604 of fig. 6, and the altered tuning may have an average gain level corresponding to the average loss level 704 of fig. 7. When the user has heard the first audio signal as modified by the two filters, the user may select the current tuning as the preferred tuning. The media system 100 may receive the user selection as a selection of the personal average gain level 1702 (e.g., 20 dB).
It should be appreciated that if the user prefers the modified tuning in fig. 17B, selecting the modified tuning will advance the enrollment process to the next operation in the first stage. In the next operation, the first audio signal may be reproduced using the level and frequency dependent audio filters (2F) and (3F) corresponding to the loss profiles in figs. 7 and 8. A description of such operations is omitted here for brevity.
In an aspect, the first audio signal is output to the user in order of increasing average gain level using the first set of level and frequency dependent audio filters. For example, in fig. 17A, the first audio signal is presented with a current tuning of zero gain and a modified tuning (1F) corresponding to the average hearing loss 604 of fig. 6 (e.g., an average gain level of 20dB). In fig. 17B, the first audio signal is presented with the (1F) and (2F) tunings corresponding to the average hearing losses of figs. 6 and 7 (e.g., average gain levels of 20dB and 35dB). Thus, the audio signal modifications may be presented in order of increasing gain. It should be appreciated that presenting the audio signal level comparisons in increasing order, as described above, may speed up the enrollment process. More specifically, since it is uncommon for a user to prefer the third gain level over the first without also preferring the second over the first, there is no need to present the third gain level once the user has chosen the first gain level over the second. Eliminating the additional comparison (comparing the third gain level to the first gain level) may shorten the enrollment process.
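The shortcut described above, presenting levels in increasing order and stopping once the user keeps the current tuning, can be sketched like this. The dB values and the `prefers_more_gain` callback are illustrative assumptions standing in for the user's choices:

```python
def ascending_level_walk(gain_levels_db, prefers_more_gain):
    """Compare adjacent gain levels in increasing order (figs. 17A-17B),
    stopping as soon as the user keeps the current tuning."""
    current = gain_levels_db[0]
    for candidate in gain_levels_db[1:]:
        if not prefers_more_gain(current, candidate):
            break               # later, higher levels need not be presented
        current = candidate
    return current
```

Stopping at the first rejected increase is what eliminates the redundant third-versus-first comparison and shortens enrollment.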
In an aspect, the first audio signal may be embedded with some noise to provide a sense of realism for the listening experience. By way of example, the first audio signal may include a speech signal representing speech and a noise signal representing noise. The speech signal and the noise signal may be embedded at a particular ratio such that an increase in the level of the first audio signal causes an increase in the level of both speech and noise audio content in the speech file. For example, the ratio of the speech signal to the noise signal may be in the range of 10dB to 30dB (e.g., 15 dB). The ratio may be high enough so that noise does not overpower the speech. However, with each increase in the average gain level, progressive amplification of the noise may prevent the user from selecting a level and frequency dependent audio filter that unnecessarily increases the volume of the audio signal. More specifically, the embedded noise provides a sense of realism to assist the user in selecting an amplification level that compensates, but does not overcompensate, the user's hearing loss.
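Embedding noise at a fixed speech-to-noise ratio can be sketched as a power-based scaling. The 15dB default comes from the text; the function name and list-based signal representation are illustrative assumptions:

```python
import math

def mix_at_ratio(speech, noise, ratio_db=15.0):
    """Scale the noise so the speech-to-noise power ratio equals ratio_db,
    then mix. Raising the playback gain later amplifies both components,
    preserving the embedded ratio."""
    p_speech = sum(x * x for x in speech) / len(speech)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Noise power needed so that 10*log10(p_speech / p_noise') == ratio_db.
    target_noise_power = p_speech / (10.0 ** (ratio_db / 10.0))
    scale = math.sqrt(target_noise_power / p_noise)
    return [s + scale * n for s, n in zip(speech, noise)]
```

Because the ratio is baked into the mixed signal, every increase in the average gain level amplifies the noise along with the speech, which is what discourages over-amplification during level selection.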
The first audio signal may be set at a calibration level and therefore volume adjustments during the first stage of the enrollment process may not be allowed. More specifically, the one or more processors of the media system 100 may disable volume adjustment of the media system 100 during output of the first audio signal. By locking the volume control of the media system 100 during the first stage of the enrollment process, the gain level that compensates for hearing loss may be set to a level corresponding to the common hearing loss curve being tested and the average gain level of the frequency dependent audio filter. Thus, fixed levels of speech stimulation may be used to explore the levels.
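Locking the volume at the calibration level during stage one and re-enabling adjustment for stage two can be modeled with a trivial guard. This class and its names are illustrative assumptions, not elements from the source:

```python
class CalibratedVolume:
    """Ignore volume-change requests while locked (stage one of
    enrollment); accept them once unlocked (stage two)."""

    def __init__(self, level_db: float):
        self.level_db = level_db
        self.locked = True        # stage one: calibration level is fixed

    def request_change(self, new_level_db: float) -> float:
        if not self.locked:
            self.level_db = new_level_db
        return self.level_db      # the level actually in effect
```

With the lock engaged, the presentation level stays tied to the average gain level of the hearing loss curve under test, as the paragraph above requires.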
In addition to allowing the personal average gain level 1702 to be selected during the first stage, the enrollment process may also include a second stage to select a personal gain profile. The personal gain profile may correspond to a user preferred gain profile (flat profile, notched profile, or sloped profile) that adjusts the tonal characteristics of the audio input signal according to user preferences.
Referring to fig. 18, a diagram of a user interface for controlling output of a second audio signal is shown, according to an aspect. During the registration process, the media system 100 may output a second audio signal using a second set of level and frequency dependent audio filters. The second audio signal may represent music, such as a music file containing recorded music. Music provides good contrast between timbres (compared to speech) and thus may facilitate the selection of an appropriate gain profile during the second stage of the enrollment process. More specifically, playing music rather than speech during the second stage allows for an accurate determination of the user's timbre or tonal preferences.
During the second stage, the audio input signal 404 may be sequentially reproduced for the user at different tone enhancement settings. More specifically, the second set of level and frequency dependent audio filters used to output the second audio signal may have different gain profiles. The second group may include a flat audio filter corresponding to a flat loss profile of the common hearing loss profiles, a notch audio filter corresponding to a notch loss profile of the common hearing loss profiles, and a tilt audio filter corresponding to a tilt loss profile of the common hearing loss profiles. Given the inverse relationship between a loss profile and its corresponding gain profile, the gain profile of the flat audio filter has the highest gain in the low frequency band, the gain profile of the notch audio filter has the highest gain in the mid frequency band, and the gain profile of the tilt audio filter has the highest gain in the high frequency band. An audio filter is applied to the second audio signal to play back the audio signal such that the different frequency emphases corresponding to the different hearing loss profiles are apparent.
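The orderings stated above (flat profile peaking in the low band, notch in the mid band, tilt in the high band) can be captured in a toy table. The dB values are invented for illustration; only the location of each peak follows the text:

```python
# Toy per-band gains; only the position of the peak follows the text.
PROFILE_GAINS_DB = {
    "flat":  {"low": 12.0, "mid": 10.0, "high": 10.0},
    "notch": {"low": 6.0,  "mid": 14.0, "high": 6.0},
    "tilt":  {"low": 2.0,  "mid": 8.0,  "high": 16.0},
}

def peak_band(profile: str) -> str:
    """Return the frequency band in which the profile applies the most gain."""
    gains = PROFILE_GAINS_DB[profile]
    return max(gains, key=gains.get)
```

Playing the music signal through each profile in turn makes these band emphases audible, which is what drives the user's shape preference.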
The user may select the current tuning element 1602 to play the second audio signal with the first audio filter having the corresponding gain profile. After listening to the first setting, the user may select the modified tuning element 1604 to play the second audio signal with a second audio filter having a corresponding gain profile that is different from the gain profile of the first audio filter. When the user has identified a preferred setting (e.g., a tuning of music that allows the user to better listen to the second audio signal), the user may select selection element 1606. Alternatively, the user may select through a physical switch, such as by tapping a button on the audio signal device 102 or the audio output device 104.
Referring to fig. 19A, a diagram of selection of level and frequency dependent audio filters with different gain profiles according to an aspect is shown. During the second stage of the enrollment process, the listener is presented with different enhancement settings and is asked to select a preferred setting. The enhancement settings include a second set of level and frequency dependent audio filters applied to the second audio signal, and these filters may correspond to hearing loss curves having different loss profiles. For example, the current tuning may initially be a flat gain profile (1F) corresponding to the flat loss profile 606 of fig. 6. The modified tuning may be a (1N) level and frequency dependent audio filter corresponding to the notch loss profile 608 of fig. 6. The user may prefer a filter corresponding to a flat loss profile and may select selection element 1606 to proceed to the next operation in the second stage.
The first stage of the enrollment process need not present all average gain level settings (as represented in the horizontal direction across the grid of fig. 17A), while the second stage of the enrollment process may need to present all gain profile settings in the vertical direction across the grid of fig. 19A. More specifically, the registration process may provide additional comparisons between the current tuning and subsequent tuning even when the user selects the current tuning during the second stage. Subsequent tuning applicable to the second audio signal is shown in the columns of the grid of fig. 19A. More specifically, the additional modified tuning may correspond to a tilt loss profile for each of the possible average gain level settings.
Referring to fig. 19B, a diagram of selection of level and frequency dependent audio filters with different gain profiles according to an aspect is shown. At the next operation in the second stage of the enrollment process, the second audio signal may be modified by the (1F) level and frequency dependent audio filter corresponding to the previously selected gain profile setting and by the next gain profile setting (1S). In an aspect, all of the tunings applied to the second audio signal during the second stage of enrollment have the same average gain level. More specifically, the flat gain profile (1F), notch gain profile (1N), and tilt gain profile (1S) filters applied to the second audio signal for tonal comparison may all have the personal average gain level 1702 selected during the first stage of enrollment. When the user has heard the second audio signal altered by all filters, the user may select a preferred tuning (e.g., the altered tuning). The media system 100 may receive the user selection as a selection of the personal gain profile 1902. For example, the personal gain profile 1902 may be the tilt gain profile (1S).
In contrast to the first stage of the enrollment process, volume adjustment of the media system 100 may be enabled during output of the second audio signal. Allowing volume adjustments may help to distinguish between tonal characteristics of different audio signal adjustments. More specifically, allowing the user to adjust the volume of the media system 100 using volume control 2302 (fig. 18) may allow the user to hear the difference between each tone setting. Thus, the second stage of the enrollment process allows the user to explore the gain profiles using musical stimuli that stimulate all frequencies within the audible frequency range, and encourages volume changes to allow the user to differentiate the tonal characteristics of the altered musical stimuli.
The sequence of presentations of the filtered audio signal allows the user to step through the grid in a horizontal direction during the first stage and step through the grid in a vertical direction during the second stage. More specifically, the user may first select the personal average gain level 1702 by stepping through the grid in a horizontal direction along the horizontal axis, and then select the personal gain profile 1902 by stepping through the grid in a vertical direction along the shape axis. Each square of the grid represents a level and frequency dependent audio filter with a corresponding average gain level and gain profile, and thus the illustrated example (a 3 x 3 grid) assumes that the personal level and frequency dependent audio filter 402 resulting from the enrollment process will be one of 9 level and frequency dependent audio filters corresponding to 9 common hearing loss curves. The granularity levels (e.g., three level groups and three profile groups) have been shown to consistently guide the user in selecting consistently preferred presets for the user, regardless of whether the selected presets exactly match their hearing loss profile. However, it should be understood that the preset number used in the registration process may vary. For example, a first stage of the registration process may allow a user to step through four or more average gain levels across a grid with more columns. Similarly, more or fewer gain profiles may be represented across the shape axis of the grid to allow the user to evaluate different pitch enhancements.
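The horizontal-then-vertical traversal can be sketched end to end. Note the asymmetry described above: stage one may stop early along the level axis, while stage two presents every shape. The dB values and the two preference callbacks are hypothetical stand-ins for the user:

```python
def grid_walk(levels_db, shapes, prefers_next_level, prefers_next_shape):
    """Stage one: step right along the level axis, stopping once the user
    keeps the current tuning. Stage two: present every remaining shape
    within the chosen column before settling on the profile."""
    col = 0
    for nxt in range(1, len(levels_db)):
        if not prefers_next_level(levels_db[col], levels_db[nxt]):
            break                         # early stop on the level axis
        col = nxt
    row = 0
    for nxt in range(1, len(shapes)):     # all shapes are presented
        if prefers_next_shape(shapes[row], shapes[nxt]):
            row = nxt
    return levels_db[col], shapes[row]    # the selected grid square
```

The returned pair corresponds to one square of the grid, i.e. the personal average gain level and personal gain profile of the resulting filter.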
Referring to fig. 20, a flow diagram of a method of selecting a personal level and frequency dependent audio filter having a personal average gain level and a personal gain profile according to an aspect is shown. The flow chart shows the stage of the registration process for selecting a level and frequency dependent audio filter from a grid of audio filters having columns and rows.
As described above, the enrollment process allows the user to first explore the levels to determine the correct column within the audio filter grid. At operation 2002, in a first stage of the enrollment process, the user compares several level audio signals, e.g., a current gain level and a next gain level. For example, a zero-gain audio filter (no gain or "off") may be applied to the speech audio signal as the current gain level, and a low-gain flat audio filter (1F) may be applied to the speech audio signal as the next gain level. The filtered audio signals may be presented to the user as corresponding level audio signals. At operation 2004, the media system 100 determines whether the user is satisfied with the current level. For example, if the user is satisfied with the zero-gain audio filter, the user selects the zero-gain audio filter as the personal average gain level 1702. However, if the user selects the next audio level (e.g., the (1F) level and frequency dependent audio filter), then at operation 2006, the first decision sequence iterates to the next level audio signal comparison. For example, the (1F) filter may be applied to the speech audio signal as the current gain level, and a medium-gain flat audio filter (2F) may be applied to the speech audio signal as the next gain level. The filtered audio signals may be presented to the user as respective level audio signals at operation 2002, and the user may select a preferred level audio signal. At operation 2004, the media system 100 determines whether the user is satisfied with the current level. If the user is satisfied with the current level, the user selects the current level, which the system determines to be the personal average gain level 1702. If the user is more satisfied with the next level, the user selects the next gain level and the system iterates to allow the next set of level audio signals to be compared.
For example, the sequence advances to allow the user to also compare the medium-gain flat audio filter (2F) and the high-gain flat audio filter (3F). Whichever current level the user selects during the iteration may be determined to be the personal average gain level 1702. More specifically, when the user selects the zero-gain audio filter, the (1F) filter, the (2F) filter, or the (3F) filter at a point in the process when the selected filter is the current audio filter (as compared to the next audio filter), it may be determined that the selected audio filter has the personal average gain level 1702, and the enrollment process may continue to the second stage.
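The first-stage decision sequence above amounts to a simple iterative comparison loop. The following is a minimal sketch, assuming a user-preference callback and the filter labels (off, 1F, 2F, 3F) used in the text; the function and callback names are illustrative, not from the patent:

```python
# Sketch of the first enrollment stage: the user compares the current gain
# level against the next higher flat-gain preset (operations 2002-2006)
# until they prefer the current one, which becomes the candidate for the
# personal average gain level. "prefers_next" stands in for the user's
# listening choice and is an assumption of this sketch.

FLAT_LEVELS = ["OFF", "1F", "2F", "3F"]  # zero, low, medium, high gain

def select_personal_gain_level(prefers_next):
    """Walk the flat-gain column until the user keeps the current level."""
    index = 0
    while index + 1 < len(FLAT_LEVELS):
        current, nxt = FLAT_LEVELS[index], FLAT_LEVELS[index + 1]
        if not prefers_next(current, nxt):
            break  # user is satisfied with the current level
        index += 1
    return FLAT_LEVELS[index]

# Example: a user who prefers more gain once (OFF -> 1F), then stops.
choices = iter([True, False])
assert select_personal_gain_level(lambda cur, nxt: next(choices)) == "1F"
```

The loop stops at the first level the user keeps, mirroring how the flow chart determines the personal average gain level before moving to the second stage.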
As described above, the enrollment process allows the user to explore the gain profiles within the selected gain level to determine the correct row within the audio filter grid, and thus to arrive at a square within the grid representing the personal level and frequency dependent audio filter 402. At operation 2008, in a second stage of the enrollment process, the user compares several shape audio signals.
In a special case, the user selects the zero-gain audio filter as the personal gain level during the first stage. In this case, the speech audio signal is played at decision sequence 2008. Similar to decision sequence 2002, at decision sequence 2008, a comparison may be made between the zero-gain audio filter applied to the speech audio signal and a low-gain notch audio filter (1N) applied to the speech audio signal. If the zero-gain audio filter is selected again, the process can iterate to compare the zero-gain audio filter with a low-gain tilted audio filter (1S). If the zero-gain audio filter is again selected, the enrollment process may end and no audio filter is applied to the audio input signal 404. More specifically, when the flow chart advances through a sequence in which the user repeatedly selects the zero-gain audio filter over audio filters corresponding to the several hearing loss curves, the media system 100 determines that the user has normal hearing and does not adjust the default audio settings of the system.
In the event that the user selects a non-zero personal gain level during the first stage, the shape audio signal comparison at operation 2008 is between non-zero-gain audio filters applied to the music audio signal. For example, if the (1F) audio filter is selected as the personal gain level at operation 2004, the (1F) audio filter may be applied to the music audio signal and a low-gain notch audio filter (1N) may be applied to the music audio signal at operation 2008. The filtered audio signals may be presented to the user as the corresponding shape audio signals. At operation 2010, the media system 100 determines whether the user has selected a personal gain profile 1902. The personal gain profile 1902 is selected after the user has listened to all shape audio signals and selected a preferred shape audio signal. For example, if the user selects the (1F) audio filter instead of the (1N) audio filter at operation 2008, the (1F) audio filter is a candidate for the personal gain profile 1902. At operation 2012, the second stage iterates to the next shape audio signal comparison. For example, the (1F) audio filter selected during the previous iteration may be applied to the music audio signal, and a low-gain tilted audio filter (1S) may be applied to the music audio signal. The filtered audio signals may be presented to the user as the respective shape audio signals at operation 2008, and the user may select a preferred shape audio signal. At operation 2010, the media system 100 determines whether the user has selected the personal gain profile 1902. For example, if the user selects the (1S) audio filter, the media system 100 identifies the selection as the personal gain profile 1902, provided that all shape audio signals have been presented to the user for selection.
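The second stage can be sketched the same way: within the selected gain level, each gain-profile shape is auditioned in turn, and the filter still preferred after all comparisons becomes the personal gain profile. The shape labels (F for flat, N for notch, S for tilted) follow the text; the pairwise-preference callback is an assumption of this sketch, not part of the patent:

```python
# Sketch of the second enrollment stage (operations 2008-2012): within the
# chosen level, the user's preferred filter from the previous comparison is
# carried forward and compared against the next shape until all shapes have
# been presented.

SHAPES = ["F", "N", "S"]  # flat, notch, tilted gain profiles

def select_personal_gain_profile(level, prefer):
    """Return e.g. '1S' for gain level 1 with a tilted profile."""
    best = SHAPES[0]
    for challenger in SHAPES[1:]:
        # prefer(a, b) is True when the user likes filter a over filter b.
        if prefer(level + challenger, level + best):
            best = challenger
    return level + best

# Example: a user who consistently prefers the tilted shape.
likes_tilt = lambda a, b: a.endswith("S")
assert select_personal_gain_profile("1", likes_tilt) == "1S"
```

Together with the first-stage loop, this identifies one square (level, shape) in the audio filter grid.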
After exploring the level and profile settings, the media system selects the personal level and frequency dependent audio filter 402 at operation 1002. More specifically, the user identifies a particular square in the grid, for example, based in part on the personal level and frequency dependent audio filter 402 having the personal average gain level 1702, and based in part on the personal level and frequency dependent audio filter 402 having the personal gain profile 1902. The process may use the selected filter having the personal average gain level 1702 and the personal gain profile 1902 in a verification operation. In the verification operation, audio signals (e.g., music audio signals) may be output and played back by the media system 100 using the personal level and frequency dependent audio filter 402 identified during the enrollment process. The verification operation allows the user to switch between playback with the selected preset and normal playback (no adjustment) so that the user can confirm that the adjustment is actually an improvement. When the user agrees that the personal level and frequency dependent audio filter improves the listening experience, the user may select an element (e.g., "done") to complete the enrollment process.
At the end of the enrollment process, the personal level and frequency dependent audio filter 402 is identified as the audio filter having the user's preferred personal average gain level 1702 and personal gain profile 1902. Thus, at operation 1002, the media system 100 may select the personal level and frequency dependent audio filter 402 based in part on the personal level and frequency dependent audio filter 402 having the personal average gain level 1702 and based in part on the personal level and frequency dependent audio filter 402 having the personal gain profile 1902, as determined by the enrollment process.
The above-described enrollment process drives the media system 100 to select the personal level and frequency dependent audio filter 402 based on the assumption that the user's actual hearing loss will resemble the common hearing loss profile presets stored by the system. The enrollment process is completed without knowledge of the user's personal audiogram 500. However, when the personal audiogram 500 is available, using it may produce results as good as or better than those of the selection process described above.
Referring to fig. 21A-21B, a flow diagram and a diagram, respectively, of a method of determining several hearing loss curves based on a personal audiogram according to an aspect are shown.
In contrast to the general presets stored for the enrollment process described above, the personal audiogram 500 may be used to determine user-specific presets. For example, if the personal audiogram 500 is known, the media system 100 may select hearing loss curve presets and corresponding level and frequency dependent audio filters that encompass the known audiogram. The determination of the user-specific presets may constrain the range of level and frequency dependent audio filters that may be selected during the enrollment process, which may give the user greater granularity in the selection of a personal preset.
In one aspect, using the personal audiogram 500 to drive the presets that may be selected during the enrollment process may be particularly helpful for users with unusual hearing loss profiles. The media system 100 may receive the personal audiogram 500 at operation 2102. At operation 2104, the media system 100 may determine a number of hearing loss curves 2110 based on the personal audiogram 500. Similarly, at operation 2106, the media system 100 may determine level and frequency dependent audio filter presets corresponding to the user-specific hearing loss curves. The determined hearing loss curves and/or level and frequency dependent audio filters may be user-specific presets personalized for the user to ensure a good listening experience. For example, the user's average hearing loss 504 may be determined from the personal audiogram 500, and the determined number of user-specific presets may include hearing loss curves that each have an average hearing loss value similar to that of the personal audiogram 500. In an aspect, the average hearing loss value of each of the user-specific presets is within a predetermined difference of the average hearing loss value of the personal audiogram 500 (e.g., +/-10 dB hearing loss). As shown in fig. 21B, each of the user-specific presets may have a different hearing loss contour, even though the presets' average loss levels are similar. For example, one of the hearing loss curves may have a loss contour 2112 that gradually decreases with increasing frequency, one may have a loss contour 2114 with an upward inflection point at about 4 kHz, and one may have a loss contour 2116 with a downward inflection point at about 2 kHz. Such loss contours may be uncommon in a population, but the media system 100 may use audio filters corresponding to the uncommon contours during the enrollment process.
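A minimal sketch of the preset-selection rule just described: keep only those stored hearing loss curves whose average loss falls within a tolerance (the text suggests +/-10 dB) of the audiogram's average loss. The audiogram and curve values below are illustrative, not data from the patent:

```python
# Sketch of operation 2104: filter a library of stored hearing loss curves
# down to user-specific presets whose average loss is close to the average
# loss of the personal audiogram. Curves are modeled as {frequency_hz: dB
# hearing loss} dictionaries, an assumption of this sketch.

def average_loss(curve):
    """Mean hearing loss (dB HL) across the audiometric frequencies."""
    return sum(curve.values()) / len(curve)

def user_specific_presets(audiogram, stored_curves, tolerance_db=10.0):
    target = average_loss(audiogram)
    return [c for c in stored_curves
            if abs(average_loss(c) - target) <= tolerance_db]

audiogram = {250: 20, 1000: 30, 4000: 40}   # ~30 dB average loss
curves = [
    {250: 35, 1000: 30, 4000: 25},          # flat contour, ~30 dB: kept
    {250: 10, 1000: 10, 4000: 15},          # mild loss, ~12 dB: dropped
]
assert user_specific_presets(audiogram, curves) == [curves[0]]
```

The surviving curves would then seed the enrollment comparisons, as the following paragraph describes.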
In an aspect, the determined level and frequency dependent audio filters corresponding to the user-specific presets are applied to a speech and/or music audio signal. More specifically, the audio filters may be evaluated in a decision tree such as the sequence described with respect to fig. 20. Using the enrollment process, the user may identify one of the audio filters as the personal level and frequency dependent audio filter 402 for compensating for the user's hearing loss. Accordingly, at operation 2108, the personal level and frequency dependent audio filter 402 is selected from the audio filters corresponding to the several hearing loss curves 2110 for use at operation 1004 (fig. 10).
Referring to fig. 22A-22B, a flow diagram and a diagram, respectively, of a method of determining a hearing loss profile of an individual based on an audiogram of the individual according to an aspect are shown. The personal audiogram 500 may be used to select a particular hearing loss profile and corresponding level and frequency dependent audio filter from a preset range stored and/or available to the audio signal device 102. More specifically, the personal audiogram 500 may be used to determine presets that most closely correspond to known audiograms.
In an aspect, at operation 2202, the media system 100 may receive the personal audiogram 500. At operation 2204, the media system 100 may determine and/or select a personal hearing loss curve 2205 based on the personal audiogram 500. For example, the personal hearing loss curve 2205 may be selected from several hearing loss curves stored or accessible by the media system 100. The selection of the personal hearing loss curve 2205 can be driven by an algorithm for fitting the personal audiogram 500 to a known hearing loss curve. More specifically, the media system 100 may select the personal hearing loss curve 2205 whose average hearing loss and loss contour most closely match those of the personal audiogram 500. When the closest match is found, the media system 100 may select the personal hearing loss curve 2205 and determine a level and frequency dependent audio filter corresponding to the personal hearing loss curve 2205. More specifically, at operation 2206, the media system 100 may select or determine the personal level and frequency dependent audio filter 402 corresponding to the personal hearing loss curve 2205, which may be used to compensate for the hearing loss of the user.
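The closest-match fit at operation 2204 could be sketched as a nearest-neighbor search over the stored curves. The mean-squared-error metric below is an assumption of this sketch; the patent says only that the curve closest in average loss and contour is selected:

```python
# Sketch of operation 2204: score each stored hearing loss curve against
# the personal audiogram and pick the minimum-error curve. Curves are
# {frequency_hz: dB hearing loss} dictionaries sharing the audiogram's
# frequencies, an assumption of this sketch.

def fit_error(audiogram, curve):
    """Mean squared dB error across the audiogram's frequencies."""
    return sum((audiogram[f] - curve[f]) ** 2
               for f in audiogram) / len(audiogram)

def select_personal_curve(audiogram, stored_curves):
    return min(stored_curves, key=lambda c: fit_error(audiogram, c))

audiogram = {500: 25, 2000: 40}
flat = {500: 30, 2000: 30}
sloped = {500: 25, 2000: 45}
assert select_personal_curve(audiogram, [flat, sloped]) is sloped
```

The selected curve then indexes the corresponding level and frequency dependent audio filter at operation 2206.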
At operation 1004 (fig. 10), the personal level and frequency dependent audio filter 402 selected using one of the above selection processes is applied to the audio input signal 404. Applying the personal level and frequency dependent audio filter 402 to the audio input signal 404 may generate an audio output signal 406. More specifically, the personal level and frequency dependent audio filter 402 may amplify the audio input signal 404 based on the input level 902 and the input frequency 904 of the audio input signal 404. The amplification may enhance the audio input signal 404 in a manner that allows the user to perceive it normally.
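The level- and frequency-dependent amplification at operation 1004 can be illustrated with a toy per-band compressive gain rule: quiet inputs receive the full band gain, while inputs above a knee receive progressively less. The band gains, knee, and compression ratio are illustrative assumptions, not values from the patent:

```python
# Sketch of a level- and frequency-dependent gain: gain depends on both
# the frequency band (more gain at high frequencies, as in a tilted
# profile) and the input level (less gain for loud inputs, so amplified
# audio stays comfortable).

BAND_GAIN_DB = {"low": 5.0, "mid": 10.0, "high": 20.0}  # per-band gain

def applied_gain_db(band, input_level_db, knee_db=50.0, ratio=2.0):
    """Full band gain below the knee; compressed gain above it."""
    gain = BAND_GAIN_DB[band]
    if input_level_db > knee_db:
        # Above the knee, reduce gain by (1 - 1/ratio) dB per input dB.
        gain -= (input_level_db - knee_db) * (1.0 - 1.0 / ratio)
    return max(gain, 0.0)  # never attenuate below unity in this sketch

# Quiet high-frequency content gets full gain; loud content gets less.
assert applied_gain_db("high", 40.0) == 20.0
assert applied_gain_db("high", 70.0) == 10.0
```

A real implementation would apply such gains per frequency band of a filter bank before resynthesizing the output signal.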
At operation 1006 (fig. 10), the audio output signal 406 is output by one or more processors of the media system 100. An audio output signal 406 may be output for playback by an output device. For example, the audio signal device 102 may transmit the audio output signal 406 to the audio output device 104 via a wired or wireless connection. The audio output device 104 may receive the audio output signal 406 and play the audio content to the user. The reproduced audio may be audio from a telephone call, music played by a personal media device, speech of a virtual assistant, or any other audio content delivered by the audio signal device 102 to the audio output device 104.
Referring to fig. 23, a block diagram of a media system in accordance with an aspect is shown. The audio signal device 102 may be any of several types of portable devices or apparatuses having circuitry adapted for a particular function. Accordingly, the illustrated circuitry is provided as an example and not a limitation. The audio signal device 102 may include one or more device processors 2302 to execute instructions to perform the various functions and capabilities described above. The instructions executed by the device processor 2302 of the audio signal device 102 may be retrieved from the device memory 2304, which may include a non-transitory machine-readable medium or a non-transitory computer-readable medium. The instructions may be in the form of an operating system program having a device driver and/or an accessibility engine for performing an enrollment process according to the methods described above and tuning the audio input signal 404 based on the personal level and frequency dependent audio filter 402. The device processor 2302 may also retrieve, from the device memory 2304, audio data 2306 including audiograms or audio signals associated with telephone and/or music playback functions controlled by a phone or music application running on top of the operating system. To perform such functions, the device processor 2302 may directly or indirectly implement control loops, and may receive input signals from and/or provide output signals to other electronic components. For example, the audio signal device 102 may receive input signals from a microphone, a menu button, or a physical switch. The audio signal device 102 may generate and output the audio output signal 406 to a device speaker of the audio signal device 102 (which may be an internal audio output device 104) and/or to an external audio output device 104.
For example, the audio output device 104 may be a wired or wireless headset that receives the audio output signal 406 via a wired or wireless communication link. More specifically, the processors of the audio signal device 102 and the audio output device 104 may be connected to respective RF circuits to receive and process audio signals. For example, a communication link may be established as a wireless connection using the Bluetooth standard, and the device processor 2302 may wirelessly transmit the audio output signal 406 to the audio output device 104 via the communication link. The audio output device 104 may receive and process the audio output signal 406 to play audio content as sound, such as a phone call, podcast, music, and so forth. More specifically, the audio output device 104 may receive and play back the audio output signal 406 to play sound from an earpiece speaker.
The audio output device 104 may include an earpiece processor 2320 and an earpiece memory 2322. The earpiece processor 2320 and the earpiece memory 2322 may perform functions like those performed by the device processor 2302 and the device memory 2304 described above. For example, the audio signal device 102 may transmit one or more of the audio input signal 404, a hearing loss profile, or a level and frequency dependent audio filter to the earpiece processor 2320, and the audio output device 104 may use the input signal in an enrollment process and/or an audio rendering process to generate the audio output signal 406 using the personal level and frequency dependent audio filter 402. More specifically, the earpiece processor 2320 may be configured to generate the audio output signal 406 and render the signal via an earpiece speaker for audio playback. The media system 100 may include several earpieces, although only a single earpiece is shown in fig. 23. For example, a first audio output device 104 may be configured to render a left channel audio output and a second audio output device 104 may be configured to render a right channel audio output.
As described above, one aspect of the present technology is the gathering and use of data available from various sources to perform personalized media enhancement. The present disclosure contemplates that, in some instances, such gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, TWITTER IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., audiograms, vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be useful to benefit the user. For example, the personal information data may be used to perform personalized media enhancement. Accordingly, the use of such personal information data enables a user to have an improved audio listening experience. In addition, the present disclosure also contemplates other uses for which personal information data is beneficial to a user. For example, health and fitness data may be used to provide insight into the overall health condition of a user, or may be used as positive feedback for individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently apply privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible to users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses by the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any steps needed to safeguard and secure access to such personal information data, and to ensure that others with access to the personal information data adhere to their privacy policies and procedures. Furthermore, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed, and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Notwithstanding the foregoing, the present disclosure also contemplates aspects in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of personalized media enhancement, the present technology may be configured to allow a user to opt in to or opt out of participation in the collection of personal information data at any time during or after registration for a service. In addition to providing "opt-in" and "opt-out" options, the present disclosure contemplates providing notifications related to the access or use of personal information. For example, a user may be notified that their personal information data will be accessed when an application is downloaded, and then reminded again just before the personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes the risk of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed aspects, the present disclosure also contemplates that the various aspects can be implemented without the need to access such personal information data. That is, the various aspects of the present technology are not rendered inoperable by the lack of all or a portion of such personal information data. For example, the enrollment process may be determined based on non-personal information data or a bare minimum amount of personal information, such as the user's approximate age, other non-personal information available to the device processor, or publicly available information.
To assist the patent office and any readers of any patent issued on this application in interpreting the appended claims, applicants wish to note that they do not intend any of the appended claims or claim elements to invoke 35 U.S.C. 112(f) unless the words "means for" or "step for" are explicitly used in the particular claim.
In the foregoing specification, the invention has been described with reference to specific exemplary aspects thereof. It will be evident that various modifications may be made to the specific exemplary aspects without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims (20)

1. A method of enhancing an audio input signal to accommodate hearing loss, the method comprising:
outputting, by one or more processors of a media system, an audio signal using a plurality of audio filters, wherein the plurality of audio filters correspond to a plurality of hearing loss curves and have respective gain profiles;
receiving, by the one or more processors, a selection of a personal gain profile in response to outputting the audio signal using the plurality of audio filters;
selecting, by the one or more processors, a personal audio filter based in part on the personal audio filter having the personal gain profile; and
generating, by the one or more processors, an audio output signal by applying the personal audio filter to an audio input signal, wherein the personal audio filter amplifies the audio input signal based on an input level and an input frequency of the audio input signal.
2. The method according to claim 1, wherein the respective gain profiles of the plurality of audio filters are different from one another.
3. The method of claim 2, wherein the audio signal represents music.
4. The method of claim 2, wherein the plurality of audio filters comprises a flat audio filter, a notch audio filter, and a tilted audio filter having different gain profiles, wherein a gain profile of the flat audio filter has a highest gain in a low frequency band, wherein a gain profile of the notch audio filter has a highest gain in a middle frequency band, and wherein a gain profile of the tilted audio filter has a highest gain in a high frequency band.
5. The method of claim 2, further comprising enabling, by the one or more processors, volume adjustment of the media system during output of the audio signal.
6. The method of claim 1, further comprising:
receiving, by the one or more processors, a personal audiogram;
determining, by the one or more processors, the plurality of hearing loss curves based on the personal audiogram; and
determining, by the one or more processors, the plurality of audio filters corresponding to the plurality of hearing loss curves.
7. The method of claim 1, further comprising:
receiving, by the one or more processors, a personal audiogram; and
selecting a personal hearing loss curve from the plurality of hearing loss curves based on the personal audiogram;
wherein the personal audio filter corresponds to the personal hearing loss curve.
8. The method of claim 1, further comprising transmitting the audio output signal to an audio output device for playback by the audio output device.
9. A media system, the media system comprising:
a memory configured to store a plurality of hearing loss profiles and a plurality of audio filters corresponding to the plurality of hearing loss profiles, wherein the plurality of audio filters have respective gain profiles; and
one or more processors configured to:
outputting an audio signal using the plurality of audio filters;
receiving a selection of a personal gain profile in response to outputting the audio signal using the plurality of audio filters;
selecting a personal audio filter based in part on the personal audio filter having the personal gain profile, and
generating an audio output signal by applying the personal audio filter to an audio input signal, wherein the personal audio filter amplifies the audio input signal based on an input level and an input frequency of the audio input signal.
10. The media system of claim 9, wherein the respective gain profiles of the plurality of audio filters are different from each other.
11. The media system of claim 10, wherein the audio signal represents music.
12. The media system of claim 10, wherein the plurality of audio filters comprises a flat audio filter, a notch audio filter, and a tilted audio filter having different gain profiles, wherein a gain profile of the flat audio filter has a highest gain in a low frequency band, wherein a gain profile of the notch audio filter has a highest gain in a middle frequency band, and wherein a gain profile of the tilted audio filter has a highest gain in a high frequency band.
13. The media system of claim 10, wherein the one or more processors are further configured to enable volume adjustment of the media system during output of the audio signal.
14. The media system of claim 9, wherein the one or more processors are further configured to:
receiving a personal audiogram;
determining the plurality of hearing loss profiles based on the personal audiogram; and
determining the plurality of audio filters corresponding to the plurality of hearing loss profiles.
15. The media system of claim 9, wherein the one or more processors are further configured to:
receiving a personal audiogram; and
selecting a personal hearing loss profile from the plurality of hearing loss profiles based on the personal audiogram;
wherein the personal audio filter corresponds to the personal hearing loss profile.
16. A non-transitory computer-readable medium containing instructions that, when executed by one or more processors of a media system, cause the media system to perform a method, the method comprising:
outputting, by the one or more processors, an audio signal using a plurality of audio filters, wherein the plurality of audio filters correspond to a plurality of hearing loss curves and have respective gain profiles;
receiving, by the one or more processors, a selection of a personal gain profile in response to outputting the audio signal using the plurality of audio filters;
selecting, by the one or more processors, a personal audio filter based in part on the personal audio filter having the personal gain profile; and
generating, by the one or more processors, an audio output signal by applying the personal audio filter to an audio input signal, wherein the personal audio filter amplifies the audio input signal based on an input level and an input frequency of the audio input signal.
17. The non-transitory computer-readable medium of claim 16, wherein the respective gain profiles of the plurality of audio filters are different from each other.
18. The non-transitory computer-readable medium of claim 17, wherein the plurality of audio filters includes a flat audio filter, a notched audio filter, and a tilted audio filter having different gain profiles, wherein a gain profile of the flat audio filter has a highest gain in a low frequency band, wherein a gain profile of the notched audio filter has a highest gain in a middle frequency band, and wherein a gain profile of the tilted audio filter has a highest gain in a high frequency band.
19. The non-transitory computer readable medium of claim 16, the method further comprising:
receiving, by the one or more processors, a personal audiogram;
determining, by the one or more processors, the plurality of hearing loss curves based on the personal audiogram; and
determining, by the one or more processors, the plurality of audio filters corresponding to the plurality of hearing loss curves.
20. The non-transitory computer readable medium of claim 16, the method further comprising:
receiving, by the one or more processors, a personal audiogram; and
selecting a personal hearing loss curve from the plurality of hearing loss curves based on the personal audiogram; and
wherein the personal audio filter corresponds to the personal hearing loss curve.
CN202010482726.9A 2019-06-01 2020-06-01 Media system and method for adapting to hearing loss Active CN112019974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210104034.XA CN114422934A (en) 2019-06-01 2020-06-01 Media system and method for adapting to hearing loss

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962855951P 2019-06-01 2019-06-01
US62/855,951 2019-06-01
US16/872,068 2020-05-11
US16/872,068 US11418894B2 (en) 2019-06-01 2020-05-11 Media system and method of amplifying audio signal using audio filter corresponding to hearing loss profile

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210104034.XA Division CN114422934A (en) 2019-06-01 2020-06-01 Media system and method for adapting to hearing loss

Publications (2)

Publication Number Publication Date
CN112019974A CN112019974A (en) 2020-12-01
CN112019974B true CN112019974B (en) 2022-02-18

Family

ID=73264435

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010482726.9A Active CN112019974B (en) 2019-06-01 2020-06-01 Media system and method for adapting to hearing loss
CN202210104034.XA Pending CN114422934A (en) 2019-06-01 2020-06-01 Media system and method for adapting to hearing loss

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210104034.XA Pending CN114422934A (en) 2019-06-01 2020-06-01 Media system and method for adapting to hearing loss

Country Status (3)

Country Link
KR (1) KR102376227B1 (en)
CN (2) CN112019974B (en)
DE (1) DE102020114026A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362839B * 2021-06-01 2024-10-01 Ping An Technology (Shenzhen) Co., Ltd. Audio data processing method, device, computer equipment and storage medium
EP4117308A1 (en) * 2021-07-08 2023-01-11 Audientes A/S Adaptation methods for hearing aid and hearing aid incorporating such adaptation methods

Citations (3)

Publication number Priority date Publication date Assignee Title
EP3276983A1 (en) * 2016-07-29 2018-01-31 Mimi Hearing Technologies GmbH Method for fitting an audio signal to a hearing device based on hearing-related parameter of the user
TW201815173A * 2016-09-26 2018-04-16 Acer Inc. Hearing aid and automatic multi-frequency filter gain control method thereof
CN108024178A * 2016-10-28 2018-05-11 Acer Inc. Electronic device and its frequency-division filter gain optimization method

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US9468401B2 (en) * 2010-08-05 2016-10-18 Ace Communications Limited Method and system for self-managed sound enhancement
CN103155598A * 2010-10-14 2013-06-12 Phonak AG Method for adjusting a hearing device and a hearing device that is operable according to said method
EP2521377A1 (en) * 2011-05-06 2012-11-07 Jacoti BVBA Personal communication device with hearing support and method for providing the same
CN102724360B * 2012-06-05 2015-05-20 Chuangyang Communication Technology (Shenzhen) Co., Ltd. Method and device for implementation of hearing-aid function of mobile phone and hearing-aid mobile phone
RU2568281C2 * 2013-05-31 2015-11-20 Aleksandr Yuryevich Bredikhin Method for compensating for hearing loss in telephone system and in mobile telephone apparatus
KR102583931B1 * 2017-01-25 2023-10-04 Samsung Electronics Co., Ltd. Sound output apparatus and control method thereof
DE102017106359A1 * 2017-03-24 2018-09-27 Sennheiser electronic GmbH & Co. KG Apparatus and method for processing audio signals to improve speech intelligibility
EP3484173B1 (en) * 2017-11-14 2022-04-20 FalCom A/S Hearing protection system with own voice estimation and related method


Also Published As

Publication number Publication date
CN114422934A (en) 2022-04-29
KR20200138674A (en) 2020-12-10
DE102020114026A1 (en) 2020-12-03
CN112019974A (en) 2020-12-01
KR102376227B1 (en) 2022-03-17

Similar Documents

Publication Publication Date Title
CN107615651B (en) System and method for improved audio perception
US8447042B2 (en) System and method for audiometric assessment and user-specific audio enhancement
JP6374529B2 (en) Coordinated audio processing between headset and sound source
US7936888B2 (en) Equalization apparatus and method based on audiogram
KR101521030B1 (en) Method and system for self-managed sound enhancement
AU2021204971B2 (en) Media system and method of accommodating hearing loss
US20150281830A1 (en) Collaboratively Processing Audio between Headset and Source
US20090285406A1 (en) Method of fitting a portable communication device to a hearing impaired user
Kates et al. Using objective metrics to measure hearing aid performance
CN112019974B (en) Media system and method for adapting to hearing loss
US20240276160A1 (en) Audio signal processing method and apparatus, electronic device, and computer-readable storage medium
Dawson et al. Clinical evaluation of expanded input dynamic range in Nucleus cochlear implants
KR100929617B1 (en) Audiogram based equalization system using network
US11368776B1 (en) Audio signal processing for sound compensation
KR102695320B1 (en) A method for performing live public broadcasting, taking into account the auditory perception characteristics of the listener, in headphones
US11615801B1 (en) System and method of enhancing intelligibility of audio playback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant