WO2020131963A1 - Modular ear-cup and ear-bud and power management of the modular ear-cup and ear-bud - Google Patents

Modular ear-cup and ear-bud and power management of the modular ear-cup and ear-bud

Info

Publication number
WO2020131963A1
Authority
WO
WIPO (PCT)
Prior art keywords
ear
audio
power
bud
user
Prior art date
Application number
PCT/US2019/066984
Other languages
French (fr)
Inventor
Luke John CAMPBELL
Dragan Petrovic
Original Assignee
Nura Holdings Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nura Holdings Pty Ltd filed Critical Nura Holdings Pty Ltd
Publication of WO2020131963A1
Priority to US17/349,875 (US11910145B2)
Priority to US17/349,858 (US20210314691A1)
Priority to US18/414,050 (US20240276134A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1083Reduction of ambient noise
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17885General system configurations additionally using a desired external signal, e.g. pass-through audio such as music or speech
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J50/00Circuit arrangements or systems for wireless supply or distribution of electric power
    • H02J50/10Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1008Earpieces of the supra-aural or circum-aural type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1016Earpieces of the intra-aural type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1025Accumulators or arrangements for charging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/10Applications
    • G10K2210/108Communication systems, e.g. where useful sound is kept and noise is cancelled
    • G10K2210/1081Earphones, e.g. for telephones, ear protectors or headsets
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J7/00Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
    • H02J7/34Parallel operation in networks using both storage and other dc sources, e.g. providing buffering
    • H02J7/345Parallel operation in networks using both storage and other dc sources, e.g. providing buffering using capacitors as storage or buffering devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones

Definitions

  • the present application is related to wearable audio systems, and more specifically to methods and systems that include ear-cups and ear-buds.
  • conventional wearable audio systems work on an assumption that all users hear the same, and they produce an audio signal independent of the user’s audio perception. Further, such wearable audio systems consist of standalone ear-cups or standalone ear-buds.
  • the ear-buds, because of their small size, can include only a small power source, allowing the user to continuously listen to music for a limited period of time. Once the power source is drained, the ear-buds need to be removed from the user and charged prior to the next use.
  • described here is a wearable audio system including a modular ear-cup and ear-bud that can be attached to and detached from the user together, or independently of each other. Further, the modular ear-cup and ear-bud can operate together, or independently of each other.
  • the wearable audio system can perform active noise cancellation by measuring noise inside or outside the ear-cup and/or the ear-bud, computing the noise canceling sound, and forwarding the noise canceling sound to a speaker inside the ear-cup and/or the ear-bud.
  • the wearable audio system can be wirelessly charged while operating, thus allowing the user to continuously listen to music longer than previously possible.
  • the wearable audio system can optimize power consumption by redistributing power-intensive tasks to the power sources with the highest amount of power. Further, the ambient sound outside the ear-cup can be measured and played by the speakers in the ear-bud, allowing the user to hear the surrounding environment.
  • FIG. 1 shows a wearable audio system.
  • FIG. 2 shows a cross-section of the modular ear-cups and ear-buds.
  • FIG. 3 shows a modular ear-cup, capable of operating without the ear-buds.
  • FIG. 4 shows a modular ear-bud, capable of operating without the ear-cups.
  • FIGS. 5A-5C show various arrangements of an ear-cup and an ear-bud.
  • FIG. 6A is a flowchart of a method to personalize audio played to a user.
  • FIG. 6B is a schematic diagram depicting the electrical components of the signal processing module that can be located with an ear-cup and/or an ear-bud and that can be used in creating a hearing profile of a user.
  • the hearing profile can be created based on otoacoustic emissions.
  • FIG. 7 shows a power source associated with modular ear-cups and ear-buds in communication with each other.
  • FIG. 8 shows a dual function member according to various embodiments.
  • FIGS. 9A-9B show an encoding member to perform amplitude modulation and frequency modulation of a power field.
  • FIG. 10 shows a system to optimize power consumption between multiple components.
  • FIG. 11 is a flowchart of a method to wirelessly transfer power between a dual function member and a wireless audio device.
  • FIG. 12 is a flowchart of a method to balance power among multiple power sources.
  • FIG. 13 is a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies or modules discussed herein, may be executed.
  • Reference in this specification to “power” can include energy as well as energy per unit time, depending on the context.
  • Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
  • the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
  • various features are described that may be exhibited by some embodiments and not by others.
  • various requirements are described that may be requirements for some embodiments but not others.
  • the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.”
  • the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements.
  • the coupling or connection between the elements can be physical, logical, or a combination thereof.
  • two devices may be coupled directly, or via one or more intermediary channels or devices.
  • devices may be coupled in such a way that information can be passed between them, while not sharing any physical connection with one another.
  • the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the Detailed Description using the singular or plural number may also include the plural or singular number respectively.
  • the word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
  • the term “module” refers broadly to software, hardware, or firmware components (or any combination thereof). Modules are typically functional components that can generate useful data or another output using specified input(s). A module may or may not be self-contained.
  • an application program (also called an “application”) may include one or more modules, or a module may include one or more application programs.
  • FIG. 1 shows a wearable audio system.
  • the wearable audio system can include modular ear-cups 100 and ear-buds 110.
  • the ear-cups and ear-buds are modular because the ear-cups and ear-buds can be attached and detached separately from a user and can function independently of each other or in coordination with each other.
  • the ear-cups 100 can be placed on top or around the user’s ears and can accommodate ear-buds 110 that can be inserted into the user’s ears at the same time.
  • the ear-cups 100 can surround the user’s ear lobe and/or can press on top of the user’s ear lobe.
  • the ear-buds 110 can be wireless and can be fully enclosed within the ear-cups 100.
  • the ear-cups can be box shaped, box shaped with rounded corners, cylindrical, spherical, oval, etc.
  • the ear-cups 100 and the wireless ear-buds 110 can wirelessly, or via an analog or a digital cable, receive audio to play to the user from a remote source, such as a cell phone, a home device, a tablet, an Internet server, a cellular network, a Wi-Fi network, etc.
  • the ear-buds 110 can receive the audio to play from the ear-cups 100, which can receive the audio from the remote source, as mentioned above.
  • the ear-cups 100 and ear-buds 110 can be in wireless communication with each other to synchronize the audio play. For example, the ear-cups 100 can play low audible frequencies, while ear-buds can play mid and high audible frequencies.
  • the ear-cups 100 can cause a vibrotactile stimulation of the skin by emitting low frequencies, for example less than 100 Hz.
  • the ear-buds 110 can emit the full range of audio frequencies, or frequencies that complement the low frequencies emitted from the ear-cups 100, for example 100 Hz or higher.
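  • as a non-limiting illustration of this frequency split, the following sketch divides an audio signal at an assumed 100 Hz crossover, routing the low band to the ear-cups 100 and the complementary band to the ear-buds 110; the filter order and exact cutoff are illustrative assumptions only:

      import numpy as np
      from scipy.signal import butter, sosfilt

      def split_bands(audio, sample_rate, cutoff_hz=100.0):
          # Low band for the ear-cup emitter 210, complementary high band for the ear-bud emitter 230.
          sos_low = butter(4, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
          sos_high = butter(4, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
          return sosfilt(sos_low, audio), sosfilt(sos_high, audio)

      # Example: a 60 Hz tone plus a 1 kHz tone sampled at 48 kHz.
      fs = 48_000
      t = np.arange(fs) / fs
      tone = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 1000 * t)
      ear_cup_audio, ear_bud_audio = split_bands(tone, fs)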
  • the ear-cups 100 and ear-buds 110 can be removed independently from each other and can operate independently of each other. In other words, the user can use only the ear-buds 110 or only the ear-cups 100.
  • the ear-cups 100 and ear-buds 110 can personalize the sound based on the user’s hearing profile prior to playing, as described in this application.
  • FIG. 2 shows a cross-section of the modular ear-cups 100 and ear-buds 110.
  • the ear-cup 100 defines a cavity within which the ear-bud 110 can be partially or fully contained.
  • the ear-cup 100 can include a first wireless transceiver 200, a first audio emitter 210 to emit a first audio into the user’s ear, an externally facing microphone 260, an internally facing microphone 270, a processor 250, and an electric circuit 240.
  • the ear-bud 110 can be placed in proximity to a user’s ear canal, such as at the entrance of the user’s ear canal or within the user’s ear canal.
  • the ear-bud 110 can include a second audio emitter 230 to emit a second audio into the user’s ear canal, a processor 235, the second wireless transceiver 220, an externally facing microphone 290 and internally facing microphone 280.
  • the first wireless transceiver 200 and the second wireless transceiver 220 can communicate with each other and can coordinate an operation of the first audio emitter 210 and the second audio emitter 230 to provide the first audio and the second audio to the user.
  • the coordination can include adjusting the timing of the audio played by the first audio emitter 210 and by the second audio emitter 230 or can involve modifying the audio emitted by the audio emitters 210, 230.
  • the coordination can include splitting an incoming audio signal into low frequencies sent to the audio emitter 210, and high frequencies sent to the audio emitter 230.
  • the first wireless transceiver 200 and the second wireless transceiver 220 can communicate with each other and can coordinate between microphones 260, 270, 280, 290 and audio emitters 210, 230 for the purposes of Active Noise Cancellation (ANC), pass-through of external audio to the user, voice detection, etc.
  • the microphone 280 can measure a hearing profile of a user by measuring an acoustic response within the user’s ear canal generated in response to the second audio.
  • the hearing profile can correlate an amplitude and/or a frequency perceived by the user and an amplitude and/or a frequency received by the user.
  • for example, the user’s ear can receive a frequency of 5 kHz at 20 dB, but the user’s ear can perceive that frequency as 5 kHz at 10 dB.
  • the ear-bud 110 can modify audio prior to emitting the audio to the user, so that the user perceives audio at the desired frequency and/or amplitude.
  • the audio emitter 230 can emit audio, while the microphone 280 can measure an otoacoustic emission (OAE) response generated in response to the audio within the user’s ear canal.
  • the OAE response can be measured automatically, with or without subjective measurement of hearing, that is without asking the user to provide an input whether the user heard the audio, how loud the audio was, etc.
  • the second wireless transceiver 220 can communicate the hearing profile of the user to the first wireless transceiver 200. Based on the hearing profile of the user, the ear-cup 100 can adjust the first audio to increase the person’s enjoyment of music. The ear-bud 110 can also adjust the audio based on the hearing profile of the user.
  • the hearing profile once measured, can be uploaded to a central location, such as an Internet server, a cell phone, a tablet, a home device, a smartwatch, etc., and can be later downloaded for use by the ear-cups 100 and/or the ear-buds 110.
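  • as a hedged sketch only, the snippet below shows one way such a hearing profile could be applied before playback: each band is boosted by the difference between the received and perceived levels. The profile format, band widths, and numeric values here are assumptions for illustration and are not taken from this disclosure:

      import numpy as np
      from scipy.signal import butter, sosfilt

      # Assumed profile format: centre frequency (Hz) -> (received dB, perceived dB).
      hearing_profile = {500: (20.0, 16.0), 1000: (20.0, 18.0), 5000: (20.0, 10.0)}

      def personalize(audio, fs, profile):
          # Boost each octave-wide band by the received-minus-perceived deficit.
          out = np.zeros_like(audio, dtype=float)
          for centre, (received_db, perceived_db) in profile.items():
              band = (centre / np.sqrt(2), centre * np.sqrt(2))
              sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
              gain = 10 ** ((received_db - perceived_db) / 20.0)
              out += gain * sosfilt(sos, audio)
          return out

      # Example call on a 5 kHz tone sampled at 44.1 kHz.
      fs = 44_100
      t = np.arange(fs) / fs
      personalized = personalize(np.sin(2 * np.pi * 5000 * t), fs, hearing_profile)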
  • the ear-cups 100 and the ear-buds 110 can perform wireless active noise cancellation (ANC) using feedforward (FF), feedback (FB), or hybrid (H) techniques.
  • Feedforward can use an external microphone 260, 290.
  • the processor 250, 235 can calculate the noise cancelling sound from the microphone 260, 290 and pass the noise cancelling sound to an internal speaker 210, 230, respectively.
  • the noise canceling sound can destructively interfere with the undesired sound.
  • Feedback can use an internal microphone 270, 280.
  • the processor 250, 235 can calculate the noise cancelling sound and pass the noise canceling sound to the internal speaker 210, 230.
  • the microphone 280 of the ear-bud 110 can record an audio, and transmit the recorded audio to the processor 235 or the processor 250 to calculate the noise canceling sound.
  • the speaker 210 can receive the noise canceling sound from the processor 250, or from the processor 235 via the wireless transceiver 200.
  • the speaker 210 can emit the noise canceling sound. Choosing the processor 250 or 235 to perform the noise canceling calculation can depend on how much power the ear-cup 100 or the ear-bud 110 has, respectively.
  • Hybrid ANC can use both techniques simultaneously.
  • Ear-cups 100 and ear-buds 110 independently can use any combination of FF, FB or H ANC.
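  • one simplified way to compute the noise cancelling sound is sketched below: an adaptive least-mean-squares filter estimates the acoustic path from the external microphone to the internal microphone, and the negated estimate is emitted as anti-noise. A practical ANC system would also model the secondary path from the speaker to the ear; that detail, and the filter length and step size chosen here, are simplifying assumptions:

      import numpy as np

      def feedforward_anc(reference, internal, n_taps=64, mu=0.5):
          # reference: samples from an external microphone (260 or 290)
          # internal: samples from an internal microphone (270 or 280)
          w = np.zeros(n_taps)                 # adaptive FIR estimate of the noise path
          buf = np.zeros(n_taps)
          anti_noise = np.zeros(len(reference))
          for n in range(len(reference)):
              buf = np.roll(buf, 1)
              buf[0] = reference[n]
              estimate = w @ buf
              err = internal[n] - estimate               # prediction error
              w += mu * err * buf / (buf @ buf + 1e-9)   # normalized LMS update
              anti_noise[n] = -estimate                  # sent to speaker 210 or 230
          return anti_noise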
  • the microphone 260, 270, 290, 280 can communicate via a wireless link with the audio emitter in the other device, 230, 210 or the processor 235, 250 respectively.
  • the externally facing microphone 260 on the outside of the ear-cups 100 can measure and record the ambient sound.
  • the ear-cups 100 can record the ambient sound and can wirelessly transmit the ambient sound using the wireless transceiver 200 to the wireless transceiver 220 associated with the ear-buds 110.
  • the ear-buds 110 can calculate the noise cancelling sound and play the noise cancelling sound through the audio emitter 230.
  • the ear-cups 100 can calculate the noise cancelling sound and transmit the noise cancelling sound to the ear-buds 110, which can emit the noise cancelling sound through the audio emitter 230 without calculating the noise cancelling sound.
  • Calculating the noise cancelling sound within the ear-cups can be useful when the ear-buds do not have sufficient power, such as when the power source associated with the ear-buds is low or depleted.
  • the audio emitters 210, 230 can personalize the audio prior to emitting the audio to the user based on the hearing profile of the user.
  • the microphone 280 can measure a sound inside the ear-bud 110.
  • the wireless transceiver 220 can transfer the measured sound to the ear-cup 100.
  • the processor 250 of the ear-cup 100 can calculate the noise canceling sound destructively interfering with the received sound and cause the audio emitter 210 to emit the noise canceling sound.
  • any sound escaping from the ear-bud 110 into the ear-cup 100 can be canceled by the noise canceling sound.
  • the noise canceling sound can be calculated by the processor 235 and the wireless transceiver 220 can transmit the noise canceling sound to the ear-cup 100.
  • the audio emitter 210 can emit the noise canceling sound destructively interfering with any sound escaping from the ear-bud 110.
  • the microphone 270 can measure sound inside the ear-cup 100.
  • the wireless transceiver 200 can transfer the measured sound to the ear-bud 110.
  • the processor 235 of the ear-bud 110 can calculate the noise canceling sound destructively interfering with the received sound and cause the audio emitter 230 to emit the noise canceling sound.
  • any sound escaping from the ear-cup 100 into the ear-bud 110 can be canceled by the noise canceling sound.
  • the noise canceling sound can be calculated by the processor 250 and the wireless transceiver 200 can transmit the noise canceling sound to the ear-bud 110.
  • the audio emitter 230 can emit the noise canceling sound destructively interfering with any sound escaping from the ear-cup 100.
  • the ear-cups 100 and ear-buds 110 can operate in a social mode.
  • the externally facing microphone 260 on the outside of the ear-cups 100 can measure the ambient sounds, such as an external conversation directed to the user. Without the social mode, the user cannot hear the external conversation because the user has both ear-cups 100 and ear-buds 110 covering his ears.
  • the wireless transceiver 200 of the ear-cups 100 can wirelessly transmit the external conversation, i.e. the ambient sound, to the wireless transceiver 220 of the ear-buds 110, or the wireless transceiver 200 of the ear-cups 100 can transmit the external conversation to the processor 250 of the ear-cups 100.
  • the user can hear the external conversation without taking the ear-cups 100 and the ear-buds 110 off.
  • if the audio emitters 210, 230 have been playing a different audio, such as music, and the ear-cups 100 and ear-buds 110 switch into the social mode, the loudness of the different audio can be reduced, or the audio can be completely muted, so that the ambient sound can be heard.
  • a filtered or unfiltered version of the ambient sounds measured by microphone 260 can be played by the audio emitters 230 and/or 210 allowing the user to hear the ambient sound.
  • the ear-cups 100 and ear-buds 110 can switch into the social mode upon receiving an input requesting transmission of the ambient sound, such as a touch of a button or a touch sensitive surface of the ear-cups 100 or the ear-buds 110.
  • the ear-cups 100 and ear-buds 110 can automatically switch into the social mode.
  • a memory associated with the ear-cups 100 and/or the ear-buds 110 can store an identifier associated with the user, such as a name of the user, e.g., “Sam.”
  • the externally facing microphone 260 can continually record ambient sounds.
  • the processor 250 can analyze the recorded sound; upon detecting the word “Sam,” the processor 250 can switch the ear-cups 100 and/or the ear-buds 110 into the social mode, where the ambient sound is passed through to the user.
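  • a minimal sketch of this automatic trigger, assuming an on-device keyword spotter (not specified here) has already transcribed the ambient audio into words:

      USER_IDENTIFIER = "Sam"   # identifier stored in memory, per the example above

      def should_enter_social_mode(transcribed_words, identifier=USER_IDENTIFIER):
          # True when the ambient audio contains the stored identifier.
          return identifier.lower() in (word.lower() for word in transcribed_words)

      if should_enter_social_mode(["hey", "Sam", "got", "a", "minute"]):
          print("social mode: duck playback and pass ambient sound through")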
  • the processor 250 can be associated with the ear-cups 100, ear-buds 110, or can be a remote processor associated with a cell phone, a home device, an Internet server, etc.
  • the ear-cups 100 and ear-buds 110 can automatically detect when the user is speaking and record the user’s voice.
  • the internally facing microphone 280 of the ear-buds 110 does not pick up ambient sound because the microphone 280 is insulated by the ear-cups 100. For example, if a third person is talking next to the user, the microphone 280 is not going to receive a strong signal because the microphone is insulated by the ear-cups and is facing towards the user’s ear canal.
  • the microphone 280 can record OAEs, as described in this application, and can record low frequencies of the user’s speech that are transmitted through the skull and into the user’s ear canal.
  • the ear-bud 110 can determine that the user is speaking and can send a signal to the externally facing microphone 260 to start recording the high frequencies of the user’s speech.
  • the electronic circuit 240 can obtain a sound recorded by the microphones 260, 280, and can combine the sound into a recording of the user’s voice.
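  • a sketch of this two-microphone voice capture, under assumed band limits, detection threshold, and crossover frequency:

      import numpy as np
      from scipy.signal import butter, sosfilt

      def user_is_speaking(internal_frame, fs, band_hz=(80.0, 300.0), rms_threshold=0.01):
          # Bone-conducted speech shows up as low-frequency energy at microphone 280.
          sos = butter(2, band_hz, btype="bandpass", fs=fs, output="sos")
          low = sosfilt(sos, internal_frame)
          return float(np.sqrt(np.mean(low ** 2))) > rms_threshold

      def combine_voice(internal_mic, external_mic, fs, crossover_hz=1000.0):
          # Lows from the in-canal microphone 280, highs from the external microphone 260.
          sos_low = butter(4, crossover_hz, btype="lowpass", fs=fs, output="sos")
          sos_high = butter(4, crossover_hz, btype="highpass", fs=fs, output="sos")
          return sosfilt(sos_low, internal_mic) + sosfilt(sos_high, external_mic)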
  • the electronic circuit 240 can be a part of the ear-cups 100, or the ear-buds 110.
  • the recording of the user’s voice can be sent to a nearby device such as a cell phone, a home device, or the cellular modem 205 within the ear-cups.
  • the device such as the cellular modem 205 can send the voice to the cellular network, thus enabling the user to use the ear-cups 100 and/or ear-buds 110 as a cell phone.
  • a processor 250, 235 can interpret the recording of the user’s voice into a command and execute the command.
  • the command can be a command to the headphones, or to other devices connected to the headphones such as cell phone, home device, an Internet server, etc.
  • the command can state “play a song by Imagine Dragons.”
  • the ear-cups 100 and/or the ear-buds 110 can download the song and play the song to the user.
  • FIG. 3 shows a modular ear-cup 100, capable of operating without the ear-buds 110 in FIG. 1.
  • the ear-cup 100 can personalize the audio based on the hearing profile of the user in several ways.
  • the ear-cup 100 can retrieve OAE measurements made by the ear-buds 110 in FIG. 1 from a remote device such as a cell phone, an Internet server, a home device, etc. without the presence of ear-buds 110.
  • the ear-cup 100 can modify the audio to increase the user’s enjoyment of the audio prior to playing the audio.
  • the audio emitter 210 can emit the modified audio.
  • the wireless transceiver 300 can obtain the hearing profile from a server.
  • the wireless transceiver 300 can be a cellular modem, a radio frequency transceiver, a Wi-Fi transceiver, etc.
  • a battery 310 powering the wireless transceiver 300 needs to store enough power to operate the wireless transceiver 300.
  • the longer the distance of communication the more power the battery 310 needs to store.
  • the ear-cup 100 can carry a bigger battery and store more power than the ear-bud 110 in FIG. 1.
  • the ear-cup 100 can include a sensor 320 placed into contact with a user’s skin.
  • the sensor 320 can measure a user’s perception of the first sound, emitted by the audio emitter 210, by measuring a signal generated in response to the first sound.
  • the sensor 320 can be placed continuously along the perimeter of the ear-cup, where the ear-cup is in contact with the skin, or, the sensor 320 can be placed at discrete locations along the perimeter of the ear-cup.
  • the sensor 320 can be a dry electrode, a wet electrode, and/or a capacitor, which can measure an auditory evoked potential (AEP) response generated in response to the first sound.
  • AEP is a type of EEG signal emanating from the brain through the scalp in response to an acoustical stimulus.
  • the sensor 320 can measure any AEP, such as auditory brainstem response, mid latency response, cortical response, acoustic change complex, auditory steady state response, complex auditory brainstem response, electrocochleography, cochlear microphonic, or cochlear neurophonic AEP.
  • the sensor 320 can also measure an acoustic reflex (also known as the stapedius reflex, middle-ear-muscles (MEM) reflex, attenuation reflex, or auditory reflex).
  • the acoustic reflex is an involuntary muscle contraction that occurs in the middle ear in response to high-intensity sound stimuli or when the person starts to vocalize.
  • the hearing profile measured using the sensor 320 can be communicated to the ear-buds 110 in FIG. 1 wirelessly, from the ear-cup 100 to the ear-bud 110, or via an intermediary device such as a cell phone, a home device, a tablet, a smart watch, an Internet server, etc.
  • the ear-cup 100 can modify the music based on the acoustic profile of the audio emitter 210.
  • the ear-cup 100 can obtain an acoustic profile of the audio emitter and can modify the audio based on the acoustic profile of the audio emitter and acoustic properties of the ear-cup 100 prior to playing the audio to the user.
  • the acoustic profile of the audio emitter can correlate a frequency and/or an amplitude received by the audio emitter 210 and a frequency and/or an amplitude emitted by the audio emitter 210.
  • for example, the audio emitter 210 can receive a frequency of 32 Hz at 10 dB, but the audio emitter can emit that frequency as 32 Hz at 8 dB.
  • the ear-cup 100 can modify audio prior to sending the audio to the audio emitter 210, so that the emitted audio matches the audio intended to be emitted by the audio emitter.
  • the ear-cup 100 can obtain the acoustic profile from the audio emitter 210, can download the acoustic profile of the audio emitter 210 from the Internet, or can measure the acoustic profile of the audio emitter 210 and/or the acoustic profile of the cavity of the ear-cup 100.
  • the processor 250 can send audio to the audio emitter 210 and can determine a frequency and an amplitude contained in the audio.
  • the audio emitter 210 can play the audio, while the internally facing microphone 270 records the emitted audio and sends the recorded audio to the processor 250.
  • the processor 250 can create the acoustic profile of the audio emitter 210 based on the determined frequency and amplitude contained in the audio, and the emitted frequency and amplitude. Additionally, the acoustic profile of the microphone can be taken into account in determining the acoustic profile of the audio emitter 210.
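  • one way such an acoustic profile could be tabulated is sketched below, assuming the intended and recorded signals are time-aligned and of equal length; the chosen test frequencies are illustrative only:

      import numpy as np

      def measure_acoustic_profile(intended, recorded, fs, test_freqs=(32, 100, 1000, 8000)):
          # Compare what the processor asked emitter 210 to play with what microphone 270 heard.
          spec_in = np.abs(np.fft.rfft(intended))
          spec_out = np.abs(np.fft.rfft(recorded))
          freqs = np.fft.rfftfreq(len(intended), d=1.0 / fs)
          profile = {}
          for f in test_freqs:
              idx = int(np.argmin(np.abs(freqs - f)))
              profile[f] = 20 * np.log10((spec_out[idx] + 1e-12) / (spec_in[idx] + 1e-12))
          return profile   # dB deviation per frequency; 0 dB means the emitter is faithful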
  • the user can communicate with the ear-cups 100 using voice.
  • the user can press a button 330 to put the ear-cups 100 in a command state, meaning that the ear-cups 100 are ready to receive an input from the user.
  • the input from the user can be a voice command, recorded by the externally facing microphone 260, such as “play a Beatles song.”
  • the ear-cup 100 can interpret the command locally or can send the recorded voice to a remote processor associated with a cell phone, home device, a tablet, an Internet server, etc. to interpret the command.
  • upon interpreting the command, the transceiver 300, which can be a cellular modem or a Wi-Fi transceiver, can access the cellular network or the Internet, download “Hey Jude,” and play the song to the user.
  • the ear-cup 100 can download the data from the Internet and/or from the cellular network without relying on a cell phone for communication.
  • the downloaded song can be modified prior to playing to account for the hearing profile of the user and/or the acoustic profile of the audio emitter 210.
  • FIG. 4 shows a modular ear-bud 110, capable of operating without the ear-cups 100 in FIG. 1.
  • the ear-buds 110 can obtain a hearing profile associated with the user.
  • the ear-buds 110 can measure the hearing profile, as described in this application, or the ear-buds 110 can download the hearing profile measured by the ear-cup 100 in FIG. 1, or by the ear-buds 110 at an earlier time.
  • the hearing profile can be downloaded from a remote device such as a cell phone, a home device, a tablet, an Internet server, etc. using the wireless transceiver 220.
  • a processor 400 associated with the ear-bud 110 can modify the audio prior to playing the audio through the audio emitter 230.
  • the wireless transceiver 220 can download the hearing profile.
  • the wireless transceiver 220 can be a cellular modem enabling the ear-bud 110 to communicate via cellular networks without the aid of another device such as the cell phone, tablet, home device, etc.
  • the processor 400 can modify the audio based on the acoustic profile of the audio emitter 230.
  • the processor 400 can obtain an acoustic profile of the audio emitter 230 playing the modified audio from the audio emitter 230, from a remote computer, such as a cell phone, a home device, a tablet, an Internet server, or the processor 400 can measure the acoustic profile of the audio emitter 230.
  • the processor 400 can send audio to the audio emitter 230 containing a frequency of 15 kHz at 5 dB, but the audio emitter can emit that frequency as 15 kHz at 8 dB.
  • the audio emitter 230 can receive a frequency of 18 kHz at 9 dB but emit that frequency as 18 kHz at 13 dB.
  • the processor 400 can modify audio prior to sending the audio to the audio emitter 230, so that the emitted audio matches the audio sent to the audio emitter.
  • the ear-bud 110 can detect when the ear-cup 100 in FIG. 1 is proximate to the ear-bud 110.
  • the wireless transceiver 200 in FIG. 2 of the ear-cup 100 can periodically send a short distance signal, which, when received by the wireless transceiver 220, indicates to the ear-bud 110 that the ear-cup is close.
  • the processor 250 in FIG. 2 and/or processor 400 can switch the audio processing to account for the two audio emitters 210 in FIG. 2, and 230.
  • the processor 250, 400 can separate audio into a first audio including low frequencies, and a second audio including mid and high frequencies.
  • FIGS. 5A-5C show various arrangements of an ear-cup and an ear-bud.
  • the standalone ear-cup and standalone ear-bud described, for example, in FIGS. 3-4, respectively, can be combined into a single system as shown, for example, in FIGS. 1-2, 5A-5C.
  • the ear-bud 110 in FIG. 1 can be a wired ear-bud 500, as shown in FIG. 5A.
  • the wired ear-bud 500 can be surrounded by the ear-cup 100, with the wire 505 positioned between the surface of the ear-cup 100 and a user’s skin.
  • the ear-bud 110 can be a wireless ear-bud 510 as shown in FIG. 5B and can fit through a closable passage 520 in FIG. 5C.
  • the closable passage 520 can have two trapdoors 530, 540, mounted on two opposing surfaces of the ear-cup 100.
  • the trapdoors 530, 540 are normally closed, but when the wireless ear-bud 510 pushes against the trapdoor 530, 540, the trapdoors can open, and assume positions 550, 560, to allow the passage of the ear-bud 510.
  • the trapdoors 530, 540 can automatically snap back into their original position, closing the passage 520.
  • a mechanism to automatically snap back the trapdoors 530, 540 can include a spring or a magnet pulling the trapdoors 530, 540, into their closed position.
  • FIG. 6A is a flowchart of a method to personalize audio played to a user.
  • an audio emitter associated with an ear-cup or an ear-bud can obtain a hearing profile associated with a user.
  • the hearing profile can indicate a correlation between a received amplitude and frequency and a perceived amplitude and frequency.
  • the hearing profile can be stored in a memory associated with the audio emitter, such as a memory of the ear-cup or the ear-bud, or the hearing profile can be received from an Internet server or a nearby device such as a cell phone, a tablet, a home device, a wearable device, etc.
  • a processor associated with the audio emitter can modify audio based on the hearing profile associated with the user to equalize the received amplitude and frequency and the perceived amplitude and frequency.
  • the audio emitter can play the modified audio to the user.
  • the processor can measure an otoacoustic emission generated in the user’s cochlea in response to the received amplitude and frequency.
  • the processor can create the hearing profile based on the measured otoacoustic emission. For example, the user’s ear can receive a frequency of 5 kHz at 18 dB, but the otoacoustic emissions can indicate that the user’s ear perceives that frequency as 5 kHz at 12 dB.
  • the audio emitter 230 can modify an audio prior to emitting the audio to the user, so that the user perceives the audio at the desired frequency and/or amplitude. The measurement of the hearing profile is further described in reference to FIG. 6B below.
  • the processor can compensate for the acoustic profile of the audio emitter.
  • the processor can obtain an acoustic profile of an audio emitter playing the modified audio.
  • the acoustic profile of the audio emitter can correlate an amplitude and a frequency received by the audio emitter and an amplitude and a frequency emitted by the audio emitter.
  • the acoustic profile of the audio emitter can be obtained from a memory associated with the audio emitter, from a remote computer such as downloaded from the Internet based on make and model of the audio emitter, and/or it can be measured, etc.
  • the processor can modify the audio based on the acoustic profile of the audio emitter prior to playing the audio to the user.
  • the processor can determine a frequency and an amplitude of audio to be emitted by the audio emitter. For example, the processor can determine that an audio contains a frequency of 23 Hz at 7.5 dB.
  • the processor can measure a frequency and an amplitude of the audio emitted by the audio emitter. For example, when the audio emitter emits the audio containing the frequency of 23 Hz and 7.5 dB, the processor using a microphone can measure the frequency and amplitude of the emitted audio, which can be 23.5 Hz at 6.7 dB.
  • the processor can record in the acoustic profile that the audio emitter emits the frequency of 23 Hz at 7.5 dB as a frequency of 23.5 Hz at 6.7 dB.
  • the processor can detect a proximity between two audio emitters, a first audio emitter and a second audio emitter.
  • the ear-cup can detect a proximity of the ear-bud, or the ear-bud can detect a proximity of the ear-cup, and the processor can switch into a dual emitter processing mode to account for the two audio emitters.
  • the processor can separate audio intended for a first audio emitter into at least a first audio and a second audio and can send the first audio to the first audio emitter and the second audio to the second audio emitter.
  • for example, the first audio can contain the low frequencies and the second audio can contain the mid and high frequencies, or vice versa.
  • a wireless transmitter associated with the second audio emitter can periodically emit a short-range message formatted according to a predetermined protocol, which identifies the second audio emitter.
  • when a wireless transceiver associated with the first audio emitter receives the message, the wireless transceiver can identify the second audio emitter and can indicate to the processor to switch into a dual emitter mode.
  • a range finder can be associated with the first audio emitter.
  • the range finder can be a laser, radar, sonar, lidar, and/or ultrasonic range finder, or a capacitive, inductive, RF, optical, magnetic, or acoustic sensor. Once a second proximate object is detected, the processor can switch into the dual emitter mode.
  • the processor can detect when the user is speaking and take appropriate action, such as recording the user’s voice, transmitting the user’s voice over a cellular network to enable the user to make a phone call, interpreting the user’s voice as a command, etc.
  • using a sensor, the processor can detect low-frequency audio.
  • the sensor can be a microphone placed at or within the user’s ear canal. A microphone placed in this way can detect low frequencies of the user’s voice transmitted through the skull and into the ear canal.
  • upon detecting the low-frequency audio, the processor can send a signal to a second sensor to record high-frequency audio.
  • the second sensor can be an externally facing microphone that can record high to mid-frequencies of the user’s speech.
  • the processor can record the low- frequency audio and the high-frequency audio.
  • the recorded audio can be transmitted over the cellular network as a phone call made without a phone, using only the ear-buds and/or ear-cups, or the recorded audio can be transmitted over the cellular network to be interpreted.
  • the recorded audio can also be stored for later use or interpreted locally as a command to the ear-buds and/or ear-cups.
  • the processor can record an ambient sound surrounding an ear-cup.
  • the processor can perform filtering of the ambient sound to obtain a filtered sound.
  • the processor can wirelessly transmit the ambient sound or the filtered sound to an ear-bud.
  • the processor can play the filtered sound through an audio emitter associated with an ear-bud.
  • the filtering can create the filtered sound to destructively interfere with the ambient sound.
  • when such a filtered sound is played through the audio emitter, the filtered sound destructively interferes with the ambient sound, thus causing ambient noise cancellation within the ear-bud.
  • the filtering can also happen upon receiving an input requesting transmission of the ambient sound to the ear-bud.
  • the ambient sound can be adjusted to enhance the perception of the ambient sound to obtain the filtered sound.
  • the adjustment can come from filtering out the noise in the ambient sound, such as frequencies outside of the human speech frequency.
  • the adjustment can also come from modifying the ambient sound based on the hearing profile of the user to enhance the perception of the ambient sound.
  • the input requesting transmission is described in this application, such as a touch of a button or a touch on the surface of the ear-cup, or detecting an audio addressed to the user such as an audio containing the name of the user.
  • FIG. 6B is a schematic diagram depicting the electrical components of the signal processing module that can be located with an ear-cup and/or an ear-bud and that can be used in creating a hearing profile of a user.
  • the hearing profile can be created based on otoacoustic emissions.
  • Otoacoustic emissions can be measured within the user’s ear canal and then used to determine thresholds at multiple frequencies or relative amplitudes of the otoacoustic emissions at multiple frequencies to one or more suprathreshold sound levels in order to develop the frequency dependent hearing transfer function of the user’s ear(s).
  • Stimulus frequency OAE, swept-tone OAE, transient evoked OAE, DP-OAE, or pulsed DP-OAE can be used for this purpose.
  • the amplitude, latency, hearing threshold, and/or phase of the measured OAEs can be compared to response ranges from normal-hearing and hearing-impaired listeners to develop the frequency dependent hearing transfer function for each ear of the user.
  • since DP-OAEs are best measured in a sealed ear canal with two separate speakers/receivers packed into each ear canal, the use of OAEs is best suited for the ear-bud implementation.
  • one stimulus frequency/loudness combination yields a response amplitude.
  • the measurement of multiple frequencies in this manner yields a plot of response amplitude versus frequency, which is stored in the memory 46 of the signal processing modules 42, or can be stored in a remote database.
  • Many OAE techniques rely upon the measurement of one frequency per stimulus; however, the swept tone OAE measures all frequencies in the range of the sweep.
  • the hearing transfer function remains the same regardless of the measuring method used, that is, the hearing transfer function comprises a plot of the signal amplitude versus frequency of the OAE evoked in the user’s ear upon application of an input audio signal.
  • the hearing transfer function can also comprise the input amplitude associated with the input frequency.
  • in order to determine the hearing transfer function for a user’s ear, the processor 44 captures data points for an input audio signal comprising a number of frequencies, for example, 500, 1000, 2000 and 4000 Hz, which are typically the same frequencies used in the equalizer that acts upon the output sound signal to the loudspeakers 12 and 14. At any one frequency, the processor measures the response to an input audio signal at reducing levels, for example, at 70 dB, 60 dB, 50 dB, 40 dB, etc., until there is no longer a measurable response. The processor 44 records the data point at that time.
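  • the descending-level search can be sketched as follows; the measure_oae probe routine stands in for the actual stimulus-and-record procedure, which is not implemented here:

      def find_threshold(frequency_hz, measure_oae, levels_db=(70, 60, 50, 40, 30, 20)):
          # Descend through stimulus levels until no OAE response is measurable;
          # the last level with a measurable response becomes the recorded data point.
          threshold = None
          for level in levels_db:
              if measure_oae(frequency_hz, level) is None:
                  break
              threshold = level
          return threshold

      # Example with a toy probe that stops responding below 40 dB.
      toy_probe = lambda f, level: level if level >= 40 else None
      transfer_function = {f: find_threshold(f, toy_probe) for f in (500, 1000, 2000, 4000)}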
  • the input audio signal can include a test audio signal, and/or a content audio signal comprising music, speech, environment sounds, animal sounds, etc.
  • the input audio signal can include the content audio signal with an embedded test audio signal.
  • In-situ calibration of the speakers to the user’s ear canal can be performed by the processor 44 prior to making an OAE measurement.
  • in-situ refers to measurements made at times when the speakers and microphone are situated for use inside the ear canal.
  • the acoustic impedance of the ear can be calculated from this data and utilized for deriving corrections.
  • in-situ calibration can be done by playing a test audio signal, such as a chirp, or the content signal, covering the frequency range of the speakers, recording the frequency response with the microphone, and adjusting output by changing the equalizer settings to make a flat frequency response of the desired loudness.
  • this calibration can be done in real time to any playback sound
  • the in-situ calibration accounts for variations in different users’ external portion of the ear and variations in the placement of earbuds. If no audiometric data is yet available, then the in-situ calibration alone can be used for adjusting the sound.
  • any variation of the device that has an internal microphone can use that microphone for in-situ calibration of the speakers, performed every time the user places the headphones on.
  • FIG. 7 shows a power source associated with modular ear-cups and ear-buds in communication with each other.
  • An ear-bud 110 can receive a first audio and can emit the first audio.
  • the ear-bud 110 can include a power source 700, such as a battery, having a power capacity less than the power needed for a potentially desired usage of the wireless ear-bud. Due to the small size necessary to fit within the ear, the ear-bud 110 cannot accommodate a large battery necessary to power the ear-bud for more than several hours of use. The user may want to use the ear-bud continuously for more than several hours, thus draining the battery 700.
  • the battery 700 can be wirelessly charged using wireless power transfer, such as an induction coil 710 within the ear-bud 110.
  • the ear-bud 110 can be wireless or can be wired.
  • the power source 700 of the ear-bud 110 can include transient power storage, such as capacitors that hold a charge transiently.
  • the power source 700 of the ear-bud 110 can be powered by the power source 720 of the ear-cup 100.
  • the transient power source can discharge after a certain amount of time, such as several hours or several days. When the transient power source is discharged, the ear-bud 110 cannot operate without being charged by another external power source.
  • the power source 700 can be charged in a wired or a wireless manner by an external power source such as the power source 720 of the ear-cup 100, a wired device plugged into power, or a wearable device such as a necklace, an earring, glasses, a headband, etc.
  • An ear-cup 100 can have at least two functions, where one function is wirelessly charging the power source 700 of the ear-bud 110, and the other function is playing an audio to a user.
  • the ear-cup 100 can substantially surround the wireless ear-bud 110.
  • the ear-bud 110 can be fully contained within the cavity of the ear-cup 100 as shown in FIG. 1, or the ear-bud 110 can partially protrude from the ear-cup 100 as shown in FIG. 5B.
  • the ear-cup 100 can include a power source 720 to wirelessly transfer power to the power source 700 while the wireless ear-bud 110 is operating.
  • the ear-cup 100 can enable the battery 700 to provide the power needed for the potentially desired usage of the wireless ear-bud.
  • the ear-bud 110 can continually play music to the user even after several hours, when the power source 700 would have been drained had it not been charged.
  • the power source 720 can be connected to an induction coil 730, which can induce a current in the induction coil 710 of the ear-bud 110.
  • the power source 720 within the ear-cup can also be charged wirelessly by a wired device plugged into power, or via a wearable device such as a necklace, an earring, glasses, a headband, etc.
  • the power source 720 can be a battery, or can be a transient power source, such as a capacitor, that can be charged in a wired or wireless manner by an external device.
  • FIG. 8 shows a dual function member according to various embodiments.
  • the device 800 can have a power capacity less than the power needed for a potentially desired usage of the device 800.
  • the device 800 can be wired, or wireless, such as a wireless ear-bud.
  • a dual function member 810, 820, 830, 840, 850, 860 can include a power source configured to wirelessly transfer power to the device 800 while the device 800 is consuming power, thus enabling the device 800 to operate for a duration of the potentially desired usage.
  • the wireless power transfer can include an induction coil 710, 730 in FIG. 7.
  • the device 800 can be a wireless ear-bud carrying a small battery that can only play audio for 3 or 4 hours continuously.
  • the dual function member 810, 820, 830, 840, 850, 860 can be a wearable accessory such as an ear-cup 100 in FIG. 7, a headband 810, glasses 820, an earring 830, a necklace 840, a purse strap 850, clothing 860, etc.
  • a first function of the dual function member 810, 820, 830, 840, 850, 860 can be to wirelessly transfer power to the device 800.
  • a second function of the dual function member 810, 820, 830, 840, 850, 860 is different from the first function, and can include the decorative function such as in the case of the earring 830, the necklace 840, or the headband 810.
  • the second function can include holding a purse, a water bottle, a backpack, etc. as in the case of the purse strap 850.
  • the second function can include holding the hair back as in the case of headband 810, a hair clip, a hair tie, etc.
  • the second function can also include clothing such as in the case of the tank top 860.
  • the dual function member 810, 820, 830, 840, 850, 860 can include one or more power sources.
  • the headband 810 can have multiple discrete wireless power transmitters 812, 814 placed along the perimeter of the headband 810, or the headband can have one continuous wireless power transmitter, such as an induction coil, running around the perimeter of the headband 810.
  • the glasses 820 can have wireless power transmitters 822 positioned in the side arm 824 of the glasses 820 or can have one continuous wireless power transmitter placed through the frame of the glasses 820.
  • the earring 830 can have a wireless power transmitter 832 positioned within the ear clasp 834, or within the hanging part 836 of the earring 830.
  • the earring 830 can also have one continuous wireless power transmitter running through the earring 830.
  • the necklace 840 can have multiple discrete wireless power transmitters 842 placed along the perimeter of the necklace 840, or the necklace can have one continuous wireless power transmitter, such as an induction coil, running around the perimeter of the necklace 840.
  • the strap 850 can also have multiple discrete wireless power transmitters 852, 854 placed along the perimeter of the strap 850, or the strap 850 can have one continuous wireless power transmitter placed through the strap 850.
  • the clothing 860 can have multiple discrete wireless power transmitters 862 located in the region expected to cover the wearer’s shoulders.
  • the dual function member 810, 820, 830, 840, 850, 860 can also be charged wirelessly from a third power source.
  • the device 800, the dual function member 810, 820, 830, 840, 850, 860, and the third power source can create a daisy chain of wireless power transmission.
  • the third power source can receive power wirelessly or can receive power through a wire connected to a power source.
  • FIGS. 9A-9B show an encoding member to perform amplitude modulation and frequency modulation of a power field.
  • the power source 915 can wirelessly transmit the power along with data, such as audio or a command, to the device 950.
  • the encoding member 900 in FIG. 9B of the dual function member 905 can modify the frequency and/or amplitude of the power field 910, i.e., electromagnetic field, produced by the power source 915.
  • the encoding member 900 can be a processor, while the dual function member 905 can be an ear-cup, or any other dual function member 810, 820, 830, 840, 850, 860 in FIG. 8.
  • the device 950 can be a wireless ear-bud, wired ear-bud, a hearing aid, etc.
  • the device 950 can be waterproof.
  • the encoding member 900 can modify the amplitude of the power field 910, to encode the message 920.
  • the result is the amplitude modulated power field 990.
  • the amplitude modulated power field 990 can wirelessly transfer power to the power source 940 of the device 950.
  • the amplitude modulated power field 990 can transmit the message 920 to the device 950.
  • the encoding member 900 can modify the frequency of the power field 910, to encode the message 920.
  • the result is the frequency modulated power field 930.
  • the frequency modulated power field 930 can wirelessly transfer power to the power source 940 of the device 950.
  • the frequency modulated power field 930 can transmit the message 920 to the device 950.
  • the encoding member 900 can perform phase modulation to transmit the message 920.
  • the device 950 can be wired or wireless, such as an ear-bud.
  • the message 920 can include data such as audio, or a command to the device 950.
  • an instruction to the dual function member 810, 820, 830, 840, 850, 860 in FIG. 8, can be transmitted using the power field to the device 950.
  • the instruction can be a press of a button associated with the dual function member 810, 820, 830, 840, 850, 860.
  • a press of the button 925 can be transmitted to the device 950.
  • the press of the button 925 can notify the device 950 to receive a command from the user, or can put the device 950 in a social mode, i.e., notify the device to reduce the amplitude of the audio, or to completely stop emitting the audio, and to allow ambient sound to reach the user.
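  • a toy illustration of amplitude modulating the power field to carry such a message while power continues to flow is given below; the carrier frequency, modulation depth, and bit rate are assumptions rather than values from this disclosure:

      import numpy as np

      def amplitude_modulate_power_field(bits, carrier_hz=125_000, fs=1_000_000,
                                         bit_duration_s=1e-3, depth=0.3):
          # Each bit nudges the field amplitude up or down around its nominal level.
          samples_per_bit = int(fs * bit_duration_s)
          t = np.arange(samples_per_bit * len(bits)) / fs
          carrier = np.sin(2 * np.pi * carrier_hz * t)
          envelope = np.repeat([1.0 + depth if b else 1.0 - depth for b in bits], samples_per_bit)
          return envelope * carrier

      field = amplitude_modulate_power_field([1, 0, 1, 1])   # e.g. an encoded button press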
  • the device 950 can monitor an amount 945 of power associated with the power source 940 of the device 950. When an amount 945 of power of the power source 940 is below a predefined threshold 960, the wireless audio device can activate the power source of the dual function member 905.
  • the predefined threshold 960 can be, for example, 60% of the total amount of power associated with a power source 940.
  • the wireless audio device can be a wearable audio device, such as a headphone, an ear-bud, an ear-cup, a hearing aid, etc.
  • a wireless transceiver 970 can send a request to a wireless transceiver 980 of the dual function member 905 to request more power.
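  • a minimal sketch of this monitoring step, using the 60% threshold from the example above and a placeholder callback standing in for the transceiver 970 to transceiver 980 request:

      POWER_THRESHOLD = 0.60   # fraction of total capacity, per the example above

      def monitor_power(charge_fraction, request_more_power):
          # Ask the dual function member 905 for power once the level drops below threshold.
          if charge_fraction < POWER_THRESHOLD:
              request_more_power()

      monitor_power(0.45, lambda: print("transceiver 970 -> 980: activate power source 915"))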
  • the dual function member 905 can respond in several ways.
  • the dual function member can activate the power source 915 and transfer power to the power source 940, or the dual function member can determine the most power intensive task associated with the power source 940, and move that task to another device, such as a left ear-bud, a right ear-bud, a left ear-cup or a right ear-cup, as explained further in this application.
  • the dual function member 905, upon receiving a request to activate the power source from the device 950, can choose the power source among multiple power sources based on a distance to the device 950 and/or based on an amount of power contained within the power source among the multiple power sources, as explained further below.
  • FIG. 10 shows a system to optimize power consumption between multiple components.
  • the system can include multiple power sources 1000, 1010, 1020, 1030, 1040.
  • Power sources 1000 and 1010, associated with the devices 1005 and 1015 can have a smaller power capacity than the power sources 1020 and 1030 because devices 1005 and 1015 are smaller than the dual function members 1025, 1035, associated with the power sources 1020, 1030, respectively.
  • the system can balance power between any combination of two or more power sources 1000, 1010, 1020, 1030, 1040.
  • a processor 1050 associated with at least one of the dual function members 1025, 1035 can receive a request for more power from at least one of the devices 1005, 1015.
  • the processor 1050 can determine which power source 1000, 1010, 1020, 1030, to activate.
  • the processor can activate the power source closest to the power source requesting additional power. For example, if the power source 1000 is requesting more power, the processor 1050 can activate power source 1020 because the power source 1020 is the closest to the power source 1000.
  • the processor 1050 can take into account both the distance and an amount of power associated with a power source that could provide more power. If the power source 1000 is requesting more power, the processor 1050 can determine that both power sources 1020, 1040, are good candidates based on the distance, with the power source 1020 being closer to the power source 1000. However, an amount of power remaining in the power source 1040 can be higher than the amount of power remaining in the power source 1020. Consequently, the processor can activate the power source 1040 to transfer power to the power source 1000.
  • the processor 1050 can take into account the amount of processing power associated with a power source that could provide more power. If the power source that could provide more power has a processor, then a power intensive task can be transferred from the power source requesting more power to the power source that could provide more power, with or without wirelessly charging the requesting power source. In a more specific example, if the power source 1000 is requesting more power, the processor 1050 can determine that the power source 1020 has the processor 1050 associated with it. The processor 1050 can also determine that the most power intensive task is personalizing the audio according to the hearing profile of the user. The processor 1050 can then take over the personalization task from the device 1005, reducing the drain on the power source 1000, with or without also transferring charge to the power source 1000.
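  • One possible selection policy in the spirit of the example above is sketched below in Python: candidate power sources are scored by remaining charge and distance (the 0.5 weighting is an assumption, not taken from this disclosure), and when a source with its own processor is available, a power intensive task such as audio personalization can be moved to it.

    from dataclasses import dataclass, field

    @dataclass
    class PowerSource:
        name: str
        charge_pct: float              # remaining charge, 0-100
        distance_cm: float             # distance to the requesting device
        has_processor: bool = False
        tasks: list = field(default_factory=list)

    def pick_donor(candidates):
        """Score candidate donors: more remaining charge is better, a larger
        distance to the requester is worse (the 0.5 weight is an assumption)."""
        return max(candidates, key=lambda s: s.charge_pct - 0.5 * s.distance_cm)

    def maybe_offload(requester, donor, task="personalize_audio"):
        """If the chosen donor has a processor, move the power intensive task to
        it instead of only transferring charge to the requester."""
        if donor.has_processor and task in requester.tasks:
            requester.tasks.remove(task)
            donor.tasks.append(task)

    ear_bud = PowerSource("power source 1000 (ear-bud)", charge_pct=15, distance_cm=2,
                          tasks=["personalize_audio", "play_audio"])
    ear_cup = PowerSource("power source 1020 (ear-cup)", charge_pct=40, distance_cm=3,
                          has_processor=True)
    other = PowerSource("power source 1040", charge_pct=90, distance_cm=12)

    donor = pick_donor([ear_cup, other])   # 1040 wins on remaining charge
    maybe_offload(ear_bud, ear_cup)        # 1020 has the processor, so it takes the task
    print(donor.name, ear_bud.tasks, ear_cup.tasks)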
  • FIG. 11 is a flowchart of a method to wirelessly transfer power between a dual function member and a wireless audio device.
  • the dual function member can enable a wireless audio device, having a power capacity less than the power needed for a potentially desired usage of the wireless audio device, to operate for a duration of the potentially desired usage.
  • the dual function member can wirelessly transfer power to the wireless audio device while the wireless audio device is consuming power, thus enabling the wireless audio device to operate for the duration of the potentially desired usage.
  • the wireless audio device can be an ear-bud having a power source capable of operating continuously for up to 5 hours. The user, however, may want to use the ear-buds for more than 5 hours. Currently, the user has to remove the ear-buds, charge them, and then resume using the ear-buds. By wirelessly transferring power to the ear-buds, while the ear-buds are in operation, the user can continue using the ear-buds for more than 5 hours.
  • the dual function member can have a first function and a second function.
  • the first function can include wirelessly transferring power to the wireless audio device, and the second function can be different from the first function.
  • the second function can be to play audio, to be decorative, to correct vision, to hold the hair back, to carry objects, to act as clothing, etc.
  • the dual function member itself can be charged wirelessly using the wireless audio device and/or a third power source.
  • a processor can modulate the power field emitted by the power source to transmit data.
  • the power field can be an electromagnetic field.
  • the modulation can include a frequency modulation, an amplitude modulation or a phase modulation, as explained in this application.
  • the wireless audio device can monitor its power. When the power of the wireless audio device is below a predefined threshold, the wireless audio device can activate the power source of the dual function member.
  • the predefined threshold can be 80% of the total amount of power of a power source associated with the wireless audio device.
  • the dual function member upon receiving a request for more power from the wireless audio device, can automatically determine which power source to activate to transfer power to the wireless audio device.
  • the dual function member can determine the closest power source, a power source with the most amount of power, or the dual function member can redistribute power intensive tasks away from the wireless audio device.
  • the dual function member can use movable magnets to shape the power field transferring power to the wireless audio device.
  • the power field emanating from one power source can be shaped to transfer power to a left ear-bud and can also be shaped to transfer power to a right ear-bud.
  • the movable magnets can be micro electro mechanical system (MEMS) magnets.
  • the dual function member can determine which power source to activate.
  • the dual function member can choose the power source among multiple power sources based on the distance between the power source and the wireless audio device or based on an amount of power contained within the power source in the plurality of power sources.
  • FIG. 12 is a flowchart of a method to balance power among multiple power sources.
  • the processor can determine the first amount of power remaining in a first power source associated with a wearable audio system and the second amount of power remaining in a second power source associated with the wearable audio system.
  • the wearable audio system can be any combination of a headset, a headset with ear-buds, two ear-buds, a hearing aid, etc.
  • the processor can compare the first amount of power and the second amount of power.
  • the processor can send a power intensive task to a processor associated with the first power source.
  • a power intensive task can include modifying audio based on the hearing profile of the user, to enhance the user’s perception of music.
  • a power source associated with an ear-bud could be low on power when the ear-bud needs to modify audio based on the user’s hearing profile.
  • the ear-bud can transfer the task to a processor associated with an ear-cup because a power source associated with the ear-cup has more power than the power source associated with the ear-bud.
  • the ear-cup can modify the audio based on the user’s hearing profile and transfer the modified audio to the ear-bud.
  • a processor can obtain a hearing profile associated with a user.
  • the hearing profile can correlate a received frequency and a received amplitude to a perceived frequency and a perceived amplitude.
  • the processor can send the power intensive task to modify audio based on the hearing profile to the processor associated with the first power source, and then play the modified audio to the user.
  • the processor can determine which of the two ear-buds has more power and can transfer the audio modification task to the ear-bud with more power.
  • the power intensive task can include a wireless communication with a remote device, such as a phone, a tablet, a wireless router or a cell tower.
  • the ear-cups can have a bigger power source, e.g., a battery, than the ear-buds, and the wireless communication session such as Bluetooth communication, Wi-Fi communication, and/or cell tower communication can be done by the ear-cups.
  • the ear-cups can then transfer the received audio to the ear-buds, so that the ear-buds need only receive the audio and play the audio. Receiving and playing the audio consume less power than the wireless communication.
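  • A possible routing decision consistent with this example is sketched below in Python; the charge threshold and the task names are illustrative assumptions, not values from this disclosure.

    def choose_audio_route(earbud_charge_pct, earcup_charge_pct, threshold_pct=30.0):
        """Decide which device should hold the power hungry wireless session
        (Bluetooth, Wi-Fi, or cellular). If the ear-bud is low on charge, or simply
        has less charge than the ear-cup, the ear-cup keeps the session and relays
        the audio so the ear-bud only has to receive and play it."""
        if earbud_charge_pct < threshold_pct or earbud_charge_pct < earcup_charge_pct:
            return {"ear-cup": ["wireless_session", "relay_to_ear-bud"],
                    "ear-bud": ["receive_relay", "play_audio"]}
        return {"ear-cup": [], "ear-bud": ["wireless_session", "play_audio"]}

    print(choose_audio_route(earbud_charge_pct=22.0, earcup_charge_pct=75.0))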
  • the processor can determine a first amount of power needed for a first task and a second amount of power needed for a second task. When the first amount of power is greater than the second amount of power, the processor can send the first task to the processor associated with the first power source. To determine the amount of power associated with the task, the processor can retrieve a table stored in memory indicating an expected amount of power associated with various tasks or running programs.
  • the wearable audio system can include the first power source associated with a first ear-bud, the second power source associated with a second ear-bud, and a third power source associated with an ear-cup.
  • the processor can determine a third amount of power remaining in a third power source associated with the ear-cup.
  • the processor can compare the first amount of power, the second amount of power, and the third amount of power. When the third amount of power is greater than the second amount of power and the first amount of power, the processor can send a power intensive task to a processor associated with the third power source.
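  • One way such a comparison could be organized is sketched below in Python; the per-task energy table and its values are illustrative stand-ins for the table the processor retrieves from memory, not figures from this disclosure.

    # illustrative per-task energy costs standing in for the stored table
    EXPECTED_ENERGY_J = {"personalize_audio": 4.0, "wireless_session": 9.0, "play_audio": 1.5}

    def most_power_intensive(tasks):
        """Pick the task with the largest expected energy cost."""
        return max(tasks, key=lambda t: EXPECTED_ENERGY_J.get(t, 0.0))

    def assign_task(remaining_j):
        """Send the task to the power source with the most remaining energy."""
        return max(remaining_j, key=remaining_j.get)

    remaining_j = {"first ear-bud": 25.0, "second ear-bud": 40.0, "ear-cup": 310.0}
    task = most_power_intensive(["personalize_audio", "play_audio", "wireless_session"])
    print(task, "->", assign_task(remaining_j))     # wireless_session -> ear-cup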
  • FIG. 13 is a diagrammatic representation of a machine in the example form of a computer system 1300 within which a set of instructions, for causing the machine to perform any one or more of the methodologies or modules discussed herein, may be executed.
  • the computer system 1300 includes a processor, memory, non-volatile memory, and an interface device. Various common components (e.g., cache memory) are omitted for illustrative simplicity.
  • the computer system 1300 is intended to illustrate a hardware device on which any of the components described in the example of FIGS. 1-12 (and any other components described in this specification) can be implemented.
  • the computer system 1300 can be of any applicable known or convenient type.
  • the components of the computer system 1300 can be coupled together via a bus or through some other known or convenient device.
  • the computer system 1300 can be included in the dual function member such as 810, 820, 830, 840, 850, 860 in FIG. 8, and 905 in FIG. 9B.
  • the computer system 1300 can be included in the wireless audio device such as 905 in FIG. 9B.
  • the processor of the computer system 1300 can perform the various steps described in this application.
  • the network of the computer system 1300 can be the wireless network facilitating communication between the dual function member and the wireless audio device.
  • the computer system 1300 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these.
  • the computer system 1300 may include one or more computer systems 1300; be unitary or distributed; span multiple locations; span multiple machines; or reside in a cloud, which may include one or more cloud components in one or more networks.
  • one or more computer systems 1300 may perform—without substantial spatial or temporal limitation— one or more steps of one or more methods described or illustrated herein.
  • one or more computer systems 1300 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein.
  • One or more computer systems 1300 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
  • the processor may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or Motorola power PC microprocessor.
  • “machine-readable (storage) medium” or “computer-readable (storage) medium” will include any type of device that is accessible by the processor.
  • the memory is coupled to the processor by, for example, a bus.
  • the memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM).
  • the memory can be local, remote, or distributed.
  • the bus also couples the processor to the non-volatile memory and drive unit.
  • the non-volatile memory is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software in the computer 1300.
  • the non-volatile storage can be local, remote, or distributed.
  • the non-volatile memory is optional because systems can be created with all applicable data available in memory.
  • a typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor.
  • Software is typically stored in the non-volatile memory and/or the drive unit. Indeed, storing an entire large program in memory may not even be possible. Nevertheless, it should be understood that, for software to run, it may, if necessary, be moved to a computer readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution.
  • a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable medium.”
  • a processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.
  • the bus also couples the processor to the network interface device.
  • the interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system 1300.
  • the interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., “direct PC”), or other interfaces for coupling a computer system to other computer systems.
  • the interface can include one or more input and/or output devices.
  • the I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other input and/or output devices, including a display device.
  • the display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device.
  • controllers of any devices not depicted in the example of FIG. 13 reside in the interface.
  • the computer system 1300 can be controlled by operating system software that includes a file management system, such as a disk operating system.
  • one example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Washington, and their associated file management systems.
  • another example is the Linux™ operating system and its associated file management system.
  • the file management system is typically stored in the non-volatile memory and/or drive unit and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile memory and/or drive unit.
  • the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a laptop computer, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone, a Blackberry, a processor, a telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • while the machine-readable medium or machine-readable storage medium is shown in an exemplary embodiment to be a single medium, the terms “machine-readable medium” and “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the terms “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and which cause the machine to perform any one or more of the methodologies or modules of the presently disclosed technique and innovation.
  • routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.”
  • the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processing units or processors in a computer, cause the computer to perform operations to execute elements involving the various aspects of the disclosure.
  • machine-readable storage media, machine-readable media, or computer-readable (storage) media
  • recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others, and transmission type media such as digital and analog communication links.
  • operation of a memory device may comprise a transformation, such as a physical transformation.
  • a physical transformation may comprise a physical transformation of an article to a different state or thing.
  • a change in state may involve an accumulation and storage of charge or a release of stored charge.
  • a change of state may comprise a physical change or transformation in magnetic orientation or a physical change or transformation in molecular structure, such as from crystalline to amorphous or vice versa.
  • a storage medium typically may be non-transitory or comprise a non-transitory device.
  • a non-transitory storage medium may include a device that is tangible, meaning that the device has a concrete physical form, although the device may change its physical state.
  • non-transitory refers to a device remaining tangible despite this change in state.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Power Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Introduced here is a wearable audio system including a modular ear-cup and ear-bud that can be attached to and detached from the user together, or independently of each other. Further, the modular ear-cup and ear-bud can operate together, or independently of each other. The wearable audio system can perform active noise cancellation by measuring noise inside the ear-cup and/or the ear-bud, computing the noise canceling sound, and forwarding the noise canceling sound to a speaker inside the ear-cup and/or the ear-bud. The wearable audio system can be wirelessly charged while operating, thus allowing the user to continuously listen to music for longer than previously possible. The wearable audio system can optimize power consumption by redistributing power intensive tasks to power sources with the highest amount of power. Further, the ambient sound outside the ear-cup can be measured and played by the speakers in the ear-bud, allowing the user to hear the surrounding environment.

Description

MODULAR EAR-CUP AND EAR-BUD AND
POWER MANAGEMENT OF THE MODULAR EAR-CUP AND EAR-BUD
[0001] This application claims priority to the U.S. provisional patent application Serial
Number 62/784,143 filed on December 21, 2018, titled “Modular Ear-Cup and Ear-Bud and Power Management of the Modular Ear-Cup and Ear-Bud,” which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present application is related to wearable audio systems, and more specifically to methods and systems that include ear-cups and ear-buds.
BACKGROUND
[0003] Currently, wearable audio systems work on an assumption that all users hear the same and produce an audio signal independent of the user’s audio perception. Further, the wearable audio systems consist of standalone ear-cups or standalone ear-buds. The ear-buds, because of their small size, can include a small power source allowing the user to continuously listen to music for a limited period of time. Once the power source is drained, the ear-buds need to be removed from the user to be charged, prior to the next usage.
SUMMARY
[0004] Introduced here is a wearable audio system including a modular ear-cup and ear-bud that can be attached to and detached from the user together, or independently of each other. Further, the modular ear-cup and ear-bud can operate together, or independently of each other. The wearable audio system can perform active noise cancellation by measuring noise inside or outside the ear-cup and/or the ear-bud, computing the noise canceling sound, and forwarding the noise canceling sound to a speaker inside the ear-cup and/or the ear-bud. The wearable audio system can be wirelessly charged while operating, thus allowing the user to continuously listen to music for longer than previously possible. In addition, the wearable audio system can optimize power consumption by redistributing power intensive tasks to power sources with the highest amount of power. Further, the ambient sound outside the ear-cup can be measured and played by the speakers in the ear-bud, allowing the user to hear the surrounding environment.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] These and other objects, features and characteristics of the present embodiments will become more apparent to those skilled in the art from a study of the following detailed description in conjunction with the appended claims and drawings, all of which form a part of this specification. While the accompanying drawings include illustrations of various embodiments, the drawings are not intended to limit the claimed subject matter.
[0006] FIG. 1 shows a wearable audio system.
[0007] FIG. 2 shows a cross-section of the modular ear-cups and ear-buds.
[0008] FIG. 3 shows a modular ear-cup, capable of operating without the ear-buds.
[0009] FIG. 4 shows a modular ear-bud, capable of operating without the ear-cups.
[0010] FIGS. 5A-5C show various arrangements of an ear-cup and an ear-bud.
[0011] FIG. 6A is a flowchart of a method to personalize an audio played to user.
[0012] FIG. 6B is a schematic diagram depicting the electrical components of the signal processing module that can be located within an ear-cup and/or an ear-bud and that can be used in creating a hearing profile of a user. The hearing profile can be created based on otoacoustic emissions.
[0013] FIG. 7 shows a power source associated with modular ear-cups and ear-buds in communication with each other.
[0014] FIG. 8 shows a dual function member according to various embodiments.
[0015] FIGS. 9A-9B show an encoding member to perform amplitude modulation and frequency modulation of a power field.
[0016] FIG. 10 shows a system to optimize power consumption between multiple components.
[0017] FIG. 11 is a flowchart of a method to wirelessly transfer power between a dual function member and a wireless audio device.
[0018] FIG. 12 is a flowchart of a method to balance power among multiple power sources.
[0019] FIG. 13 is a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies or modules discussed herein, may be executed.
DETAILED DESCRIPTION
Terminology
[0020] Brief definitions of terms, abbreviations, and phrases used throughout this application are given below.
[0021] Reference in this specification to “power” can include energy as well as energy per unit time, depending on a context. Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described that may be exhibited by some embodiments and not by others. Similarly, various requirements are described that may be requirements for some embodiments but not others.
[0022] Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof, means any connection or coupling, either direct or indirect, between two or more elements. The coupling or connection between the elements can be physical, logical, or a combination thereof. For example, two devices may be coupled directly, or via one or more intermediary channels or devices. As another example, devices may be coupled in such a way that information can be passed there between, while not sharing any physical connection with one another. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
[0023] If the specification states a component or feature “may,” “can,” “could,” or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0024] The term “module” refers broadly to software, hardware, or firmware components (or any combination thereof). Modules are typically functional components that can generate useful data or another output using specified input(s). A module may or may not be self-contained. An application program (also called an “application”) may include one or more modules, or a module may include one or more application programs.
[0025] The terminology used in the Detailed Description is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with certain examples. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. For convenience, certain terms may be highlighted, for example using capitalization, italics, and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same element can be described in more than one way.
[0026] Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, but special significance is not to be placed upon whether or not a term is elaborated or discussed herein. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification.
Modular ear-cups and ear-buds
[0027] FIG. 1 shows a wearable audio system. The wearable audio system can include modular ear-cups 100 and ear-buds 110. The ear-cups and ear-buds are modular because the ear-cups and ear-buds can be attached and detached separately from a user and can function independently of each other or in coordination with each other.
[0028] The ear-cups 100 can be placed on top of or around the user’s ears and can accommodate ear-buds 110 that can be inserted into the user’s ears at the same time. The ear-cups 100 can surround the user’s ear lobe and/or can press on top of the user’s ear lobe. The ear-buds 110 can be wireless and can be fully enclosed within the ear-cups 100. The ear-cups can be box shaped, box shaped with rounded corners, cylindrical, spherical, oval, etc.
[0029] The ear-cups 100 and the wireless ear-buds 110 can wirelessly, or via an analog or a digital cable, receive audio to play to the user from a remote source, such as a cell phone, a home device, a tablet, an Internet server, a cellular network, a Wi-Fi network, etc. The ear-buds 110 can receive the audio to play from the ear-cups 100, which can receive the audio from the remote source, as mentioned above. The ear-cups 100 and ear-buds 110 can be in wireless communication with each other to synchronize the audio play. For example, the ear-cups 100 can play low audible frequencies, while ear-buds can play mid and high audible frequencies. The ear-cups 100 can cause a vibrotactile stimulation of the skin by emitting low frequencies, for example less than 100 Hz. The ear-buds 110 can emit the full range of audio frequencies, or frequencies that complement the low frequencies emitted from the ear-cups 100, for example 100 Hz or higher.
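One way the frequency split described above could be computed is sketched below in Python with NumPy. The brick-wall FFT crossover and the 48 kHz sample rate are illustrative assumptions; a product would more likely use conventional crossover filters, and the disclosure does not prescribe a particular method.

    import numpy as np

    CROSSOVER_HZ = 100.0   # ear-cups take content below ~100 Hz, ear-buds the rest

    def split_bands(block, sample_rate):
        """Split one block of audio into a low band for the ear-cup driver and a
        complementary high band for the ear-bud driver, using an FFT brick-wall
        crossover."""
        spectrum = np.fft.rfft(block)
        freqs = np.fft.rfftfreq(block.size, d=1.0 / sample_rate)
        low = spectrum.copy()
        low[freqs > CROSSOVER_HZ] = 0.0
        high = spectrum - low          # complementary band, so low + high == original
        return np.fft.irfft(low, n=block.size), np.fft.irfft(high, n=block.size)

    sample_rate = 48_000
    t = np.arange(4800) / sample_rate
    block = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
    ear_cup_audio, ear_bud_audio = split_bands(block, sample_rate)
    assert np.allclose(ear_cup_audio + ear_bud_audio, block, atol=1e-9)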
[0030] Further, the ear-cups 100 and ear-buds 110 can be removed independently from each other and can operate independently of each other. In other words, the user can only use the ear-buds 110 or can only use the ear-cups 100. The ear-cups 100 and ear-buds 110 can personalize the sound based on the user’s hearing profile prior to playing, as described in this application.
[0031] FIG. 2 shows a cross-section of the modular ear-cups 100 and ear-buds 110. The ear-cup 100 defines a cavity within which ear-bud 110 can be partially or fully contained. The ear-cup 100 can include a first wireless transceiver 200, a first audio emitter 210 to emit a first audio into the user’s ear, an externally facing microphone 260, an internally facing microphone 270, a processor 250, and an electric circuit 240. The ear-bud 110 can be placed in proximity to a user’s ear canal, such as at the entrance of the user’s ear canal or within the user’s ear canal. The ear-bud 110 can include a second audio emitter 230 to emit a second audio into the user’s ear canal, a processor 235, a second wireless transceiver 220, an externally facing microphone 290 and internally facing microphone 280. The first wireless transceiver 200 and the second wireless transceiver 220 can communicate with each other and can coordinate an operation of the first audio emitter 210 and the second audio emitter 230 to provide the first audio and the second audio to the user. The coordination can include adjusting the timing of the audio played by the first audio emitter 210 and by the second audio emitter 230 or can involve modifying the audio emitted by the audio emitters 210, 230. The coordination can include splitting an incoming audio signal into low frequencies sent to the audio emitter 210, and high frequencies sent to the audio emitter 230. The first wireless transceiver 200 and the second wireless transceiver 220 can communicate with each other and can coordinate between microphones 260, 270, 280, 290 and audio emitters 210, 230 for the purposes of Active Noise Cancellation (ANC), pass-through of external audio to the user, voice detection, etc.
[0032] The microphone 280 can measure a hearing profile of a user by measuring an acoustic response within the user’s ear canal generated in response to the second audio. The hearing profile can correlate an amplitude and/or a frequency perceived by the user and an amplitude and/or a frequency received by the user. For example, the user’s ear can receive a frequency of 5 kHz at 20 dB, but the user’s ear can perceive that frequency as 5 kHz at 10 dB. Based on the hearing profile, the ear-bud 110 can modify audio prior to emitting the audio to the user, so that the user perceives audio at the desired frequency and/or amplitude.
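The Python sketch below illustrates, under assumed profile values, how a hearing profile of this form could be turned into per-band compensation gains and applied to an audio block before it is emitted. It is only one plausible reading of the description, not the personalization algorithm of the disclosure, and the profile entries, band mapping, and sample rate are illustrative assumptions.

    import numpy as np

    # assumed hearing profile entries: level delivered to the ear versus the
    # level the listener perceives, per frequency (illustrative numbers only)
    HEARING_PROFILE = {
        250:  {"received_db": 20.0, "perceived_db": 18.0},
        1000: {"received_db": 20.0, "perceived_db": 16.0},
        5000: {"received_db": 20.0, "perceived_db": 10.0},
    }

    def compensation_gains_db(profile):
        """Per-band boost so that the perceived level matches the intended level."""
        return {f: band["received_db"] - band["perceived_db"] for f, band in profile.items()}

    def personalize_block(block, sample_rate, gains_db):
        """Apply the compensation to one audio block by boosting each FFT bin with
        the gain of the nearest profiled band (a deliberately crude equalizer)."""
        spectrum = np.fft.rfft(block)
        freqs = np.fft.rfftfreq(block.size, d=1.0 / sample_rate)
        bands = np.array(sorted(gains_db))
        nearest = bands[np.argmin(np.abs(freqs[:, None] - bands[None, :]), axis=1)]
        gain_linear = 10.0 ** (np.array([gains_db[int(b)] for b in nearest]) / 20.0)
        return np.fft.irfft(spectrum * gain_linear, n=block.size)

    sample_rate = 48_000
    block = np.sin(2 * np.pi * 5000 * np.arange(4800) / sample_rate)
    louder = personalize_block(block, sample_rate, compensation_gains_db(HEARING_PROFILE))
    print(round(np.abs(louder).max() / np.abs(block).max(), 2))   # ~3.16, i.e. +10 dB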
[0033] To create the hearing profile, the audio emitter 230 can emit audio, while the microphone 280 can measure an otoacoustic emission (OAE) response generated in response to the audio within the user’s ear canal. The OAE response can be measured automatically, with or without subjective measurement of hearing, that is, without asking the user to provide an input whether the user heard the audio, how loud the audio was, etc.
[0034] The second wireless transceiver 220 can communicate the hearing profile of the user to the first wireless transceiver 200. Based on the hearing profile of the user, the ear-cup 100 can adjust the first audio to increase the person’s enjoyment of music. The ear-bud 110 can also adjust the audio based on the hearing profile of the user. The hearing profile, once measured, can be uploaded to a central location, such as an Internet server, a cell phone, a tablet, a home device, a smartwatch, etc., and can be later downloaded for use by the ear-cups 100 and/or the ear-buds 110.
[0035] The ear-cups 100 and the ear-buds 110 can perform wireless active noise cancellation
(ANC). There are three types of ANC, feedforward (FF), feedback (FB) and hybrid (H). Feedforward can use an external microphone 260, 290. The processor 250, 235 can calculate the noise cancelling sound from the microphone 260, 290 and pass the noise cancelling sound to an internal speaker 210, 230, respectively. The noise canceling sound can destructively interfere with the undesired sound.
[0036] Feedback can use an internal microphone 270, 280. The processor 250, 235 can calculate the noise cancelling sound and pass the noise canceling sound to the internal speaker 210, 230. For example, the microphone 280 of the ear-bud 110 can record an audio, and transmit the recorded audio to the processor 235 or the processor 250 to calculate the noise canceling sound. The speaker 210 can receive the noise canceling sound from the processor 250, or from the processor 235 via the wireless transceiver 200. The speaker 210 can emit the noise canceling sound. Choosing the processor 250 or 235 to perform the noise canceling calculation can depend on how much power the ear-cup 100 or the ear-bud 110 has, respectively.
[0037] Hybrid ANC can use both techniques simultaneously. Ear-cups 100 and ear-buds 110 independently can use any combination of FF, FB or H ANC. In some embodiments, the microphone 260, 270, 290, 280 can communicate via a wireless link with the audio emitter in the other device, 230, 210 or the processor 235, 250 respectively.
[0038] In an embodiment using FF ANC, the externally facing microphone 260 on the outside of the ear-cups 100 can measure and record the ambient sound. The ear-cups 100 can record the ambient sound and can wirelessly transmit the ambient sound using the wireless transceiver 200 to the wireless transceiver 220 associated with the ear-buds 110. The ear-buds 110 can calculate the noise cancelling sound and play the noise cancelling sound through the audio emitter 230. In another example, the ear-cups 100 can calculate the noise cancelling sound and transmit the noise cancelling sound to the ear-buds 110, which can emit the noise cancelling sound through the audio emitter 230 without calculating the noise cancelling sound. Calculating the noise cancelling sound within the ear-cups can be useful when the ear-buds do not have sufficient power, such as when the power source associated with the ear-buds is low or depleted. For example, after performing the noise cancellation, the audio emitters 210, 230, can personalize the audio prior to emitting the audio to the user based on the hearing profile of the user.
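As one concrete illustration of feedforward noise cancellation, the Python sketch below adapts an FIR filter with the classic least-mean-squares (LMS) update so that the anti-noise sent to the internal speaker cancels a delayed copy of the ambient noise. The secondary acoustic path, the wireless transport between ear-cup and ear-bud, and all tuning values are simplifications and assumptions; the disclosure does not specify the ANC algorithm.

    import numpy as np

    def lms_feedforward_anc(reference, disturbance, taps=32, mu=0.01):
        """Minimal LMS feedforward ANC: adapt an FIR filter on the external
        (reference) microphone signal so that the emitted anti-noise cancels the
        disturbance heard at the ear. The acoustic secondary path is ignored."""
        w = np.zeros(taps)
        residual = np.zeros(disturbance.size)
        for n in range(taps, disturbance.size):
            x = reference[n - taps:n][::-1]            # most recent reference samples
            anti_noise = -np.dot(w, x)                 # signal sent to the internal speaker
            residual[n] = disturbance[n] + anti_noise  # what the error microphone hears
            w += mu * residual[n] * x                  # LMS weight update
        return residual

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(20_000)
    # disturbance at the ear: a delayed, attenuated copy of the ambient noise
    disturbance = 0.8 * np.concatenate([np.zeros(5), noise[:-5]])
    residual = lms_feedforward_anc(noise, disturbance)
    print("residual power drops from", round(np.var(disturbance), 3),
          "to", round(np.var(residual[-5000:]), 4))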
[0039] In an embodiment using the FB ANC, the microphone 280 can measure a sound inside the ear-bud 110. The wireless transceiver 220 can transfer the measured sound to the ear-cup 100. The processor 250 of the ear-cup 100 can calculate the noise canceling sound destructively interfering with the received sound and cause the audio emitter 210 to emit the noise canceling sound. As a result, any sound escaping from the ear-bud 110 into the ear-cup 100, can be canceled by the noise canceling sound. Alternatively, the noise canceling sound can be calculated by the processor 235 and the wireless transceiver 220 can transmit the noise canceling sound to the ear-cup 100. Again, the audio emitter 210 can emit the noise canceling sound destructively interfering with any sound escaping from the ear-bud 110.
[0040] In another embodiment using the FB ANC, the microphone 270 can measure sound inside the ear-cup 100. The wireless transceiver 200 can transfer the measured sound to the ear-bud 110. The processor 235 of the ear-bud 110 can calculate the noise canceling sound destructively interfering with the received sound and cause the audio emitter 230 to emit the noise canceling sound. As a result, any sound escaping from the ear-cup 100 into the ear-bud 110, can be canceled by the noise canceling sound. Alternatively, the noise canceling sound can be calculated by the processor 250 and the wireless transceiver 200 can transmit the noise canceling sound to the ear-bud 110. Again, the audio emitter 230 can emit the noise canceling sound destructively interfering with any sound escaping from the ear-cup 100.
[0041] The ear-cups 100 and ear-buds 110 can operate in a social mode. The externally facing microphone 260 on the outside of the ear-cups 100 can measure the ambient sounds such as an external conversation directed to the user. Without the social mode, the user cannot hear the external conversation because the user has both ear-cups 100 and ear-buds 110 covering his ears. When in social mode, the wireless transceiver 200 of the ear-cups 100 can wirelessly transmit the external conversation, i.e. the ambient sound, to the wireless transceiver 220 of the ear-buds 110, or the wireless transceiver 200 of the ear-cups 100 can transmit the external conversation to the processor 250 of the ear-cups 100. That way, the user can hear the external conversation without taking the ear-cups 100 and the ear-buds 110 off. If the audio emitters 210, 230 have been playing a different audio, such as music, and the ear-cups 100 and ear-buds 110 switch into the social mode, the loudness of the different audio can be reduced, or completely muted, so that the ambient sound can be heard. Further, a filtered or unfiltered version of the ambient sounds measured by microphone 260 can be played by the audio emitters 230 and/or 210 allowing the user to hear the ambient sound.
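A minimal sketch of the social-mode mixing described above is shown below in Python; the ducking depth and ambient gain are illustrative assumptions rather than values from the disclosure.

    import numpy as np

    def mix_social_mode(music, ambient, social_mode, duck_db=-20.0, ambient_gain=1.0):
        """When social mode is active, duck the music by duck_db and mix in the
        ambient signal captured by the externally facing microphone so the user
        can hear the conversation without removing the ear-cups or ear-buds."""
        if not social_mode:
            return music
        duck = 10.0 ** (duck_db / 20.0)     # -20 dB -> 0.1 linear
        return duck * music + ambient_gain * ambient

    music = np.sin(2 * np.pi * 440 * np.arange(480) / 48_000)
    ambient = 0.05 * np.random.randn(480)          # speech picked up by microphone 260
    output = mix_social_mode(music, ambient, social_mode=True)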
[0042] The ear-cups 100 and ear-buds 110 can switch into the social mode upon receiving an input requesting transmission of the ambient sound, such as a touch of a button or a touch sensitive surface of the ear-cups 100 or the ear-buds 110. The ear-cups 100 and ear-buds 110 can automatically switch into the social mode. A memory associated with the ear-cups 100 and/or the ear-buds 110 can store an identifier associated with the user, such as a name of the user, e.g. “Sam.” The externally facing microphone 260 can continually record ambient sounds. The processor 250 can analyze the recorded sound and, upon detecting the word “Sam”, can switch the ear-cups 100 and/or the ear-buds 110 into the social mode, where the ambient sound is passed through to the user. The processor 250 can be associated with the ear-cups 100, ear-buds 110, or can be a remote processor associated with a cell phone, a home device, an Internet server, etc.
[0043] The ear-cups 100 and ear-buds 110 can automatically detect when the user is speaking and record the user’s voice. The internally facing microphone 280 of the ear-buds 110 does not pick up ambient sound because the microphone 280 is insulated by the ear-cups 100. For example, if a third person is talking next to the user, the microphone 280 is not going to receive a strong signal because the microphone is insulated by the ear-cups and is facing towards the user’s ear canal.
[0044] The microphone 280 can record OAEs, as described in this application, and can record low frequencies of the user’s speech that are transmitted through the skull and into the user’s ear canal. When the microphone 280 detects low frequencies, the ear-bud 110 can determine that the user is speaking, and can send a signal to the externally facing microphone 260 to start recording the high frequencies of the user’s speech.
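The own-voice trigger described above can be approximated by an energy detector on the low band of the in-ear microphone signal, as in the Python sketch below; the cutoff frequency and threshold are assumptions, not values from the disclosure.

    import numpy as np

    def own_voice_detected(in_ear_block, sample_rate, cutoff_hz=300.0, threshold=1e-4):
        """Detect the user's own voice from the internally facing microphone: speech
        conducted through the skull shows up mainly as low-frequency energy in the
        occluded ear canal."""
        spectrum = np.abs(np.fft.rfft(in_ear_block)) ** 2
        freqs = np.fft.rfftfreq(in_ear_block.size, d=1.0 / sample_rate)
        low_energy = spectrum[freqs < cutoff_hz].sum() / in_ear_block.size
        return low_energy > threshold

    # if the in-ear microphone 280 flags own-voice activity, the external
    # microphone 260 can be told to start capturing the mid/high frequencies
    if own_voice_detected(np.sin(2 * np.pi * 150 * np.arange(960) / 48_000), 48_000):
        print("start recording on the externally facing microphone")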
[0045] The electronic circuit 240 can obtain a sound recorded by the microphones 260, 280, and can combine the sound into a recording of the user’s voice. The electronic circuit 240 can be a part of the ear-cups 100, or the ear-buds 110. The recording of the user’s voice can be sent to a nearby device such as a cell phone, a home device, or the cellular modem 205 within the ear-cups. The device such as the cellular modem 205 can send the voice to the cellular network, thus enabling the user to use the ear-cups 100 and/or ear-buds 110 as a cell phone.
[0046] A processor 250, 235 can interpret the recording of the user’s voice into a command and execute the command. The command can be a command to the headphones, or to other devices connected to the headphones such as a cell phone, a home device, an Internet server, etc. The command can state “play a song by Imagine Dragons.” The ear-cups 100 and/or the ear-buds 110 can download the song and play the song to the user.
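Purely for illustration, a command such as the one above could be mapped to an action with a simple keyword parser like the Python sketch below. Speech-to-text itself, whether local or remote, is outside the sketch, and the command vocabulary shown is hypothetical rather than taken from the disclosure.

    def interpret_command(transcript):
        """Map a recognized utterance to an action for the ear-cups/ear-buds.
        The transcript is assumed to already be available as plain text."""
        text = transcript.lower()
        if text.startswith("play "):
            return {"action": "play", "query": text[len("play "):]}
        if "social mode" in text:
            return {"action": "social_mode", "enabled": "off" not in text}
        if "volume up" in text or "louder" in text:
            return {"action": "volume", "delta_db": +3}
        return {"action": "unknown", "transcript": transcript}

    print(interpret_command("play a song by Imagine Dragons"))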
[0047] FIG. 3 shows a modular ear-cup 100, capable of operating without the ear-buds 110 in
FIG. 1. The ear-cup 100 can personalize the audio based on the hearing profile of the user in several ways. The ear-cup 100 can retrieve OAE measurements made by the ear-buds 110 in FIG. 1 from a remote device such as a cell phone, an Internet server, a home device, etc. without the presence of ear-buds 110. Upon retrieving the OAE measurements, the ear-cup 100 can modify the audio to increase the user’s enjoyment of the audio prior to playing the audio. The audio emitter 210 can emit the modified audio.
[0048] The wireless transceiver 300 can obtain the hearing profile from a server. The wireless transceiver 300 can be a cellular modem, a radio frequency transceiver, a Wi-Fi transceiver, etc. To operate the wireless transceiver 300, a battery 310 powering the wireless transceiver 300 needs to store enough power to operate the wireless transceiver 300. The longer the distance of communication, the more power the battery 310 needs to store. Due to the larger size, the ear-cup 100 can carry a bigger battery and store more power than the ear-bud 110 in FIG. 1.
[0049] The ear-cup 100 can include a sensor 320 placed into contact with a user’s skin. The sensor 320 can measure a user’s perception of the first sound, emitted by the audio emitter 210, by measuring a signal generated in response to the first sound. The sensor 320 can be placed continuously along the perimeter of the ear-cup, where the ear-cup is in contact with the skin, or the sensor 320 can be placed at discrete locations along the perimeter of the ear-cup. The sensor 320 can be a dry electrode, a wet electrode, and/or a capacitor, which can measure auditory evoked potential response (AEP) generated in response to the first sound.
[0050] AEP is a type of EEG signal emanating from the brain through the scalp in response to an acoustical stimulus. The sensor 320 can measure any AEP, such as auditory brainstem response, mid latency response, cortical response, acoustic change complex, auditory steady state response, complex auditory brainstem response, electrocochleography, cochlear microphonic, or cochlear neurophonic AEP. The sensor 320 can also measure an acoustic reflex (also known as the stapedius reflex, middle-ear-muscles (MEM) reflex, attenuation reflex, or auditory reflex). The acoustic reflex is an involuntary muscle contraction that occurs in the middle ear in response to high-intensity sound stimuli or when the person starts to vocalize.
[0051] The hearing profile measured using the sensor 320 can be communicated to the ear-buds 110 in FIG. 1 wirelessly, from the ear-cup 100 to the ear-bud 110, or can be communicated via an intermediary device such as a cell phone, a home device, a tablet, a smart watch, an Internet server, etc.
[0052] In addition to or instead of modifying the music based on the hearing profile of the user, the ear-cup 100 can modify the music based on the acoustic profile of the audio emitter 210. The ear-cup 100 can obtain an acoustic profile of the audio emitter and can modify the audio based on the acoustic profile of the audio emitter and acoustic properties of the ear-cup 100 prior to playing the audio to the user.
[0053] The acoustic profile of the audio emitter can correlate a frequency and/or an amplitude received by the audio emitter 210 and a frequency and/or an amplitude emitted by the audio emitter 210. For example, the audio emitter 210 can receive a frequency of 32 Hz at 10 dB, but the audio emitter can emit that frequency as 32 Hz at 8 dB. Based on the acoustic profile of the audio emitter 210, the ear-cup 100 can modify audio prior to sending the audio to the audio emitter 210, so that the emitted audio matches the audio intended to be emitted by the audio emitter.
[0054] The ear-cup 100 can obtain the acoustic profile from the audio emitter 210, can download the acoustic profile of the audio emitter 210 from the Internet, or can measure the acoustic profile of the audio emitter 210 and/or the acoustic profile of the cavity of the ear-cup 100. To measure the acoustic profile, the processor 250 can send audio to the audio emitter 210 and can determine a frequency and an amplitude contained in the audio. The audio emitter 210 can play the audio, while the internally facing microphone 270 records the emitted audio and sends the recorded audio to the processor 250. The processor 250 can create the acoustic profile of the audio emitter 210 based on the determined frequency and amplitude contained in the audio, and the emitted frequency and amplitude. Additionally, the acoustic profile of the microphone can be taken into account in determining the acoustic profile of the audio emitter 210.
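The measurement loop described above might look like the following Python sketch, in which the real driver and microphone path is replaced by a simulated emitter with an assumed low-frequency droop; the test frequencies, levels, and the simulated response are illustrative only and are not values from the disclosure.

    import numpy as np

    SAMPLE_RATE = 48_000

    def simulated_emitter(frequency_hz, level_db):
        """Stand-in for the real driver plus microphone path: the emitter is
        assumed to roll off by 3 dB at the lowest test tone (illustrative only)."""
        droop_db = -3.0 if frequency_hz < 100 else 0.0
        amp = 10.0 ** ((level_db + droop_db) / 20.0)
        t = np.arange(SAMPLE_RATE // 10) / SAMPLE_RATE
        return amp * np.sin(2 * np.pi * frequency_hz * t)

    def measured_level_db(block):
        """Peak level of a recorded sine, in dB, derived from its RMS value."""
        rms = np.sqrt(np.mean(block ** 2))
        return 20.0 * np.log10(rms) + 20.0 * np.log10(np.sqrt(2.0))

    def build_acoustic_profile(test_freqs_hz=(32, 250, 1000, 8000), test_level_db=-20.0):
        """Play each test tone at a known level, measure the level that actually
        comes out, and record the requested/emitted pair so later audio can be
        pre-corrected before it reaches the emitter."""
        profile = {}
        for f in test_freqs_hz:
            recorded = simulated_emitter(f, test_level_db)
            profile[f] = {"requested_db": test_level_db,
                          "emitted_db": round(measured_level_db(recorded), 1)}
        return profile

    print(build_acoustic_profile())   # e.g. 32 Hz: requested -20 dB, emitted about -23 dB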
[0055] The user can communicate with the ear-cups 100 using voice. The user can press a button 330 to put the ear-cups 100 in a command state, meaning that the ear-cups 100 are ready to receive an input from the user. The input from the user can be a voice command, recorded by the externally facing microphone 260, such as “play a Beatles song.” The ear-cup 100 can interpret the command locally or can send the recorded voice to a remote processor associated with a cell phone, home device, a tablet, an Internet server, etc. to interpret the command. Upon interpreting the command, the transceiver 300, which can be a cellular modem or a Wi-Fi transceiver, can access the cellular network or the Internet, download “Hey Jude”, and play the song to the user. The ear-cup 100 can download the data from the Internet and/or from the cellular network without relying on a cell phone for communication. The downloaded song can be modified prior to playing to account for the hearing profile of the user and/or the acoustic profile of the audio emitter 210.
[0056] FIG. 4 shows a modular ear-bud 110, capable of operating without the ear-cups 100 in
FIG. 1. The ear-buds 110 can obtain a hearing profile associated with the user. The ear-buds 110 can measure the hearing profile, as described in this application, or the ear-buds 110 can download the hearing profile measured by the ear-cup 100 in FIG. 1, or by the ear-buds 110 at an earlier time. The hearing profile can be downloaded from a remote device such as a cell phone, a home device, a tablet, an Internet server, etc. using the wireless transceiver 220. A processor 400 associated with the ear-bud 110 can modify the audio prior to sending the audio to the audio emitter 230.
[0057] The wireless transceiver 220 can download the hearing profile. The wireless transceiver 220 can be a cellular modem enabling the ear-bud 110 to communicate via cellular networks without the aid of another device such as the cell phone, tablet, home device, etc.
[0058] In addition to modifying the audio based on the hearing profile of the user, the processor 400 can modify the audio based on the acoustic profile of the audio emitter 230. The processor 400 can obtain an acoustic profile of the audio emitter 230 playing the modified audio from the audio emitter 230, from a remote computer, such as a cell phone, a home device, a tablet, an Internet server, or the processor 400 can measure the acoustic profile of the audio emitter 230.
[0059] To measure the acoustic profile of the audio emitter 230, the processor 400 can send audio containing a frequency of 15 kHz at 5 dB to the audio emitter 230, and the audio emitter can emit that frequency as 15 kHz at 8 dB. In another example, the audio emitter 230 can receive a frequency of 18 kHz at 9 dB but emit that frequency as 18 kHz at 13 dB. Based on the acoustic profile of the audio emitter 230, the processor 400 can modify audio prior to sending the audio to the audio emitter 230, so that the emitted audio matches the audio sent to the audio emitter.
[0060] The ear-bud 110 can detect when the ear-cup 100 in FIG. 1 is proximate to the ear-bud
110. For example, the wireless transceiver 200 of the ear-cup 100 in FIG. 2 can periodically send a short distance signal, which when received by the wireless transceiver 220 indicates to the ear-bud 110 that the ear-cup is close. Once the ear-cup 100 has been detected, the processor 250 in FIG. 2 and/or processor 400, can switch the audio processing to account for the two audio emitters 210 in FIG. 2, and 230. For example, the processor 250, 400, can separate audio into a first audio including low frequencies, and a second audio including mid and high frequencies. The processor 250, 400, can send the low frequencies to the audio emitter 210 associated with the ear-cup 100, and the mid and high frequencies to the audio emitter 230 associated with the ear-bud 110. The communication between the ear-cup 100 and ear-bud 110 can occur through the wireless transceivers 200, 220.
[0061] FIGS. 5A-5C show various arrangements of an ear-cup and an ear-bud. The standalone ear-cup and standalone ear-bud described, for example, in FIGS. 3-4, respectively, can be combined into a single system as shown, for example, in FIGS. 1-2, 5A-5C.
[0062] The ear-bud 110 in FIG. 1 can be a wired ear-bud 500, as shown in FIG. 5A. The wired ear-bud 500 can be surrounded by the ear-cup 100, with the wire 505 positioned between the surface of the ear-cup 100 and a user’s skin.
[0063] The ear-bud 110 can be a wireless ear-bud 510 as shown in FIG. 5B and can fit through a closable passage 520 in FIG. 5C. The closable passage 520 can have two trapdoors 530, 540, mounted on two opposing surfaces of the ear-cup 100. The trapdoors 530, 540, are normally closed, but when the wireless ear-bud 510 pushes against the trapdoor 530, 540, the trapdoors can open, and assume positions 550, 560, to allow the passage of the ear-bud 510. When the wireless ear-bud 510 is removed, the trapdoors 530, 540, can automatically snap back into their original position, closing the passage 520. A mechanism to automatically snap back the trapdoors 530, 540, can include a spring or a magnet pulling the trapdoors 530, 540, into their closed position.
[0064] FIG. 6A is a flowchart of a method to personalize audio played to a user. In step 600, an audio emitter associated with an ear-cup or an ear-bud can obtain a hearing profile associated with a user. The hearing profile can indicate a correlation between a received amplitude and frequency and a perceived amplitude and frequency. The hearing profile can be stored in a memory associated with the audio emitter, such as a memory of the ear-cup or the ear-bud, or the hearing profile can be received from an Internet server or a nearby device such as a cell phone, a tablet, a home device, a wearable device, etc.
[0065] In step 610, a processor associated with the audio emitter can modify audio based on the hearing profile associated with the user to equalize the received amplitude and frequency and the perceived amplitude and frequency. In step 620, the audio emitter can play the modified audio to the user.
[0066] To obtain the hearing profile, the processor can measure an otoacoustic emission generated in the user’s cochlea in response to the received amplitude and frequency. The processor can create the hearing profile based on the measured otoacoustic emission. For example, the user’s ear can receive a frequency of 5 kHz at 18 dB, but the otoacoustic emissions can indicate that the user’s ear perceives that frequency as 5 kHz at 12 dB. Based on the hearing profile, the audio emitter 230 can modify an audio prior to emitting the audio to the user, so that the user perceives the audio at the desired frequency and/or amplitude. The measurement of the hearing profile is further described in reference to FIG. 6B below.
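For context, the Python sketch below shows how the level of a distortion-product otoacoustic emission at 2*f1 - f2 could be estimated from an in-ear microphone capture of two probe tones. The probe frequencies and levels are assumptions, and the further step of converting OAE responses into the perceived-level entries of a hearing profile is not shown because this description does not spell that mapping out.

    import numpy as np

    SAMPLE_RATE = 48_000

    def dpoae_level_db(canal_recording, f1_hz, f2_hz):
        """Estimate the distortion-product otoacoustic emission generated by the
        cochlea at 2*f1 - f2 from the in-ear microphone recording."""
        n = canal_recording.size
        spectrum = np.abs(np.fft.rfft(canal_recording)) * 2.0 / n
        freqs = np.fft.rfftfreq(n, d=1.0 / SAMPLE_RATE)
        dp_hz = 2 * f1_hz - f2_hz
        bin_idx = np.argmin(np.abs(freqs - dp_hz))
        return 20.0 * np.log10(spectrum[bin_idx] + 1e-12)

    # simulated ear-canal capture: the two probe tones plus a faint cochlear
    # emission at 2*f1 - f2 (levels are illustrative)
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    f1, f2 = 4000, 4800
    canal = (0.1 * np.sin(2 * np.pi * f1 * t) + 0.1 * np.sin(2 * np.pi * f2 * t)
             + 0.001 * np.sin(2 * np.pi * (2 * f1 - f2) * t))
    print("DPOAE level near", 2 * f1 - f2, "Hz:",
          round(dpoae_level_db(canal, f1, f2), 1), "dB")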
[0067] In addition to compensating for the hearing profile of the user, the processor can compensate for the acoustic profile of the audio emitter. The processor can obtain an acoustic profile of an audio emitter playing the modified audio. The acoustic profile of the audio emitter can correlate an amplitude and a frequency received by the audio emitter and an amplitude and a frequency emitted by the audio emitter. The acoustic profile of the audio emitter can be obtained from a memory associated with the audio emitter, from a remote computer such as downloaded from the Internet based on make and model of the audio emitter, and/or it can be measured, etc. The processor can modify the audio based on the acoustic profile of the audio emitter prior to playing the audio to the user.
[0068] To measure the acoustic profile of the audio emitter, the processor can determine a frequency and an amplitude of audio to be emitted by the audio emitter. For example, the processor can determine that an audio contains a frequency of 23 Hz at 7.5 dB. The processor can measure a frequency and an amplitude of the audio emitted by the audio emitter. For example, when the audio emitter emits the audio containing the frequency of 23 Hz at 7.5 dB, the processor, using a microphone, can measure the frequency and amplitude of the emitted audio, which can be 23.5 Hz at 6.7 dB. To create the acoustic profile of the audio emitter, the processor can record in the acoustic profile that the audio emitter emits the frequency of 23 Hz at 7.5 dB as a frequency of 23.5 Hz at 6.7 dB.
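As an editorial sketch under stated assumptions (the helper name is hypothetical and absolute dB SPL calibration is omitted), the following fragment estimates the dominant frequency and relative level of a recorded tone, which could then be paired with the intended frequency and level to build an acoustic profile entry such as (23 Hz, 7.5 dB) mapping to (23.5 Hz, 6.7 dB).

```python
# Sketch: characterize an emitter by comparing what it was asked to play with
# what a microphone actually measured. The dominant-bin estimate is a
# simplification of a real measurement chain.
import numpy as np

def measure_emitted_tone(recorded: np.ndarray, sample_rate: int):
    """Estimate (frequency_hz, relative_level_db) of the strongest component."""
    spectrum = np.abs(np.fft.rfft(recorded))
    freqs = np.fft.rfftfreq(len(recorded), d=1.0 / sample_rate)
    peak = int(np.argmax(spectrum))
    level_db = 20.0 * np.log10(spectrum[peak] / len(recorded) + 1e-12)
    return freqs[peak], level_db

# The acoustic profile is then a table of intended -> measured pairs, e.g.
# acoustic_profile[(23.0, 7.5)] = (23.5, 6.7)
```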
[0069] The processor can detect a proximity between two audio emitters, a first audio emitter and a second audio emitter. For example, the ear-cup can detect a proximity of the ear-bud, or the ear-bud can detect a proximity of the ear-cup, and the processor can switch into a dual emitter processing mode to account for the two audio emitters. For example, the processor can separate audio intended for a first audio emitter into at least a first audio and a second audio and can send the first audio to the first audio emitter and the second audio to the second audio emitter. The first audio can contain the low frequencies and the second audio can contain the mid and high frequencies, or vice versa.
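The following Python sketch is an editorial illustration only, not part of the disclosed embodiments; it shows one plausible crossover split of a signal into a low band for one emitter and a mid/high band for the other. The sample rate, crossover frequency, and filter order are assumed demonstration values.

```python
# Split a signal into a low band (e.g., for the ear-cup emitter) and a
# mid/high band (e.g., for the ear-bud emitter) once dual-emitter mode is active.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48_000   # Hz, assumed
CROSSOVER_HZ = 250     # assumed boundary between "low" and "mid/high"

def split_for_dual_emitters(audio: np.ndarray):
    """Return (low_band, mid_high_band) for the two emitters."""
    low_sos = butter(4, CROSSOVER_HZ, btype="lowpass", fs=SAMPLE_RATE, output="sos")
    high_sos = butter(4, CROSSOVER_HZ, btype="highpass", fs=SAMPLE_RATE, output="sos")
    return sosfilt(low_sos, audio), sosfilt(high_sos, audio)

# Example: a 1-second test signal mixing a 60 Hz tone and a 2 kHz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
to_first_emitter, to_second_emitter = split_for_dual_emitters(signal)
```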
[0070] To detect the proximity of the second audio emitter, a wireless transmitter associated with the second audio emitter can periodically emit a short-range message formatted according to a predetermined protocol, which identifies the second audio emitter. When a wireless transceiver associated with the first audio emitter receives the message, the wireless transceiver can identify the second audio emitter and can indicate to the processor to switch into a dual emitter mode.
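As a non-limiting editorial sketch, the fragment below shows one possible form of such a periodic identification message and the switch to dual-emitter mode upon receipt. The message fields and the switch_mode callback are hypothetical names, not part of the specification.

```python
# Periodic short-range presence message and the receiver-side handling that
# triggers dual-emitter processing.
import json
import time

def make_presence_message(emitter_id: str) -> bytes:
    return json.dumps({"type": "presence", "emitter_id": emitter_id,
                       "timestamp": time.time()}).encode()

def handle_incoming(message: bytes, switch_mode) -> None:
    fields = json.loads(message.decode())
    if fields.get("type") == "presence":
        # A nearby second emitter identified itself; enable dual-emitter mode.
        switch_mode("dual_emitter", peer=fields["emitter_id"])

# Example: the ear-bud broadcasts make_presence_message("ear_bud_right") every
# few hundred milliseconds; the ear-cup passes received bytes to handle_incoming.
```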
[0071] To detect the proximity of the second audio emitter, in another embodiment, a range finder can be associated with the first audio emitter. The range finder can be a laser, radar, sonar, lidar, or ultrasonic range finder, or a capacitive, inductive, RF, optical, magnetic, or acoustic sensor. Once a second proximate object is detected, the processor can switch into the dual emitter mode.
[0072] The processor can detect when the user is speaking and take appropriate action, such as recording the user's voice, transmitting the user's voice over a cellular network to enable the user to make a phone call, interpreting the user's voice as a command, etc. Using a first sensor placed proximate to a user's head, the processor can detect a low-frequency audio. For example, the sensor can be a microphone placed at or within the user's ear canal. A microphone so placed can detect low frequencies of the user's voice transmitted through the skull and into the ear canal.
[0073] Upon detecting the low-frequency audio, the processor can send a signal to a second sensor to record a high-frequency audio. The second sensor can be an externally facing microphone that can record the mid to high frequencies of the user's speech. The processor can record the low-frequency audio and the high-frequency audio. The recorded audio can be transmitted over the cellular network as a phone call made without a phone, using only the ear-buds and/or ear-cups, or the recorded audio can be transmitted over the cellular network to be interpreted. The recorded audio can also be stored for later use or interpreted locally as a command to the ear-buds and/or ear-cups.
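The following sketch is an editorial illustration of the triggering idea, assuming framed audio from the in-ear microphone; the band edges, threshold, and the external_mic.start_recording call are assumptions rather than disclosed details.

```python
# Detect low-frequency energy from the user's own (bone-conducted) voice and
# only then activate the externally facing microphone.
import numpy as np

LOW_BAND_HZ = (60, 300)    # assumed band where bone-conducted voice dominates
ENERGY_THRESHOLD = 1e-3    # assumed detection threshold

def user_is_speaking(frame: np.ndarray, sample_rate: int) -> bool:
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = (freqs >= LOW_BAND_HZ[0]) & (freqs <= LOW_BAND_HZ[1])
    return float(np.mean(spectrum[band])) > ENERGY_THRESHOLD

def process_frame(in_ear_frame: np.ndarray, external_mic, sample_rate: int = 16_000) -> None:
    if user_is_speaking(in_ear_frame, sample_rate):
        external_mic.start_recording()  # hypothetical driver call
```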
[0074] The processor can record an ambient sound surrounding an ear-cup. The processor can perform filtering of the ambient sound to obtain a filtered sound. The processor can wirelessly transmit the ambient sound or the filtered sound to an ear-bud. The processor can play the filtered sound through an audio emitter associated with an ear-bud.
[0075] The filtering can create the filtered sound to destructively interfere with the ambient sound. When such filtered sound is played through the audio emitter, the filtered sound destructively interferes with the ambient sound, thus causing ambient noise cancellation within the ear-bud.
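As a minimal editorial sketch of the destructive-interference idea, the filtered sound can be viewed as an inverted copy of the ambient sound; a practical system would also compensate for delay and the acoustic path, which is omitted here.

```python
# Idealized anti-noise: emitting -ambient cancels ambient at the ear.
import numpy as np

def anti_noise(ambient: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Return a signal that destructively interferes with `ambient`."""
    return -gain * ambient

# Residual heard by the user (idealized): ambient + anti_noise(ambient) == 0
```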
[0076] The filtering can also happen upon receiving an input requesting transmission of the ambient sound to the ear-bud. In that case, the ambient sound can be adjusted to enhance the perception of the ambient sound to obtain the filtered sound. The adjustment can come from filtering out the noise in the ambient sound, such as frequencies outside of the human speech frequency range. The adjustment can also come from modifying the ambient sound based on the hearing profile of the user to enhance the perception of the ambient sound. The input requesting transmission is described elsewhere in this application and can be, for example, a touch of a button, a touch on the surface of the ear-cup, or detection of an audio addressed to the user, such as an audio containing the name of the user.
[0077] FIG. 6B is a schematic diagram depicting the electrical components of the signal processing module that can be located within an ear-cup and/or an ear-bud and that can be used in creating a hearing profile of a user. The hearing profile can be created based on otoacoustic emissions.
[0078] Otoacoustic emissions can be measured within the user’s ear canal and then used to determine thresholds at multiple frequencies or relative amplitudes of the otoacoustic emissions at multiple frequencies to one or more suprathreshold sound levels in order to develop the frequency dependent hearing transfer function of the user’s ear(s). Stimulus frequency OAE, swept-tone OAE, transient evoked OAE, DP-OAE, or pulsed DP-OAE can be used for this purpose.
[0079] The amplitude, latency, hearing threshold, and/or phase of the measured OAEs can be compared to response ranges from normal-hearing and hearing-impaired listeners to develop the frequency dependent hearing transfer function for each ear of the user.
[0080] Since DP-OAEs are best measured in a sealed ear canal with two separate speakers/receivers packed into each ear canal, the use of OAEs is best suited for the ear-bud implementation.
[0081] In the case of OAEs, one stimulus frequency/loudness combination yields a response amplitude. The measurement of multiple frequencies in this manner yields a plot of response amplitude versus frequency, which is stored in the memory 46 of the signal processing modules 42, or can be stored in a remote database. Many OAE techniques rely upon the measurement of one frequency per stimulus; however, the swept tone OAE measures all frequencies in the range of the sweep. Nevertheless, the hearing transfer function remains the same regardless of the measuring method used, that is, the hearing transfer function comprises a plot of the signal amplitude versus frequency of the OAE evoked in the user's ear upon application of an input audio signal. The hearing transfer function can also comprise the input amplitude associated with the input frequency.
[0082] In this exemplary embodiment, in order to determine the hearing transfer function for a user’s ear, the processor 44 captures data points for an input audio signal comprising a number of frequencies, for example, 500, 1000, 2000 and 4000 Hz, which are typically the same frequencies used in the equalizer that acts upon the output sound signal to the loudspeakers 12 and 14. At any one frequency, the processor measures the response to an input audio signal at reducing levels, for example, at 70 dB, 60 dB, 50 dB, 40 dB, etc., until there is no longer a measurable response. The processor 44 records the data point at that time. It will be appreciated that in other embodiments, different methods, such as curve fitting or measuring a profile at a single loudness level, can be used to determine the hearing transfer function. The input audio signal can include a test audio signal, and/or a content audio signal comprising music, speech, environment sounds, animal sounds, etc. For example, the input audio signal can include the content audio signal with an embedded test audio signal.
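The descending-level procedure above can be summarized in a short editorial sketch. The helpers play_probe_tone and measure_oae_response are hypothetical, and the response floor is an assumed criterion; the frequencies and levels echo the example values.

```python
# At each frequency, reduce the level until no measurable OAE response remains,
# and record the last responsive level as a data point of the transfer function.
TEST_FREQUENCIES_HZ = [500, 1000, 2000, 4000]
TEST_LEVELS_DB = [70, 60, 50, 40, 30, 20]
RESPONSE_FLOOR = 0.05  # assumed "no measurable response" criterion

def measure_hearing_transfer_function(play_probe_tone, measure_oae_response):
    transfer_function = {}
    for freq in TEST_FREQUENCIES_HZ:
        last_responsive_level = None
        for level in TEST_LEVELS_DB:
            play_probe_tone(freq, level)
            if measure_oae_response(freq) > RESPONSE_FLOOR:
                last_responsive_level = level
            else:
                break  # response vanished; stop reducing the level
        transfer_function[freq] = last_responsive_level
    return transfer_function
```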
[0083] In-situ calibration of the speakers to the user's ear canal can be performed by the processor 44 prior to making an OAE measurement. In this context, "in-situ" refers to measurements made at times when the speakers and microphone are situated for use inside the ear canal. Where the acoustic characteristics of the speakers are known, the acoustic impedance of the ear can be calculated from this data and utilized for deriving corrections.
[0084] In one or more embodiments, in-situ calibration can be done by playing a test audio signal, such as a chirp, or the content signal, covering the frequency range of the speakers, recording the frequency response with the microphone, and adjusting output by changing the equalizer settings to make a flat frequency response of the desired loudness.
[0085] In other embodiments, this calibration can be done in real time to any playback sound
(e.g., music, or any audio comprising content) by constantly comparing the predicted output of the speakers in the frequency domain, given the electrical input to the speakers, with the output measured by the microphone, and altering the equalizer gains until the two match. The in-situ calibration accounts for variations in the external portion of different users' ears and for variations in the placement of the ear-buds. If no audiometric data is yet available, then the in-situ calibration alone can be used for adjusting the sound.
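The following fragment is an editorial sketch of one iteration of that equalizer-adjustment idea; the per-band representation, step size, and convergence criterion are assumptions and not disclosed values.

```python
# One iteration: move each band's equalizer gain toward closing the gap between
# the predicted and the measured output spectrum.
import numpy as np

def adjust_equalizer(predicted_db: np.ndarray, measured_db: np.ndarray,
                     eq_gains_db: np.ndarray, step: float = 0.25) -> np.ndarray:
    error_db = predicted_db - measured_db  # positive: output is too quiet
    return eq_gains_db + step * error_db

# Called repeatedly during playback until |predicted - measured| is small in every band.
```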
[0086] Any variant that includes an internal microphone can use that microphone for in-situ calibration of the speakers, performed every time the user puts the headphones on.
Power management of the modular ear-cups and ear-buds
[0087] FIG. 7 shows a power source associated with modular ear-cups and ear-buds in communication with each other. An ear-bud 110 can receive a first audio and can emit the first audio. The ear-bud 110 can include a power source 700, such as a battery, having a power capacity less than the power needed for a potentially desired usage of the wireless ear-bud. Due to the small size necessary to fit within the ear, the ear-bud 110 cannot accommodate a large battery necessary to power the ear-bud for more than several hours of use. The user may want to use the ear-bud continuously for more than several hours, thus draining the battery 700. The battery 700 can be wirelessly charged using wireless power transfer, such as an induction coil 710 within the ear-bud 110. The ear-bud 110 can be wireless or can be wired.
[0088] In another embodiment, the power source 700 of the ear-bud 110 can include a transient power storage, such as capacitors holding the voltage transiently. The power source 700 of the ear-bud 110 can be powered by the power source 720 of the ear-cup 100. In some embodiments, the transient power source can discharge after a certain amount of time, such as several hours or several days. When the transient power source is discharged, the ear-bud 110 cannot operate without being charged by another external power source. The power source 700 can be charged in a wired or a wireless manner by the external power source, such as the power source 720 of the ear-cup 100, a wired device plugged into power, or a wearable device such as a necklace, an earring, glasses, a headband, etc.
[0089] An ear-cup 100 can have at least two functions, where one function is wirelessly charging the power source 700 of the ear-bud 110, and the other function is playing an audio to a user. The ear-cup 100 can substantially surround the wireless ear-bud 110. For example, the ear-bud 110 can be fully contained within the cavity of the ear-cup 100 as shown in FIG. 1, or the ear-bud 110 can partially protrude from the ear-cup 100 as shown in FIG. 5B. The ear-cup 100 can include a power source 720 to wirelessly transfer power to the power source 700 while the wireless ear-bud 110 is operating. By wirelessly transferring power, the ear-cup 100 can enable the battery 700 to provide the power needed for the potentially desired usage of the wireless ear-bud. In other words, by continually charging the power source 700, the ear-bud 110 can continually play music to the user even after several hours, when the power source 700 would have been drained if the power source 700 weren’t charged.
[0090] The power source 720 can be connected to an induction coil 730, which can induce a current in the induction coil 710 of the ear-bud 110. The power source 720 within the ear-cup can also be charged wirelessly by a wired device plugged into power, or via a wearable device such as a necklace, an earring, glasses, a headband, etc. The power source 720 can be a battery, or can be a transient power source, such as a capacitor, that can be charged in a wired or wireless manner by an external device.
[0091] FIG. 8 shows a dual function member according to various embodiments. A device
800 can have a power capacity less than the power needed for a potentially desired usage of the device 800. The device 800 can be wired, or wireless, such as a wireless ear-bud.
[0092] A dual function member 810, 820, 830, 840, 850, 860, can include a power source configured to wirelessly transfer power to the device 800 while the device 800 is consuming power, thus enabling the device 800 to operate for a duration of the potentially desired usage. The wireless power transfer can include an induction coil 710, 730 in FIG. 7. The device 800 can be a wireless ear-bud carrying a small battery that can only play audio for 3 or 4 hours continuously.
[0093] The dual function member 810, 820, 830, 840, 850, 860, can be a wearable accessory such as an ear-cup 100 in FIG. 7, a headband 810, glasses 820, an earring 830, a necklace 840, a purse strap 850, clothing 860, etc. A first function of the dual function member 810, 820, 830, 840, 850, 860, can be to wirelessly transfer power to the device 800. A second function of the dual function member 810, 820, 830, 840, 850, 860, is different from the first function, and can include a decorative function, such as in the case of the earring 830, the necklace 840, or the headband 810. The second function can include holding a purse, a water bottle, a backpack, etc., as in the case of the purse strap 850. The second function can include holding the hair back, as in the case of the headband 810, a hair clip, a hair tie, etc. The second function can also include serving as clothing, as in the case of the tank top 860.
[0094] The dual function member 810, 820, 830, 840, 850, 860, can include one or more power sources. For example, the headband 810 can have multiple discrete wireless power transmitters 812, 814 placed along the perimeter of the headband 810, or the headband can have one continuous wireless power transmitter, such as an induction coil, running around the perimeter of the headband 810. The glasses 820 can have wireless power transmitters 822 positioned in the side arm 824 of the glasses 820 or can have one continuous wireless power transmitter placed through the frame of the glasses 820. The earring 830 can have a wireless power transmitter 832 positioned within the ear clasp 834, or within the hanging part 836 of the earring 830. The earring 830 can also have one continuous wireless power transmitter running through the earring 830.
[0095] The necklace 840 can have multiple discrete wireless power transmitters 842 placed along the perimeter of the necklace 840, or the necklace can have one continuous wireless power transmitter, such as an induction coil, running around the perimeter of the necklace 840. The strap 850 can also have multiple discrete wireless power transmitters 852, 854, placed along the perimeter of the strap 850, or the strap 850 can have one continuous wireless power transmitter placed through the strap 850. Finally, the clothing 860 can have multiple discrete wireless power transmitters 862 located in the region expected to cover the wearer's shoulders.
[0096] The dual function member 810, 820, 830, 840, 850, 860, can also be charged wirelessly from a third power source. The device 800, the dual function member 810, 820, 830, 840, 850, 860, and the third power source can create a daisy chain of wireless power transmission. The third power source can receive power wirelessly or can receive power through a wire connected to a power source.
[0097] FIGS. 9A-9B show an encoding member to perform amplitude modulation and frequency modulation of a power field. The power source 915 can wirelessly transmit the power along with data, such as audio or a command, to the device 950. To transmit the data along with the power, the encoding member 900 in FIG. 9B of the dual function member 905 can modify the frequency and/or amplitude of the power field 910, i.e., an electromagnetic field, produced by the power source 915. The encoding member 900 can be a processor, while the dual function member 905 can be an ear-cup, or any other dual function member 810, 820, 830, 840, 850, 860 in FIG. 8. The device 950 can be a wireless ear-bud, a wired ear-bud, a hearing aid, etc. The device 950 can be waterproof.
[0098] The encoding member 900 can modify the amplitude of the power field 910, to encode the message 920. The resulting amplitude modulated power field appears as an amplitude modulated power field 990. The amplitude modulated power field 990 can wirelessly transfer power to the power source 940 of the device 950. In addition, the amplitude modulated power field 990 can transmit the message 920 to the device 950.
[0099] The encoding member 900 can modify the frequency of the power field 910, to encode the message 920. The resulting frequency modulated power field appears as a frequency modulated power field 930. The frequency modulated power field 930 can wirelessly transfer power to the power source 940 of the device 950. In addition, the frequency modulated power field 930 can transmit the message 920 to the device 950. Similarly, the encoding member 900 can perform phase modulation to transmit the message 920. The device 950 can be wired or wireless, such as an ear-bud.
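The following Python sketch is an editorial illustration of amplitude and frequency modulation of a power-transfer carrier with a binary message; it is not the claimed encoding, and the carrier frequency, bit rate, and modulation depths are assumed demonstration values.

```python
# Modulate a power-transfer carrier with a bit sequence while keeping power flowing.
import numpy as np

SAMPLE_RATE = 1_000_000   # Hz, assumed
CARRIER_HZ = 125_000      # assumed inductive-link carrier
BIT_DURATION_S = 0.001    # assumed bit period

def modulate(bits, mode="am"):
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION_S)
    t = np.arange(len(bits) * samples_per_bit) / SAMPLE_RATE
    symbols = np.repeat(np.asarray(bits, dtype=float), samples_per_bit)
    if mode == "am":
        amplitude = 1.0 + 0.2 * symbols          # shallow AM keeps power flowing
        return amplitude * np.sin(2 * np.pi * CARRIER_HZ * t)
    if mode == "fm":
        freq = CARRIER_HZ + 5_000 * symbols      # small frequency shift per bit
        phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
        return np.sin(phase)
    raise ValueError(mode)

field = modulate([1, 0, 1, 1], mode="am")        # power field carrying "1011"
```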
[00100] The message 920 can include data such as audio, or a command to the device 950. For example, an instruction to the dual function member 810, 820, 830, 840, 850, 860 in FIG. 8, can be transmitted using the power field to the device 950. The instruction can be a press of a button associated with the dual function member 810, 820, 830, 840, 850, 860. In a more specific example, a press of the button 925 can be transmitted to the device 950. The press of the button 925 can notify the device 950 to receive a command from the user, or can put the device 950 in social mode, i.e., notify the device to reduce the amplitude of the audio, or to completely stop emitting the audio and to allow ambient sound to reach the user.
[00101] The device 950 can monitor an amount 945 of power associated with the power source 940 of the device 950. When an amount 945 of power of the power source 940 is below a predefined threshold 960, the wireless audio device can activate the power source of the dual function member 905. The predefined threshold 960 can be, for example, 60% of the total amount of power associated with a power source 940. The wireless audio device can be a wearable audio device, such as a headphone, an ear-bud, an ear-cup, a hearing aid, etc.
[00102] To activate the power source of the dual function member 905, a wireless transceiver 970 can send the request to a wireless transceiver 980 of the dual function member 905, to request more power. The dual function member 905 can respond in several ways. The dual function member can activate the power source 915 and transfer power to the power source 940, or the dual function member can determine the most power intensive task associated with the power source 940, and move that task to another device, such as a left ear-bud, a right ear-bud, a left ear-cup or a right ear-cup, as explained further in this application.
[00103] The dual function member 905, upon receiving a request to activate the power source from the device 950, can choose the power source among multiple power sources based on a distance to the device 950 and/or based on an amount of power contained within the power source among the multiple power sources, as explained further below.
[00104] FIG. 10 shows a system to optimize power consumption between multiple components. The system can include multiple power sources 1000, 1010, 1020, 1030, 1040. The power sources 1000 and 1010, associated with the devices 1005 and 1015, can have a smaller power capacity than the power sources 1020 and 1030 because the devices 1005 and 1015 are smaller than the dual function members 1025, 1035, associated with the power sources 1020, 1030, respectively. The system can balance power between any combination of two or more power sources 1000, 1010, 1020, 1030, 1040.
[00105] A processor 1050 associated with at least one of the dual function members 1025, 1035, can receive a request for more power from at least one of the devices 1005, 1015. The processor 1050 can determine which power source 1000, 1010, 1020, 1030, to activate. The processor can activate the power source closest to the power source requesting additional power. For example, if the power source 1000 is requesting more power, the processor 1050 can activate power source 1020 because the power source 1020 is the closest to the power source 1000.
[00106] In another example, the processor 1050 can take into account both the distance and an amount of power associated with a power source that could provide more power. If the power source 1000 is requesting more power, the processor 1050 can determine that both power sources 1020, 1040, are good candidates based on the distance, with the power source 1020 being closer to the power source 1000. However, an amount of power remaining in the power source 1040 can be higher than the amount of power remaining in the power source 1020. Consequently, the processor can activate the power source 1040 to transfer power to the power source 1000.
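The selection heuristic described above can be sketched as follows; this is an editorial illustration with an assumed scoring rule that trades off distance against remaining charge, and the weights and example values are not disclosed figures.

```python
# Choose which power source should supply a requesting device, preferring
# sources that are close and still well charged.
from dataclasses import dataclass

@dataclass
class PowerSource:
    name: str
    distance_m: float      # distance to the requesting device
    remaining_wh: float    # remaining energy

def choose_power_source(candidates, distance_weight=1.0, energy_weight=0.5):
    def score(source: PowerSource) -> float:
        # Lower is better: near sources and full sources are preferred.
        return distance_weight * source.distance_m - energy_weight * source.remaining_wh
    return min(candidates, key=score)

sources = [PowerSource("ear_cup_left", 0.02, 0.3),
           PowerSource("necklace", 0.25, 1.8)]
print(choose_power_source(sources).name)
```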
[00107] In a third example, the processor 1050 can take into account the amount of processing power associated with a power source that could provide more power. If the power source that could provide more power has a processor, then a power intensive task can be transferred from the power source requesting more power to the power source that could provide more power, with or without wirelessly charging the requesting power source. In a more specific example, if the power source 1000 is requesting more power, the processor 1050 can determine that the power source 1020 has the processor 1050 associated with it. The processor 1050 can also determine that the most power intensive task is personalizing an audio according to the hearing profile of the user. Consequently, the processor 1050 can take on personalizing the audio according to the hearing profile of the user and can send the personalized audio to the device 1005 associated with the power source 1000. As a result, the power source 1000 does not need to supply as much power as before.
[00108] FIG. 11 is a flowchart of a method to wirelessly transfer power between a dual function member and a wireless audio device. In step 1100, the dual function member can enable a wireless audio device having a power capacity less than a power needed for a potentially desired usage of the wireless audio device to operate for a duration of the potentially desired usage.
[00109] In step 1110, the dual function member can wirelessly transfer power to the wireless audio device while the wireless audio device is consuming power, thus enabling the wireless audio device to operate for the duration of the potentially desired usage. For example, the wireless audio device can be an ear-bud having a power source capable of operating continuously for up to 5 hours. The user, however, may want to use the ear-buds for more than 5 hours. Currently, the user has to remove the ear-buds, charge them, and then resume using the ear-buds. By wirelessly transferring power to the ear-buds, while the ear-buds are in operation, the user can continue using the ear-buds for more than 5 hours.
[00110] The dual function member can have a first function and a second function. The first function can include wirelessly transferring power to the wireless audio device, and the second function can be different from the first function. For example, the second function can be to play audio, to be decorative, to correct vision, to hold the hair back, to carry objects, to act as clothing, etc. The dual function member itself can be charged wirelessly using the wireless audio device and/or a third power source.
[00111] A processor can modulate the power field emitted by the power source to transmit data. The power field can be an electromagnetic field. The modulation can include a frequency modulation, an amplitude modulation or a phase modulation, as explained in this application.
[00112] The wireless audio device can monitor its power. When the power of the wireless audio device is below a predefined threshold, the wireless audio device can activate the power source of the dual function member. The predefined threshold can be 80% of the total amount of power of a power source associated with the wireless audio device. The dual function member, upon receiving a request for more power from the wireless audio device, can automatically determine which power source to activate to transfer power to the wireless audio device.
[00113] For example, the dual function member can determine the closest power source, a power source with the most amount of power, or the dual function member can redistribute power intensive tasks away from the wireless audio device. In another example, the dual function member can use movable magnets to shape the power field transferring power to the wireless audio device. That way, the power field emanating from one power source can be shaped to transfer power to a left ear-bud and can also be shaped to transfer power to a right ear-bud. The movable magnets can be microelectromechanical system (MEMS) magnets.
[00114] Upon receiving a request from the wireless audio device to activate the power source associated with the dual function member, the dual function member can determine which power source to activate. The dual function member can choose the power source among multiple power sources based on the distance between the power source and the wireless audio device or based on an amount of power contained within the power source in the plurality of power sources.
[00115] FIG. 12 is a flowchart of a method to balance power among multiple power sources. In step 1200, the processor can determine the first amount of power remaining in a first power source associated with a wearable audio system and the second amount of power remaining in a second power source associated with the wearable audio system. The wearable audio system can be any combination of a headset, a headset with ear-buds, two ear-buds, a hearing aid, etc.
[00116] In step 1210, the processor can compare the first amount of power and the second amount of power. In step 1220, when the first amount of power is greater than the second amount of power, the processor can send a power intensive task to a processor associated with the first power source. A power intensive task can include modifying audio based on the hearing profile of the user, to enhance the user’s perception of music.
[00117] For example, a power source associated with an ear-bud could be low on power when the ear-bud needs to modify audio based on the user's hearing profile. The ear-bud can transfer the task to a processor associated with an ear-cup because a power source associated with the ear-cup has more power than the power source associated with the ear-bud. The ear-cup can modify the audio based on the user's hearing profile and transfer the modified audio to the ear-bud.
[00118] A processor can obtain a hearing profile associated with a user. The hearing profile can correlate a received frequency and a received amplitude to a perceived frequency and a perceived amplitude. The processor can send the power intensive task to modify audio based on the hearing profile to the processor associated with the first power source, and then play the modified audio to the user. In another example, if the user is wearing only two ear-buds, the processor can determine which of the two ear-buds has more power and can transfer the audio modification task to the ear-bud with more power.
[00119] In another example, the power intensive task can include a wireless communication with a remote device, such as a phone, a tablet, a wireless router or a cell tower. For example, if the user is wearing an ear-cup and an ear-bud, the ear-cups can have a larger power source, e.g., a battery, than the ear-buds, and the wireless communication session such as Bluetooth communication, Wi-Fi communication, and/or cell tower communication can be done by the ear-cups. The ear-cups can then transfer the received audio to the ear-buds, so that the ear-buds need only receive the audio and play the audio. Receiving and playing the audio consume less power than the wireless communication.
[00120] To optimally distribute power intensive tasks, the processor can determine a first amount of power needed for a first task and a second amount of power needed for a second task. When the first amount of power is greater than the second amount of power, the processor can send the first task to the processor associated with the first power source. To determine the amount of power associated with the task, the processor can retrieve a table stored in memory indicating an expected amount of power associated with various tasks or running programs.
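The task-placement idea above can be sketched as follows; this is an editorial illustration in which the lookup table of expected energy cost per task and the example values are assumptions, not disclosed figures.

```python
# Send a task to the component whose power source has the most energy left,
# using an assumed table of expected per-task energy cost.
EXPECTED_COST_MWH = {  # assumed per-task energy estimates
    "hearing_profile_equalization": 12.0,
    "bluetooth_session": 8.0,
    "playback_only": 2.0,
}

def assign_task(task: str, sources: dict) -> str:
    """`sources` maps component name -> remaining energy in mWh,
    e.g. {"ear_bud_left": 40.0, "ear_bud_right": 55.0, "ear_cup": 900.0}."""
    cost = EXPECTED_COST_MWH.get(task, 0.0)
    candidates = {name: mwh for name, mwh in sources.items() if mwh >= cost}
    return max(candidates, key=candidates.get) if candidates else max(sources, key=sources.get)

print(assign_task("hearing_profile_equalization",
                  {"ear_bud_left": 40.0, "ear_bud_right": 55.0, "ear_cup": 900.0}))
```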
[00121] The wearable audio system can include the first power source associated with a first ear-bud, the second power source associated with a second ear-bud, and a third power source associated with an ear-cup. The processor can determine a third amount of power remaining in a third power source associated with the ear-cup. The processor can compare the first amount of power, the second amount of power, and the third amount of power. When the third amount of power is greater than the second amount of power and the first amount of power, the processor can send a power intensive task to a processor associated with the third power source.
Computer
[00122] FIG. 13 is a diagrammatic representation of a machine in the example form of a computer system 1300 within which a set of instructions, for causing the machine to perform any one or more of the methodologies or modules discussed herein, may be executed.
[00123] In the example of FIG. 13, the computer system 1300 includes a processor, memory, non-volatile memory, and an interface device. Various common components (e.g., cache memory) are omitted for illustrative simplicity. The computer system 1300 is intended to illustrate a hardware device on which any of the components described in the example of FIGS. 1-12 (and any other components described in this specification) can be implemented. The computer system 1300 can be of any applicable known or convenient type. The components of the computer system 1300 can be coupled together via a bus or through some other known or convenient device.
[00124] The computer system 1300 can be included in the dual function member such as 810, 820, 830, 840, 850, 860 in FIG. 8, and 905 in FIG. 9B. The computer system 1300 can be included in the wireless audio device such as the device 950 in FIG. 9B. The processor of the computer system 1300 can perform the various steps described in this application. The network of the computer system 1300 can be the wireless network facilitating communication between the dual function member and the wireless audio device.
[00125] This disclosure contemplates the computer system 1300 taking any suitable physical form. As example and not by way of limitation, the computer system 1300 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. Where appropriate, the computer system 1300 may include one or more computer systems 1300; be unitary or distributed; span multiple locations; span multiple machines; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1300 may perform— without substantial spatial or temporal limitation— one or more steps of one or more methods described or illustrated herein. As an example, and not by way of limitation, one or more computer systems 1300 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1300 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
[00126] The processor may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or a Motorola PowerPC microprocessor. One of skill in the relevant art will recognize that the terms "machine-readable (storage) medium" or "computer-readable (storage) medium" include any type of device that is accessible by the processor.
[00127] The memory is coupled to the processor by, for example, a bus. The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed.
[00128] The bus also couples the processor to the non-volatile memory and drive unit. The non-volatile memory is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software in the computer system 1300. The non-volatile storage can be local, remote, or distributed. The non-volatile memory is optional because systems can be created with all applicable data available in memory. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor.
[00129] Software is typically stored in the non-volatile memory and/or the drive unit. Indeed, storing an entire large program in memory may not even be possible. Nevertheless, it should be understood that for software to run, if necessary, it may be moved to a computer readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as "implemented in a computer-readable medium." A processor is considered to be "configured to execute a program" when at least one value associated with the program is stored in a register readable by the processor.
[00130] The bus also couples the processor to the network interface device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system 1300. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., “direct PC”), or other interfaces for coupling a computer system to other computer systems. The interface can include one or more input and/or output devices. The I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other input and/or output devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. For simplicity, it is assumed that controllers of any devices not depicted in the example of FIG. 13 reside in the interface.
[00131] In operation, the computer system 1300 can be controlled by operating system software that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Washington, and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux™ operating system and its associated file management system. The file management system is typically stored in the non-volatile memory and/or drive unit and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile memory and/or drive unit.
[00132] Some portions of the detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
[00133] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or "generating" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[00134] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods of some embodiments. The required structure for a variety of these systems will appear from the description below. In addition, the techniques are not described with reference to any particular programming language, and various embodiments may thus be implemented using a variety of programming languages.
[00135] In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
[00136] The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a laptop computer, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone, a Blackberry, a processor, a telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
[00137] While the machine-readable medium or machine-readable storage medium is shown in an exemplary embodiment to be a single medium, the terms "machine-readable medium" and "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The terms "machine-readable medium" and "machine-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and which cause the machine to perform any one or more of the methodologies or modules of the presently disclosed technique and innovation.
[00138] In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as "computer programs." The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processing units or processors in a computer, cause the computer to perform operations to execute elements involving the various aspects of the disclosure.
[00139] Moreover, while embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
[00140] Further examples of machine-readable storage media, machine-readable media, or computer-readable (storage) media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks, (DVDs), etc.), among others, and transmission type media such as digital and analog communication links.
[00141] In some circumstances, operation of a memory device, such as a change in state from a binary one to a binary zero or vice-versa, for example, may comprise a transformation, such as a physical transformation. With particular types of memory devices, such a physical transformation may comprise a physical transformation of an article to a different state or thing. For example, but without limitation, for some types of memory devices, a change in state may involve an accumulation and storage of charge or a release of stored charge. Likewise, in other memory devices, a change of state may comprise a physical change or transformation in magnetic orientation or a physical change or transformation in molecular structure, such as from crystalline to amorphous or vice versa. The foregoing is not intended to be an exhaustive list in which a change in state for a binary one to a binary zero or vice-versa in a memory device may comprise a transformation, such as a physical transformation. Rather, the foregoing is intended as illustrative examples.
[00142] A storage medium typically may be non-transitory or comprise a non-transitory device. In this context, a non-transitory storage medium may include a device that is tangible, meaning that the device has a concrete physical form, although the device may change its physical state. Thus, for example, non-transitory refers to a device remaining tangible despite this change in state.
Remarks
[00143] The foregoing description of various embodiments of the claimed subject matter has been provided for the purpose of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling others skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.
[00144] While embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
[00145] Although the above Detailed Description describes certain embodiments and the best mode contemplated, no matter how detailed the above appears in text, the embodiments can be practiced in many ways. Details of the systems and methods may vary considerably in their implementation details, while still being encompassed by the specification. As noted above, particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristic, feature, or aspect of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the invention encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments under the claims.
[00146] The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, to the scope of the embodiments, which is set forth in the following claims.

Claims

1. A system comprising:
an ear-cup configured to be placed in proximity to a user’s ear, the ear-cup comprising a first audio emitter to emit a first audio into the user’s ear and a first wireless transceiver, the ear-cup defining a cavity to accommodate an ear-bud; and the ear-bud to be placed in proximity to a user’s ear canal, the ear-bud comprising a second audio emitter to emit a second audio into the user’s ear canal, and a second wireless transceiver configured to communicate with the first wireless transceiver and to coordinate an operation of the first audio emitter and the second audio emitter to provide the first audio and the second audio to a user.
2. The system of claim 1, comprising:
one or more microphones associated with the ear-bud or the ear-cup, a microphone among the one or more microphones configured to measure an audio; one or more audio emitters associated with the ear-bud or the ear-cup; one or more processors to: calculate a noise cancelling sound based on the audio, wherein the noise cancelling sound destructively interferes with the audio; and cause an audio emitter among the one or more audio emitters to emit the noise cancelling sound.
3. The system of claim 1, comprising: a microphone associated with the ear-cup configured to measure an audio; one or more processors to: calculate a noise cancelling sound based on the audio, wherein the noise cancelling sound destructively interferes with the audio; and cause the second audio emitter of the ear-bud to emit the noise cancelling sound.
4. The system of claim 1, comprising one or more processors to:
detect a proximity between the first audio emitter associated with the ear-cup and the second audio emitter associated with the ear-bud; and separate an audio intended for the first audio emitter into at least the first audio and the second audio and send the first audio to the first audio emitter and the second audio to the second audio emitter.
5. The system of claim 1, the ear-bud comprising a microphone configured to measure a hearing profile of the user by measuring an acoustic response within the user’s ear canal generated in response to the second audio.
6. The system of claim 5, the acoustic response comprising an otoacoustic emission generated within the user’s ear canal.
7. The system of claim 5, comprising:
the second wireless transceiver to communicate the hearing profile of the user to the first wireless transceiver; and the first audio emitter to adjust the first audio based on the hearing profile of the user.
8. The system of claim 1, comprising:
the ear-cup comprising a first microphone configured to be externally facing and configured to measure an ambient sound when activated.
9. The system of claim 8, comprising:
the ear-bud comprising a second microphone configured to face the user and to detect whether the user is speaking and when the user is speaking to activate the first microphone.
10. The system of claim 1, comprising:
the ear-cup comprising a first microphone configured to be externally facing and configured to measure an ambient sound when activated; the ear-bud comprising a second microphone configured to face the user; and an electronic circuit to obtain a sound measured by the first microphone and a sound measured by the second microphone and combine the sound measured by the first microphone and the sound measured by the second microphone into a recording of a user’s voice.
11. The system of claim 10, comprising a processor configured to interpret the recording of the user’s voice into a command and execute the command.
12. The system of claim 1, comprising:
the first audio emitter configured to emit a first set of frequencies; and the second audio emitter configured to emit a second set of frequencies.
13. The system of claim 1, comprising:
the ear-cup to measure a hearing profile associated with the user and to wirelessly
communicate the hearing profile to the ear-bud, the hearing profile associated with the user correlating an amplitude and a frequency received by the user and an amplitude and a frequency perceived by the user; and the ear-bud to modify an audio based on the hearing profile associated with the user.
14. The system of claim 1, comprising:
the ear-bud to measure a hearing profile associated with the user and to wirelessly
communicate the hearing profile to the ear-cup, the hearing profile associated with the user correlating an amplitude and a frequency received by the user and an amplitude and a frequency perceived by the user; and the ear-cup to modify an audio based on the hearing profile associated with the user.
15. A system comprising:
an ear-cup, configured to be placed in proximity to a user’s ear, comprising an audio emitter; an ear-bud, to be placed in proximity to a user’s ear canal, in wireless communication with the ear-cup, the ear-bud comprising a microphone to measure a first audio; one or more processors to: calculate a first noise cancelling sound based on the first audio, wherein the first noise cancelling sound destructively interferes with the first audio; and cause the audio emitter of the ear-cup to emit the first noise cancelling sound.
16. The system of claim 15, the one or more processors contained within the ear-cup or within the ear-bud.
17. The system of claim 15, a first processor of the one or more processors contained within the ear-cup and a second processor of the one or more processors contained within the ear-bud.
18. A method comprising:
measuring an ambient sound surrounding an ear-cup; performing filtering of the ambient sound to obtain a filtered sound; wirelessly transmitting the ambient sound or the filtered sound to an ear-bud; and playing the filtered sound through an audio emitter associated with the ear-bud.
19. The method of claim 18, said performing filtering of the ambient sound comprising creating the filtered sound to destructively interfere with the ambient sound.
20. The method of claim 18, said performing filtering comprising adjusting the ambient sound to enhance perception of the ambient sound.
21. The method of claim 18, comprising:
receiving a press on a touch sensitive surface of the ear-cup indicating a request to transmit the ambient sound through the audio emitter associated with the ear-bud.
22. The method of claim 18, comprising: recognizing within the ambient sound an identifier associated with a user, the identifier indicating a request to transmit the ambient sound through the audio emitter associated with the ear-bud.
23. The method of claim 18, comprising:
detecting, with a first sensor placed proximate to a user’s head, a low-frequency audio; upon detecting the low-frequency audio, activating a second sensor to measure a high- frequency audio; and measuring the low-frequency audio and the high-frequency audio.
24. A system comprising: a wireless ear-bud to wirelessly receive a first audio and to emit the first audio, the wireless ear-bud comprising a first power source having a power capacity less than the power needed for a potentially desired usage of the wireless ear-bud; and an ear-cup having at least two functions, the ear-cup substantially surrounding the wireless ear-bud, the ear-cup comprising a second power source configured to wirelessly transfer power to the first power source while the wireless ear-bud is operating and to enable the first power source of the wireless ear-bud to provide the power needed for the potentially desired usage of the wireless ear-bud, a first function of the ear-cup comprising a wireless power transfer and a second function of the ear-cup comprising emitting an audio.
25. The system of claim 24, the first power source comprising a capacitor.
26. The system of claim 24, the second power source comprising a battery.
27. A method comprising:
enabling a wireless audio device to operate for a duration of a potentially desired usage by: wirelessly transferring power, by a dual function member, to the wireless audio device while the wireless audio device is consuming power and enabling the wireless audio device to operate for the duration of the potentially desired usage, the dual function member comprising a first function and a second function, the first function comprising wirelessly transferring power to the wireless audio device, and the second function different from the first function.
28. The method of claim 27, comprising:
modulating the power transmitted to the wireless audio device using at least one of a frequency modulation, an amplitude modulation or a phase modulation.
29. The method of claim 27, comprising:
monitoring the power of the wireless audio device; and when the power of the wireless audio device is below a predefined threshold, the wireless audio device to activate a power source associated with the dual function member.
30. The method of claim 27, comprising:
upon receiving a request from the wireless audio device for more power, choosing a power source among a plurality of power sources based on a distance between the power source in the plurality of power sources and the wireless audio device or based on an amount of power contained within each power source in the plurality of power sources.
31. A system comprising:
a wireless audio device; and a dual function member comprising a power source configured to wirelessly transfer power to the wireless audio device while the wireless audio device is consuming power and to enable the wireless audio device to operate for a duration of a potentially desired usage, a first function of the dual function member comprising wirelessly transferring power to the wireless audio device and a second function of the dual function member different from the first function.
32. The system of claim 31, the wireless audio device comprising a wireless ear-bud.
33. The system of claim 31, the wireless audio device to obtain a hearing profile associated with a user and to modify an audio based on the hearing profile associated with the user.
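Claim 33 has the device shape audio according to a stored hearing profile. One simple way to picture this is a per-band gain applied in the frequency domain, as sketched below; the band edges, gain values, and the dictionary representation of the profile are assumptions, not the patent's representation of a hearing profile.

```python
import numpy as np

hearing_profile = {            # band (Hz): gain compensating the user's sensitivity
    (20, 500): 1.0,
    (500, 2000): 1.3,
    (2000, 8000): 1.8,
    (8000, 20000): 1.2,
}

def personalize(audio: np.ndarray, fs: float, profile=hearing_profile) -> np.ndarray:
    """Apply the per-band gains of the hearing profile to an audio block."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    for (lo, hi), gain in profile.items():
        spectrum[(freqs >= lo) & (freqs < hi)] *= gain
    return np.fft.irfft(spectrum, n=len(audio))
```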
34. The system of claim 31, the dual function member comprising an ear-cup associated with a headphone.
35. The system of claim 31, the dual function member comprising a wearable accessory.
36. The system of claim 31, the power source to transmit the power along with a data.
37. The system of claim 31, comprising an encoding member to perform an amplitude modulation, a frequency modulation or a phase modulation of the power to transmit the power along with a data.
38. The system of claim 31, the wireless audio device to monitor an amount of power associated with the wireless audio device, and when the amount of power of the wireless audio device is below a predefined threshold, the wireless audio device to activate the power source of the dual function member.
39. The system of claim 31, wherein, upon receiving a request from the wireless audio device to activate the power source, the dual function member is to choose the power source among a plurality of power sources based on a distance to the wireless audio device or based on an amount of power contained within each power source in the plurality of power sources.
40. A method comprising:
determining a first amount of power remaining in a first power source associated with a wearable audio system and a second amount of power remaining in a second power source associated with the wearable audio system;
comparing the first amount of power and the second amount of power; and
based on the comparison, distributing a task between the first power source and the second power source.
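Claims 40 through 44 turn power management into a scheduling problem: measure what each source has left, estimate what each pending task costs, and assign the more expensive work to the better-supplied side. A minimal greedy sketch of that idea follows; the task names and joule figures are assumed for illustration only.

```python
def distribute_tasks(tasks: dict, power_left: dict) -> dict:
    """tasks: {name: estimated joules}; power_left: {source: joules remaining}."""
    assignment = {}
    budget = dict(power_left)
    # Greedy: hand each task, most expensive first, to the source with most headroom.
    for name, cost in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
        target = max(budget, key=budget.get)
        assignment[name] = target
        budget[target] -= cost
    return assignment

tasks = {"hearing-profile EQ": 40.0, "wireless link to phone": 25.0, "touch sensing": 5.0}
power_left = {"left ear-bud": 60.0, "right ear-bud": 90.0}
print(distribute_tasks(tasks, power_left))
```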
41. The method of claim 40, comprising:
obtaining a hearing profile associated with a user, the hearing profile correlating a received frequency and a received amplitude to a perceived frequency and a perceived amplitude; and
sending a power intensive task to modify an audio based on the hearing profile to a processor associated with the first power source.
42. The method of claim 40, wherein the wearable audio system comprises a headphone or an ear-bud.
43. The method of claim 40, a power intensive task comprising a wireless communication with a remote device.
44. The method of claim 40, said distributing the task comprising:
determining a first power amount needed for a first task and a second power amount needed for a second task; and
when the first power amount is greater than the second power amount, sending the first task to a processor associated with the first power source.
45. The method of claim 40, wherein the first power source is associated with a first ear-bud, the second power source is associated with a second ear-bud, and a third power source is associated with an ear-cup, the method comprising:
determining a third amount of power remaining in the third power source associated with the ear-cup;
comparing the first amount of power, the second amount of power, and the third amount of power; and
when the third amount of power is greater than the second amount of power and the first amount of power, sending a power intensive task to a processor associated with the third power source.
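Claim 45 adds the ear-cup as a third, typically larger, reserve: when it holds more energy than either ear-bud, the power-intensive work is routed to its processor. A minimal routing check, with the readings in joules assumed for illustration:

```python
def route_intensive_task(left_bud_j: float, right_bud_j: float, ear_cup_j: float) -> str:
    """Return the name of the power source that should run the power-intensive task."""
    reserves = {"left ear-bud": left_bud_j,
                "right ear-bud": right_bud_j,
                "ear-cup": ear_cup_j}
    return max(reserves, key=reserves.get)

assert route_intensive_task(40.0, 55.0, 800.0) == "ear-cup"
```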
PCT/US2019/066984 2018-12-21 2019-12-17 Modular ear-cup and ear-bud and power management of the modular ear-cup and ear-bud WO2020131963A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/349,875 US11910145B2 (en) 2018-12-21 2021-06-16 Power management of the modular ear-cup and ear-bud
US17/349,858 US20210314691A1 (en) 2018-12-21 2021-06-16 Modular ear-cup and ear-bud
US18/414,050 US20240276134A1 (en) 2018-12-21 2024-01-16 Power management of the modular ear-cup and ear-bud

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862784143P 2018-12-21 2018-12-21
US62/784,143 2018-12-21

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/349,875 Continuation US11910145B2 (en) 2018-12-21 2021-06-16 Power management of the modular ear-cup and ear-bud
US17/349,858 Continuation US20210314691A1 (en) 2018-12-21 2021-06-16 Modular ear-cup and ear-bud

Publications (1)

Publication Number Publication Date
WO2020131963A1 true WO2020131963A1 (en) 2020-06-25

Family

ID=71102344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/066984 WO2020131963A1 (en) 2018-12-21 2019-12-17 Modular ear-cup and ear-bud and power management of the modular ear-cup and ear-bud

Country Status (2)

Country Link
US (3) US11910145B2 (en)
WO (1) WO2020131963A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099346A (en) * 2021-04-07 2021-07-09 南京紫牛软件科技有限公司 Earphone control method, earphone control device, wireless earphone and storage medium
US20210314691A1 (en) * 2018-12-21 2021-10-07 Nura Holdings Pty Ltd Modular ear-cup and ear-bud
CN115250402A (en) * 2021-04-28 2022-10-28 北京小米移动软件有限公司 Headset and multi-layer noise reduction earphone assembly

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11523206B2 (en) * 2019-06-28 2022-12-06 Sonos, Inc. Wireless earbud charging
US11508229B1 (en) * 2021-08-12 2022-11-22 Krystal Byrd Wireless baby monitor assembly
US12100983B2 (en) 2021-10-29 2024-09-24 Nucurrent, Inc. Wireless power transfer system for listening devices with expandable case
US12100982B2 (en) 2021-10-29 2024-09-24 Nucurrent, Inc. Wireless power transfer system for listening devices
US20230276162A1 (en) * 2022-02-28 2023-08-31 Nucurrent, Inc. On-Ear Charging For Wireless Hearables
US20230292029A1 (en) * 2022-02-28 2023-09-14 Nucurrent, Inc. On-Ear Charging For Wireless Hearables
US20230275458A1 (en) * 2022-02-28 2023-08-31 Nucurrent, Inc. On-Ear Charging For Wireless Hearables
US20230300510A1 (en) * 2022-03-16 2023-09-21 Audiowise Technology Inc. Intermediary device and wearable bridge

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6104819A (en) * 1997-06-02 2000-08-15 Gateway 2000, Inc. Combination speaker and earphone apparatus
US20140031426A1 (en) * 2011-04-04 2014-01-30 Nestec S.A. Sn-1(3) monoacylglycerides and lipid absorption
US20150334484A1 (en) * 2007-04-13 2015-11-19 Personics Holdings, Llc Method and device for voice operated control
US20170064434A1 (en) * 2015-08-31 2017-03-02 Nura Holdings Pty Ltd Personalization of auditory stimulus
US20170208380A1 (en) * 2016-01-14 2017-07-20 Nura Holdings Pty Ltd Headphones with combined ear-cup and ear-bud

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693720B2 (en) 2006-08-31 2014-04-08 Red Tail Hawk Corporation Wireless earplug with improved sensitivity and form factor
US8688036B2 (en) 2006-08-31 2014-04-01 Red Tail Hawk Corporation Wireless communications headset system employing a loop transmitter that fits around the pinna
US9525930B2 (en) 2006-08-31 2016-12-20 Red Tail Hawk Corporation Magnetic field antenna
US8213649B2 (en) * 2007-02-02 2012-07-03 Personics Holdings Inc. Method and device for evaluating auditory health
US8379872B2 (en) 2009-06-01 2013-02-19 Red Tail Hawk Corporation Talk-through listening device channel switching
EP2362678B1 (en) 2010-02-24 2017-07-26 GN Audio A/S A headset system with microphone for ambient sounds
USD657499S1 (en) 2010-07-08 2012-04-10 Red Tail Hawk Corporation Earplug
EP2458586A1 (en) * 2010-11-24 2012-05-30 Koninklijke Philips Electronics N.V. System and method for producing an audio signal
US8543061B2 (en) * 2011-05-03 2013-09-24 Suhami Associates Ltd Cellphone managed hearing eyeglasses
US20150281853A1 (en) * 2011-07-11 2015-10-01 SoundFest, Inc. Systems and methods for enhancing targeted audibility
DK2675189T3 (en) * 2012-06-14 2015-11-09 Oticon As Binaural listening system with automatic mode can
US9191744B2 (en) * 2012-08-09 2015-11-17 Logitech Europe, S.A. Intelligent ambient sound monitoring system
US9083388B2 (en) 2012-08-29 2015-07-14 Red Tail Hawk Corporation Transmitter with improved sensitivity and shielding
US9344815B2 (en) * 2013-02-11 2016-05-17 Symphonic Audio Technologies Corp. Method for augmenting hearing
KR20150067673A (en) * 2013-12-10 2015-06-18 엘지전자 주식회사 Wireless charging device
US9516401B2 (en) * 2014-07-10 2016-12-06 T.REX Holdings, LLC Wireless in-ear headphones
CN107533839B (en) 2015-12-17 2021-02-23 华为技术有限公司 Method and device for processing ambient environment sound
US10848853B2 (en) * 2017-06-23 2020-11-24 Energous Corporation Systems, methods, and devices for utilizing a wire of a sound-producing device as an antenna for receipt of wirelessly delivered power
US10419853B2 (en) * 2018-02-21 2019-09-17 Apple Inc. Binaural audio capture using untethered wireless headset
WO2020131963A1 (en) * 2018-12-21 2020-06-25 Nura Holdings Pty Ltd Modular ear-cup and ear-bud and power management of the modular ear-cup and ear-bud

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6104819A (en) * 1997-06-02 2000-08-15 Gateway 2000, Inc. Combination speaker and earphone apparatus
US20150334484A1 (en) * 2007-04-13 2015-11-19 Personics Holdings, Llc Method and device for voice operated control
US20140031426A1 (en) * 2011-04-04 2014-01-30 Nestec S.A. Sn-1(3) monoacylglycerides and lipid absorption
US20170064434A1 (en) * 2015-08-31 2017-03-02 Nura Holdings Pty Ltd Personalization of auditory stimulus
US20170208380A1 (en) * 2016-01-14 2017-07-20 Nura Holdings Pty Ltd Headphones with combined ear-cup and ear-bud

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210314691A1 (en) * 2018-12-21 2021-10-07 Nura Holdings Pty Ltd Modular ear-cup and ear-bud
US11910145B2 (en) 2018-12-21 2024-02-20 Nura Holdings Pty Ltd Power management of the modular ear-cup and ear-bud
CN113099346A (en) * 2021-04-07 2021-07-09 南京紫牛软件科技有限公司 Earphone control method, earphone control device, wireless earphone and storage medium
CN115250402A (en) * 2021-04-28 2022-10-28 北京小米移动软件有限公司 Headset and multi-layer noise reduction earphone assembly
EP4084494A1 (en) * 2021-04-28 2022-11-02 Beijing Xiaomi Mobile Software Co., Ltd. Headphone and multi-stage noise-cancellation earphone assembly
US11700478B2 (en) 2021-04-28 2023-07-11 Beijing Xiaomi Mobile Software Co., Ltd. Headphone and multi-stage noise-cancellation earphone assembly

Also Published As

Publication number Publication date
US11910145B2 (en) 2024-02-20
US20240276134A1 (en) 2024-08-15
US20210314692A1 (en) 2021-10-07
US20210314691A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
US11910145B2 (en) Power management of the modular ear-cup and ear-bud
CN108605177B (en) headset with combined ear cup and ear plug
US9992567B2 (en) Intelligent headphone
KR101779641B1 (en) Personal communication device with hearing support and method for providing the same
US9101299B2 (en) Hearing aids configured for directional acoustic fitting
Hosoi et al. Cartilage conduction as the third pathway for sound transmission
US20100208631A1 (en) Inaudible methods, apparatus and systems for jointly transmitting and processing, analog-digital information
US20150164361A1 (en) Sound stimulator for memory enhancement
JP7176674B2 (en) Modular in-ear device
CN106507258A (en) Hearing device comprising a feedback cancellation system based on signal energy redistribution
US20230188909A1 (en) Hearing system comprising a hearing aid and a processing device
US11825269B2 (en) Feedback elimination in a hearing aid
US11375322B2 (en) Hearing aid determining turn-taking
EP3930346A1 (en) A hearing aid comprising an own voice conversation tracker
CN110139201A (en) It is needed to test method, programmer and hearing system with hearing devices according to user
US20240325678A1 (en) Therapeutic sound through bone conduction
Popelka et al. Future directions for hearing aid development
US20220070604A1 (en) Audio equalization metadata
WO2023093412A1 (en) Active noise cancellation method and electronic device
US12052546B2 (en) Motion data based signal processing
WO2022121743A1 (en) Method for optimizing functions of hearables and hearables
US11902746B2 (en) Ear level auditory system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19898726

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19898726

Country of ref document: EP

Kind code of ref document: A1