EP1523219B1 - Method for training and operating a hearing aid and corresponding hearing aid - Google Patents
Method for training and operating a hearing aid and corresponding hearing aid
- Publication number
- EP1523219B1 (application EP04022104A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- hearing aid
- hearing
- input signal
- situation
- acoustic input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Revoked
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
- H04R25/507—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
Definitions
- the present invention relates to a method for retraining a hearing aid by providing an acoustic input signal, providing a plurality of hearing situation identifications and assigning the acoustic input signal to one of the hearing situation identifications by a hearing aid wearer. Moreover, the present invention relates to a corresponding hearing aid that can be retrained, and to a method for operating such a hearing aid after a retraining.
- Classifiers are used in hearing aids to recognize different situations. However, the preset parameters are not necessarily optimal for the corresponding situations in the case of an individual hearing aid wearer. By retraining, as is commonly used in speaker-dependent speech recognition systems, the recognition rate for certain situations can be increased with respect to the individual conditions. This is particularly important for the situation in which the wearer's own voice is presented. Likewise, the classifier can be optimally adjusted to certain noise situations that are typical of the acoustic environment of the hearing aid wearer.
- the hearing situation is only indirectly taken into account in that only the parameter sets corresponding to this hearing situation are made available for selection.
- a disadvantage is that in such retraining the hearing aid wearer has to assess the sound of the hearing aid, which is determined by the parameter set used. He has to judge, for example, whether he wants the sound to be brighter or darker.
- with adaptive parameters, such as those controlling an adaptive directional microphone, differentiating between different parameter sets is difficult, if not impossible, for the hearing aid wearer.
- the object of the present invention is therefore to simplify the retraining of a hearing aid for the hearing aid wearer and to improve the operation of the hearing aid accordingly.
- this object is achieved by a method for retraining a hearing aid by providing an acoustic input signal, providing a plurality of hearing situation identifications and assigning the acoustic input signal to one of the hearing situation identifications by a hearing aid wearer, as well as automatically learning the assignment of the acoustic input signal to the one of the hearing situation identifications.
- the invention further provides a hearing aid with a recording device for recording an acoustic input signal, a memory device for storing a plurality of hearing situation identifications, an input device for assigning the acoustic input signal to one of the hearing situation identifications by a hearing aid wearer, and a learning device for automatically learning the assignment of the acoustic input signal to the one of the hearing situation identifications made by means of the input device.
- the invention is based on the insight that while differentiating between different parameter sets is difficult for the hearing aid wearer, naming the acoustic situation that is currently present, e.g. the situation "own voice" or "conversation in the car", is in most cases possible very reliably.
- These situations go beyond the hearing situations conventionally used in hearing aids, such as "speech in quiet" and "speech in noise".
- the more differentiated hearing situations can relate to sub-aspects of these "classic" situations that are relevant for signal processing.
- the acoustic representations underlying these novel, more diverse situations can easily be retrained individually by specific naming.
- the sound of one's own voice or the specific sound of one's own car can be learned by the hearing aid, for example by means of a neural network.
- the neural network thus, in contrast to the mentioned prior art according to EP 0 813 634 A1, performs a mapping of the acoustic input variables not onto the resulting overall setting (parameter set) of the hearing aid, but onto the internal situation representation (hearing situation identification). From this, the hearing aid parameter set to be used is then derived, or the relevant parameters are varied and/or supplemented on the basis of audiological expert knowledge.
- the adaptive algorithms can reuse this information without the hearing aid wearer having to evaluate the result.
- one of the hearing situations can correspond to the presentation of the hearing aid wearer's own voice, so that after the automatic learning the own voice can be recognized. This is of great importance in many situations, for example for the directional microphone setting.
- the automatic learning of the at least one hearing aid setting parameter for the assigned hearing situation on the basis of the automatic evaluation can take place during (online) or after (offline) the presentation of the acoustic input signal.
- in online retraining, the acoustic input signal does not have to be stored completely, but the hearing aid requires more computing power to perform the retraining.
- offline retraining eliminates this additional computing demand in the hearing aid, but a memory device for the acoustic input signal is needed.
- the online evaluation avoids the time-consuming reading out, processing and reprogramming of the data or of the hearing aid.
- the input device for assigning the acoustic input signal to a hearing situation can also be used to start and stop the retraining. As a result, the handling of the hearing aid and the carrying out of the retraining are simplified for the hearing aid wearer.
- the input device may consist of a receiver integrated in the hearing device and an external remote control.
- the remote control may be configured for wired or wireless communication with the hearing aid. It is also conceivable that the remote control is used exclusively for the retraining of the hearing aid.
- alternatively, the remote control may be configured as a multi-function device, such as a mobile phone or a portable computer with a radio interface.
- the input device may also comprise a programmable computation unit, in particular a PC, so that it is operated via corresponding programming software.
- the input device can be operated verbally, in particular by means of one or more keywords.
- as a result, the operation of the hearing aid is made even more comfortable for the hearing aid wearer.
- the acoustic input signal may comprise a manually or automatically processed voice signal. This makes it possible to train the classifier very specifically.
- a currently valid parameter set can be influenced by automatically assigning the current hearing situation to a hearing situation identifier.
- a parameter of the parameter set can be varied and / or supplemented by the automatic allocation.
- the hearing aid wearer or user 1 is, as shown in FIG. 1, in a particular acoustic situation in which an acoustic input signal 2 is available to the hearing aid. Since the hearing aid is subjectively not optimally adjusted for the hearing aid wearer 1, he carries out a retraining. To do so, he classifies the sound and tells the hearing aid the corresponding, very general hearing situation or hearing situation identification, e.g. "speech in noise". Each of these hearing situations 3 is assigned a plurality of parameter sets 4. On the basis of the selected hearing situation 3, the hearing aid wearer 1 has, for example, seven parameter sets to choose from. He can now select the parameter set 4 with which the hearing aid is adjusted so that it produces the subjectively best sound in this acoustic situation.
- a neural network 5 learns the desired parameter set 4 for the applied acoustic input signal 2, so that it will again choose this parameter set 4 for a similar acoustic situation after the training phase.
- the hearing aid is not to be trained in the use of special parameter sets, but only in the recognition of the current situation. This is done according to the method of FIG. 2. Here too, the hearing aid wearer or user 1 receives the acoustic input signal 2.
- for the retraining of the neural network 5 in the hearing aid, the hearing aid wearer 1 only has to assign the acoustic situation in which he currently finds himself to one of a multiplicity of predefined, specific hearing situations 3'.
- the number of specific hearing situations 3' in the present invention is usually greater than the number of general hearing situations 3 according to FIG. 1, since these are intended to be more differentiated from the outset.
- the general hearing situation "speech in noise" includes, for example, the specific hearing situation "own voice".
- the neural network 5 thus does not learn the assignment of a parameter set to the acoustic input signal 2, but rather the assignment of a differentiated hearing situation or hearing situation identification 3' to the acoustic input signal 2 (see the arrows with solid lines in FIG. 2).
- This means that, unlike in the prior art, the neural network learns at a higher level.
- this is explained in more detail using the example of the hearing situation "own voice in one's own car".
- conventionally, this complex situation is assigned a fixed parameter set, starting for example from the parameter set group "speech in noise". Since only a limited number of parameter sets makes sense for selection by the hearing aid wearer in such "speech in noise" situations, certainly none of the available parameter sets is optimized for one's own voice and additionally for one's own car.
- according to the invention, in contrast, the situation "own voice" and the further situation "in one's own car" are learned separately. These hearing situations each have a specific influence on the complex signal processing. For example, the situation "own voice" triggers a specific amplification, possibly coupled with a special setting of the directivity of the hearing aid, while the situation "in one's own car" in turn triggers a very specific noise suppression in the hearing aid.
- one's own voice can be learned by the hearing aid. This takes place in that the acoustic input signal containing the own voice is subjected to special processing and corresponding parameters for the hearing aid are set specifically and assigned to the hearing situation "own voice". The same applies to learning, for example, the hearing situation "own car", whereby a very specific noise suppression can be achieved. During learning, therefore, not only is the input signal assigned to a hearing situation, but a very specific determination of parameters, such as filter or amplification parameters, also takes place.
- the neural network 5 will assign an acoustic input signal 2 to one or more specific hearing situation identifications 3', so that the currently valid parameter set 4' (including filter parameters) is influenced accordingly.
- Example 1: An adaptive directional microphone aligns itself with the direction from which the maximum useful sound, e.g. a speech signal, arrives. If the hearing aid wearer is talking to a person walking next to him, the directional microphone should adjust to the conversation partner, i.e. to a maximum gain at an angle of approximately 90°. However, as soon as the hearing aid wearer himself speaks, the useful sound signal comes from his own mouth, i.e. from an angle of 0°. The wearer's own speech thus pulls the directional microphone characteristic away from the actual conversation partner, usually with a certain time delay.
- Example 2: A noise suppression method can be trained specifically for a complex, time-varying noise. This noise is then optimally suppressed, even though it may have spectral components or a modulation spectrum similar to speech, which is to be further processed as the useful signal. Through individual training for this acoustic situation, e.g. the above-mentioned situation "in the car", the noise suppression method can be set optimally and automatically, for example by adjusting special weighting factors for individual spectral bands or by optimally adapting the dynamic behaviour to the noise characteristic. In this case too, the differences between the dynamic noise suppression settings are difficult to evaluate directly, whereas the situation itself can be assessed very reliably.
- in certain acoustic situations it may be advantageous if, in addition to the retraining according to the invention, a retraining according to the prior art is carried out, in which the hearing aid wearer evaluates different parameter sets.
- the retraining, as it presents itself to the hearing aid wearer, will now be explained in more detail with reference to FIGS. 3 and 4.
- the hearing aid wearer wants to train the situation "own voice" in his hearing aid 10. For this purpose, he connects a remote control 12 to the hearing aid 10 via a line 11.
- the remote control has a button 13 as a control element.
- in a subsequent step, an acoustic signal, here the own voice, is presented to the hearing aid 10 for recording.
- the hearing aid wearer must now inform the hearing aid 10 of the beginning and the end of the training phase. This is done by holding down the button 13 while speaking. This means that he has to use only a single control element 13 for both steps of the training. If there are many hearing situation identifications, another design may be more user-friendly, e.g. with a display and a control (slider, trackball, etc.) with which the appropriate situation can be selected quickly.
- the actual retraining of the hearing aid 10 can take place during the presentation of the acoustic signal 14.
- alternatively, the acoustic signal 14 is recorded in the hearing aid and evaluated after the recording, i.e. assigned to the selected hearing situation on the basis of characteristic acoustic properties.
- in the case of online retraining, permanent or temporary storage of the acoustic signal 14 is not absolutely necessary.
- since only the information about the current situation has to be communicated to the hearing aid 10, an external operating unit is, in contrast to the prior art according to EP 0 814 634 A1, not absolutely necessary. It can, however, be used as shown in FIGS. 3 and 4, for example for reasons of comfort. However, the operating element can also be a recording button attached to the hearing aid itself.
- the retraining can significantly increase the recognition rate of the classifier for certain situations compared with the factory default, so that the hearing aid adjusts to such a situation more reliably. Because the hearing aid wearer himself starts and ends the retraining phase, situations can also be retrained reliably, since the hearing aid wearer himself decides when the signal can be assigned to the situation.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Automation & Control Theory (AREA)
- Neurosurgery (AREA)
- Otolaryngology (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Electrically Operated Instructional Devices (AREA)
- Circuit For Audible Band Transducer (AREA)
- Filters That Use Time-Delay Elements (AREA)
Abstract
Description
The present invention relates to a method for retraining a hearing aid by providing an acoustic input signal, providing a plurality of hearing situation identifications and assigning the acoustic input signal to one of the hearing situation identifications by a hearing aid wearer. The present invention further relates to a corresponding hearing aid that can be retrained, and to a method for operating such a hearing aid after a retraining.
Classifiers are used in hearing aids to recognize different situations. However, the preset parameters are not necessarily optimal for the corresponding situations in the case of an individual hearing aid wearer. By retraining, as is commonly used in speaker-dependent speech recognition systems, the recognition rate for certain situations can be increased with respect to the individual conditions. This is particularly important for the situation in which the wearer's own voice is presented. Likewise, the classifier can be optimally adjusted to certain noise situations that are typical of the acoustic environment of the hearing aid wearer.
In this context, it is known from the document
In the document
The object of the present invention is therefore to simplify the retraining of a hearing aid for the hearing aid wearer and to improve the operation of the hearing aid accordingly.
According to the invention, this object is achieved by a method for retraining a hearing aid by providing an acoustic input signal, providing a plurality of hearing situation identifications and assigning the acoustic input signal to one of the hearing situation identifications by a hearing aid wearer, as well as automatically learning the assignment of the acoustic input signal to the one of the hearing situation identifications.
Furthermore, the invention provides a hearing aid with a recording device for recording an acoustic input signal, a memory device for storing a plurality of hearing situation identifications, an input device for assigning the acoustic input signal to one of the hearing situation identifications by a hearing aid wearer, and a learning device for automatically learning the assignment of the acoustic input signal to the one of the hearing situation identifications made by means of the input device.
The invention is based on the insight that while differentiating between different parameter sets is difficult for the hearing aid wearer, naming the acoustic situation that is currently present, e.g. the situation "own voice" or "conversation in the car", is in most cases possible very reliably. These situations go beyond the hearing situations conventionally used in hearing aids, such as "speech in quiet" and "speech in noise". In other words, the more differentiated hearing situations can relate to sub-aspects of these "classic" situations that are relevant for signal processing. The acoustic representations underlying these novel, more diverse situations can easily be retrained individually by specific naming. For example, the sound of one's own voice or the specific sound of one's own car can be learned by the hearing aid, for instance by means of a neural network. In contrast to the prior art mentioned above, the neural network thus performs a mapping of the acoustic input variables not onto the resulting overall setting (parameter set) of the hearing aid, but onto the internal situation representation (hearing situation identification). From this, the hearing aid parameter set to be used is then derived, or the relevant parameters are varied and/or supplemented on the basis of audiological expert knowledge.
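Purely as an illustration of this mapping idea, the following minimal sketch shows a tiny softmax classifier that maps acoustic feature vectors to hearing situation identifications, plus a separate expert-knowledge table that then varies or supplements the active parameter set. It is not the patented implementation; all feature names, situation labels and parameter values are assumptions.

```python
import numpy as np

SITUATIONS = ["speech_in_quiet", "speech_in_noise", "own_voice", "own_car"]

class SituationClassifier:
    """Tiny softmax classifier standing in for the neural network."""

    def __init__(self, n_features, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(n_features, n_classes))
        self.b = np.zeros(n_classes)

    def probs(self, x):
        z = x @ self.W + self.b
        e = np.exp(z - z.max())
        return e / e.sum()

    def train_step(self, x, target_idx, lr=0.05):
        p = self.probs(x)
        t = np.zeros_like(p)
        t[target_idx] = 1.0
        self.W -= lr * np.outer(x, p - t)   # cross-entropy gradient
        self.b -= lr * (p - t)

# Expert knowledge: per-situation adjustments of the active parameter set
# (all parameter names and values here are made up for illustration).
EXPERT_RULES = {
    "own_voice": {"gain_db": -3.0, "directionality": "omni"},
    "own_car":   {"noise_reduction": "trained_car_profile"},
}

def derive_parameters(base_params, situation_probs, threshold=0.6):
    """Vary and/or supplement the current parameter set for each detected situation."""
    params = dict(base_params)
    for name, p in zip(SITUATIONS, situation_probs):
        if p >= threshold:
            params.update(EXPERT_RULES.get(name, {}))
    return params

if __name__ == "__main__":
    clf = SituationClassifier(n_features=2, n_classes=len(SITUATIONS))
    x = np.array([0.2, -1.3])               # stand-in acoustic feature vector
    clf.train_step(x, SITUATIONS.index("own_voice"))
    print(derive_parameters({"gain_db": 0.0}, clf.probs(x)))
```

The point of the split is the one made in the paragraph above: the learned mapping ends at the situation identification, and the parameter set is only derived afterwards from expert rules.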
In a specific embodiment according to the invention, one of the hearing situations can correspond to the presentation of the hearing aid wearer's own voice, so that after the automatic learning the own voice can be recognized. This is of great importance in many situations, for example for the directional microphone setting.
The automatic learning of the at least one hearing aid setting parameter for the assigned hearing situation on the basis of the automatic evaluation can take place during (online) or after (offline) the presentation of the acoustic input signal. In online retraining, the acoustic input signal does not have to be stored completely, but the hearing aid requires more computing power to perform the retraining. Offline retraining eliminates this additional computing demand in the hearing aid, but a memory device for the acoustic input signal is needed. The online evaluation avoids the time-consuming reading out, processing and reprogramming of the data or of the hearing aid.
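As a rough illustration of this online/offline trade-off (not the patented procedure), the sketch below reuses the `SituationClassifier` from the previous example: the online variant updates the classifier frame by frame without storing the signal, while the offline variant buffers the labelled frames and trains after the presentation. The feature extraction is a placeholder.

```python
import numpy as np

def extract_features(frame):
    """Placeholder features: log frame energy and zero-crossing rate."""
    energy = np.log(np.mean(frame ** 2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return np.array([energy, zcr])

def retrain_online(classifier, frames, situation_idx):
    # Learn while the signal is presented; no complete recording is kept.
    for frame in frames:
        classifier.train_step(extract_features(frame), situation_idx)

def retrain_offline(classifier, frames, situation_idx, epochs=5):
    # Buffer the labelled signal (needs memory), then train after the presentation.
    buffered = [extract_features(f) for f in frames]
    for _ in range(epochs):
        for x in buffered:
            classifier.train_step(x, situation_idx)
```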
The input device for assigning the acoustic input signal to a hearing situation can also be used to start and stop the retraining. As a result, the handling of the hearing aid and the carrying out of the retraining are simplified for the hearing aid wearer.
In addition, the input device may consist of a receiver integrated into the hearing aid and an external remote control. The remote control may be configured for wired or wireless communication with the hearing aid. It is also conceivable that the remote control is used exclusively for the retraining of the hearing aid. Alternatively, the remote control may be configured as a multi-function device, such as a mobile phone or a portable computer with a radio interface.
The input device may also comprise a programmable computation unit, in particular a PC, so that it is operated via corresponding programming software.
Finally, in a specific embodiment, the input device can be operated verbally, in particular by means of one or more keywords. As a result, the operation of the hearing aid is made even more comfortable for the hearing aid wearer.
Furthermore, the acoustic input signal may comprise a manually or automatically preprocessed speech signal. This makes it possible to train the classifier very specifically.
When the hearing aid is operated, i.e. after the retraining, a currently valid parameter set can be influenced by automatically assigning the current hearing situation to a hearing situation identification. In particular, a parameter of the parameter set can be varied and/or supplemented by the automatic assignment. This makes it possible for the acoustic input signal to be subjected to complex signal processing based on expert knowledge when the neural network recognizes a learned hearing situation, e.g. the wearer's own voice. In this case, the parameter set currently used in the hearing aid is modified accordingly, or corresponding filtering is carried out.
The present invention will now be explained in more detail with reference to the attached drawings, in which:
- FIG. 1 shows a block diagram of the retraining method according to the prior art;
- FIG. 2 shows a block diagram of the method according to the invention;
- FIG. 3 shows a schematic representation of a hearing aid with a remote control for entering a hearing situation in a first step; and
- FIG. 4 shows the situation of the hearing aid according to FIG. 3 during the training phase.
The exemplary embodiment described in more detail below represents a preferred embodiment of the present invention. For a better understanding of the invention, however, the retraining method according to the prior art is first explained with reference to FIG. 1.
The hearing aid wearer or user 1 is, as shown in FIG. 1, in a particular acoustic situation in which an acoustic input signal 2 is available to the hearing aid. Since the hearing aid is subjectively not optimally adjusted for the hearing aid wearer 1, he carries out a retraining. To do so, he classifies the sound and tells the hearing aid the corresponding, very general hearing situation or hearing situation identification, e.g. "speech in noise". Each of these hearing situations 3 is assigned a plurality of parameter sets 4. On the basis of the selected hearing situation 3, the hearing aid wearer 1 has, for example, seven parameter sets to choose from. He can now select the parameter set 4 with which the hearing aid is adjusted so that it produces the subjectively best sound in this acoustic situation.
A neural network 5 learns the desired parameter set 4 for the applied acoustic input signal 2, so that it will again select this parameter set 4 for a similar acoustic situation after the training phase.
However, the subjective assessment of the sounds produced by the different parameter sets for the hearing aid setting is very difficult for the hearing aid wearer 1, since it requires a great deal of detailed knowledge about the effects of the hearing aid parameters.
According to the present invention, the hearing aid is therefore not to be trained in the use of special parameter sets, but only in recognizing the current situation. This is done according to the method of FIG. 2. Here too, the hearing aid wearer or user 1 receives the acoustic input signal 2. For the retraining of the neural network 5 in the hearing aid, the hearing aid wearer 1 only has to assign the acoustic situation in which he currently finds himself to one of a multiplicity of predefined, specific hearing situations 3'. The number of specific hearing situations 3' in the present invention is usually greater than the number of general hearing situations 3 according to FIG. 1, since these are intended to be more differentiated from the outset. The general hearing situation "speech in noise" includes, for example, the specific hearing situation "own voice".
The neural network 5 thus does not learn the assignment of a parameter set to the acoustic input signal 2, but rather the assignment of a differentiated hearing situation or hearing situation identification 3' to the acoustic input signal 2 (see the arrows with solid lines in FIG. 2). This means that, unlike in the prior art, the neural network learns at a higher level. This will be explained in more detail using the example of the hearing situation "own voice in one's own car". Conventionally, this complex situation is assigned a fixed parameter set, starting for example from the parameter set group "speech in noise". Since only a limited number of parameter sets makes sense for selection by the hearing aid wearer in such "speech in noise" situations, certainly none of the available parameter sets is optimized for one's own voice and additionally for one's own car.
According to the invention, in contrast, the situation "own voice" and the further situation "in one's own car" are learned separately. These hearing situations each have a specific influence on the complex signal processing. For example, the situation "own voice" triggers a specific amplification, possibly coupled with a special setting of the directivity of the hearing aid, while the situation "in one's own car" in turn triggers a very specific noise suppression in the hearing aid.
It is particularly advantageous that the wearer's own voice can be learned by the hearing aid. This takes place in that the acoustic input signal containing the own voice is subjected to special processing and corresponding parameters for the hearing aid are set specifically and assigned to the hearing situation "own voice". The same applies to learning, for example, the hearing situation "own car", whereby a very specific noise suppression can be achieved. During learning, therefore, not only is the input signal assigned to a hearing situation, but a very specific determination of parameters, such as filter or amplification parameters, also takes place.
When the hearing aid is used after the retraining, the neural network 5 will assign an acoustic input signal 2 to one or more specific hearing situation identifications 3', so that the currently valid parameter set 4' (including filter parameters) is influenced accordingly. A complex signal processing unit 6, e.g. with an adaptive directional microphone, will carry out the signal processing on the basis of the influenced parameter set 4'. If, in the above example, the neural network now receives the input signal "own voice in one's own car", it assigns to it both the hearing situation identification "own voice" and the hearing situation identification "in one's own car", so that the current parameter set is varied or supplemented, for example, with regard to the specific amplification for the own voice and with regard to the specific filtering for suppressing the noise of the own car.
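The following fragment sketches this operating phase under the same assumptions as before, reusing `extract_features`, `derive_parameters` and the classifier from the earlier sketches: every frame is classified, and each recognized situation identification — possibly several at once, such as "own voice" and "own car" — varies or supplements the currently active parameter set before the signal processing is applied.

```python
def process_stream(classifier, frames, base_params):
    """Operation after retraining: classify each frame and adapt the parameter set."""
    for frame in frames:
        probs = classifier.probs(extract_features(frame))
        params = derive_parameters(base_params, probs)
        # A real signal-processing unit (e.g. with an adaptive directional
        # microphone) would now process `frame` using `params`.
        yield frame, params
```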
Two concrete application examples of the present invention are given below:
Example 1: An adaptive directional microphone aligns itself with the direction from which the maximum useful sound, e.g. a speech signal, arrives. If the hearing aid wearer is talking to a person walking next to him, the directional microphone should adjust to the conversation partner, i.e. to a maximum gain at an angle of approximately 90°. However, as soon as the hearing aid wearer himself speaks, the useful sound signal comes from his own mouth, i.e. from an angle of 0°. The wearer's own speech thus pulls the directional microphone characteristic away from the actual conversation partner, usually with a certain time delay. If, on the other hand, the hearing aid has been trained to the wearer's own voice, so that the adaptive microphone control knows which acoustic properties belong to the own voice, signals classified as "own voice" can be disregarded for the tracking of the directional characteristic. By contrast, the adjustment option of the hearing aid according to the prior art of
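A possible, purely illustrative way to exploit such an "own voice" classification in Example 1 is to gate the adaptation of the directional microphone: the steering angle is only updated when the current frame is not classified as own voice, so the wearer's own speech cannot pull the beam away from the conversation partner. The class below is a toy model, not the patented algorithm.

```python
class AdaptiveDirectionalMic:
    """Toy adaptive directional microphone with an own-voice gate."""

    def __init__(self, initial_angle_deg=90.0, smoothing=0.9):
        self.steering_angle_deg = initial_angle_deg   # e.g. partner walking alongside
        self.smoothing = smoothing

    def update(self, estimated_doa_deg, own_voice_prob, gate=0.5):
        if own_voice_prob >= gate:
            return self.steering_angle_deg            # freeze adaptation on own voice
        self.steering_angle_deg = (self.smoothing * self.steering_angle_deg
                                   + (1.0 - self.smoothing) * estimated_doa_deg)
        return self.steering_angle_deg

mic = AdaptiveDirectionalMic()
print(mic.update(estimated_doa_deg=0.0, own_voice_prob=0.9))   # stays at 90.0
print(mic.update(estimated_doa_deg=85.0, own_voice_prob=0.1))  # moves towards 85°
```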
Example 2: A noise suppression method can be trained specifically for a complex, time-varying noise. This noise is then optimally suppressed, even though it may have spectral components or a modulation spectrum similar to speech, which is to be further processed as the useful signal. Through individual training for this acoustic situation, e.g. the above-mentioned situation "in the car", the noise suppression method can be set optimally and automatically, for example by adjusting special weighting factors for individual spectral bands or by optimally adapting the dynamic behaviour to the noise characteristic. In this case too, the differences between the dynamic noise suppression settings are difficult to evaluate directly, whereas the situation itself can be assessed very reliably.
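Example 2 can be illustrated with a simple per-band weighting scheme (an assumption, not the method actually used in the hearing aid): the average band spectrum of a signal segment that the wearer has labelled as the noise situation is estimated once, and Wiener-like gains per band are then derived from it at run time.

```python
import numpy as np

def band_energies(frame, n_bands=8):
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

def learn_noise_profile(labelled_noise_frames, n_bands=8):
    """Average band spectrum of the segment labelled e.g. 'in the car'."""
    return np.mean([band_energies(f, n_bands) for f in labelled_noise_frames], axis=0)

def band_gains(frame, noise_profile, gain_floor=0.1):
    """Wiener-like per-band weighting factors derived from the trained profile."""
    signal = band_energies(frame, len(noise_profile))
    snr = signal / (noise_profile + 1e-12)
    return np.clip(snr / (1.0 + snr), gain_floor, 1.0)

rng = np.random.default_rng(1)
car_noise = [rng.normal(size=256) for _ in range(50)]        # stand-in for car noise
profile = learn_noise_profile(car_noise)
speech_like = np.sin(0.2 * np.arange(256)) + rng.normal(size=256)
print(band_gains(speech_like, profile))
```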
In certain acoustic situations it may be advantageous if, in addition to the retraining according to the invention, a retraining according to the prior art is carried out, in which the hearing aid wearer evaluates different parameter sets.
The retraining, as it presents itself to the hearing aid wearer, will now be explained in more detail with reference to FIGS. 3 and 4. The hearing aid wearer wants to train the situation "own voice" in his hearing aid 10. For this purpose, he connects a remote control 12 to the hearing aid 10 via a line 11. The remote control has a button 13 as a control element.
Several hearing situations are stored in the classifier. The hearing aid wearer knows that the hearing situation "own voice" corresponds, for example, to situation 3. He therefore presses the button 13 three times to tell the classifier that situation 3 is to be retrained.
In a subsequent step, an acoustic signal, here the own voice, is presented to the hearing aid 10 for recording according to FIG. 4. The hearing aid wearer must now inform the hearing aid 10 of the beginning and the end of the training phase. This is done by holding down the button 13 while speaking. This means that he has to use only a single control element 13 for both steps of the training. If there are many hearing situation identifications, another design may be more user-friendly, e.g. with a display and a control (slider, trackball, etc.) with which the appropriate situation can be selected quickly.
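The operating sequence of FIGS. 3 and 4 can be pictured as a small control state machine (a sketch of the described button handling, not the actual firmware): pressing the button n times selects the n-th stored hearing situation, pressing and holding it marks the start of the recording/retraining phase, and releasing it marks the end.

```python
class RetrainingControl:
    """Toy model of the single-button retraining control described above."""

    def __init__(self, situations):
        self.situations = situations
        self.selected = None
        self.recording = False

    def button_pressed_n_times(self, n):
        self.selected = self.situations[n - 1]
        print(f"selected hearing situation: {self.selected}")

    def button_hold(self):
        if self.selected is not None:
            self.recording = True                  # start of the training phase
            print("recording started")

    def button_release(self):
        if self.recording:
            self.recording = False                 # end of the training phase
            print(f"recorded signal assigned to '{self.selected}'")

ctrl = RetrainingControl(["speech_in_quiet", "speech_in_noise", "own_voice"])
ctrl.button_pressed_n_times(3)   # 'own voice' is stored as situation 3
ctrl.button_hold()               # wearer holds the button and speaks
ctrl.button_release()            # release ends the training phase
```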
The actual retraining of the hearing aid 10 can take place during the presentation of the acoustic signal 14. Alternatively, the acoustic signal 14 is recorded in the hearing aid and evaluated after the recording, i.e. assigned to the selected hearing situation on the basis of characteristic acoustic properties. In the case of online retraining, permanent or temporary storage of the acoustic signal 14 is not absolutely necessary.
Since only the information about the current situation has to be communicated to the hearing aid 10, an external operating unit is, in contrast to the prior art according to EP 0 814 634 A1, not absolutely necessary. It can, however, be used as shown in FIGS. 3 and 4, for example for reasons of comfort. However, the operating element can also be a recording button attached to the hearing aid itself.
The retraining can significantly increase the recognition rate of the classifier for certain situations compared with the factory default, so that the hearing aid adjusts to such a situation more reliably. Because the hearing aid wearer himself starts and ends the retraining phase, situations can also be retrained reliably, since the hearing aid wearer himself decides when the signal can be assigned to the situation.
Claims (24)
- Method for retraining a hearing aid (10) by
  - provision of an acoustic input signal (2),
  - provision of two or more hearing situation identifications (3') and
  - association of the acoustic input signal (2) with one of the hearing situation identifications (3') by a hearing aid wearer (1),
  characterized by
  - automatic learning of the association between the acoustic input signal (2) and one of the hearing situation identifications (3').
- Method according to Claim 1, in which one of the hearing situation identifications (3') identifies a hearing situation in which the hearing aid wearer's own voice or the noise of his own automobile is presented, such that the wearer's own voice or the noise from his own automobile can be identified after the automatic learning process.
- Method according to Claim 1 or 2, in which the learning process is carried out during the presentation of the acoustic input signal (2).
- Method according to Claim 1 or 2, in which the learning process is carried out after the presentation of the acoustic input signal (2).
- Method according to one of the preceding claims, in which the starting and stopping of the retraining and the association with the acoustic input signal (2) are carried out via a remote control (12).
- Method according to Claim 5, in which the remote control (12) communicates with the hearing aid (10) without the use of wires.
- Method according to one of Claims 1 to 4, with the hearing aid (10) being operated verbally for retraining.
- Method according to Claim 7, in which the hearing aid (10) is operated by means of one or more keywords.
- Method according to one of the preceding claims, with the acoustic input signal (2) comprising a speech signal which is preprocessed manually or automatically.
- Hearing aid having:
  - a receiving device for receiving an acoustic input signal (2),
  - a storage device for storing two or more hearing situation identifications (3'), and
  - an input device (12) for the hearing aid wearer to associate the acoustic input signal (2) with one of the hearing situation identifications (3'),
  characterized by
  - a learning device (5) for automatically learning the association between the acoustic input signal (2) and one of the hearing situation identifications (3') by means of the input device (12).
- Hearing aid according to Claim 10, in which one of the hearing situation identifications (3') identifies the hearing situation in which the hearing aid wearer's own voice or the noise of his own automobile is presented, such that the wearer's own voice or the noise from his own automobile can be identified after the automatic learning process.
- Hearing aid according to Claim 10 or 11, in which the learning process of the learning device (5) can be carried out while the acoustic input signal (2) is being received in the receiving device.
- Hearing aid according to one of Claims 10 to 12, in which the learning process can be carried out after the acoustic input signal (2) has been received in the receiving device, in the learning device (5) or in an external device with subsequent transmission to the hearing aid.
- Hearing aid according to one of Claims 10 to 13, in which the input device (12) can be used for starting and stopping the recording or the retraining process.
- Hearing aid according to one of Claims 10 to 14, with the input device (12) having an external remote control.
- Hearing aid according to Claim 15, in which the remote control can be used for wire-free communication with the hearing aid.
- Hearing aid according to Claim 15 or 16, in which the remote control is designed exclusively for retraining the hearing aid.
- Hearing aid according to one of Claims 10 to 16, with the remote control being in the form of a mobile radio.
- Hearing aid according to one of Claims 10 to 18, with the input device (12) comprising a programmable computation unit, in particular a PC.
- Hearing aid according to one of Claims 10 to 19, in which the input device (12) can be operated verbally.
- Hearing aid according to Claim 20, in which the input device (12) can be operated by means of one or more keywords.
- Hearing aid according to one of Claims 10 to 21, in which the acoustic input signal (2) comprises a speech signal which is preprocessed manually or automatically.
- Hearing aid according to one of Claims 10 to 22, having a signal processing device whose parameter set (4') can be influenced with the aid of the learning device (5) by association of the acoustic input signal (2) with a hearing situation identification (3') which has been learnt.
- Hearing aid according to Claim 23, in which at least one parameter in the parameter set (4') can be varied and/or supplemented by the learning device (5) during the automatic association process.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10347211A DE10347211A1 (en) | 2003-10-10 | 2003-10-10 | Method for training and operating a hearing aid and corresponding hearing aid |
DE10347211 | 2003-10-10 |
Publications (3)
Publication Number | Publication Date |
---|---|
EP1523219A2 EP1523219A2 (en) | 2005-04-13 |
EP1523219A3 EP1523219A3 (en) | 2007-08-08 |
EP1523219B1 true EP1523219B1 (en) | 2008-08-20 |
Family
ID=34306363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP04022104A Revoked EP1523219B1 (en) | 2003-10-10 | 2004-09-16 | Method for training and operating a hearing aid and corresponding hearing aid |
Country Status (6)
Country | Link |
---|---|
US (1) | US7742612B2 (en) |
EP (1) | EP1523219B1 (en) |
AT (1) | ATE406073T1 (en) |
AU (1) | AU2004218632B2 (en) |
DE (2) | DE10347211A1 (en) |
DK (1) | DK1523219T3 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106714062A (en) * | 2016-11-30 | 2017-05-24 | 天津大学 | BP-artificial-neural-network-based intelligent matching algorithm for digital hearing aid |
Families Citing this family (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070276285A1 (en) * | 2003-06-24 | 2007-11-29 | Mark Burrows | System and Method for Customized Training to Understand Human Speech Correctly with a Hearing Aid Device |
US20050085343A1 (en) * | 2003-06-24 | 2005-04-21 | Mark Burrows | Method and system for rehabilitating a medical condition across multiple dimensions |
US20050090372A1 (en) * | 2003-06-24 | 2005-04-28 | Mark Burrows | Method and system for using a database containing rehabilitation plans indexed across multiple dimensions |
DE10347211A1 (en) | 2003-10-10 | 2005-05-25 | Siemens Audiologische Technik Gmbh | Method for training and operating a hearing aid and corresponding hearing aid |
US20080212789A1 (en) * | 2004-06-14 | 2008-09-04 | Johnson & Johnson Consumer Companies, Inc. | At-Home Hearing Aid Training System and Method |
US20080165978A1 (en) * | 2004-06-14 | 2008-07-10 | Johnson & Johnson Consumer Companies, Inc. | Hearing Device Sound Simulation System and Method of Using the System |
EP1767059A4 (en) * | 2004-06-14 | 2009-07-01 | Johnson & Johnson Consumer | System for and method of optimizing an individual"s hearing aid |
EP1767056A4 (en) * | 2004-06-14 | 2009-07-22 | Johnson & Johnson Consumer | System for and method of offering an optimized sound service to individuals within a place of business |
WO2005125282A2 (en) * | 2004-06-14 | 2005-12-29 | Johnson & Johnson Consumer Companies, Inc. | System for and method of increasing convenience to users to drive the purchase process for hearing health that results in purchase of a hearing aid |
WO2005125277A2 (en) * | 2004-06-14 | 2005-12-29 | Johnson & Johnson Consumer Companies, Inc. | A sytem for and method of conveniently and automatically testing the hearing of a person |
EP1769412A4 (en) * | 2004-06-14 | 2010-03-31 | Johnson & Johnson Consumer | Audiologist equipment interface user database for providing aural rehabilitation of hearing loss across multiple dimensions of hearing |
US20080240452A1 (en) * | 2004-06-14 | 2008-10-02 | Mark Burrows | At-Home Hearing Aid Tester and Method of Operating Same |
WO2006002035A2 (en) * | 2004-06-15 | 2006-01-05 | Johnson & Johnson Consumer Companies, Inc. | Low-cost, programmable, time-limited hearing health aid apparatus, method of use, and system for programming same |
US7319769B2 (en) * | 2004-12-09 | 2008-01-15 | Phonak Ag | Method to adjust parameters of a transfer function of a hearing device as well as hearing device |
US7599500B1 (en) * | 2004-12-09 | 2009-10-06 | Advanced Bionics, Llc | Processing signals representative of sound based on the identity of an input element |
DE102005032274B4 (en) * | 2005-07-11 | 2007-05-10 | Siemens Audiologische Technik Gmbh | Hearing apparatus and corresponding method for eigenvoice detection |
WO2007110073A1 (en) | 2006-03-24 | 2007-10-04 | Gn Resound A/S | Learning control of hearing aid parameter settings |
DK1906700T3 (en) * | 2006-09-29 | 2013-05-06 | Siemens Audiologische Technik | Method of timed setting of a hearing aid and corresponding hearing aid |
WO2008083315A2 (en) * | 2006-12-31 | 2008-07-10 | Personics Holdings Inc. | Method and device configured for sound signature detection |
WO2009006418A1 (en) * | 2007-06-28 | 2009-01-08 | Personics Holdings Inc. | Method and device for background noise mitigation |
US20100293227A1 (en) * | 2007-10-02 | 2010-11-18 | Phonak Ag | Hearing system, method for operating a hearing system, and hearing system network |
DE102007056466A1 (en) * | 2007-11-22 | 2009-05-28 | Myworldofhearing E. K. | Method for customizing a hearing aid |
EP2255548B1 (en) * | 2008-03-27 | 2013-05-08 | Phonak AG | Method for operating a hearing device |
US9129291B2 (en) | 2008-09-22 | 2015-09-08 | Personics Holdings, Llc | Personalized sound management and method |
DE102009007074B4 (en) | 2009-02-02 | 2012-05-31 | Siemens Medical Instruments Pte. Ltd. | Method and hearing device for setting a hearing device from recorded data |
USRE48462E1 (en) * | 2009-07-29 | 2021-03-09 | Northwestern University | Systems, methods, and apparatus for equalization preference learning |
DK2306756T3 (en) * | 2009-08-28 | 2011-12-12 | Siemens Medical Instr Pte Ltd | Method of fine tuning a hearing aid as well as hearing aid |
DE102010018877A1 (en) * | 2010-04-30 | 2011-06-30 | Siemens Medical Instruments Pte. Ltd. | Method for voice-controlling of hearing aid i.e. behind-the-ear-hearing aid, involves interacting speech recognition and distinct voice detection, such that voice command spoken by wearer of hearing aid is used for voice-controlling aid |
EP2472907B1 (en) | 2010-12-29 | 2017-03-15 | Oticon A/S | A listening system comprising an alerting device and a listening device |
US9814879B2 (en) | 2013-05-13 | 2017-11-14 | Cochlear Limited | Method and system for use of hearing prosthesis for linguistic evaluation |
US9883300B2 (en) * | 2015-02-23 | 2018-01-30 | Oticon A/S | Method and apparatus for controlling a hearing instrument to relieve tinitus, hyperacusis, and hearing loss |
DK3082350T3 (en) * | 2015-04-15 | 2019-04-23 | Starkey Labs Inc | USER INTERFACE WITH REMOTE SERVER |
US10492008B2 (en) | 2016-04-06 | 2019-11-26 | Starkey Laboratories, Inc. | Hearing device with neural network-based microphone signal processing |
US20170311095A1 (en) | 2016-04-20 | 2017-10-26 | Starkey Laboratories, Inc. | Neural network-driven feedback cancellation |
US11412333B2 (en) * | 2017-11-15 | 2022-08-09 | Starkey Laboratories, Inc. | Interactive system for hearing devices |
CN112369046B (en) | 2018-07-05 | 2022-11-18 | 索诺瓦公司 | Complementary sound categories for adjusting a hearing device |
US10795638B2 (en) * | 2018-10-19 | 2020-10-06 | Bose Corporation | Conversation assistance audio device personalization |
CN110473567B (en) * | 2019-09-06 | 2021-09-14 | 上海又为智能科技有限公司 | Audio processing method and device based on deep neural network and storage medium |
DE102019216100A1 (en) | 2019-10-18 | 2021-04-22 | Sivantos Pte. Ltd. | Method for operating a hearing aid and hearing aid |
DE102019218808B3 (en) * | 2019-12-03 | 2021-03-11 | Sivantos Pte. Ltd. | Method for training a hearing situation classifier for a hearing aid |
US11601765B2 (en) * | 2019-12-20 | 2023-03-07 | Sivantos Pte. Ltd. | Method for adapting a hearing instrument and hearing system therefor |
US20220312126A1 (en) * | 2021-03-23 | 2022-09-29 | Sonova Ag | Detecting Hair Interference for a Hearing Device |
DE102021204974A1 (en) | 2021-05-17 | 2022-11-17 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein | Apparatus and method for determining audio processing parameters |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4972487A (en) * | 1988-03-30 | 1990-11-20 | Diphon Development Ab | Auditory prosthesis with datalogging capability |
AU610705B2 (en) | 1988-03-30 | 1991-05-23 | Diaphon Development A.B. | Auditory prosthesis with datalogging capability |
US5303306A (en) * | 1989-06-06 | 1994-04-12 | Audioscience, Inc. | Hearing aid with programmable remote and method of deriving settings for configuring the hearing aid |
DE59410235D1 (en) * | 1994-05-06 | 2003-03-06 | Siemens Audiologische Technik | Programmable hearing aid |
DE4419901C2 (en) * | 1994-06-07 | 2000-09-14 | Siemens Audiologische Technik | Hearing aid |
EP0712261A1 (en) * | 1994-11-10 | 1996-05-15 | Siemens Audiologische Technik GmbH | Programmable hearing aid |
DK0712263T3 (en) * | 1994-11-10 | 2003-05-26 | Siemens Audiologische Technik | Programmable hearing aid. |
SE504091C2 (en) | 1995-03-06 | 1996-11-11 | Expandi Systems Ab | Device for self-expanding links |
EP0814634B1 (en) * | 1996-06-21 | 2002-10-02 | Siemens Audiologische Technik GmbH | Programmable hearing-aid system and method for determining an optimal set of parameters in an acoustic prosthesis |
EP0814636A1 (en) * | 1996-06-21 | 1997-12-29 | Siemens Audiologische Technik GmbH | Hearing aid |
US6674867B2 (en) * | 1997-10-15 | 2004-01-06 | Belltone Electronics Corporation | Neurofuzzy based device for programmable hearing aids |
DK1273205T3 (en) * | 2000-04-04 | 2006-10-09 | Gn Resound As | A hearing prosthesis with automatic classification of the listening environment |
WO2001020965A2 (en) * | 2001-01-05 | 2001-03-29 | Phonak Ag | Method for determining a current acoustic environment, use of said method and a hearing-aid |
DE10114015C2 (en) | 2001-03-22 | 2003-02-27 | Siemens Audiologische Technik | Method for operating a hearing aid and / or hearing protection device and hearing aid and / or hearing protection device |
US20030032681A1 (en) * | 2001-05-18 | 2003-02-13 | The Regents Of The University Of Clifornia | Super-hydrophobic fluorine containing aerogels |
DE10152197B4 (en) | 2001-10-23 | 2009-07-09 | Siemens Audiologische Technik Gmbh | Method for programming a hearing aid, programming device and remote control for the hearing aid |
US7158931B2 (en) * | 2002-01-28 | 2007-01-02 | Phonak Ag | Method for identifying a momentary acoustic scene, use of the method and hearing device |
DK1359787T3 (en) * | 2002-04-25 | 2015-04-20 | Gn Resound As | Fitting method and hearing prosthesis which is based on signal to noise ratio loss of data |
US20040190737A1 (en) * | 2003-03-25 | 2004-09-30 | Volker Kuhnel | Method for recording information in a hearing device as well as a hearing device |
US20060078139A1 (en) * | 2003-03-27 | 2006-04-13 | Hilmar Meier | Method for adapting a hearing device to a momentary acoustic surround situation and a hearing device system |
DE10347211A1 (en) | 2003-10-10 | 2005-05-25 | Siemens Audiologische Technik Gmbh | Method for training and operating a hearing aid and corresponding hearing aid |
-
2003
- 2003-10-10 DE DE10347211A patent/DE10347211A1/en not_active Withdrawn
-
2004
- 2004-09-16 DK DK04022104T patent/DK1523219T3/en active
- 2004-09-16 EP EP04022104A patent/EP1523219B1/en not_active Revoked
- 2004-09-16 AT AT04022104T patent/ATE406073T1/en not_active IP Right Cessation
- 2004-09-16 DE DE502004007878T patent/DE502004007878D1/en not_active Expired - Lifetime
- 2004-10-06 AU AU2004218632A patent/AU2004218632B2/en not_active Ceased
- 2004-10-08 US US10/961,696 patent/US7742612B2/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106714062A (en) * | 2016-11-30 | 2017-05-24 | 天津大学 | BP-artificial-neural-network-based intelligent matching algorithm for digital hearing aid |
CN106714062B (en) * | 2016-11-30 | 2020-02-18 | 天津大学 | Digital hearing aid intelligent fitting method based on BP artificial neural network |
Also Published As
Publication number | Publication date |
---|---|
DK1523219T3 (en) | 2009-01-05 |
ATE406073T1 (en) | 2008-09-15 |
AU2004218632A1 (en) | 2005-04-28 |
DE502004007878D1 (en) | 2008-10-02 |
EP1523219A2 (en) | 2005-04-13 |
US7742612B2 (en) | 2010-06-22 |
US20050105750A1 (en) | 2005-05-19 |
AU2004218632B2 (en) | 2009-04-09 |
DE10347211A1 (en) | 2005-05-25 |
EP1523219A3 (en) | 2007-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1523219B1 (en) | Method for training and operating a hearing aid and corresponding hearing aid | |
DE102019206743A1 (en) | Hearing aid system and method for processing audio signals | |
EP3451705B1 (en) | Method and apparatus for the rapid detection of own voice | |
EP0681411B1 (en) | Programmable hearing aid | |
EP3809724B1 (en) | Hearing device and method for operating a hearing device | |
EP3445067B1 (en) | Hearing aid and method for operating a hearing aid | |
EP2081406B1 (en) | Method and device for configuring variables on a hearing aid | |
EP1453356B1 (en) | Method of adjusting a hearing system and corresponding hearing system | |
WO2001020965A2 (en) | Method for determining a current acoustic environment, use of said method and a hearing-aid | |
EP2306756A1 (en) | Method for fine tuning a hearing aid and hearing aid | |
EP3386215B1 (en) | Hearing aid and method for operating a hearing aid | |
EP3840418A1 (en) | Method for adjusting a hearing aid and corresponding hearing system | |
EP1404152B1 (en) | Device and method for fitting a hearing-aid | |
DE102019200956A1 (en) | Signal processing device, system and method for processing audio signals | |
EP3629601A1 (en) | Method for processing microphone signals in a hearing system and hearing system | |
CH691211A5 (en) | Circuit for operating a hearing aid as well as hearing aid with such a circuit. | |
DE102010041740A1 (en) | Method for signal processing in a hearing aid device and hearing aid device | |
EP3873108A1 (en) | Hearing system with at least one hearing instrument worn in or on the ear of the user and method for operating such a hearing system | |
EP3062249A1 (en) | Method for determining carrier-specific usage data of a hearing aid, method for adjusting hearing aid settings of a hearing aid, hearing aid system and adjusting unit for a hearing aid system | |
EP2070384B1 (en) | Hearing device controlled by a perceptive model and corresponding method | |
WO2009016010A1 (en) | Method for adapting a hearing device using a perceptive model | |
EP2302952A1 (en) | Self-adjustment of a hearing aid | |
DE102019203786A1 (en) | Hearing aid system | |
EP1841286A2 (en) | Hearing aid with adaptive starting values of parameters | |
EP1303166B1 (en) | Method of operating a hearing aid and assembly with a hearing aid |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL HR LT LV MK |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL HR LT LV MK |
|
17P | Request for examination filed |
Effective date: 20071106 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RTI1 | Title (correction) |
Free format text: METHOD FOR TRAINING AND OPERATING A HEARING AID AND CORRESPONDING HEARING AID |
|
AKX | Designation fees paid |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative=s name: SIEMENS SCHWEIZ AG Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
REF | Corresponds to: |
Ref document number: 502004007878 Country of ref document: DE Date of ref document: 20081002 Kind code of ref document: P |
|
REG | Reference to a national code |
Ref country code: DK Ref legal event code: T3 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20081201 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
BERE | Be: lapsed |
Owner name: SIEMENS AUDIOLOGISCHE TECHNIK G.M.B.H. Effective date: 20080930 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20080924 Year of fee payment: 5 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PCAR Free format text: SIEMENS SCHWEIZ AG;INTELLECTUAL PROPERTY FREILAGERSTRASSE 40;8047 ZUERICH (CH) |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20081120 Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080930 Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20090120 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
PLBI | Opposition filed |
Free format text: ORIGINAL CODE: 0009260 |
|
26 | Opposition filed |
Opponent name: OTICON A/S(DK)/WIDEX A/S(DK)/ GN RESOUND A/S(DK)/P Effective date: 20090520 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080930 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
PLAX | Notice of opposition and request to file observation + time limit sent |
Free format text: ORIGINAL CODE: EPIDOSNOBS2 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080916 |
|
PLBB | Reply of patent proprietor to notice(s) of opposition received |
Free format text: ORIGINAL CODE: EPIDOSNOBS3 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20081120 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080916 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20090221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080820 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20081121 |
|
RDAF | Communication despatched that patent is revoked |
Free format text: ORIGINAL CODE: EPIDOSNREV1 |
|
APBM | Appeal reference recorded |
Free format text: ORIGINAL CODE: EPIDOSNREFNO |
|
APBP | Date of receipt of notice of appeal recorded |
Free format text: ORIGINAL CODE: EPIDOSNNOA2O |
|
APAH | Appeal reference modified |
Free format text: ORIGINAL CODE: EPIDOSCREFNO |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090916 |
|
APBQ | Date of receipt of statement of grounds of appeal recorded |
Free format text: ORIGINAL CODE: EPIDOSNNOA3O |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20131210 Year of fee payment: 10 Ref country code: DE Payment date: 20131120 Year of fee payment: 10 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R064 Ref document number: 502004007878 Country of ref document: DE Ref country code: DE Ref legal event code: R103 Ref document number: 502004007878 Country of ref document: DE |
|
APBU | Appeal procedure closed |
Free format text: ORIGINAL CODE: EPIDOSNNOA9O |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DK Payment date: 20140919 Year of fee payment: 11 |
|
RDAG | Patent revoked |
Free format text: ORIGINAL CODE: 0009271 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: PATENT REVOKED |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PLX |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20140908 Year of fee payment: 11 |
|
27W | Patent revoked |
Effective date: 20141006 |
|
GBPR | Gb: patent revoked under art. 102 of the ep convention designating the uk as contracting state |
Effective date: 20141006 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R107 Ref document number: 502004007878 Country of ref document: DE Effective date: 20150115 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF THE APPLICANT RENOUNCES Effective date: 20080820 Ref country code: CH Free format text: LAPSE BECAUSE OF THE APPLICANT RENOUNCES Effective date: 20080820 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20140917 Year of fee payment: 11 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MA03 Ref document number: 406073 Country of ref document: AT Kind code of ref document: T Effective date: 20141006 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 502004007878 Country of ref document: DE Representative=s name: FDST PATENTANWAELTE FREIER DOERR STAMMLER TSCH, DE |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 502004007878 Country of ref document: DE Representative=s name: FDST PATENTANWAELTE FREIER DOERR STAMMLER TSCH, DE Ref country code: DE Ref legal event code: R081 Ref document number: 502004007878 Country of ref document: DE Owner name: SIVANTOS GMBH, DE Free format text: FORMER OWNER: SIEMENS AUDIOLOGISCHE TECHNIK GMBH, 91058 ERLANGEN, DE |