
EP0464217B1 - Apparatus for reproducing acoustic signals - Google Patents

Apparatus for reproducing acoustic signals

Info

Publication number
EP0464217B1
Authority
EP
European Patent Office
Prior art keywords
transfer characteristics
audio signal
signal
quadrant
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP91902738A
Other languages
German (de)
French (fr)
Other versions
EP0464217A4 (en)
EP0464217A1 (en)
Inventor
Kiyofumi Inanaga
Hiroyuki Sogawa
Yasuhiro Iida
Susumu Yabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2008520A (JP2893780B2)
Priority claimed from JP2008514A (JP2751512B2)
Application filed by Sony Corp
Priority to EP95104929A (EP0664660B1)
Publication of EP0464217A1
Publication of EP0464217A4
Application granted
Publication of EP0464217B1
Anticipated expiration
Current legal status: Expired - Lifetime

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 5/00 - Stereophonic arrangements
    • H04R 5/033 - Headphones for stereophonic communication
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S 1/00 - Two-channel systems
    • H04S 1/002 - Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S 1/005 - For headphones

Definitions

  • Fig. 5B shows that the listener P has approached a pair of speaker systems SL and SR, that is, the virtual sound sources, from the position of Fig. 5A.
  • Fig. 5C shows that the listener P rotates his head M toward the right speaker system SR.
  • In this way the audio signal processing means forms, from the transfer characteristics information of at least the first quadrant stored in the storing means, the transfer characteristics information for the rotational angular position indicated by the detecting means that follows the movement of the head of the listener, and processes the left and right channel audio signals before supplying them to the headphone set. Accordingly, a proper binaural reproduction can be performed, providing a very natural sound image localization sensation in which the positions of the virtual sound sources do not move even if the listener moves.
  • the audio signal reproducing apparatus of the present invention shown in Fig. 6 comprises a headphone set 40 which is fitted over the head M of a listener P, with a pair of headphones 42L and 42R supported by a head band 41 so that they are located in the vicinity of the left and right ears of the listener P, similarly to the embodiment shown in Fig. 1.
  • Two sliders 44L and 44R, from which support arms 43L and 43R respectively project, are slidably mounted on the head band 41 of the headphone set 40.
  • a pair of signal detectors 45L and 45R which detect a position detection reference signal emitted from a reference signal source 51 are provided at the tip ends of the support arms 43L and 43R, respectively. That is, the pair of signal detectors 45L and 45R are provided on the tip ends of the support arms 43L and 43R projecting from the sliders 44L and 44R which are slidably mounted on the head band 41, so that they are supported in positions remote from the head band 41 and the pair of headphones 42L and 42R, that is, the main body of the headphone set.
  • the reference signal source 51 comprises an ultrasonic signal source 52 and an ultrasonic speaker 53 for generating an ultrasonic signal from the ultrasonic signal source 52 as a reference signal.
  • Each of the pair of signal detectors 45L and 45R which receives the reference signal comprises an ultrasonic microphone.
  • An ultrasonic wave generated from the ultrasonic speaker 53, that is, the position detection reference signal, is a burst wave in which an ultrasonic wave having a given level is intermittently generated for a given period of time, as in the first embodiment, or an ultrasonic wave whose phase can be detected, such as a so-called level modulated wave, the level of which changes in a given cycle.
  • the pair of signal detectors 45L and 45R provided on the headphone set 40 detects the ultrasonic position detection reference signal generated from the ultrasonic speaker 53 and generate respective detection signals, each having a time lag depending upon the relative positional relation between the listener P and the ultrasonic speaker 53.
  • Each detection signal obtained by these signal detectors 45L and 45R is applied to an operation unit 54.
  • the operation unit 54 comprises first and second edge detecting circuits 55 and 56, to which the detection signals from the signal detectors 45L and 45R for detecting the position detection reference signal are supplied, respectively, and a third edge detecting circuit 57 to which an ultrasonic signal from the ultrasonic signal source 52, that is, the position detection reference signal, is applied.
  • the first and second edge detecting circuits 55 and 56 detect rise-up edges of the detection signals generated from the signal detectors 45L and 45R, respectively, and output pulse signals corresponding to the rise-up edges. The pulse signals generated by the first and second edge detecting circuits 55 and 56 are supplied to a distance calculating circuit 58 and a circuit 59 for detecting the time difference between both ears.
  • the third edge detecting circuit 57 detects the rise-up edge of the ultrasonic signal from the ultrasonic signal source 52 and outputs a pulse signal corresponding to the rise-up edge.
  • a pulse signal obtained by the third edge detection circuit 57 is supplied to the distance calculating circuit 58.
  • the distance calculating circuit 58 detects the time difference t1 between pulse signals obtained by the third and first edge detecting circuits 57 and 55 and the time difference t2 between pulse signals obtained by the third and second edge detecting circuits 57 and 56 and then calculates the distance ℓ0 between the ultrasonic speaker 53 and the center of the head M of the listener based upon the time differences t1, t2 and the sound velocity V.
  • Signals representative of the distance ℓ0 and the time differences t1 and t2 are fed to an angle calculating circuit 60.
  • the circuit 59 for detecting the time difference between both ears detects the time difference t3 between the pulse signals generated by the first and second edge detecting circuits 55 and 56. A signal representative of the time difference t3 is fed to the angle calculating circuit 60.
  • the angle calculating circuit 60 calculates an angle θ0 representative of the direction of the head M by using the time differences t1, t2, t3, the distance ℓ0, the sound velocity V and the radius r of the head M, similarly to the angle calculating circuit 20 in the first embodiment.
  • the operation unit 54 includes a storing circuit 62 which stores transfer characteristics information representative of the transfer characteristics from the virtual sound sources to both ears of the listener for each of predetermined angles, the spacing of which is larger than the resolution of the angular position information of the listener calculated by the angle calculating circuit 60.
  • the interpolation operation and processing circuit 61 reads the information on the two transfer characteristics in the vicinity of the rotational angular position of the head represented by the current angular position information calculated by the angle calculating circuit 60 and computes the transfer characteristics for the current rotational angular position of the head by, for example, a linear interpolation processing.
  • the interpolation operation and processing circuit 61 may instead read the information on more than two transfer characteristics in the vicinity of the current rotational angular position of the head represented by the angular position information and perform a higher-order interpolation processing rather than the linear interpolation processing, as sketched below.
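  • As an illustration only (not part of the patent text), the linear interpolation attributed above to the interpolation operation and processing circuit 61 can be sketched as follows; time-domain interpolation of stored impulse responses is one possible reading, and the array layout and names are assumptions:

```python
import numpy as np

def interpolated_impulse_response(theta, stored_angles, stored_hrirs):
    """Linearly interpolate between the two stored transfer characteristics
    nearest to the current head angle theta (degrees).

    stored_angles : sorted 1-D array of angles at which impulse responses
                    are stored (coarser than the detector output)
    stored_hrirs  : array of shape (len(stored_angles), n_taps)
    """
    stored_angles = np.asarray(stored_angles, dtype=float)
    i = int(np.searchsorted(stored_angles, theta))
    if i == 0:                      # below the first stored angle
        return stored_hrirs[0]
    if i >= len(stored_angles):     # above the last stored angle
        return stored_hrirs[-1]
    a0, a1 = stored_angles[i - 1], stored_angles[i]
    w = (theta - a0) / (a1 - a0)    # weight: 0 at a0, 1 at a1
    return (1.0 - w) * stored_hrirs[i - 1] + w * stored_hrirs[i]
```

    A higher-order interpolation, as mentioned above, would use more than two neighbouring stored angles in the same way.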
  • the information on the transfer characteristics in the current rotational angular position obtained by the interpolation operation and processing circuit 61 is supplied to an audio signal processing circuit 63.
  • the audio signal processing circuit 63 is also supplied with the left and right channel audio signals SL and SR outputted from an audio signal source 64.
  • the audio signal source 64 is a device for outputting predetermined left and right channel audio signals SL and SR and may include, for example, recording disc playback devices, radio receivers and the like.
  • the audio signal processing circuit 63 performs a signal processing which provides the left and right channel audio signals SL and SR fed from the audio signal source 64 with given transfer characteristics from the virtual sound sources to both ears of the listener.
  • the audio signal processing circuit 63 comprises first through fourth signal processing units 65a, 65b, 65c and 65d, to which the transfer characteristics information for the current rotational angular position of the head obtained by the interpolation operation and processing circuit 61 is supplied.
  • in each of the signal processing units, an impulse response representative of the transfer characteristics from a pair of left and right channel speakers facing the listener, which serve as the virtual sound sources for the left and right channel audio signals SL and SR, to each ear of the listener is preset based upon the supplied transfer characteristics information.
  • the first signal processing unit 65a presets the impulse response {hRR(t, θ)} representative of the transfer characteristics of the sound reproduced from the right channel audio signal SR to the right ear.
  • the second signal processing unit 65b presets the impulse response {hRL(t, θ)} representative of the transfer characteristics of the sound reproduced from the right channel audio signal SR to the left ear.
  • the third signal processing unit 65c presets the impulse response {hLR(t, θ)} representative of the transfer characteristics of the sound reproduced from the left channel audio signal SL to the right ear.
  • the fourth signal processing unit 65d presets the impulse response {hLL(t, θ)} representative of the transfer characteristics of the sound reproduced from the left channel audio signal SL to the left ear.
  • the right channel audio signal SR is fed to the first and second signal processing units 65a and 65b.
  • in the first signal processing unit 65a, the right channel audio signal SR is subjected to a signal processing of convolution-integration with the impulse response {hRR(t, θ)}.
  • in the second signal processing unit 65b, the right channel audio signal SR is subjected to a signal processing of convolution-integration with the impulse response {hRL(t, θ)}.
  • the left channel audio signal SL is fed to the third and fourth signal processing units 65c and 65d.
  • in the third signal processing unit 65c, the left channel audio signal SL is subjected to a signal processing of convolution-integration with the impulse response {hLR(t, θ)}.
  • in the fourth signal processing unit 65d, the left channel audio signal SL is subjected to a signal processing of convolution-integration with the impulse response {hLL(t, θ)}.
  • the output signals from the first and third signal processing units 65a and 65c are applied to the right channel adder 66R and are added with each other therein.
  • the output signal of the right channel adder 66R is fed as the right channel audio signal ER via the right channel amplifier 68R to the right channel headphone 42R of the headphone set 40 and reproduced as a sound.
  • the output signals from the second and fourth signal processing units 65b and 65d are applied to the left channel adder 66L and are added with each other therein.
  • the output signal of the left channel adder 66L is fed as the left channel audio signal EL via the left channel amplifier 68L to the left channel headphone 42L of the headphone set 40 and reproduced as a sound.
  • information on two transfer characteristics in the vicinity of the rotational angular position represented by the current angular positional information is read from the storing circuit 62 based upon the current angular positional information calculated by the angle calculating circuit 60.
  • the transfer characteristics information for the current rotational angular position is then obtained by a linear interpolation processing in the interpolation operation and processing circuit 61.
  • in the audio signal reproducing apparatus of the present invention, information on at least two transfer characteristics in the vicinity of the rotational angular position of the head represented by the detection output of the detecting means, which detects the rotational angular position of the head of the listener at a resolution higher than that of the transfer characteristics information stored in the storing means, is read from the storing means.
  • the transfer characteristics information for the rotational angular position of the head represented by the detection output is then obtained by the interpolation operation means. Accordingly, the amount of information on the transfer characteristics stored in the storing means can be reduced.
  • the audio signal processing means processes the left and right channel audio signals based upon the transfer characteristics information determined by the interpolation operation means. The processed audio signals are supplied to the headphones, so that a proper binaural reproduction can be achieved, providing a very natural sound image localization sensation in which the positions of the virtual sound sources do not move even if the listener moves.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Stereophonic Arrangements (AREA)

Abstract

An apparatus for binaurally reproducing acoustic signals using a headphone device. In the acoustic signal reproducing apparatus, a memory means stores information about the transfer characteristics of the paths from an imaginary source of sound to both ears of a listener in at least one quadrant around the listener. The transfer characteristics information stored in the memory means is used to form transfer characteristics information at the angular position of the head represented by the outputs of detecting means, and acoustic signals of the right and left channels are processed accordingly by acoustic signal processing means. In the acoustic signal reproducing apparatus, furthermore, at least two pieces of transfer characteristics information near the angular position of the head of the listener are read out from the memory means, and acoustic signals of the right and left channels are processed by acoustic signal processing means based on transfer characteristics information at the angular position of the head found by interpolation by the interpolation calculation means. Even when the listener moves, the acoustic signal reproducing apparatus maintains proper binaural reproduction without causing the imaginary source of sound to move, even though a headphone device is used.

Description

    FIELD OF THE INVENTION
  • The present invention relates to an audio signal binaural reproducing apparatus for reproducing audio signals by means of headphones.
  • BACKGROUND OF THE INVENTION
  • A binaural reproducing method has heretofore been known as an approach for providing better direction sensation of sound image or outside head localization sensation when audio signals are reproduced by headphones fitted to the head of a listener so that a pair of headphones are located in the vicinity of both ears.
  • An audio reproducing system adopting this binaural system preliminarily applies a given signal processing to the audio signals reproduced by headphones as is described in, for example, specification of Japanese Patent Publication Sho 53-283.
  • The direction sensation of the sound image, the outside head localization sensation and the like depend upon the differences in volume, time and phase of the sounds heard by the left and right ears.
  • The signal processing aims at producing, in the audio output reproduced by the headphones, audio effects equivalent to those caused by the differences in distance between the sound sources, that is, the speaker systems, and the right and left ears of the listener, and by reflections and diffractions in the vicinity of the head of the listener, when audio reproduction is performed, for example, by speaker systems remote from the listener. Such a signal processing is performed by convolution-integrating the left and right ear audio signals with impulse responses corresponding to the above-mentioned audio effects, as sketched below.
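  • As an illustration only and not part of the patent text, this convolution-integration can be sketched as below. The function name, the notation hLL, hLR, hRL, hRR (source channel to ear, matching the notation used later in the description) and the assumption that both channel signals and all impulse responses have equal lengths are ours:

```python
import numpy as np

def binaural_render(s_l, s_r, h_ll, h_lr, h_rl, h_rr):
    """Convolution-integration of the two source channels with four
    head-related impulse responses (hXY: source channel X to ear Y).

    e_l (left earphone)  = s_l * h_ll + s_r * h_rl
    e_r (right earphone) = s_l * h_lr + s_r * h_rr
    where '*' denotes discrete convolution; s_l and s_r, and the four
    impulse responses, are assumed to be 1-D arrays of equal length.
    """
    e_l = np.convolve(s_l, h_ll) + np.convolve(s_r, h_rl)
    e_r = np.convolve(s_l, h_lr) + np.convolve(s_r, h_rr)
    return e_l, e_r
```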
  • Since the absolute position of the sound image is not changed even if the listener moves or turns his or her head when audio reproduction is performed by speaker systems remote from the listener, the relative direction and position of the sound image that the listener senses are changed. In contrast to this, since the headphones are turned together with the listener's head if the listener turns his or her head when audio reproduction is performed by a binaural method using headphones, the relative direction and position of the sound image which the listener senses are not changed.
  • If binaural reproducing is performed by using headphones in such a manner, a sound image is created in the head of a listener due to differences in displacement of the sound image relative to a change in direction of the listener's head. Therefore, it is difficult to locate the sound image in front of the listener. Furthermore, the front sound image has a tendency to lift up.
  • Accordingly, an audio signal reproducing system which detects a change in the direction of the listener's head and changes the modes of the signal processing based upon a result of the detection for providing a good front localization sensation in headphones has heretofore been proposed, as is disclosed in Japanese Unexamined Patent Publication No. Sho 42-227 and Japanese Examined Patent Publication No. 54-19242. In such an audio signal reproducing system, a direction detecting device such as a gyrocompass or a magnetic needle is provided on the head of the listener. A level adjusting circuit, a delay circuit and the like for processing the audio signals are controlled based upon a result of detection from the direction detecting device so that a sound image sensation which is similar to that of audio reproduction using speaker systems remote from the listener is obtained.
  • In the prior art binaural reproducing system in which headphones are provided with a direction detecting device comprising a gyrocompass, an excellent sound image can be obtained by controlling the content of the signal processing which is applied to the audio signals depending upon changes in direction of the listener's head. JP-A-1121 000 discloses an audio signal reproducing apparatus according to the preamble of Claim 1. A similar arrangement is disclosed in JP-A-58 116 900.
  • In order to control the content of the signal processing applied to the audio signals depending upon a change in direction of the listener's head, it is necessary to preliminarily measure the impulse responses, that is, transfer characteristics corresponding to audio effects given to audio signals of left and right ears for each predetermined rotational angle and to store a great amount of information on the transfer characteristics. The information is read from the storing means depending upon the change in direction of the head. The audio signal will be subjected to a necessary convolution-integration processing in real-time.
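  • A rough, purely illustrative estimate (every figure below is an assumption, not a value from the patent) shows why storing transfer characteristics for every rotational angle quickly becomes expensive, and how restricting the stored data to one quadrant at a coarser angular spacing reduces it:

```python
# Illustrative storage estimate; all figures are assumptions.
TAPS = 512           # taps per impulse response
BYTES_PER_TAP = 2    # 16-bit samples
RESPONSES = 4        # hRR, hRL, hLR, hLL per stored angle

full_circle = 360 * RESPONSES * TAPS * BYTES_PER_TAP             # 1 degree spacing, all quadrants
one_quadrant = (90 // 5 + 1) * RESPONSES * TAPS * BYTES_PER_TAP  # 5 degree spacing, first quadrant

print(f"full circle at 1 degree spacing : {full_circle / 1024:.0f} KiB")
print(f"one quadrant at 5 degree spacing: {one_quadrant / 1024:.0f} KiB")
```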
  • The present invention was made with a view to overcome such drawbacks.
  • It is an object of the present invention to provide an audio signal reproducing apparatus which has a simple structure using storing means of low storage capacity, and which is capable of performing binaural reproduction through headphones that provides a very natural localization of the sound image, in which the positions of the virtual sound sources are not changed even if the listener moves, by reducing the amount of information on the transfer characteristics from the virtual sound sources to both ears of the listener that is necessary for binaural reproduction of the audio signals.
  • An audio signal reproducing apparatus of the present invention comprises means for detecting the rotational angular position of the head of a listener corresponding to the movement of the head of the listener relative to virtual sound sources;
    • means for storing transfer characteristics information representative of transfer characteristics of direct sounds from virtual sound sources to each ear of the listener for predetermined rotational angular positions; and
    • audio signal processing means for forming, based upon the transfer characteristics information stored in said storing means, transfer characteristic information appropriate to the rotational angular position of the head and for processing left and right channel audio signals using said transfer characteristic information, whereby audio signals which have been processed by said audio signal processing means are reproduced as sounds by headphone means; characterised in that:
    • said means for storing stores transfer characteristics information of predetermined angular positions in a quadrant and said audio signal processing means derives information for the other quadrants from the stored information.
  • The present invention will be further described hereinafter with reference to the following description of exemplary embodiments and the accompanying figures, in which:
    • Fig. 1 is a block diagram schematically showing the structure of an audio signal reproducing apparatus of the present invention;
    • Fig. 2 is a time chart schematically showing signals supplied to an operation unit of the audio signal reproducing apparatus;
    • Fig. 3 is a schematic diagram illustrating the distance and the angle calculated by the operation unit of the audio signal reproducing apparatus;
    • Fig. 4 is a view for explaining the information on the transfer characteristics stored in a storing circuit of the operation unit in the audio signal reproducing apparatus;
    • Fig. 5 is a plan view showing the relative positional relation between virtual sound sources and a listener for explaining the operation of binaural reproducing performed by the audio signal reproducing apparatus; and
    • Fig. 6 is a block diagram schematically showing another structure of the audio signal reproducing apparatus of the present invention.
    BEST MODE FOR EMBODYING THE INVENTION
  • The audio signal reproducing apparatus of the present invention comprises a headphone set 10 which is fitted over the head M of a listener P, with a pair of headphones 2L and 2R supported by a head band 1 so that they are located in the vicinity of the left and right ears of the listener P, respectively, as shown in Fig. 1.
  • Two sliders 4L and 4R, from which support arms 3L and 3R respectively project, are slidably mounted on the head band 1 of the headphone set 10. A pair of signal detectors 5L and 5R which detect a position detection reference signal emitted from a reference signal source 11 are provided at the tip ends of the support arms 3L and 3R, respectively. That is, the pair of signal detectors 5L and 5R are provided on the tip ends of the support arms 3L and 3R projecting from the sliders 4L and 4R which are slidably mounted on the head band 1, so that they are supported in positions remote from the head band 1 and the pair of headphones 2L and 2R, that is, the main body of the headphone set.
  • In the present embodiment, the reference signal source 11 comprises an ultrasonic signal source 12 and an ultrasonic speaker 13 for generating an ultrasonic signal from the ultrasonic signal source 12 as a reference signal. Each of the pair of signal detectors 5L and 5R which receive the reference signal comprises an ultrasonic microphone.
  • An ultrasonic wave generated from the ultrasonic speaker 13, that is, the position detection reference signal, is a burst wave in which an ultrasonic wave having a given level is intermittently generated for a given period of time, as shown at A in Fig. 2, or an ultrasonic wave whose phase can be detected, such as a so-called level modulated wave, the level of which changes in a given cycle.
  • The pair of signal detectors 5L and 5R provided on the headphone set 10 detect the ultrasonic position detection reference signal generated from the ultrasonic speaker 13 and generate respective detection signals, shown at B and C in Fig. 2, each having a time lag depending upon the relative positional relation between the listener P and the ultrasonic speaker 13.
  • Since the pair of signal detectors 5L and 5R are supported by the support arms 3L and 3R in positions remote from the main body of the headphone set 10, being mounted on the tip ends of the support arms 3L and 3R which project from the sliders 4L and 4R slidably mounted on the head band 1, while the head band 1 and the pair of headphones 2L and 2R, that is, the main body of the headphone set, are fitted on the head M of the listener P, they can detect the ultrasonic wave generated from the ultrasonic speaker 13, that is, the position detection reference signal, stably and accurately without being hidden behind the head M of the listener P, even if the listener P moves or rotates his head M. The pair of signal detectors 5L and 5R can be adjusted to positions optimal for detecting the position detection reference signal by sliding the sliders 4L and 4R along the head band 1. For example, the optimal positions of the headphones 2L and 2R, which are fitted on the head M of the listener P by the head band 1 so that they correspond to the vicinity of the left and right ears, depend on the shape and size of the head M of the listener P, that is, differ among individuals. Accordingly, the positions of the pair of signal detectors 5L and 5R can be adjusted so that they correspond to the headphones 2L and 2R, respectively.
  • Each detection signal obtained by these signal detectors 5L and 5R is applied to an operation unit 14.
  • The operation unit 14 comprises first and second edge detecting circuits 15 and 16, to which the detection signals from the signal detectors 5L and 5R for detecting the position detection reference signal are supplied, respectively, and a third edge detecting circuit 17 to which an ultrasonic signal from the ultrasonic signal source 12, that is, the position detection reference signal, is applied.
  • The first and second edge detecting circuits 15 and 16 detect rise-up edges of the detection signals generated from the signal detectors 5L and 5R, respectively, and output pulse signals corresponding to the rise-up edges, shown at D and E in Fig. 2. The pulse signals generated by the first and second edge detecting circuits 15 and 16 are supplied to a distance calculating circuit 18 and a circuit 19 for detecting the time difference between both ears. The third edge detecting circuit 17 detects the rise-up edge of the ultrasonic signal from the ultrasonic signal source 12 and outputs a pulse signal corresponding to the rise-up edge, as shown at F in Fig. 2. The pulse signal obtained by the third edge detecting circuit 17 is supplied to the distance calculating circuit 18.
  • The distance calculating circuit 18 detects the time difference t1 between pulse signals obtained by the third and first edge detecting circuits 17 and 15 which is represented as ΔT1 in Fig. 2 and the time difference t2 between pulse signals obtained by the third and second edge detecting circuits 17 and 16 which is represented as ΔT2 in Fig. 2 and then calculates the distance ℓ0 between the ultrasonic speaker 13 and the center of the head M of the listener P represented as ℓ0 in Fig. 3 based upon the time differences t1, t2 and the sound velocity V.
  • The sound velocity V may be preliminarily preset as a constant in the distance calculating circuit 18 or alternatively may be changed with changes in atmospheric temperature, humidity, atmospheric pressure and the like. On calculating the distance ℓ0, compensation may be conducted for the positional relation between the signal detectors 5L and 5R and the center of the head M, and for the shape and size of the head M.
  • Signals representative of the distance ℓ0, time differences t1 and t2 are fed to an angle calculating circuit 20.
  • The circuit 19 for detecting the time difference between both ears detects the time difference t3 between the pulse signals generated by the first and second edge detecting circuits 15 and 16, represented as ΔT3 in Fig. 2. A signal representative of the time difference t3 is fed to the angle calculating circuit 20.
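  • As an illustration only, the role of the edge detecting circuits 15 to 17 and of the circuits 18 and 19 up to the point of producing the time differences t1, t2 and t3 can be sketched in software as follows; the sampled-signal representation, the threshold, the sampling rate fs and the sign convention for t3 are assumptions:

```python
import numpy as np

def rise_edge_index(x, threshold):
    """Index of the first sample where x crosses the threshold from below
    (a software stand-in for an edge detecting circuit)."""
    above = np.asarray(x) >= threshold
    edges = np.flatnonzero(~above[:-1] & above[1:])
    return int(edges[0]) + 1 if edges.size else None

def time_differences(ref, det_l, det_r, fs, threshold=0.5):
    """t1, t2: delays from the reference burst to the detection signals of
    the detectors 5L and 5R; t3: delay between the two detection signals."""
    n_ref = rise_edge_index(ref, threshold)
    n_l = rise_edge_index(det_l, threshold)
    n_r = rise_edge_index(det_r, threshold)
    t1 = (n_l - n_ref) / fs   # corresponds to Delta T1 in Fig. 2
    t2 = (n_r - n_ref) / fs   # corresponds to Delta T2 in Fig. 2
    t3 = (n_r - n_l) / fs     # time difference between both ears
    return t1, t2, t3
```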
  • The angle calculating circuit 20 calculates an angle representative of the direction of the head M, represented by an arrow θ0 in Fig. 3, by using the time differences t1, t2, t3, the distance ℓ0, the sound velocity V and the radius r of the head M. The angle θ0 can be determined, for example, by equation (1) as follows:

    θ0 ≈ sin⁻¹{V²(t1 + t2)t3 / (4rℓ0)}     (1)

    Then, the rotation angle θ of the head M relative to a desired position of a virtual sound source is calculated from the information on the angle θ0 and the distance ℓ0, which are representative of the relative positional relationship between a reference position and the listener P, by assuming the position of the ultrasonic speaker 13 to be the reference position of the virtual sound source.
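  • The distance and angle calculations of the circuits 18 and 20 can be sketched as below (illustration only). Approximating ℓ0 as the mean of the two source-to-detector path lengths, and the default values used for the sound velocity V and the head radius r, are assumptions; the angle follows equation (1) above:

```python
import numpy as np

def head_distance_and_angle(t1, t2, t3, v=343.0, r=0.09):
    """Return (l0, theta0): the distance from the ultrasonic speaker to the
    centre of the head and the direction angle of the head in radians.

    l0 is approximated here as the mean of the two propagation path
    lengths v*t1 and v*t2; theta0 follows
        theta0 ~ arcsin( v**2 * (t1 + t2) * t3 / (4 * r * l0) ),
    which, with this choice of l0, reduces to arcsin(v * t3 / (2 * r)).
    """
    l0 = v * (t1 + t2) / 2.0
    sin_theta = v ** 2 * (t1 + t2) * t3 / (4.0 * r * l0)
    theta0 = np.arcsin(np.clip(sin_theta, -1.0, 1.0))
    return l0, theta0
```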
  • Information on the rotation angle of the head of the listener obtained by the angle calculating circuit 20 is provided to a control circuit 21.
  • In the audio signal reproducing apparatus of this embodiment, the operation unit 14 includes a storing circuit 22 which stores information on transfer characteristics from the virtual sound source to both ears of the listener in at least the first quadrant of the rotational angular position of the head of the listener, for example, information on the transfer characteristics for each angle θ11 to θ1n in the first quadrant.
  • Based upon the current angular position calculated by the angle calculating circuit 20, the control circuit 21 reads from the storing circuit 22 the transfer characteristics information corresponding to the current angles θ11 to θ1n if the current angular position is in the first quadrant in Fig. 4; reads the transfer characteristics information in which the current angles θ21 to θ2n correspond to the angles θ11 to θ1n of the first quadrant if the current angular position is in the second quadrant in Fig. 4; reads the transfer characteristics information in which the current angles θ31 to θ3n correspond to the angles θ11 to θ1n of the first quadrant if the current angular position is in the third quadrant in Fig. 4; and reads the transfer characteristics information in which the current angles θ41 to θ4n correspond to the angles θ11 to θ1n of the first quadrant if the current angular position is in the fourth quadrant in Fig. 4. The control circuit 21 supplies the read transfer characteristics information to an audio signal processing circuit 23 together with a signal representative of the quadrant in which the current angular position is located.
  • Since the head of the listener is substantially spherical and rotationally symmetric, the transfer characteristics from the virtual sound sources to both ears of the listener can be treated as symmetrical in each quadrant, as in the folding sketched below.
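  • One possible way of folding an arbitrary head rotation angle onto the stored first-quadrant data is sketched below (illustration only). The text states only that the characteristics can be treated as symmetrical in each quadrant; the particular correspondence chosen here and the returned flags are assumptions consistent with the switching described below:

```python
def map_to_first_quadrant(theta_deg):
    """Return (quadrant, theta_1q, swap_lr, rear) for a head angle in degrees.

    quadrant : 1..4
    theta_1q : equivalent first-quadrant angle (0..90 degrees) whose stored
               transfer characteristics are used
    swap_lr  : True for quadrants 2 and 4 (left/right channels exchanged)
    rear     : True for quadrants 3 and 4 (low-pass filtering applied)
    """
    theta = theta_deg % 360.0
    quadrant = int(theta // 90.0) + 1
    if quadrant == 1:
        theta_1q = theta
    elif quadrant == 2:
        theta_1q = 180.0 - theta     # mirror about 90 degrees
    elif quadrant == 3:
        theta_1q = theta - 180.0     # front/rear correspondence
    else:
        theta_1q = 360.0 - theta
    return quadrant, theta_1q, quadrant in (2, 4), quadrant in (3, 4)
```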
  • Alternatively, in the control circuit 21, two transfer characteristics in the vicinity of the rotational angular position of the head represented by the angular position information may be read from the storing circuit 22, and the information on the transfer characteristics for the current head rotational angular position may be obtained by, for example, a linear interpolation processing, as described later with reference to Fig. 6.
  • Left and right channel audio signals SL and SR which are outputted from the audio signal source 24 are supplied to the audio signal processing circuit 23.
  • The audio signal source 24 is an apparatus for outputting given left and right channel audio signals SL and SR, such as recording disc playback apparatus or radio communication receivers and the like.
  • The audio signal processing circuit 23 performs a signal processing which provides the left and right channel audio signals SL and SR fed from the audio signal source 24 with given transfer characteristics from the virtual sound sources to both ears of the listener. The audio signal processing circuit 23 comprises first to sixth switches 25L, 25R, 26L, 26R, 27L and 27R for switching the signal lines and first to fourth signal processing units 28a, 28b, 28c and 28d.
  • The first to sixth switches 25L, 25R, 26L, 26R, 27L and 27R are controlled for switching in response to a control signal from the control circuit 21 representative of the quadrant to which the current angular position belongs.
  • The first and second switches 25L and 25R switch the inputs of the left and right channel audio signals SL and SR fed from the audio signal source 24: they supply the right channel audio signal SR to the first and second signal processing units 28a and 28b and the left channel audio signal SL to the third and fourth signal processing units 28c and 28d when the current angular position is in the first or third quadrant, and supply the left channel audio signal SL to the first and second signal processing units 28a and 28b and the right channel audio signal SR to the third and fourth signal processing units 28c and 28d when the current angular position is in the second or fourth quadrant.
  • The third and fourth switches 26L and 26R switch the outputs of the left and right channel audio signals EL and ER outputted from the audio signal processing circuit 23: they select as the right channel audio signal ER the output signal of the first adder 29R, which adds the output signals of the first and third signal processing units 28a and 28c, and select as the left channel audio signal EL the output signal of the second adder 29L, which adds the output signals of the second and fourth signal processing units 28b and 28d, when the current angular position is in the first or third quadrant; they select as the right channel audio signal ER the output signal of the second adder 29L and as the left channel audio signal EL the output signal of the first adder 29R when the current angular position is in the second or fourth quadrant.
  • The fifth and sixth switches 27L and 27R switch the filtering of the left and right channel audio signals EL and ER outputted from the audio signal processing circuit 23: they output the left and right audio signals EL and ER unfiltered when the current angular position is in the first or second quadrant, and output the audio signals EL and ER from which high frequency components have been removed by low pass filters 30L and 30R when the current angular position is in the third or fourth quadrant.
  • In each of signal processing units 28a, 28b, 28c and 28d, an impulse response representative of the transfer characteristics of the left and right channel audio signals SL and SR reproduced from a pair of left and right channel speakers which are virtual sound sources facing to a listener to each ear of the listener is preset based upon information on transfer characteristics supplied from the control circuit 21.
  • In other words, the first signal processing unit 28a presets the impulse response {hRR(t, θ)} representative of transfer characteristics of the sound reproduced from the right channel audio signal SR to the right ear. The second signal processing unit 28b presets the impulse response {hRL(t, θ)} representative of the transfer characteristics of the sound reproduced from the right channel audio signal SR to the left ear. The third signal processing unit 28c presets the impulse response {hLR(t, θ)} representative of transfer characteristics of the sound reproduced form the left channel audio signal SL to the right ear. The fourth signal processing unit 28d presents the impulse response {hLL(t, θ)} representative of the transfer characteristics of the sound reproduced from the left channel audio signal SL to the left ear.
  • When the current angular position of the head of the listener is in the first quadrant, the right channel audio signal SR is fed to the first and second signal processing units 28a and 28b. In the first signal processing unit 28a, the right channel audio signal SR is subjected to a signal processing of convolution-integration of the impulse response {hRR(t, θ)}. In the second signal processing unit 28b, the right channel audio signal SR is subjected to a signal processing of convolution-integration of the impulse response {hRL(t, θ)}.
  • The left channel audio signal SL is fed to the third and fourth signal processing units 28c and 28d. In the third signal processing unit 28c, the left channel audio signal SL is subjected to a signal processing of convolution-integration of the impulse response {hLR(t, θ)}. In the fourth signal processing unit 28d, the left channel audio signal SL is subjected to a signal processing of convolution-integration of the impulse response {hLL(t, θ)}.
  • The output signals from the first and third signal processing units 28a and 28c are applied to the right channel adder 29R and are added together therein. The output signal of the right channel adder 29R is fed as the right channel audio signal ER via the right channel amplifier 31R to the right channel headphone 2R and reproduced as a sound. The output signals from the second and fourth signal processing units 28b and 28d are applied to the left channel adder 29L and are added together therein. The output signal of the left channel adder 29L is fed as the left channel audio signal EL via the left channel amplifier 31L to the left channel headphone 2L and reproduced as a sound.
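  • For illustration only, the four convolvers 28a to 28d and the adders 29R and 29L described above can be sketched as a block-wise (off-line) computation in Python. The function and variable names are hypothetical, NumPy is assumed, and a real-time implementation would use running FIR filters rather than whole-signal convolution:

```python
import numpy as np

def render_first_quadrant(s_l, s_r, h_ll, h_lr, h_rl, h_rr):
    """Block-wise sketch of signal processing units 28a-28d and adders 29R/29L.

    s_l, s_r : left/right channel source signals SL and SR (equal-length 1-D arrays)
    h_xy     : impulse response from the channel-x virtual speaker to ear y,
               e.g. h_rr plays the role of {hRR(t, theta)}; all four impulse
               responses are assumed to have the same length.
    Returns (e_l, e_r): the headphone drive signals EL and ER.
    """
    e_r = np.convolve(s_r, h_rr) + np.convolve(s_l, h_lr)   # units 28a + 28c -> adder 29R
    e_l = np.convolve(s_r, h_rl) + np.convolve(s_l, h_ll)   # units 28b + 28d -> adder 29L
    return e_l, e_r
```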
  • When the current angular position of the head of the listener is in the second quadrant, the left and right channels of the inputs and outputs are exchanged with each other and a processing similar to that of the foregoing first quadrant is performed. Accordingly, a front localization sensation is provided. When the current angular position of the head of the listener is in the third or fourth quadrant, a processing similar to that of the first or second quadrant is performed, and audio signals EL and ER from which high frequency components have been removed by the low pass filters 30L and 30R are outputted. Accordingly, a rear localization sensation can be provided.
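  • A minimal sketch of this quadrant-dependent routing is given below, assuming the quadrant index and a `render` callable (for example the first-quadrant sketch above with its impulse responses already bound). The cut-off frequency, sampling rate and filter order are illustrative assumptions, not values taken from the disclosure:

```python
from scipy.signal import butter, lfilter

def route_by_quadrant(s_l, s_r, quadrant, render, fs=44100.0, cutoff_hz=5000.0):
    """Mirror the channels in quadrants 2 and 4 (switches 25L/25R and 26L/26R)
    and low-pass the outputs in quadrants 3 and 4 (switches 27L/27R, filters 30L/30R)."""
    if quadrant in (2, 4):
        out_l, out_r = render(s_r, s_l)      # swapped inputs
        e_l, e_r = out_r, out_l              # swapped outputs
    else:
        e_l, e_r = render(s_l, s_r)
    if quadrant in (3, 4):
        b, a = butter(2, cutoff_hz / (fs / 2))   # simple 2nd-order low pass
        e_l, e_r = lfilter(b, a, e_l), lfilter(b, a, e_r)
    return e_l, e_r
```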
  • In the audio signal reproducing apparatus of the thus formed embodiment, transfer characteristics information for the rotational angular position corresponding to the movement of the head of the listener calculated by the angle calculating circuit 20 is formed based upon the transfer characteristics information of the first quadrant stored in the storing circuit 22. By performing, in the audio signal processing circuit 23 and based upon the transfer characteristics data, a signal processing of the left and right channel audio signals SL and SR which responds in real time to changes in transfer characteristics accompanying the movement of the listener P and the rotation of the head M, a good outside head localization sensation and front localization sensation are obtained in which the virtual sound sources do not move, similarly to the case in which an audio signal is reproduced by a pair of speaker systems SL and SR which face the listener P and are remote from the listener and from each other, as is shown in Figs. 5A, 5B and 5C, in which the relative positional relations between the virtual sound sources and the listener P are illustrated.
  • Fig. 5B shows that the listener P has approached the pair of speaker systems SL and SR, that is, the virtual sound sources, from the position of Fig. 5A. Fig. 5C shows that the listener P has rotated his head M toward the right speaker system SR. By performing, in the audio signal reproducing apparatus of the present embodiment, a signal processing which responds in real time to changes in transfer characteristics accompanying the movement of the listener and the rotation of the head M as mentioned above, a good outside head localization and front localization sensation in which no virtual sound source moves can be obtained, so that a binaural reproduction which responds to any of the conditions of Figs. 5A, 5B and 5C can be performed.
  • Since it suffices for the audio signal reproducing apparatus of the present invention to store in the storing means transfer characteristics information representative of the transfer characteristics from the virtual sound sources to the listener for the first quadrant of the rotational angular position of the head of the listener, the amount of transfer characteristics information to be stored in the storing means is small, and storing means having a small storage capacity can be used. The audio signal processing means forms the transfer characteristics information for the rotational angular position represented by the detection output from the detecting means, which detects the rotational angular position depending upon the movement of the head of the listener, in accordance with the transfer characteristics information of at least the first quadrant stored in the storing means, and processes the left and right channel audio signals for supplying the processed audio signals to the headphone set. Accordingly, a proper binaural reproduction can be performed which provides a very natural sound image localization sensation in which the positions of the virtual sound sources do not move even if the listener moves.
  • A second embodiment of an audio signal reproducing apparatus of the present invention will now be described in detail with reference to the drawings.
  • The audio signal reproducing apparatus of the present invention shown in Fig. 6 comprises a headphone set 40 which is fitted over the head M of a listener P and in which a pair of headphones 42L and 42R are supported by a head band 41 so that they are located in the vicinity of the left and right ears of the listener P, similarly to the embodiment shown in Fig. 1.
  • Two sliders 44L and 44R, from which support arms 43L and 43R respectively project, are slidably mounted on the head band 41 of the headphone set 40. A pair of signal detectors 45L and 45R which detect a position detection reference signal emitted from a reference signal source 51 are provided at the tip ends of the support arms 43L and 43R, respectively. That is, the pair of signal detectors 45L and 45R are provided on the tip ends of the support arms 43L and 43R projecting from the sliders 44L and 44R slidably mounted on the head band 41, so that they are supported in positions remote from the head band 41 and the pair of headphones 42L and 42R, that is, from the main body of the headphone set.
  • Also in the present embodiment, the reference signal source 51 comprises an ultrasonic signal source 52 and an ultrasonic speaker 53 for generating an ultrasonic signal from the ultrasonic signal source 52 as the reference signal. Each of the pair of signal detectors 45L and 45R which receive the reference signal comprises an ultrasonic microphone.
  • The ultrasonic wave generated from the ultrasonic speaker 53, that is, the position detection reference signal, is a burst wave in which an ultrasonic wave having a given level is intermittently generated for a given period of time, as in the first embodiment, or an ultrasonic wave whose phase can be detected, such as a so-called level-modulated wave whose level changes in a given cycle.
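  • The following short sketch illustrates, purely by way of example, what such a burst-type reference signal could look like when generated digitally; the carrier frequency, burst length and repetition period are assumed values, not taken from the disclosure:

```python
import numpy as np

def ultrasonic_burst(fs=192000, f0=40000.0, burst_ms=1.0, period_ms=20.0):
    """One period of an intermittently gated ultrasonic tone (a burst wave):
    a constant-level f0 tone switched on for burst_ms out of every period_ms."""
    t = np.arange(int(fs * period_ms / 1000.0)) / fs
    gate = (t < burst_ms / 1000.0).astype(float)
    return gate * np.sin(2.0 * np.pi * f0 * t)
```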
  • The pair of signal detectors 45L and 45R provided on the headphone set 40 detect the ultrasonic position detection reference signal generated from the ultrasonic speaker 53 and generate respective detection signals, each having a time lag depending upon the relative positional relation between the listener P and the ultrasonic speaker 53.
  • Each detection signal obtained by these signal detectors 45L and 45R is applied to an operation unit 54.
  • The operation unit 54 comprises first and second edge detecting circuits 55 and 56, to which the detection signals from the signal detectors 45L and 45R for detecting the position detection reference signal are supplied, respectively, and a third edge detecting circuit 57 to which an ultrasonic signal from the ultrasonic signal source 52, that is, the position detection reference signal, is applied.
  • The first and second edge detecting circuits 55 and 56 detect rise-up edges of the detection signals generated from the signal detectors 45L and 45R, respectively, and output pulse signals corresponding to the rise-up edges. The pulse signals generated by the first and second edge detecting circuits 55 and 56 are supplied to a distance calculating circuit 58 and to a circuit 59 for detecting the time difference between both ears. The third edge detecting circuit 57 detects the rise-up edge of the ultrasonic signal from the ultrasonic signal source 52 and outputs a pulse signal corresponding to the rise-up edge. The pulse signal obtained by the third edge detecting circuit 57 is supplied to the distance calculating circuit 58.
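  • As a simple illustration of what the edge detecting circuits 55 to 57 do, the rise-up edge of a sampled detection signal can be taken as the first sample that crosses a threshold; the threshold and the digital formulation are assumptions for this sketch:

```python
import numpy as np

def rise_edge_index(x, threshold):
    """Return the index of the first sample of x at or above threshold
    (the rise-up edge), or None if the signal never crosses it."""
    above = np.flatnonzero(np.asarray(x) >= threshold)
    return int(above[0]) if above.size else None
```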
  • The distance calculating circuit 58 detects the time difference t1 between pulse signals obtained by the third and first edge detecting circuits 57 and 55 and the time difference t2 between pulse signals obtained by the third and second edge detecting circuits 57 and 56 and then calculates the distance ℓ0 between the ultrasonic speaker 53 and the center of the head M of the listener based upon the time differences t1, t2 and the sound velocity V.
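  • The exact formula used by the distance calculating circuit 58 is not reproduced in this excerpt; a plausible minimal sketch, assuming the two detectors 45L and 45R sit symmetrically about the centre of the head so that the mean of the two propagation delays approximates the path to the head centre, is:

```python
SOUND_VELOCITY = 343.0  # m/s, assumed value of the sound velocity V

def head_center_distance(t1, t2, v=SOUND_VELOCITY):
    """Sketch of circuit 58: t1 and t2 are the delays (in seconds) between the
    reference pulse (edge detector 57) and the pulses from edge detectors 55
    and 56; the distance l0 is approximated by the mean path length."""
    return v * (t1 + t2) / 2.0
```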
  • Signals representative of the distance ℓ0, time differences t1 and t2 are fed to an angle calculating circuit 60.
  • The circuit 59 for detecting the time difference between both ears detects the time difference t3 between the pulse signals generated by the first and second edge detecting circuits 55 and 56. A signal representative of the time difference t3 is fed to the angle calculating circuit 60.
  • The angle calculating circuit 60 calculates an angle θ0 representative of the direction of the head M by using the time differences t1, t2 and t3, the distance ℓ0, the sound velocity V and the radius r of the head M, similarly to the angle calculating circuit 20 in the first embodiment.
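  • The exact expression evaluated by the angle calculating circuits 20 and 60 is not reproduced in this excerpt; a common far-field approximation, sketched below under that assumption, derives the rotation angle from the inter-detector arrival-time difference t3, the head radius r and the sound velocity V, while the sign of t3 and the comparison of t1 and t2 with the distance ℓ0 can be used to decide the quadrant:

```python
import math

def head_angle_deg(t3, r, v=343.0):
    """Far-field sketch: the path difference v*t3 between two detectors spaced
    2*r apart corresponds to a rotation of asin(v*t3 / (2*r))."""
    s = max(-1.0, min(1.0, v * t3 / (2.0 * r)))  # clamp measurement overshoot
    return math.degrees(math.asin(s))
```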
  • Information on the rotational angular position of the head of the listener obtained by the angle calculating circuit 60 is provided to an interpolation operation and processing circuit 61.
  • In the audio signal reproducing apparatus of the present embodiment, the operation unit 54 includes a storing circuit 62 in which transfer characteristics information representative of the transfer characteristics from the virtual sound sources to both ears of the listener is stored for each predetermined angle, the angular spacing of which is larger than the angular resolution of the positional information of the listener calculated by the angle calculating circuit 60.
  • The interpolation operation and processing circuit 61 reads the information on the two transfer characteristics in the vicinity of the rotational angular position of the head represented by the current angular positional information calculated by the angle calculating circuit 60 and computes the transfer characteristics at the current rotational angular position of the head by, for example, a linear interpolation processing.
  • The interpolation operation and processing circuit 61 may also read the information on more than two transfer characteristics in the vicinity of the current rotational angular position of the head represented by the angular positional information, for performing a quadratic or other higher-order interpolation processing instead of the linear interpolation processing.
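  • A minimal sketch of the linear interpolation performed by the circuit 61 is given below; the coarse angular grid (here every 10 degrees), the table layout and the function name are assumptions for illustration, and equal-length impulse responses are assumed so that they can be blended sample by sample:

```python
import numpy as np

def interpolate_ir(theta, table):
    """table maps a stored angle (deg) -> impulse response (1-D np.ndarray),
    e.g. {0: h0, 10: h10, ...}; theta is the angle from circuit 60.
    Returns the linearly interpolated impulse response at theta."""
    angles = np.array(sorted(table))
    theta = float(np.clip(theta, angles[0], angles[-1]))
    lo = angles[angles <= theta].max()
    hi = angles[angles >= theta].min()
    if lo == hi:
        return table[lo]
    w = (theta - lo) / (hi - lo)
    return (1.0 - w) * table[lo] + w * table[hi]
```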
  • The information on the transfer characteristics in the current rotational angular position obtained by the interpolation operation and processing circuit 61 is supplied to an audio signal processing circuit 63.
  • The audio signal processing circuit 63 is also supplied with left and right channel audio signals SL and SR outputted from an audio signal source 64.
  • The audio signal source 64 is a device for outputting predetermined left and right channel audio signals SL and SR and may include, for example, various recording disc playback devices, recording/playback devices, wireless receivers and the like.
  • The audio signal processing circuit 63 performs a signal processing which provides the left and right channel audio signals SL and SR fed from the audio signal source 64 with given transfer characteristics from the virtual sound sources to both ears of the listener. The audio signal processing circuit 63 comprises first through fourth signal processing units 65a, 65b, 65c and 65d, to which the transfer characteristics information for the current rotational angular position of the head obtained by the interpolation operation and processing circuit 61 is supplied. In each of the signal processing units 65a, 65b, 65c and 65d, an impulse response is preset, based upon this transfer characteristics information, which represents the transfer characteristics of the left and right channel audio signals SL and SR, reproduced from a pair of left and right channel speakers serving as virtual sound sources facing the listener, to each ear of the listener.
  • In other words, the first signal processing unit 65a presets the impulse response {hRR(t, θ)} representative of the transfer characteristics of the sound reproduced from the right channel audio signal SR to the right ear. The second signal processing unit 65b presets the impulse response {hRL(t, θ)} representative of the transfer characteristics of the sound reproduced from the right channel audio signal SR to the left ear. The third signal processing unit 65c presets the impulse response {hLR(t, θ)} representative of the transfer characteristics of the sound reproduced from the left channel audio signal SL to the right ear. The fourth signal processing unit 65d presets the impulse response {hLL(t, θ)} representative of the transfer characteristics of the sound reproduced from the left channel audio signal SL to the left ear. In the audio signal processing circuit 63, the right channel audio signal SR is fed to the first and second signal processing units 65a and 65b. In the first signal processing unit 65a, the right channel audio signal SR is subjected to a signal processing of convolution-integration of the impulse response {hRR(t, θ)}. In the second signal processing unit 65b, the right channel audio signal SR is subjected to a signal processing of convolution-integration of the impulse response {hRL(t, θ)}.
  • The left channel audio signal SL is fed to the third and fourth signal processing units 65c and 65d. In the third signal processing unit 65c, the left channel audio signal SL is subjected to a signal processing of convolution-integration of the impulse response {hLR(t, θ)}. In the fourth signal processing unit 65d, the left channel audio signal SL is subjected to a signal processing of convolution-integration of the impulse response {hLL(t, θ)}.
  • The output signals from the first and third signal processing units 65a and 65c are applied to the right channel adder 66R and are added together therein. The output signal of the right channel adder 66R is fed as the right channel audio signal ER via the right channel amplifier 68R to the right channel headphone 42R of the headphone set 40 and reproduced as a sound. The output signals from the second and fourth signal processing units 65b and 65d are applied to the left channel adder 66L and are added together therein. The output signal of the left channel adder 66L is fed as the left channel audio signal EL via the left channel amplifier 68L to the left channel headphone 42L of the headphone set 40 and reproduced as a sound.
  • In the thus formed audio signal reproducing apparatus of this embodiment, the information on the two transfer characteristics in the vicinity of the rotational angular position represented by the current angular positional information is read from the storing circuit 62 based upon the current angular positional information calculated by the angle calculating circuit 60, and the transfer characteristics information for the current rotational angular position is computed by a linear interpolation processing in the interpolation operation and processing circuit 61. By performing, in the audio signal processing circuit 63 and based upon the transfer characteristics data, a signal processing which responds in real time to changes in transfer characteristics accompanying the movement of the listener and the rotation of the head M, a good outside head localization sensation and front localization sensation are obtained in which the virtual sound sources do not move, similarly to the case in which an audio signal is reproduced by a pair of speaker systems which face the listener and are remote from the listener and from each other.
  • As mentioned above, in the audio signal reproducing apparatus of the present invention, information on at least two transfer characteristics in the vicinity of the rotational angular position of the head represented by the detection output from the detecting means, which detects the rotational angular position of the head of the listener at a resolution higher than that of the transfer characteristics information stored in the storing means, is read from the storing means, and the transfer characteristics information for the rotational angular position of the head represented by the detection output is obtained by interpolation in the interpolation operation means. Accordingly, the amount of transfer characteristics information stored in the storing means can be reduced. The audio signal processing means processes the left and right channel audio signals based upon the transfer characteristics information determined by the interpolation operation means, and the processed audio signals are supplied to the headphones, so that a proper binaural reproduction can be achieved which provides a very natural sound image localization sensation in which the positions of the virtual sound sources do not move even if the listener moves.

Claims (8)

  1. An audio signal reproducing apparatus, comprising:
    means (10, 11, 14) for detecting the rotational angular position of the head (M) of a listener corresponding to the movement of the head of the listener relative to virtual sound sources;
    means (22) for storing transfer characteristics information representative of transfer characteristics of direct sounds from virtual sound sources to each ear of the listener for predetermined rotational angular positions; and
    audio signal processing means (20, 21, 23) for forming, based upon the transfer characteristics information stored in said storing means, transfer characteristics information appropriate to the detected rotational angular position of the head and for processing left and right channel audio signals using said transfer characteristics information, whereby audio signals which have been processed by said audio signal processing means are reproduced as sounds by headphone means; characterised in that:
    said stored transfer characteristics information corresponds to predetermined angular positions in a quadrant of the rotational position of the head (M) of the listener and said audio signal processing means derives information for the other quadrants from the stored information.
  2. An apparatus according to claim 1 in which said detecting means includes
    a pair of signal detecting elements (5L, 5R) for detecting a reference signal transmitted from a reference signal source (11);
    means (18) for calculating the distance between said reference signal source and the head of the listener from the phase difference between the detection output signals from said pair of the signal detecting elements and said reference signal; and
    means (19) for detecting the time difference between the detection output signals from said pair of signal detecting elements, whereby the rotational angular position of the head of the listener is calculated by using information on the distance obtained from said distance calculating means and on the time difference obtained from said time difference detecting means.
  3. An apparatus according to claim 1 or 2 in which said storing means stores the transfer characteristics information representative of the transfer characteristics from said virtual sound sources to both ears of the listener in the first quadrant of the rotational angular position of the head.
  4. An apparatus according to claim 1, 2 or 3, and further including control means (21) which reads from said storing means the transfer characteristics information for the angular position in the stored quadrant corresponding to the rotational angular position of the head represented by the detection output from said detecting means, and which supplies said audio signal processing means (23) with the read transfer characteristics information together with a control signal representative of which quadrant the rotational angular position represented by the detection output from said detecting means belongs to.
  5. An apparatus according to claim 1, 2, 3 or 4, in which said audio signal processing means includes
    a first signal processor (28a) for applying to the right channel input audio signal a convolution-integration of the impulse response corresponding to transfer characteristics of a right channel reproduced audio signal of an input audio signal to the right ear;
    a second signal processor (28b) for applying to the right channel input audio signal a convolution-integration of the impulse response corresponding to the transfer characteristics of the right channel reproduced audio signal to the left ear;
    a third signal processor (28c) for applying to the left channel input audio signal a convolution-integration of the impulse response corresponding to the transfer characteristics of the left channel reproduced audio signal of the input audio signal to the right ear;
    a fourth signal processor (28d) for applying to the left channel input audio signal a convolution-integration of the impulse response corresponding to the transfer characteristics of the left channel reproduced audio signal to the left ear;
    first adding means (29R) for adding the output of said first signal processing unit with the output of the third signal processing unit;
    second adding means (29L) for adding the output of said second signal processing unit with the output of the fourth signal processing unit, whereby the outputs of the first and second adding means are respectively supplied to the right and left channel headphones of said headphone set (1).
  6. An apparatus according to claim 5 when dependent on claim 4, wherein said audio signal processing means includes
    a first selecting means (25R) for supplying said right channel input audio signal to said first and second signal processing units (28a, 28b) when the control signal represents the first or third quadrant and for supplying said right channel input audio signal to the third and fourth signal processing units (28c, 28d) when the control signal represents the second or fourth quadrant;
    a second selecting means (25L) for supplying the left channel input audio signal to the third and fourth signal processing units (28c, 28d) when said control signal represents the first or third quadrant and for supplying the left channel input audio signal to the first and second signal processing units (28a, 28b) when the control signal represents the second or fourth quadrant;
    a third selecting means (26R) for selecting the output of said first adding means (29R) when said control signal represents the first or third quadrant and for selecting the output of said second adding means (29L) when said control signal represents the second or fourth quadrant;
    a fourth selecting means (26L) for selecting the output of said second adding means (29L) when the control signal represents the first or third quadrant and for selecting the output of the first adding means (29R) when the control signal represents the second or fourth quadrant;
    a fifth selecting means (27R) for supplying the output of said third selecting means (26R) to the right channel headphone (2R) of the headphone set (1) when said control signal represents the first or second quadrant and for supplying the output of the third selecting means (26R) to the right channel headphone (2R) of the headphone set via a low pass filter (30R) when said control signal represents the third or fourth quadrant; and
    a sixth selecting means (27L) for supplying the output of said fourth selecting means (26L) to the left channel headphone (2L) of the headphone set (1) when said control signal represents the first or second quadrant and for supplying the output of the fourth selecting means (26L) to the left channel headphone (2L) of the headphone set via a low pass filter (30L) when the control signal represents the third or fourth quadrant.
  7. An apparatus according to any one of claims 1 to 5 and including interpolation processing means (61) which reads from said storing means information on the transfer characteristics of at least two angles in the quadrant in the vicinity of the angle corresponding to the rotational angular position of the head and which determines transfer characteristics information corresponding to the actual rotational angular position by an interpolation processing.
  8. An apparatus according to claim 7 in which said interpolation processing means (61) supplies transfer characteristics information determined by said interpolation processing to said audio signal processing means (63) together with a control signal representing which quadrant the actual rotational angular position of the head represented by the detection output from said detecting means belongs to.
EP91902738A 1990-01-19 1991-01-18 Apparatus for reproducing acoustic signals Expired - Lifetime EP0464217B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP95104929A EP0664660B1 (en) 1990-01-19 1991-01-18 Audio signal reproducing apparatus

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2008520A JP2893780B2 (en) 1990-01-19 1990-01-19 Sound signal reproduction device
JP8520/90 1990-01-19
JP2008514A JP2751512B2 (en) 1990-01-19 1990-01-19 Sound signal reproduction device
JP8514/90 1990-01-19
PCT/JP1991/000057 WO1991011080A1 (en) 1990-01-19 1991-01-18 Apparatus for reproducing acoustic signals

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP95104929.5 Division-Into 1991-01-18
EP95104929A Division EP0664660B1 (en) 1990-01-19 1991-01-18 Audio signal reproducing apparatus

Publications (3)

Publication Number Publication Date
EP0464217A1 EP0464217A1 (en) 1992-01-08
EP0464217A4 EP0464217A4 (en) 1992-06-24
EP0464217B1 true EP0464217B1 (en) 1996-06-12

Family

ID=26343043

Family Applications (2)

Application Number Title Priority Date Filing Date
EP95104929A Expired - Lifetime EP0664660B1 (en) 1990-01-19 1991-01-18 Audio signal reproducing apparatus
EP91902738A Expired - Lifetime EP0464217B1 (en) 1990-01-19 1991-01-18 Apparatus for reproducing acoustic signals

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP95104929A Expired - Lifetime EP0664660B1 (en) 1990-01-19 1991-01-18 Audio signal reproducing apparatus

Country Status (5)

Country Link
EP (2) EP0664660B1 (en)
KR (1) KR920702175A (en)
CA (1) CA2048686C (en)
DE (2) DE69120150T2 (en)
WO (1) WO1991011080A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0795698A (en) * 1993-09-21 1995-04-07 Sony Corp Audio reproducing device
EP0744881B1 (en) * 1995-05-22 2004-07-14 Victor Company Of Japan, Ltd. Headphone reproducing apparatus
JP3577798B2 (en) * 1995-08-31 2004-10-13 ソニー株式会社 Headphone equipment
FR2744871B1 (en) * 1996-02-13 1998-03-06 Sextant Avionique SOUND SPATIALIZATION SYSTEM, AND PERSONALIZATION METHOD FOR IMPLEMENTING SAME
US20090052703A1 (en) * 2006-04-04 2009-02-26 Aalborg Universitet System and Method Tracking the Position of a Listener and Transmitting Binaural Audio Data to the Listener
EP2288178B1 (en) * 2009-08-17 2012-06-06 Nxp B.V. A device for and a method of processing audio data
US9706304B1 (en) * 2016-03-29 2017-07-11 Lenovo (Singapore) Pte. Ltd. Systems and methods to control audio output for a particular ear of a user

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05165901A (en) * 1991-12-11 1993-07-02 Mutoh Ind Ltd Method and device for correcting progressive dimension

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5419242B2 (en) 1973-06-22 1979-07-13
JPS5165901A (en) * 1974-12-05 1976-06-08 Sony Corp
US4076677A (en) 1976-06-23 1978-02-28 Desoto, Inc. Aqueous copolymer dispersions and method of producing the same
JPS54109401A (en) * 1978-02-16 1979-08-28 Victor Co Of Japan Ltd Signal converter
JPS58116900A (en) * 1982-11-15 1983-07-12 Sony Corp Stereophonic reproducing device
US4893342A (en) * 1987-10-15 1990-01-09 Cooper Duane H Head diffraction compensated stereo system
JP2671329B2 (en) * 1987-11-05 1997-10-29 ソニー株式会社 Audio player

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05165901A (en) * 1991-12-11 1993-07-02 Mutoh Ind Ltd Method and device for correcting progressive dimension

Also Published As

Publication number Publication date
DE69132430T2 (en) 2001-04-05
CA2048686C (en) 2001-01-02
EP0464217A4 (en) 1992-06-24
DE69120150D1 (en) 1996-07-18
EP0664660A3 (en) 1995-08-09
KR920702175A (en) 1992-08-12
CA2048686A1 (en) 1991-07-20
DE69132430D1 (en) 2000-11-02
EP0664660B1 (en) 2000-09-27
WO1991011080A1 (en) 1991-07-25
EP0464217A1 (en) 1992-01-08
DE69120150T2 (en) 1996-12-12
EP0664660A2 (en) 1995-07-26

Similar Documents

Publication Publication Date Title
US5495534A (en) Audio signal reproducing apparatus
EP0762803B1 (en) Headphone device
KR100225546B1 (en) Apparatus for reproducing acoustic signals
EP0438281B1 (en) Acoustic signal reproducing apparatus
JP3687099B2 (en) Video signal and audio signal playback device
EP0674467B1 (en) Audio reproducing device
JP3266020B2 (en) Sound image localization method and apparatus
US7917236B1 (en) Virtual sound source device and acoustic device comprising the same
EP0464217B1 (en) Apparatus for reproducing acoustic signals
EP1161119B1 (en) Method for localizing sound image
JPH0946800A (en) Sound image controller
JP2003153398A (en) Apparatus and method for sound image localization in the front-back direction by headphones
JP2893780B2 (en) Sound signal reproduction device
JP2893779B2 (en) Headphone equipment
JPH03296400A (en) Audio signal reproducing device
JP2751514B2 (en) Sound signal reproduction device
JP2874236B2 (en) Sound signal reproduction system
JP3111455B2 (en) Sound signal reproduction device
JPH03214896A (en) Acoustic signal reproducing device
JPS5819920Y2 (en) Sound reproduction device using headphone
JPH03214895A (en) Acoustic signal reproducing device
JPH03254163A (en) Sound apparatus for vehicle
JPH1084599A (en) Headphone device
JPH11308698A (en) Spatial sound reproducing apparatus and method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19910924

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB NL

A4 Supplementary search report drawn up and despatched

Effective date: 19920507

AK Designated contracting states

Kind code of ref document: A4

Designated state(s): DE FR GB NL

17Q First examination report despatched

Effective date: 19940906

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB NL

XX Miscellaneous (additional remarks)

Free format text: TEILANMELDUNG 95104929.5 EINGEREICHT AM 18/01/91.

REF Corresponds to:

Ref document number: 69120150

Country of ref document: DE

Date of ref document: 19960718

EN Fr: translation not filed

Free format text: BO 96/45 PAGES: 193: THE MENTION OF THE NON-FILING OF THIS TRANSLATION IS TO BE DELETED. THE MENTION OF THE FILING OF THIS TRANSLATION IS PUBLISHED IN THE PRESENT BOPI

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20100208

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20100113

Year of fee payment: 20

Ref country code: DE

Payment date: 20100114

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20100101

Year of fee payment: 20

REG Reference to a national code

Ref country code: NL

Ref legal event code: V4

Effective date: 20110118

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20110117

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20110118

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20110117

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20110118