Abstract
Our goal is a robot audition system that can recognize multiple environmental sounds and make use of them in human-robot interaction. The main problems in environmental sound recognition for robot audition are: (1) recognition under a large amount of background noise, including noise generated by the robot itself, and (2) the need for feature extraction that is robust to the spectral distortion caused by separating multiple sound sources. This paper presents the recognition of two environmental sounds emitted simultaneously, using matching pursuit (MP) with a Gabor wavelet dictionary to extract salient audio features from the signal. The two sounds come from different directions; they are localized by multiple signal classification (MUSIC) and, using this geometric information, separated by geometric source separation (GSS) with the aid of measured head-related transfer functions. The experimental results show the noise robustness of MP, although recognition performance depends on the properties of the sound sources.
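As a rough illustration of the feature-extraction step only, the sketch below implements greedy matching pursuit over a small real Gabor dictionary in Python. The dictionary parameters (scales, centres, frequencies), sample rate, and toy test signal are illustrative assumptions rather than the settings used in the paper, and the MUSIC localization and GSS separation stages are omitted.

    # A minimal sketch of matching pursuit (MP) over a real Gabor dictionary.
    # All parameter choices below are illustrative assumptions, not the paper's.
    import numpy as np

    def gabor_atom(n, centre, scale, freq, fs):
        """Unit-norm real Gabor atom of length n samples."""
        t = np.arange(n) / fs
        g = np.exp(-0.5 * ((t - centre) / scale) ** 2) * np.cos(2 * np.pi * freq * (t - centre))
        return g / np.linalg.norm(g)

    def build_dictionary(n, fs):
        """Small illustrative dictionary; a real system would use a dense multiscale grid."""
        atoms, params = [], []
        for scale in (0.01, 0.03, 0.1):                    # atom widths in seconds
            for centre in np.linspace(0, n / fs, 16):      # atom centres in seconds
                for freq in (250, 500, 1000, 2000, 4000):  # modulation frequencies in Hz
                    atoms.append(gabor_atom(n, centre, scale, freq, fs))
                    params.append((centre, scale, freq))
        return np.array(atoms), params

    def matching_pursuit(signal, atoms, n_iter=20):
        """Greedy MP: repeatedly pick the atom best correlated with the residual."""
        residual = signal.astype(float).copy()
        decomposition = []
        for _ in range(n_iter):
            correlations = atoms @ residual
            k = int(np.argmax(np.abs(correlations)))
            coeff = correlations[k]
            residual -= coeff * atoms[k]          # atoms are unit norm, so this is the projection
            decomposition.append((k, coeff))
        return decomposition, residual

    if __name__ == "__main__":
        fs, n = 16000, 2048
        t = np.arange(n) / fs
        # Toy "environmental sound": a decaying 1 kHz burst plus background noise.
        x = np.exp(-40 * t) * np.sin(2 * np.pi * 1000 * t) + 0.05 * np.random.randn(n)
        atoms, params = build_dictionary(n, fs)
        decomp, res = matching_pursuit(x, atoms, n_iter=10)
        for k, c in decomp[:5]:
            centre, scale, freq = params[k]
            print(f"atom: centre={centre:.3f}s scale={scale:.3f}s freq={freq:.0f}Hz coeff={c:.3f}")
        print("residual energy ratio:", np.linalg.norm(res) / np.linalg.norm(x))

The parameters and coefficients of the selected atoms give the kind of sparse time-frequency description that MP provides; in the system described above, such features would be computed on each separated source after localization and separation.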
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Yamakawa, N., Takahashi, T., Kitahara, T., Ogata, T., Okuno, H.G. (2011). Environmental Sound Recognition for Robot Audition Using Matching-Pursuit. In: Mehrotra, K.G., Mohan, C.K., Oh, J.C., Varshney, P.K., Ali, M. (eds) Modern Approaches in Applied Intelligence. IEA/AIE 2011. Lecture Notes in Computer Science, vol 6704. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21827-9_1
DOI: https://doi.org/10.1007/978-3-642-21827-9_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-21826-2
Online ISBN: 978-3-642-21827-9