Martin Heckmann
2020 – today
- 2024
- [i1]Matthias Pijarowski, Alexander Wolpert, Martin Heckmann, Michael Teutsch: Utilizing Grounded SAM for self-supervised frugal camouflaged human detection. CoRR abs/2406.05776 (2024)
- 2020
- [j8]Martina Hasenjäger, Martin Heckmann, Heiko Wersing: A Survey of Personalization for Advanced Driver Assistance Systems. IEEE Trans. Intell. Veh. 5(2): 335-344 (2020)
- [c64]Nima Nabizadeh, Dorothea Kolossa, Martin Heckmann: MyFixit: An Annotated Dataset, Annotation Tool, and Baseline Methods for Information Extraction from Repair Manuals. LREC 2020: 2120-2128
- [c63]Nima Nabizadeh, Martin Heckmann, Dorothea Kolossa: Target-Aware Prediction of Tool Usage in Sequential Repair Tasks. LOD (2) 2020: 156-168
- [c62]Nima Nabizadeh, Martin Heckmann, Dorothea Kolossa: Hierarchy-aware Learning of Sequential Tool Usage via Semi-automatically Constructed Taxonomies. MWE-LEX 2020: 22-26
2010 – 2019
- 2019
- [j7]Andrea Schnall, Martin Heckmann: Feature-space SVM adaptation for speaker adapted word prominence detection. Comput. Speech Lang. 53: 198-216 (2019)
- [c61]Martin Heckmann, Dennis Orth, Mark Dunn, Nico Steinhardt, Bram Bolder, Dorothea Kolossa: CORA, a prototype for a cooperative speech-based on-demand intersection assistant. AutomotiveUI (adjunct) 2019: 483-488
- [c60]Diana Kleingarn, Nima Nabizadeh, Martin Heckmann, Dorothea Kolossa: Speaker-adapted neural-network-based fusion for multimodal reference resolution. SIGdial 2019: 210-214
- [c59]Nazia Attari, Martin Heckmann, David Schlangen: From Explainability to Explanation: Using a Dialogue Setting to Elicit Annotations with Justifications. SIGdial 2019: 331-335
- 2018
- [j6]Martin Heckmann: Audio-visual word prominence detection from clean and noisy speech. Comput. Speech Lang. 48: 15-30 (2018)
- [c58]Martin Heckmann, Dennis Orth, Dorothea Kolossa: "Gap after the next two vehicles": A Spatio-temporally Situated Dialog for a Cooperative Driving Assistant. ITG Symposium on Speech Communication 2018: 1-5
- [c57]Dennis Orth, Nico Steinhardt, Bram Bolder, Mark Dunn, Dorothea Kolossa, Martin Heckmann: Analysis of a Speech-Based Intersection Assistant in Real Urban Traffic. ITSC 2018: 1273-1278
- [c56]Dennis Orth, Bram Bolder, Nico Steinhardt, Mark Dunn, Dorothea Kolossa, Martin Heckmann: A Speech-Based On-Demand Intersection Assistant Prototype. Intelligent Vehicles Symposium 2018: 2048-2053
- 2017
- [c55]Dennis Orth, Nadja Schömig, Christian Mark, Monika Jagiellowicz-Kaufmann, Dorothea Kolossa, Martin Heckmann: Benefits of Personalization in the Context of a Speech-Based Left-Turn Assistant. AutomotiveUI 2017: 193-201
- [c54]Dennis Orth, Dorothea Kolossa, Martin Heckmann: Predicting driver left-turn behavior from few training samples using a maximum a posteriori method. ITSC 2017: 1-6
- [c53]Dennis Orth, Dorothea Kolossa, Milton Sarria Paja, Kersten Schaller, Andreas Pech, Martin Heckmann: A maximum likelihood method for driver-specific critical-gap estimation. Intelligent Vehicles Symposium 2017: 553-558
- 2016
- [c52]Andrea Schnall, Martin Heckmann: Balancing Gaussianity and sparseness in feature-space speaker adaptation for word prominence detection. ITG Symposium on Speech Communication 2016: 1-5
- [c51]Nadja Schömig, Martin Heckmann, Heiko Wersing, Christian Maag, Alexandra Neukum: Assistance-On-Demand: a Speech-Based Assistance System for Urban Intersections. AutomotiveUI (adjunct) 2016: 51-56
- [c50]Martin Heckmann: Feature-Level Decision Fusion for Audio-Visual Word Prominence Detection. INTERSPEECH 2016: 575-579
- [c49]Nils Magiera, H. Janssen, Martin Heckmann, Hermann Winner: Rider skill identification by probabilistic segmentation into motorcycle maneuver primitives. ITSC 2016: 379-386
- [c48]Andrea Schnall, Martin Heckmann: Comparing speaker independent and speaker adapted classification for word prominence detection. SLT 2016: 239-244
- [c47]Lea Schönherr, Dennis Orth, Martin Heckmann, Dorothea Kolossa: Environmentally robust audio-visual speaker identification. SLT 2016: 312-318
- 2015
- [c46]Andrea Schnall, Martin Heckmann: Evaluation of optical flow field features for the detection of word prominence in a human-machine interaction scenario. IJCNN 2015: 1-7
- 2014
- [c45]Martin Heckmann, Paschalis Mikias, Dorothea Kolossa: The Impact of Word Alignment Accuracy on Audio-visual Word Prominence Detection. ITG Symposium on Speech Communication 2014: 1-4
- [c44]Martin Heckmann: Steps Towards More Natural Human-Machine Interaction via Audio-Visual Word Prominence Detection. MA3HMI@INTERSPEECH 2014: 15-24
- [c43]Andrea Schnall, Martin Heckmann: Integrating sequence information in the audio-visual detection of word prominence in a human-machine interaction scenario. INTERSPEECH 2014: 2640-2644
- 2013
- [c42]Martin Heckmann, Keisuke Nakamura, Kazuhiro Nakadai: Differences in the audio-visual detection of word prominence from Japanese and English speakers. AVSP 2013: 209-214
- [c41]Martin Heckmann: Inter-speaker variability in audio-visual classification of word prominence. INTERSPEECH 2013: 1791-1795
- [c40]Samuel K. Ngouoko M, Martin Heckmann, Britta Wrede: Robust spectro-temporal speech features with model-based distribution equalization. WIAMIS 2013: 1-4
- 2012
- [j5]Irene Ayllón Clemente, Martin Heckmann, Britta Wrede: Incremental word learning: Efficient HMM initialization and large margin discriminative adaptation. Speech Commun. 54(9): 1029-1048 (2012)
- [c39]Martin Heckmann: Image Transformation based Features for the Visual Discrimination of Prominent and Non-Prominent Words. ITG Conference on Speech Communication 2012: 1-4
- [c38]Samuel K. Ngouoko M, Martin Heckmann, Britta Wrede: Spectro-temporal features with distribution equalization. SAPA@INTERSPEECH 2012: 104-109
- [c37]Martin Heckmann: Audio-visual Evaluation and Detection of Word Prominence in a Human-Machine Interaction Scenario. INTERSPEECH 2012: 2390-2393
- [c36]Martin Heckmann: Visual Contribution to Word Prominence Detection in a Playful Interaction Setting. IWSDS 2012: 241-247
- 2011
- [j4]Martin Heckmann, Bhiksha Raj, Paris Smaragdis: Preface. Speech Commun. 53(5): 591 (2011)
- [j3]Martin Heckmann, Xavier Domont, Frank Joublin, Christian Goerick: A hierarchical framework for spectro-temporal feature extraction. Speech Commun. 53(5): 736-752 (2011)
- [c35]Martin Heckmann, Claudius Gläser: Discriminant Sub-Space Projection of Spectro-Temporal Speech Features Based on Maximizing Mutual Information. INTERSPEECH 2011: 225-228
- [c34]Martin Heckmann, Kazuhiro Nakadai, Hirofumi Nakajima: Robust Intonation Pattern Classification in Human Robot Interaction. INTERSPEECH 2011: 3137-3140
- [c33]Raphael Golombek, Sebastian Wrede, Marc Hanheide, Martin Heckmann: Online data-driven fault detection for robotic systems. IROS 2011: 3011-3016
- 2010
- [j2]Claudius Gläser, Martin Heckmann, Frank Joublin, Christian Goerick: Combining Auditory Preprocessing and Bayesian Estimation for Robust Formant Tracking. IEEE Trans. Speech Audio Process. 18(2): 224-236 (2010)
- [c32]Claudius Gläser, Martin Heckmann, Frank Joublin, Christian Goerick: Robust Formant Tracking in Echoic and Noisy Environments. Sprachkommunikation 2010: 1-4
- [c31]Irene Ayllón Clemente, Martin Heckmann, Gerhard Sagerer, Frank Joublin: Multiple sequence alignment based bootstrapping for improved incremental word learning. ICASSP 2010: 5246-5249
- [c30]Martin Heckmann: Supervised vs. unsupervised learning of spectro temporal speech features. SAPA@INTERSPEECH 2010: 1-6
- [c29]Irene Ayllón Clemente, Martin Heckmann, Alexander Denecke, Britta Wrede, Christian Goerick: Incremental word learning using large-margin discriminative training and variance floor estimation. INTERSPEECH 2010: 889-892
- [c28]Martin Heckmann, Claudius Gläser, Frank Joublin, Kazuhiro Nakadai: Applying geometric source separation for improved pitch extraction in human-robot interaction. INTERSPEECH 2010: 2602-2605
- [c27]Martin Heckmann, Frank Joublin, Kazuhiro Nakadai: Pitch extraction in Human-Robot interaction. IROS 2010: 1482-1487
- [c26]Raphael Golombek, Sebastian Wrede, Marc Hanheide, Martin Heckmann: Learning a probabilistic self-awareness model for robotic systems. IROS 2010: 2745-2750
2000 – 2009
- 2009
- [c25]Christian Goerick, Jens Schmüdderich, Bram Bolder, Herbert Janssen, Michael Gienger, Achim Bendig, Martin Heckmann, Tobias Rodemann, Holger Brandl, Xavier Domont, Inna Mikhailova: Interactive online multimodal association for internal concept building in humanoids. Humanoids 2009: 411-418
- [c24]Martin Heckmann, Holger Brandl, Xavier Domont, Bram Bolder, Frank Joublin, Christian Goerick: An audio-visual attention system for online association learning. INTERSPEECH 2009: 2171-2174
- [c23]Martin Heckmann, Holger Brandl, Jens Schmüdderich, Xavier Domont, Bram Bolder, Inna Mikhailova, Herbert Janssen, Michael Gienger, Achim Bendig, Tobias Rodemann, Mark Dunn, Frank Joublin, Christian Goerick: Teaching a humanoid robot: Headset-free speech interaction for audio-visual association learning. RO-MAN 2009: 422-427
- 2008
- [c22]Xavier Domont, Martin Heckmann, Frank Joublin, Christian Goerick: Hierarchical spectro-temporal features for robust speech recognition. ICASSP 2008: 4417-4420
- [c21]Martin Heckmann, Xavier Domont, Frank Joublin, Christian Goerick: A closer look on hierarchical spectro-temporal features (HIST). INTERSPEECH 2008: 894-897
- [c20]Claudius Gläser, Martin Heckmann, Frank Joublin, Christian Goerick: Auditory-based formant estimation in noise using a probabilistic framework. INTERSPEECH 2008: 2606-2609
- [c19]Martin Heckmann, Claudius Gläser, Miguel Vaz, Tobias Rodemann, Frank Joublin, Christian Goerick: Listen to the parrot: Demonstrating the quality of online pitch and formant extraction via feature-based resynthesis. IROS 2008: 1699-1704
- 2007
- [c18]Xavier Domont, Martin Heckmann, Heiko Wersing, Frank Joublin, Christian Goerick: A hierarchical model for syllable recognition. ESANN 2007: 573-578
- [c17]Claudius Gläser, Martin Heckmann, Frank Joublin, Christian Goerick, Horst-Michael Groß: Joint Estimation of Formant Trajectories via Spectro-Temporal Smoothing and Bayesian Techniques. ICASSP (4) 2007: 477-480
- [c16]Martin Heckmann, Frank Joublin, Christian Goerick: Combining rate and place information for robust pitch extraction. INTERSPEECH 2007: 2765-2768
- [c15]Xavier Domont, Martin Heckmann, Heiko Wersing, Frank Joublin, Stefan Menzel, Bernhard Sendhoff, Christian Goerick: Word Recognition with a Hierarchical Neural Network. NOLISP 2007: 142-151
- 2006
- [c14]Martin Heckmann, Marco Moebus, Frank Joublin, Christian Goerick: Speaker independent voiced-unvoiced detection evaluated in different speaking styles. INTERSPEECH 2006
- [c13]Martin Heckmann, Tobias Rodemann, Björn Schölling, Frank Joublin, Christian Goerick: Modeling the precedence effect for binaural sound source localization in noisy and echoic environments. INTERSPEECH 2006
- [c12]Björn Schölling, Martin Heckmann, Frank Joublin, Christian Goerick: Structuring time domain blind source separation algorithms for CASA integration. SAPA@INTERSPEECH 2006: 37-41
- [c11]Martin Heckmann, Tobias Rodemann, Frank Joublin, Christian Goerick, Björn Schölling: Auditory Inspired Binaural Robust Sound Source Localization in Echoic and Noisy Environments. IROS 2006: 368-373
- [c10]Tobias Rodemann, Martin Heckmann, Frank Joublin, Christian Goerick, Björn Schölling: Real-time Sound Localization With a Binaural Head-system Using a Biologically-inspired Cue-triple Mapping. IROS 2006: 860-865
- 2005
- [c9]Martin Heckmann, Frank Joublin, Edgar Körner: Sound source separation for a robot based on pitch. IROS 2005: 2197-2202
- 2003
- [b1]Martin Heckmann: Adaptive Datenfusion für die audio-visuelle Spracherkennung. Karlsruhe Institute of Technology, 2003, ISBN 3-8322-2034-8, pp. 1-169
- [c8]Martin Heckmann, Frédéric Berthommier, Christophe Savariaux, Kristian Kroschel: Effects of image distortions on audio-visual speech recognition. AVSP 2003: 163-168
- 2002
- [j1]Martin Heckmann, Frédéric Berthommier, Kristian Kroschel: Noise Adaptive Stream Weighting in Audio-Visual Speech Recognition. EURASIP J. Adv. Signal Process. 2002(11): 1260-1273 (2002)
- [c7]Martin Heckmann, Kristian Kroschel, Christophe Savariaux, Frédéric Berthommier: DCT-based video features for audio-visual speech recognition. INTERSPEECH 2002: 1925-1928
- 2001
- [c6]Martin Heckmann, Frédéric Berthommier, Kristian Kroschel: A hybrid ANN/HMM audio-visual speech recognition system. AVSP 2001: 189-194
- [c5]Martin Heckmann, Frédéric Berthommier, Kristian Kroschel: Optimal weighting of posteriors for audio-visual speech recognition. ICASSP 2001: 161-164
- [c4]Martin Heckmann, Thorsten Wild, Frédéric Berthommier, Kristian Kroschel: Comparing audio- and a-posteriori-probability-based stream confidence measures for audio-visual speech recognition. INTERSPEECH 2001: 1023-1026
- 2000
- [c3]Martin Heckmann, Julia Vogel, Kristian Kroschel: Frequency selective step-size control for acoustic echo cancellation. EUSIPCO 2000: 1-4
- [c2]Kristian Kroschel, Martin Heckmann: Robust Noise Reduction and Echo Cancellation. PWC 2000: 249-258
- [c1]Martin Heckmann, Frédéric Berthommier, Christophe Savariaux, Kristian Kroschel: Labeling audio-visual speech corpora and training an ANN/HMM audio-visual speech recognition system. INTERSPEECH 2000: 9-12