Jean-Luc Schwartz
2020 – today
- 2022
- [j30] Elise Roger, Liliane Rodrigues de Almeida, Hélène Loevenbruck, Marcela Perrone-Bertolotti, E. Cousin, Jean-Luc Schwartz, Pascal Perrier, Marion Dohen, Anne Vilain, Pierre Baraduc, Sophie Achard, Monica Baciu: Unraveling the functional attributes of the language connectome: crucial subnetworks, flexibility and variability. NeuroImage 263: 119672 (2022)
- [c75] Mamady Nabé, Jean-Luc Schwartz, Julien Diard: Bayesian gates: a probabilistic modeling tool for temporal segmentation of sensory streams into sequences of perceptual accumulators. CogSci 2022
- [c74] Marc-Antoine Georges, Julien Diard, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber: Repeat after Me: Self-Supervised Learning of Acoustic-to-Articulatory Mapping by Vocal Imitation. ICASSP 2022: 8252-8256
- [c73] Marc-Antoine Georges, Jean-Luc Schwartz, Thomas Hueber: Self-supervised speech unit discovery from articulatory and acoustic features using VQ-VAE. INTERSPEECH 2022: 774-778
- [c72] Monica Ashokumar, Jean-Luc Schwartz, Takayuki Ito: Orofacial somatosensory inputs in speech perceptual training modulate speech production. INTERSPEECH 2022: 784-787
- [c71] Mamady Nabé, Julien Diard, Jean-Luc Schwartz: Isochronous is beautiful? Syllabic event detection in a neuro-inspired oscillatory model is facilitated by isochrony in speech. INTERSPEECH 2022: 4671-4675
- [i3] Marc-Antoine Georges, Julien Diard, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber: Repeat after me: Self-supervised learning of acoustic-to-articulatory mapping by vocal imitation. CoRR abs/2204.02269 (2022)
- [i2] Marc-Antoine Georges, Jean-Luc Schwartz, Thomas Hueber: Self-supervised speech unit discovery from articulatory and acoustic features using VQ-VAE. CoRR abs/2206.08790 (2022)
- 2021
- [c70] Marc-Antoine Georges, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber: Learning Robust Speech Representation with an Articulatory-Regularized Variational Autoencoder. Interspeech 2021: 3345-3349
- [i1] Marc-Antoine Georges, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber: Learning robust speech representation with an articulatory-regularized variational autoencoder. CoRR abs/2104.03204 (2021)
- 2020
- [j29] Thomas Hueber, Eric Tatulli, Laurent Girin, Jean-Luc Schwartz: Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning. Neural Comput. 32(3): 596-625 (2020)
- [j28] Jonathan Bucci, Paolo Lorusso, Silvain Gerber, Mirko Grimaldi, Jean-Luc Schwartz: Assessing the Representation of Phonological Rules by a Production Study of Non-Words in Coratino. Phonetica 77(6): 405-428 (2020)
- [j27] Vincent Aubanel, C. Bayard, Antje Strauß, Jean-Luc Schwartz: The Fharvard corpus: A phonemically-balanced French sentence resource for audiology and intelligibility research. Speech Commun. 124: 68-74 (2020)
2010 – 2019
- 2019
- [j26] Jonathan Bucci, Pascal Perrier, Silvain Gerber, Jean-Luc Schwartz: Vowel Reduction in Coratino (South Italy): Phonological and Phonetic Perspectives. Phonetica 76(4): 287-324 (2019)
- 2018
- [j25] Jean-François Patri, Pascal Perrier, Jean-Luc Schwartz, Julien Diard: What drives the perceptual change resulting from speech motor adaptation? Evaluation of hypotheses in a Bayesian modeling framework. PLoS Comput. Biol. 14(1) (2018)
- [c69] Tiphaine Caudrelier, Pascal Perrier, Jean-Luc Schwartz, Amélie Rochet-Capellan: Picture Naming or Word Reading: Does the Modality Affect Speech Motor Adaptation and Its Transfer? INTERSPEECH 2018: 956-960
- [c68] Marie-Lou Barnaud, Julien Diard, Pierre Bessière, Jean-Luc Schwartz: COSMO SylPhon: A Bayesian Perceptuo-motor Model to Assess Phonological Learning. INTERSPEECH 2018: 3786-3790
- 2016
- [c67] Marie-Lou Barnaud, Jean-Luc Schwartz, Julien Diard, Pierre Bessière: Sensorimotor learning in a Bayesian computational model of speech communication. ICDL-EPIROB 2016: 27-32
- [c66] Attigodu C. Ganesh, Frédéric Berthommier, Jean-Luc Schwartz: Audiovisual Speech Scene Analysis in the Context of Competing Sources. INTERSPEECH 2016: 47-51
- [c65] Marie-Lou Barnaud, Julien Diard, Pierre Bessière, Jean-Luc Schwartz: Assessing Idiosyncrasies in a Bayesian Model of Speech Communication. INTERSPEECH 2016: 2080-2084
- [c64] Tiphaine Caudrelier, Pascal Perrier, Jean-Luc Schwartz, Amélie Rochet-Capellan: Does Auditory-Motor Learning of Speech Transfer from the CV Syllable to the CVCV Word? INTERSPEECH 2016: 2095-2099
- [c63] Tiphaine Caudrelier, Pascal Perrier, Jean-Luc Schwartz, Christophe Savariaux, Amélie Rochet-Capellan: De bé à bébé : le transfert d'apprentissage auditori-moteur pour interroger l'unité de production de la parole (From sensorimotor experience to speech unit). JEP-TALN-RECITAL (1) 2016: 101-109
- 2015
- [j24] Jean-Luc Schwartz, Clément Moulin-Frier, Pierre-Yves Oudeyer: On the cognitive nature of speech sound systems. J. Phonetics 53: 1-4 (2015)
- [j23] Clément Moulin-Frier, Julien Diard, Jean-Luc Schwartz, Pierre Bessière: COSMO ("Communicating about Objects using Sensory-Motor Operations"): A Bayesian modeling framework for studying speech communication and the emergence of phonological systems. J. Phonetics 53: 5-41 (2015)
- [c62] Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz: Dynamics of audiovisual binding in elderly population. AVSP 2015
- [c61] Lucie Scarbel, Denis Beautemps, Jean-Luc Schwartz, Marc Sato: Auditory and audiovisual close-shadowing in normal and cochlear-implanted hearing impaired subjects. AVSP 2015: 143-146
- [c60] Jean-Luc Schwartz: Audiovisual binding in speech perception. AVSP 2015
- [c59] Antje Strauß, Christophe Savariaux, Sonia Kandel, Jean-Luc Schwartz: Visual lip information supports auditory word segmentation. AVSP 2015
- [c58] Marie-Lou Barnaud, Julien Diard, Pierre Bessière, Jean-Luc Schwartz: COSMO, a Bayesian computational model of speech communication: Assessing the role of sensory vs. motor knowledge in speech perception. ICDL-EPIROB 2015: 248-249
- [c57] Lucie Scarbel, Denis Beautemps, Jean-Luc Schwartz, Sébastien Schmerber, Marc Sato: Phonetic convergence and imitation of speech by cochlear implant patients. ICPhS 2015
- 2014
- [j22] Pierre Badin, Louis-Jean Boë, Thomas R. Sawallis, Jean-Luc Schwartz: Keep the lips to free the larynx: Comments on de Boer's articulatory model (2010). J. Phonetics 46: 161-167 (2014)
- [j21] Jean-Luc Schwartz, Christophe Savariaux: No, There Is No 150 ms Lead of Visual Speech on Auditory Speech, but a Range of Audiovisual Asynchronies Varying from Small Audio Lead to Large Audio Lag. PLoS Comput. Biol. 10(7) (2014)
- 2013
- [j20] Louis-Jean Boë, Pierre Badin, Lucie Ménard, Guillaume Captier, Barbara L. Davis, Peter F. MacNeilage, Thomas R. Sawallis, Jean-Luc Schwartz: Anatomy and control of the developing human vocal tract: A response to Lieberman. J. Phonetics 41(5): 379-392 (2013)
- [c56] Jean-Luc Schwartz, Christophe Savariaux: Data and simulations about audiovisual asynchrony and predictability in speech perception. AVSP 2013: 147-152
- [c55] Avril Treille, Coriandre Vilain, Thomas Hueber, Jean-Luc Schwartz, Laurent Lamalle, Marc Sato: The sight of your tongue: neural correlates of audio-lingual speech perception. AVSP 2013: 157-162
- [c54] Olha Nahorna, Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz: Modulating fusion in the McGurk effect by binding processes and contextual noise. AVSP 2013: 181-186
- [c53] Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Olha Nahorna, Jean-Luc Schwartz: Effect of context, rebinding and noise, on audiovisual speech fusion. INTERSPEECH 2013: 1643-1647
- [c52] Raphaël Laurent, Jean-Luc Schwartz, Pierre Bessière, Julien Diard: A computational model of perceptuo-motor processing in speech perception: learning to imitate and categorize synthetic CV syllables. INTERSPEECH 2013: 2797-2801
- 2012
- [j19] Jean-Luc Schwartz, Louis-Jean Boë, Pierre Badin, Thomas R. Sawallis: Grounding stop place systems in the perceptuo-motor substance of speech: On the universality of the labial-coronal-velar stop series. J. Phonetics 40(1): 20-36 (2012)
- [j18] Jussi Alho, Marc Sato, Mikko Sams, Jean-Luc Schwartz, Hannu Tiitinen, Iiro P. Jääskeläinen: Enhanced early-latency electromagnetic activity in the left premotor cortex is associated with successful phonetic categorization. NeuroImage 60(4): 1937-1946 (2012)
- [c51] Raphaël Laurent, Jean-Luc Schwartz, Pierre Bessière, Julien Diard: COSMO, un modèle bayésien de la communication parlée : application à la perception des syllabes (COSMO, a Bayesian model of speech communication, applied to syllable perception) [in French]. JEP-TALN-RECITAL 2012: 305-312
- [c50] Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz: Dynamique temporelle du liage dans la fusion de la parole audiovisuelle (Temporal dynamics of binding in audiovisual speech fusion) [in French]. JEP-TALN-RECITAL 2012: 481-488
- [c49] Ali Hadian Cefidekhanie, Christophe Savariaux, Marc Sato, Jean-Luc Schwartz: Mise au point d'un paradigme de perturbation motrice pour l'étude de la perception de la parole (Defining a motor perturbation paradigm for speech perception studies) [in French]. JEP-TALN-RECITAL 2012: 665-672
- 2011
- [c48] Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz: Binding and unbinding the McGurk effect in audiovisual speech fusion: follow-up experiments on a new paradigm. AVSP 2011: 21-24
- [c47] Frédéric Berthommier, Jean-Luc Schwartz: Audiovisual streaming in voicing perception: new evidence for a low-level interaction between audio and visual modalities. AVSP 2011: 77-80
- 2010
- [j17] Marion Dohen, Jean-Luc Schwartz, Gérard Bailly: Speech and face-to-face communication - An introduction. Speech Commun. 52(6): 477-480 (2010)
- [c46] Jean-Luc Schwartz, Kaisa Tiippana, Tobias S. Andersen: Disentangling unisensory from fusion effects in the attentional modulation of McGurk effects: a Bayesian modeling study suggests that fusion is attention-dependent. AVSP 2010: 2-1
- [c45] Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz: Binding and unbinding in audiovisual speech fusion: removing the McGurk effect by an incoherent preceding audiovisual context. AVSP 2010: 11
2000 – 2009
- 2008
- [j16] Anahita Basirat, Marc Sato, Jean-Luc Schwartz, Philippe Kahane, Jean-Philippe Lachaux: Parieto-frontal gamma band activity during the perceptual emergence of speech forms. NeuroImage 42(1): 404-413 (2008)
- [j15] Lucie Ménard, Jean-Luc Schwartz, Jérôme Aubin: Invariance and variability in the production of the height feature in French vowels. Speech Commun. 50(1): 14-28 (2008)
- [p1] Jihène E. Serkhane, Jean-Luc Schwartz, Pierre Bessière: Building a Talking Baby Robot: A Contribution to the Study of Speech Acquisition and Evolution. Probabilistic Reasoning and Decision Making in Sensory-Motor Systems 2008: 329-357
- 2007
- [j14] Lucie Ménard, Jean-Luc Schwartz, Louis-Jean Boë, Jérôme Aubin: Articulatory-acoustic relationships during vocal tract growth for French vowels: Analysis of real data and simulations with an articulatory model. J. Phonetics 35(1): 1-19 (2007)
- [j13] Jihène Serkhane, Jean-Luc Schwartz, Louis-Jean Boë, Barbara L. Davis, Christine L. Matyear: Infants' vocalizations analyzed with an articulatory model: A preliminary report. J. Phonetics 35(3): 321-340 (2007)
- [c44] Anahita Basirat, Marc Sato, Jean-Luc Schwartz: Audiovisual verbal transformations, as a way to study audiovisual interactions in speech perception. AVSP 2007: 19
- [c43] Amélie Rochet-Capellan, Jean-Luc Schwartz, Rafael Laboissière, Arturo Galvàn: Pointing to a target while naming it with /pata/ or /tapa/: the effect of consonants and stress position on jaw-finger coordination. INTERSPEECH 2007: 634-637
- 2006
- [c42] David Sodoyer, Bertrand Rivet, Laurent Girin, Jean-Luc Schwartz, Christian Jutten: An Analysis of Visual Speech Information Applied to Voice Activity Detection. ICASSP (1) 2006: 601-604
- 2005
- [j12] Jean-Luc Schwartz, Christian Abry, Louis-Jean Boë, Lucie Ménard, Nathalie Vallée: Asymmetries in vowel perception, in the context of the Dispersion-Focalisation Theory. Speech Commun. 45(4): 425-434 (2005)
- [c41] Amélie Rochet-Capellan, Jean-Luc Schwartz: The labial-coronal effect and CVCV stability during reiterant speech production: an acoustic analysis. INTERSPEECH 2005: 1009-1012
- [c40] Amélie Rochet-Capellan, Jean-Luc Schwartz: The labial-coronal effect and CVCV stability during reiterant speech production: an articulatory analysis. INTERSPEECH 2005: 1013-1016
- 2004
- [j11] Marc Sato, Monica Baciu, Hélène Loevenbruck, Jean-Luc Schwartz, Marie-Agnès Cathiard, Christoph Segebarth, Christian Abry: Multistable representation of speech forms: a functional MRI study of verbal transformations. NeuroImage 23(3): 1143-1151 (2004)
- [j10] Jean-Luc Schwartz, Frédéric Berthommier, Marie-Agnès Cathiard, Renato de Mori: Editorial. Speech Commun. 44(1-4): 1-3 (2004)
- [j9] David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz: Developing an audio-visual speech source separation algorithm. Speech Commun. 44(1-4): 113-125 (2004)
- [j8] Marion Dohen, Hélène Loevenbruck, Marie-Agnès Cathiard, Jean-Luc Schwartz: Visual perception of contrastive focus in reiterant French speech. Speech Commun. 44(1-4): 155-172 (2004)
- [c39] Jean-Luc Schwartz, Marie-Agnès Cathiard: Modeling audio-visual speech perception: back on fusion architectures and fusion control. INTERSPEECH 2004: 2017-2020
- [c38] Bertrand Rivet, Laurent Girin, Christian Jutten, Jean-Luc Schwartz: Using audiovisual speech processing to improve the robustness of the separation of convolutive speech mixtures. MMSP 2004: 47-50
- 2003
- [c37] Jean-Luc Schwartz, Frédéric Berthommier, Christophe Savariaux: Auditory syllabic identification enhanced by non-informative visible speech. AVSP 2003: 19-24
- [c36] Jean-Luc Schwartz: Why the FLMP should not be applied to McGurk data ...or how to better compare models in the Bayesian framework. AVSP 2003: 77-82
- [c35] David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz: Further experiments on audio-visual speech source separation. AVSP 2003: 145-150
- [c34] Marion Dohen, Hélène Loevenbruck, Marie-Agnès Cathiard, Jean-Luc Schwartz: Audiovisual perception of contrastive focus in French. AVSP 2003: 245-250
- [c33] Marion Dohen, Hélène Loevenbruck, Marie-Agnès Cathiard, Jean-Luc Schwartz: Potential audiovisual correlates of contrastive focus in French. INTERSPEECH 2003: 145-148
- [c32] David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz: Extracting an AV speech source from a mixture of signals. INTERSPEECH 2003: 1393-1396
- [c31] David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz: Speech extraction based on ICA and audio-visual coherence. ISSPA (2) 2003: 65-68
- [e1] Jean-Luc Schwartz, Frédéric Berthommier, Marie-Agnès Cathiard, David Sodoyer: AVSP 2003 - International Conference on Audio-Visual Speech Processing, St. Jorioz, France, September 4-7, 2003. ISCA 2003 [contents]
- 2002
- [j7] David Sodoyer, Jean-Luc Schwartz, Laurent Girin, Jacob Klinkisch, Christian Jutten: Separation of Audio-Visual Speech Sources: A New Approach Exploiting the Audio-Visual Coherence of Speech Stimuli. EURASIP J. Adv. Signal Process. 2002(11): 1165-1173 (2002)
- [c30] Jihène Serkhane, Jean-Luc Schwartz, Louis-Jean Boë, Barbara L. Davis, Christine L. Matyear: Motor specifications of a baby robot via the analysis of infants' vocalizations. INTERSPEECH 2002: 45-48
- [c29] Marc Sato, Jean-Luc Schwartz, Marie-Agnès Cathiard, Christian Abry, Hélène Loevenbruck: Intrasyllabic articulatory control constraints in verbal working memory. INTERSPEECH 2002: 669-672
- [c28] Lynne E. Bernstein, Denis Burnham, Jean-Luc Schwartz: Special session: issues in audiovisual spoken language processing (when, where, and how?). INTERSPEECH 2002: 1445-1448
- [c27] Jean-Luc Schwartz, Frédéric Berthommier, Christophe Savariaux: Audio-visual scene analysis: evidence for a "very-early" integration process in audio-visual speech perception. INTERSPEECH 2002: 1937-1940
- [c26] David Sodoyer, Laurent Girin, Christian Jutten, Jean-Luc Schwartz: Audio-visual speech sources separation: a new approach exploiting the audio-visual coherence of speech stimuli. INTERSPEECH 2002: 1953-1956
- 2001
- [c25] Jean-Luc Schwartz, Christophe Savariaux: Is it easier to lipread one's own speech gestures than those of somebody else? it seems not! AVSP 2001: 18-23
- [c24] Marie-Agnès Cathiard, Jean-Luc Schwartz, Christian Abry: Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]? AVSP 2001: 138-142
- [c23] Lucie Ménard, Jean-Luc Schwartz, Louis-Jean Boë, Sonia Kandel, Nathalie Vallée: Perceptual identification and normalization of synthesized French vowels from birth to adulthood. INTERSPEECH 2001: 163-166
- [c22] Laurent Girin, A. Allard, Jean-Luc Schwartz: Speech signals separation: a new approach exploiting the coherence of audio and visual speech. MMSP 2001: 631-636
1990 – 1999
- 1999
- [j6] Pascal Teissier, Jordi Robert-Ribes, Jean-Luc Schwartz, Anne Guérin-Dugué: Comparing models for audiovisual fusion in a noisy-vowel recognition task. IEEE Trans. Speech Audio Process. 7(6): 629-642 (1999)
- [c21] François Gaillard, Frédéric Berthommier, Gang Feng, Jean-Luc Schwartz: A reliability criterion for time-frequency labeling based on periodicity in an auditory scene. EUROSPEECH 1999: 2603-2606
- 1998
- [j5] Pascal Teissier, Anne Guérin-Dugué, Jean-Luc Schwartz: Models for Audiovisual Fusion in a Noisy-Vowel Recognition Task. J. VLSI Signal Process. 20(1-2): 25-44 (1998)
- [c20] Jon P. Barker, Frédéric Berthommier, Jean-Luc Schwartz: Is Primitive AV Coherence An Aid To Segment The Scene? AVSP 1998: 103-108
- [c19] Marie-Agnès Cathiard, Christian Abry, Jean-Luc Schwartz: Visual Perception of Glides Versus Vowels: The Effect of Dynamic Expectancy. AVSP 1998: 115-120
- [c18] Laurent Girin, Gang Feng, Jean-Luc Schwartz: Fusion of auditory and visual information for noisy speech enhancement: a preliminary study of vowel transitions. ICASSP 1998: 1005-1008
- [c17] Laurent Girin, Laurent Varin, Gang Feng, Jean-Luc Schwartz: A signal processing system for having the sound "pop-out" in noise thanks to the image of the speaker's lips: new advances using multi-layer perceptrons. ICSLP 1998
- [c16] Laurent Girin, Laurent Varin, Gang Feng, Jean-Luc Schwartz: Audiovisual speech enhancement: new advances using multi-layer perceptrons. MMSP 1998: 77-82
- 1997
- [c15] Laurent Girin, Jean-Luc Schwartz, Gang Feng: Can the visual input make the audio signal "pop out" in noise? A first study of the enhancement of noisy VCV acoustic sequences by audio-visual fusion. AVSP 1997: 37-40
- [c14] François Gaillard, Frédéric Berthommier, Gang Feng, Jean-Luc Schwartz: A modified zero-crossing method for pitch detection in presence of interfering sources. EUROSPEECH 1997: 445-448
- [c13] Pascal Teissier, Jean-Luc Schwartz, Anne Guérin-Dugué: Non-linear representations, sensor reliability estimation and context-dependent fusion in the audiovisual recognition of speech in noise. EUROSPEECH 1997: 1611-1614
- [c12] Laurent Girin, Gang Feng, Jean-Luc Schwartz: Noisy speech enhancement by fusion of auditory and visual information: a study of vowel transitions. EUROSPEECH 1997: 2555-2558
- [c11] Anne Guérin-Dugué, Pascal Teissier, Jean-Luc Schwartz, Jeanny Hérault: Constrained Neural Network for Estimating Sensor Reliability in Sensors Fusion. IWANN 1997: 872-881
- [c10] Pascal Teissier, Jean-Luc Schwartz, Anne Guérin-Dugué: Models for audiovisual fusion in a noisy-vowel recognition task. MMSP 1997: 37-44
- 1995
- [j4] Jordi Robert-Ribes, Jean-Luc Schwartz, Pierre Escudier: A Comparison of Models for Fusion of the Auditory and Visual Sensors in Speech Perception. Artif. Intell. Rev. 9(4-5): 323-346 (1995)
- [c9] Laurent Girin, Gang Feng, Jean-Luc Schwartz: Noisy speech enhancement with filters estimated from the speaker's lips. EUROSPEECH 1995: 1559-1562
- 1993
- [j3] Andrew C. Morris, Jean-Luc Schwartz, Pierre Escudier: An information theoretical investigation into the distribution of phonetic information across the auditory spectrogram. Comput. Speech Lang. 7(2): 121-136 (1993)
- [c8] Jordi Robert-Ribes, Tahar Lallouache, Pierre Escudier, Jean-Luc Schwartz: Integrating auditory and visual representations for audiovisual vowel recognition. EUROSPEECH 1993: 1753-1756
- 1991
- [c7] Andrew C. Morris, Pierre Escudier, Jean-Luc Schwartz: On and off units detect information bottle-necks for speech recognition. EUROSPEECH 1991: 1441-1444
- 1990
- [c6] Zong Liang Wu, Jean-Luc Schwartz, Pierre Escudier: Modeling spectral processing in the central auditory system. ICASSP 1990: 373-376
1980 – 1989
- 1989
- [j2] Jean-Luc Schwartz, Pierre Escudier: A strong evidence for the existence of a large-scale integrated spectral representation in vowel perception. Speech Commun. 8(3): 235-259 (1989)
- [c5] Zong Liang Wu, Pierre Escudier, Jean-Luc Schwartz: Specialized physiology-based channels for the detection of articulatory-acoustic events. A preliminary scheme and its performance. ICASSP 1989: 2013-2016
- [c4] Jean-Luc Schwartz, Louis-Jean Boë, Pascal Perrier, Bernard Guérin, Pierre Escudier: Perceptual contrast and stability in vowel systems: a 3-d simulation study. EUROSPEECH 1989: 1063-1066
- [c3] Zong Liang Wu, Jean-Luc Schwartz, Pierre Escudier: A theoretical study of neural mechanisms specialized in the detection of articulatory-acoustic events. EUROSPEECH 1989: 1235-1238
- [c2] Frédéric Berthommier, Jean-Luc Schwartz, Pierre Escudier: Auditory processing in a post-cochlear neural network: vowel spectrum processing based on spike synchrony. EUROSPEECH 1989: 1247-1250
- [c1] Louis-Jean Boë, Pascal Perrier, Bernard Guérin, Jean-Luc Schwartz: Maximal vowel space. EUROSPEECH 1989: 2281-2284
- 1985
- [j1] Pierre Escudier, Jean-Luc Schwartz: Pulsation threshold patterns of synthetic vowels: Study of the second formant emergence and the "center of gravity" effects. Speech Commun. 4(1-3): 189-198 (1985)