1st SSW 1990: Autrans, France
- Gérard Bailly, Christian Benoît: The ESCA Workshop on Speech Synthesis, SSW 1990, Autrans, France, September 25-28, 1990. ISCA 1990
- Satoshi Imaizumi, Shigeru Kiritani: A generation model of formant trajectory at various speaking rates. 1-4
- Wendy J. Holmes, David J. B. Pearce: Automatic derivation of segment models for synthesis by rule. 5-8
- Rolf Carlson, Lennart Nord: Cluster realizations in rule synthesis. 9-12
- John Coleman: Yorktalk: "synthesis-by-rule" without segments or rules. 13-16
- Louis ten Bosch: Rule extraction for allophone synthesis. 17-20
- Douglas D. O'Shaughnessy: Spectral transitions in rule-based and diphone synthesis. 21-24
- Joseph P. Olive: A new algorithm for a concatenative speech synthesis system using an augmented acoustic inventory of speech sounds. 25-30
- Alejandro Macarrón Larumbe: Design and generation of the acoustic database of a text-to-speech synthesizer for Spanish. 31-34
- Kazuya Takeda, Katsuo Abe, Yoshinori Sagisaka: On unit selection algorithms and their evaluation in non-uniform unit speech synthesis. 35-38
- Tetsuya Nomura, Hideyuki Mizuno, Hirokazu Sato: Speech synthesis by optimum concatenation of phoneme segments. 39-42
- Piero Pierucci, Giuliano Ferri, Massimo Giustiniani: A database for diphone units extraction. 43-46
- Philippe Depalle, Xavier Rodet, Gilles Poirot: Energy and articulation rules for improving diphone speech synthesis. 47-50
- Nickolas Yiourgalis, George Kokkinakis: Quality and intelligibility improvements in a Greek text-to-speech system. 51-54
- David Talkin, James Rowley: Pitch-synchronous analysis and synthesis for TTS systems. 55-58
- Tomoki Hamagami, Shinichiro Hashimoto: A new synthesizer model for high quality synthetic speech. 59-62
- Kenneth N. Stevens, Corine A. Bickley: Higher-level control parameters for a formant synthesizer. 63-66
- Gérard Bailly, Morten Bach, Rafael Laboissière, Morten Olesen: Generation of articulatory trajectories using sequential networks. 67-70
- Alain Soquet, Marco Saerens, Paul Jospa: Acoustic-articulatory inversion based on a neural controller of a vocal tract model. 71-74
- Peter Howell, Mark Williams: Use of articulatory synthesis for analysis of voice disorders. 75-78
- Henrietta J. Cedergren, Gilles Boulianne, Danièle Archambault: On modelling the phonology-phonetics interface for articulatory synthesis. 79-82
- Cecil H. Coker, Kenneth Ward Church, Mark Y. Liberman: Morphology and rhyming: two powerful alternatives to letter-to-sound rules for speech synthesis. 83-86
- Simon M. Lucas, Robert I. Damper: Text-to-phonetics translation with syntactic neural nets. 87-90
- Danielle Larreur, Christel Sorin: Quality evaluation of French text-to-speech synthesis within a task: the importance of the mute "e". 91-96
- Kirk P. H. Sullivan, Robert I. Damper: Novel-word pronunciation within a text-to-speech system. 97-100
- N. P. Warren, William A. Ainsworth: Automatic syntactic classification of isolated English words using connectionist architectures. 101-104
- Miguel Ángel Rodríguez Crespo, José Gregorio Escalada Sardina: Text analysis system with automatic letter to allophone conversion for a Spanish text-to-speech synthesizer. 105-108
- Alex I. C. Monaghan: A multi-phase parsing strategy for unrestricted text. 109-112
- Alex I. C. Monaghan: Treating anaphora in the CSTR text-to-speech system. 113-116
- Thomas Russi: A framework for morphological and syntactic analysis and its application in a text-to-speech system for German. 117-120
- Betina Schnabel, Harald Roth: Automatic linguistic processing in a German text-to-speech synthesis system. 121-124
- Gösta Bruce, Björn Granström, David House: Prosodic phrasing in Swedish speech synthesis. 125-128
- Richard Sproat: Stress assignment in complex nominals for English text-to-speech. 129-132
- Henri Zinglé: Morphological segmentation and stress calculus in German with an expert system. 133-136
- Hugo Quené, Arthur Dirksen: A comparison of natural, theoretical and automatically derived accentuations of Dutch texts. 137-140
- Christof Traber: F0 generation with a data base of natural F0 patterns and with a neural network. 141-144
- Hirokazu Sato: Pitch frequency characteristics in Japanese words related to phonemes. 145-148
- Philippe Martin: Automatic assignment of lexical stress in Italian. 149-152
- A. K. Datta, N. R. Ganguly, B. Mukherjee: Intonation in segment-concatenated speech. 153-156
- Jan P. H. van Santen: Deriving text-to-speech durations from natural speech. 157-160
- Thomas Portele, Walter F. Sendlmeier, Wolfgang Hess: Hadifix: a system for German speech synthesis based on demisyllables, diphones and suffixes. 161-164
- Nobuyoshi Kaiki, Kazuya Takeda, Yoshinori Sagisaka: The control of segmental duration in speech synthesis using linguistic properties. 165-168
- W. Nick Campbell: Normalised segment durations in a syllable frame. 169-172
- Antônio R. M. Simões: Predicting sound segment duration in connected speech: an acoustical study of Brazilian Portuguese. 173-176
- Isabelle Guaïtella, Serge Santi: Contribution of the analysis of punctuation to improving the prosody of speech synthesis. 177-180
- Julia Hirschberg: Using discourse context to guide pitch accent decisions in synthetic speech. 181-184
- Jill House, Nick J. Youd: Contextually appropriate intonation in speech synthesis. 185-188
- Klaus J. Kohler: Improving the prosody in German text-to-speech output. 189-192
- Valerie Pasdeloup: Multi-style prosodic model for French text-to-speech synthesis. 193-196
- Massimo Giustiniani, Alessandro Falaschi, Piero Pierucci: Automatic inference of a syllabic prosodic model. 197-200
- Gérard Bailly, Thierry Barbe, Hai-Dong Wang: Automatic labeling of large prosodic databases: tools, methodology and links with a text-to-speech system. 201-204
- Jacques M. B. Terken, René Collier: Designing algorithms for intonation in synthetic speech. 205-208
- L. Mortamet, Françoise Emerard, Laurent Miclet: Attempting automatic prosodic knowledge acquisition using a database. 209-214
- Véronique Aubergé: Semi-automatic constitution of a prosodic contour lexicon for text-to-speech synthesis. 215-218
- Klaus Wothke: From orthography to phonetic transcription in the German text-to-speech system TETOS. 219-224
- Susan R. Hertz: A modular approach to multi-dialect and multi-language speech synthesis using the Delta system. 225-228
- Michael D. Riley: Tree-based modelling for speech synthesis. 229-232
- Géza Németh, Géza Gordos, Gábor Olaszy: Implementation aspects and the development system of the MULTIVOX text-to-speech converter. 233-236
- Frank Fallside: Synfrec: speech synthesis from recognition using neural networks. 237-240
- Yoichi Yamashita, Naoki Mizutani, Riichiro Mizoguchi: Concept description for synthetic speech output system. 241-244
- N. Michael Brooke, Paul D. Templeton: Classification of lip-shapes and their association with acoustic speech events. 245-248
- Michel Saintourens, Marie-Hélène Tramus, Hervé Huitric, Monique Nahas: Creation of a synthetic face speaking in real time with a synthetic voice. 249-252
- Christian Benoît, T. Lallouache, T. Mohamedi, A. Tseva, Christian Abry: Nineteen (±two) French visemes for visual speech synthesis. 253-256
- Rolf Carlson, Björn Granström, Lennart Nord: Segmental evaluation using the ESPRIT/SAM test procedures and monosyllabic words. 257-260
- Alessandro Falaschi: Segmental quality assessment by well-formed nonsense words. 261-264
- Serge Santi, Michel Grenié: Individual strategies in synthetic speech evaluation. 266-268
- Tatsuro Matsumoto, Yukiko Yamaguchi: A multi-language text-to-speech system using neural networks. 269-272
- René Collier: Multi-lingual intonation synthesis: principles and applications. 273-276
- Gábor Olaszy, Géza Gordos, Géza Németh: Phonetic aspects of the MULTIVOX text-to-speech system. 277-280