Kim et al., 2015 - Google Patents
A kinematic study of critical and non-critical articulators in emotional speech production
- Document ID: 16372518492717562216
- Authors: Kim J, Toutios A, Lee S, Narayanan S
- Publication year: 2015
- Publication venue: The Journal of the Acoustical Society of America
Snippet
This study explores one aspect of the articulatory mechanism that underlies emotional speech production, namely, the behavior of linguistically critical and non-critical articulators in the encoding of emotional information. The hypothesis is that the possible larger …
Concepts (machine-extracted)

Name | Sections | Count | Query match
---|---|---|---
emotional | title, abstract, description | 46 | 0
Classifications
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
          - G10L21/10—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids transforming into visible information
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
          - G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
            - G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L15/00—Speech recognition
        - G10L15/08—Speech classification or search
          - G10L15/18—Speech classification or search using natural language modelling
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L17/00—Speaker identification or verification
        - G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/20—Handling natural language data
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L15/00—Speech recognition
        - G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
          - G10L15/065—Adaptation
            - G10L15/07—Adaptation to the speaker
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
        - G10L13/06—Elementary speech units used in speech synthesisers; Concatenation rules
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/003—Changing voice quality, e.g. pitch or formants
          - G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
            - G10L21/013—Adapting to target pitch
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
Similar Documents
Publication | Title
---|---
Busso et al. | Iterative feature normalization scheme for automatic emotion detection from speech
Jing et al. | Prominence features: Effective emotional features for speech emotion recognition
Narayanan et al. | Behavioral signal processing: Deriving human behavioral informatics from speech and language
Rudzicz et al. | The TORGO database of acoustic and articulatory speech from speakers with dysarthria
Wöllmer et al. | LSTM-modeling of continuous emotions in an audiovisual affect recognition framework
Busso et al. | Interrelation between speech and facial gestures in emotional utterances: a single subject study
Engwall | Analysis of and feedback on phonetic features in pronunciation training with a virtual teacher
Tran et al. | Improvement to a NAM-captured whisper-to-speech system
Kim et al. | A kinematic study of critical and non-critical articulators in emotional speech production
Pfister et al. | Real-time recognition of affective states from nonverbal features of speech and its application for public speaking skill analysis
Arora et al. | Phonological feature-based speech recognition system for pronunciation training in non-native language learning
Origlia et al. | Continuous emotion recognition with phonetic syllables
Wang et al. | Phoneme-level articulatory animation in pronunciation training
Maier et al. | Automatic detection of articulation disorders in children with cleft lip and palate
Yap | Speech production under cognitive load: Effects and classification
Cheng et al. | Articulatory limit and extreme segmental reduction in Taiwan Mandarin
Aryal et al. | Reduction of non-native accents through statistical parametric articulatory synthesis
Aghaahmadi et al. | Clustering Persian viseme using phoneme subspace for developing visual speech application
Strömbergsson et al. | Acoustic and perceptual evaluation of category goodness of /t/ and /k/ in typical and misarticulated children's speech
Piotrowska et al. | Evaluation of aspiration problems in L2 English pronunciation employing machine learning
Paroni et al. | Vocal drum sounds in human beatboxing: An acoustic and articulatory exploration using electromagnetic articulography
Pravin et al. | Regularized deep LSTM autoencoder for phonological deviation assessment
O'Dell | Intrinsic timing and quantity in Finnish
Rudzicz | Production knowledge in the recognition of dysarthric speech
Chetouani et al. | Time-scale feature extractions for emotional speech characterization: applied to human centered interaction analysis