
Kim et al., 2015 - Google Patents

A kinematic study of critical and non-critical articulators in emotional speech production

Document ID
16372518492717562216
Authors
Kim J
Toutios A
Lee S
Narayanan S
Publication year
2015
Publication venue
The Journal of the Acoustical Society of America

Snippet

This study explores one aspect of the articulatory mechanism that underlies emotional speech production, namely, the behavior of linguistically critical and non-critical articulators in the encoding of emotional information. The hypothesis is that the possible larger …
Full text available at sail.usc.edu (PDF)

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 — Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 — Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10 — Transformation of speech into a non-audible representation, transforming into visible information
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48 — Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/66 — Speech or voice analysis techniques for comparison or discrimination, for extracting parameters related to health condition
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/08 — Speech classification or search
    • G10L15/18 — Speech classification or search using natural language modelling
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 — Speaker identification or verification
    • G10L17/26 — Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING; COUNTING
    • G06F — ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 — Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/20 — Handling natural language data
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/06 — Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/065 — Adaptation
    • G10L15/07 — Adaptation to the speaker
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 — Speech synthesis; Text to speech systems
    • G10L13/06 — Elementary speech units used in speech synthesisers; Concatenation rules
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 — Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 — Changing voice quality, e.g. pitch or formants
    • G10L21/007 — Changing voice quality, characterised by the process used
    • G10L21/013 — Adapting to target pitch
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING; COUNTING
    • G06K — RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 — Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62 — Methods or arrangements for recognition using electronic means

Similar Documents

Publication — Title
Busso et al. Iterative feature normalization scheme for automatic emotion detection from speech
Jing et al. Prominence features: Effective emotional features for speech emotion recognition
Narayanan et al. Behavioral signal processing: Deriving human behavioral informatics from speech and language
Rudzicz et al. The TORGO database of acoustic and articulatory speech from speakers with dysarthria
Wöllmer et al. LSTM-modeling of continuous emotions in an audiovisual affect recognition framework
Busso et al. Interrelation between speech and facial gestures in emotional utterances: a single subject study
Engwall Analysis of and feedback on phonetic features in pronunciation training with a virtual teacher
Tran et al. Improvement to a NAM-captured whisper-to-speech system
Kim et al. A kinematic study of critical and non-critical articulators in emotional speech production
Pfister et al. Real-time recognition of affective states from nonverbal features of speech and its application for public speaking skill analysis
Arora et al. Phonological feature-based speech recognition system for pronunciation training in non-native language learning
Origlia et al. Continuous emotion recognition with phonetic syllables
Wang et al. Phoneme-level articulatory animation in pronunciation training
Maier et al. Automatic detection of articulation disorders in children with cleft lip and palate
Yap Speech production under cognitive load: Effects and classification
Cheng et al. Articulatory limit and extreme segmental reduction in Taiwan Mandarin
Aryal et al. Reduction of non-native accents through statistical parametric articulatory synthesis
Aghaahmadi et al. Clustering Persian viseme using phoneme subspace for developing visual speech application
Strömbergsson et al. Acoustic and perceptual evaluation of category goodness of /t/ and /k/ in typical and misarticulated children's speech
Piotrowska et al. Evaluation of aspiration problems in L2 English pronunciation employing machine learning
Paroni et al. Vocal drum sounds in human beatboxing: An acoustic and articulatory exploration using electromagnetic articulography
Pravin et al. Regularized deep LSTM autoencoder for phonological deviation assessment
O'Dell Intrinsic timing and quantity in Finnish
Rudzicz Production knowledge in the recognition of dysarthric speech
Chetouani et al. Time-scale feature extractions for emotional speech characterization: applied to human centered interaction analysis