
Origlia et al., 2014 - Google Patents

Introducing context in syllable based emotion tracking


Document ID: 8532477921128616138
Authors: Origlia A; Galatà V; Cutugno F
Publication year: 2014
Publication venue: 2014 5th IEEE Conference on Cognitive Infocommunications (CogInfoCom)

Snippet

In this paper, we present a further step in the development of an emotion tracking system based on phonetic syllables and machine learning algorithms. A system built on phonetically defined units has advantages both on the side of the amount of data needed to …
Continue reading at ieeexplore.ieee.org
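The snippet describes a system that tracks emotion over phonetically defined syllable units rather than fixed-length frames. As a purely illustrative sketch (the function name, boundary format, and feature set below are assumptions, not the paper's actual method), per-syllable acoustic descriptors such as duration, RMS energy, and zero-crossing rate might be computed like this:

```python
import numpy as np

def syllable_features(signal, sr, syllables):
    """Compute simple per-syllable descriptors from a mono signal.

    signal    : 1-D float array of samples
    sr        : sample rate in Hz
    syllables : list of (start_sec, end_sec) syllable boundaries
    Returns a list of dicts with duration, RMS energy, and
    zero-crossing rate for each non-empty segment.
    """
    feats = []
    for start, end in syllables:
        seg = signal[int(start * sr):int(end * sr)]
        if seg.size == 0:
            continue
        duration = end - start
        rms = float(np.sqrt(np.mean(seg ** 2)))
        # Count sign changes between consecutive samples, normalized per sample.
        zcr = float(np.mean(np.abs(np.diff(np.sign(seg)))) / 2)
        feats.append({"duration": duration, "rms": rms, "zcr": zcr})
    return feats

# Toy usage: a 1-second 440 Hz tone split into two pseudo-syllables.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
sig = 0.5 * np.sin(2 * np.pi * 440 * t)
feats = syllable_features(sig, sr, [(0.0, 0.4), (0.4, 1.0)])
print(len(feats), feats[0]["duration"])
```

A downstream classifier (the paper mentions machine learning algorithms, without these details) would then operate on such per-syllable vectors, optionally augmented with context from neighboring syllables.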

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/66: Speech or voice analysis techniques for extracting parameters related to health condition
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/03: Speech or voice analysis techniques characterised by the type of extracted parameters
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003: Changing voice quality, e.g. pitch or formants
    • G10L21/007: Changing voice quality, characterised by the process used
    • G10L21/013: Adapting to target pitch
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/065: Adaptation
    • G10L15/07: Adaptation to the speaker
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00: Speech synthesis; Text to speech systems
    • G10L13/06: Elementary speech units used in speech synthesisers; Concatenation rules

Similar Documents

Publication | Publication Date | Title
Gangamohan et al. Analysis of emotional speech—A review
Schuller et al. Emotion recognition in the noise applying large acoustic feature sets
Mencattini et al. Speech emotion recognition using amplitude modulation parameters and a combined feature selection procedure
Prabhakera et al. Dysarthric speech classification using glottal features computed from non-words, words and sentences
Origlia et al. Continuous emotion recognition with phonetic syllables
Kamińska et al. Recognition of human emotion from a speech signal based on Plutchik's model
CN110085220A (en) Intelligent interaction device
Bombatkar et al. Emotion recognition using Speech Processing Using k-nearest neighbor algorithm
Elbarougy et al. Cross-lingual speech emotion recognition system based on a three-layer model for human perception
Kadiri et al. Breathy to Tense Voice Discrimination using Zero-Time Windowing Cepstral Coefficients (ZTWCCs).
Barbosa Semi-automatic and automatic tools for generating prosodic descriptors for prosody research
Kiss et al. Language independent detection possibilities of depression by speech
Pravena et al. Development of simulated emotion speech database for excitation source analysis
Chamoli et al. Detection of emotion in analysis of speech using linear predictive coding techniques (LPC)
Kadiri et al. Extraction and utilization of excitation information of speech: A review
Houari et al. Study the Influence of Gender and Age in Recognition of Emotions from Algerian Dialect Speech.
Zbancioc et al. A study about the automatic recognition of the anxiety emotional state using Emo-DB
Szekrényes Prosotool, a method for automatic annotation of fundamental frequency
Atmaja et al. Ensembling multilingual pre-trained models for predicting multi-label regression emotion share from speech
Yusnita et al. Analysis of accent-sensitive words in multi-resolution mel-frequency cepstral coefficients for classification of accents in Malaysian English
Holliday et al. How black does Obama sound now? Testing listener judgments of intonation in incrementally manipulated speech
Bartkova et al. Prosodic parameters and prosodic structures of French emotional data
Bojanić et al. Application of dimensional emotion model in automatic emotional speech recognition
Wenjing et al. A hybrid speech emotion perception method of VQ-based feature processing and ANN recognition