Direct Speech Generation for a Silent Speech Interface based on Permanent Magnet Articulography

Authors: Jose A. Gonzalez (1); Lam A. Cheah (2); James M. Gilbert (2); Jie Bai (2); Stephen R. Ell (3); Phil D. Green (1) and Roger K. Moore (1)

Affiliations: (1) University of Sheffield, United Kingdom; (2) University of Hull, United Kingdom; (3) Hull and East Yorkshire Hospitals Trust, United Kingdom

Keyword(s): silent speech interfaces, speech rehabilitation, speech synthesis and permanent magnet articulography

Related Ontology Subjects/Areas/Topics: Acoustic Signal Processing ; Artificial Intelligence ; Biomedical Engineering ; Biomedical Signal Processing ; Data Manipulation ; Devices ; Electromagnetic Fields in Biology and Medicine ; Health Engineering and Technology Applications ; Health Information Systems ; Human-Computer Interaction ; Methodologies and Methods ; Multimedia ; Multimedia Signal Processing ; Neurocomputing ; Neurotechnology, Electronics and Informatics ; Pattern Recognition ; Physiological Computing Systems ; Sensor Networks ; Soft Computing ; Speech Recognition ; Telecommunications ; Wearable Sensors and Systems

Abstract: Patients with larynx cancer often lose their voice following total laryngectomy. Current methods for post-laryngectomy voice restoration are all unsatisfactory for different reasons: the tracheo-oesophageal valve requires frequent replacement due to biofilm growth, oesophageal speech sounds gruff and masculine, the electro-larynx sounds robotic, and both oesophageal speech and the electro-larynx are difficult to master. In this work we investigate an alternative approach for voice restoration in which speech articulator movement is converted into audible speech using a speaker-dependent transformation learned from simultaneous recordings of articulatory and audio signals. To capture articulator movement, small magnets are attached to the speech articulators and the magnetic field generated while the user 'mouths' words is captured by a set of sensors. Parallel data comprising articulatory and acoustic signals recorded before laryngectomy are used to learn the mapping between the articulatory and acoustic domains, which is represented in this work as a mixture of factor analysers. After laryngectomy, the learned transformation is used to restore the patient's voice by transforming the captured articulator movement into an audible speech signal. Results reported for normal speakers show that the proposed system is very promising.
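
The core of the approach is the articulatory-to-acoustic mapping learned from parallel data. As a rough illustration only (not the authors' code), the sketch below fits a joint Gaussian mixture model on paired articulatory/acoustic feature frames and converts new articulatory frames via the conditional mean; the paper itself represents the mapping as a mixture of factor analysers, for which this joint-GMM regression is a closely related stand-in. The matrices art and ac (one row per frame), the number of components, and the function names are illustrative assumptions.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(art, ac, n_components=16, seed=0):
    # Fit a joint GMM on stacked [articulatory | acoustic] frames
    # recorded in parallel before laryngectomy.
    joint = np.hstack([art, ac])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full', random_state=seed)
    gmm.fit(joint)
    return gmm

def articulatory_to_acoustic(gmm, art, d_art):
    # Frame-wise conversion: E[acoustic | articulatory] under the joint GMM.
    # art is a 2-D array of shape (n_frames, d_art).
    mu, S = gmm.means_, gmm.covariances_
    mu_x, mu_y = mu[:, :d_art], mu[:, d_art:]
    Sxx, Syx = S[:, :d_art, :d_art], S[:, d_art:, :d_art]

    # Posterior probability of each mixture component given the
    # articulatory frame, from the marginal GMM over those dimensions.
    log_resp = np.stack([np.log(gmm.weights_[k])
                         + multivariate_normal(mu_x[k], Sxx[k]).logpdf(art)
                         for k in range(gmm.n_components)], axis=1)
    log_resp -= log_resp.max(axis=1, keepdims=True)
    resp = np.exp(log_resp)
    resp /= resp.sum(axis=1, keepdims=True)

    # Weighted sum of per-component conditional means.
    out = np.zeros((art.shape[0], mu_y.shape[1]))
    for k in range(gmm.n_components):
        cond = mu_y[k] + (art - mu_x[k]) @ np.linalg.solve(Sxx[k], Syx[k].T)
        out += resp[:, [k]] * cond
    return out

With parallel training matrices art_train and ac_train, one would learn the mapping with fit_joint_gmm(art_train, ac_train) and apply it to silently mouthed frames with articulatory_to_acoustic(gmm, art_silent, art_train.shape[1]); a vocoder would then synthesise the waveform from the predicted acoustic features.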

CC BY-NC-ND 4.0

Paper citation in several formats:
Gonzalez, J.; Cheah, L.; Gilbert, J.; Bai, J.; Ell, S.; Green, P. and Moore, R. (2016). Direct Speech Generation for a Silent Speech Interface based on Permanent Magnet Articulography. In Proceedings of the 9th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2016) - BIOSIGNALS; ISBN 978-989-758-170-0; ISSN 2184-4305, SciTePress, pages 96-105. DOI: 10.5220/0005754100960105

@conference{biosignals16,
author={Jose A. Gonzalez and Lam A. Cheah and James M. Gilbert and Jie Bai and Stephen R. Ell and Phil D. Green and Roger K. Moore},
title={Direct Speech Generation for a Silent Speech Interface based on Permanent Magnet Articulography},
booktitle={Proceedings of the 9th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2016) - BIOSIGNALS},
year={2016},
pages={96-105},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005754100960105},
isbn={978-989-758-170-0},
issn={2184-4305},
}

TY - CONF

JO - Proceedings of the 9th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2016) - BIOSIGNALS
TI - Direct Speech Generation for a Silent Speech Interface based on Permanent Magnet Articulography
SN - 978-989-758-170-0
IS - 2184-4305
AU - Gonzalez, J.
AU - Cheah, L.
AU - Gilbert, J.
AU - Bai, J.
AU - Ell, S.
AU - Green, P.
AU - Moore, R.
PY - 2016
SP - 96
EP - 105
DO - 10.5220/0005754100960105
PB - SciTePress
ER -
