NU author(s): Dr Mohsen Naqvi, Professor Jonathon Chambers
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
The separation of speech signals measured at multiple microphones in noisy and reverberant environments using only the audio modality has limitations, because there is generally insufficient information to fully discriminate the different sound sources. Humans mitigate this problem by exploiting the visual modality, which is insensitive to background noise and can provide contextual information about the audio scene. This advantage has inspired the creation of the new field of audiovisual (AV) speech source separation, which aims to exploit the visual modality alongside microphone measurements in a machine. Success in this emerging field will expand the application of voice-based machine interfaces, such as Siri, the intelligent personal assistant on the iPhone and iPad, to much more realistic settings and thereby provide more natural human-machine interfaces.
Author(s): Rivet B, Wang W, Naqvi SM, Chambers JA
Publication type: Article
Publication status: Published
Journal: IEEE Signal Processing Magazine
Year: 2014
Volume: 31
Issue: 3
Pages: 125-134
Print publication date: 01/05/2014
Online publication date: 07/04/2014
ISSN (print): 1053-5888
ISSN (electronic): 1558-0792
Publisher: IEEE
URL: http://dx.doi.org/10.1109/MSP.2013.2296173
DOI: 10.1109/MSP.2013.2296173