Full text for this publication is not currently held within this repository.
The paper highlights the outcomes of a specific ‘driver project’ hosted by DReSS (the ESRC-funded Digital Records for eSocial Science project), which sought to combine the knowledge of linguists with the expertise of computer scientists in the construction of multi-modal (MM hereafter) corpus software: the Digital Replay System (DRS). DRS presents ‘data’ in three different modes, as spoken (audio), video and textual records of real-life interactions, accurately aligning them within a functional, searchable corpus setting (known as the Nottingham Multi-Modal Corpus: NMMC herein). The DRS environment therefore allows for the exploration of the lexical, prosodic and gestural features of conversation and how they interact in everyday speech. Further to this, the paper introduces a computer vision-based gesture recognition system constructed to allow for the detection and preliminary codification of gesture sequences. This gesture tracking system can be imported into DRS to enable an automated approach to the analysis of MM datasets.
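The core idea the abstract describes, aligning spoken, video and textual records on a shared timeline so they can be queried together, can be sketched as follows. This is an illustrative sketch only: the DRS data model is not described here, so the track names and the `aligned_at` helper are hypothetical, not the system's actual API.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    track: str      # e.g. "audio", "video", "text" (hypothetical labels)
    start: float    # seconds from the start of the recording
    end: float
    label: str

def aligned_at(annotations, t):
    """Return every annotation, across all tracks, active at time t."""
    return [a for a in annotations if a.start <= t < a.end]

corpus = [
    Annotation("text", 0.0, 2.5, "so what do you think"),
    Annotation("audio", 0.0, 2.5, "rising intonation"),
    Annotation("video", 1.0, 3.0, "head nod"),
]

# A query at t = 1.5 s cuts across all three modes at once.
hits = aligned_at(corpus, 1.5)
print([a.track for a in hits])  # → ['text', 'audio', 'video']
```

A real MM corpus tool would index annotations for efficient search rather than scanning a list, but the time-interval alignment shown here is what makes cross-modal queries (e.g. which gestures co-occur with which words) possible.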
Author(s): Knight D, Tennent P
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: Language Resources and Evaluation Conference
Year of Conference: 2008