
Open Access

Predicting speech-in-noise ability with static and dynamic auditory figure-ground analysis using structural equation modelling

Lookup NU author(s): Xiaoxuan Guo, Dr Ester Benzaquen, Dr William Sedley, Professor Stephen Rushton, Professor Tim Griffiths

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

© 2025 The Author(s). Published by the Royal Society under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, provided the original author and source are credited.

Auditory figure-ground paradigms assess the ability to extract a foreground figure from a random background, a crucial part of central hearing. Previous studies have shown that the ability to extract static figures predicts speech-in-noise ability. In this study, we assessed both static and dynamic figures: the latter comprise component frequencies that vary over time, like natural speech. We examined how well speech-in-noise ability (for words and sentences) could be predicted by age, peripheral hearing, and static and dynamic figure-ground in 159 participants. Regression demonstrated that, in addition to audiogram and age, low-frequency dynamic figure-ground accounted for independent variance in both word- and sentence-in-noise perception, more than static figure-ground did. Structural equation models showed that a combination of all figure-ground tasks, age and audiogram could explain up to 89% of the variance in speech-in-noise performance, with figure-ground predicting speech-in-noise with a larger effect size than audiogram or age. Age influenced word-in-noise perception directly, but sentence-in-noise perception indirectly, via effects on peripheral and central hearing. Overall, this study demonstrates that dynamic figure-ground predicts variance in real-life listening better than the prototype static figure-ground, and that the combination of figure-ground tasks predicts real-life listening better than audiogram or age.
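
The two-step analysis described above (hierarchical regression, then structural equation modelling) can be illustrated with a minimal sketch. This is not the authors' analysis script (which is available via the Data Access Statement below): the column names (age, pta, static_fg, dynamic_fg_low, dynamic_fg_high, sentence_in_noise), the data file, and the latent-variable structure are all illustrative assumptions, and the SEM portion assumes the third-party semopy package.

```python
# Illustrative sketch only: not the authors' published analysis.
# Assumed per-participant columns: age, pta (pure-tone average),
# static_fg / dynamic_fg_low / dynamic_fg_high (figure-ground thresholds),
# sentence_in_noise (speech-in-noise score).
import pandas as pd
import statsmodels.formula.api as smf
import semopy  # third-party SEM package, assumed here

df = pd.read_csv("sin_data.csv")  # hypothetical data file

# Step 1: does figure-ground explain variance beyond age and audiogram?
base = smf.ols("sentence_in_noise ~ age + pta", data=df).fit()
full = smf.ols(
    "sentence_in_noise ~ age + pta + static_fg + dynamic_fg_low", data=df
).fit()
print(f"delta R^2 from figure-ground: {full.rsquared - base.rsquared:.3f}")

# Step 2: a structural equation model in lavaan-style syntax, with a
# latent central-hearing factor loading on the figure-ground tasks and
# age acting on sentence perception only indirectly, via peripheral
# (pta) and central hearing -- mirroring the path structure the
# abstract reports for sentences.
desc = """
central =~ static_fg + dynamic_fg_low + dynamic_fg_high
sentence_in_noise ~ central + pta
central ~ age
pta ~ age
"""
sem = semopy.Model(desc)
sem.fit(df)
print(sem.inspect())  # path estimates, standard errors, p-values
```

The increment in R-squared from step 1 corresponds to the "independent variance" claim, and the fitted paths in step 2 correspond to the direct and indirect age effects; the specific model syntax here is an assumption, not the authors' specification.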


Publication metadata

Author(s): Guo X, Benzaquen E, Holmes E, Berger JI, Bruhl IC, Sedley W, Rushton SP, Griffiths T

Publication type: Article

Publication status: Published

Journal: Proceedings of the Royal Society B: Biological Sciences

Year: 2025

Volume: 292

Issue: 2042

Print publication date: 05/03/2025

Online publication date: 05/03/2025

Acceptance date: 05/02/2025

Date deposited: 25/03/2025

ISSN (print): 0962-8452

ISSN (electronic): 1471-2954

Publisher: Royal Society Publishing

URL: https://doi.org/10.1098/rspb.2024.2503

DOI: 10.1098/rspb.2024.2503

Data Access Statement: The data and analysis script that support the findings of this study are openly available in OSF at [48]. Supplementary material is available online [49].


Funding

Funder name: Medical Research Council
Funder reference: MR/T032553/1
