Reassessing the Benefits of Audio-Visual Integration to Speech Perception and Intelligibility
This article is a preprint and has not been certified by peer review.
Author(s) / Creator(s)
O'Hanlon, Brandon
Plack, Christopher
Nuttall, Helen
Abstract / Description
Purpose: In difficult listening conditions, the visual system assists with speech perception through lipreading. Stimulus onset asynchrony (SOA) is used to investigate the interaction between the two modalities in speech perception. Previous estimates of the audiovisual benefit and of the SOA integration period differ widely. A limitation of previous research is a lack of consideration of visemes (categories of phonemes defined by similar lip movements when produced by a speaker) to ensure that the selected phonemes are visually distinct. This study aimed to reassess the benefit of audiovisual lipreading to speech perception when stimuli from different viseme categories are presented in noise, and to investigate the effects of SOA on these stimuli.
Method: Sixty participants were presented with audio-only stimuli and audiovisual stimuli that included the speaker’s lip movements. The speech was presented either with or without noise, at one of six SOAs (0, 200, 216.6, 233.3, 250, and 266.6 ms). Participants discriminated between speech syllables via button presses.
Results: The benefit of visual information was weaker than that reported in previous studies. Reaction times increased significantly once an SOA was introduced, but SOA had no significant effect on accuracy. Furthermore, exploratory analyses suggest that the effect was not equal across viseme categories: ‘Ba’ was more difficult to recognise in noise than ‘Ka’.
Conclusion: The findings suggest that the contribution of audiovisual integration to speech processing is weaker once visemes are taken into account, and the observed SOA effects were not sufficient to identify a full integration period.
Keyword(s)
speech; speech perception; multisensory integration; audiovisual; audiovisual speech; speech-in-noise
Persistent Identifier
https://doi.org/10.23668/psycharchives.14221
Date of first publication
2024-03-06
Publisher
PsychArchives
Citation
- Reassessing the Benefits of Audio-Visual Integration to Speech Perception and Intelligibility.pdf (Adobe PDF, 826.02 KB, MD5: b96d29f210d97e96787c684de6990aee)
- PsychArchives acquisition timestamp: 2024-03-06T11:55:04Z
- Made available on: 2024-03-06T11:55:04Z
- Submission date: 2024-03-05
- Publication status: other
- Review status: notReviewed
- Sponsorship: This work was supported by the Economic and Social Research Council (ESRC) Training Grant (O’Hanlon, ES/P000665/1), the Manchester Biomedical Research Centre and the National Institute for Health and Care Research (NIHR) (Plack, NIHR203308), and the Biotechnology and Biological Sciences Research Council (BBSRC) New Investigator Grant (Nuttall, BB/S008527/1).
- Persistent Identifier: https://hdl.handle.net/20.500.12034/9684
- Persistent Identifier: https://doi.org/10.23668/psycharchives.14221
- Language of content: eng
- Dewey Decimal Classification number(s): 150
- DRO type: preprint