
Kevin McGowan Colloquium

Subcategorical mismatches can be mismatches of phonetic, phonological, lexical, and social context.
Friday, November 11, 2016
4:00-5:30 PM
138 Hutchins Hall
Kevin McGowan is an Assistant Professor of Linguistics at the University of Kentucky. He will give a talk titled, "Subcategorical mismatches can be mismatches of phonetic, phonological, lexical, and social context."

Abstract
We listen to the world through a filter of our own phonetic, phonological, lexical, and social expectations. In this talk I will present a pair of experiments that attempt to tease apart theories of listeners’ expectations and the speech signal. In both experiments, listeners made lexical decisions in a semantic priming paradigm. In experiment 1, long and short voice onset times (VOT) were paired with citation and fast speaking rates to examine the extent to which longer VOTs are more canonical cues to a voiceless stop —more likely to facilitate semantic priming— than shorter VOTs across speech styles. The results of this study suggest that previous findings, which argue for the superiority of long VOT in evoking word-initial voiceless percepts in English listeners, are exaggerated by a mismatch between listener expectations established by a citation speaking rate. Experiment 2 is a follow-up experiment that investigates whether, as has been generally assumed, artificially shortened VOT results in a voiced, rather than voiceless, percept for word-initial stops even when all other coarticulatory phonetic cues in the word might indicate voicelessness. Listeners heard short VOT paired with citation speaking rates in words like ‘coal’ and made lexical decisions about either a voiceless percept consistent word (coal/mine) or a voiced percept consistent word (goal/score). Surprisingly, in this task, artificially short VOT paired with a citation speaking rate word frame resulted in strong semantic priming for voiceless percepts and only weak semantic priming for voiced percepts. Listeners are sensitive to the subcategorical mismatch in the stimuli, but even a clear VOT cue does not override the constellation of other coarticulatory cues to voicelessness when long VOT productions are absent from the task. 
The message of these experiments is that VOT, while a cue to the voiced/voiceless distinction in English word-initial stops, is neither a sufficient nor a necessary cue to perceiving this category difference (see also Lisker, 1996). Use of this cue depends on social information in the speech signal (implemented here with speaking rate), the lexical status of the word (whether it has a voiced lexical competitor), its relationship to other phonetic cues in the word, and extrinsic comparison to other productions by the same talker. I will argue that theories of speech perception and word recognition that do not take all of this contextual information into account during processing risk missing a fundamental property of what it means to be a human and what it is that native speakers know when they know a language.
Building: Hutchins Hall
Event Type: Lecture / Discussion
Tags: AEM Featured, colloquium, Discussion, Language
Source: Happening @ Michigan from Department of Linguistics