SAGE Journal Articles
Click on the following links. Please note that they will open in a new window.
Journal Article 12.1: Dahan, D. (2010). The time course of interpretation in speech comprehension. Current Directions in Psychological Science, 19(2), 121-126. doi:10.1177/0963721410364726
Abstract: Determining how language comprehension proceeds over time has been central to theories of human language use. Early research on the comprehension of speech in real time put special emphasis on the sequential property of speech, by assuming that the interpretation of what is said proceeds at the same rate that information in the speech signal reaches the senses. The picture that is emerging from recent work suggests a more complex process, one in which information from speech has an immediate influence while enabling later-arriving information to modulate initial hypotheses. “Right-context” effects, in which the later portion of a spoken stimulus can affect the interpretation of an earlier portion, are pervasive and can span several syllables or words. Thus, the interpretation of a segment of speech appears to result from the accumulation of information and integration of linguistic constraints over a larger temporal window than the duration of the speech segment itself. This helps explain how human listeners can understand language so efficiently, despite massive perceptual uncertainty in the speech signal.
Journal Article 12.2: Poeppel, D., & Monahan, P. J. (2008). Speech perception: Cognitive foundations and cortical implementation. Current Directions in Psychological Science, 17(2), 80-85. doi:10.1111/j.1467-8721.2008.00553.x
Abstract: Speech perception includes, minimally, the set of computations that transform continuously varying acoustic signals into linguistic representations that can be used for subsequent processing. The auditory and motor subroutines of this complex perceptual process are executed in a network of brain areas organized in ventral and dorsal parallel pathways, performing sound-to-meaning and sound-to-motor mappings, respectively. Research on speech using neurobiological techniques argues against narrow motor or auditory theories. To account for the range of cognitive and neural attributes, integrative computational models seem promising.
Journal Article 12.3: Rosenblum, L. D. (2008). Speech perception as a multimodal phenomenon. Current Directions in Psychological Science, 17(6), 405-409. doi:10.1111/j.1467-8721.2008.00615.x
Abstract: Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal speech information could explain the reported automaticity, immediacy, and completeness of audiovisual speech integration. However, recent findings suggest that speech integration can be influenced by higher cognitive properties such as lexical status and semantic context. Proponents of amodal accounts will need to explain these results.