Current psycholinguistic models suggest that we know what we want to say before we decide how to say it: in speaking, word meaning is activated before information about syntax and phonology. Listening likely involves the reverse order of processes: phonological processing before meaning activation. We examined the relative time courses of phonological and semantic processing during language production and comprehension using event-related brain potentials (ERPs). Participants viewed a series of pictures (with the instruction to covertly name the depicted item), or heard a series of words, and made dual-choice Go/NoGo decisions based on each item's conceptual features (whether the item was an animal or an object) and phonological features (whether the item's German name started with a vowel or a consonant). During picture naming, the N200 component (related to response inhibition) indicated that conceptual processing preceded phonological processing by about 170 ms. During auditory word processing, on the other hand, the brain activity related to these two aspects of comprehension showed some temporal overlap, with the N200 to phonological processing preceding that to semantic processing by only about 85 ms. In sum, the data are compatible with current psycholinguistic models of speech production and comprehension and argue for serial or widely spaced cascaded processing during production but more parallel processing of information during comprehension.