Abstract
How can anticipatory neural processes structure the temporal unfolding of context in our natural environment? Here we provide evidence for a neural coding scheme that sparsely updates contextual representations at event boundaries and gives rise to a hierarchical, multi-layered organization of predictive language comprehension. Training artificial neural networks to predict the next word in a story at five stacked timescales and then using model-based functional MRI, we observe a sparse, event-based "surprisal hierarchy". The hierarchy evolved along a temporo-parietal pathway, with model-based surprisal at the longest timescales represented in inferior parietal regions. Along this hierarchy, surprisal at any given timescale gated bottom-up and top-down connectivity to neighbouring timescales. In contrast, surprisal derived from a continuously updated context influenced temporo-parietal activity only at short timescales. Representing context in the form of increasingly coarse events constitutes a network architecture for making predictions that is both computationally efficient and semantically rich.

### Competing Interest Statement

The authors have declared no competing interest.
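The central quantity in the abstract, surprisal, is the negative log probability a next-word prediction model assigns to the word that actually occurs. As a minimal illustrative sketch (not the authors' pipeline; the word probabilities below are hypothetical stand-ins for a trained model's output), surprisal can be computed as:

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal of an observed event in bits: -log2 p."""
    return -math.log2(prob)

# Hypothetical model output: p(next word | story context) for candidate words.
# A real model (e.g. one trained per timescale, as in the paper) would supply these.
next_word_probs = {"dog": 0.5, "cat": 0.25, "telescope": 0.0625}

surprisals = {word: surprisal(p) for word, p in next_word_probs.items()}
# Less probable continuations carry higher surprisal:
# p = 0.5   -> 1.0 bit
# p = 0.0625 -> 4.0 bits
```

In the study's scheme, such surprisal values, computed from models with differently sized context windows, serve as regressors for the fMRI analysis at each timescale.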
Original language | English |
---|---|
Journal | bioRxiv |
DOIs | |
Publication status | Published - 12.2020 |
Strategic Research Areas and Centers
- Research focus: Brain, Hormones, Behavior - Center for Brain, Behavior and Metabolism (CBBM)