Intonation plays an integral role in comprehending spoken language. We use mouse tracking to investigate two questions: (a) how listeners draw predictive inferences based on intonational information, and (b) how listeners adapt their online interpretation of intonational cues when these cues are reliable or unreliable. Listeners can rapidly integrate intonational information to predictively map a given pitch accent onto the speaker's likely referential intentions. We formulate a novel Bayesian model of rational predictive cue integration and explore predictions derived under a concrete linking hypothesis relating a quantitative notion of evidential strength of a cue to the moment in time, relative to the unfolding speech signal, at which mouse trajectories turn towards the eventually selected option. In order to capture rational belief updates after concrete observations of a speaker's behavior, we formulate and explore an extension of this model that includes the listener's hierarchical beliefs about the speaker's likely production behavior. Our results are compatible with the assumption that listeners rapidly and rationally integrate all available intonational information, that they expect reliable intonational information initially, and that they adapt these initial expectations gradually during exposure to unreliable input. All materials, data, and scripts can be retrieved here:

One long-standing question of linguistic research is how listeners map a speech utterance onto its intended meaning as rapidly and accurately as they do. This is not a trivial achievement because listeners have to integrate information from many different sources. The intended meaning of an utterance depends not only on what we say, that is, which words we use and how we combine them, but also on how we say them. For instance, we use intonation, that is, the modulation of fundamental frequency (f0) across the utterance, to encode sentence structure, illocutionary acts, and postlexical discourse relationships (e.g., Cruttenden, 1997; Cutler, Dahan, & Van Donselaar, 1997; Dahan, 2015; Gussenhoven, 2004; Ladd, 2008, among many others). Yet, despite its important role in human communication, we have only limited knowledge about how listeners process intonation in order to recognize what a speaker intends to say.

A central concern for a theory of intonation-based intention recognition is how intonation is mapped onto discourse functions. Some authors have proposed a direct mapping of acoustic parameters onto discourse functions (e.g., Cooper, Eady, & Mueller, 1985; Fry, 1955), while others have proposed a mediating level of abstract phonological representations. For example, in the Autosegmental-Metrical model of intonation (e.g., Beckman & Pierrehumbert, 1986; Grice, 1995; Gussenhoven, 1984; Ladd, 2008; Pierrehumbert, 1980, among many others), an intonation contour is composed of tonal events located in structurally privileged positions.
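The "rational belief updates" after observations of a speaker's behavior can be illustrated with a minimal Beta-Binomial sketch. This is an assumption for illustration only, not the paper's actual hierarchical model: the listener starts with an optimistic prior about cue reliability and gradually shifts that belief as unreliable productions are observed.

```python
# Hypothetical sketch (not the authors' model): a listener's belief about
# how reliably a speaker uses an intonational cue, updated via conjugate
# Beta-Binomial Bayesian updating after each observed trial.

def update_reliability_belief(alpha, beta, cue_matched_referent):
    """Update a Beta(alpha, beta) belief about cue reliability after one trial.

    cue_matched_referent: True if the speaker's pitch accent pointed to the
    referent it conventionally signals (a 'reliable' observation).
    """
    if cue_matched_referent:
        return alpha + 1, beta
    return alpha, beta + 1

# Optimistic prior: listeners initially expect reliable intonational cues.
alpha, beta = 8.0, 2.0          # prior mean reliability = 0.8

# Gradual adaptation during exposure to a mostly unreliable speaker:
for observation in [False, False, True, False, False]:
    alpha, beta = update_reliability_belief(alpha, beta, observation)

posterior_mean = alpha / (alpha + beta)   # expected cue reliability
print(round(posterior_mean, 3))           # prints 0.6
```

The key qualitative behavior matches the results described above: with a strong optimistic prior, a single unreliable observation barely moves the belief, while repeated exposure to unreliable input shifts it gradually.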