Language understanding and Bayesian inference

NASSLLI 2014 @ UMD

Instructor: Daniel Lassiter, Stanford Linguistics

Contact: "dan" concatenated with "lassiter", then "at", "stanford", ".", and "edu".

Time & location: June 23-27, 2PM-3:30PM, Tydings 2102

Course description

For speakers, the goal of communication is to render some information accessible to the listener, usually with some specific worldly purpose in mind. For listeners, the goal is to reconstruct the information the speaker intended, with the aid of whatever background information is available about the language employed and about the speaker's beliefs and desires (including accessible features of the context of conversation). This process takes place in the presence of two irremediable sources of uncertainty: environmental noise, and uncertainty about an interlocutor's beliefs and desires. Formal linguistics and related fields have developed intricate and insightful theories which work well where only all-or-nothing reasoning (deductive inference) is relevant. However, cognitive science has largely moved beyond this paradigm, integrating logic with probabilistic models designed to explain how humans acquire richer information than could be inferred by deduction alone. This course is an overview of a new and exciting line of research which integrates formal linguistics with a cognitively motivated approach to communication and inference.

We'll start with Bayesian inference using structured models, the best available theoretical account of uncertain inference. We'll reconstruct basic probability theory and Bayesian inference from the ground up, treating it as an intensional logic of the sort familiar to many NASSLLI students. We'll discuss psycholinguistic evidence that speakers and listeners model and adjust to noise in the communication channel, and evidence that speakers and listeners actively model each other's knowledge states and adjust their productions and inferences accordingly (at least sometimes). This will lead into a reconsideration of modularity from the perspective of a Bayesian noisy-channel model, distinguishing a potentially well-motivated version which focuses on paths of information flow from an empirically unmotivated version which makes claims of inferential isolation between modules.
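To fix ideas, here is a minimal sketch (not from the course materials) of noisy-channel inference via Bayes' rule: a listener infers the intended word w from a perceived signal s, with P(w | s) proportional to P(s | w) x P(w). The words, priors, and confusion probabilities below are invented purely for illustration.

```python
# Hypothetical prior over intended words (e.g., from word frequency).
prior = {"cat": 0.6, "cap": 0.3, "cut": 0.1}

# Hypothetical likelihoods P(perceived "cat" | intended w): phonetically
# similar words are confusable in a noisy channel.
likelihood = {"cat": 0.8, "cap": 0.15, "cut": 0.05}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {w: prior[w] * likelihood[w] for w in prior}
Z = sum(unnormalized.values())
posterior = {w: p / Z for w, p in unnormalized.items()}

print(posterior)  # the listener's belief about the intended word
```

Even though "cat" was already the most frequent word, the perceptual evidence sharpens the listener's belief in it considerably; the same arithmetic can instead favor a lower-frequency word when the evidence for it is strong enough.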

The second half of the course will focus on the parts of this process which pertain most directly to issues of interest in semantics and pragmatics. We will go through the architecture for language understanding proposed by Goodman & Lassiter (2014) and show how it integrates probabilistic reasoning into the compositional semantics and explains inferential interactions between semantics and world knowledge while maintaining architectural modularity. Finally, we will discuss scalar implicature, vagueness, question interpretation, and several other pragmatic phenomena, showing that they can be understood as belief-desire inferences generated by meta-reasoning about the communication process.
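As a toy illustration of this kind of meta-reasoning, here is a sketch of scalar implicature ("some" implicating "not all") in the style of rational-speech-act models of Bayesian pragmatics. The two worlds, two utterances, and the rationality parameter are assumptions chosen for a minimal example, not the course's own formulation.

```python
worlds = ["some-not-all", "all"]
utterances = ["some", "all"]
alpha = 1.0  # speaker rationality parameter (assumed)

def true_in(u, w):
    """Literal truth conditions: 'some' is true in both worlds,
    'all' is true only in the all-world."""
    return u == "some" or w == "all"

def L0(u):
    """Literal listener: uniform over worlds where u is true."""
    ws = [w for w in worlds if true_in(u, w)]
    return {w: (1.0 / len(ws) if w in ws else 0.0) for w in worlds}

def S1(w):
    """Pragmatic speaker: prefers utterances that make the literal
    listener assign high probability to the actual world."""
    scores = {u: L0(u)[w] ** alpha for u in utterances}
    Z = sum(scores.values())
    return {u: s / Z for u, s in scores.items()}

def L1(u):
    """Pragmatic listener: Bayesian inference over the speaker,
    with a uniform prior on worlds."""
    scores = {w: S1(w)[u] for w in worlds}
    Z = sum(scores.values())
    return {w: s / Z for w, s in scores.items()}

print(L1("some"))  # favors the some-but-not-all world
```

The pragmatic listener reasons: had the speaker been in the all-world, "all" would have been the better utterance; hearing "some" therefore shifts belief toward the some-but-not-all world, even though "some" is literally true in both.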

Schedule

Date | Topic | Slides
Monday, June 23 | Course overview; probability and Bayesian inference | Slides
Tuesday-Wednesday, June 24-25 | Noisy channels, audience design, modularity & Bayesian inference | Slides
Thursday, June 26 | Anchoring interpretation in world models | Slides
Friday, June 27 | Bayesian pragmatics | Slides

References