Issue 2013/02/15

Djalali Today on Ranking

Come to the Greenberg room at 12:15 today for the Phonetics and Phonology workshop. Our very own Alex Djalali will be talking about his work on “A constructive solution to the ranking problem in Partial Order Optimality Theory”.

I give a solution to the ranking problem in Partial Order Optimality Theory (PoOT), which can be stated as follows: Allowing for free variation, given a finite set of input/output pairs, i.e., a dataset, that a speaker knows to be part of some language, how can one learn the set of all PoOT grammars under some constraint set compatible with that dataset?

For an arbitrary dataset, we provide set-theoretic means for constructing the set of all PoOT grammars compatible with that dataset. Specifically, we determine the set of all strict orders of constraints that are compatible with that dataset. As every strict total order is in fact a strict order, our solution is applicable in both PoOT and classical Optimality Theory (COT), showing that the ranking problem in COT is a special instance of a more general one in PoOT.
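As a rough illustration of what "compatible" means here, the sketch below (a hypothetical Python toy, not the construction from the talk) brute-forces the classical, total-order case: it enumerates all strict total orders over an invented three-constraint set and keeps those under which every observed input/output pair in the dataset is optimal. The constraint names, candidates, and violation counts are made up for illustration.

    from itertools import permutations

    # Toy constraint set and candidate violation profiles (all values are
    # invented for illustration, not taken from the talk).
    CONSTRAINTS = ["Max", "Dep", "NoCoda"]

    # candidates[input] = {output: {constraint: number of violations}}
    candidates = {
        "/pat/": {
            "pat":  {"Max": 0, "Dep": 0, "NoCoda": 1},
            "pa":   {"Max": 1, "Dep": 0, "NoCoda": 0},
            "pata": {"Max": 0, "Dep": 1, "NoCoda": 0},
        },
    }

    # The dataset: input/output pairs the learner knows to be in the language.
    dataset = [("/pat/", "pa")]

    def winners(inp, ranking):
        """Outputs that are optimal for `inp` under the strict total order `ranking`."""
        profile = lambda out: tuple(candidates[inp][out][c] for c in ranking)
        best = min(profile(out) for out in candidates[inp])
        return {out for out in candidates[inp] if profile(out) == best}

    def compatible(ranking):
        """A ranking is compatible if every observed output is optimal under it."""
        return all(out in winners(inp, ranking) for inp, out in dataset)

    for ranking in permutations(CONSTRAINTS):
        if compatible(ranking):
            print(" >> ".join(ranking))   # here: Dep >> NoCoda >> Max and NoCoda >> Dep >> Max

The talk's constructive solution presumably avoids this exponential enumeration and handles strict partial orders directly, where (on the standard PoOT view) an output counts as optimal if it wins under at least one linearization of the order.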

Rubinstein Colloq Tuesday

Come to the Greenberg room on Tuesday at noon to hear Aynat Rubinstein (Georgetown) give a colloquium on “Necessity and comparison: The view from modality and mood”.

The ability to compare possibilities and to designate some as better than others is a fundamental aspect of our use of modal words and verbs of propositional attitude. When placed under an obligation or when expressing our desires, we designate as “better” those possibilities in which the obligation is met or the desire fulfilled as closely as possible.

While comparison of possibilities is part and parcel of modal semantic theory, two lines of research have argued that not all modals and attitude verbs are created equal in this respect. Among the modals, Sloman (1970) proposed that weak necessity ‘ought’ is comparative whereas strong necessity ‘must’ is not. Among attitude verbs, subjunctive-selecting attitudes in Romance have been argued to invoke comparison, unlike those that are indicative-selecting (e.g., Villalta 2008).

This talk offers to reconcile these claims about necessity and mood with the common assumption that all priority-based modal expressions involve comparison of possibilities. Building on my analysis of weak necessity (Rubinstein 2012) and recent joint work on verbal mood (Portner and Rubinstein to appear), I propose that true comparison reflects dependency on assumptions that are still negotiable, or “up for discussion” in a conversation. The split between the negotiable and the non-negotiable makes room for strong, non-comparative, normative modalities and attitudes and suggests a new dimension of the context-dependency of these expressions.

Piantadosi, Wednesday and Thursday

Steven Piantadosi (Rochester) will be here both Wednesday and Thursday, for a Psychology colloquium and a cognitive seminar respectively. The colloquium will be on Wednesday from 3:45-5pm in 420-041 and will be entitled “A rational approach to language” (abstract below). The cognitive seminar will be on Thursday, in Varian Physics room 102/103, from 10-11:30am. The seminar is on “A computational perspective on language acquisition and design”.

A rational approach to language
Jordan Hall 420-041, Feb 20, 3:45-5pm
I’ll present an overview of my research studying rational models of language form. I’ll argue that specific features of human language, such as the variation in word lengths and the presence of ambiguity, can be understood as information-theoretically efficient solutions to communicative problems. I’ll also discuss current experiments testing this general approach and present evidence that sentence processing mechanisms make sensible communicative inferences in decoding language across a noisy channel. These projects suggest that the cognitive mechanisms supporting human language are well-structured for solving problems of communication.

A computational perspective on language acquisition and design
Varian Physics 102/103, Feb 21, 10-11:30am
I’ll describe my computational and experimental work on language learning. I’ll discuss two primary lines of research that both focus on how learners might discover abstract aspects of language, including number words and quantifiers. In each domain, I’ll argue that the key aspects of meaning are not directly observable by learners, and that the inductive challenge this poses is best solved by statistically well-formed models that operate over the domain of rich semantic representations. I’ll show how such learning models can solve acquisition problems in theory, describe well the inferences made by children and adults, and lead to compelling developmental predictions. I’ll then discuss current and ongoing experiments with infants and toddlers testing the core assumptions of these learning models.

Constant’s Elicitation Tool Thursday

Come to the Greenberg room on Thursday at 3pm for a 30 minute discussion and demo of one of Noah Constant’s (UMass, Amherst) projects in development. The aim is to create a freely editable database of linguistic examples and discussion, organized around eliciting answers to specific questions across a range of languages. He will contrast this system with existing tools like the World Atlas of Language Structures (WALS), and will solicit feedback on directions for future work.

Regier for Cognition and Language, Thursday

The next in this quarter’s Cognition and Language series will be Terry Regier (Berkeley). Come to CSLI, Cordura 100 at 3pm on Thursday 2/21 to hear his talk on “Word Meanings Across Languages Reflect General Communicative Principles”.

A central question in cognitive science is why languages have the semantic categories they do. Word meanings vary widely across languages, but this variation is constrained: words with similar or identical meanings often appear in unrelated languages. I will argue that this wide but constrained variation reflects the functional need for efficient and informative communication. I will present a series of computational simulations illustrating how these ideas can account for cross-language variation in three semantic domains: color, spatial relations, and kinship.

Constant colloquium next Friday (2/22)

Please join us in the Greenberg Room next Friday, February 22, to see Noah Constant (UMass) give a colloquium on “Deriving the diversity of contrastive topic realizations”.

Information structural notions like topic/focus, given/new and contrastive/non-contrastive have a diverse range of effects on sentence structure and pronunciation. In this talk, I look at Contrastive Topic (CT) constructions, and present a novel account of their meaning and structure that can make sense of the range of CT marking strategies attested in the world’s languages. I will cover languages that mark CT prosodically (e.g. English), those that employ a discourse particle (e.g. Mandarin), and those that have a dedicated CT position in the syntax (e.g. Czech).

A typical example of contrastive topic is given in (1). The object is pronounced with falling prosody, marking ‘the beans’ as the answer to the question of what Fred ate. The subject, on the other hand, bears a distinct rising contour, marking ‘Fred’ as a contrastive topic. The effect is to imply additional questions about what other people ate.

(1) (What about FRED? What did HE eat?)
FRED … ate the BEANS.

I review Büring’s (2003) account of CT and point out several challenges for it—for example, it doesn’t extend to CT questions (attested in Japanese) and it fails to account for effects of CT marking on word order and prosodic phrasing. In its place, I introduce a new model of contrastive topic that posits a Topic Abstraction operator in the left periphery, and defines CT as the focus associate of this operator. In English, the abstraction operator is lexicalized as a tonal clitic to an intonational phrase. The influence of information structure on phrasing is captured via a scope-prosody correspondence constraint requiring the operator and its associate to be realized within a single prosodic domain.

The topic abstraction account is supported by a range of typologically diverse data. For one, it provides a simple way of understanding the possibility of dedicated CT positions in the syntax. Additionally, the account predicts the existence of CT morphemes that occur at a distance from the topic phrase itself, which are attested in Mandarin and Paraguayan Guaraní.

Linguistic Levity

Are you suffering from Finno-Ugric language deficiency? Then you should ask your doctor about Finnexia, proven to help you learn Finnish faster.