LINGUIST 143: Sign Languages
Times MW 11:00–12:15
Location 110-114
Instructor Kathryn Flack Potts
Office hours By appointment [appointment sign-up]
Textbook Linguistics of American Sign Language: 4th edition
Notes on readings "LASL" refers to the textbook.
Readings not from the textbook are linked from this page.

Readings may be required (R1, R2, etc.) or optional (O1, O2, etc.). Optional readings may give additional background, especially on concepts in linguistics; they may also expand on the main ideas of the required readings.
Course description The linguistic structure of sign languages. How sign languages from around the world differ, and what properties they share. Accents and dialects in sign languages. How sign languages are similar to and different from spoken languages. How and why sign languages have emerged.
Syllabus
Other resources
  • Class presentations: [schedule a meeting] [preparation worksheet] [rubric]
  • Cathy Haas: Stanford ASL instructor (260-302A)
  • ASL alphabet, numbers
  • Online ASL video dictionary
Topics
Each class meeting below lists topics, readings, and other assignments due.
M 3/28 Introduction:
deafness vs. Deafness; Signed English vs. American Sign Language vs. signed languages

Make sure you're familiar with the syllabus and the assignment schedule for the next couple weeks.

Lower-case "deaf" describes a medical condition; upper-case "Deaf" describes a cultural identity. People who can't hear but communicate via speech and lip-reading, and identify with hearing culture, are "deaf"; those who (still can't hear but) communicate via ASL (or another sign language), have a social community primarily involving other Deaf people, and identify with Deaf culture are "Deaf".

There are lots of sign languages in the world; these are fully grammatical languages, typically unrelated to the spoken languages they coexist with. ASL is the most common sign language in the US. It is quite different from Signed English (SE), which uses (roughly) ASL vocabulary but arranges these signs in English word order; this is the signing you see when someone is simultaneously signing and speaking. SE is a functional form of communication, but it differs from natural languages (whether signed or spoken) in basic ways.

Translating songs into some version of sign involves lots of complex issues, like the disco stick problem.

W 3/30 Sign language linguistics

We know something (signed or spoken) is a natural human language if it can be acquired faithfully by children. This tends to correlate with other properties, but basically, languages are things babies can learn.

Arguing convincingly that something is a language is a more complex issue; people both inside and outside a speech community can be resistant to believing that their language is systematic. Early descriptions of a language's structure are very often linguistically and socially/culturally complex projects, with important potential benefits and also noteworthy risks. The conflicts that arose in the early days of ASL linguistics commonly arise in many minority language situations.

(R1) LASL: Defining language (1-14)
(R2) LASL, Battison: Signs have parts (230-241)
(R3) Padden: Folk explanation in language survival

(O1) LASL: Files 1.3, 1.4 (218-229)
(O2) Eastman, From student to professional
(1) Info sheet
(2) Reading reaction: a couple of thoughtful paragraphs responding to the readings (interesting points, questions, challenges, etc.)
M 4/4 Sign phonology

I know it's hard to follow the pictures and descriptions if you don't sign; the video dictionary can help, and feel free to ask for demonstrations in class. But mostly, the parameters of the linguistic system (types of features; kinds of rules on combinations) are more important than the details of individual signs, features, etc.

All meaningful units of language (signs, spoken words, morphemes anywhere) are composed of smaller meaningless units (features, phonemes). Minimal differences (of a single phoneme, or single feature) are used to identify which components are part of the grammatical system, but listing exactly which features must be specified to identify e.g. a phoneme, or how many values a feature can possibly have, is difficult, in part because different levels of analysis are relevant to different questions. This is part of the reason why so many people disagree on how many handshapes ASL has; it can also be hard to count English consonants, or possible oral places of articulation.

It's hard to find perfect correspondences between spoken features/phonemes/morphemes/words and similar levels of description in signs; one reason is that signed languages often express many independently meaningful elements simultaneously, whereas spoken languages tend to build words from a linear sequence of parts. A change in e.g. signed movement can be a change between two words, like a phonological feature, or can be a morphological change to the root word, like a spoken suffix. This is a case where signed and spoken languages are made of elements that are abstractly similar in many ways, but also show some fundamental differences in how they are commonly assembled.

(R1) LASL: Signs have parts (17-22)
(R2) LASL: Sequentiality (28-33)
(R3) LASL: Battison, Analyzing signs (193-212)

(O) LASL: Files 3.1, 3.2, 4.1, 4.2 (258-272)
Reading reaction
W 4/6 Sign phonology

Simplifying enormously, in many ways: Stokoe focused largely on phonetic description: what are the features and parameters that make up each sign? Battison elaborated this system, and introduced many phonotactic restrictions (aka Morpheme Structure Constraints): not all features can combine with all others; what are the limits on possible signs? Liddell and Johnson refined both of these descriptions further, and added a focus on phonological processes: when you put these signs next to each other in sentences, how can one cause a change in another? Morphological processes change signs in similar ways to phonological processes, but these change the meaning of the root sign, whereas phonology only changes the surface form.

L&J's other major contribution was the Movement-Hold model. The surface reality of the model is arguable (e.g., just how movement-less are "holds"?), and its psychological reality would need to be determined by experimentation (i.e., do signers cognitively distinguish movements from holds?). Even so, the model captures insights about the different basic kinds of specifications that signs can have, and it has great value for linguistic analysis: it allows phonological and morphological processes to be described much more straightforwardly.

From Lydia's paper: many deaf people don't get natural language input in early childhood, for a variety of social reasons. This has important, lasting cognitive/linguistic effects. Early sign exposure and cochlear implants both arguably address this issue, and the choice between them raises lots of other difficult questions.



Presentation: Lydia
(R) LASL: Liddell & Johnson, ASL: The phonological base (280-319) Focus on what it means to be a "phonological process" in ASL -- how is this different from simply describing features and their combinations? Feel free to skim the first half of the paper.

(O) from Lydia: Mayberry (1993) First-language acquisition after childhood differs from second-language acquisition
Short paper 1 [rubric]
M 4/11 Iconicity

Sign language linguistics typically ignored iconicity in early arguments that signed languages are "real" languages; now, though, there is increasing attention to questions of how iconic signs are, how much use different populations can make of iconicity, where it came from, and whether it contributes to other shared properties common to most or all sign languages. Signs are typically seen as more transparently iconic by people who already know their meanings, and experience using one sign language seems to help people guess word meanings in another sign language, suggesting that signers may have better access to iconic relationships. Shared cultural background also helps with this task -- non-signing Italians can guess the meanings of Italian Sign Language words better than other non-signing Europeans can. Unrelated sign languages tend to have more accidentally shared words than unrelated spoken languages do, potentially because of iconicity.

Beyond purely linguistic descriptions of iconicity, this is a crucial concept for many educational and other applications of sign languages. Iconicity, unsurprisingly, aids in second-language acquisition of ASL. A test of ASL vocabulary ("which picture represents the meaning of this sign?") is inaccurate if iconicity helps subjects respond correctly, but iconicity is valuable in designing a manual system for people with difficulty communicating.

(R1) choose one: Sandler & Lillo-Martin or Pizzuto & Volterra
(R2) as assigned in class: White & Tischler (LP, AK), Griffith & Robinson (JF, MD), or Beykirch, Holcomb, & Harrington (LS, AJ, CF)

(O) LASL: File 1.4 (224-229)
Come prepared to describe the basic questions, methods, and conclusions of your assigned paper, and how it relates to basic ideas about iconicity discussed in your chosen background paper.
W 4/13 Syllables; phonology wrap-up

All languages have syllables. In spoken languages, these typically have a vowel (or something similar) in the middle, and maybe some consonants on the edges. Syllables are organizing units for phonology, bigger than features/segments but smaller than words, and rules can pick out syllables as targets (e.g. for adding or deleting parts of words, placing stress, copying something, etc.). While languages vary in what sounds they'll allow in syllables, any languages allowing the same string of sounds will syllabify it in the same way. We know that auditory nerves respond most strongly to loud things that follow quiet things; by organizing speech as alternations between quieter consonants and louder vowels, syllables also seem to make spoken language particularly easy to perceive.

Sign language phonologists generally agree that sign languages have syllables, but there's lots of debate over whether they have internal structure parallel to that found in spoken syllables, or whether they enhance perceptibility in any parallel ways. Most linguists agree that each path movement (of a hand from one location to another), and each still hand with 'internal movement', counts as an individual syllable, with non-moving still hands incorporated as syllable edges. Most signs, certainly in ASL, are monosyllabic, though some -- especially compounds -- are disyllabic. Phonological rules can target syllables; for example, reduplication targets the whole sign when it's monosyllabic, but only the second syllable in a disyllabic sign. In signs with more than one syllable, stress is also predictably assigned to individual syllables.

Presentation: Lena
(O) from Lena: Zhao et al. A machine translation system from English to ASL
M 4/18 Use of space in syntax

Basic orientation to ASL morphosyntax, especially with respect to verbs, subjects, and objects. The basic ASL word order is SVO; this can be disrupted by various syntactic processes (which can occur on their own or together), many of which also add nonmanual morphological marking on an argument. Locations can be used phonologically, morphologically, or in ways combining these. Location is lexically specified in plain verbs, and adds morphological information in indicating verbs.

All languages use word order, argument marking (e.g. case), and/or verb agreement to identify subjects and objects. Signed languages use very little argument marking as compared to spoken languages -- much less even than English, where only pronouns have case. While signed pronouns don't change their form to indicate subject vs. object, they do change their form much more than spoken pronouns do to pick out individual referents. (In English, I'd use the same word "she" to refer to any of 20 women in a room; in ASL, these would be 20 slightly differently directed pointing signs.) While referential uses of space are tightly integrated into the linguistic systems of signed languages -- incorporated into verb forms, and used in pronouns -- Liddell points out that they are fundamentally different from anything that happens in spoken language (though similar to other aspects of spoken communication), raising questions about whether spatial referents are truly linguistic or more gestural.

(R) LASL: Space in ASL, Verbs in ASL, Simple sentences in ASL (74-88)

(O) Liddell, Indicating verbs and pronouns (365-377)
Reading reaction
If the LASL chapters are old news and hard to say anything interesting about, take a look at the first 5 or so pages of Liddell.
W 4/20 Verb agreement

Sign languages virtually all use spatial locations to distinguish referents, and most have at least some verbs which show their subjects and/or objects by moving manually between these locations. Agreement can also be marked in ways other than manually on verbs, and can have major effects on other aspects of syntax.

In ASL, while only some signs allow manual S/O agreement, Bahan et al. argue that all signs may take optional non-manual agreement (head tilt towards S; eye gaze towards O). Further, they claim that null arguments (S/O which aren't explicitly stated) are allowed only if they are explicitly agreed with. So while verbs with manual agreement (e.g. GIVE) may always drop S/O, "non-agreeing" verbs (e.g. LOVE) may only drop S/O if non-manual agreement is present. Some of their data is controversial, like the claims about "neutral agreement", but the basic idea that morphological agreement and other syntactic processes may interact is something that happens in lots of languages, signed and spoken.

Sign language agreement systems can be organized in ways quite different from ASL. Sandler and Lillo-Martin describe how different sign languages use auxiliaries -- meaning-free words that carry S/O and other agreement morphology; these can be optional or obligatory, and can occur with all verbs or only with those that don't show agreement. These can also have effects on other parts of the sentence, from 'stealing' agreement from other verbs to licensing word movement processes. Brazilian Sign Language has a particularly elaborate system of auxiliary patterning.

Presentation: Ariel
(R) as assigned: Bahan et al. (CF, AJ, LP, LS) or Sandler & Lillo-Martin, ch. 19 (MD, JF, AK).
Notes on what to focus on vs. skip are on the first page of each paper.

(O) from Ariel: Zeshan (2004) Negative constructions in sign languages
Come prepared to describe the data in your paper, focusing on the key question(s) on the first page of each.

Short paper 2
M 4/25 Word formation

Part or all of a new word in a sign language can be borrowed from another sign language, or from aspects of the written or spoken forms in the local spoken language (as fingerspellings or mouthings). These borrowed forms change in predictable ways, though to greater or lesser extents, as they are adapted to the native grammar (phonological, morphological, etc.) of the borrowing language. In this way, word formation can provide compelling dynamic evidence for the productivity of phonological generalizations.

Mouthings, especially lexically specified mouth movements borrowed from spoken language, are used much more in other sign languages than in ASL. Fingerspelling is used much more in ASL than in other sign languages. In general, the relationships between sign languages and local spoken languages are complex, often with lots of effects on the sign language, though much more on vocabulary than on grammar.

Numeral incorporation involves bound roots, unspecified for handshape, and morphologically meaningful handshapes (though these may be identical to meaningless handshapes that are lexically specified in other signs). These combinations of bound morphemes are in some ways similar to classifier constructions, which we'll discuss next time.

(R1) LASL: Fingerspelling & loan signs, Numeral incorporation (62-72)
(R2) Sandler & Lillo-Martin, ch. 6
Focus on 6.3, as we haven't done classifiers yet.
Reading reaction
W 4/27 Classifiers

Classifier constructions are a challenge to linguistic theory in that they are phonologically, morphologically, syntactically, and semantically quite different from other morphemes, words, and phrases in sign languages. In ASL, while the handshapes range from fairly to extremely grammatical and categorical, movements are much freer: like pronouns, a particular 'process movement' is often quite hard to define phonologically, and quite literal/iconic in its representation of the movement being described.

The closest spoken language parallel is verbal classifier affixes, which are thought to originate from noun incorporation. Classifiers in speech can reflect very similar semantic and physical properties and share some other grammatical aspects with signed classifiers, but there are also many grammatical differences between typical classifier systems in sign and speech. These differences, along with the observation that all sign languages have robust classifier systems, indicate that this is a major domain of modality-based grammatical differences.

All sign languages have classifiers, and they tend to behave quite similarly, though the details of languages' classifier systems do vary. It has been proposed that classifiers originate in the kind of pantomime often used in homesign systems, following from observations that new sign languages tend to use mostly handling classifiers while more established sign languages use entity classifiers more often.

Presentation: Aaron
(R1) LASL: Classifier predicates (90-98)
Key questions: What kinds of information can be conveyed by “Movement Roots” and “Classifier Handshapes”? What does it mean (generally) to say these combine into “predicates”?
(R2) Sandler & Lillo-Martin, ch. 5
See reading notes on the first page.

(O1) Riekehof: more detailed pictures and explanations of classifiers
(O2) Sandler & Lillo-Martin, 6.1-6.2 (from last time)
(O3) from Aaron: Emmorey, The impact of sign language use on visuospatial cognition (also optional for next time)
Reading reaction
M 5/2 No class: KFP away
W 5/4 Space, language, and cognition

Space is incorporated into signed languages in the grammatical ways we've discussed before, and also in verbal descriptions of how objects are arranged in space. In these cases, signers' use of left-right and front-back bears some literal relationship to the described objects' L-R/F-B positions, but "left" can be either the signer's left or the addressee's left. (Some signs, e.g. the "vehicle" classifier, also contain inherent spatial information -- one end of the sign is always the front of the car -- which further complicates how signs can accurately reflect spatial information.) Signers vary in which perspective they produce; to interpret descriptions accurately, both signers and addressees must be able to mentally rotate images to understand spatial relationships from other physical perspectives. This visual rotation is, of course, not something that spoken languages make use of.

Signers -- native or not, Deaf or hearing -- have much stronger nonlinguistic mental rotation abilities than nonsigners, likely as a result of this linguistic practice; here, linguistic experience with a signed language has a demonstrable effect on nonlinguistic cognition. Similar language-modality effects are found for other cognitive abilities (e.g. face/eye discrimination), while other cognitive abilities vary instead with deafness (e.g. visual attention). Language also affects approaches to and success with other linguistic tasks; ASL signers describe spatial arrangements differently, and more efficiently, than English speakers.


Presentation: Melissa
(R) LASL: Emmorey, The confluence of space and language (336-364)
Focus on sections 5.1.5-5.3

(O1) Emmorey, The impact of sign language use on visuospatial cognition
(O2) from Melissa: Nonaka, The forgotten endangered languages
Short paper 3
M 5/9 Acquisition

Acquisition of spoken and signed languages proceeds at almost identical rates, through parallel stages. All children begin babbling both manually and vocally, though children quickly focus on babbling in the modality (or modalities) where they are exposed to language. Babbling is structurally distinct from all other vocal and motor activity, and is often used in conversational patterns, even though it's prelinguistic.

Child-directed speech and sign are fundamentally quite similar, though signers talking with young children will often make extra efforts to sign in the location of, or on, the object the adult is talking about, so the child doesn't have to split visual attention between the words and their referents.

While children learning sign produce gestures that resemble many aspects of the grammar of ASL (negative head-shakes, furrowed brow for puzzlement, pointing, moving gestures between referents), these communicative abilities aren't incorporated into children's ASL productions until much later; parallels between gesture and sign grammar don't speed up the acquisition of sign.

Also, Petitto and Holowka note, "it is clear that when methods employ a variety of sources and populations and when bilingual babies’ two languages are taken into consideration, we see that babies know that they are acquiring two distinct languages from the onset of language production and that they acquire each of their languages without fundamental language delay or language confusion."

(R) Emmorey, Sign language acquisition
Focus on pp. 169-190

(O) Petitto, Biological foundations of language
Reading reaction
W 5/11 Gesture

"Gesture" can refer to a wide variety of kinds of things, including non-iconic gesticulation, illustration, pantomime, and emblematic conventional representations. These are, for the most part, all attested in both vocal and manual modalities, though there are some relatively subtle differences in how they can interact with speech vs. sign.

There is probably a clear line between gesture and language, but it is very hard to identify concretely. Various kinds of gesture (manual, vocal, or otherwise) can be more or less conventionalized, can have structures that are more or less similar to language, and interact with simultaneous language in various ways. Gestures of any sort tend to be iconic, and to represent aspects of the meaning of the language (or other communication) that they co-occur with, though this iconicity may be quite abstract.

Apes can be charming and very clever, but they are not very good at learning human languages.


Presentation: Chris
(R) Emmorey, Do signers gesture?

(O1) McNeill: Gesture & thought (5-12)
(O2) from Chris: Gardner & Gardner, Teaching sign language to a chimpanzee
Reading reaction
M 5/16 Homesign systems

Linguists have approached many communication systems with the question, "Is this a real language?" This is typically addressed by looking for kinds of structures (phonological, morphological, syntactic) that are typical of languages. All of these structures are found in natural sign languages, some in homesign systems, and few if any in apes' attempts at learning sign languages.

Homesigners develop a number of remarkably language-like properties in their communication, despite their lack of linguistic input; their acquisition path is also remarkably similar to that for the acquisition of full languages. Homesigners use their own productions as input to their linguistic 'decoder', finding generalizable patterns e.g. in their own invented words and generalizing these into morphology-like patterns.

It's important to remember that homesign systems aren't fully linguistic, despite their resemblance to language, and don't fully satisfy the brain's need to acquire a natural language during the critical period. And while these children communicate remarkably well, they don't have the full linguistic resources that children natively acquiring a natural language have for communicating freely with those around them.

Goldin-Meadow, The resilience of language:
(O1) ch. 6
background on children and project
(O2) ch. 7
methods; strongly recommended
(O3) ch. 14, by request
(R1) either ch. 8 or ch. 9
(R2) ch. 12
Come prepared to describe what you learned in ch. 8 or 9
W 5/18 Sign language emergence

Sign languages are often one of the best resources for understanding how people create languages from nothing, as this occurs fairly regularly in villages with unusually large deaf populations (though typically still 3.5% or less) or in other, often school-based, majority-deaf communities.

Meir et al. suggest that there are different patterns of emergence in village-based sign languages (VSLs) and Deaf community-based sign languages (DCSLs); more generally, patterns of emergence certainly depend on various factors, including the number of signers, their level of contact with each other, and the rate at which new generations of signers are introduced. The authors suggest that DCSLs change and regularize more quickly than VSLs, and are much more likely to develop features typical of highly grammatically structured sign languages, like spatial agreement morphology.

ASL poetry is lovely, and has a great deal of internal poetic structure.

Presentation: Josh
(O1) Meir et al., Emerging sign languages
(O2) from Josh: Klima & Bellugi, Poetry and song in a language without sound
Short paper 4 (aka final paper proposal)
M 5/23 Morphology and language age

Aronoff et al. examine both universal and language-specific aspects of sign language morphology, ultimately suggesting that many features often assumed to be in opposition can and do coexist in (both spoken and signed) languages: iconicity and arbitrariness are often both present, though more iconicity is arguably incorporated in a manual/visual language. All languages can show effects of the same set of underlying principles, but modality can strongly affect how these are realized.

They spend quite a bit of time on spatial agreement morphology and how it emerges relatively quickly and universally in young signed languages but not in young spoken creoles. By comparing different agreement systems, including apparent alliterative agreement (as in Swahili) and literal alliterative agreement (as in Bainouk), they argue that signed agreement with referential indices is simply a step farther along the continuum of agreement as abstract vs. literal copying, rather than fundamentally different from all spoken agreement patterns.

Finally, sign languages are typically young, due to their "fragile socio-genetic ecological niche" and perpetual re-creolization. In older sign languages (e.g. ASL and ISL), we do see examples of arguably sequential morphology; this takes time to emerge, tends to be much less iconic, and looks much more like what develops in creoles. This leads the authors to suggest that "the arbitrariness of grammatical systems is a property of old languages, not of human language."

(R) Aronoff et al., The paradox of sign language morphology
See reading notes on the first page
Come prepared to explain your chosen point of interest
W 5/25 Historical change in ASL

Some historical changes in sign languages seem to be part of new languages maturing and becoming easier to produce, easier to perceive, and/or more systematic: signs on the face become more peripheral and one-handed; signs elsewhere become more central and two-handed (often symmetrical). Newly lexicalized words (from classifiers, compounds, or other sources) become monosyllabic. Some of these changes make signs more linguistically systematic at the cost of making them less iconic, whether by adding restrictions on possible handshapes/locations/etc., or by shifting from a series of points to an arc movement for WE.

(R) LASL: Variation and historical change (161-167)

(O) Frishberg, Arbitrariness and iconicity: Historical change in ASL
Reading reaction
M 5/30 No class: Memorial Day
W 6/1 What can we learn from sign languages?
F 6/3 Final paper due, 11:30 am [rubric]