I study the interaction of language and cognition with an emphasis on multimodality. My research focuses on how our interaction with and perception of space influences our language, and in turn how our language use reflects and construes how we think about spatial relationships and events. I work on both co-speech gesture and signed language. Some of the big questions I think about are things like: What can gestures tell us about metaphor and cognition? How do people use metaphoric language differently from literal language? How does the spatial modality interact with argument structure in signed languages?
My dissertation research was on the use of manner and path metaphoric motion verbs in co-speech gesture. In particular, I made use of both gesture data collected from naturalistic sources (online video archives) and lab-elicited gesture production to understand how this family of metaphors interacts with grammatical structures and modalities, and how it compares with our use of the same words in literal contexts. This research incorporates formalisms from Embodied Construction Grammar, along with the metaphor analysis formalisms we have developed at MetaNet, both to analyze data and as a tool for collecting metaphoric language in spoken and written English.
MetaNet is a large-scale project with two main goals: (a) develop a system to automatically identify and extract metaphoric language from texts for NLP and translation purposes; (b) develop a rigorously analyzed database of conceptual metaphors and schemas to serve as a research and teaching tool for cognitive linguistics. I am primarily involved in the second goal and work on the linguistic analysis needed for the database. I also work on integrating the MetaNet and Embodied Construction Grammar formalisms. As a result of this project, we have developed a robust theory of metaphor compositionality that moves beyond the typical "lists of metaphors" approach. As the project continues and the database grows, we will be able to make falsifiable predictions regarding the internal structure of conceptual metaphors. This approach uses corpus linguistics techniques to analyze usage data from large written and spoken texts.
Gesture, Metaphor, and Modality
This ongoing joint project with Tasha N. Lewis is an investigation of the interaction between metaphoric gestures and cognition. While we know that gestures can be metaphoric, and that they often convey information complementary to their coordinated spoken language, we don't know as much about how the perception of metaphoric gesture influences cognition, or how communication modality influences our perception of gesture. Our work shows that metaphoric gestures, even when not overtly iconic, are not only communicative but prime metaphoric reasoning in the addressee.
Gesture Production and Mental Imagery
I am currently developing a project with Tasha N. Lewis and Matthew W. Kirkhart on the relationship between variation in gesture production and mental imagery skills. It has been previously shown that high spatial reasoning skills are correlated with increased iconic gesture production rates; we will investigate if mental imagery skills are also correlated with gesture production, and if there is an interaction between spatial reasoning, mental imagery, and gesture viewpoint.
Color Naming and Perception
My secondary semantic domain of interest (besides space) is color. Cross-linguistically, languages divide up the visible color spectrum in different ways, and a major line of debate is the degree to which color perception is universal, or whether these linguistic variations in color categorization result in cognitive variations in color perception. My main project uses the World Color Survey data to consider whether variation in color naming can be explained in part by non-linguistic influences, particularly differences in color diet (i.e., the natural environment in which the language's speakers are situated). A second study pairs semantically meaningful content with ambiguous colors (e.g., brown-red: is it red or is it brown?) to investigate the influence of semantic information on perception.
Signed Language Bilingualism and Spatial Cognition
This project considers the impact of learning a signed language on the perception of motion events. The well-known typological dichotomy between satellite-framed languages such as English (wherein the manner component of a motion event is encoded in the verb and the path component in a satellite) and verb-framed languages such as Hebrew (wherein the path is encoded in the verb) has been found to influence how people perceive motion events; for example, speakers of satellite-framed languages are more likely to attend to and remember the manner component of events. However, less is known about the influence of signed languages on motion event perception, in part because signed languages are difficult to categorize typologically. This project looks at hearing ASL-English bilinguals to see whether their acquisition of a signed language influences their event perception, which has implications both for how ASL in particular influences cognition and, more generally, for how learning a spatially-attuned language can influence spatial cognition.
Conceptual Metaphors in ASL Modals
Building on Eve Sweetser's mental spaces analysis of the force dynamics of epistemic modality, this work considers the systematic prosodic variation in 'families' of ASL modals (e.g., OUGHT-TO, SHOULD, MUST). While signers implicitly understand the relative "strength" of epistemic modals, the fact that the speed and force of the movement parameter in these signs varies in systematic correlation with the perceived strength of the modal provides further evidence for the force-dynamics analysis.
ASL Information Structure
This work considers two basic types of constructions in American Sign Language -- wh-questions and doubles. My previous work from a formal syntactic perspective supports the possibility of rightward wh-movement and argues that non-manual markers (such as eyebrow movement) can be incorporated into a formal grammar. Additional work on rhetorical wh-questions (pseudoclefts) in comparison to doubled constructions has shown that the pseudoclefts constitute identificational focus whereas the doubles constitute information focus.
In a previous academic life I worked as a roboticist in the Fish Fellows lab of John H. Long, Jr. in the Vassar Biology department and Cognitive Science program. I helped design, build, and evolve small robotic fish in order to model the evolution of early vertebrates. Initially I served primarily as a programmer and wrote programs for data analysis and a genetic algorithm, which calculates the traits of the offspring generation of robots based on the relative fitness of the parental generation. As our research progressed I did some work in biomechanics, analyzing the mechanical properties of the robots' artificial vertebral columns.
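The core of that kind of genetic algorithm can be illustrated with a short sketch. This is not the lab's actual code; it is a minimal, hypothetical Python illustration assuming robot traits are stored as numeric vectors, with fitness-proportional (roulette-wheel) parent selection, uniform crossover, and Gaussian mutation:

```python
import random

def next_generation(parents, fitnesses, mutation_sd=0.1, rng=None):
    """Produce an offspring generation the same size as `parents`.

    parents   -- list of trait vectors (lists of floats)
    fitnesses -- one non-negative fitness score per parent
    """
    rng = rng or random.Random()
    total = sum(fitnesses)
    # Selection weight = relative fitness of each parent.
    weights = [f / total for f in fitnesses]
    offspring = []
    for _ in range(len(parents)):
        # Pick two parents, fitness-proportionally (with replacement).
        mom, dad = rng.choices(parents, weights=weights, k=2)
        # Uniform crossover: each trait comes from one parent at random.
        child = [rng.choice(pair) for pair in zip(mom, dad)]
        # Gaussian mutation: small random perturbation of each trait.
        child = [trait + rng.gauss(0, mutation_sd) for trait in child]
        offspring.append(child)
    return offspring

# Example: three robots, each with two traits (e.g., tail stiffness, span).
parents = [[0.5, 1.0], [0.8, 1.2], [0.6, 0.9]]
kids = next_generation(parents, fitnesses=[2.0, 5.0, 1.0],
                       rng=random.Random(42))
```

Fitter parents contribute traits to more offspring on average, so over generations the population drifts toward higher-fitness trait combinations.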
More on this work can be found in Prof. Long's book, Darwin's Devices.