I am a Ph.D. candidate in linguistics at Stanford University and a member of the Language and Cognition Lab and the ALPS Lab, working with Mike C. Frank and Judith Degen. My research lies at the intersection of computational modeling, theoretical syntax, and language acquisition.
Objective functions for language learning - Most language models currently learn by maximizing the log-likelihood of the next token. Intuitively, this does not seem to correspond to what children do when learning a language. How can we design models that learn more like humans? In this project, we compare how different learning objectives affect a model's performance on a set of cognitively informed tasks.
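To make the standard objective concrete, here is a minimal sketch of next-token log-likelihood under a toy bigram language model. The corpus and all names are illustrative only, not drawn from the project:

```python
import math
from collections import Counter

# Toy corpus; any tokenized text would do.
corpus = ["the", "dog", "runs", "the", "dog", "sleeps"]

# Maximum-likelihood bigram estimates from counts.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def next_token_log_likelihood(tokens):
    """Sum of log P(token_t | token_{t-1}) under the bigram MLE."""
    total = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = bigrams[(prev, cur)] / contexts[prev]
        total += math.log(p)
    return total

# "Learning" here is just choosing parameters that maximize this
# quantity; neural language models do the same with gradient descent.
print(next_token_log_likelihood(corpus))
```

The project's question is what happens when this quantity is swapped for objectives that better reflect children's learning pressures.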
Comparing memory-based and neural network models of early syntactic development - Using child-directed and child-produced speech from the CHILDES database, we compare the Chunk-Based Learner model (McCauley & Christiansen 2019) and an LSTM language model on their ability to mirror child production behavior.
Neural grammar induction for mildly context-sensitive languages - Following the Compound Probabilistic Context-Free Grammar of Kim et al. 2019, this project proposes to generalize neural grammar induction to Abstract Grammars in order to learn more expressive grammars, such as Minimalist Grammars.
Presentations and Posters
Portelance, E., G. Kachergis, M.C. Frank. (2019). Comparing memory-based and neural network models of early syntactic development. Poster presentation at the Boston University Conference on Language Development, Boston, MA.
Portelance, E. (2019). Verb stranding ellipsis in Lithuanian: verbal identity and head movement. Presentation at the Syntax & Semantics Circle, UC Berkeley.
Portelance, E., A. Bruno, D. Harasim, L. Bergen, T. J. O’Donnell. (2018). A Framework for Lexicalized Grammar Induction Using Variational Bayesian Inference. Poster presentation at the Learning Language in Humans and Machines conference, Paris, France.
Portelance, E. (2018). On the move: Free word order in Lithuanian. Presentation at the Association for the Advancement of Baltic Studies Conference, Stanford, CA.
Portelance, E., A. Bruno, and T. J. O’Donnell. (2017). Unsupervised induction of natural language dependency structures. Poster presentation at the Montreal AI Symposium, Montreal, Canada.
Department of Linguistics, Stanford University
Margaret Jacks Hall, Building 460 Stanford, CA 94305-2150
Contact me via email at portelan[at]stanford.edu.
© 2019 - Eva Portelance - based on template by Rick Waalders