I am a Ph.D. candidate in linguistics at Stanford University and a member of the Language and Cognition Lab and the ALPS Lab, where I work with Mike C. Frank and Judith Degen. My research lies at the intersection of computational modeling, theoretical syntax, and language acquisition.

Current Projects


Objective functions for language learning - Most language models currently learn by maximizing the log-likelihood of the next token. Intuitively, this does not seem to correspond to what children do when learning a language. How can we design models that learn more like humans? In this project, we compare how different objective functions during learning affect a model's performance on a set of cognitively informed tasks.
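
For concreteness, here is a minimal sketch of that standard next-token objective in PyTorch. The tiny LSTM, dimensions, and random batch are illustrative placeholders of my own, not the models or data used in the project.

# A minimal sketch of the standard next-token objective: the mean negative
# log-likelihood of each token given its left context. All names and sizes
# here are toy placeholders, not the project's actual setup.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 32, 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.out(hidden)  # logits over the next token at each position

model = TinyLM()
tokens = torch.randint(0, vocab_size, (8, 20))  # a random toy batch
logits = model(tokens[:, :-1])                  # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(             # = mean negative log-likelihood
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()  # gradients for maximum-likelihood training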

Comparing memory-based and neural network models of early syntactic development - Using child-directed and child-produced speech from the CHILDES database, we compare the Chunk-Based Learner model (McCauley & Christiansen, 2019) and an LSTM language model on their ability to mirror children's production behavior.
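
As an illustration of the general evaluation framing (not the project's actual method), the sketch below scores held-out child utterances by mean per-word surprisal. A smoothed bigram model stands in for CBL or an LSTM, which would not fit in a few lines; the corpora are invented toy examples.

# A hedged sketch of one way such a comparison can be framed: score child
# utterances under a model trained on child-directed speech and compare mean
# per-word surprisal across candidate models. The bigram model is a stand-in.
import math
from collections import Counter

def train_bigram(corpus):
    """Placeholder model: add-one-smoothed bigram probabilities."""
    unigrams, bigrams = Counter(), Counter()
    for utt in corpus:
        words = ["<s>"] + utt.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    vocab = len(unigrams)
    def logprob(prev, word):
        return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))
    return logprob

def mean_surprisal(logprob, utterances):
    """Average negative log-probability per word, in bits."""
    total, count = 0.0, 0
    for utt in utterances:
        words = ["<s>"] + utt.split()
        for prev, word in zip(words, words[1:]):
            total -= logprob(prev, word) / math.log(2)
            count += 1
    return total / count

child_directed = ["you want the ball", "look at the ball", "want the cup"]
child_produced = ["want ball", "the ball"]

model = train_bigram(child_directed)
print(f"mean surprisal: {mean_surprisal(model, child_produced):.2f} bits/word")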

Neural grammar induction for mildly context-sensitive languages - Following the Compound Probabilistic Context-Free Grammar of Kim et al. (2019), this project proposes to generalize neural grammar induction to Abstract Grammars in order to learn more expressive grammars, such as Minimalist Grammars.
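
To give a sense of the core quantity grammar induction models of this kind optimize, here is a minimal, self-contained sketch of the inside algorithm computing a sentence's likelihood under a toy PCFG in Chomsky normal form. The grammar and probabilities are invented for illustration, and Minimalist Grammars would require a more expressive formalism than this.

# A minimal sketch of the inside algorithm for a toy PCFG in Chomsky normal
# form. Approaches such as the compound PCFG learn rule probabilities that
# maximize this sentence likelihood. The grammar below is invented.
from collections import defaultdict

# Binary rules A -> B C and lexical rules A -> word, with probabilities.
binary = {("S", ("NP", "VP")): 1.0,
          ("NP", ("Det", "N")): 1.0,
          ("VP", ("V", "NP")): 1.0}
lexical = {("Det", "the"): 1.0, ("N", "dog"): 0.5, ("N", "cat"): 0.5,
           ("V", "sees"): 1.0}

def inside(words):
    """chart[(i, j, A)] = probability that A derives words[i:j]."""
    n = len(words)
    chart = defaultdict(float)
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                chart[(i, i + 1, A)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, (B, C)), p in binary.items():
                    chart[(i, j, A)] += p * chart[(i, k, B)] * chart[(k, j, C)]
    return chart[(0, n, "S")]

print(inside("the dog sees the cat".split()))  # P(sentence | grammar)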

Contact

Department of Linguistics, Stanford University
Margaret Jacks Hall, Building 460, Stanford, CA 94305-2150

Contact me via email at portelan[at]stanford.edu.

Other Information

View my Curriculum Vitae.
Access my GitHub.

© 2019 - Eva Portelance - based on template by Rick Waalders