2020 ARTICLES
Dan Iter, Kelvin Guu, Larry Lansing, Dan Jurafsky. 2020. Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models. ACL 2020.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social Bias Frames: Reasoning about Social and Power Implications of Language. ACL 2020.
Adam S. Miner, Albert Haque, Jason A. Fries, Scott L. Fleming, Denise E. Wilfley, G. Terence Wilson, Arnold Milstein, Dan Jurafsky, Bruce A. Arnow, W. Stewart Agras, Li Fei-Fei, and Nigam H. Shah. 2020. Assessing the accuracy of automatic speech recognition for psychotherapy. npj Digital Medicine 3, 82 (2020).
Bas Hofstra, Vivek V. Kulkarni, Sebastian Munoz-Najar Galvez, Bryan He, Dan Jurafsky, and Daniel A. McFarland. 2020. The Diversity-Innovation Paradox in Science. Proceedings of the National Academy of Sciences. [pdf]
Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John Rickford, Dan Jurafsky, and Sharad Goel. 2020. Racial Disparities in Automated Speech Recognition. Proceedings of the National Academy of Sciences 117 (14) 7684-7689. [pdf] [Press: NY Times]
Michael Hahn, Dan Jurafsky, and Richard Futrell. 2020. Universals of word order reflect optimization of grammars for efficient communication. Proceedings of the National Academy of Sciences 117 (5) 2347-2353. [pdf] [bib] [code]
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. International Conference on Learning Representations (ICLR) 2020. [pdf] [code]
Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, and Diyi Yang. 2020. Automatically Neutralizing Subjective Bias in Text. AAAI 2020. [pdf] [code]