Edited by Jean-Philippe Bernardy, Rasmus Blanck, Stergios Chatzikyriakidis, Shalom Lappin, Aleksandre Maskharashvili
In this collection of papers, leading researchers in computational and formal linguistics present new approaches to the central questions of linguistic theory, explore the role of probabilistic and machine learning methods in the development of linguistic theories, and reveal new applications of recent learning and modeling methods in AI. Through a highly interdisciplinary framework, this book tackles central questions in syntax, semantics, pragmatics, sentence processing, and dialogue, offering insights that go well beyond traditional linguistic orthodoxy. New and non-traditional approaches to longstanding issues allow the researchers to challenge and set aside entrenched theories, in favor of experimentally and computationally robust new models. This book is a must-have for researchers and graduate students in linguistics, computer science (AI and NLP), psychology, and cognitive science, and a useful reference work for industrial research labs and AI systems development.
Jean-Philippe Bernardy is a researcher at the University of Gothenburg. His main research interest is in interpretable linguistic models, in particular those built from first principles of algebra, probability, and geometry.
Rasmus Blanck is a Senior Lecturer in Logic and Theoretical Philosophy at the University of Gothenburg. His research interests lie close to the intersection of linguistics, logic, and philosophy.
Stergios Chatzikyriakidis is Professor of Computational Linguistics at the University of Crete. He has previously held posts at the University of Gothenburg, CNRS, University of London, and the Open University of Cyprus. His research interests are in computational semantics and formal syntax/semantics.
Shalom Lappin is Professor of Computational Linguistics at the University of Gothenburg, Professor of Natural Language Processing at Queen Mary University of London, and Emeritus Professor of Computational Linguistics at King's College London. His research focuses on the application of machine learning and probabilistic models to the representation and acquisition of linguistic knowledge.
Aleksandre Maskharashvili is a postdoctoral scholar at The Ohio State University. His research concerns logical, probabilistic, and machine learning approaches to natural language, with a focus on discourse, dialogue, and inference.
February 2023
- Introduction
- 1 Computational Morphology
Christo Kirov and Richard Sproat
- 2 Something Old, Something New: Grammar-based CCG Parsing with Transformer Models
Stephen Clark
- 3 Probabilistic Lexical Semantics: From Gaussian Embeddings to Bernoulli Fields
Guy Emerson
- 4 The Origins of Vagueness
Peter Sutton
- 5 Bayesian Inference Semantics for Natural Language
Jean-Philippe Bernardy, Rasmus Blanck, Stergios Chatzikyriakidis, Shalom Lappin, and Aleksandre Maskharashvili
- 6 Probabilistic Pragmatics: A Dialogical Perspective
Bill Noble, Vladislav Maraev, and Ellen Breitholtz
- 7 Neuro-computation for Language Processing
Vidya Somashekarappa
- 8 Learning Language Games Probabilistically: From Crying to Compositionality
Robin Cooper, Jonathan Ginzburg, and Staffan Larsson
- 9 Distributional Semantics for Situated Spatial Language? Functional, Geometric and Perceptual Perspectives
John D. Kelleher and Simon Dobnik
- 10 Action Coordination and Learning in Dialogue
Arash Eshghi, Christine Howes, and Eleni Gregoromichelaki
- 11 Reanalysis, Probability, and the Faculty of Language
Asad B. Sayeed
- Contributors
- Glossary