Title: Reasoning with Language Models and Knowledge Graphs for Question Answering

Speaker: Michihiro Yasunaga

Abstract

Question answering systems need to access relevant knowledge and reason over it effectively. In this talk, we consider answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs). This problem presents two major challenges: given a QA context (question and answer choices), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. We present a new model, QA-GNN, which addresses the above challenges through two innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph neural networks. We evaluate QA-GNN on commonsense and science question answering tasks and show that it improves over existing LM-based and KG-based models.

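To make the two ideas concrete, here is a minimal PyTorch sketch of the relevance-scoring and joint-reasoning steps, written under stated assumptions rather than as the authors' implementation: the actual QA-GNN scores node relevance with a pre-trained LM (e.g., RoBERTa) over the concatenated QA-context and node-label text and uses an attention-based GNN, whereas this sketch substitutes an MLP over fixed embeddings and mean aggregation. The names RelevanceScorer and JointGraphLayer are illustrative, not from the paper's code.

```python
import torch
import torch.nn as nn

class RelevanceScorer(nn.Module):
    """Estimates each KG node's importance relative to the QA context.
    Stand-in for QA-GNN's LM-based scoring of (QA context, node) pairs."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, 1))

    def forward(self, qa_emb, node_embs):
        # qa_emb: (dim,), node_embs: (num_nodes, dim) -> scores in (0, 1)
        q = qa_emb.expand(node_embs.size(0), -1)
        return torch.sigmoid(self.mlp(torch.cat([q, node_embs], -1))).squeeze(-1)

class JointGraphLayer(nn.Module):
    """One round of message passing over the joint graph: the QA context is
    an extra node wired to the KG, so both sides update each other.
    Mean aggregation here replaces QA-GNN's graph attention."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, x, edge_index, relevance):
        # x: (num_nodes, dim); edge_index: (2, num_edges), rows = (src, dst)
        src, dst = edge_index
        msgs = self.msg(x[src]) * relevance[src].unsqueeze(-1)  # downweight irrelevant nodes
        agg = torch.zeros_like(x).index_add_(0, dst, msgs)
        deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(dst.size(0))).clamp(min=1)
        return self.update(agg / deg.unsqueeze(-1), x)

# Toy joint graph: node 0 is the QA context; nodes 1..5 are retrieved KG
# nodes (a real graph would also keep the KG's own edges).
dim, n_kg = 64, 5
qa_emb, kg_embs = torch.randn(dim), torch.randn(n_kg, dim)
rel = RelevanceScorer(dim)(qa_emb, kg_embs)
kg = torch.arange(1, n_kg + 1)
edge_index = torch.stack([torch.cat([kg, torch.zeros_like(kg)]),
                          torch.cat([torch.zeros_like(kg), kg])])
x = torch.cat([qa_emb.unsqueeze(0), kg_embs], 0)
x = JointGraphLayer(dim)(x, edge_index, torch.cat([torch.ones(1), rel]))
```

In the full model, several such layers are stacked so that information flows back and forth between the QA context and the KG, and the final representations are used to score each answer choice.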

Bio

Michihiro Yasunaga is a second-year PhD student in Computer Science at Stanford University, advised by Percy Liang and Jure Leskovec. His research interests lie in natural language processing and machine learning, in particular unified representation learning for various modalities of data, such as text, programs, and graphs. His recent work has developed question answering systems over textual and structured knowledge bases, as well as automatic source code repair and completion systems.