Machine comprehension and question answering are central tasks in natural
language processing, as they require modeling interactions between a passage
and a question. In this paper, we build on BiDAF, the multi-stage hierarchical
model introduced in Seo et al. (2017), Bi-Directional Attention Flow for Machine Comprehension. We also incorporate components of the R-Net model described in R-Net:
Machine Reading Comprehension with Self-Matching Networks, testing different
combinations of model components. We experiment with different types of encoding, such as a Gated Recurrent Unit (GRU) or a Convolutional Neural
Network (CNN), and with different attention mechanisms, comparing context-query
attention layers and evaluating the use of gating. We ultimately introduce a
modified form of BiDAF that uses both an LSTM and a CNN in its encoding
layer, as well as BiDAF's context-query attention layer followed by R-Net's self-attention layer. We conduct experiments on the SQuAD dataset, yielding
competitive results on the CS224N SQuAD Leaderboard.
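The hybrid encoding layer described above can be illustrated with a toy NumPy sketch: each token embedding is passed through both an LSTM (capturing sequential context) and a width-3 1-D convolution (capturing local n-gram features), and the two outputs are concatenated per token. This is a minimal, untrained sketch under assumed toy dimensions, not the paper's actual implementation; all weights are random and the helper names (`lstm_encode`, `cnn_encode`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_emb, d_hid = 6, 8, 4  # toy sequence length, embedding dim, hidden dim
x = rng.standard_normal((T, d_emb))  # token embeddings for one passage

def lstm_encode(x, d_hid, rng):
    """Single-layer LSTM over the sequence; returns hidden states (T, d_hid)."""
    d = x.shape[1]
    W = rng.standard_normal((4 * d_hid, d + d_hid)) * 0.1  # stacked i, f, o, g weights
    b = np.zeros(4 * d_hid)
    h, c = np.zeros(d_hid), np.zeros(d_hid)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    hs = []
    for t in range(x.shape[0]):
        z = W @ np.concatenate([x[t], h]) + b
        i, f, o, g = np.split(z, 4)
        c = sigm(f) * c + sigm(i) * np.tanh(g)  # cell state update
        h = sigm(o) * np.tanh(c)                # hidden state
        hs.append(h)
    return np.stack(hs)

def cnn_encode(x, d_hid, rng, k=3):
    """Width-k 1-D convolution over the sequence (same padding) with ReLU."""
    d = x.shape[1]
    Wc = rng.standard_normal((d_hid, k * d)) * 0.1
    pad = np.pad(x, ((k // 2, k // 2), (0, 0)))
    # Each row is one flattened window of k consecutive token embeddings.
    windows = np.stack([pad[t:t + k].ravel() for t in range(x.shape[0])])
    return np.maximum(windows @ Wc.T, 0.0)

h_lstm = lstm_encode(x, d_hid, rng)  # (T, d_hid)
h_cnn = cnn_encode(x, d_hid, rng)    # (T, d_hid)
# Concatenate the two views of each token, as in the hybrid encoding layer.
encoded = np.concatenate([h_lstm, h_cnn], axis=-1)  # (T, 2 * d_hid)
print(encoded.shape)
```

In the full model, this concatenated representation would then feed the context-query attention layer; here it simply demonstrates the shape bookkeeping of combining the two encoders.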