Question answering is an important problem in natural language processing, for which many deep learning models have been designed. Here we implement R-Net and evaluate its performance on SQuAD 2.0. Although R-Net underperforms BiDAF overall, its attention mechanism shows clear strengths relative to BiDAF's, as illustrated in the figure. We also experimented with an ensemble of BiDAF and R-Net that outperformed the baseline BiDAF. Our study suggests that combining BiDAF and R-Net is a promising direction for building better models.
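One common way to ensemble two extractive QA models such as BiDAF and R-Net is to average their per-token start/end probabilities and then select the highest-scoring valid span. The sketch below illustrates this idea; the averaging scheme and the toy probabilities are illustrative assumptions, not the specific ensembling method used in this study.

```python
import numpy as np

def ensemble_span_probs(start_probs_list, end_probs_list):
    """Average per-token start/end probabilities from several QA models
    and return the best valid answer span (start <= end) with its score."""
    start = np.mean(start_probs_list, axis=0)
    end = np.mean(end_probs_list, axis=0)
    best_span, best_score = (0, 0), -1.0
    for i in range(len(start)):
        for j in range(i, len(end)):
            score = start[i] * end[j]  # joint probability of span (i, j)
            if score > best_score:
                best_span, best_score = (i, j), score
    return best_span, best_score

# Hypothetical probabilities from two models over a 4-token context.
bidaf_start, bidaf_end = [0.1, 0.7, 0.1, 0.1], [0.1, 0.1, 0.7, 0.1]
rnet_start, rnet_end = [0.2, 0.6, 0.1, 0.1], [0.1, 0.2, 0.6, 0.1]
span, score = ensemble_span_probs([bidaf_start, rnet_start],
                                  [bidaf_end, rnet_end])
# Both models favor a span starting at token 1 and ending at token 2.
```

Averaging probabilities lets the ensemble recover when one model is confidently wrong, which is one plausible reason the combined model beats the BiDAF baseline.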