The primary goal of this work is to build a QA system that improves upon the performance of a baseline modified BiDAF model on the SQuAD 2.0 dataset. Two approaches are explored to achieve this improvement. In the first, the modified BiDAF model's embedding layer is extended with character-level embeddings. In the second, a self-attention layer is added on top of the existing BiDAF attention layer. The two approaches are evaluated both separately and combined into a single model. The model with character embeddings yielded the best performance on the test set, achieving an EM score of 56.872 and an F1 score of 60.652. The self-attention model performed below expectations overall, though it was the strongest model on unanswerable questions.
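The self-attention layer in the second approach can be illustrated with a minimal scaled dot-product sketch over the BiDAF attention outputs. The shapes, the dot-product scoring form, and the function names below are illustrative assumptions, not the exact configuration used in the model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    """Scaled dot-product self-attention (hypothetical sketch).

    H: (seq_len, d) matrix, e.g. per-token outputs of the BiDAF
    attention layer. Each position attends to every other position,
    letting distant parts of the context interact directly.
    """
    d = H.shape[-1]
    scores = (H @ H.T) / np.sqrt(d)      # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)   # rows sum to 1
    return weights @ H                   # re-weighted representations

# Toy example: 5 context positions with 8-dimensional representations.
H = np.random.default_rng(0).standard_normal((5, 8))
out = self_attention(H)
print(out.shape)  # (5, 8): same shape as the input sequence
```

In practice such a layer is typically followed by a gate or residual connection before the output layer; this sketch only shows the attention computation itself.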