I am primarily interested in providing provable guarantees for a range of Machine Learning problems and algorithms. I am currently exploring guarantees for the security of ML systems. As ML systems become more powerful and more widely deployed, it is important to have
guarantees on their performance in the presence of adversaries, particularly in high-risk applications such as self-driving cars and medicine. How can we ensure robustness to
adversarial corruptions of inputs to a deployed classifier?
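As a toy illustration of the kind of input corruption this question refers to, here is a minimal sketch of a gradient-based (FGSM-style) perturbation; the model, data, and perturbation budget below are hypothetical placeholders rather than part of any system I study.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)                       # toy classifier
x = torch.randn(1, 10, requires_grad=True)     # a single input seen at deployment
y = torch.tensor([1])                          # its true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                                # gradient of the loss w.r.t. the input

eps = 0.1                                      # adversary's L_inf budget
x_adv = x + eps * x.grad.sign()                # adversarially corrupted input
# Robustness asks that model(x_adv) still predicts y for every such corruption.
```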
On a related note, I have previously worked on extrapolating properties of the unseen parts of a distribution.
How much training data do we need to collect so that, with high probability, every element we encounter at deployment has already been seen during training?
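As a rough back-of-the-envelope illustration (under assumptions far stronger than the settings I actually study): if the distribution were supported on $k$ elements, each with probability at least $p_{\min}$, a union bound over unseen elements shows that $n \ge \frac{1}{p_{\min}} \ln \frac{k}{\delta}$ i.i.d. training samples suffice for this to hold with probability at least $1-\delta$; the interesting regimes are precisely those where no such assumptions are available.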
More broadly, I am interested in understanding other goals that ML systems should satisfy beyond prediction accuracy, such as
fairness, interpretability, and privacy.
On the theoretical side, I am interested in non-convex optimization and in understanding the conditions under which local methods can solve the non-convex objectives
arising in Machine Learning problems. I have previously worked on providing guarantees for learning mixtures of Gaussians from streaming data. I am also excited about new methods for circumventing the computational challenges of non-convexity. As an undergraduate, I worked on using the method of moments to perform computationally efficient estimation under indirect supervision.