Aditi Raghunathan

I am a PhD student at Stanford University.

Email: aditir'at'stanford'dot'edu



Bio

I am a fifth-year PhD student in Computer Science at Stanford University, working with Percy Liang.
Previously, I obtained my B.Tech. (Hons.) in Computer Science from IIT Madras in 2016.

Publications

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming
NeurIPS 2020
Sumanth Dathathri*, Krishnamurthy Dvijotham*, Alexey Kurakin*, Aditi Raghunathan*, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli

The Pitfalls of Simplicity Bias in Neural Networks
NeurIPS 2020
Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli

DROCC: Deep Robust One-Class Classification
ICML 2020
Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, Prateek Jain

Adversarial Training Can Hurt Generalization
Identifying and Understanding Deep Learning Phenomena ICML 2019 Workshop
Aditi Raghunathan*, Sang Michael Xie*, Fanny Yang, John Duchi and Percy Liang

Probabilistic dependency networks for prediction and diagnostics
TRB Annual Meeting 2014
Narayanan U. Edakunni, Aditi Raghunathan, Abhishek Tripathi, John Handley and Fredric Roulland

Research

I am interested in developing principled methods that form the foundations of robust machine learning, where models perform well in the presence of distribution shifts. My work derives rigorous approaches that address several empirical challenges in the robust training of deep networks. Broadly, it has touched on the following aspects.

Certification of robustness: A major challenge in adversarially robust machine learning is reliably evaluating the robustness of large deep networks. Heuristic evaluation can lead to an arms race in which better attack algorithms under the same threat model break published defenses; such an arms race was prevalent in the literature on adversarial examples. To address this, I have developed a new training methodology that provides networks with certificates of robustness, and devised efficient methods to certify several empirically successful networks.

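As a minimal illustration of the certification idea, the sketch below uses interval bound propagation, a much simpler (and looser) relaxation than the SDP-based approach in the papers above: it propagates lower and upper bounds through a small fully connected ReLU network and declares an input certified over an L-infinity ball when the true logit's lower bound exceeds every other logit's upper bound.

```python
import numpy as np

def ibp_certify(weights, biases, x, eps, true_label):
    """Soundly check whether every input within an L-infinity ball of
    radius eps around x is classified as true_label, by propagating
    interval bounds through a fully connected ReLU network."""
    lo, hi = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        # Split W into positive and negative parts so that the interval
        # endpoints map to valid bounds on the pre-activations.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        # ReLU is monotone, so applying it preserves interval bounds.
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    W, b = weights[-1], biases[-1]  # final layer: bounds on the logits
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    logit_lo, logit_hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
    # Certified iff the true logit's lower bound beats every other
    # logit's upper bound over the entire ball.
    return logit_lo[true_label] > np.delete(logit_hi, true_label).max()

# Usage on a tiny random two-layer network.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
bs = [np.zeros(8), np.zeros(3)]
x = rng.normal(size=4)
pred = int(np.argmax(Ws[1] @ np.maximum(Ws[0] @ x + bs[0], 0) + bs[1]))
print(ibp_certify(Ws, bs, x, eps=0.01, true_label=pred))
```
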
Unlabeled data for robustness: Robust machine learning could benefit substantially from more data. My work shows that unlabeled data, which is cheap and easy to obtain, is especially valuable for robustness. We proposed robust self-training (RST), which leverages unlabeled data to reach state-of-the-art robustness against adversarial examples.

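At a high level, robust self-training is a three-step recipe: train a standard model on the labeled data, pseudo-label the unlabeled pool, and adversarially train on the union. The toy sketch below (logistic regression with worst-case L-infinity perturbations, not the exact pipeline from the paper) shows the shape of that recipe.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, adv_eps=0.0, lr=0.1, steps=300):
    """Logistic regression by gradient descent. When adv_eps > 0, each
    step trains on worst-case L-infinity perturbations of the inputs,
    which for a linear model are eps * sign(w), pushed against the label."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        Xa = X - adv_eps * (2.0 * y - 1.0)[:, None] * np.sign(w)[None, :]
        p = sigmoid(Xa @ w)
        w -= lr * Xa.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
# Toy data: two Gaussian blobs, with only a small labeled subset.
labels = rng.integers(0, 2, size=2000)
X = rng.normal(size=(2000, 20)) + labels[:, None]
y = labels.astype(float)
labeled, unlabeled = np.arange(100), np.arange(100, 2000)

# 1. Standard (non-robust) model on the labeled data alone.
w_std = train_logreg(X[labeled], y[labeled])
# 2. Pseudo-label the cheap unlabeled pool with that model.
pseudo = (sigmoid(X[unlabeled] @ w_std) > 0.5).astype(float)
# 3. Adversarially train on labeled + pseudo-labeled data together.
X_all = np.concatenate([X[labeled], X[unlabeled]])
y_all = np.concatenate([y[labeled], pseudo])
w_rst = train_logreg(X_all, y_all, adv_eps=0.1)
```

The point of the sketch is the data flow: the final adversarial training step sees the full pseudo-labeled pool rather than only the small labeled set.
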
Rethinking ML intuition for robust training: It is commonly observed, and widely believed, that larger models perform better than smaller ones, and that augmenting a dataset with label-preserving transformations often improves accuracy. In the context of robustness, however, we find that such label-preserving data augmentation often decreases accuracy, and that smaller models surprisingly perform better than bigger models.

Robustness to understand ML: Finally, I am excited to use robustness as a tool to examine ML models outside the training distribution, and thereby highlight the role of the elusive inductive bias of current systems.

Previously, I worked on non-convex optimization and approximation algorithms.