Hilal Asi
I am a researcher in the Machine Learning Research (MLR) team at Apple where I work primarily on privacy-preserving machine learning. I obtained my PhD from Stanford University, where I was advised by John Duchi.
Previously, I obtained a B.Sc. and M.Sc. from the department of computer science at the Technion,
where I was advised by Eitan Yaakobi.
During my PhD, I also spent some time at Apple where I worked with Vitaly Feldman
and Kunal Talwar.
My current research interests include privacy-preserving machine learning and its intersection with other fields such as optimization and robustness.
Email: (first name).(last name) 94 @ gmail.com
Preprints
-
Private Online Learning via Lazy Algorithms
with Tomer Koren, Daogao Liu, Kunal Talwar.
[pdf]
-
DP-Dueling: Learning from Preference Feedback without Compromising User Privacy
with Aadirupa Saha.
[pdf]
-
Near Instance-Optimality in Differential Privacy
with John Duchi.
[pdf]
Conference publications
-
Universally Instance-Optimal Mechanisms for Private Statistical Estimation
with John Duchi, Saminul Haque, Zewei Li, Feng Ruan.
COLT, 2024.
[pdf]
-
Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages
with Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Samson Zhou, Kunal Talwar.
ICML, 2024.
[pdf]
-
User-Level Differentially Private Stochastic Convex Optimization: Efficient Algorithms with Optimal Rates
with Daogao Liu.
AISTATS, 2024.
[pdf]
-
Faster Optimal LDP Mean Estimation via Random Projections
with Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Kunal Talwar.
NeurIPS, 2023.
[pdf]
-
Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime
with Vitaly Feldman, Tomer Koren, Kunal Talwar.
ICML, 2023.
[pdf]
-
From Robustness to Privacy and Back
with Jonathan Ullman, Lydia Zakynthinou.
ICML, 2023.
[pdf]
-
Private Online Prediction from Experts: Separations and Faster Rates
with Vitaly Feldman, Tomer Koren, Kunal Talwar.
COLT, 2023.
[pdf]
-
Optimal Algorithms for Mean Estimation under Local Differential Privacy
with Vitaly Feldman, Kunal Talwar.
ICML, 2022 (oral presentation).
[pdf]
-
Private Optimization in the Interpolation Regime: Faster Rates and Hardness Results
with Karan Chadha, Gary Cheng, John Duchi.
ICML, 2022.
-
Element Level Differential Privacy
with John Duchi, Omid Javidbakht.
PPAI, 2022.
[pdf]
-
Stochastic Bias-Reduced Gradient Methods
with Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford.
NeurIPS, 2021.
[pdf]
-
Adapting to Function Difficulty and Growth Conditions in Private Optimization
with Daniel Levy, John Duchi.
NeurIPS, 2021.
-
Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry
with Vitaly Feldman, Tomer Koren, Kunal Talwar.
ICML, 2021 (oral presentation).
[pdf]
-
Private Adaptive Gradient Methods for Convex Optimization
with John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar.
ICML, 2021.
[pdf]
-
Instance-Optimality in Differential Privacy via Approximate Inverse Sensitivity Mechanisms
with John Duchi.
NeurIPS, 2020.
[pdf]
-
Minibatch Stochastic Approximate Proximal Point Methods
with Karan Chadha, Gary Cheng, John Duchi.
NeurIPS, 2020 (spotlight).
[pdf]
-
Modeling Simple Structures and Geometry for Better Stochastic Optimization Algorithms
with John Duchi.
AISTATS, 2019.
[pdf]
-
Nearly Optimal Constructions of PIR and Batch Codes
with Eitan Yaakobi.
ISIT, 2017.
[pdf]
Journal publications
-
The Importance of Better Models in Stochastic Optimization
with John Duchi.
Proceedings of the National Academy of Sciences, 2019.
[pdf]
[code]
-
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
with John Duchi.
SIAM Journal on Optimization, 2019.
[pdf]
-
Nearly Optimal Constructions of PIR and Batch Codes
with Eitan Yaakobi.
IEEE Transactions on Information Theory, 2019.
[pdf]
Teaching
Service
-
Reviewer: ICML, NeurIPS (Outstanding Reviewer Award, 2021), AISTATS, SIAM Journal on Optimization, JMLR, IEEE Transactions on Information Theory