I build models of trust in socio-technical systems and apply them to AI and digital platforms.
I am a researcher studying how trust forms, stabilizes, and breaks in AI-mediated interaction. My work combines field experiments, behavioral measurement, and theory to inform how intelligent systems should be evaluated and governed. I hold positions at Stanford as a Fellow at IRiSS and at Meta as a Senior Researcher.
Current focus
AI evaluation & human judgment
Designing frameworks for determining when human judgments are reliable enough to shape AI behavior, and when they should remain advisory.
Trust, reputation, and reinforcement
Empirical research on how reputation signals and social similarity shape trust decisions on online platforms.
A dynamic model of trust in AI systems
Developing a model of how trust thresholds adjust in response to experience, control, and safety infrastructure in AI-mediated environments.