I build models of trust in socio-technical systems — and apply them to AI and digital platforms.

I am a researcher studying how trust forms, stabilizes, and breaks in AI-mediated interaction. My work combines field experiments, behavioral measurement, and theory to inform how intelligent systems should be evaluated and governed. I am a Fellow at IRiSS (Stanford) and a Senior Researcher at Meta.

Current focus

AI evaluation & human judgment

Designing frameworks for when human judgments are reliable enough to shape AI behavior — and when they should remain advisory.

Trust, reputation, and reinforcement

Empirical research on how reputation signals and social similarity shape trust decisions in online platforms.

A dynamic model of trust in AI systems

Developing a model of how trust thresholds adjust in response to experience, control, and safety infrastructure in AI-mediated environments.

Selected highlights

Reputation and trust on Airbnb (PNAS, 2017)

Field evidence showing how reputation signals interact with social bias in real economic decisions.

A model for trust in the context of AI

Ongoing work developing a dynamic model of trust, control, and safety infrastructure in AI-mediated interaction.

Trust as practice (University of Turin, Italy)

Invited lecture on trust dynamics in socio-technical systems.

AI evaluation dimensions & human judgment

Research on quality dimensions and the stability of user evaluations of large language models.

Find me

Contact · Google Scholar · LinkedIn