Research Demos and Highlights


Developing this platform has been a significant part of my PhD. The video below showcases some highlights of what we've been able to accomplish using the platform.

Visuomotor Policy Learning on Real Robots

This is a closed-loop visuomotor policy running at 20 Hz, with an operational space controller as its action space. It was trained on a handful of human demonstrations collected with RoboTurk. See our work on Generalization Through Imitation for more information.


Simulation framework for robotic manipulation. Features include several robot models, controllers, and benchmark tasks; procedural task generation; multi-modal sensors; and support for human demonstrations. See the website for more information, or try the code out yourself.

Selected Talks

Large-Scale Human Supervision for Learning Robot Manipulation

  • [April 2021] Talk on RoboTurk presented at NVIDIA GTC 2021.

Algorithms for Learning Robot Manipulation through Human Imitation

  • [January 2021] Talk covering recent imitation learning algorithms for learning from human-collected robot manipulation datasets.

Media Coverage


RoboTurk: A Crowdsourcing Platform for Imitation Learning in Robotics

Tech Xplore, November 21, 2018

“In the future, RoboTurk could become a key resource in the field of robotics, aiding the development of more advanced and better performing robots.”


Robots Learn Tasks from People with Framework Developed by Stanford Researchers

Stanford News, October 26, 2018

“With a smartphone and a browser, people worldwide will be able to interact with a robot to speed the process of teaching robots how to do basic tasks.”

Peer Review

I have served as a reviewer for NeurIPS, ICML, RSS, CoRL, IROS, CVPR, and IEEE T-RO.