Research

Advancing the science of safe and trustworthy AI systems.

FLAGSHIP PROJECTS

Responsible AI Assessment

We are developing a Responsible AI (RAI) assessment framework that pairs evaluation metrics with mitigation strategies. The research combines industry and academic perspectives into a practical guide for building responsible AI systems, supporting regulatory compliance, risk mitigation, and continuous improvement in AI governance.
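As one illustrative example of the kind of evaluation metric such a framework might include, the sketch below computes a demographic parity gap over model predictions. The metric choice, function name, and data here are assumptions for illustration, not the framework's actual contents.

```python
def demographic_parity_gap(preds, groups):
    """Absolute gap between the highest and lowest positive-prediction
    rates across groups; 0.0 means all groups are treated identically."""
    rates = {}
    for pred, group in zip(preds, groups):
        rates.setdefault(group, []).append(pred)
    rate_values = [sum(v) / len(v) for v in rates.values()]
    return max(rate_values) - min(rate_values)

# Toy data: group "a" gets positives at rate 0.75, group "b" at 0.25.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # -> 0.5
```

In a full assessment, a metric like this would be tracked alongside a mitigation strategy and a threshold chosen for the deployment context.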

Error Pocket Retrieval in Computer Vision

We are developing tools that leverage model disagreement signals to identify "Error Pockets": thematic groups of images on which models disagree. This approach enables audits that go beyond aggregate numerical metrics and yields prescriptive, human-understandable guidance for model improvement.
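A minimal sketch of the disagreement-retrieval idea, under assumed inputs: two models' per-image predictions plus a human-readable tag per image. Images where the models disagree are grouped by tag, and tags with enough disagreements form candidate pockets. The function name and tagging scheme are hypothetical, not the project's actual tooling.

```python
from collections import Counter

def error_pockets(preds_a, preds_b, tags, min_size=2):
    """Group images where two models disagree by a human-readable tag;
    return tags whose disagreement count reaches min_size."""
    disagreeing = [tag for pa, pb, tag in zip(preds_a, preds_b, tags)
                   if pa != pb]
    counts = Counter(disagreeing)
    return {tag: n for tag, n in counts.items() if n >= min_size}

# Toy example: models disagree on images 1, 2 (tag "night") and 4 ("fog").
preds_a = ["cat", "dog", "cat", "dog", "cat", "dog"]
preds_b = ["cat", "cat", "dog", "dog", "dog", "dog"]
tags    = ["night", "night", "night", "day", "fog", "day"]
pockets = error_pockets(preds_a, preds_b, tags)  # -> {"night": 2}
```

In practice the thematic grouping would come from learned image features or metadata rather than hand-assigned tags, but the retrieval logic is the same: rank groups by concentrated disagreement.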

Neural Lyapunov Barrier Certificates

We explore jointly training neural controllers and Neural Lyapunov Barrier (NLB) certificates to provide strong guarantees on agent behavior. Using formal verification tools such as Marabou, we generate counterexample-guided feedback for training safe autonomous vehicle controllers.
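The counterexample-guided loop can be sketched on a linear toy system: given a candidate certificate V(x) = xᵀPx and dynamics x' = Ax, states where the Lyapunov decrease condition V(x') < V(x) fails are collected as counterexamples to feed back into training. This sketch checks sampled states only; in the actual work a verifier such as Marabou would prove the condition (or find violations) formally over the whole input region. All names and the linear dynamics here are illustrative assumptions.

```python
import numpy as np

def lyapunov_counterexamples(A, P, samples):
    """Return sampled states x where the decrease condition
    V(Ax) - V(x) < 0 fails, with V(x) = x^T P x."""
    bad = []
    for x in samples:
        v_now = x @ P @ x
        v_next = (A @ x) @ P @ (A @ x)
        if v_next - v_now >= 0.0:  # decrease condition violated
            bad.append(x)
    return bad

A = np.array([[0.5, 0.0], [0.0, 0.5]])  # stable: contraction by 0.5
P = np.eye(2)                            # candidate certificate ||x||^2
samples = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
cex = lyapunov_counterexamples(A, P, samples)  # -> [] (no violations)
```

When `cex` is nonempty, those states would be added to the training set and the controller/certificate pair retrained, closing the counterexample-guided loop.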

Safe Navigation with Neural Maps

We leverage neural scene reconstruction to build high-quality neural maps of environments ranging from dense cities to extraterrestrial terrain. Our work develops navigation methods that remain safe despite map errors, with uncertainty quantification and safety guarantees for trajectory planning.
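One simple way robustness to map errors can enter a planner is by inflating obstacle clearances with the map's estimated uncertainty before checking a trajectory. The sketch below does this for circular obstacles, padding each radius by k standard deviations of assumed map error; the function, parameters, and k-sigma rule are illustrative assumptions, not the project's actual method.

```python
def trajectory_is_safe(waypoints, obstacles, robot_radius, map_sigma, k=2.0):
    """Conservative collision check: each obstacle radius is inflated by
    k standard deviations of map position error before testing clearance."""
    for wx, wy in waypoints:
        for ox, oy, orad in obstacles:
            inflated = orad + robot_radius + k * map_sigma
            if (wx - ox) ** 2 + (wy - oy) ** 2 < inflated ** 2:
                return False
    return True

waypoints = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
obstacles = [(1.0, 1.0, 0.2)]  # (x, y, radius)
safe = trajectory_is_safe(waypoints, obstacles,
                          robot_radius=0.3, map_sigma=0.1)  # -> True
```

A larger `map_sigma` (a less trusted map) shrinks the set of trajectories accepted as safe, which is the intended conservative behavior.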

Explore Our Publications