Katie Driggs-Campbell

Research Projects

My research focuses on exploring and uncovering structure in complex human-robot systems to create more intelligent, interactive autonomy. I develop rigorous human models and control frameworks that mimic the positive properties of human agents, while compensating for their shortcomings with safety guarantees.

This research agenda combines ideas from robotics, artificial intelligence, and control, applied to human-robot systems and the transportation domain. The major focuses are:

  1. Validating autonomous systems using novel tools to find likely failures, as well as through rigorous experiments in high-fidelity simulation, immersive testbeds, and fully outfitted autonomous test vehicles;

  2. Developing robust models of human-robot systems that capture the highly stochastic behaviors of humans for use in semi- and fully autonomous control;

  3. Designing interactive control policies for intelligent systems in multi-agent settings, which can be applied to shared control schemes or fully autonomous systems that interact and collaborate with humans; and

  4. Learning from human behaviors for improved intelligent systems by formalizing methods to integrate people as sensors in perception modules and learning control policies based on expert human actions.

    For up-to-date research, check out my publication list!

Robust, Informative Human-in-the-Loop Predictions via Empirical Reachable Sets

Given the current capabilities of autonomous vehicles, one can easily imagine them being released on the road in the near future. However, this transition will not be instantaneous, which suggests two key points:

(1) levels of autonomy will be introduced incrementally (e.g., the active safety systems currently being released), and
(2) autonomous vehicles will have to be capable of driving in a mixed environment, with both humans and autonomous vehicles on the road.

In both cases, the human-driven vehicle (or, more generally, the human-in-the-loop system) must be modeled in an accurate and precise manner that is easily integrated into control frameworks.

Our driver modeling framework estimates the empirical reachable set, an alternative, data-driven look at a classic control-theoretic safety metric (a minimal sketch of the estimation step follows the list below). This allows us to:

  • predict driving behavior over long time horizons with very high accuracy

  • apply intervention schemes for semi-autonomous vehicles

  • mimic nuanced interactions between humans and autonomy in cooperative maneuvers
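
As a rough illustration only (not the exact formulation from the papers), an empirical reachable set can be approximated by sampling observed driver trajectories and taking a convex hull over the states reached within a fixed horizon. The function name, coverage level, and demo data below are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def empirical_reachable_set(trajectories, horizon, coverage=0.95):
    """Approximate the set of states a driver reaches within `horizon` steps.

    trajectories: list of (T, d) arrays of observed states (e.g., position, velocity).
    Returns the vertices of a convex hull containing `coverage` of sampled endpoints.
    """
    # Collect the state reached `horizon` steps into each recorded trajectory.
    endpoints = np.array([traj[horizon] for traj in trajectories if len(traj) > horizon])

    # Trim outliers so the set covers only the requested fraction of observations.
    center = endpoints.mean(axis=0)
    dists = np.linalg.norm(endpoints - center, axis=1)
    keep = endpoints[dists <= np.quantile(dists, coverage)]

    hull = ConvexHull(keep)
    return keep[hull.vertices]

# Hypothetical usage: 2-D (lateral position, speed) trajectories sampled at 10 Hz.
rng = np.random.default_rng(0)
demo_trajs = [np.cumsum(rng.normal(size=(30, 2)) * 0.1, axis=0) for _ in range(200)]
vertices = empirical_reachable_set(demo_trajs, horizon=20)
print(vertices.shape)  # vertices of the 95%-coverage empirical reachable set
```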

People as Sensors

Much like human drivers in the real world, who observe other drivers and make inferences, we have designed a framework that treats human drivers as sensors that provide environment information to our intelligent system. We use probabilistic learning methods to estimate a sensor model that captures how people dynamically respond to pedestrians (i.e., the relationship between environment state and driver action), so that a driver's actions can serve as a proxy for detection. This framework has shown significant improvement in overall environment awareness.
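
A minimal sketch of the underlying inference: given a learned likelihood of driver actions conditioned on whether a pedestrian is present, Bayes' rule turns an observed action (e.g., a hard brake) into an updated belief over occupancy. The probability tables here are invented for illustration, not learned values from the project.

```python
# Hypothetical sensor model: likelihoods P(action | pedestrian present / clear).
# In practice, these distributions would be estimated from driving data.
likelihood = {
    "hard_brake": {"pedestrian": 0.70, "clear": 0.05},
    "slow_down":  {"pedestrian": 0.25, "clear": 0.15},
    "maintain":   {"pedestrian": 0.05, "clear": 0.80},
}

def update_belief(prior_pedestrian, action):
    """Bayes update of P(pedestrian) after observing a driver's action."""
    p_action_ped = likelihood[action]["pedestrian"]
    p_action_clear = likelihood[action]["clear"]
    numer = p_action_ped * prior_pedestrian
    denom = numer + p_action_clear * (1.0 - prior_pedestrian)
    return numer / denom

# A single observed hard brake sharply raises the belief that a pedestrian is present.
belief = update_belief(prior_pedestrian=0.1, action="hard_brake")
print(round(belief, 3))  # ~0.609
```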

Bayesian Approach to Safe Human-in-the-Loop Learning

We have developed a new method for deep imitation learning, which aims to train a “novice” to replicate an “expert” human on the fly. Our method, DropoutDAgger, builds upon the DAgger algorithm by using dropout to train the novice as a Bayesian neural network, providing a distribution over actions. We then define a probabilistic measure of safety with respect to the expert action, which determines when the novice explores and when the expert should be exploited in a switched control setting. Our method exhibits improved performance and safety compared to classic imitation learning.
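
Roughly, the switching logic can be sketched as follows: run several stochastic (dropout-enabled) forward passes of the novice policy and hand control to the expert whenever the expert's action falls far outside the resulting action distribution. This is a simplified illustration in PyTorch; the network, threshold, and distance measure are placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class NovicePolicy(nn.Module):
    """Small placeholder policy network with dropout, kept active at inference."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def choose_action(novice, obs, expert_action, n_samples=50, threshold=2.0):
    """Switched control: act with the novice only if the expert's action lies within
    a few standard deviations of the novice's dropout action distribution."""
    novice.train()  # keep dropout active so each pass is a posterior sample
    with torch.no_grad():
        samples = torch.stack([novice(obs) for _ in range(n_samples)])
    mean, std = samples.mean(dim=0), samples.std(dim=0) + 1e-6
    distance = ((expert_action - mean).abs() / std).max()
    if distance < threshold:
        return mean, "novice"       # novice agrees with the expert: let it explore
    return expert_action, "expert"  # fall back to the expert for safety

# Hypothetical usage with a 4-D observation and 2-D action.
novice = NovicePolicy(obs_dim=4, act_dim=2)
obs = torch.zeros(4)
expert_action = torch.tensor([0.1, -0.3])
action, source = choose_action(novice, obs, expert_action)
print(source)
```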

Experimental Design for Human-in-the-Loop Driving Simulations

I've been working on setting up an experimental platform for human driving studies and control applications. The platform uses a Force Dynamics car simulator, shown on the left, to conduct driving experiments, and has been integrated with PreScan and dSPACE systems. This allows for a realistic driving experience while maintaining complete control of the test in a safe environment.


Safe Interaction with Autonomous Vehicles

Utilizing driver modeling methods to predict future driver actions in particular scenarios, we experiment with how autonomous vehicles interact with surrounding human drivers on the road. The goal is to create a new methodology for path planning and high-level control decisions for an autonomous vehicle that must interact and collaborate in a heterogeneous environment. This information can be communicated or measured by the autonomous vehicle of interest to plan control strategies for safe interaction.
This work has focused on the following (a minimal sketch of the mode-estimation idea follows the list):

  • estimating and predicting discrete modes of behavior in vehicles

  • predicting human driver responses to conveyed intent

  • generating human-like trajectories using optimization-based planning to match interaction metrics
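
As one illustration of the first item, discrete behavior modes (e.g., lane keeping vs. lane changing) can be tracked with a simple Bayes filter over modes, given an observation model and mode transition probabilities. The modes, transition matrix, and likelihoods below are hand-picked placeholders; in practice both models would be learned from data.

```python
import numpy as np

MODES = ["lane_keep", "lane_change"]
TRANSITION = np.array([[0.95, 0.05],   # rows: current mode, columns: P(next mode | current)
                       [0.30, 0.70]])

def observation_likelihood(lateral_velocity):
    """P(observation | mode): lane changes tend to show larger lateral velocity."""
    lk = np.exp(-0.5 * (lateral_velocity / 0.1) ** 2)                # lane keeping ~ near zero
    lc = np.exp(-0.5 * ((abs(lateral_velocity) - 0.5) / 0.2) ** 2)   # lane change ~ 0.5 m/s
    return np.array([lk, lc])

def bayes_filter(belief, lateral_velocity):
    """One predict-update step of a discrete Bayes filter over behavior modes."""
    predicted = TRANSITION.T @ belief                  # propagate mode dynamics
    posterior = observation_likelihood(lateral_velocity) * predicted
    return posterior / posterior.sum()

# Hypothetical usage: the belief shifts toward lane_change as lateral motion grows.
belief = np.array([0.9, 0.1])
for v in [0.0, 0.1, 0.3, 0.5, 0.55]:
    belief = bayes_filter(belief, v)
print(dict(zip(MODES, belief.round(3))))
```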

Improving Trust in Automation through Intelligent UI Design

When people interact with automation, there must be a clear line of communication to ensure understanding. This line of research focuses on the development of user interfaces to better inform the user of what the automation is doing. Specifically, we have looked at internal vs. external information and how to optimally design a user interface using information theory.
Through this, we have found:

  • by effectively conveying external information, we can:

    • improve driving performance while handing off control between the automation and the human driver

    • increase overall trust in the automation

    • improve situational awareness

  • by modeling the UI as a communication channel, we can adapt the UI to an individual's tendencies and balance brevity and utility in an optimal manner (a rough sketch of this scoring idea follows below)
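
One way to formalize the brevity-utility tradeoff, sketched here under invented distributions, is to treat the UI as a channel and score candidate displays by the mutual information between the automation's state and the message shown, penalized by expected message length. The joint distributions, message lengths, and weight below are illustrative only.

```python
import numpy as np

def mutual_information(joint):
    """I(X; Y) in bits for a joint distribution over (automation state, displayed message)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (px * py))
    return np.nansum(terms)

def score_display(joint, message_lengths, brevity_weight=0.1):
    """Utility = information conveyed about the automation's state minus a verbosity cost."""
    expected_length = (joint.sum(axis=0) * message_lengths).sum()
    return mutual_information(joint) - brevity_weight * expected_length

# Hypothetical: 3 automation states x 2 candidate messages, for two UI designs.
terse_ui = np.array([[0.30, 0.03],
                     [0.05, 0.28],
                     [0.15, 0.19]])
verbose_ui = np.array([[0.32, 0.01],
                       [0.02, 0.31],
                       [0.04, 0.30]])
print(score_display(terse_ui, message_lengths=np.array([2, 2])))
print(score_display(verbose_ui, message_lengths=np.array([8, 8])))
```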

Human-in-the-Loop Control for Semi-Autonomous Cars

To create a smart active safety system for semi-autonomous vehicles, a human-in-the-loop control system has been developed, utilizing driver modeling to accurately predict the actions of the driver. This work uses driver monitoring to determine when the human is distracted and/or has enough situational awareness to safely control the vehicle. Key components incorporate computer vision, machine learning, sensor fusion, and control theory. Extensions include probabilistic driver modeling to assess threat and applying formal verification and model checking to quantify driver performance.
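
A minimal sketch of the switching logic: fuse a distraction estimate from driver monitoring with a predicted threat level, and intervene only when the driver is unlikely to respond safely. The thresholds, signals, and decision rule below are placeholders for illustration, not the deployed system.

```python
def control_authority(p_distracted, threat_level,
                      distraction_threshold=0.6, threat_threshold=0.5):
    """Decide who controls the vehicle at this time step.

    p_distracted: probability from a driver-monitoring classifier (e.g., gaze/pose cues).
    threat_level: predicted collision risk from the driver model, in [0, 1].
    """
    if threat_level < threat_threshold:
        return "driver"        # low threat: never intervene unnecessarily
    if p_distracted > distraction_threshold:
        return "autonomy"      # high threat and a distracted driver: take over
    return "warn_driver"       # high threat but attentive driver: alert, don't override

# Hypothetical usage over a few monitoring samples.
for p, t in [(0.2, 0.3), (0.8, 0.3), (0.3, 0.7), (0.9, 0.7)]:
    print(p, t, "->", control_authority(p, t))
```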