OpenGL GUI with context-sensitive popup dialog (Actual robots shown in inset)
A Graphical User Interface (GUI) has been developed to enable the operation of multiple complex field robots. The interaction mechanism was inspired by interface techniques refined in the Real-Time Strategy (RTS) genre of video games, which includes popular titles such as Starcraft, Command & Conquer, and Strifeshadow. This mechanism follows three basic steps:
- The operator selects which robots to use
- The operator selects the objects to be acted on
- The operator selects a task to perform
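The three-step flow above can be pictured as a small piece of interface state: a task becomes issuable only once both robots and target objects have been selected. The sketch below is purely illustrative; the class and method names are hypothetical and are not taken from the actual GUI code.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the RTS-style select-robots / select-objects /
// select-task interaction flow; all names here are illustrative only.
class InteractionState {
public:
    void selectRobot(const std::string& robot)   { robots_.push_back(robot); }
    void selectObject(const std::string& object) { objects_.push_back(object); }

    // A task can be issued only after both selections have been made.
    bool canIssueTask() const { return !robots_.empty() && !objects_.empty(); }

    // Returns a human-readable command, or an empty string if incomplete.
    std::string issueTask(const std::string& task) const {
        if (!canIssueTask()) return "";
        return robots_.front() + ": " + task + " " + objects_.front();
    }

private:
    std::vector<std::string> robots_;
    std::vector<std::string> objects_;
};
```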
However, the nature of field robotics requires some significant differences in the implementation of the RTS interface method. For instance, there is no single source of accurate global information -- each robot can only provide relative data that has to be fused together. In addition, the tasks that each robot can perform change dynamically and this information must be reflected in the choices presented by the GUI to the operator.
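Because each robot reports only relative measurements, the GUI must express them in a common frame before fusing and displaying them. A minimal 2-D version of that basic transform is shown below; the real fusion pipeline is more involved, and this is only an illustration of the relative-to-global step.

```cpp
#include <cmath>

// A 2-D pose: position plus heading (radians).
struct Pose2D { double x, y, theta; };

// Express an observation made in a robot's local frame in the global frame,
// given the robot's own (estimated) global pose. This is the elementary step
// by which relative data from several robots can be combined into one
// world model.
Pose2D toGlobal(const Pose2D& robot, const Pose2D& relative) {
    double c = std::cos(robot.theta), s = std::sin(robot.theta);
    return Pose2D{
        robot.x + c * relative.x - s * relative.y,
        robot.y + s * relative.x + c * relative.y,
        robot.theta + relative.theta};
}
```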
The GUI utilizes OpenGL to display the robot world in three dimensions. Development was significantly aided by Glt (by Nigel Stewart) and GLUI (by Paul Rademacher). Using Glt, which includes GLUI, is highly recommended, especially for C++ programmers new to OpenGL. The OpenGL picking mechanism was used in conjunction with GLUI dialog boxes to provide a direct manipulation interface for robot operation. Additional screenshots and system architecture diagrams are also available (see below).
In the background, real-time data is handled by NDDS from RTI. The determination of robot capabilities, which change from moment to moment depending on robot state and object characteristics, is performed by the Java Theorem Prover (JTP) developed at Stanford by Gleb Frank. Communication between the GUI and JTP is carried out by the Open Agent Architecture (OAA) from SRI.
To gain insight into how humans already manage distributed teams, this research observed field exercises of a police Special Weapons and Tactics (SWAT) team. The Palo Alto / Mountain View (California) Regional SWAT team provided access to its training exercises. The researchers were given free movement throughout the exercise area so that the activities of the commanders, the field units, the snipers, and the hostages and suspects could all be monitored. The tactical commander and field units play roles analogous to the robot operator and the field robots, respectively. The key observations made were:
- The role of the leader (commander or operator) has two primary components
- Cultivating common ground
- Coordinating action
- A natural and efficient interaction can be based on physical objects in the remote agents' (field units or robots) environment, just as with the RTS games
SWAT Commander briefing his team
Other interaction methods
Other human-robot interaction projects that build on concepts previously developed in the Aerospace Robotics Laboratory are being considered, such as implementations of other interface modalities.
Object-Based Interaction on a Handheld Device
The interaction world is an a priori modeled environment based on Room 010 in the Durand Building at Stanford University. The 3-D implementation allows the user to view the robots in action from many perspectives, even those not possible in real life. Some example screenshots are shown here.
Live and Virtual Scene Comparison
From far above
A closeup of Huey at work
Just above the tabletop
The view when entering the room
The user is aided by various software agents that listen to the robots and perform helpful tasks, from determining what capabilities can be afforded to the user to collecting information about the world for later redistribution.
Afforded actions may change depending on instantaneous robot abilities
The Query Agent can list all tasks possible on a given object
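A simplified version of the capability query, matching a robot's instantaneous abilities against the operations an object supports, might look like the following. The real system derives afforded actions with JTP; this sketch is only an in-memory stand-in for the idea, with illustrative names.

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <string>
#include <vector>

// In-memory stand-in for the Query Agent: list the tasks that are currently
// possible on an object, i.e. the tasks the object supports that the selected
// robot is able to perform right now. (The real system reasons about this
// with the Java Theorem Prover; this is only an illustration.)
std::vector<std::string> possibleTasks(
        const std::set<std::string>& robotAbilities,
        const std::set<std::string>& objectTasks) {
    std::vector<std::string> result;
    // std::set iterates in sorted order, as set_intersection requires.
    std::set_intersection(objectTasks.begin(), objectTasks.end(),
                          robotAbilities.begin(), robotAbilities.end(),
                          std::back_inserter(result));
    return result;
}
```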
The system is made up of a community of agents that include the robots, the interface, and information-processing agents. For communication, the system uses the Network Data Delivery Service (NDDS) from RTI and the Open Agent Architecture (OAA) from SRI. NDDS handles medium- and high-frequency state updates, while OAA is responsible for the dialogues between agents that take place less often.
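The division of labor described above can be pictured as two channels: a publish/subscribe path for frequent state updates (the role NDDS plays) and a request/reply path for occasional agent dialogues (the role OAA plays). The in-process toy below only illustrates that split; none of its names come from the real NDDS or OAA APIs.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy illustration of the communication split: topics carry frequent state
// updates (NDDS's role), while named services answer occasional queries
// (OAA's role). All names here are hypothetical.
class MessageBus {
public:
    using StateHandler = std::function<void(double)>;
    using Service = std::function<std::string(const std::string&)>;

    // High-frequency path: publish/subscribe on a named topic.
    void subscribe(const std::string& topic, StateHandler h) {
        subscribers_[topic].push_back(std::move(h));
    }
    void publish(const std::string& topic, double value) {
        for (auto& h : subscribers_[topic]) h(value);
    }

    // Low-frequency path: request/reply against a registered service.
    void registerService(const std::string& name, Service s) {
        services_[name] = std::move(s);
    }
    std::string request(const std::string& name, const std::string& query) {
        auto it = services_.find(name);
        return it == services_.end() ? "" : it->second(query);
    }

private:
    std::map<std::string, std::vector<StateHandler>> subscribers_;
    std::map<std::string, Service> services_;
};
```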
Last modified Tue, 2 Nov, 2010 at 20:22