Monday, December 1, 2003, 11:00am - Monday, December 1, 2003, 12:00pm


Autonomous agents must make real-time decisions about the scheduling and coordination of domain activities. These control decisions are made in the context of limited resources and uncertainty about action outcomes. The meta-level control problem is deciding how to sequence domain and control actions without consuming too many resources in the process. The state of the art in agent architectures and control algorithms does not explicitly reason about the cost of time and other resources consumed by control actions, an omission that may degrade an agent's performance. In this talk, I describe a meta-level control agent architecture with bounded computational overhead which supports reasoning about control actions as first-class entities. I then present a series of increasingly sophisticated approaches to meta-level control based on high-level features that capture critical state information in a concise form. The approaches differ in the amount of knowledge, including learned knowledge, that they use. I demonstrate empirically that meta-level reasoning leads to improved performance. I also show that offline reinforcement learning is a viable approach for autonomously constructing efficient meta-level control policies.
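To make the last claim concrete, here is a minimal sketch of the general idea: learning a meta-level control policy offline, from a fixed batch of logged experience, with tabular Q-learning. The MDP below (abstract "deadline" states, an act-now vs. deliberate choice, and the reward numbers) is entirely hypothetical and only illustrates the flavor of the technique, not the actual model or architecture presented in the talk.

```python
import random

# Hypothetical meta-level MDP (illustrative only, not the talk's model):
# abstract states summarize how close the deadline is; actions are the
# meta-level choices: act on the current plan now, or deliberate first.
STATES = ["deadline_far", "deadline_near"]
ACTIONS = ["act_now", "deliberate"]

def step(state, action):
    """Assumed dynamics: deliberation improves the plan when time allows,
    but costs too much once the deadline is near."""
    if state == "deadline_far":
        if action == "deliberate":
            return "deadline_near", 2.0   # better plan, but time passes
        return "terminal", 1.0            # acted on a mediocre plan
    else:  # deadline_near
        if action == "deliberate":
            return "terminal", -1.0       # deliberated past the deadline
        return "terminal", 3.0            # acted on the improved plan

# 1) Collect a fixed batch of experience with a random behavior policy.
random.seed(0)
batch = []
for _ in range(500):
    s = "deadline_far"
    while s != "terminal":
        a = random.choice(ACTIONS)
        s2, r = step(s, a)
        batch.append((s, a, r, s2))
        s = s2

# 2) Offline Q-learning: repeatedly sweep the logged transitions.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma = 0.1, 1.0
for _ in range(200):
    for s, a, r, s2 in batch:
        future = 0.0 if s2 == "terminal" else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])

# 3) Extract the greedy meta-level control policy.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

Under these assumed rewards the learned policy deliberates while the deadline is far and acts immediately once it is near, i.e., the agent has learned when control actions are worth their cost, which is exactly the kind of trade-off meta-level control addresses.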

Anita Raja is an Assistant Professor of Software and Information Systems at The University of North Carolina at Charlotte. She received a B.S. with Honors in Computer Science, with a minor in Mathematics, summa cum laude from Temple University, Philadelphia in 1996, and an M.S. and a Ph.D. in Computer Science from the University of Massachusetts, Amherst in 1998 and 2003, respectively. Her current research interests include the design and control of multi-agent systems, bounded rationality, adaptive agent control, multi-agent learning, distributed information gathering, and organizational design.


UMBC ebiquity