Feature-Based Local Policy Reinforcement Learning
Tuesday, May 12, 2009, 12:30pm - 2:30pm
MS Defense

The problem of learning to control an agent in an arbitrary environment is difficult. In robotics, the standard approach is to hand-code and manually fine-tune a robot's perception of its environment and the actions it should take in its current state. This is both time-consuming and expensive. A better approach is to learn features and action policies without significant manual intervention. This problem is investigated in the context of learning image features to control a fovea position on an image. Features are extracted from images using a self-organizing feature map. Controllers are then placed at each node and use reinforcement learning to learn how to move a fovea between areas in an image that closely match features in the feature map.
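The feature-extraction step described above can be illustrated with a minimal self-organizing map over flattened image patches. This is a generic SOM sketch, not the thesis implementation: the 1-D lattice, parameter defaults, and decay schedules are all illustrative assumptions.

```python
import numpy as np

def train_som(patches, n_nodes=16, n_iters=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D self-organizing feature map on image patches.

    patches: (n_samples, patch_dim) array of flattened image patches.
    Returns an (n_nodes, patch_dim) array of learned feature vectors.
    All defaults here are illustrative, not values from the thesis.
    """
    rng = np.random.default_rng(seed)
    dim = patches.shape[1]
    # Initialize node weights uniformly within the data range.
    weights = rng.uniform(patches.min(), patches.max(), size=(n_nodes, dim))
    coords = np.arange(n_nodes)  # node positions on a 1-D lattice

    for t in range(n_iters):
        x = patches[rng.integers(len(patches))]               # random training patch
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        frac = t / n_iters
        lr = lr0 * (1.0 - frac)                # learning rate decays over training
        sigma = sigma0 * (1.0 - frac) + 1e-3   # neighborhood radius shrinks
        # Gaussian neighborhood kernel centered on the best-matching unit.
        h = np.exp(-((coords - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)  # pull neighbors toward the patch
    return weights
```

After training, each node's weight vector serves as a learned feature; a fovea position can then be matched to the node whose feature is closest to the patch under the fovea, which is where a local controller would take over.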
Contributions of this work include an analysis of how network parameters (number of nodes, patch size) and sampling methods (random, random walk, structured walk) affect the learned features, and an understanding of how to perform local control based on learned features (as opposed to the monolithic policies used in most RL approaches).

Committee
- Dr. Tim Oates (Chair)
- Dr. Clay Morrison (University of Arizona)
- Dr. Marie desJardins
- Dr. Yun Peng