Selective Knowledge Transfer for Machine Learning
by Eric Eaton
Wednesday, April 15, 2009, 11:30am - 2:00pm
325b ITE
Knowledge transfer from previously learned tasks to a new task is a fundamental component of human learning. Recent work has shown that knowledge transfer can also improve machine learning, enabling more rapid learning or higher levels of performance. Transfer allows learning algorithms to reuse knowledge from a set of previously learned source tasks to improve learning on new target tasks. Proper selection of the source knowledge to transfer to a given target task is critical to the success of knowledge transfer. Poorly chosen source knowledge may reduce the effectiveness of transfer, or hinder learning through a phenomenon known as negative transfer.
This dissertation proposes several methods for selecting source knowledge based on the transferability between learning tasks. Transferability is introduced as the change in performance on a target task when learning with transfer versus without it. These methods show that transferability can be used to select source knowledge for two major types of transfer: instance-based transfer, which reuses individual data instances from the source tasks, and model-based transfer, which reuses components of previously learned source models.
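The notion of transferability above can be made concrete as a simple performance difference. The sketch below is illustrative only; the function name and the accuracy-based setup are assumptions, not the dissertation's actual formulation.

```python
# Hypothetical sketch: transferability as the difference in target-task
# performance between learning with and without source-task transfer.
# The accuracy-based framing here is an illustrative assumption.

def transferability(perf_with_transfer: float, perf_without_transfer: float) -> float:
    """Positive values indicate beneficial transfer; negative values
    indicate negative transfer (the source knowledge hinders learning)."""
    return perf_with_transfer - perf_without_transfer

# A source task whose knowledge helps the target task...
helpful = transferability(0.92, 0.85)   # positive: beneficial transfer
# ...versus one whose knowledge hurts it.
harmful = transferability(0.78, 0.85)   # negative: negative transfer
```

Under this view, source selection amounts to preferring sources with high (positive) transferability estimates and avoiding those that would cause negative transfer.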
For selective instance-based transfer, the proposed TransferBoost algorithm uses a novel form of set-based boosting to determine which individual source instances to transfer when learning the target task. TransferBoost reweights the instances from each source task based on that task's collective transferability to the target task, and then performs standard boosting to adjust the individual instance weights.
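The set-based reweighting step can be sketched as follows. This is a minimal illustration of the idea, not the actual TransferBoost implementation; the function names, the exponential update form, and the `eta` parameter are assumptions.

```python
import numpy as np

# Hypothetical sketch of set-based reweighting: all instances from a
# given source task are up- or down-weighted together, according to
# that task's estimated transferability to the target task.
# Names and the exp-based update are illustrative assumptions.

def reweight_source_tasks(instance_weights, task_ids, transferabilities, eta=1.0):
    """instance_weights: (n,) weights over the pooled source instances.
    task_ids: (n,) source-task id of each instance.
    transferabilities: dict mapping task id -> transferability estimate.
    Returns weights scaled per task and renormalized to sum to 1."""
    w = np.asarray(instance_weights, dtype=float).copy()
    for tid, t in transferabilities.items():
        w[task_ids == tid] *= np.exp(eta * t)   # one update per task (set-level)
    return w / w.sum()

# Two source tasks: task 0 transfers well (+0.1), task 1 poorly (-0.1).
weights = reweight_source_tasks(
    np.ones(4) / 4,
    np.array([0, 0, 1, 1]),
    {0: 0.1, 1: -0.1},
)
# Instances from the helpful task (id 0) now carry more weight.
```

After such a set-level update, ordinary boosting can still adjust individual instance weights within each task, as the abstract describes.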
For model-based transfer, the learning tasks are organized into a directed network based on their transfer relationships to each other: tasks that are close in this network have high transferability, and tasks that are far apart have low transferability. Model-based transfer is then equivalent to learning a labeling function on this network. This dissertation proposes a novel spectral graph labeling algorithm that constrains the smoothness of the learned function using the graph Laplacian's eigenvalues. Applied to the task transferability network, this method learns a transfer function that automatically determines the model parameter values to transfer to a target task. Experiments validate the success of these methods for selective knowledge transfer, demonstrating significantly improved performance over existing methods.
Committee Members
- Dr. Marie desJardins (Chair)
- Dr. Tim Finin
- Dr. Tim Oates
- Dr. Yun Peng
- Dr. Terran Lane (UNM)