Publication
IJCNN 2016
Conference paper
Manifold regularization based approximate value iteration for learning control
Abstract
In this paper, we develop a model-free, data-efficient batch reinforcement learning algorithm for learning control of continuous state-space, discounted-reward Markov decision processes. The algorithm is an approximate value iteration that uses manifold regularization to learn feature representations for Q-value function approximation. Because the features are learned from collected samples, they preserve the intrinsic geometry of the state space and thus improve the quality of the final value function estimate and the learned policy. The effectiveness and efficiency of the proposed scheme are evaluated on a benchmark control task, the inverted pendulum balancing problem.
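To make the high-level recipe in the abstract concrete, the sketch below shows one common way such a scheme can be assembled: graph-Laplacian (Laplacian eigenmap style) features built from sampled states, followed by batch fitted Q-iteration over those features with discrete actions. This is a minimal illustration under stated assumptions, not the paper's exact algorithm; the function names (`laplacian_features`, `fitted_q_iteration`) and all hyperparameters are hypothetical, and features for next states are assumed to come from the same embedding (in practice an out-of-sample / Nyström extension would be needed).

```python
import numpy as np

def laplacian_features(states, n_features=20, sigma=0.5):
    """Graph-Laplacian features over sampled states (Laplacian eigenmap
    style), so the learned basis follows the geometry of the sampled data.
    Hypothetical sketch; the paper's feature-learning step may differ."""
    d2 = np.sum((states[:, None, :] - states[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian similarity graph
    L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
    _, vecs = np.linalg.eigh(L)               # smoothest eigenvectors first
    return vecs[:, :n_features]               # one feature row per state

def fitted_q_iteration(phi, phi_next, actions, rewards, n_actions,
                       gamma=0.95, n_iters=50, reg=1e-3):
    """Batch approximate value iteration on a fixed set of transitions:
    each sweep forms Bellman targets from the current Q estimate and
    ridge-regresses them onto the features, one weight vector per action."""
    n, d = phi.shape
    w = np.zeros((n_actions, d))
    for _ in range(n_iters):
        # Greedy one-step lookahead with the current linear Q estimate.
        q_next = np.stack([phi_next @ w[a] for a in range(n_actions)])
        targets = rewards + gamma * q_next.max(axis=0)
        # Fit each action's weights on the transitions that used it.
        for a in range(n_actions):
            m = actions == a
            A = phi[m].T @ phi[m] + reg * np.eye(d)
            w[a] = np.linalg.solve(A, phi[m].T @ targets[m])
    return w
```

In a pendulum-style setup one would collect transitions (state, action, reward, next state) with an exploratory policy, compute the embedding over the union of current and next states, split it into `phi` and `phi_next`, and run `fitted_q_iteration`; the greedy policy is then the action maximizing `phi(s) @ w[a]`.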