Rama Akkiraju, Pinar Keskinocak, et al.
Applied Intelligence
In this study, we discuss a baseline function for reducing the variance of natural policy gradient estimates, and show a condition under which the optimal variance-reducing baseline coincides with the state value function. Outside this condition, however, the state value can differ considerably from the optimal baseline. For such cases, an extended version of the NTD algorithm is proposed, in which an auxiliary function is estimated to adjust the baseline, which is the state-value estimate in the original NTD algorithm, toward the optimal baseline. The proposed algorithm is applied to simple MDPs and to a challenging pendulum swing-up problem. © International Symposium on Artificial Life and Robotics (ISAROB). 2008.
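The variance-reduction idea the abstract refers to can be illustrated with a minimal sketch (this is not the paper's NTD algorithm; the toy bandit, policy parameterization, and sample sizes are assumptions for illustration): subtracting a baseline from the reward leaves the policy-gradient estimate unbiased, and choosing the baseline near the state value (here, the expected reward) shrinks its variance.

```python
import numpy as np

# Hedged toy example (assumed setup, not from the paper): a 2-armed bandit
# with a softmax policy over preferences [theta, 0]. We compare the variance
# of REINFORCE-style gradient samples with baseline 0 vs. baseline = value.

rng = np.random.default_rng(0)
theta = 0.3
rewards = np.array([1.0, 0.0])   # deterministic reward for arms 0 and 1

def pi(theta):
    # softmax policy probabilities for the two arms
    e = np.exp([theta, 0.0])
    return e / e.sum()

def grad_samples(baseline, n=100_000):
    # per-sample gradient estimate: d/dtheta log pi(a) * (r - baseline)
    p = pi(theta)
    a = rng.choice(2, size=n, p=p)
    grad_log_pi = (a == 0).astype(float) - p[0]  # d/dtheta log pi(a)
    return grad_log_pi * (rewards[a] - baseline)

g0 = grad_samples(baseline=0.0)
value = pi(theta) @ rewards      # "state value" = expected reward
gv = grad_samples(baseline=value)

# Both sample means estimate the same true gradient p0*(1-p0);
# the value baseline yields a markedly smaller sample variance.
print("means:", g0.mean(), gv.mean())
print("variances:", g0.var(), gv.var())
```

Running this shows the two estimators agree in expectation while the baseline-corrected one has a much smaller variance, which is the property the paper's optimal-baseline analysis builds on.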
Freddy Lécué, Jeff Z. Pan
IJCAI 2013
Albert Atserias, Anuj Dawar, et al.
Journal of the ACM
Els van Herreweghen, Uta Wille
USENIX Workshop on Smartcard Technology 1999