Proximal Policy Optimization with Continuous Bounded Action Space via the Beta Distribution

on Fri, 10/15/2021 - 15:46
Title: Proximal Policy Optimization with Continuous Bounded Action Space via the Beta Distribution
Publication Type: Conference Proceedings
Year of Conference: 2021
Authors: Petrazzini IGB, Antonelo EA
Conference Name: 2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Pagination: 1-8
Abstract

Reinforcement learning methods for continuous control tasks have evolved in recent years, generating a family of policy gradient methods that rely primarily on a Gaussian distribution for modeling a stochastic policy. However, the Gaussian distribution has infinite support, whereas real-world applications usually have a bounded action space. This mismatch causes an estimation bias that can be eliminated if the Beta distribution, which has finite support, is used for the policy instead. In this work, we investigate how this Beta policy performs when trained by the Proximal Policy Optimization (PPO) algorithm on two continuous control tasks from OpenAI Gym. On both tasks, the Beta policy is superior to the Gaussian policy in terms of the agent's final expected reward, and it also shows more stability and faster convergence during training. On the CarRacing environment with high-dimensional image input, the agent's success rate improved by 63% over the Gaussian policy.
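The core idea above can be illustrated with a minimal sketch of a Beta policy head. The code below is an assumption-laden illustration, not the authors' implementation: it maps unconstrained network outputs to Beta shape parameters with `softplus(x) + 1` (a common choice that keeps the density unimodal), samples in [0, 1], and rescales the sample affinely into the bounded action range, so every action lies inside the bounds by construction.

```python
import numpy as np

def softplus(x):
    """Numerically simple softplus: log(1 + exp(x))."""
    return np.log1p(np.exp(x))

def beta_policy_action(logit_alpha, logit_beta, low, high, rng):
    """Sample a bounded action from a Beta policy.

    logit_alpha, logit_beta: unconstrained outputs of a policy network
    (hypothetical names; the paper's network architecture is not shown here).
    """
    # Constrain shape parameters to (1, inf) so the Beta density is unimodal.
    alpha = softplus(logit_alpha) + 1.0
    beta = softplus(logit_beta) + 1.0
    u = rng.beta(alpha, beta)          # sample lies in [0, 1]
    return low + (high - low) * u      # affine rescale to [low, high]

# Example: a steering action bounded in [-1, 1], as in CarRacing.
rng = np.random.default_rng(0)
action = beta_policy_action(0.5, -0.3, -1.0, 1.0, rng)
```

Because the Beta support is [0, 1], no clipping or squashing is needed, which is what removes the boundary estimation bias that clipped Gaussian policies incur.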

URL: https://ieeexplore.ieee.org/abstract/document/9660123
DOI: 10.1109/SSCI50451.2021.9660123