Abstract
Proximal policy optimization (PPO) is one of the most successful deep reinforcement learning methods, achieving state-of-the-art performance across a wide range of challenging tasks. However, its optimization behavior is still far from fully understood. In this paper, we show that PPO neither strictly restricts the probability ratio, as it is designed to do, nor enforces a well-defined trust region constraint, which means it still risks performance instability. To address this issue, we present an enhanced PPO method, named Trust Region-based PPO with Rollback (TR-PPO-RB). Two critical improvements are made in our method: 1) it adopts a new clipping function that supports a rollback behavior to restrict the ratio between the new policy and the old one; 2) the triggering condition for clipping is replaced with a trust region-based one, which is theoretically justified by the trust region theorem. By adhering more closely to the "proximal" property, i.e., restricting the policy within the trust region, the new algorithm improves on the original PPO in both stability and sample efficiency.
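To make the two modifications concrete, the sketch below shows a PPO-style surrogate loss that combines a rollback slope with a KL-based trust-region trigger. It is an illustration based only on the abstract's description: the function name `tr_ppo_rb_loss`, the hyperparameter values `delta` and `alpha`, and the exact gating condition are assumptions, not the paper's definitions.

```python
import torch

def tr_ppo_rb_loss(ratio, advantage, kl_old_new, delta=0.03, alpha=0.3):
    """Sketch of a surrogate loss with rollback clipping and a KL-based
    (trust-region) triggering condition.

    ratio      : pi_new(a|s) / pi_old(a|s), shape (batch,)
    advantage  : advantage estimates, shape (batch,)
    kl_old_new : per-state KL(pi_old || pi_new), shape (batch,)
    delta      : KL threshold defining the trust region (illustrative value)
    alpha      : rollback slope coefficient (illustrative value)
    """
    # Standard policy-gradient surrogate.
    surrogate = ratio * advantage

    # Rollback term: outside the trust region the objective gets a negative
    # slope (-alpha * ratio * advantage) instead of the flat, zero-gradient
    # region of PPO's clip, so the gradient actively pushes the policy back.
    rollback = -alpha * ratio * advantage

    # Trust-region trigger: gate on the per-state KL divergence rather than
    # on the raw probability ratio, as in vanilla PPO clipping.
    outside = kl_old_new > delta

    per_sample = torch.where(outside, rollback, surrogate)
    return -per_sample.mean()  # negate because optimizers minimize


# Illustrative usage on dummy tensors.
ratio = torch.tensor([0.9, 1.5, 1.1])
advantage = torch.tensor([1.0, 2.0, -0.5])
kl = torch.tensor([0.01, 0.08, 0.02])
loss = tr_ppo_rb_loss(ratio, advantage, kl)
```

The design point the sketch captures is the contrast with PPO's clip: clipping merely zeroes the gradient once the policy leaves the allowed range, whereas the rollback slope produces a restoring gradient, and the KL-based condition ties the trigger to the trust region rather than to the ratio.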
| Original language | English (US) |
| --- | --- |
| Pages | 113-122 |
| Number of pages | 10 |
| State | Published - 2019 |
| Event | 35th Uncertainty in Artificial Intelligence Conference, UAI 2019 - Tel Aviv, Israel. Duration: Jul 22 2019 → Jul 25 2019 |
Conference
| Conference | 35th Uncertainty in Artificial Intelligence Conference, UAI 2019 |
| --- | --- |
| Country/Territory | Israel |
| City | Tel Aviv |
| Period | 07/22/19 → 07/25/19 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability