Truly Proximal Policy Optimization

Yuhui Wang*, Hao He*, Xiaoyang Tan

*Corresponding author for this work

    Research output: Contribution to conference › Paper › peer-review

    45 Scopus citations


    Proximal policy optimization (PPO) is one of the most successful deep reinforcement learning methods, achieving state-of-the-art performance across a wide range of challenging tasks. However, its optimization behavior is still far from fully understood. In this paper, we show that PPO can neither strictly restrict the probability ratio, as it attempts to do, nor enforce a well-defined trust region constraint, which means it may still suffer from the risk of performance instability. To address this issue, we present an enhanced PPO method, named Trust Region-based PPO with Rollback (TR-PPO-RB). Two critical improvements are made in our method: 1) it adopts a new clipping function that supports a rollback behavior to restrict the ratio between the new policy and the old one; 2) the triggering condition for clipping is replaced with a trust region-based one, which is theoretically justified by the trust region theorem. By adhering more truly to the "proximal" property of restricting the policy within the trust region, the new algorithm improves on the original PPO in both stability and sample efficiency.
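The contrast between PPO's flat clipping and the rollback behavior can be sketched in a few lines. The standard PPO surrogate is well known; the rollback variant below is an illustrative sketch of the idea described in the abstract (outside the clipping range, the surrogate's slope with respect to the ratio becomes negative, actively pushing the ratio back), with the slope coefficient `alpha` and the exact functional form being assumptions for illustration rather than the paper's precise definitions.

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A).

    Outside [1-eps, 1+eps] the clipped term is constant in r, so the
    gradient w.r.t. the policy vanishes: nothing pulls the ratio back.
    """
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)


def rollback_objective(ratio, advantage, eps=0.2, alpha=0.3):
    """Rollback-style surrogate (illustrative sketch, not the paper's
    exact formula): outside the clipping range the surrogate decreases
    linearly in r, so gradients actively push the ratio back toward
    [1-eps, 1+eps] instead of merely flattening out.
    """
    if ratio > 1.0 + eps:
        f = -alpha * ratio + (1.0 + alpha) * (1.0 + eps)
    elif ratio < 1.0 - eps:
        f = -alpha * ratio + (1.0 + alpha) * (1.0 - eps)
    else:
        f = ratio  # inside the range, identical to the unclipped term
    return min(ratio * advantage, f * advantage)
```

With a positive advantage and a ratio that has overshot the upper bound (e.g. `ratio=1.5`, `eps=0.2`), the PPO surrogate plateaus at `1.2 * advantage`, while the rollback sketch returns a strictly smaller value that keeps shrinking as the ratio grows, illustrating the restoring force the paper's clipping function is designed to provide.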

    Original language: English (US)
    Number of pages: 10
    State: Published - 2019
    Event: 35th Uncertainty in Artificial Intelligence Conference, UAI 2019 - Tel Aviv, Israel
    Duration: Jul 22, 2019 - Jul 25, 2019


    Conference: 35th Uncertainty in Artificial Intelligence Conference, UAI 2019
    City: Tel Aviv

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Software
    • Control and Systems Engineering
    • Statistics and Probability

