Alleviating the estimation bias of deep deterministic policy gradient via co-regularization

Yao Li, Yu Hui Wang, Yao Zhong Gan, Xiao Yang Tan*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    3 Scopus citations


    Overestimation in Deep Deterministic Policy Gradient (DDPG), caused by value-approximation error, can make policy training unstable. Twin Delayed Deep Deterministic Policy Gradient (TD3) addresses the overestimation but suffers from underestimation instead. In this paper, we propose a Co-Regularization based Deep Deterministic (CoD2) policy gradient method to mitigate the estimation bias. To this end, two learners, one biased toward overestimation and one toward underestimation, are trained with co-regularization. In CoD2, the overestimated and underestimated values are updated conservatively for policy evaluation. Experimental results show that our method achieves performance comparable to that of other methods.
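    The abstract does not give the exact update rule, but the idea it describes (two critics with opposite biases, pulled together by a co-regularization term and bootstrapped conservatively) can be sketched as follows. This is a hypothetical toy illustration in NumPy, not the paper's implementation; the function name, the use of the element-wise minimum as the conservative bootstrap, and the squared-disagreement penalty weighted by `co_reg` are all assumptions.

    ```python
    import numpy as np

    def cod2_critic_losses(q_over, q_under, reward, q_over_next, q_under_next,
                           gamma=0.99, co_reg=0.1):
        """Toy per-batch losses for two co-regularized critics (sketch only).

        q_over / q_under: current-state value estimates of the two learners,
        one assumed biased toward over- and one toward under-estimation.
        """
        # Conservative target: bootstrap from the more pessimistic of the two
        # next-state estimates (an assumption, in the spirit of TD3's min).
        target = reward + gamma * np.minimum(q_over_next, q_under_next)
        # Co-regularization: a penalty on the disagreement between the two
        # learners pulls their estimates toward each other.
        disagreement = (q_over - q_under) ** 2
        loss_over = np.mean((q_over - target) ** 2 + co_reg * disagreement)
        loss_under = np.mean((q_under - target) ** 2 + co_reg * disagreement)
        return float(loss_over), float(loss_under)
    ```

    Under this reading, each learner's loss combines its own TD error against the shared conservative target with a term that penalizes disagreement between the two learners, so neither bias can drift unchecked.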

    Original language: English (US)
    Article number: 108872
    Journal: Pattern Recognition
    State: Published - Nov 2022


    Keywords

    • Co-training
    • Deterministic policy gradient
    • Overestimation
    • Reinforcement learning
    • Underestimation

    ASJC Scopus subject areas

    • Software
    • Signal Processing
    • Computer Vision and Pattern Recognition
    • Artificial Intelligence


