Bilkent University
Department of Computer Engineering
CS590/690 SEMINAR
Corrected Uniform Experience Replay for Off-Policy Continuous Deep Reinforcement Learning Algorithms
Arda Sarp Yenicesu
M.S. Student
(Supervisor: Asst. Prof. Özgür Salih Öğüz)
Computer Engineering Department
Bilkent University
Abstract: The experience replay mechanism enables agents to reuse their experiences multiple times. In previous studies, the sampling probability of each transition was adjusted according to its relative importance. However, reassigning sampling probabilities to every transition in the replay buffer after each iteration is extremely inefficient. Hence, to improve computational efficiency, experience replay prioritization algorithms reassess the importance of a transition only when it is sampled. Yet the relative importance of transitions changes dynamically as the agent's policy and value function are iteratively updated, so the priorities stored in the buffer quickly become stale. Furthermore, experience replay retains transitions generated by the agent's past policies, which may diverge significantly from its most recent policy. The greater this deviation, the more frequent the off-policy updates, which harms the agent's performance. In this work, we develop a novel algorithm, Corrected Uniform Experience Replay (CUER), which stochastically samples stored experiences fairly with respect to all other experiences while accounting for the dynamic nature of transition importance by making the sampled state distribution more on-policy. CUER provides promising improvements for off-policy continuous control algorithms in terms of sample efficiency, final performance, and policy stability during training.
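As background, the efficiency/staleness trade-off described in the abstract can be sketched as follows: a prioritized buffer that refreshes a transition's priority only when that transition is sampled, leaving all other stored priorities tied to an older policy and value function. This is a minimal illustrative sketch, not the paper's CUER implementation; the class name, parameters, and TD-error hook are hypothetical.

```python
import numpy as np


class LazyPrioritizedBuffer:
    """Illustrative sketch (not CUER): priorities are refreshed only
    for sampled transitions, so unsampled priorities go stale as the
    agent's policy and value function keep changing."""

    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities bias sampling
        self.transitions = []       # stored (s, a, r, s_next, done) tuples
        self.priorities = []        # one scalar priority per transition

    def add(self, transition, initial_priority=1.0):
        # Drop the oldest transition once capacity is reached.
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)
            self.priorities.pop(0)
        self.transitions.append(transition)
        self.priorities.append(initial_priority)

    def sample(self, batch_size):
        # Sampling probability proportional to priority**alpha.
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.transitions), size=batch_size, p=probs)
        return idx, [self.transitions[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        # Only the just-sampled transitions receive fresh priorities;
        # every other stored priority reflects an outdated update.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + 1e-6


# Example usage with placeholder transitions and stand-in TD errors.
buf = LazyPrioritizedBuffer()
for _ in range(1000):
    buf.add(("s", "a", 0.0, "s_next", False))
idx, batch = buf.sample(batch_size=32)
buf.update_priorities(idx, td_errors=np.random.randn(32))
```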
DATE: Monday, October 16th @ 16:30, EA-502