Collaborative exploration and reinforcement learning between heterogeneously skilled agents in environments with sparse rewards
Published in the International Joint Conference on Neural Networks (IJCNN), 2021
The full paper can be found here
Abstract: A critical goal in reinforcement learning is to minimize the time an agent needs to learn to solve a given environment. In this context, collaborative reinforcement learning refers to the improvement of this learning process through interaction between agents, which usually yields better results than training each agent in isolation. Most studies in this area have focused on the case of homogeneous agents, namely, agents equally skilled at undertaking their task. By contrast, heterogeneity among agents can arise from differences in how they sense the environment and/or the actions they can perform. Such differences can hinder both the learning process and the sharing of information between agents. This issue becomes even more complicated to address in hard-exploration scenarios, where the extrinsic rewards collected from the environment are sparse.
This work sheds light on the impact of collaborative learning strategies between heterogeneously skilled agents in hard-exploration scenarios. Our study centers on how knowledge can be shared and exploited between agents so that they mutually improve their learning processes, further considering mechanisms to cope with sparse rewards.
We assess the performance of these strategies via extensive simulations on modified versions of the ViZDoom environment, which allow us to examine their benefits and drawbacks when dealing with agents endowed with different behavioral policies. Our results uncover the inherent problems of disregarding the skill heterogeneity of the agents in the knowledge-sharing strategy, and open up a manifold of research directions aimed at circumventing these issues.
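Since the abstract does not detail the algorithm, the following is a minimal, self-contained sketch of the general idea rather than the method used in the paper: two tabular Q-learning agents with different action sets (heterogeneous skills) explore a corridor whose only extrinsic reward sits at the far end (sparse reward), use a count-based intrinsic bonus to densify that signal, and naively share experience after each episode. The toy environment, the `Agent` class, the count-based bonus, and the `share` routine are all illustrative assumptions, not artifacts of the paper.

```python
import random
from collections import defaultdict

# Toy 1-D corridor: the only extrinsic reward is +1 at the far end (sparse).
GOAL, START, LENGTH = 10, 0, 11

def step(state, action):
    """action: -1 (left), +1 (right), +2 (jump; not every agent has it)."""
    nxt = max(0, min(LENGTH - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0  # sparse extrinsic reward
    return nxt, reward, nxt == GOAL

class Agent:
    def __init__(self, actions, alpha=0.5, gamma=0.95, eps=0.2, beta=0.1):
        self.actions = actions          # heterogeneous skill set
        self.q = defaultdict(float)     # tabular Q(s, a)
        self.visits = defaultdict(int)  # state counts for the intrinsic bonus
        self.alpha, self.gamma, self.eps, self.beta = alpha, gamma, eps, beta

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def learn(self, s, a, r, s2):
        # Count-based intrinsic bonus to cope with the sparse extrinsic reward.
        self.visits[s2] += 1
        r_total = r + self.beta / self.visits[s2] ** 0.5
        target = r_total + self.gamma * max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

def share(sender_trace, receiver):
    # Naive experience sharing: the receiver replays only transitions whose
    # action lies in its own skill set. Ignoring skill heterogeneity here
    # (e.g., replaying a "jump" into an agent that cannot jump) is exactly
    # the failure mode the abstract warns about.
    for s, a, r, s2 in sender_trace:
        if a in receiver.actions:
            receiver.learn(s, a, r, s2)

agents = [Agent([-1, 1, 2]), Agent([-1, 1])]  # skilled vs. restricted agent
for episode in range(200):
    for me, peer in ((agents[0], agents[1]), (agents[1], agents[0])):
        s, trace = START, []
        for _ in range(50):
            a = me.act(s)
            s2, r, done = step(s, a)
            me.learn(s, a, r, s2)
            trace.append((s, a, r, s2))
            s = s2
            if done:
                break
        share(trace, peer)  # collaborative knowledge transfer
```

The action filter inside `share` hints at the core difficulty the paper studies: transitions generated under one skill set are not necessarily useful, or even executable, for an agent with a different one.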