Phased Continuous Exploration Method for Cooperative Multi-Agent Reinforcement Learning

Jie Kang, Yaqing Hou, Yifeng Zeng, Yongchao Chen, Xiangrong Tong, Xin Xu, Qiang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In multi-agent reinforcement learning, effective exploration remains challenging due to the non-stationarity of the environment and discrepancies between local and global information. In this paper, we propose a curiosity-driven phased continuous exploration method, termed PCE. We observe that agents in different learning phases possess distinct knowledge and policies, and can therefore extract diverse knowledge and experiences from the same states. Accordingly, we divide the training process into distinct phases and employ a curiosity-driven method to explore independently within each phase. In addition, to address the inconsistency between local and global information in multi-agent systems, we balance exploration from the local and global perspectives. Finally, we evaluate the proposed method on the popular multi-agent benchmark StarCraft II. The results indicate that the method effectively enhances the exploration capabilities of agents.
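To make the mechanism in the abstract concrete, the following Python sketch illustrates one plausible reading of it. This is not the authors' implementation: the abstract does not specify the curiosity model, the phase schedule, or the balancing rule, so the ICM-style forward-model prediction error, the fixed-length phases (steps_per_phase), and the local/global mixing weight beta below are all illustrative assumptions, and every class and parameter name is hypothetical.

    # Minimal sketch of phased, curiosity-driven exploration with a
    # local/global balance. Assumptions (not from the paper): curiosity =
    # forward-model prediction error; phases = fixed step windows; balance =
    # fixed mixing weight beta.
    import torch
    import torch.nn as nn

    class ForwardModel(nn.Module):
        """Predicts the next state from (state, action); the squared
        prediction error serves as the curiosity signal."""
        def __init__(self, in_dim, act_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, in_dim),
            )

        def forward(self, x, act):
            return self.net(torch.cat([x, act], dim=-1))

    class PhasedCuriosity:
        """One pair of curiosity models (local obs, global state) per phase.
        Entering a new phase switches to fresh models, so states that are no
        longer novel to an earlier model yield intrinsic reward again."""
        def __init__(self, obs_dim, state_dim, act_dim, joint_act_dim,
                     num_phases, steps_per_phase, beta=0.5, lr=1e-3):
            self.local = [ForwardModel(obs_dim, act_dim) for _ in range(num_phases)]
            self.glob = [ForwardModel(state_dim, joint_act_dim) for _ in range(num_phases)]
            self.opts = [torch.optim.Adam(list(l.parameters()) + list(g.parameters()), lr=lr)
                         for l, g in zip(self.local, self.glob)]
            self.steps_per_phase = steps_per_phase
            self.beta = beta  # hypothetical local-vs-global mixing weight
            self.step = 0

        def _phase(self):
            return min(self.step // self.steps_per_phase, len(self.local) - 1)

        def intrinsic_reward(self, obs, act, next_obs, state, joint_act, next_state):
            """Returns the mixed intrinsic reward for one transition and
            trains the current phase's forward models on it."""
            p = self._phase()
            err_local = ((self.local[p](obs, act) - next_obs) ** 2).mean()
            err_glob = ((self.glob[p](state, joint_act) - next_state) ** 2).mean()
            self.opts[p].zero_grad()
            (err_local + err_glob).backward()
            self.opts[p].step()
            self.step += 1
            # Balance exploration from the local (per-agent) and global views.
            return self.beta * err_local.item() + (1 - self.beta) * err_glob.item()

    # Usage with hypothetical dimensions, purely for illustration:
    pce = PhasedCuriosity(obs_dim=8, state_dim=16, act_dim=4, joint_act_dim=12,
                          num_phases=3, steps_per_phase=10_000)
    r_int = pce.intrinsic_reward(torch.randn(8), torch.randn(4), torch.randn(8),
                                 torch.randn(16), torch.randn(12), torch.randn(16))

Per-phase models capture the idea that an agent with a new policy can profitably revisit already-seen states; a larger beta emphasizes each agent's local view, while a smaller beta emphasizes the shared global state.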
Original language: English
Title of host publication: 2024 IEEE Conference on Artificial Intelligence (CAI)
Publisher: IEEE
Pages: 1086-1091
Number of pages: 6
ISBN (Electronic): 9798350354096
ISBN (Print): 9798350354102
DOIs
Publication status: Published - 25 Jun 2024
Event: 2024 IEEE Conference on Artificial Intelligence (CAI) - Marina Bay Sands, Singapore, Singapore
Duration: 25 Jun 2024 – 27 Jun 2024

Conference

Conference: 2024 IEEE Conference on Artificial Intelligence (CAI)
Country/Territory: Singapore
City: Singapore
Period: 25/06/24 – 27/06/24

Keywords

  • Multi-agent systems
  • exploration
  • multi-agent reinforcement learning (MARL)
