Abstract
In multi-agent reinforcement learning, effective exploration remains challenging because of the non-stationarity of the environment and discrepancies between local and global information. In this paper, we propose a curiosity-driven phased continuous exploration method, termed PCE. We observe that agents in different learning phases hold distinct knowledge and policies, and can therefore learn diverse knowledge and experience from the same states. Accordingly, we divide the training process into phases and use a curiosity-driven method to explore independently within each phase. In addition, to address the inconsistency between local and global information that is characteristic of multi-agent systems, we balance exploration from the local and global perspectives. Finally, we evaluate the proposed method on the popular multi-agent benchmark StarCraft II. The results show that the method effectively enhances the exploration capabilities of the agents.
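The abstract names three ingredients: phase-partitioned training, a curiosity-driven exploration bonus within each phase, and a balance between local and global exploration. The paper's actual formulations are not given here, so the following is only a generic sketch of how such a scheme is commonly structured; the prediction-error bonus, the fixed phase boundaries, and the mixing weight `alpha` are all illustrative assumptions, not PCE's definitions.

```python
def curiosity_bonus(predicted_next, actual_next):
    # Generic prediction-error curiosity (assumption, not PCE's bonus):
    # the worse a learned model predicts the next state, the larger the
    # intrinsic reward for visiting that transition.
    return sum((p - a) ** 2 for p, a in zip(predicted_next, actual_next))

def phase_of(step, phase_boundaries):
    # Map a training step to its phase index; the boundaries partition
    # training into phases that each explore independently.
    for i, boundary in enumerate(phase_boundaries):
        if step < boundary:
            return i
    return len(phase_boundaries)

def blended_bonus(local_bonus, global_bonus, alpha):
    # Balance curiosity computed from an agent's local observation
    # against curiosity computed from the global (joint) state.
    return alpha * local_bonus + (1 - alpha) * global_bonus

# Example: step 15 of training with phase boundaries at steps 10 and 20
# falls in phase 1, and a 50/50 local/global blend of bonuses 1.0 and 3.0
# yields an intrinsic reward of 2.0.
phase = phase_of(15, [10, 20])
reward = blended_bonus(1.0, 3.0, alpha=0.5)
```

In a full agent, the blended bonus would be added to the environment reward before the policy update, and the curiosity model would typically be reset or re-weighted at each phase boundary so that states revisited in a later phase can yield fresh intrinsic reward.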
Original language | English |
---|---|
Title of host publication | 2024 IEEE Conference on Artificial Intelligence (CAI) |
Publisher | IEEE |
Pages | 1086-1091 |
Number of pages | 6 |
ISBN (Electronic) | 9798350354096 |
ISBN (Print) | 9798350354102 |
DOIs | |
Publication status | Published - 25 Jun 2024 |
Event | 2024 IEEE Conference on Artificial Intelligence (CAI), Marina Bay Sands, Singapore; 25 Jun 2024 → 27 Jun 2024 |
Conference
Conference | 2024 IEEE Conference on Artificial Intelligence (CAI) |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 25/06/24 → 27/06/24 |
Keywords
- Multi-agent systems
- exploration
- multi-agent reinforcement learning (MARL)