A substructure transfer reinforcement learning method based on metric learning

Peihua Chai, Bilian Chen*, Yifeng Zeng, Shenbao Yu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Transfer reinforcement learning has gained significant traction in recent years as a critical research area, focusing on bolstering agents' decision-making by harnessing insights from analogous tasks. The standard transfer approach involves identifying appropriate source domains, extracting shared knowledge structures, and transferring that shared knowledge to novel tasks. However, existing transfer methods depend heavily on high task similarity and an abundance of source data. We therefore formulate a more effective approach that exploits previous learning experience to direct an agent's exploration as it learns new tasks. Specifically, we introduce a novel transfer learning paradigm rooted in a distance measure over Markov chains, termed Distance Measure Substructure Transfer Reinforcement Learning (DMS-TRL). The core idea is to partition the Markov chain into its most basic units, each capturing the agent's transition between two states, and then to employ a new distance measure to find the structure that is most similar, and hence most suitable for transfer. Finally, we propose a policy transfer method that transfers knowledge from the selected Markov unit to the target task through the Q-table. In a series of experiments on discrete Gridworld scenarios, we compare our approach with state-of-the-art learning methods. The results clearly show that DMS-TRL identifies the optimal policy in target tasks and converges faster.
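The pipeline sketched in the abstract (decompose a Markov chain into two-state units, match units by a distance measure, then transfer Q-table entries) can be illustrated roughly as follows. This is a minimal sketch, not the paper's algorithm: the function names (`markov_units`, `unit_distance`, `transfer_q`) are hypothetical, and the Euclidean distance stands in for the paper's actual distance measure, whose definition is not given in the abstract.

```python
import numpy as np

def markov_units(P):
    """Split a transition matrix P into two-state 'Markov units':
    one (s, t) pair per ordered state pair, carrying the transition
    probability in each direction between the two states."""
    n = P.shape[0]
    return {(s, t): np.array([P[s, t], P[t, s]])
            for s in range(n) for t in range(n) if s != t}

def unit_distance(u, v):
    """Distance between two Markov units; Euclidean distance is an
    illustrative stand-in for the paper's measure."""
    return float(np.linalg.norm(u - v))

def most_similar_unit(source_units, target_unit):
    """Select the source unit closest to the target unit, i.e. the
    structure deemed most suitable for transfer."""
    return min(source_units,
               key=lambda k: unit_distance(source_units[k], target_unit))

def transfer_q(Q_source, Q_target, src_pair, tgt_pair):
    """Policy transfer via the Q-table: copy the Q-values of the
    matched source states onto the corresponding target states."""
    Q_new = Q_target.copy()
    for s_src, s_tgt in zip(src_pair, tgt_pair):
        Q_new[s_tgt, :] = Q_source[s_src, :]
    return Q_new

# Toy usage: a 3-state source chain and a target unit to match.
P_src = np.array([[0.1, 0.9, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.2, 0.8]])
units = markov_units(P_src)
best = most_similar_unit(units, np.array([0.9, 0.5]))  # matches (0, 1)
```

The target agent would then initialize (or warm-start) its Q-table with `transfer_q` for the matched state pair before continuing standard tabular learning on the new task.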

Original language: English
Article number: 128071
Pages (from-to): 1-11
Number of pages: 11
Journal: Neurocomputing
Volume: 598
Early online date: 14 Jun 2024
DOIs
Publication status: E-pub ahead of print - 14 Jun 2024
