Abstract
Motion control algorithms in the presence of pedestrians are critical for the development of safe and reliable Autonomous Vehicles (AVs). Traditional motion control algorithms rely on manually designed decision-making policies that neglect the mutual interactions between AVs and pedestrians. Recent advances in Deep Reinforcement Learning (DRL), on the other hand, allow policies to be learned automatically, without manual design. To tackle the problem of decision-making in the presence of pedestrians, we introduce a framework based on Social Value Orientation and DRL that is capable of generating decision-making policies with different driving styles. The policy is trained using state-of-the-art DRL algorithms in a simulated environment. A novel, computationally efficient pedestrian model suitable for DRL training is also introduced. We perform experiments to validate our framework and conduct a comparative analysis of the policies obtained with two different model-free DRL algorithms. Simulation results show how the developed model exhibits natural driving behaviours, such as short-stopping, to facilitate the pedestrian's crossing.
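The abstract does not detail how Social Value Orientation (SVO) shapes the learned driving styles. A common formulation in the SVO literature blends the ego agent's reward with the other agent's reward through an orientation angle; the sketch below is a minimal illustration of that idea only, assuming such a weighted reward. The angle `phi`, the helper `svo_reward`, and the reward terms are hypothetical placeholders, not the paper's actual reward design.

```python
import math

# Hypothetical SVO-weighted reward (illustrative, not the paper's API):
# phi = 0      -> purely egoistic behaviour,
# phi = pi/4   -> prosocial (equal weighting of AV and pedestrian),
# phi = pi/2   -> purely altruistic behaviour.
def svo_reward(phi: float, r_ego: float, r_pedestrian: float) -> float:
    """Blend the AV's own reward with the pedestrian's according to phi."""
    return math.cos(phi) * r_ego + math.sin(phi) * r_pedestrian

# Example: a prosocial policy trades off driving progress against the
# discomfort it causes a crossing pedestrian.
r = svo_reward(math.pi / 4, r_ego=1.0, r_pedestrian=-0.3)
```

Under this assumed formulation, sweeping `phi` during training would yield the spectrum of driving styles, from aggressive to courteous, that the abstract refers to.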
Original language | English |
---|---|
Pages (from-to) | 1339-1349 |
Number of pages | 11 |
Journal | IEEE Transactions on Intelligent Vehicles |
Volume | 8 |
Issue number | 2 |
Early online date | 11 Jul 2022 |
DOIs | |
Publication status | Published - Feb 2023 |
Keywords
- Autonomous driving
- Autonomous vehicles
- Decision making
- Deep Reinforcement Learning
- Force
- Navigation
- Pedestrian Modelling
- Reinforcement learning
- Roads
- Situational Awareness
- Social Value Orientation
- Trajectory