%0 Generic %A Yuan, Henan %A Dong, Yongqi %A Li, Penghui %A Kang, Liujiang %A Farah, Haneen %A van Arem, Bart %D 2025 %T Code underlying the publication: Safe, Efficient, Comfort, and Energy-Saving Automated Driving Through Roundabout Based on Deep Reinforcement Learning %U %R 10.4121/c1020a3f-0053-491f-8ead-35d18819d37e.v1 %K DRL %K ITS %K Road transportation %K Deep reinforcement learning %K Merging %K Energy consumption %K Roundabout %K Safety %K Testing %X

This is the code related to the publication:

H. Yuan, P. Li, B. Van Arem, L. Kang, H. Farah and Y. Dong, "Safe, Efficient, Comfort, and Energy-Saving Automated Driving Through Roundabout Based on Deep Reinforcement Learning," 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 2023, pp. 6074-6079, doi: 10.1109/ITSC57777.2023.10422488. 


The implementation is based on Python, Stable-Baselines3 (https://stable-baselines3.readthedocs.io/en/master/), and the highway-env simulation environment (https://github.com/Farama-Foundation/HighwayEnv).
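
As a quick orientation, the sketch below shows how such a training run can be set up with Stable-Baselines3 and highway-env. It is a minimal illustration under stated assumptions, not the exact training script in this repository: "roundabout-v0" is the default roundabout environment id shipped with highway-env, and the choice of PPO, the policy, and the timestep budget are placeholders.

# Minimal training sketch (assumptions noted above), Python.
import gymnasium as gym
import highway_env  # registers the highway-env environments on import
from stable_baselines3 import PPO

env = gym.make("roundabout-v0")

# PPO with a simple MLP policy; TRPO (from sb3-contrib) can be swapped in
# analogously, while DDPG additionally requires a continuous action space.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_roundabout")

# Roll out the trained policy for one episode.
obs, info = env.reset()
done = truncated = False
while not (done or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, truncated, info = env.step(action)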


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Traffic scenarios in roundabouts pose substantial complexity for automated driving. Manually mapping all possible scenarios into a state space is labor-intensive and challenging. Deep reinforcement learning (DRL), with its ability to learn from interacting with the environment, emerges as a promising solution for training such automated driving models. This study explores, employs, and implements various DRL algorithms, namely Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and Trust Region Policy Optimization (TRPO), to instruct automated vehicles' driving through roundabouts. The driving state space, action space, and reward function are designed. The reward function considers safety, efficiency, comfort, and energy consumption to align with real-world requirements. All three tested DRL algorithms succeed in enabling automated vehicles to drive through the roundabout. To holistically evaluate the performance of these algorithms, this study establishes an evaluation methodology considering multiple indicators, i.e., safety, efficiency, comfort, and energy consumption level. A method employing the Analytic Hierarchy Process is also developed to weight these evaluation indicators. Experimental results in various testing scenarios reveal that the TRPO algorithm outperforms DDPG and PPO in terms of safety and efficiency, while PPO performs best in terms of comfort level and energy consumption. Lastly, to verify the model's adaptability and robustness in other driving scenarios, this study also deploys the TRPO-trained model to a range of different testing scenarios, e.g., highway driving and merging. Experimental results demonstrate that the TRPO model trained only on roundabout driving scenarios exhibits a certain degree of proficiency in highway driving and merging scenarios. This study provides a foundation for applying DRL to automated driving.
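
To make the Analytic-Hierarchy-Process weighting concrete, the sketch below derives indicator weights from a pairwise comparison matrix via its principal eigenvector and combines normalized per-indicator scores into a composite measure. The comparison matrix, the indicator scores, and the consistency-check values are illustrative assumptions, not the values used in the paper.

# Illustrative AHP weighting for the four evaluation indicators
# (safety, efficiency, comfort, energy); all numbers are made up.
import numpy as np

indicators = ["safety", "efficiency", "comfort", "energy"]

# A[i, j] = relative importance of indicator i over indicator j (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0, 5.0],
    [1/3, 1.0, 3.0, 3.0],
    [1/5, 1/3, 1.0, 1.0],
    [1/5, 1/3, 1.0, 1.0],
])

# AHP weights: normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency ratio (random index RI = 0.90 for a 4x4 matrix).
lambda_max = eigvals.real.max()
cr = ((lambda_max - len(A)) / (len(A) - 1)) / 0.90
print(dict(zip(indicators, weights.round(3))), f"CR = {cr:.3f}")

# Composite evaluation: weighted sum of normalized scores in [0, 1].
scores = {"safety": 0.90, "efficiency": 0.80, "comfort": 0.70, "energy": 0.75}
composite = sum(w * scores[name] for name, w in zip(indicators, weights))
print(f"composite score = {composite:.3f}")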


%I 4TU.ResearchData