Abstract
Deep Reinforcement Learning (DRL) has made remarkable progress in autonomous vehicle decision-making and execution control, improving traffic performance. This paper introduces a DRL-based mechanism for cooperative lane changing in mixed traffic (CLCMT) for connected and automated vehicles (CAVs). The uncertainty of human-driven vehicles (HVs) and the microscopic interactions between HVs and CAVs are explicitly modelled, and different leader-follower compositions are considered in CLCMT, providing a high-fidelity DRL learning environment. A feedback module is established to enable interactions between the decision-making layer and the manoeuvre control layer. Simulation results show that increasing CAV penetration leads to safer, more comfortable, and more eco-friendly lane-changing behaviour. A CAV-CAV lane-changing scenario can enhance safety by 24.5%–35.8%, improve comfort by 8%–9%, and reduce fuel consumption and emissions by 5.2%–12.9%. The proposed CLCMT promises advantages in the lateral decision-making and motion control of CAVs.
Acknowledgements
The authors confirm their contribution to the paper as follows: study conception and design: Xue Yao, and Zhanbo Sun; data collection: Xue Yao, Zhao Chengdu; analysis and interpretation of results: Xue Yao, Zhao Chengdu; draft manuscript preparation: Xue Yao, Zhanbo Sun, Simeon C. Calvert, Zhao Chengdu, and Ang Ji. All authors reviewed the results and approved the final version of the manuscript.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 An example can be found at https://www.youtube.com/watch?v=gZIwcZZR1P0