Posted on 2023-05-20, 14:22. Authored by Ke, Z., Li, Z., Cao, Z., Liu, P.
This study evaluates a transfer learning algorithm for improving the transferability of deep reinforcement learning-based variable speed limit (VSL) control. A Double Deep Q Network (DDQN)-based VSL control strategy is proposed to reduce total time spent (TTS) on freeways. A real merging bottleneck is reproduced in simulation and serves as the source scenario for VSL control. Three types of target scenarios are considered: overspeed scenarios, adverse weather scenarios, and scenarios with varying degrees of capacity drop. Both a stable testing demand and a fluctuating testing demand are used to evaluate the control effects. The results show that, by updating the neural networks, transfer learning in the DDQN-based VSL control agent successfully carries knowledge learned in the source scenario over to the target scenarios. With transfer learning, the overall training process is shortened by 32.3% to 69.8% while reaching a similar maximum reward level, compared with VSL control trained fully from scratch. With the transferred DDQN-based VSL strategy, TTS across the various scenarios is reduced by 26.02% to 67.37% under the stable testing demand and by 21.31% to 69.98% under the fluctuating testing demand. The results also show that when the task similarity between the source and target scenarios is relatively low, transfer learning may converge to a local optimum and fail to achieve the globally optimal control effects.
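The transfer mechanism described above (initializing a target-scenario agent from the source agent's learned weights, then fine-tuning rather than training from scratch) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear Q-network, the `DDQNAgent` class, and all hyperparameter values are assumptions made for brevity; a real VSL controller would use a deep network and a full traffic-simulation loop.

```python
import numpy as np

rng = np.random.default_rng(0)

class DDQNAgent:
    """Toy DDQN-style agent: a linear Q-function over traffic states
    (e.g. occupancy, speed, flow) and discrete speed-limit actions.
    Shapes and update rule are illustrative assumptions."""

    def __init__(self, state_dim, n_actions):
        self.w = rng.normal(scale=0.1, size=(state_dim, n_actions))  # online net
        self.w_target = self.w.copy()                                # target net

    def act(self, state, eps=0.1):
        # Epsilon-greedy choice of a speed-limit action.
        if rng.random() < eps:
            return int(rng.integers(self.w.shape[1]))
        return int(np.argmax(state @ self.w))

    def update(self, s, a, r, s_next, gamma=0.99, lr=0.01):
        # Double DQN target: the online net selects the next action,
        # the target net evaluates it (decoupled selection/evaluation).
        a_next = int(np.argmax(s_next @ self.w))
        target = r + gamma * (s_next @ self.w_target)[a_next]
        td_err = target - (s @ self.w)[a]
        self.w[:, a] += lr * td_err * s          # TD step for the linear Q
        self.w_target = 0.99 * self.w_target + 0.01 * self.w  # soft sync

def transfer(source: DDQNAgent, target: DDQNAgent):
    """Copy the source-scenario agent's learned weights into the
    target-scenario agent, which is then fine-tuned on the new
    scenario instead of learning from scratch."""
    target.w = source.w.copy()
    target.w_target = source.w_target.copy()
```

In this sketch, fine-tuning after `transfer()` simply continues calling `update()` with experience from the target scenario (e.g. adverse weather), which is what shortens training relative to a randomly initialized agent.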
History
Publication title
IEEE Transactions on Intelligent Transportation Systems
Pagination
1-12
ISSN
1524-9050
Department/School
School of Information and Communication Technology
Publisher
IEEE (Institute of Electrical and Electronics Engineers, Inc.)
Place of publication
445 Hoes Lane, Piscataway, NJ 08855, USA
Rights statement
Copyright 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.