CA-DTS: A Distributed and Collaborative Task Scheduling Algorithm for Edge Computing Enabled Intelligent Road Network

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

An Erratum to this article was published on 30 November 2023.

Abstract

Edge computing enabled Intelligent Road Network (EC-IRN) provides powerful and convenient computing services for vehicles and roadside sensing devices. The continuous emergence of transportation applications places a heavy burden on roadside units (RSUs) equipped with edge servers in the Intelligent Road Network (IRN). Collaborative task scheduling among RSUs is an effective way to relieve this burden. However, achieving collaborative scheduling among different RSUs in a completely decentralized environment is challenging. In this paper, we first model the interactions involved in task scheduling among distributed RSUs as a Markov game. Since multi-agent deep reinforcement learning (MADRL) is a promising approach to decision optimization in Markov games, we propose a collaborative task scheduling algorithm based on MADRL for EC-IRN, named CA-DTS, which aims to minimize the long-term average delay of tasks. To reduce the training costs caused by trial-and-error, CA-DTS employs a specially designed reward function together with the distributed-deployment, collective-training architecture of counterfactual multi-agent policy gradients (COMA). To stabilize performance in large-scale environments, CA-DTS leverages the action semantics network (ASN) to facilitate cooperation among multiple RSUs. Evaluation results from both a testbed and simulations demonstrate the effectiveness of the proposed algorithm. Compared with the baselines, CA-DTS converges about 35% faster and reduces average task delay by approximately 9.4%, 9.8%, and 6.7% in scenarios with varying numbers of RSUs, service types, and task arrival rates, respectively.
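The COMA architecture the abstract refers to credits each agent via a counterfactual baseline: a centralized critic scores the joint action, and each agent's advantage marginalizes out only its own action while the others stay fixed. The sketch below illustrates that computation on a toy problem; the agent count, action space, random policies, and tabular critic are all hypothetical stand-ins for illustration, not the paper's actual CA-DTS model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 3      # RSUs acting as agents (hypothetical)
n_actions = 4     # e.g., process locally or forward to one of 3 neighbors

# Hypothetical per-agent stochastic policies pi[a] over actions (rows sum to 1).
pi = rng.dirichlet(np.ones(n_actions), size=n_agents)

# Stand-in centralized critic: Q(s, joint action) as a random table
# indexed by one action per agent, for illustration only.
q_table = rng.normal(size=(n_actions,) * n_agents)

def coma_advantage(agent, joint_action):
    """Counterfactual advantage (Foerster et al., AAAI 2018):
    A^a = Q(s, u) - sum_{u'} pi^a(u') * Q(s, (u^{-a}, u'))."""
    q_taken = q_table[tuple(joint_action)]
    # Marginalize the agent's own action out while the others stay fixed.
    baseline = 0.0
    for alt in range(n_actions):
        counterfactual = list(joint_action)
        counterfactual[agent] = alt
        baseline += pi[agent][alt] * q_table[tuple(counterfactual)]
    return q_taken - baseline

u = (1, 0, 2)  # a sampled joint action
adv = [coma_advantage(a, u) for a in range(n_agents)]
print(adv)
```

A useful sanity check on the baseline: for fixed actions of the other agents, the advantage weighted by the agent's own policy averages to zero, which is what makes the baseline variance-reducing without biasing the policy gradient.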




Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Guang-Hui Li.

Supplementary Information

ESM 1

(PDF 1255 kb)

About this article


Cite this article

Hu, SH., Luo, QY., Li, GH. et al. CA-DTS: A Distributed and Collaborative Task Scheduling Algorithm for Edge Computing Enabled Intelligent Road Network. J. Comput. Sci. Technol. 38, 1113–1131 (2023). https://doi.org/10.1007/s11390-023-2839-0


