
Radio Resource Scheduling in 5G Networks Based on Adaptive Golden Eagle Optimization Enabled Deep Q-Net

  • Original Research
  • Published in: SN Computer Science

Abstract

The growth of fifth-generation (5G) broadband wireless systems presents several challenges in network resource allocation. In a collaborative network of mobile devices, users and devices compete for scarce resources, which underscores the importance of fair and effective resource allocation for optimal network operation. Hence, this research presents the Adaptive Golden Eagle Optimization based Deep Q-Net (Adaptive GEO_DQN) for radio resource scheduling in 5G networks. The 5G network consists of base stations (BSs) and user equipment (UEs). The radio resource scheduler at the BS is active in every slot, where the BS collects data from the UEs, such as channel feedback, buffer status, hybrid automatic repeat request (HARQ) feedback, and the allocation log. The scheduler then assigns the resource blocks (RBs) of the current resource block group (RBG) to UEs in the current slot. The DQN is used for UE scheduling, and the Adaptive GEO is utilized for training the DQN. The efficacy of the system is validated with respect to throughput and fairness, attaining 0.921 Mbps and 0.902, respectively.
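The scheduling loop described above (per-slot UE selection from channel feedback, buffer state, and HARQ status, with a Q-network whose weights are tuned by a golden-eagle-style optimizer) can be illustrated with a small toy example. The Python sketch below is only an assumption-laden illustration, not the authors' Adaptive GEO_DQN: the state features, network sizes, reward mix (throughput plus Jain's fairness), and the simplified attack/cruise weight search are all hypothetical choices made for brevity.

```python
# Illustrative sketch only (not the paper's implementation): a tiny Q-network
# scheduler whose weights are tuned by a much-simplified golden-eagle-style search.
# All sizes, state features, and reward weights below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_UE, N_FEAT, HIDDEN = 4, 3, 16            # UEs; per-UE features (CQI, buffer, HARQ); hidden units
STATE_DIM, N_ACTION = N_UE * N_FEAT, N_UE  # action = pick the UE that gets the current RBG

def unpack(theta):
    """Split a flat parameter vector into the two weight matrices of a small Q-net."""
    w1 = theta[:STATE_DIM * HIDDEN].reshape(STATE_DIM, HIDDEN)
    w2 = theta[STATE_DIM * HIDDEN:].reshape(HIDDEN, N_ACTION)
    return w1, w2

def q_values(theta, state):
    """Score each candidate UE for the current RBG."""
    w1, w2 = unpack(theta)
    return np.tanh(state @ w1) @ w2

def episode_return(theta, n_slots=50):
    """Run one scheduling episode; reward mixes served rate with Jain's fairness."""
    buffers = rng.uniform(0.5, 1.0, N_UE)
    served = np.zeros(N_UE)
    total = 0.0
    for _ in range(n_slots):
        cqi = rng.uniform(0.0, 1.0, N_UE)              # channel feedback per UE
        harq = rng.integers(0, 2, N_UE).astype(float)  # pending retransmission flags
        state = np.concatenate([cqi, buffers, harq])
        ue = int(np.argmax(q_values(theta, state)))    # greedy UE selection for this RBG
        rate = cqi[ue] * min(buffers[ue], 1.0)
        served[ue] += rate
        buffers[ue] = max(buffers[ue] - rate, 0.0)
        buffers += rng.uniform(0.0, 0.2, N_UE)         # new traffic arrivals
        fairness = served.sum() ** 2 / (N_UE * (served ** 2).sum() + 1e-9)  # Jain's index
        total += rate + fairness                       # assumed reward mix
    return total

# Simplified golden-eagle-style search over the Q-net weights: each "eagle" takes
# an attack step toward the flock's best position so far ("prey") plus a random
# cruise step; a candidate is kept only if it improves that eagle's fitness.
DIM = STATE_DIM * HIDDEN + HIDDEN * N_ACTION
POP, ITERS = 20, 30
flock = rng.normal(0.0, 0.5, (POP, DIM))
fitness = np.array([episode_return(e) for e in flock])

for t in range(ITERS):
    attack = 0.5 + 1.5 * t / ITERS   # attack weight grows over iterations
    cruise = 1.5 - 1.0 * t / ITERS   # cruise (exploration) weight shrinks
    best = flock[np.argmax(fitness)]
    for i in range(POP):
        step = attack * (best - flock[i]) + cruise * rng.normal(0.0, 0.1, DIM)
        cand = flock[i] + step
        f = episode_return(cand)
        if f > fitness[i]:
            flock[i], fitness[i] = cand, f

print("best episode return:", fitness.max())
```

In this toy version the metaheuristic search replaces gradient-based Q-learning updates entirely; in the paper's setting, an adaptive GEO could instead be coupled with conventional DQN training (for instance, to tune weights or hyperparameters), which the abstract alone does not specify.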


Data Availability

The dataset utilized and examined in this study can be obtained from the corresponding author upon reasonable request.


Acknowledgements

The authors acknowledge REVA University, Bangalore, Karnataka, India, for supporting this research work by providing the facilities.

Funding

No funding was received for this research.

Author information


Contributions

This research endeavor owes its success to the collaborative efforts and valuable contributions of all authors involved.

Corresponding author

Correspondence to V. Shilpa.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Advances in Computational Approaches for Image Processing, Wireless Networks, Cloud Applications and Network Security” guest edited by P. Raviraj, Maode Ma and Roopashree H R.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shilpa, V., Ranjan, R. Radio Resource Scheduling in 5G Networks Based on Adaptive Golden Eagle Optimization Enabled Deep Q-Net. SN COMPUT. SCI. 5, 517 (2024). https://doi.org/10.1007/s42979-024-02856-8
