Parvaresh et al., 2023 - Google Patents
A continuous actor–critic deep Q-learning-enabled deployment of UAV base stations: Toward 6G small cells in the skies of smart cities
- Document ID
- 17711875946827198552
- Author
- Parvaresh N
- Kantarci B
- Publication year
- 2023
- Publication venue
- IEEE Open Journal of the Communications Society
Snippet
Uncrewed aerial vehicle-mounted base stations (UAV-BSs), also known as drone base stations, are considered to have promising potential to tackle the limitations of ground base stations. They can provide cost-effective Internet connection to users that are out of …
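The title points to a continuous actor–critic deep Q-learning approach for deploying UAV base stations. As a rough illustration of that family of methods only (not the paper's algorithm), the sketch below shows a minimal DDPG-style loop in which an actor proposes continuous UAV-BS moves and a critic scores them against a toy coverage reward. The environment, state and action definitions, reward, network sizes, and hyperparameters are all assumptions made for demonstration; replay buffers and target networks, which a real implementation would add, are omitted for brevity.

```python
# Minimal DDPG-style continuous actor-critic sketch for positioning one UAV-BS.
# NOT the paper's algorithm: the toy environment, reward (fraction of ground
# users within a fixed coverage radius), and all hyperparameters are assumed.
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 2      # state: [uav_x, uav_y, user_centroid_x, user_centroid_y]
AREA, COVER_RADIUS = 10.0, 3.0    # square service area and toy coverage radius


class Actor(nn.Module):
    """Deterministic policy: maps the state to a continuous 2-D move in [-1, 1]^2."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Tanh())

    def forward(self, s):
        return self.net(s)


class Critic(nn.Module):
    """Q(s, a): scores a state-action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))


def step(uav, users, action):
    """Move the UAV-BS and return (new position, next state, coverage reward)."""
    uav = np.clip(uav + action, 0.0, AREA)
    reward = float(np.mean(np.linalg.norm(users - uav, axis=1) < COVER_RADIUS))
    return uav, np.concatenate([uav, users.mean(axis=0)]), reward


actor, critic = Actor(), Critic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

rng = np.random.default_rng(0)
users = rng.uniform(0.0, AREA, size=(20, 2))    # static ground users (toy setup)
uav = rng.uniform(0.0, AREA, size=2)
state = np.concatenate([uav, users.mean(axis=0)])

for t in range(200):
    s = torch.tensor(state, dtype=torch.float32)
    with torch.no_grad():
        a = actor(s) + 0.1 * torch.randn(ACTION_DIM)   # exploration noise
    uav, next_state, reward = step(uav, users, a.numpy())
    s2 = torch.tensor(next_state, dtype=torch.float32)

    # Critic update: one-step TD target built from the actor's action at s'.
    with torch.no_grad():
        target = reward + gamma * critic(s2, actor(s2))
    critic_loss = (critic(s, a) - target).pow(2).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor update: deterministic policy gradient, i.e. ascend Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    state = next_state
```

Under these assumptions the actor converges toward placements that keep more users inside the coverage radius; the continuous action space is what distinguishes this setup from tabular or discrete deep Q-learning.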
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04W—WIRELESS COMMUNICATIONS NETWORKS
      - H04W4/00—Mobile application services or facilities specially adapted for wireless communication networks
        - H04W4/02—Mobile application services making use of the location of users or terminals, e.g. OMA SUPL, OMA MLP or 3GPP LCS
          - H04W4/023—Mobile application services making use of the location of users or terminals, e.g. OMA SUPL, OMA MLP or 3GPP LCS using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
          - H04W4/025—Mobile application services making use of the location of users or terminals, e.g. OMA SUPL, OMA MLP or 3GPP LCS using location based information parameters
      - H04W16/00—Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
        - H04W16/22—Traffic simulation tools or models
      - H04W84/00—Network topologies
        - H04W84/18—Self-organizing networks, e.g. ad-hoc networks or sensor networks
          - H04W84/20—Master-slave selection or change arrangements
    - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
      - H04L67/00—Network-specific arrangements or communication protocols supporting networked applications
        - H04L67/10—Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
    - H04B—TRANSMISSION
      - H04B7/00—Radio transmission systems, i.e. using radiation field
Similar Documents
Publication | Title |
---|---|
Liu et al. | Trajectory design and power control for multi-UAV assisted wireless networks: A machine learning approach |
Bayerlein et al. | Trajectory optimization for autonomous flying base station via reinforcement learning |
Shamsoshoara et al. | An autonomous spectrum management scheme for unmanned aerial vehicle networks in disaster relief operations |
Parvaresh et al. | A continuous actor–critic deep Q-learning-enabled deployment of UAV base stations: Toward 6G small cells in the skies of smart cities |
Miao et al. | Drone swarm path planning for mobile edge computing in industrial internet of things |
Hashesh et al. | AI-enabled UAV communications: Challenges and future directions |
Rahimi et al. | An efficient 3-D positioning approach to minimize required UAVs for IoT network coverage |
Ben Aissa et al. | UAV communications with machine learning: challenges, applications and open issues |
Ding et al. | Distributed machine learning for UAV swarms: Computing, sensing, and semantics |
Hajiakhondi-Meybodi et al. | Deep reinforcement learning for trustworthy and time-varying connection scheduling in a coupled UAV-based femtocaching architecture |
Liu et al. | Efficient deployment of UAVs for maximum wireless coverage using genetic algorithm |
Luo et al. | A two-step environment-learning-based method for optimal UAV deployment |
Nasr-Azadani et al. | Single- and multiagent actor–critic for initial UAV's deployment and 3-D trajectory design |
Sharif et al. | Space-aerial-ground-sea integrated networks: Resource optimization and challenges in 6G |
Nasr-Azadani et al. | Distillation and ordinary federated learning actor-critic algorithms in heterogeneous UAV-aided networks |
Liu et al. | Multi-agent federated reinforcement learning strategy for mobile virtual reality delivery networks |
Parvaresh et al. | Deep Q-learning-enabled deployment of aerial base stations in the presence of mobile users |
CN115314904B (en) | Communication coverage method based on multi-agent maximum entropy reinforcement learning and related equipment |
Abdalla et al. | Multi-Agent Learning for Secure Wireless Access from UAVs with Limited Energy Resources |
Rezwan et al. | Federated Deep Reinforcement Learning-Based Multi-UAV Navigation for Heterogeneous NOMA Systems |
Akin et al. | Multiagent Q-learning based UAV trajectory planning for effective situational awareness |
Wu et al. | Mobility-aware deep reinforcement learning with seq2seq mobility prediction for offloading and allocation in edge computing |
CN113727278A (en) | Path planning method, access network equipment and flight control equipment |
Bousbaa et al. | GTSS-UC: A game theoretic approach for services' selection in UAV clouds |
Yang et al. | Deep reinforcement learning in NOMA-assisted UAV networks for path selection and resource offloading |