
Search Results (219)

Search Parameters:
Keywords = offloading decision

20 pages, 468 KiB  
Article
Toward 6G: Latency-Optimized MEC Systems with UAV and RIS Integration
by Abdullah Alshahrani
Mathematics 2025, 13(5), 871; https://doi.org/10.3390/math13050871 - 5 Mar 2025
Viewed by 156
Abstract
Multi-access edge computing (MEC) has emerged as a cornerstone technology for deploying 6G network services, offering efficient computation and ultra-low-latency communication. The integration of unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) further enhances wireless propagation, capacity, and coverage, presenting a transformative paradigm for next-generation networks. This paper addresses the critical challenge of task offloading and resource allocation in an MEC-based system, where a massive MIMO base station, serving multiple macro-cells, hosts the MEC server with support from a UAV-equipped RIS. We propose an optimization framework to minimize task execution latency for user equipment (UE) by jointly optimizing task offloading and communication resource allocation within this UAV-assisted, RIS-aided network. By modeling this problem as a Markov decision process (MDP) with a discrete-continuous hybrid action space, we develop a deep reinforcement learning (DRL) algorithm leveraging a hybrid space representation to solve it effectively. Extensive simulations validate the superiority of the proposed method, demonstrating significant latency reductions compared to state-of-the-art approaches, thereby advancing the feasibility of MEC in 6G networks.
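The discrete-continuous hybrid action described above can be pictured concretely. The sketch below is an illustrative toy, not the paper's model: it pairs a discrete offload target per UE (local, MEC over the direct link, or MEC via the UAV-mounted RIS) with a continuous bandwidth share, and scores the joint action by the worst-case latency such a DRL agent would learn to minimize. All constants are assumptions.

```python
# A minimal sketch of a hybrid discrete-continuous offloading action.
import numpy as np

rng = np.random.default_rng(0)
NUM_UES, NUM_TARGETS = 4, 3   # targets: 0 = local, 1 = MEC direct, 2 = MEC via RIS

def sample_hybrid_action():
    """One hybrid action: a discrete target per UE plus continuous shares."""
    targets = rng.integers(0, NUM_TARGETS, size=NUM_UES)   # discrete part
    raw = rng.random(NUM_UES)
    shares = raw / raw.sum()                               # continuous part, sums to 1
    return targets, shares

def worst_case_latency(targets, shares, bits=1e6, base_rate=1e7,
                       f_local=1e8, f_mec=1e9, cycles_per_bit=500):
    """Toy latency model: local compute vs. transmit-then-compute at the MEC."""
    t = np.empty(NUM_UES)
    for k in range(NUM_UES):
        if targets[k] == 0:                                # execute locally
            t[k] = bits * cycles_per_bit / f_local
        else:                                              # offload; RIS path assumed 20% faster
            rate = base_rate * shares[k] * (1.2 if targets[k] == 2 else 1.0)
            t[k] = bits / rate + bits * cycles_per_bit / f_mec
    return t.max()                                         # objective a DRL agent would minimize

targets, shares = sample_hybrid_action()
print("targets:", targets, "shares:", np.round(shares, 2),
      "worst-case latency:", round(worst_case_latency(targets, shares), 3), "s")
```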
Figures:
Figure 1. Framework of the proposed algorithm.
Figure 2. Average rewards vs. no. of episodes.
Figure 3. The total time delay according to different schemes vs. F_{m,k}, with K = 100 and ζ_max = 30 Giga cycles/s.
Figure 4. The total time delay according to different schemes vs. ζ_max, with K = 100 and F_{m,k} = 600 cycles/bit.
Figure 5. The total time delay vs. no. of UEs.
Figure 6. Task completion ratio vs. no. of UEs.
21 pages, 1405 KiB  
Review
Variations in Multi-Agent Actor–Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions
by Muhammad Morshed Alam, Sayma Akter Trina, Tamim Hossain, Shafin Mahmood, Md. Sanim Ahmed and Muhammad Yeasir Arafat
Drones 2025, 9(2), 153; https://doi.org/10.3390/drones9020153 - 19 Feb 2025
Viewed by 544
Abstract
Autonomous unmanned aerial vehicle (UAV) swarm networks (UAVSNs) can efficiently perform surveillance, connectivity, computing, and energy transfer services for ground users (GUs). These missions require trajectory planning, UAV-GUs association, task offloading, next-hop selection, and resource allocation, including transmit power, bandwidth, timeslots, caching, and computing resources, to enhance network performance. Owing to the highly dynamic topology, limited resources, stringent quality of service requirements, and lack of global knowledge, optimizing network performance in UAVSNs is very intricate. To address this, an adaptive joint optimization framework is required to handle both discrete and continuous decision variables, ensuring optimal performance under various dynamic constraints. A multi-agent deep reinforcement learning-based adaptive actor–critic framework offers an effective solution by leveraging its ability to extract hidden features through agent interactions, generate hybrid actions under uncertainty, and adaptively learn with scalable generalization in dynamic conditions. This paper explores the recent evolutions of actor–critic frameworks to deal with joint optimization problems in UAVSNs by proposing a novel taxonomy based on the modifications in the internal actor–critic neural network structure. Additionally, key open research challenges are identified, and potential solutions are suggested as directions for future research in UAVSNs.
(This article belongs to the Special Issue Wireless Networks and UAV: 2nd Edition)
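All of the surveyed variants share one primitive: a critic that estimates value and an actor that follows the advantage-weighted policy gradient. The minimal, framework-agnostic sketch below (linear function approximation on a toy chain MDP, nothing UAV-specific) shows that shared update rule.

```python
# A framework-agnostic actor-critic update: the critic learns V(s) by TD(0),
# the actor ascends the advantage-weighted policy gradient. Toy chain MDP.
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 5, 3
theta = np.zeros((N_STATES, N_ACTIONS))   # actor parameters (softmax policy)
w = np.zeros(N_STATES)                    # critic parameters (state values)
alpha_a, alpha_c, gamma = 0.1, 0.2, 0.95

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

s = 0
for step in range(2000):
    probs = policy(s)
    a = rng.choice(N_ACTIONS, p=probs)
    # Toy environment: action 0 moves forward; reaching state 4 pays reward 1.
    s_next = min(s + 1, N_STATES - 1) if a == 0 else 0
    r = 1.0 if s_next == N_STATES - 1 else 0.0
    td_error = r + gamma * w[s_next] - w[s]    # advantage estimate
    w[s] += alpha_c * td_error                 # critic: TD(0) update
    grad_log = -probs; grad_log[a] += 1.0      # d log pi(a|s) / d theta[s]
    theta[s] += alpha_a * td_error * grad_log  # actor: policy-gradient step
    s = 0 if s_next == N_STATES - 1 else s_next

print("learned policy at state 0:", np.round(policy(0), 2))
```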
Figures:
Figure 1. An illustration of a UAVSN-assisted mission.
Figure 2. Taxonomy of actor–critic frameworks in UAVSNs.
17 pages, 2073 KiB  
Article
Few-Shot Learning with Multimodal Fusion for Efficient Cloud–Edge Collaborative Communication
by Bo Gao, Xing Liu and Quan Zhou
Electronics 2025, 14(4), 804; https://doi.org/10.3390/electronics14040804 - 19 Feb 2025
Viewed by 271
Abstract
As demand for high-capacity, low-latency communication rises, mmWave systems are essential for enabling ultra-high-speed transmission in fifth-generation mobile communication technology (5G) and upcoming 6G networks, especially in dynamic, data-scarce environments. However, deploying mmWave systems in dynamic environments presents significant challenges, especially in beam selection, where limited training data and environmental variability hinder optimal performance. In such scenarios, computation offloading has emerged as a key enabler, allowing computationally intensive tasks to be shifted from resource-constrained edge devices to powerful cloud servers, thereby reducing latency and optimizing resource utilization. This paper introduces a novel cloud–edge collaborative approach integrating few-shot learning (FSL) with multimodal fusion to address these challenges. By leveraging data from diverse modalities—such as red-green-blue (RGB) images, radar signals, and light detection and ranging (LiDAR)—within a cloud–edge architecture, the proposed framework effectively captures spatiotemporal features, enabling efficient and accurate beam selection with minimal data requirements. The cloud server is tasked with computationally intensive training, while the edge node focuses on real-time inference, ensuring low-latency decision making. Experimental evaluations confirm the model’s robustness, achieving high beam selection accuracy under one-shot and five-shot conditions while reducing computational overhead. This study highlights the potential of combining cloud–edge collaboration with FSL and multimodal fusion for next-generation wireless networks, paving the way for scalable, intelligent, and adaptive mmWave communication systems.
(This article belongs to the Special Issue Computation Offloading for Mobile-Edge/Fog Computing)
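The few-shot element can be made concrete with a prototype-style classifier. The sketch below is an assumed stand-in, not the paper's network: synthetic Gaussian "features" replace learned RGB/LiDAR encoders; modality features are fused by concatenation, the K support embeddings per beam are averaged into a prototype, and a query picks the nearest prototype.

```python
# A prototype-based few-shot beam classifier with late multimodal fusion.
import numpy as np

rng = np.random.default_rng(2)
N_BEAMS, K_SHOT, D_RGB, D_LIDAR = 4, 5, 8, 6

def fuse(rgb, lidar):
    """Late fusion by L2-normalised concatenation of modality features."""
    v = np.concatenate([rgb, lidar])
    return v / (np.linalg.norm(v) + 1e-9)

# Synthetic class centres stand in for learned modality encoders.
centres = [(rng.normal(2 * b, 0.5, D_RGB), rng.normal(2 * b, 0.5, D_LIDAR))
           for b in range(N_BEAMS)]

def noisy(c, scale=0.3):
    return c[0] + rng.normal(0, scale, D_RGB), c[1] + rng.normal(0, scale, D_LIDAR)

prototypes = np.array([
    np.mean([fuse(*noisy(centres[b])) for _ in range(K_SHOT)], axis=0)
    for b in range(N_BEAMS)])                  # one prototype per beam class

def predict(rgb, lidar):
    q = fuse(rgb, lidar)
    return int(np.argmin(np.linalg.norm(prototypes - q, axis=1)))

true_beam = 2
print("predicted:", predict(*noisy(centres[true_beam])), "true:", true_beam)
```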
Figures:
Figure 1. The proposed beam selection for cloud–edge collaboration in MIMO communication systems.
Figure 2. Illustration of the proposed few-shot beam prediction model and its components. (a) The feature extraction module. (b) The proposed beam prediction model.
Figure 3. The accuracy of the proposed model under 1-shot and 5-shot conditions fluctuates with a training-to-test-set ratio of 1:1.
Figure 4. Accuracies of different Transformer modules (multi-attention layers) under 1-shot conditions.
Figure 5. The impact of the number of CNN layers on the results.
Figure 6. Accuracy of different models and multimodal fusion under 1-shot conditions.
Figure 7. Accuracies of the proposed approach and various baseline methods.
Figure 8. The proposed algorithm was evaluated against the baseline algorithms in terms of inference time for a single test data point following the training phase.
31 pages, 1787 KiB  
Article
Distributed Gradient Descent Framework for Real-Time Task Offloading in Heterogeneous Satellite Networks
by Yanbing Li, Yuchen Wu and Shangpeng Wang
Mathematics 2025, 13(4), 561; https://doi.org/10.3390/math13040561 - 8 Feb 2025
Viewed by 328
Abstract
Task offloading in satellite networks, which involves distributing computational tasks among heterogeneous satellite nodes, is crucial for optimizing resource utilization and minimizing system latency. However, existing approaches such as static offloading strategies and heuristic-based offloading methods neglect the dynamic topologies and uncertain conditions that hinder adaptability to sudden changes. Furthermore, current collaborative computing strategies inadequately address satellite platform heterogeneity and often overlook resource fluctuations, resulting in inefficient resource sharing and inflexible task scheduling. To address these issues, we propose a dynamic gradient descent-based task offloading method. The method builds a collaborative optimization framework based on dynamic programming. By constructing delay optimization and resource efficiency models and integrating dynamic programming with value iteration techniques, the framework achieves real-time updates of system states and decision variables. Then, a distributed gradient descent algorithm combined with Gradient Surgery techniques is employed to optimize task offloading decisions and resource allocation schemes, ensuring a precise balance between delay minimization and resource utilization maximization in dynamic network environments. Experimental results demonstrate that the proposed method improves the global optimization result by at least 1.97%, enhances resource utilization rates by at least 3.91%, and reduces the solution time by at least 191.91% in large-scale networks.
(This article belongs to the Special Issue New Advances in Network and Edge Computing)
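Gradient Surgery here refers to reconciling the delay gradient with the resource-efficiency gradient when they conflict. The sketch below shows the PCGrad-style projection step such a scheme rests on (toy two-dimensional gradients; pairing it with the paper's models is an assumption): when two objective gradients have negative inner product, each is projected onto the normal plane of the other before the descent step.

```python
# PCGrad-style gradient surgery for two conflicting objectives.
import numpy as np

def project_conflict(g1, g2):
    """Strip from g1 its component that conflicts with g2."""
    dot = g1 @ g2
    if dot < 0:                                   # gradients conflict
        g1 = g1 - (dot / (g2 @ g2)) * g2          # project onto g2's normal plane
    return g1

g_delay = np.array([1.0, 2.0])    # gradient of the delay objective (toy)
g_resrc = np.array([-1.0, 0.2])   # gradient of the resource objective (toy)

step = project_conflict(g_delay.copy(), g_resrc) + project_conflict(g_resrc.copy(), g_delay)
print("combined direction:", np.round(step, 3))
print("no conflict with either objective:",
      step @ g_delay >= -1e-9, step @ g_resrc >= -1e-9)
```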
Figures:
Figure 1. Schematic of the basic idea of the D-GTOM framework.
Figure 2. Main simulation interface of the CSTK platform.
Figure 3. Comparison of objective function values for different algorithms across 4 datasets.
Figure 4. Comparison of resource utilization rates for different algorithms across 4 datasets.
Figure 5. Running time and improvement rates of algorithms across 4 datasets.
Figure 6. Comparison of average task delay for different algorithms across 4 datasets.
Figure 7. Comparison of evaluation metrics in large-scale satellite dataset with growing task quantity.
17 pages, 369 KiB  
Article
Collaborative Sensing-Aware Task Offloading and Resource Allocation for Integrated Sensing-Communication- and Computation-Enabled Internet of Vehicles (IoV)
by Bangzhen Huang, Xuwei Fan, Shaolong Zheng, Ning Chen, Yifeng Zhao, Lianfen Huang, Zhibin Gao and Han-Chieh Chao
Sensors 2025, 25(3), 723; https://doi.org/10.3390/s25030723 - 25 Jan 2025
Viewed by 576
Abstract
Integrated Sensing, Communication, and Computation (ISCC) has become a key technology driving the development of the Internet of Vehicles (IoV) by enabling real-time environmental sensing, low-latency communication, and collaborative computing. However, the increasing sensing data within the IoV leads to demands for fast data transmission under limited communication resources. To address this issue, we propose a Collaborative Sensing-Aware Task Offloading (CSTO) mechanism for ISCC to reduce the transmission delay of sensing tasks. We formulate a joint task offloading and communication resource allocation optimization problem to minimize the total processing delay of all vehicular sensing tasks. To solve this mixed-integer nonlinear programming (MINLP) problem, we design a two-stage iterative optimization algorithm that decomposes the original optimization problem into a task offloading subproblem and a resource allocation subproblem, which are solved iteratively. In the first stage, a Deep Reinforcement Learning algorithm is used to determine task offloading decisions based on the initial setting. In the second stage, a convex optimization algorithm is employed to allocate communication bandwidth according to the current task offloading decisions. We conduct simulation experiments by varying different crucial parameters, and the results demonstrate the superiority of our scheme over other benchmark schemes.
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)
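The two-stage split can be seen in miniature below: a stand-in greedy rule plays the role of the stage-one DRL decision, and stage two's bandwidth subproblem is solved in closed form, since minimizing Σ_k d_k/(B x_k) subject to Σ_k x_k = 1 gives x_k ∝ √d_k. All constants are assumptions, and the paper iterates the two stages rather than making the single pass shown here.

```python
# Stage 1 stand-in (DRL in the paper): offload task k iff the edge beats local
# under an equal-share bandwidth estimate. Stage 2: convex bandwidth split.
import numpy as np

rng = np.random.default_rng(3)
n = 6
data = rng.uniform(1, 8, n)          # Mbit per sensing task (assumed)
cycles = 0.3 * data                  # Gcycles per task (assumed)
f_local = rng.uniform(0.3, 3.0, n)   # per-vehicle CPU, Gcycle/s (assumed)
f_edge, B = 8.0, 40.0                # edge CPU Gcycle/s, RSU bandwidth Mbit/s

t_local = cycles / f_local
t_off_est = data / (B / n) + cycles / f_edge      # equal-share delay estimate
offload = t_off_est < t_local                     # stage 1 decision

x = np.zeros(n)
if offload.any():                                 # stage 2: x_k proportional to sqrt(d_k)
    s = np.sqrt(data[offload])
    x[offload] = s / s.sum()
t_final = np.where(offload, data / np.maximum(B * x, 1e-9) + cycles / f_edge, t_local)
print("offload:", offload.astype(int))
print("bandwidth shares:", np.round(x, 2), "total delay:", round(t_final.sum(), 2), "s")
```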
Figures:
Figure 1. The considered ISCC system within the IoV scenario.
Figure 2. Task processing delay for different mechanisms.
Figure 3. Framework of the DQN algorithm.
Figure 4. Total task processing delay with different task data sizes.
Figure 5. Total task processing delay with different bandwidths of RSU.
Figure 6. Total task processing delay with different computing resources of RSU.
Figure 7. Total task processing delay with different computing resources of vehicles.
Figure 8. RSU workload with or without collaborative computing.
21 pages, 5691 KiB  
Article
Task Offloading Strategy for UAV-Assisted Mobile Edge Computing with Covert Transmission
by Zhijuan Hu, Dongsheng Zhou, Chao Shen, Tingting Wang and Liqiang Liu
Electronics 2025, 14(3), 446; https://doi.org/10.3390/electronics14030446 - 23 Jan 2025
Viewed by 606
Abstract
Task offloading strategies for unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) systems have emerged as a promising solution for computationally intensive applications. However, the broadcast and open nature of radio transmissions makes such systems vulnerable to eavesdropping threats. Therefore, developing strategies that can perform task offloading in a secure communication environment is critical for both ensuring the security and optimizing the performance of MEC systems. In this paper, we first design an architecture that utilizes covert communication techniques to guarantee that a UAV-assisted MEC system can securely offload highly confidential tasks and computations from the relevant user equipment (UE). Then, utilizing the Markov Decision Process (MDP) as a framework and incorporating the Prioritized Experience Replay (PER) mechanism into the Deep Deterministic Policy Gradient (DDPG) algorithm, a PER-DDPG algorithm is proposed, aiming to minimize the maximum processing delay of the system and the correct detection rate of the warden by jointly optimizing resource allocation, the movement of the UAV base station (UAV-BS), and the transmit power of the jammer. Simulation results demonstrate the convergence and effectiveness of the proposed approach. Compared to baseline algorithms such as Deep Q-Network (DQN) and DDPG, the PER-DDPG algorithm achieves significant performance improvements, with an average reward increase of over 16% compared to DDPG and over 53% compared to DQN. Furthermore, PER-DDPG exhibits the fastest convergence speed among the three algorithms, highlighting its efficiency in optimizing task offloading and communication security.
(This article belongs to the Special Issue Research in Secure IoT-Edge-Cloud Computing Continuum)
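The PER mechanism grafted onto DDPG is independent of the covert-communication setting and easy to sketch: transitions are drawn with probability proportional to |TD error|^α and reweighted by importance-sampling factors. Below is a minimal list-based buffer; a real implementation would use a sum-tree for O(log n) sampling, and the α and β values follow common practice, not the paper.

```python
# A minimal proportional Prioritized Experience Replay buffer.
import numpy as np

rng = np.random.default_rng(4)

class PERBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:      # drop the oldest transition
            self.data.pop(0); self.prio.pop(0)
        self.data.append(transition)
        self.prio.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch, beta=0.4):
        p = np.array(self.prio); p /= p.sum()
        idx = rng.choice(len(self.data), size=batch, p=p)
        w = (len(self.data) * p[idx]) ** (-beta)  # importance-sampling weights
        return idx, [self.data[i] for i in idx], w / w.max()

buf = PERBuffer(capacity=100)
for t in range(100):                              # fake transitions, varied TD error
    buf.add(("s", "a", 0.0, "s'"), td_error=rng.normal(scale=1 + t % 5))
idx, batch, weights = buf.sample(8)
print("sampled indices:", idx, "\nIS weights:", np.round(weights, 2))
```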
Figures:
Figure 1. Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) scenario.
Figure 2. Deep Deterministic Policy Gradient (DDPG) algorithm structure.
Figure 3. Convergence performance with varying learning rates of the PER-DDPG algorithm.
Figure 4. Convergence result using varying discount factors.
Figure 5. Convergence performance with varying exploration parameters.
Figure 6. Performance without PER mechanism or state normalization.
Figure 7. Performance of various algorithms with task size of D = 100 Mb.
Figure 8. Detection error rate of warden in different algorithms.
Figure 9. Various indicators of three algorithms under different computing capabilities of UEs. (a) Convergence performance. (b) Offloading ratio. (c) Detection error rate of warden.
Figure 10. Performance of various algorithms with different task sizes.
Figure 11. Performance of various algorithms as the number of UEs varies from 1 to 10.
22 pages, 1818 KiB  
Article
Cooperative Service Caching and Task Offloading in Mobile Edge Computing: A Novel Hierarchical Reinforcement Learning Approach
by Tan Chen, Jiahao Ai, Xin Xiong and Guangwu Hu
Electronics 2025, 14(2), 380; https://doi.org/10.3390/electronics14020380 - 19 Jan 2025
Viewed by 703
Abstract
In the current mobile edge computing (MEC) system, the user dynamics, diversity of applications, and heterogeneity of services have made cooperative service caching and task offloading decisions increasingly important. Service caching and task offloading have a naturally hierarchical structure, and thus, hierarchical reinforcement learning (HRL) can be used to effectively alleviate the curse of dimensionality in this problem. However, traditional HRL algorithms are designed for short-term missions with sparse rewards, while existing HRL algorithms proposed for MEC lack a delicate coupling structure and perform poorly. This article introduces a novel HRL-based algorithm, named hierarchical service caching and task offloading (HSCTO), to solve the problem of the cooperative optimization of service caching and task offloading in MEC. The upper layer of HSCTO makes decisions on service caching while the lower layer is in charge of task offloading strategies. The upper-layer module learns policies by directly utilizing the rewards of the lower-layer agent, and the tightly coupled design guarantees algorithm performance. Furthermore, we adopt a fixed multiple time step method in the upper layer, which eliminates the dependence on the semi-Markov decision process (SMDP) theory and reduces the cost of frequent service replacement. We conducted numerical evaluations, and the experimental results show that HSCTO improves the overall performance by 20% and reduces the average energy consumption by 13% compared with competitive baselines.
(This article belongs to the Special Issue Advanced Technologies in Edge Computing and Applications)
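The fixed-multiple-time-step coupling can be miniaturized as below: an upper agent re-decides the cached service every T steps and is trained directly on the return its lower-level offloading steps accumulate in between. The demand mix, rewards, and bandit-style upper update are all assumptions for illustration, not HSCTO itself.

```python
# A two-timescale HRL toy: the upper layer caches a service for T steps and
# learns from the rewards the lower (here trivially simulated) level collects.
import numpy as np

rng = np.random.default_rng(5)
N_SERVICES, T = 4, 8                      # cacheable services; upper-layer period
q_upper = np.zeros(N_SERVICES)            # upper-level action values
demand = np.array([0.1, 0.2, 0.3, 0.4])   # request mix over services (assumed)

def lower_step(cache):
    """Lower level: offloading reward is higher when the request hits the cache."""
    svc = rng.choice(N_SERVICES, p=demand)
    return 1.0 if svc == cache else 0.2   # cache hit vs. forced remote execution

eps, lr = 0.2, 0.1
for episode in range(500):
    cache = int(np.argmax(q_upper)) if rng.random() > eps else rng.integers(N_SERVICES)
    ret = sum(lower_step(cache) for _ in range(T))     # T lower-level rewards
    q_upper[cache] += lr * (ret / T - q_upper[cache])  # upper update from lower rewards

print("upper-level values per cached service:", np.round(q_upper, 2))
# The learned preference matches the demand mix: caching service 3 pays most.
```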
Figures:
Figure 1. System model.
Figure 2. Architecture of hierarchical reinforcement learning.
Figure 3. Reward of the service caching agent during the training process.
Figure 4. Rewards of the task offloading agents during the training process. (a–c) show the reward curves of the 3 agents, respectively.
Figure 5. Comparison of rewards with different η.
Figure 6. Comparison of average utility with different η.
Figure 7. Comparison of the reward for different algorithms.
Figure 8. Comparison of average utility for different algorithms.
20 pages, 2004 KiB  
Article
A Dual-Stage Processing Architecture for Unmanned Aerial Vehicle Object Detection and Tracking Using Lightweight Onboard and Ground Server Computations
by Odysseas Ntousis, Evangelos Makris, Panayiotis Tsanakas and Christos Pavlatos
Technologies 2025, 13(1), 35; https://doi.org/10.3390/technologies13010035 - 16 Jan 2025
Viewed by 1108
Abstract
UAVs are widely used for multiple tasks, which in many cases require autonomous processing and decision making. This autonomous function often requires significant computational capabilities that cannot be integrated into the UAV due to weight or cost limitations, making it necessary to distribute the workload and combine the produced results. In this paper, a dual-stage processing architecture for object detection and tracking in Unmanned Aerial Vehicles (UAVs) is presented, focusing on efficient resource utilization and real-time performance. The proposed system delegates lightweight detection tasks to onboard hardware while offloading computationally intensive processes to a ground server. The UAV is equipped with a Raspberry Pi for onboard data processing, utilizing an Intel Neural Compute Stick 2 (NCS2) for accelerated object detection. Specifically, YOLOv5n is selected as the onboard model. The UAV transmits selected frames to the ground server, which handles advanced tracking, trajectory prediction, and target repositioning using state-of-the-art deep learning models. Communication between the UAV and the server is maintained through a high-speed Wi-Fi link, with a fallback to a 4G connection when needed. The ground server, equipped with an NVIDIA A40 GPU, employs YOLOv8x for object detection and DeepSORT for multi-object tracking. The proposed architecture ensures real-time tracking with minimal latency, making it suitable for mission-critical UAV applications such as surveillance and search and rescue. The results demonstrate the system’s robustness in various environments, highlighting its potential for effective object tracking under limited onboard computational resources. The system achieves recall and accuracy scores as high as 0.53 and 0.74, respectively, using the remote server, and is capable of re-identifying a significant portion of objects of interest lost by the onboard system, measured at approximately 70%.
(This article belongs to the Section Information and Communication Technologies)
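The abstract says selected frames are offloaded but not how they are selected; a plausible gate (purely an assumption here, not the paper's criterion) is detector confidence, as sketched below: frames whose best onboard confidence falls below a threshold are queued for the ground server's heavier models.

```python
# A confidence-gated frame offloading loop (assumed selection rule).
import random

random.seed(6)
CONF_THRESHOLD = 0.5   # assumed gate; the paper does not state its criterion

def onboard_detect():
    """Stand-in for the NCS2-accelerated onboard detector: best confidence."""
    return random.uniform(0.1, 0.95)

offload_queue = []
for frame_id in range(10):
    conf = onboard_detect()
    if conf < CONF_THRESHOLD:                # uncertain -> ask the ground server
        offload_queue.append(frame_id)
        print(f"frame {frame_id}: conf={conf:.2f} -> offload to ground server")
    else:
        print(f"frame {frame_id}: conf={conf:.2f} -> track onboard")

print("frames queued for the server:", offload_queue)
```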
Figures:
Figure 1. The Transformer model architecture as presented in [17].
Figure 2. The structure of RT-DETR as presented in [19].
Figure 3. Representation of an LSTM cell structure.
Figure 4. A picture of the UAV on which the system was tested.
Figure 5. An overview of the hardware of the proposed architecture.
Figure 6. An overview of the software of the proposed architecture.
Figure 7. Losses and precision scores during the fine-tuning of the pretrained weights (validation set). The blue lines indicate the precise results, and the yellow dots represent the corresponding smoothed curves.
Figure 8. Results indicating the effects of custom model training. (a) Detection results before fine-tuning the model. (b) Detection results after fine-tuning the model.
Figure 9. RMSE values for training with unscaled data and batch size = 32. (a) RMSE for training performed with learning rate = 0.001, resulting in the model being unable to converge. (b) RMSE for training with learning rate = 0.0001, resulting in the model converging. However, due to the lack of scaling, the final RMSE is close to 27.
Figure 10. RMSE values for training with scaled data and batch size = 32. (a) RMSE for training without neighbor information. (b) RMSE for training with information on the position of 4 neighbors.
Figure 11. Example of a case where the onboard system failed (the target represented by a red box passed behind the street sign) and the server corrected it. A different run of the same simulation can be found in [53], where the onboard system failure is presented earlier.
Figure 12. Example of a case where the server could not re-identify the lost target (represented by a red box).
33 pages, 1773 KiB  
Article
Energy-Efficient Aerial STAR-RIS-Aided Computing Offloading and Content Caching for Wireless Sensor Networks
by Xiaoping Yang, Quanzeng Wang, Bin Yang and Xiaofang Cao
Sensors 2025, 25(2), 393; https://doi.org/10.3390/s25020393 - 10 Jan 2025
Viewed by 735
Abstract
Unmanned aerial vehicle (UAV)-based wireless sensor networks (WSNs) hold great promise for supporting ground-based sensors due to the mobility of UAVs and the ease of establishing line-of-sight links. UAV-based WSNs equipped with mobile edge computing (MEC) servers effectively mitigate challenges associated with long-distance transmission and the limited coverage of edge base stations (BSs), emerging as a powerful paradigm for both communication and computing services. Furthermore, incorporating simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) as passive relays significantly enhances the propagation environment and service quality of UAV-based WSNs. However, most existing studies place STAR-RISs in fixed positions, ignoring the flexibility of STAR-RISs. Some other studies equip UAVs with STAR-RISs, and UAVs act as flight carriers, ignoring the computing and caching capabilities of UAVs. To address these limitations, we propose an energy-efficient aerial STAR-RIS-aided computing offloading and content caching framework, where we formulate an energy consumption minimization problem to jointly optimize content caching decisions, computing offloading decisions, UAV hovering positions, and STAR-RIS passive beamforming. Given the non-convex nature of this problem, we decompose it into a content caching decision subproblem, a computing offloading decision subproblem, a hovering position subproblem, and a STAR-RIS resource allocation subproblem. We propose a deep reinforcement learning (DRL)–successive convex approximation (SCA) combined algorithm to iteratively achieve near-optimal solutions with low complexity. The numerical results demonstrate that the proposed framework effectively utilizes resources in UAV-based WSNs and significantly reduces overall system energy consumption.
(This article belongs to the Special Issue Recent Developments in Wireless Network Technology)
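The decomposition strategy above is block-coordinate in spirit: fix all decision blocks but one, solve that subproblem, and rotate. The toy below replaces the paper's DRL and SCA solvers with exact minimizers of an assumed smooth two-block energy, just to show why the iterate-until-stable loop converges.

```python
# Alternating (block-coordinate) minimization of a toy two-block energy.
import numpy as np

def energy(h, x):
    """Toy energy: hover-position term + offloading-split term (assumed)."""
    return (h - 2.0) ** 2 + (x - 0.6) ** 2 + 0.5 * h * x

h, x = 0.0, 0.0                      # UAV height proxy, offload fraction
for it in range(20):
    h_new = 2.0 - 0.25 * x           # argmin_h energy(h, x): dE/dh = 0
    x_new = np.clip(0.6 - 0.25 * h_new, 0.0, 1.0)  # argmin_x with x in [0, 1]
    if abs(h_new - h) + abs(x_new - x) < 1e-6:
        break
    h, x = h_new, x_new

print(f"converged after {it} iters: h={h:.3f}, x={x:.3f}, E={energy(h, x):.3f}")
```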
Figures:
Figure 1. System model of aerial STAR-RIS-aided WSN.
Figure 2. Illustration of task caching and offloading for STAR-RIS-aided UAV system.
Figure 3. Time allocation for task processing in STAR-RIS-aided UAV system.
Figure 4. The proposed optimization framework of the energy consumption minimization problem.
Figure 5. Workflow of PPO algorithm.
Figure 6. Energy consumption versus the number of iterations.
Figure 7. Energy consumption versus network bandwidth.
Figure 8. Energy consumption versus CPU cycles required for computing 1 bit of task data.
Figure 9. Energy consumption versus computation task size.
Figure 10. Energy consumption versus number of elements.
Figure 11. Energy consumption versus sensors’ transmit power.
Figure 12. Energy consumption versus SINR.
Figure 13. Convergence of average weighted reward sum for various caching DRL learning rates.
23 pages, 2052 KiB  
Article
On Edge-Fog-Cloud Collaboration and Reaping Its Benefits: A Heterogeneous Multi-Tier Edge Computing Architecture
by Niroshinie Fernando, Samir Shrestha, Seng W. Loke and Kevin Lee
Future Internet 2025, 17(1), 22; https://doi.org/10.3390/fi17010022 - 7 Jan 2025
Viewed by 976
Abstract
Edge, fog, and cloud computing provide complementary capabilities to enable distributed processing of IoT data. This requires offloading mechanisms, decision-making mechanisms, support for the dynamic availability of resources, and the cooperation of available nodes. This paper proposes a novel 3-tier architecture that integrates edge, fog, and cloud computing to harness their collective strengths, facilitating optimised data processing across these tiers. Our approach optimises performance, reduces energy consumption, and lowers costs. We evaluate our architecture through a series of experiments conducted on a purpose-built testbed. The results demonstrate significant improvements, with speedups of up to 7.5 times and energy savings reaching 80%, underlining the effectiveness and practical benefits of our cooperative edge-fog-cloud model in supporting the dynamic computational needs of IoT ecosystems. We argue that multi-tier (e.g., edge-fog-cloud) dynamic task offloading and management of heterogeneous devices will be key to flexible edge computing, and that the advantage of task relocation and offloading is not straightforward but depends on the configuration of devices and relative device capabilities.
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
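That configuration-dependence is easy to see in a two-term cost model: total time per tier is upload time plus compute time, so the fastest tier flips with job size and link speed. The numbers below are illustrative assumptions, not the paper's testbed measurements.

```python
# Tier selection as an upload-vs-compute trade-off (illustrative numbers).
job_mbit, job_gcycles = 40.0, 12.0

tiers = {                        # (uplink Mbit/s, compute Gcycle/s), assumed
    "edge (local collective)": (1000.0, 1.5),
    "fog (local server)":      (100.0,  5.0),
    "cloud":                   (20.0,  30.0),
}

def total_time(uplink, speed):
    """Total job time: transfer the data, then execute the cycles."""
    return job_mbit / uplink + job_gcycles / speed

for name, (uplink, speed) in tiers.items():
    print(f"{name:26s} {total_time(uplink, speed):5.2f} s")
best = min(tiers, key=lambda k: total_time(*tiers[k]))
print("best tier for this job:", best)
```

A small job on a slow link favours the fog or edge; a compute-heavy job justifies the cloud's upload cost, which matches the paper's point that the advantage of relocation depends on relative device capabilities.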
Figures:
Figure 1. A 3-tier architecture for device-enhanced edge, fog, and cloud computing. End-user IoT devices such as smartphones, drones, and robots are integrated as edge resources, forming a local collective resource, and work collaboratively with conventional edge, fog, and cloud servers.
Figure 2. Scenario showing how different contexts require collaboration amongst resource nodes at edge, fog, and cloud tiers.
Figure 3. The edge-fog-cloud collaborative architecture.
Figure 4. Results for Experiments 2–6 and 9. (a) Jobs completed by each node. (b) Speedup gains and battery usage for varying node configurations.
Figure 5. Experiment 6: Results for D1 working with D2, F1, and C1: time series of the number of jobs completed by each node.
Figure 6. Experiment 7: Varying the chunk size. (a) Speedup gains for D1 with varying chunk size for F1 and constant chunk size for nodes D2 and C1. (b) Speedup gains for D1 with varying chunk size for D2 and constant chunk size for nodes F1 and C1.
Figure 7. Experiment 8: Results of scaling up cloud workers. (a) Speedups for delegator D1 with a varying number of cloud workers (1 to 12). (b) Average job transmission time (ms) from delegator D1 to setups with varying numbers of cloud workers.
Figure 8. Experiment 10: Results for D1 working with D2, F1, and C1 under dynamic conditions. (a) Time series of cumulative jobs completed by the nodes: slowing down F1. (b) Time series of cumulative jobs completed by the nodes: disconnecting F1. (c) Time series of cumulative jobs completed by the nodes: adding a new cloud worker.
19 pages, 929 KiB  
Article
Task Offloading with LLM-Enhanced Multi-Agent Reinforcement Learning in UAV-Assisted Edge Computing
by Feifan Zhu, Fei Huang, Yantao Yu, Guojin Liu and Tiancong Huang
Sensors 2025, 25(1), 175; https://doi.org/10.3390/s25010175 - 31 Dec 2024
Cited by 1 | Viewed by 1276
Abstract
Unmanned aerial vehicles (UAVs) furnished with computational servers enable user equipment (UE) to offload complex computational tasks, thereby addressing the limitations of edge computing in remote or resource-constrained environments. The application of value decomposition algorithms for UAV trajectory planning has drawn considerable research attention. However, existing value decomposition algorithms commonly encounter obstacles in effectively associating local observations with the global state of UAV clusters, which hinders their task-solving capabilities and gives rise to reduced task completion rates and prolonged convergence times. To address these challenges, this paper introduces an innovative multi-agent deep learning framework that conceptualizes multi-UAV trajectory optimization as a decentralized partially observable Markov decision process (Dec-POMDP). This framework integrates the QTRAN algorithm with a large language model (LLM) for efficient region decomposition and employs graph convolutional networks (GCNs) combined with self-attention mechanisms to adeptly manage inter-subregion relationships. The simulation results demonstrate that the proposed method significantly outperforms existing deep reinforcement learning methods, with improvements in convergence speed and task completion rate exceeding 10%. Overall, this framework significantly advances UAV trajectory optimization and enhances the performance of multi-agent systems within UAV-assisted edge computing environments.
(This article belongs to the Section Sensors and Robotics)
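The GCN-plus-self-attention ingredient can be sketched independently of the QTRAN and LLM machinery: one message-passing layer over subregion features in which masked self-attention scores serve as soft edge weights. The sizes and single-head design below are assumptions, not the paper's architecture.

```python
# One graph-convolution pass with self-attention scores as soft edge weights.
import numpy as np

rng = np.random.default_rng(7)
N, D = 5, 8                                   # subregions, feature size
X = rng.normal(size=(N, D))                   # per-subregion features
A = (rng.random((N, N)) < 0.4).astype(float)  # adjacency between subregions
np.fill_diagonal(A, 1.0)                      # self-loops

Wq, Wk, Wv = (rng.normal(scale=0.3, size=(D, D)) for _ in range(3))
scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(D)   # self-attention logits
scores = np.where(A > 0, scores, -1e9)        # mask non-edges
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)       # row-normalised edge weights

H = np.tanh(attn @ (X @ Wv))                  # attention-weighted message passing
print("output feature matrix:", H.shape)      # (5, 8), one row per subregion
```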
Figures:
Figure 1. System model.
Figure 2. Network architecture of the LLM-QTRAN algorithm.
Figure 3. Average reward of the LLM-QTRAN algorithm and other baselines.
Figure 4. Success rate for 5 users with 2 UAVs.
Figure 5. Success rate for 7 users with 3 UAVs.
Figure 6. Success rate for 10 users with 4 UAVs.
23 pages, 6304 KiB  
Article
Task-Driven Computing Offloading and Resource Allocation Scheme for Maritime Autonomous Surface Ships Under Cloud–Shore–Ship Collaboration Framework
by Supu Xiu, Ying Zhang, Hualong Chen, Yuanqiao Wen and Changshi Xiao
J. Mar. Sci. Eng. 2025, 13(1), 16; https://doi.org/10.3390/jmse13010016 - 26 Dec 2024
Viewed by 699
Abstract
Currently, Maritime Autonomous Surface Ships (MASS) have become one of the most attractive research areas in the shipping and academic communities. Based on the ship-to-shore and ship-to-ship communication network, they can exploit diversified and distributed resources such as shore-based facilities and cloud computing centers to execute a variety of ship applications. Due to the increasing number of MASS and the asymmetrical distribution of traffic flows, transportation management must design an efficient cloud–shore–ship collaboration framework and a smart resource allocation strategy to improve the performance of the traffic network and provide high-quality applications to the ships. Therefore, we design a cloud–shore–ship collaboration framework that integrates ship networking and cloud/edge computing, and we design the corresponding task collaboration process. It can effectively support the collaborative interaction of distributed resources in the cloud, onshore, and onboard. Based on the global information of the framework, we propose an intelligent resource allocation method based on Q-learning by combining the relevance, QoS characteristics, and priority of ship tasks. Simulation experiments show that our proposed approach can effectively reduce task latency and system energy consumption while supporting the concurrent execution of tasks at scale. Compared with other analogous methods, the proposed algorithm can reduce the task processing delay by at least 15.7% and the task processing energy consumption by 15.4%.
(This article belongs to the Section Ocean Engineering)
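A tabular version of the Q-learning core is enough to show the update the paper builds on; the states, execution sites, and priority-weighted delay/energy reward below are all illustrative assumptions.

```python
# Tabular Q-learning for choosing where a ship task executes.
import numpy as np

rng = np.random.default_rng(8)
N_LOAD, SITES = 3, 3                     # load level x {onboard, shore, cloud}
Q = np.zeros((N_LOAD, SITES))
alpha, gamma, eps = 0.1, 0.9, 0.1
delay  = np.array([[1, 3, 5], [4, 3, 5], [8, 4, 5]], float)  # per (load, site)
energy = np.array([[5, 2, 1], [6, 2, 1], [8, 2, 1]], float)

for episode in range(3000):
    s = rng.integers(N_LOAD)             # observed onboard load level
    priority = rng.uniform(0.5, 2.0)     # task priority scales the delay cost
    a = int(np.argmax(Q[s])) if rng.random() > eps else rng.integers(SITES)
    r = -(priority * delay[s, a] + 0.3 * energy[s, a])
    s_next = rng.integers(N_LOAD)        # next load level (toy dynamics)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print("greedy site per load level:", Q.argmax(axis=1))  # 0=onboard, 1=shore, 2=cloud
```

Under these assumed costs the learned policy keeps light loads onboard and pushes heavy loads to shore or cloud, which is the qualitative behaviour the abstract describes.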
Figures:
Figure 1. The cloud–shore–ship collaboration framework integrating IoS and cloud/edge computing.
Figure 2. The basic task processing of the cloud–shore–ship collaboration framework.
Figure 3. The diagram of cloud–shore–ship collaboration task processing.
Figure 4. The task unloading process of MASS.
Figure 5. (a) The processing flow of serial tasks; (b) the processing flow of ring tasks.
Figure 6. The Q-learning-based task offloading algorithm for cloud–shore–ship collaboration.
Figure 7. The average task delay of the four algorithms with different task numbers.
Figure 8. The system energy consumption of the four algorithms with different task numbers.
Figure 9. The impact of the number of ships on the average task delay.
Figure 10. The impact of the number of ships on the system energy consumption.
Figure 11. The impact of task data volume on the average task delay.
Figure 12. The impact of task data volume on the system energy consumption.
Figure 13. The impact of the maximum task delay on the average task delay.
Figure 14. The impact of the maximum task delay on the system energy consumption.
Figure 15. The impact of the maximum communication power on the average task delay.
Figure 16. The impact of the maximum communication power on the system energy consumption.
19 pages, 3567 KiB  
Article
Multi-Agent Reinforcement Learning-Based Computation Offloading for Unmanned Aerial Vehicle Post-Disaster Rescue
by Lixing Wang and Huirong Jiao
Sensors 2024, 24(24), 8014; https://doi.org/10.3390/s24248014 - 15 Dec 2024
Viewed by 971
Abstract
Natural disasters cause significant losses. Unmanned aerial vehicles (UAVs) are valuable in rescue missions but need to offload tasks to edge servers due to their limited computing power and battery life. This study proposes a task offloading decision algorithm called the multi-agent deep deterministic policy gradient with cooperation and experience replay (CER-MADDPG), which is based on multi-agent reinforcement learning for UAV computation offloading. CER-MADDPG emphasizes collaboration between UAVs and uses historical UAV experiences to classify and obtain optimal strategies. It enables collaboration among edge devices through the design of the ’critic’ network. Additionally, by defining good and bad experiences for UAVs, experiences are classified into two separate buffers, allowing UAVs to learn from them, seek benefits, avoid harm, and reduce system overhead. The performance of CER-MADDPG was verified through simulations in two aspects. First, the influence of key hyperparameters on performance was examined, and the optimal values were determined. Second, CER-MADDPG was compared with other baseline algorithms. The results show that compared with MADDPG and stochastic game-based resource allocation with prioritized experience replay, CER-MADDPG achieves the lowest system overhead and superior stability and scalability.
(This article belongs to the Section Intelligent Sensors)
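The good/bad experience classification is straightforward to prototype: score each transition against a running reward baseline, route it to the matching buffer, and draw mixed minibatches. The routing rule and the 75/25 sampling split below are assumptions; the paper's definition of good and bad experiences is more elaborate.

```python
# Dual replay buffers: "good" and "bad" experiences split by a running baseline.
import random

random.seed(9)
good_buf, bad_buf, baseline = [], [], 0.0

def store(transition, reward):
    """Route a transition by comparing its reward to the running average."""
    global baseline
    (good_buf if reward > baseline else bad_buf).append(transition)
    baseline = 0.99 * baseline + 0.01 * reward      # running average reward

def sample(batch=8, good_ratio=0.75):
    """Minibatch mixing both buffers so agents imitate success and see failure."""
    k = min(int(batch * good_ratio), len(good_buf))
    return random.sample(good_buf, k) + random.sample(bad_buf, min(batch - k, len(bad_buf)))

for t in range(500):                                # fake UAV transitions
    r = random.gauss(0, 1)
    store(("obs", "act", r, "obs'"), r)

batch = sample()
print(f"good={len(good_buf)} bad={len(bad_buf)} sampled={len(batch)}")
```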
Figures:
Figure 1. Edge computing architecture.
Figure 2. CER-MADDPG algorithm structure.
Figure 3. Improved critic network structure.
Figure 4. Good and bad behavior guidance model.
Figure 5. Selection of learning rates for the critic and actor networks.
Figure 6. Selection of replay buffer size.
Figure 7. Selection of α_1 and α_2.
Figure 8. Comparison of system overhead of different algorithms.
Figure 9. Comparison of task completion time for different UAV mission sizes.
Figure 10. System consumption comparison as the number of UAVs increases.
Full article ">
17 pages, 2537 KiB  
Article
Collaborative Optimization Strategy for Dependent Task Offloading in Vehicular Edge Computing
by Xiting Peng, Yandi Zhang, Xiaoyu Zhang, Chaofeng Zhang and Wei Yang
Mathematics 2024, 12(23), 3820; https://doi.org/10.3390/math12233820 - 2 Dec 2024
Viewed by 1143
Abstract
The advancement of the Internet of Autonomous Vehicles has facilitated the development and deployment of numerous onboard applications. However, the delay-sensitive tasks generated by these applications present enormous challenges for vehicles with limited computing resources. Moreover, these tasks are often interdependent, preventing parallel computation and severely prolonging completion times, which results in substantial energy consumption. Task-offloading technology offers an effective solution to mitigate these challenges. Traditional offloading strategies, however, fall short in the highly dynamic environment of the Internet of Vehicles. This paper proposes a task-offloading scheme based on deep reinforcement learning to optimize the strategy between vehicles and edge computing resources. The task-offloading problem is modeled as a Markov Decision Process, and an improved twin-delayed deep deterministic policy gradient algorithm, LT-TD3, is introduced to enhance the decision-making process. The integration of LSTM and a self-attention mechanism into the LT-TD3 network boosts its capability for feature extraction and representation. Additionally, considering task dependency, a topological sorting algorithm is employed to assign priorities to subtasks, thereby improving the efficiency of task offloading. Experimental results demonstrate that the proposed strategy significantly reduces task delays and energy consumption, offering an effective solution for efficient task processing and energy saving in autonomous vehicles.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
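The dependency-aware step is classical: a topological sort of the subtask DAG yields offload priorities that respect interdependence. Below is Kahn's algorithm on a small AR-style task graph; the graph itself is an illustrative assumption, not the paper's application model.

```python
# Kahn's topological sort: order subtasks so dependencies run first.
from collections import deque

deps = {             # subtask -> subtasks it depends on (assumed AR pipeline)
    "render":  ["track", "detect"],
    "track":   ["capture"],
    "detect":  ["capture"],
    "capture": [],
    "display": ["render"],
}

indeg = {t: len(d) for t, d in deps.items()}
children = {t: [] for t in deps}
for t, ds in deps.items():
    for d in ds:
        children[d].append(t)

queue = deque(t for t, n in indeg.items() if n == 0)
order = []
while queue:                     # repeatedly release tasks whose deps are met
    t = queue.popleft()
    order.append(t)
    for c in children[t]:
        indeg[c] -= 1
        if indeg[c] == 0:
            queue.append(c)

print("offloading priority order:", order)
# capture -> track/detect -> render -> display
```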
Figures:
Figure 1. Autonomous vehicle cloud offloading scenario.
Figure 2. Subtask dependency. (a) Example AR application model. (b) DAG of application subtask dependencies.
Figure 3. LT-TD3 algorithm architecture diagram.
Figure 4. Experimental results of the average task delay for varying numbers of vehicles in the vehicle cloud.
Figure 5. Experimental results of the average task delay in the vehicle cloud by data size.
Figure 6. Experimental results of the task completion rate under different acceptable delays in the vehicle cloud.
Figure 7. Experimental results of the average task delay under different edge device resources in the vehicle cloud.
17 pages, 1081 KiB  
Article
Intelligent End-Edge Computation Offloading Based on Lyapunov-Guided Deep Reinforcement Learning
by Xue Feng, Chi Xu, Xi Jin, Changqing Xia and Jing Jiang
Appl. Sci. 2024, 14(23), 11160; https://doi.org/10.3390/app142311160 - 29 Nov 2024
Viewed by 682
Abstract
To address the end-edge computation offloading challenge in the multi-terminal and multi-server environment, this paper proposes an intelligent computation offloading algorithm based on Lyapunov optimization and deep reinforcement learning. We formulate a network computation rate maximization problem while balancing constraints including offloading time, CPU frequency, energy consumption, transmission power, and data queue stability. Because the problem is a mixed-integer nonlinear program, we transform it into a deterministic problem based on Lyapunov optimization theory and then model it as a Markov decision process. We then employ the asynchronous advantage actor-critic (A3C) deep reinforcement learning algorithm and propose a Lyapunov-guided A3C algorithm named LyA3C to approximate the optimal computation offloading policy. Experiments show that the LyA3C algorithm can converge stably and effectively improve the long-term network computation rate by 2.8% and 5.7% in comparison to the A2C-based and TD3-based algorithms.
(This article belongs to the Special Issue Real-Time Systems and Industrial Internet of Things)
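The Lyapunov step converts the queue-stability constraint into a per-slot rule: pick the action minimizing V·cost − Q·service, where Q is the current backlog and V weights the penalty against queue drift. The one-queue toy below (all rates and V are assumed values) shows the backlog settling instead of growing.

```python
# Drift-plus-penalty control of a single virtual data queue.
import numpy as np

rng = np.random.default_rng(10)
V, Q = 20.0, 0.0                     # penalty weight, queue backlog (bits)
rates = np.array([0.0, 1.0, 2.5])    # service rate of {idle, local, offload}
costs = np.array([0.0, 1.0, 3.0])    # energy cost of each mode (assumed)

history = []
for slot in range(1000):
    arrival = rng.uniform(0.5, 2.0)
    # Per-slot decision: minimize V*cost - Q*service (drift-plus-penalty).
    mode = int(np.argmin(V * costs - Q * rates))
    Q = max(Q - rates[mode], 0.0) + arrival   # queue update: serve, then arrive
    history.append(Q)

print(f"mean backlog: {np.mean(history):.2f}, final backlog: {Q:.2f}")
```

A larger V buys lower energy at the price of a longer queue, which is exactly the trade-off the Lyapunov transformation exposes.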
Figures:
Figure 1. System model.
Figure 2. Algorithm structure.
Figure 3. Flowchart of the LyA3C algorithm.
Figure 4. Network computation rate for different algorithms.
Figure 5. Relationship between data arrival rate and network computation rate for different algorithms.
Figure 6. Relationship between data arrival rate and average data queue length for different algorithms.
Figure 7. Relationship between data arrival rate and average energy consumption for different algorithms.
Figure 8. Effect of different energy consumption thresholds on network computation rate.
Figure 9. Relationship between energy consumption thresholds and data queue length for different algorithms.