Search Results (213)

Search Parameters:
Keywords = offloading decision

22 pages, 1818 KiB  
Article
Cooperative Service Caching and Task Offloading in Mobile Edge Computing: A Novel Hierarchical Reinforcement Learning Approach
by Tan Chen, Jiahao Ai, Xin Xiong and Guangwu Hu
Electronics 2025, 14(2), 380; https://doi.org/10.3390/electronics14020380 - 19 Jan 2025
Abstract
In current mobile edge computing (MEC) systems, user dynamics, application diversity, and service heterogeneity have made cooperative service caching and task offloading decisions increasingly important. Service caching and task offloading have a naturally hierarchical structure, so hierarchical reinforcement learning (HRL) can effectively alleviate the curse of dimensionality. However, traditional HRL algorithms are designed for short-term missions with sparse rewards, while existing HRL algorithms proposed for MEC lack a delicate coupling structure and perform poorly. This article introduces a novel HRL-based algorithm, named hierarchical service caching and task offloading (HSCTO), to solve the cooperative optimization of service caching and task offloading in MEC. The upper layer of HSCTO makes decisions on service caching, while the lower layer is in charge of task offloading strategies. The upper-layer module learns policies by directly utilizing the rewards of the lower-layer agent, and this tightly coupled design guarantees algorithm performance. Furthermore, we adopt a fixed multiple-time-step method in the upper layer, which eliminates the dependence on semi-Markov decision process (SMDP) theory and reduces the cost of frequent service replacement. We conducted numerical evaluations, and the experimental results show that HSCTO improves the overall performance by 20% and reduces the average energy consumption by 13% compared with competitive baselines.
(This article belongs to the Special Issue Advanced Technologies in Edge Computing and Applications)
Figures:
Figure 1. System model.
Figure 2. Architecture of hierarchical reinforcement learning.
Figure 3. Reward of service caching agent during training process.
Figure 4. Rewards of task offloading agents during training process; (a)–(c) show the reward curves of the 3 agents, respectively.
Figure 5. Comparison of rewards with different η.
Figure 6. Comparison of average utility with different η.
Figure 7. Comparison of the reward for different algorithms.
Figure 8. Comparison of average utility for different algorithms.
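
To make the two-layer design concrete, below is a minimal tabular sketch of an HSCTO-style structure: an upper caching agent that acts once every K steps and is trained directly on the reward the lower offloading agent accumulates over that window. The `env` interface, the tabular Q-values, and all hyperparameters are illustrative assumptions, not the authors' deep-RL implementation.

```python
import random
from collections import defaultdict

# Tabular stand-in for the two-layer HSCTO structure (illustrative values).
K = 5                     # fixed multiple-time-step length of the upper layer
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

q_cache = defaultdict(float)    # upper layer: (state, caching action) -> value
q_offload = defaultdict(float)  # lower layer: ((state, cache), offload action) -> value

def eps_greedy(q, state, actions):
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def train_episode(env, cache_actions, offload_actions):
    s, done = env.reset(), False
    while not done:
        c = eps_greedy(q_cache, s, cache_actions)  # caching decision, held for K steps
        s_hi, window_reward = s, 0.0
        for _ in range(K):
            a = eps_greedy(q_offload, (s, c), offload_actions)
            s2, r, done = env.step(c, a)           # assumed env applies caching c, offload a
            best = max(q_offload[((s2, c), a2)] for a2 in offload_actions)
            q_offload[((s, c), a)] += ALPHA * (r + GAMMA * best - q_offload[((s, c), a)])
            window_reward += r
            s = s2
            if done:
                break
        # the upper layer learns directly from the lower layer's accumulated reward
        best_hi = max(q_cache[(s, c2)] for c2 in cache_actions)
        q_cache[(s_hi, c)] += ALPHA * (window_reward + GAMMA * best_hi - q_cache[(s_hi, c)])
```

Acting on a fixed K-step window, rather than on SMDP-style variable-length options, is what lets the upper layer avoid semi-Markov machinery while still limiting how often the cached services are replaced.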
20 pages, 2004 KiB  
Article
A Dual-Stage Processing Architecture for Unmanned Aerial Vehicle Object Detection and Tracking Using Lightweight Onboard and Ground Server Computations
by Odysseas Ntousis, Evangelos Makris, Panayiotis Tsanakas and Christos Pavlatos
Technologies 2025, 13(1), 35; https://doi.org/10.3390/technologies13010035 - 16 Jan 2025
Abstract
UAVs are widely used for multiple tasks, which in many cases require autonomous processing and decision making. This autonomy often requires computational capabilities that cannot be integrated into the UAV due to weight or cost limitations, making it necessary to distribute the workload and combine the produced results. In this paper, a dual-stage processing architecture for object detection and tracking in Unmanned Aerial Vehicles (UAVs) is presented, focusing on efficient resource utilization and real-time performance. The proposed system delegates lightweight detection tasks to onboard hardware while offloading computationally intensive processes to a ground server. The UAV is equipped with a Raspberry Pi for onboard data processing, utilizing an Intel Neural Compute Stick 2 (NCS2) for accelerated object detection. Specifically, YOLOv5n is selected as the onboard model. The UAV transmits selected frames to the ground server, which handles advanced tracking, trajectory prediction, and target repositioning using state-of-the-art deep learning models. Communication between the UAV and the server is maintained through a high-speed Wi-Fi link, with a fallback to a 4G connection when needed. The ground server, equipped with an NVIDIA A40 GPU, employs YOLOv8x for object detection and DeepSORT for multi-object tracking. The proposed architecture ensures real-time tracking with minimal latency, making it suitable for mission-critical UAV applications such as surveillance and search and rescue. The results demonstrate the system's robustness in various environments, highlighting its potential for effective object tracking under limited onboard computational resources. The system achieves recall and accuracy scores as high as 0.53 and 0.74, respectively, using the remote server, and it re-identifies approximately 70% of the objects of interest lost by the onboard system.
(This article belongs to the Section Information and Communication Technologies)
Figures:
Figure 1. The Transformer model architecture as presented in [17].
Figure 2. The structure of RT-DETR as presented in [19].
Figure 3. Representation of an LSTM cell structure.
Figure 4. A picture of the UAV on which the system was tested.
Figure 5. An overview of the hardware of the proposed architecture.
Figure 6. An overview of the software of the proposed architecture.
Figure 7. Losses and precision scores during the fine-tuning of the pretrained weights (validation set); the blue lines indicate the precise results, and the yellow dots represent the corresponding smoothed curves.
Figure 8. Results indicating the effects of custom model training: (a) detection results before fine-tuning the model; (b) detection results after fine-tuning the model.
Figure 9. RMSE values for training with unscaled data and batch size = 32: (a) with learning rate = 0.001, the model is unable to converge; (b) with learning rate = 0.0001, the model converges, but due to the lack of scaling the final RMSE is close to 27.
Figure 10. RMSE values for training with scaled data and batch size = 32: (a) training without neighbor information; (b) training with information on the positions of 4 neighbors.
Figure 11. Example of a case where the onboard system failed (the target, represented by a red box, passed behind the street sign) and the server corrected it; a different run of the same simulation, where the onboard system failure occurs earlier, can be found in [53].
Figure 12. Example of a case where the server could not re-identify the lost target (represented by a red box).
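
The escalation logic behind the dual-stage design can be sketched in a few lines: the onboard detector handles every frame, and a frame is shipped to the ground server only when the lightweight model loses the target or reports low confidence. The function names (`detect_onboard`, `send_to_server`, `update_track`) and the threshold below are hypothetical placeholders, not the authors' API.

```python
# A hedged sketch of the dual-stage control flow, assuming placeholder callables.
CONF_THRESHOLD = 0.5  # illustrative cutoff, not the paper's tuned value

def process_frame(frame, detect_onboard, send_to_server, update_track):
    """Return the current track, escalating to the ground server when needed."""
    detections = detect_onboard(frame)           # lightweight onboard inference
    best = max(detections, key=lambda d: d["conf"], default=None)
    if best is not None and best["conf"] >= CONF_THRESHOLD:
        return update_track(best["box"])         # onboard result is good enough
    # target lost or uncertain: offload this frame for heavyweight detection,
    # tracking, and trajectory prediction on the server
    server_result = send_to_server(frame)        # Wi-Fi link, 4G fallback
    if server_result is not None:
        return update_track(server_result["box"])
    return None                                  # target not re-identified
```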
34 pages, 1773 KiB  
Article
Energy-Efficient Aerial STAR-RIS-Aided Computing Offloading and Content Caching for Wireless Sensor Networks
by Xiaoping Yang, Quanzeng Wang, Bin Yang and Xiaofang Cao
Sensors 2025, 25(2), 393; https://doi.org/10.3390/s25020393 - 10 Jan 2025
Abstract
Unmanned aerial vehicle (UAV)-based wireless sensor networks (WSNs) hold great promise for supporting ground-based sensors due to the mobility of UAVs and the ease of establishing line-of-sight links. UAV-based WSNs equipped with mobile edge computing (MEC) servers effectively mitigate challenges associated with long-distance transmission and the limited coverage of edge base stations (BSs), emerging as a powerful paradigm for both communication and computing services. Furthermore, incorporating simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) as passive relays significantly enhances the propagation environment and service quality of UAV-based WSNs. However, most existing studies place STAR-RISs in fixed positions, ignoring their flexibility. Other studies mount STAR-RISs on UAVs that act merely as flight carriers, ignoring the computing and caching capabilities of the UAVs themselves. To address these limitations, we propose an energy-efficient aerial STAR-RIS-aided computing offloading and content caching framework, in which we formulate an energy consumption minimization problem to jointly optimize content caching decisions, computing offloading decisions, UAV hovering positions, and STAR-RIS passive beamforming. Given the non-convex nature of this problem, we decompose it into a content caching decision subproblem, a computing offloading decision subproblem, a hovering position subproblem, and a STAR-RIS resource allocation subproblem. We propose a deep reinforcement learning (DRL)–successive convex approximation (SCA) combined algorithm to iteratively achieve near-optimal solutions with low complexity. The numerical results demonstrate that the proposed framework effectively utilizes resources in UAV-based WSNs and significantly reduces overall system energy consumption.
(This article belongs to the Special Issue Recent Developments in Wireless Network Technology)
Figures:
Figure 1. System model of aerial STAR-RIS-aided WSN.
Figure 2. Illustration of task caching and offloading for the STAR-RIS-aided UAV system.
Figure 3. Time allocation for task processing in the STAR-RIS-aided UAV system.
Figure 4. The proposed optimization framework of the energy consumption minimization problem.
Figure 5. Workflow of the PPO algorithm.
Figure 6. Energy consumption versus the number of iterations.
Figure 7. Energy consumption versus network bandwidth.
Figure 8. Energy consumption versus CPU cycles required for computing 1 bit of task data.
Figure 9. Energy consumption versus computation task size.
Figure 10. Energy consumption versus number of elements.
Figure 11. Energy consumption versus sensors' transmit power.
Figure 12. Energy consumption versus SINR.
Figure 13. Convergence of average weighted reward sum for various caching DRL learning rates.
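
The four-way decomposition lends itself to an alternating loop; the sketch below shows one plausible shape of it, with the DRL policy handling the two discrete subproblems and SCA-style convex solvers handling the two continuous ones. Every interface here (`drl_policy`, `solve_position`, `solve_beamforming`, `env`) is a hypothetical placeholder, not the paper's implementation.

```python
# A minimal sketch of the alternating decomposition, assuming placeholder solvers.
def optimize(env, drl_policy, solve_position, solve_beamforming,
             max_iters=50, tol=1e-3):
    caching, offloading = drl_policy.initial_decisions(env)
    positions, beams = env.initial_positions(), env.initial_beams()
    prev_energy = float("inf")
    for _ in range(max_iters):
        # discrete subproblems: caching and offloading via the DRL agent
        caching = drl_policy.caching(env, offloading, positions, beams)
        offloading = drl_policy.offloading(env, caching, positions, beams)
        # continuous subproblems: SCA-style convex updates
        positions = solve_position(env, caching, offloading, beams)
        beams = solve_beamforming(env, caching, offloading, positions)
        energy = env.total_energy(caching, offloading, positions, beams)
        if prev_energy - energy < tol:   # stop once energy stops improving
            break
        prev_energy = energy
    return caching, offloading, positions, beams
```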
23 pages, 2052 KiB  
Article
On Edge-Fog-Cloud Collaboration and Reaping Its Benefits: A Heterogeneous Multi-Tier Edge Computing Architecture
by Niroshinie Fernando, Samir Shrestha, Seng W. Loke and Kevin Lee
Future Internet 2025, 17(1), 22; https://doi.org/10.3390/fi17010022 - 7 Jan 2025
Abstract
Edge, fog, and cloud computing provide complementary capabilities to enable distributed processing of IoT data. This requires offloading mechanisms, decision-making mechanisms, support for the dynamic availability of resources, and the cooperation of available nodes. This paper proposes a novel 3-tier architecture that integrates edge, fog, and cloud computing to harness their collective strengths, facilitating optimised data processing across these tiers. Our approach optimises performance, reduces energy consumption, and lowers costs. We evaluate our architecture through a series of experiments conducted on a purpose-built testbed. The results demonstrate significant improvements, with speedups of up to 7.5 times and energy savings reaching 80%, underlining the effectiveness and practical benefits of our cooperative edge-fog-cloud model in supporting the dynamic computational needs of IoT ecosystems. We argue that multi-tier (e.g., edge-fog-cloud) dynamic task offloading and management of heterogeneous devices will be key to flexible edge computing, and that the advantage of task relocation and offloading is not straightforward but depends on the configuration of devices and their relative capabilities.
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
Figures:
Figure 1. A 3-tier architecture for device-enhanced edge, fog, and cloud computing. End-user IoT devices such as smartphones, drones, and robots are integrated as edge resources, forming a local collective resource, and work collaboratively with conventional edge, fog, and cloud servers.
Figure 2. Scenario showing how different contexts require collaboration amongst resource nodes at edge, fog, and cloud tiers.
Figure 3. The edge-fog-cloud collaborative architecture.
Figure 4. Results for Experiments 2–6 and 9: (a) jobs completed by each node; (b) speedup gains and battery usage for varying node configurations.
Figure 5. Experiment 6 (D1 working with D2, F1, and C1): time series of the number of jobs completed by each node.
Figure 6. Experiment 7, varying the chunk size: (a) speedup gains for D1 with varying chunk size for F1 and constant chunk size for nodes D2 and C1; (b) speedup gains for D1 with varying chunk size for D2 and constant chunk size for nodes F1 and C1.
Figure 7. Experiment 8, scaling up cloud workers: (a) speedups for delegator D1 with a varying number of cloud workers (1 to 12); (b) average job transmission time (ms) from delegator D1 to setups with varying numbers of cloud workers.
Figure 8. Experiment 10, D1 working with D2, F1, and C1 under dynamic conditions; time series of cumulative jobs completed by the nodes when (a) slowing down F1, (b) disconnecting F1, and (c) adding a new cloud worker.
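
The closing point, that offloading only pays off for some device configurations, reduces to a timing comparison between local execution and transfer-plus-remote execution. The back-of-envelope sketch below uses illustrative numbers, not measurements from the paper's testbed.

```python
# Illustrative numbers, not measurements from the paper's testbed.
def should_offload(cycles, data_bytes, f_local, f_remote, bandwidth):
    t_local = cycles / f_local                              # seconds on the device
    t_remote = data_bytes / bandwidth + cycles / f_remote   # transfer + remote compute
    return t_remote < t_local, t_local / t_remote           # decision, speedup

ok, speedup = should_offload(cycles=2e9, data_bytes=5e6,
                             f_local=1e9, f_remote=8e9, bandwidth=10e6)
print(ok, round(speedup, 2))  # True 2.67: offloading wins for this configuration
# Cut the bandwidth to 2.5e6 bytes/s and the decision flips, which is exactly
# the configuration dependence the authors point out.
```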
19 pages, 929 KiB  
Article
Task Offloading with LLM-Enhanced Multi-Agent Reinforcement Learning in UAV-Assisted Edge Computing
by Feifan Zhu, Fei Huang, Yantao Yu, Guojin Liu and Tiancong Huang
Sensors 2025, 25(1), 175; https://doi.org/10.3390/s25010175 - 31 Dec 2024
Abstract
Unmanned aerial vehicles (UAVs) furnished with computational servers enable user equipment (UE) to offload complex computational tasks, thereby addressing the limitations of edge computing in remote or resource-constrained environments. The application of value decomposition algorithms for UAV trajectory planning has drawn considerable research attention. However, existing value decomposition algorithms commonly encounter obstacles in effectively associating local observations with the global state of UAV clusters, which hinders their task-solving capabilities and gives rise to reduced task completion rates and prolonged convergence times. To address these challenges, this paper introduces an innovative multi-agent deep learning framework that conceptualizes multi-UAV trajectory optimization as a decentralized partially observable Markov decision process (Dec-POMDP). This framework integrates the QTRAN algorithm with a large language model (LLM) for efficient region decomposition and employs graph convolutional networks (GCNs) combined with self-attention mechanisms to adeptly manage inter-subregion relationships. The simulation results demonstrate that the proposed method significantly outperforms existing deep reinforcement learning methods, with improvements in convergence speed and task completion rate exceeding 10%. Overall, this framework significantly advances UAV trajectory optimization and enhances the performance of multi-agent systems within UAV-assisted edge computing environments.
(This article belongs to the Section Sensors and Robotics)
Figures:
Figure 1. System model.
Figure 2. Network architecture of the LLM-QTRAN algorithm.
Figure 3. Average reward of the LLM-QTRAN algorithm and other baselines.
Figure 4. Success rate for 5 users with 2 UAVs.
Figure 5. Success rate for 7 users with 3 UAVs.
Figure 6. Success rate for 10 users with 4 UAVs.
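
As a rough illustration of the inter-subregion module described above, the PyTorch sketch below combines one graph-convolution step with multi-head self-attention over subregion embeddings. The dimensions, layer counts, and toy adjacency are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

# A hedged sketch: one GCN aggregation step followed by self-attention that
# relates all subregions to each other. Sizes are illustrative only.
class SubregionEncoder(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.gcn_weight = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, adj):
        # x: (batch, n_subregions, dim); adj: (n, n) row-normalized adjacency
        h = torch.relu(self.gcn_weight(adj @ x))   # neighborhood aggregation
        out, _ = self.attn(h, h, h)                # subregion self-attention
        return out

enc = SubregionEncoder()
x = torch.randn(2, 6, 64)                 # 6 subregions, e.g. from LLM decomposition
adj = torch.eye(6) + torch.rand(6, 6).round()   # toy adjacency with self-loops
adj = adj / adj.sum(-1, keepdim=True)           # row-normalize
print(enc(x, adj).shape)                  # torch.Size([2, 6, 64])
```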
23 pages, 6304 KiB  
Article
Task-Driven Computing Offloading and Resource Allocation Scheme for Maritime Autonomous Surface Ships Under Cloud–Shore–Ship Collaboration Framework
by Supu Xiu, Ying Zhang, Hualong Chen, Yuanqiao Wen and Changshi Xiao
J. Mar. Sci. Eng. 2025, 13(1), 16; https://doi.org/10.3390/jmse13010016 - 26 Dec 2024
Abstract
Currently, Maritime Autonomous Surface Ships (MASS) have become one of the most attractive research areas in the shipping and academic communities. Based on ship-to-shore and ship-to-ship communication networks, they can exploit diversified and distributed resources, such as shore-based facilities and cloud computing centers, to execute a variety of ship applications. Due to the increasing number of MASS and the asymmetrical distribution of traffic flows, transportation management must design an efficient cloud–shore–ship collaboration framework and a smart resource allocation strategy to improve the performance of the traffic network and provide high-quality applications to the ships. Therefore, we design a cloud–shore–ship collaboration framework, which integrates ship networking and cloud/edge computing, and design the corresponding task collaboration process. It can effectively support the collaborative interaction of distributed resources in the cloud, onshore, and onboard. Based on the global information of the framework, we propose an intelligent resource allocation method based on Q-learning that combines the relevance, QoS characteristics, and priority of ship tasks. Simulation experiments show that our proposed approach can effectively reduce task latency and system energy consumption while supporting large-scale task concurrency. Compared with analogous methods, the proposed algorithm reduces the task processing delay by at least 15.7% and the task processing energy consumption by 15.4%.
(This article belongs to the Section Ocean Engineering)
Figures:
Figure 1. The cloud–shore–ship collaboration framework integrating IoS and cloud/edge computing.
Figure 2. The basic task processing of the cloud–shore–ship collaboration framework.
Figure 3. Diagram of cloud–shore–ship collaboration task processing.
Figure 4. The task offloading process of MASS.
Figure 5. (a) The processing flow of serial tasks; (b) the processing flow of ring tasks.
Figure 6. The Q-learning-based task offloading algorithm for cloud–shore–ship collaboration.
Figure 7. The average task delay of the four algorithms with different task numbers.
Figure 8. The system energy consumption of the four algorithms with different task numbers.
Figure 9. The impact of the number of ships on the average task delay.
Figure 10. The impact of the number of ships on the system energy consumption.
Figure 11. The impact of task data volume on the average task delay.
Figure 12. The impact of task data volume on the system energy consumption.
Figure 13. The impact of the maximum task delay on the average task delay.
Figure 14. The impact of the maximum task delay on the system energy consumption.
Figure 15. The impact of the maximum communication power on the average task delay.
Figure 16. The impact of the maximum communication power on the system energy consumption.
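
A minimal tabular sketch of the Q-learning idea described in the abstract follows, with the reward scaled by task priority and a weighted penalty on delay and energy. The weights, state encoding, and action set are illustrative assumptions, not the paper's exact design.

```python
import random
from collections import defaultdict

# Tabular Q-learning over offload locations; all constants are illustrative.
ACTIONS = ["ship", "shore", "cloud"]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = defaultdict(float)

def reward(delay, energy, priority, w_delay=0.6, w_energy=0.4):
    # higher-priority tasks are penalized more for the same delay/energy cost
    return -priority * (w_delay * delay + w_energy * energy)

def choose(state):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, delay, energy, priority, next_state):
    r = reward(delay, energy, priority)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```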
19 pages, 3567 KiB  
Article
Multi-Agent Reinforcement Learning-Based Computation Offloading for Unmanned Aerial Vehicle Post-Disaster Rescue
by Lixing Wang and Huirong Jiao
Sensors 2024, 24(24), 8014; https://doi.org/10.3390/s24248014 - 15 Dec 2024
Abstract
Natural disasters cause significant losses. Unmanned aerial vehicles (UAVs) are valuable in rescue missions but need to offload tasks to edge servers due to their limited computing power and battery life. This study proposes a task offloading decision algorithm called the multi-agent deep deterministic policy gradient with cooperation and experience replay (CER-MADDPG), which is based on multi-agent reinforcement learning for UAV computation offloading. CER-MADDPG emphasizes collaboration between UAVs and uses historical UAV experiences to classify and obtain optimal strategies. It enables collaboration among edge devices through the design of the critic network. Additionally, by defining good and bad experiences for UAVs, experiences are classified into two separate buffers, allowing UAVs to learn from them, seek benefits, avoid harm, and reduce system overhead. The performance of CER-MADDPG was verified through simulations in two respects. First, the influence of key hyperparameters on performance was examined, and the optimal values were determined. Second, CER-MADDPG was compared with other baseline algorithms. The results show that, compared with MADDPG and stochastic game-based resource allocation with prioritized experience replay, CER-MADDPG achieves the lowest system overhead as well as superior stability and scalability.
(This article belongs to the Section Intelligent Sensors)
Figures:
Figure 1. Edge computing architecture.
Figure 2. CER-MADDPG algorithm structure.
Figure 3. Improved critic network structure.
Figure 4. Good and bad behavior guidance model.
Figure 5. Selection of learning rates for the critic and actor networks.
Figure 6. Selection of replay buffer size.
Figure 7. Selection of α1 and α2.
Figure 8. Comparison of system overhead of different algorithms.
Figure 9. Comparison of task completion time for different UAV mission sizes.
Figure 10. System consumption comparison as the number of UAVs increases.
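
A hedged sketch of the experience classification described above: transitions whose reward beats a running baseline go to a "good" buffer, the rest to a "bad" one, and each training batch mixes both so agents can both imitate success and avoid failure. The running-mean baseline and the mixing ratio are assumptions, not the paper's exact definition.

```python
import random
from collections import deque

# Dual replay buffer sketch; capacity and mixing ratio are illustrative.
class DualReplayBuffer:
    def __init__(self, capacity=10_000, good_fraction=0.7):
        self.good = deque(maxlen=capacity)
        self.bad = deque(maxlen=capacity)
        self.good_fraction = good_fraction
        self.avg_reward, self.n = 0.0, 0

    def add(self, transition, reward):
        self.n += 1
        self.avg_reward += (reward - self.avg_reward) / self.n  # running mean
        (self.good if reward >= self.avg_reward else self.bad).append(transition)

    def sample(self, batch_size):
        # mix good and bad experiences in each training batch
        n_good = min(int(batch_size * self.good_fraction), len(self.good))
        n_bad = min(batch_size - n_good, len(self.bad))
        return random.sample(self.good, n_good) + random.sample(self.bad, n_bad)
```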
17 pages, 2537 KiB  
Article
Collaborative Optimization Strategy for Dependent Task Offloading in Vehicular Edge Computing
by Xiting Peng, Yandi Zhang, Xiaoyu Zhang, Chaofeng Zhang and Wei Yang
Mathematics 2024, 12(23), 3820; https://doi.org/10.3390/math12233820 - 2 Dec 2024
Abstract
The advancement of the Internet of Autonomous Vehicles has facilitated the development and deployment of numerous onboard applications. However, the delay-sensitive tasks generated by these applications present enormous challenges for vehicles with limited computing resources. Moreover, these tasks are often interdependent, preventing parallel computation and severely prolonging completion times, which results in substantial energy consumption. Task-offloading technology offers an effective solution to mitigate these challenges. Traditional offloading strategies, however, fall short in the highly dynamic environment of the Internet of Vehicles. This paper proposes a task-offloading scheme based on deep reinforcement learning to optimize the strategy between vehicles and edge computing resources. The task-offloading problem is modeled as a Markov Decision Process, and an improved twin-delayed deep deterministic policy gradient algorithm, LT-TD3, is introduced to enhance the decision-making process. The integration of LSTM and a self-attention mechanism into the LT-TD3 network boosts its capability for feature extraction and representation. Additionally, considering task dependency, a topological sorting algorithm is employed to assign priorities to subtasks, thereby improving the efficiency of task offloading. Experimental results demonstrate that the proposed strategy significantly reduces task delays and energy consumption, offering an effective solution for efficient task processing and energy saving in autonomous vehicles.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
Figures:
Figure 1. Autonomous vehicle cloud offloading scenario.
Figure 2. Subtask dependency: (a) example AR application model; (b) DAG of the application's subtask dependencies.
Figure 3. LT-TD3 algorithm architecture diagram.
Figure 4. Experimental results of the average task delay for varying numbers of vehicles in the vehicle cloud.
Figure 5. Experimental results of the average task delay in the vehicle cloud by data size.
Figure 6. Experimental results of the task completion rate under different acceptable delays in the vehicle cloud.
Figure 7. Experimental results of the average task delay under different edge device resources in the vehicle cloud.
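
Since the scheme assigns subtask priorities via topological sorting, here is a minimal sketch using Kahn's algorithm on a toy dependency DAG; encoding priorities as positions in the topological order is an illustrative assumption.

```python
from collections import deque

# Kahn's topological sort on a subtask DAG; the example graph is a toy.
def topological_priorities(n_tasks, edges):
    """edges: list of (u, v) meaning subtask u must finish before v."""
    succ = [[] for _ in range(n_tasks)]
    indeg = [0] * n_tasks
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n_tasks) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    assert len(order) == n_tasks, "dependency graph has a cycle"
    return {task: prio for prio, task in enumerate(order)}

print(topological_priorities(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
# {0: 0, 1: 1, 2: 2, 3: 3}: subtask 3 is only offloaded after both predecessors
```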
17 pages, 1081 KiB  
Article
Intelligent End-Edge Computation Offloading Based on Lyapunov-Guided Deep Reinforcement Learning
by Xue Feng, Chi Xu, Xi Jin, Changqing Xia and Jing Jiang
Appl. Sci. 2024, 14(23), 11160; https://doi.org/10.3390/app142311160 - 29 Nov 2024
Abstract
To address the end-edge computation offloading challenge in multi-terminal, multi-server environments, this paper proposes an intelligent computation offloading algorithm based on Lyapunov optimization and deep reinforcement learning. We formulate a network computation rate maximization problem while balancing constraints including offloading time, CPU frequency, energy consumption, transmission power, and data queue stability. Because the problem is a mixed-integer nonlinear program, we transform it into a deterministic problem based on Lyapunov optimization theory and then model it as a Markov decision process. We then employ a deep reinforcement learning algorithm, asynchronous advantage actor-critic (A3C), and propose a Lyapunov-guided A3C algorithm named LyA3C to approximate the optimal computation offloading policy. Experiments show that the LyA3C algorithm converges stably and improves the long-term network computation rate by 2.8% and 5.7% in comparison to the A2C-based and TD3-based algorithms, respectively.
(This article belongs to the Special Issue Real-Time Systems and Industrial Internet of Things)
Figures:
Figure 1. System model.
Figure 2. Algorithm structure.
Figure 3. Flowchart of the LyA3C algorithm.
Figure 4. Network computation rate for different algorithms.
Figure 5. Relationship between data arrival rate and network computation rate for different algorithms.
Figure 6. Relationship between data arrival rate and average data queue length for different algorithms.
Figure 7. Relationship between data arrival rate and average energy consumption for different algorithms.
Figure 8. Effect of different energy consumption thresholds on network computation rate.
Figure 9. Relationship between energy consumption thresholds and data queue length for different algorithms.
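
A small sketch of the Lyapunov step described above: virtual data queues turn the long-term stability constraint into a per-slot drift term, and the per-slot objective trades computation rate against backlog through the drift-plus-penalty weight V. The arrival and service numbers below are toys, not the paper's model.

```python
# Drift-plus-penalty sketch with virtual queues; all numbers are illustrative.
V = 50.0  # larger V favours computation rate over queue stability

def per_slot_objective(rates, queues, arrivals, served):
    # maximize V * sum(rates) - sum(Q_i * (arrival_i - served_i))
    drift = sum(q * (a - s) for q, a, s in zip(queues, arrivals, served))
    return V * sum(rates) - drift

def update_queues(queues, arrivals, served):
    # Q_i(t+1) = max(Q_i(t) - served_i, 0) + arrival_i
    return [max(q - s, 0.0) + a for q, a, s in zip(queues, arrivals, served)]

queues = [0.0, 0.0]
for t in range(3):  # toy trace: constant arrivals, one device under-served
    arrivals, served = [2.0, 2.0], [2.5, 1.0]
    print(t, per_slot_objective([1.0, 0.8], queues, arrivals, served), queues)
    queues = update_queues(queues, arrivals, served)
# the under-served queue grows, which pushes later slots to serve it more
```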
22 pages, 1686 KiB  
Article
Optimizing Transmit Power for User-Cooperative Backscatter-Assisted NOMA-MEC: A Green IoT Perspective
by Huaiwen He, Chenghao Zhou, Feng Huang, Hong Shen and Yihong Yang
Electronics 2024, 13(23), 4678; https://doi.org/10.3390/electronics13234678 - 27 Nov 2024
Abstract
Non-orthogonal multiple access (NOMA) enables the parallel offloading of multiuser tasks, effectively enhancing throughput and reducing latency. Backscatter communication, which passively reflects radio frequency (RF) signals, improves energy efficiency and extends the operational lifespan of terminal devices. Both technologies are pivotal for the next generation of wireless networks. However, little research has focused on optimizing the transmit power in backscatter-assisted NOMA-MEC systems from a green IoT perspective. In this paper, we aim to minimize the transmit energy consumption of a Hybrid Access Point (HAP) while ensuring task deadlines are met. We consider the integration of Backscatter Communication (BackCom) and Active Transmission (AT), and we leverage NOMA technology and user cooperation to mitigate the double near–far effect. Specifically, we formulate a transmit energy consumption minimization problem, accounting for task deadline constraints, task offloading decisions, transmit power allocation, and energy constraints. To tackle the non-convex optimization problem, we employ variable substitution and convex optimization theory to transform the original non-convex problem into a convex one, which is then efficiently solved. We deduce a semi-closed-form expression of the optimal solution and propose an energy-efficient algorithm to minimize the transmit power of the entire wireless-powered MEC system. Extensive simulation results demonstrate that our proposed scheme reduces the HAP transmit power by around 8% compared with existing schemes, validating the effectiveness of our approach. This study provides valuable insights for the design of green IoT systems by optimizing the transmit power in NOMA-MEC networks.
Figures:
Figure 1. System model of a WPMEC network with a user-cooperative wireless-powered MEC system.
Figure 2. Flowchart for convexification of Problem P1.
Figure 3. Energy consumption in different schemes versus the latency constraint T.
Figure 4. Offloaded data in different schemes versus the latency constraint T.
Figure 5. Energy consumption in different schemes versus input computation bits at MD2.
Figure 6. Offloaded data in different schemes versus input computation bits at MD2.
Figure 7. Energy consumption in different schemes versus transmit power p0 of the HAP.
Figure 8. Offloaded data in different schemes versus transmit power p0 of the HAP.
Figure 9. Energy consumption in different schemes versus transmit power constraint P2^max of MD2.
Figure 10. Energy consumption in different schemes versus the distance between MD1 and MD2.
Figure 11. Offloading strategy with different distances between MD1 and MD2.
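
The deadline-energy trade-off at the heart of this problem can be seen by inverting the Shannon rate: the sketch below computes the smallest transmit power (and hence energy) that pushes a task's bits through an AWGN link within a deadline. The parameter values are illustrative, not the paper's system setup.

```python
# Minimal transmit power for L bits within deadline T over an AWGN link.
# Toy parameters; the paper's multi-user NOMA/backscatter setting is richer.
def min_transmit_energy(bits, deadline, bandwidth, gain, noise_power):
    required_rate = bits / deadline                       # bits per second
    snr_needed = 2 ** (required_rate / bandwidth) - 1     # invert Shannon capacity
    power = snr_needed * noise_power / gain               # minimal transmit power
    return power, power * deadline                        # watts, joules

p, e = min_transmit_energy(bits=1e6, deadline=0.5, bandwidth=1e6,
                           gain=1e-6, noise_power=1e-9)
print(f"power={p:.4f} W, energy={e:.4f} J")   # power=0.0030 W, energy=0.0015 J
# tightening the deadline raises the required rate, and power grows
# exponentially in it, which is why deadline constraints dominate the energy
```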
22 pages, 1366 KiB  
Article
Mobility-Aware Task Offloading and Resource Allocation in UAV-Assisted Vehicular Edge Computing Networks
by Long Chen, Jiaqi Du and Xia Zhu
Drones 2024, 8(11), 696; https://doi.org/10.3390/drones8110696 - 20 Nov 2024
Abstract
The rapid development of the Internet of Vehicles (IoV) and intelligent transportation systems has led to increased demand for real-time data processing and computation in vehicular networks. To address these needs, this paper proposes a task offloading framework for UAV-assisted Vehicular Edge Computing (VEC) systems, which considers the high mobility of vehicles and the limited coverage and computational capacities of drones. We introduce the Mobility-Aware Vehicular Task Offloading (MAVTO) algorithm, designed to optimize task offloading decisions, manage resource allocation, and predict vehicle positions for seamless offloading. MAVTO leverages container-based virtualization for efficient computation, offering flexible resource allocation across multiple offload modes: direct, predictive, and hybrid. Extensive experiments using real-world vehicular data demonstrate that the MAVTO algorithm significantly outperforms other methods in terms of task completion success rate, especially under varying task data volumes and deadlines.
(This article belongs to the Special Issue UAV-Assisted Intelligent Vehicular Networks 2nd Edition)
Figures:
Figure 1. Task offloading for vehicles moving in both directions in UAV-assisted Vehicular Edge Computing.
Figure 2. Direct offloading model.
Figure 3. Prediction offloading model.
Figure 4. Mixed offloading model.
Figure 5. Example diagram for calculating the remaining travel distance of the vehicle.
Figure 6. The performance of different task offloading sequences under a 95% Tukey HSD confidence interval.
Figure 7. The performance of different task offloading strategies under a 95% Tukey HSD confidence interval.
Figure 8. The performance of different resource allocation strategies under a 95% Tukey HSD confidence interval.
Figure 9. Interaction plots of the compared algorithms for tests with different vehicle numbers and task data volumes under a 95% Tukey HSD confidence interval.
Figure 10. Interaction plots of the compared algorithms for tests with different container numbers and task data volume intervals under a 95% Tukey HSD confidence interval.
Figure 11. Interaction plots of the compared algorithms for tests with different vehicle numbers and task deadlines under a 95% Tukey HSD confidence interval.
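
A hedged sketch of the mode selection the abstract implies: predict where the vehicle will be when the task completes and pick direct, predictive, or hybrid offloading accordingly. The 1-D constant-speed motion model and circular coverage geometry are illustrative assumptions, not MAVTO itself.

```python
# Mobility-aware offload mode choice under toy geometry assumptions.
def predict_position(x, speed, t):
    return x + speed * t          # constant-speed 1-D motion model

def choose_mode(x, speed, t_finish, uav_centers, radius):
    x_done = predict_position(x, speed, t_finish)
    serving = min(uav_centers, key=lambda c: abs(c - x))      # current UAV
    target = min(uav_centers, key=lambda c: abs(c - x_done))  # UAV at completion
    if abs(x_done - serving) <= radius:
        return "direct", serving
    if abs(x_done - target) <= radius:
        return "predictive", target     # pre-stage the result at the next UAV
    return "hybrid", (serving, target)  # split between both coverage areas

print(choose_mode(x=0.0, speed=20.0, t_finish=30.0,
                  uav_centers=[0.0, 500.0, 1000.0], radius=300.0))
# ('predictive', 500.0): the vehicle exits the first UAV's cell mid-task
```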
19 pages, 2768 KiB  
Article
Reinforcement-Learning-Based Edge Offloading Orchestration in Computing Continuum
by Ioana Ramona Martin, Gabriel Ioan Arcas and Tudor Cioara
Computers 2024, 13(11), 295; https://doi.org/10.3390/computers13110295 - 14 Nov 2024
Abstract
The AI-driven applications and the large volumes of data generated by IoT devices connected to large-scale utility infrastructures pose significant operational challenges, including increased latency, communication overhead, and computational imbalances. Addressing these challenges is essential for shifting workloads from the cloud to the edge and across the entire computing continuum. To achieve this, however, significant challenges must still be addressed, particularly in decision making to manage the trade-offs associated with workload offloading. In this paper, we propose a task-offloading solution using Reinforcement Learning (RL) to dynamically balance workloads and reduce overloads. We have chosen the Deep Q-Learning algorithm and adapted it to our workload offloading problem. The reward system considers the node's computational state and type to increase the utilization of computational resources while minimizing latency and bandwidth utilization. A knowledge graph model of the computing continuum infrastructure is used to address environment modeling challenges and facilitate RL. The learning agent's performance was evaluated using different hyperparameter configurations and varying episode lengths and knowledge graph model sizes. Results show that a low, steady learning rate and a large buffer size are important for a good learning experience. Additionally, the solution offers strong convergence, with relevant workload task and node pairs identified after each learning episode. It also demonstrates good scalability, as the number of offloading pairs and actions increases with the size of the knowledge graph and the episode count.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
Figures:
Figure 1. RL-based offloading solution.
Figure 2. Hyperparameter configurations.
Figure 3. The mean reward evolution for a neutral penalty (0).
Figure 4. Mean reward for the 100 steps/episode case.
Figure 5. Loss for the 100 steps/episode case.
Figure 6. Mean reward for the 250 steps/episode case.
Figure 7. Loss for the 250 steps/episode case.
Figure 8. Mean reward for the 500 steps/episode case.
Figure 9. Loss for the 500 steps/episode case.
Figure 10. DQN training behavior for knowledge graphs with more than 1000 nodes and 100 steps/episode.
Figure 11. DQN training behavior for knowledge graphs with more than 1000 nodes and 250 steps/episode.
Figure 12. DQN training behavior for knowledge graphs with more than 1000 nodes and 500 steps/episode.
Figure 13. A2C training behavior for knowledge graphs with more than 1000 nodes and 100 steps/episode.
Figure 14. A2C training behavior for knowledge graphs with more than 1000 nodes and 250 steps/episode.
Figure 15. A2C training behavior for knowledge graphs with more than 1000 nodes and 500 steps/episode.
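
One plausible reading of the reward design described in the abstract is sketched below: an offloading action is rewarded for relieving an overloaded source node, weighted by the target node's type, and penalized by the latency and bandwidth cost of the move. All weights and thresholds are illustrative assumptions, not the paper's definition.

```python
# Hypothetical reward shaping over computing-continuum node types.
TYPE_WEIGHT = {"edge": 1.0, "fog": 0.8, "cloud": 0.5}  # assumed preference order

def offload_reward(src_util, dst_util, dst_type, task_load,
                   latency_cost, bandwidth_cost,
                   w_balance=1.0, w_lat=0.5, w_bw=0.3):
    overload_relief = max(src_util - 0.8, 0.0)         # reward easing hot nodes
    headroom = max(1.0 - (dst_util + task_load), 0.0)  # keep the target cool
    gain = TYPE_WEIGHT[dst_type] * (overload_relief + headroom)
    return w_balance * gain - w_lat * latency_cost - w_bw * bandwidth_cost

# moving a 0.2-utilization task off a 95%-loaded node onto a half-idle edge node
print(round(offload_reward(0.95, 0.5, "edge", 0.2,
                           latency_cost=0.1, bandwidth_cost=0.2), 3))  # 0.34
```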
18 pages, 776 KiB  
Article
Joint Computation Offloading and Trajectory Optimization for Edge Computing UAV: A KNN-DDPG Algorithm
by Yiran Lu, Chi Xu and Yitian Wang
Drones 2024, 8(10), 564; https://doi.org/10.3390/drones8100564 - 9 Oct 2024
Cited by 1
Abstract
Unmanned aerial vehicles (UAVs) are widely used to improve the coverage and communication quality of wireless networks and to assist mobile edge computing (MEC) due to their flexible deployment. However, UAV-assisted MEC systems also face challenges in computation offloading and trajectory planning in dynamic environments. This paper employs deep reinforcement learning to jointly optimize computation offloading and trajectory planning for a UAV-assisted MEC system. Specifically, this paper investigates a general scenario where multiple pieces of user equipment (UE) offload tasks to a UAV equipped with a MEC server to collaborate on a complex job. By fully considering UAV and UE movement, the computation offloading ratio, and blockage relations, a joint computation offloading and trajectory optimization problem is formulated to minimize the maximum computational delay. Due to the non-convex nature of the problem, it is converted into a Markov decision process and solved by the deep deterministic policy gradient (DDPG) algorithm. To enhance the exploration capability and stability of DDPG, the K-nearest neighbor (KNN) algorithm is employed, yielding KNN-DDPG. Moreover, prioritized experience replay, with the constant learning rate replaced by a decaying learning rate, is utilized to enhance convergence. To validate the effectiveness and superiority of the proposed algorithm, KNN-DDPG is compared with the benchmark DDPG algorithm. Simulation results demonstrate that KNN-DDPG converges and achieves a 3.23% delay reduction compared to DDPG.
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)
Figures:
Figure 1. UAV-assisted model.
Figure 2. KNN-DDPG algorithm framework.
Figure 3. Convergence of KNN-DDPG and DDPG.
Figure 4. Delay with different task sizes.
Figure 5. Exploration of the state spaces of KNN-DDPG and DDPG.
Figure 6. Delay with different amounts of UE.
Figure 7. Delay with different discount factors.
Figure 8. Delay with different learning rates.
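
A minimal sketch of the KNN refinement described above: the actor's continuous proto-action is snapped to its K nearest members of a valid action set, and the critic picks the best candidate (a Wolpertinger-style step). The toy action set and critic below are stand-ins, not the paper's networks.

```python
import numpy as np

# KNN action refinement around a DDPG proto-action; toy action set and critic.
def knn_action(proto, action_set, critic_q, state, k=3):
    dists = np.linalg.norm(action_set - proto, axis=1)
    candidates = action_set[np.argsort(dists)[:k]]        # K nearest actions
    q_values = [critic_q(state, a) for a in candidates]   # let the critic choose
    return candidates[int(np.argmax(q_values))]

rng = np.random.default_rng(0)
action_set = rng.uniform(0, 1, size=(64, 2))   # e.g. (offload ratio, heading) grid
state = rng.standard_normal(4)
proto = np.array([0.7, 0.3])                   # actor output before refinement
toy_critic = lambda s, a: -np.abs(a - 0.5).sum()   # toy critic prefers mid-range
print(knn_action(proto, action_set, toy_critic, state, k=3))
```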
14 pages, 1311 KiB  
Article
Decision Transformer-Based Efficient Data Offloading in LEO-IoT
by Pengcheng Xia, Mengfei Zang, Jie Zhao, Ting Ma, Jie Zhang, Changxu Ni, Jun Li and Yiyang Ni
Entropy 2024, 26(10), 846; https://doi.org/10.3390/e26100846 - 7 Oct 2024
Abstract
Recently, the Internet of Things (IoT) has witnessed rapid development. However, the scarcity of computing resources on the ground has constrained the application scenarios of IoT. Low Earth Orbit (LEO) satellites have drawn attention due to their broader coverage and shorter transmission delay. They are capable of offloading more IoT computing tasks to mobile edge computing (MEC) servers with lower latency, addressing the issue of scarce computing resources on the ground. Nevertheless, it is highly challenging to share bandwidth and power resources among multiple IoT devices and LEO satellites. In this paper, we explore an efficient data offloading mechanism in the LEO satellite-based IoT (LEO-IoT), where LEO satellites forward data from terrestrial devices to the MEC servers. Specifically, by optimally selecting the forwarding LEO satellite for each IoT task and allocating communication resources, we aim to minimize the data offloading latency and energy consumption. In particular, we employ the state-of-the-art Decision Transformer (DT) to solve this optimization problem. We initially obtain a pre-trained DT through training on a specific task. Subsequently, the pre-trained DT is fine-tuned on a small quantity of data from the new task, enabling it to converge rapidly, with less training time and superior performance. Numerical simulation results demonstrate that, in contrast to the classical reinforcement learning approach (Proximal Policy Optimization), the convergence speed of DT can be increased by up to three times and the performance can be improved by up to 30%.
(This article belongs to the Section Information Theory, Probability and Statistics)
Figures:
Figure 1. The data offloading of the LEO-IoT network.
Figure 2. Decision Transformer model.
Figure 3. Cumulative reward comparison of DT-FT and PPO in different scenarios.
Figure 4. Latency comparison of DT-FT and PPO in different scenarios.
Figure 5. Energy consumption comparison of DT-FT and PPO in different scenarios.
Figure 6. Latency and energy comparison of DT-FT and PPO for different P_U.
Figure 7. Latency and energy comparison of DT-FT and PPO for different B_tot.
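
A compact PyTorch sketch of the Decision Transformer recipe described above: return-to-go, state, and action tokens are interleaved, a causal transformer predicts each action from the tokens before it, and fine-tuning reuses the pre-trained weights on a small new-task dataset. The dimensions and training loop are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

# Minimal Decision Transformer sketch; sizes are illustrative assumptions.
class TinyDT(nn.Module):
    def __init__(self, state_dim=8, act_dim=4, dim=64):
        super().__init__()
        self.embed = nn.ModuleDict({
            "rtg": nn.Linear(1, dim), "state": nn.Linear(state_dim, dim),
            "action": nn.Linear(act_dim, dim)})
        layer = nn.TransformerEncoderLayer(dim, 4, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, act_dim)

    def forward(self, rtg, states, actions):
        # interleave (return-to-go, state, action) into one sequence of length 3T
        toks = torch.stack([self.embed["rtg"](rtg), self.embed["state"](states),
                            self.embed["action"](actions)], dim=2).flatten(1, 2)
        n = toks.size(1)
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        h = self.backbone(toks, mask=mask)       # causal attention
        return self.head(h[:, 1::3])             # predict a_t from (..., R_t, s_t)

def fine_tune(model, loader, epochs=3, lr=1e-4):
    # few-shot adaptation: reuse pre-trained weights, train briefly on new data
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for rtg, states, actions in loader:      # small new-task dataset
            loss = nn.functional.mse_loss(model(rtg, states, actions), actions)
            opt.zero_grad(); loss.backward(); opt.step()
```

Conditioning on return-to-go is what allows the fine-tuned model to trade latency against energy by simply asking for a higher target return, rather than retraining a reward-specific policy from scratch.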
19 pages, 1201 KiB  
Article
Energy-Efficient Joint Partitioning and Offloading for Delay-Sensitive CNN Inference in Edge Computing
by Zhiyong Zha, Yifei Yang, Yongjun Xia, Zhaoyi Wang, Bin Luo, Kaihong Li, Chenkai Ye, Bo Xu and Kai Peng
Appl. Sci. 2024, 14(19), 8656; https://doi.org/10.3390/app14198656 - 25 Sep 2024
Cited by 1
Abstract
With the development of deep learning foundation model technology, computing tasks have become more complex, and the computing resources and memory they require have grown substantially. Since task offloading to cloud servers has long been known to have drawbacks, such as high communication delay and low security, task offloading is mostly carried out on the edge servers of the Internet of Things (IoT) network. However, edge servers in IoT networks are characterized by tight resource constraints and often dynamic data sources. Therefore, how to offload deep learning foundation model services onto edge servers has become a new research topic. Existing task offloading methods either cannot accommodate massive CNN architectures or require substantial communication overhead, leading to significant delays and energy consumption. In this paper, we propose a parallel partitioning method based on matrix convolution that partitions large CNN inference tasks into subtasks that can be executed in parallel, meeting the constraints of edge devices with limited hardware resources. We then model and mathematically formulate the task offloading problem. In a multi-edge-server, multi-user, and multi-task edge-end system, we propose a task-offloading method that balances the trade-off between delay and energy consumption. It adopts a greedy algorithm to optimize task-offloading decisions and terminal device transmission power to maximize the benefits of task offloading. Finally, extensive experiments verify the effectiveness of our algorithm.
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
Figures:
Figure 1. A multi-user, multi-edge-server edge computing system.
Figure 2. DAG for the partitioning process of the general CNN network framework.
Figure 3. Partitioning method for CNN.
Figure 4. Flowchart of GPOA.
Figure 5. The influence of the number of edge node CPU cores.
Figure 6. The influence of the data size.
Figure 7. The influence of the request arrival rate.
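
A hedged sketch of the greedy offloading step described in the abstract: subtasks are considered in order of decreasing best-case utility, and each is placed on the (server, power level) pair that maximizes a delay-energy utility subject to server capacity. The utility model and capacity bookkeeping are assumptions for illustration, not the authors' GPOA.

```python
# Greedy (server, power) assignment for partitioned CNN subtasks.
# `utility(task, server, power)` is an assumed callable returning the weighted
# delay/energy saving versus local execution; tasks are dicts with id and load.
def greedy_offload(tasks, servers, power_levels, utility, capacity):
    load = {s: 0.0 for s in servers}
    plan = {}
    # highest standalone utility first, so high-gain subtasks get servers early
    for task in sorted(tasks, key=lambda t: -max(
            utility(t, s, p) for s in servers for p in power_levels)):
        best = max(((utility(task, s, p), s, p)
                    for s in servers for p in power_levels
                    if load[s] + task["load"] <= capacity[s]),
                   default=None)
        if best and best[0] > 0:                 # only offload when it pays off
            _, s, p = best
            plan[task["id"]] = (s, p)
            load[s] += task["load"]
    return plan                                  # unassigned subtasks run locally
```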