Energy-Efficient Scheduling Based On Task Prioritization in Mobile Fog Computing
https://doi.org/10.1007/s00607-022-01108-y
REGULAR PAPER
Received: 20 August 2021 / Accepted: 14 July 2022 / Published online: 28 September 2022
© The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2022
Abstract
Mobile network processing and the edge computing paradigm can be integrated into a unit called Mobile Fog Computing (MFC) in fifth-generation networks. Because Mobile Devices (MDs) have limited computing capacity, including limited CPU power, storage, memory, and battery life, computationally intensive tasks are migrated from the MDs to the MFC. In this paper, we formulate an optimization scheme based on the Greedy Knapsack Offloading Algorithm (GKOA) to minimize the energy consumption of the MDs and save their limited resource capacity. For resource allocation and dynamic scheduling, we present a dynamic scheduling algorithm based on priority queues. We design two queues: in the high time queue, tasks with high execution times have high priority, and in the low time queue, tasks with low execution times have high priority. These two priority queues work together and are called the High-Low Priority Scheduling (HLPS) model. Numerical results demonstrate that, compared with local computing, the GKOA scheme improves energy efficiency by 19%, system overhead by 13.87%, and average delay by 8.5% on the MD side. Also, our proposed scheduling algorithm outperforms several benchmark algorithms in terms of waiting time, delay, service level, average response time, and the number of scheduled tasks on the MFC side.
Mohsen Nickray (corresponding author)
m.nickray@qom.ac.ir
Shamsollah Ghanbari
myrshg@gmail.com
1 Introduction
The Internet of Things (IoT) is one of the most popular trends in today’s world. Smart mobile devices and sensors have seen advances in technology and design that enable many novel mobile services and applications.
Owing to the high computation demands and the high energy consumption of MDs, Mobile Cloud Computing (MCC) has served as an effective paradigm in past years; it is a central platform for resource allocation and network provisioning in the upper layer of the network [1]. Fog computing was first introduced by Cisco in 2012 and is defined as a platform that extends computing, storage, and network services between end users and cloud-based servers [2]. Multi-Access Edge Computing (MEC) is being developed by the European Telecommunications Standards Institute (ETSI). MEC and Mobile Fog Computing (MFC) were created to solve cloud problems by providing distributed resources and services, namely computing, storage, and network control, for Mobile Devices (MDs). MFC is a new paradigm that extends MCC by adding a new layer between the cloud and its end users. MFC works with the cloud, whereas MEC is usually defined by the exclusion of the cloud. MFC provides faster and better Quality of Service (QoS) than MCC [3] because it is in close proximity to cellular subscribers. Using MFC saves mobile resources by offloading computational tasks from the MD to the MFC. Sensitive tasks either execute locally on the MD or are sent to the MFC. An important challenge is how to select the tasks to offload from the MD and how to schedule and execute tasks without delay in the MFC [4, 5]. Offloading is a decision model that selects the best task set on the MD side and sends it to the MFC layer. The best set is the selection that is most beneficial in terms of the MD's limited resource consumption.
Scheduling is the activity of the task manager that handles tasks and assigns the best resources based on a particular strategy for faster response times [6]. We implement the offloading and dynamic scheduling algorithms on the MD and MFC sides, respectively. In this paper, we use several Virtual Machines (VMs) with multiple processors on the MFC side. We propose an offloading algorithm based on the greedy knapsack technique that chooses the set of tasks that consume the most energy on the MDs. GKOA is a general design technique used for optimization problems; it simply chooses the best option, in terms of benefit, at each time slot [7]. Our goal is to reduce the energy consumption of MDs with the GKOA algorithm and to execute the offloaded tasks speedily. Then, we apply a set of policies and define priority scheduling with a time slot in the HLPS algorithm by configuring two priority queues, namely the High Time Queue (HTQ) and the Low Time Queue (LTQ). Also, we propose a procedure for VM allocation: after a task is prioritized, the best VM is allocated to it, a decision based on the existence or lack of sufficient resources. We compare our proposed HLPS model with the First Come First Served (FCFS), Round Robin (RR), and Shortest Job First (SJF) algorithms [8, 9].
The key contributions of this paper can be summarized as follows.
1. Mobile devices in the IoT consume considerable energy when running applications. Also, the execution speed of real-time applications on mobile devices is a very important factor. Thus, we use the fog layer to run latency-sensitive tasks with high energy consumption.
2. All tasks have important attributes such as completion time and energy consumption. A dynamic scheduling algorithm based on priority queues with a time-slot approach is necessary to execute tasks speedily and optimize service levels.
3. To minimize waiting time, average response time, and delay, some tasks are executed locally and some are offloaded to the fog layer. To allocate the best VM to each task, a simple optimal allocation policy is very important. The results show improvements in waiting time, waiting tasks, delay, system overhead, mean response time, and the number of scheduled tasks over several of the works reviewed in the related work.
The rest of this paper is organized as follows. In Sect. 2, we review related works. In Sect. 3, we define the system model and formulation. Section 4 presents our proposed approach for offloading and scheduling, and simulation results are provided in Sect. 5. Finally, in Sect. 6, we conclude our work.
2 Related work
This section presents related works on the scheduling and offloading challenges that have recently been addressed in different studies. We divide mobile computing environments into the subsections MFC, MEC, mobile edge cloud, and MCC. Moreover, a summary of related works is presented in Table 1, in which we highlight the aspects, key features, and algorithms of the presented works.
In [10], the authors used different queue models and an energy-harvesting model for socially aware computation offloading in MFC. Their approach solves the generalized Nash equilibrium problem and reduces the social group execution cost function. The authors in [11] presented a learning algorithm based on Markov decision processes for offloading tasks to the fog in a real environment; their algorithm minimized delay and computational power consumption. In another work [12], scheduling was used for load balancing in fog computing with Linear Prediction Graphical (LPG) models, reducing deadline misses and total runtime for connected car systems. In [13], the authors presented task scheduling in fog computing with homogeneous nodes to maximize energy efficiency. In [14], the authors presented a monitoring biosurveillance framework for the detection and localization of biological threats with fog and mobile edge computing support, where fog nodes aggregate monitoring data within their regions.
In [15], the authors offered a structure for an edge environment that includes three layers: the migration collaboration layer, the computing sharing layer, and the remote assisted layer. They proposed the Optimal Routing for Task Dispatching (OPTD) and Initialization Scheduling (IS) algorithms, which improve routing and resource assignment for each task. OPTD also reduced task traffic, the time variance of each task, and the makespan of the tasks. In [16], the authors presented a divisible load scheduling (DLS) algorithm for scheduling resources in three-layer fog computing environments; they solved the problem of balanced data load distribution on fog nodes. In [17], an intelligent monitoring program was presented to offload tasks to the fog, the cloud, or local execution. Deep Reinforcement Learning (DRL) and Greedy Auto-Scaling Deep rEinforcement learning-based Offloading (GASDEO) methods were used to select the best fog nodes for offloading; the results improved energy consumption, total execution cost, total network usage, and delay by 1%, 4%, 2%, and 0.07%, respectively. In [18], the authors presented two schedulers based on integer linear programming, named CASSIA-RR, that schedule tasks in the cloud and the fog. Their results showed that their algorithm works better than traditional methods such as RR.
Some researchers created clones of several smart devices in the cloud environment, and movable tasks were moved from the device to its clone. If a device is lost, a virtual version of it remains available; another advantage of this design is that it copes with the hardware limitations of smart devices [19]. Another work [20] proposed a big data analysis approach that integrated MCC and Hadoop to handle large amounts of data and fix traffic constraints and delays. The authors of [21] studied the Multi-user Computation Partitioning Problem (MCPP) for minimizing average completion and delay time; they used an offline heuristic algorithm, namely the Search Adjust in Performance-Resource-Load (SAPRL) model, to solve this problem for latency-sensitive applications. HealthEdge was proposed for task scheduling: the researchers set various processing priorities for different tasks based on collected human health data, and HealthEdge reduced the total processing time as much as possible [22]. A Round-Robin (RR) algorithm was proposed for the cloud computing environment and demonstrated better response time and load balancing than the other algorithms in [23]. Another example presented a scheduling algorithm in the cloud that minimizes the average latency of traffic flows and the Average Waiting Time (AWT) at the bottom of the LTE stack [24]. The authors of [25] proposed an offloading decision and transmission scheduling scheme based on Lyapunov optimization to save average execution time and energy consumption. A computation model for offloading decision making, named Context-aware Multi-criteria Offloading Decision (CMOD), was proposed to minimize energy consumption and execution time [26]. Researchers proposed a new hierarchical model for cloud computing that consists of a user layer and a cloud control layer with a machine learning algorithm [27]. An integer-encoding-based adaptive genetic algorithm was proposed for the offloading decision to improve single-task, multi-component, and multi-site offloading [28]. A genetic decision algorithm was proposed to solve the task offloading problem in multi-site environments; this algorithm improves execution time and energy consumption in MCC [29].
Edge nodes were deployed as roadside units in vehicular networks for different Machine-to-Machine (M2M) services, such as OpenM2M, to improve latency and traffic on the network [30]. The authors of [31] proposed an offloading algorithm using stochastic optimization techniques and Lyapunov optimization on MEC for a single device.
Their results saved energy consumption for thousands of IoT devices [31]. Another example was Scheduler Edge (SE), which minimizes the average latency of traffic flows at the bottom of the LTE stack [32]. The authors of [33] proposed a scheduling model based on the base station as a Queuing Network Theory (QNT) system consisting of multiple multi-type servers; this work optimizes the traditional cellular Up-link Transmission (UT) service for file or message uploading and the traditional cellular Down-link Transmission (DT) service for file downloading or media streaming. A multi-server system based on the Markov approximation technique was proposed in [34]; in this work, a single MD sent computational tasks to multiple servers in order to minimize execution time and mobile energy consumption. A gateway-based edge computing service model was proposed using virtualization technology such as Docker to reduce latency, transmission, and network bandwidth from and to the cloud; this model improved the operational efficiency of the edge nodes [35]. An optimization problem based on a Promoted-By-Probability (PBP) scheme was proposed to improve the combination of energy cost and packet congestion; the authors used an improved krill herd metaheuristic optimization algorithm to minimize queuing congestion [36]. Researchers proposed Non-Orthogonal Multiple Access (NOMA) for optimizing the efficiency of multi-access radio transmission; in this work, a layered structure of the problem with an efficient algorithm was used to minimize latency for a single mobile user [37]. The authors in [38] studied the trade-off between energy consumption and latency in single- and multi-cell MEC networks and proposed an iterative search algorithm that minimizes the sum of energy consumption and execution latency. A mobile edge computing framework was used for scheduling delay-sensitive applications; the authors applied a queuing model, namely Multi-User Mobile Computation Offloading and Transmission scheduling (MOTM), to improve the pricing rule and transmission scheduling control [39]. A coalition game-based algorithm was proposed to improve the overall latency and the computation and storage capability of heterogeneous services; it reduced the sum of user delays by 27.8% and 82.1% on average [40]. The authors of [41] designed a distributed computation offloading algorithm based on a game-theoretic approach to improve computation offloading performance for large user populations. The authors of [42] proposed a Dynamic Computation Offloading Algorithm (DCOA) as an optimization problem to minimize offloading costs; the results presented the trade-off between offloading cost and performance. Mixed Integer Linear Programming (MILP) was proposed for the multi-user computation offloading problem; to handle the computational complexity of the formulation, the authors proposed an Iterative Heuristic MEC Resource Allocation (IHRA) algorithm that makes the offloading decision dynamically with low execution latency and high offloading efficiency [43]. An Overhead-Optimizing Multi-device Scheduling Game (OOMSG) algorithm was proposed to minimize device overhead and complete task scheduling [44]. A Software-Defined Networking (SDN) based edge computing architecture called Software-Defined Infrastructure (SDI) was proposed; the authors used OpenFlow and OpenStack to virtualize service resources for building smart applications, and SDI scheduled network resources flexibly [45]. The authors in [46] proposed an Energy-Efficient Computation Offloading (EECO) mechanism to minimize energy consumption.
In [47], the authors proposed a new Energy-Efficient Deep-learning-based Offloading Scheme (EEDOS) that trains a deep-learning-based smart decision-making algorithm to minimize the energy consumption and cost function.
The authors of [48] proposed a Markov Decision Process (MDP) model to obtain an optimal policy that minimizes computation and offloading costs. The authors of [49] used Queuing Theory (QT) with an M/M/m model for moving vehicle-based edge nodes to minimize the average offloading response time. An optimal algorithm was proposed for stochastic tasks to improve the scheduling problem under full knowledge and to improve performance in a hierarchical manner [50]. A Device-to-Device (D2D) crowd system was proposed for the mobile edge cloud, improving energy consumption and parallel offloading [51].
The works proposed recently do not consider all of the important parameters for reducing costs and increasing system performance; each improves only some of the parameters, which is their main shortcoming. A summary of existing survey articles on mobile computing is shown in Table 1.
We consider the most important parameters, such as delay, the minimum number of processors, and service level, which were not addressed in the related works. These parameters are discussed in the following sections, and we compare them across different algorithms.
3 System model and formulation
In this section, our optimal offloading and dynamic scheduling models are proposed to minimize the resource usage of MDs and to improve efficiency on the MFC side. Then, the problem formulation of our system is presented.
We use the MFC layer between the cloud and the MDs. The internal structure of this layer provides users with faster access to the required services. The MFC layer contains several VMs. The MDs connect to the MFC over Wi-Fi, 5G, or 4G to offload computation tasks to the MFC. The MDs run delay-sensitive and real-time tasks such as augmented reality and facial recognition. We assume each MD can either execute a task locally or offload it to the MFC. Our system is designed on three levels. At the bottom, in the third level, several MDs run sensitive applications; the GKOA interface decides whether to offload tasks or execute them locally. In the second level, the scheduling server decides on task priorities and resource allocation. If no resource is available for assignment, tasks migrate to the cloud server at the top, in the first level. Figure 1 shows our system model.
In this section, we examine the details of the equations and parameters. The notation of this paper is summarized in Table 2.
Table 1 Comparison of scheduling methods in mobile computing

Category              | Algorithm           | Key features                                | Environment
MEC                   | MILP and IHRA [43]  | Latency and high offloading efficiency      | ARkit
MEC                   | OOMSG [44]          | Overhead and completion time                | Real
MEC                   | SDN and SDI [45]    | Network resources                           | OpenFlow and OpenStack
MEC                   | EECO [46]           | Energy consumption                          | Simulation
MEC                   | EEDOS [47]          | Energy consumption and cost function        | Simulation
Mobile edge and cloud | HEC [50]            | Latency and cost                            | MATLAB
Mobile edge and cloud | D2D [51]            | Energy consumption and parallel offloading  | D2D, ONE
Mobile edge and cloud | M/M/m QT [49]       | AWT and expected number of channel switches | MATLAB
Mobile edge and cloud | MDP [48]            | Computation and offloading costs            | OMNeT++
MCC                   | Clone [19]          | Virtual version of mobile                   | Real
MCC                   | Hadoop [20]         | Traffic constraints and delays              | CLOUDSIM
MCC                   | HealthEdge [22]     | Processing time                             | MATLAB
MCC                   | RR [23]             | Response time and load balancing            | MATLAB
MCC                   | LTE [24]            | AWT and latency                             | MATLAB
MCC                   | Lyapunov [25]       | Execution time and energy consumption       | MATLAB
MCC                   | MCPP and SAPRL [21] | Average completion and delay time           | MATLAB
MCC                   | CMOD [52]           | Energy consumption and execution time       | CLOUDSIM
MCC                   | Hierarchical [53]   | Machine learning                            | Simulation
MCC                   | MAGA [54]           | Energy usage, offloading and completion time | Simulation
MCC                   | GAMCO [26]          | Energy consumption and execution time       | MATLAB
Table 2 Summary of notation (Symbol, Definition)
There are n tasks in each MD, numbered from 1 to n. H = {h_1, h_2, ..., h_n} denotes the offloading decision set, where h_n ∈ {0, 1} is the computation offloading decision of the MD: if h_n = 1, the MD offloads the task to the MFC, and h_n = 0 means the task executes locally. Let the greedy solution be X = {x_1, x_2, ..., x_n}, where x_i indicates the fraction of task_i taken, with 0 ≤ x_i ≤ 1. The entry e.w[i] holds the weight of the i-th task and e[i] holds its benefit; the weight of a task is its energy consumption, and the benefit of a task is a constant value. GKOA selects tasks for offloading up to a maximum total weight W. The term x_i · e.w[i] denotes the contribution to the total weight in the knapsack, and x_i · p_i denotes the contribution to the load profit of task_i. So, the objective of GKOA is denoted in Eqs. (1) and (2):

$$\max \sum_{i=1}^{n} x_i \, p_i \qquad (1)$$

$$\text{subject to } \sum_{i=1}^{n} x_i \cdot e.w[i] \le W \qquad (2)$$

$$p_i = \frac{e[i]}{e.w[i]} \qquad (3)$$

In Eq. (3), p_i is the profit of the i-th task and P[1, 2, ..., n] is the profit array of all tasks selected by GKOA for offloading; e[i] is the benefit of the i-th task.
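For illustration, consider a small hypothetical instance (the values are ours, not from the evaluation): three tasks with energy weights e.w = [4, 3, 5], benefits e = [8, 3, 5], and capacity W = 6. By Eq. (3), p = [2, 1, 1]. The greedy order takes task 1 fully (x_1 = 1, using weight 4), then fills the remaining capacity with a fraction of task 2 (x_2 = 2/3, using weight 2), giving the objective value of Eq. (1) as 8 + (2/3) · 3 = 10.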
In this part, we denote I_n(d_n, s_n, c_n) as the execution parameters of a task, where d_n is the size of the input data and s_n is the computation size. The value of d_n is fixed on both the mobile and MFC sides. The local execution time of computing task I_n on the MD is:

$$t_n^l = \frac{c_n^l}{u_n^l} \qquad (4)$$

In Eq. (4), u_n^l and c_n^l denote the local computing ability and the CPU capacity for local execution of task I_n, respectively. The execution time on the MFC layer is:

$$t_n^M = \frac{c_n^M}{u_n^M} \qquad (5)$$

In Eq. (5), u_n^M and c_n^M denote the computing ability and the CPU capacity for executing the offloaded task I_n, respectively. The transmission delay and energy consumption of offloading the i-th task are:

$$t_n^{of} = t_n^h + t_n^{tr} \qquad (6)$$

$$E_n^{of} = E_n^{tr} + E_n^M \qquad (7)$$

In Eqs. (6) and (7), $t_n^h = s_n / c_n^M$ is the time of serving the request inside VM_k and $t_n^{tr} = d(t) / C^M$ is the estimated transmission time for offloading the task to the MFC. E_n^{tr} and E_n^M denote the energy consumption of transmission and of execution on the MFC side, respectively, which are obtained from the following equations:
$$E_n^{tr} = t_n^{tr} \cdot e^{tran} \qquad (8)$$

and

$$E_n^M = t_n^h \cdot e^{idle} \qquad (9)$$

In Eqs. (8) and (9), e^{tran} and e^{idle} denote the transmission and idle battery levels of the MD. The local energy consumption of local execution is given by:

$$E_n^l = t_n^l \cdot e^{active} \qquad (10)$$

In Eq. (10), e^{active} denotes the active battery level of the MD. According to Eqs. (4) and (10), the MD overhead of locally executed tasks is:

$$S_L = (\beta_n^l \cdot t_n^l) + (\beta_n^e \cdot E_n^l) \qquad (11)$$

In Eq. (11), β_n^l ∈ [0, 1] and β_n^e ∈ [0, 1] denote the weight parameters of execution time and energy consumption. For a delay-sensitive task, β_n^l = 1 and β_n^e = 0; for a task with high energy consumption, β_n^l = 0 and β_n^e = 1. In computer science, overhead is any combination of excess or indirect computation time, memory, energy consumption, or other resources required to perform a specific task. In our proposed work, the system overhead is the time and energy required to process all tasks offloaded by mobile users; we combine these values using the weight parameters.
Then, the overhead of the MFC layer, according to energy consumption and execution time, is obtained as:

$$S_M = (\beta_n^l \cdot (t_n^M + t_n^{of})) + (\beta_n^e \cdot E_n^{of}) \qquad (12)$$
According to Eqs. (11) and (12), we calculate the cost function over both the local and MFC offloading aspects as:

$$\varphi = \sum_{n \in N} \left\{ h_n \cdot S_M + (1 - h_n) \cdot S_L \right\} \qquad (13)$$

In Eq. (13), φ is the cost function of our proposed model, where S_M and S_L are the overheads on the MFC and MD sides, respectively.
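As a minimal sketch of how Eqs. (4) through (13) fit together, the following Python fragment (our illustration with hypothetical parameter values, not the authors' implementation) computes the per-task overheads and the total cost:

def local_overhead(c_l, u_l, e_active, beta_t, beta_e):
    t_l = c_l / u_l                       # Eq. (4): local execution time
    E_l = t_l * e_active                  # Eq. (10): local energy consumption
    return beta_t * t_l + beta_e * E_l    # Eq. (11): local overhead S_L

def mfc_overhead(c_M, u_M, s, d, C_M, e_tran, e_idle, beta_t, beta_e):
    t_M = c_M / u_M                       # Eq. (5): MFC execution time
    t_h = s / c_M                         # serving time inside the VM
    t_tr = d / C_M                        # estimated transmission time
    E_of = t_tr * e_tran + t_h * e_idle   # Eqs. (7)-(9): offloading energy
    return beta_t * (t_M + t_h + t_tr) + beta_e * E_of   # Eqs. (6), (12): S_M

def cost(h, S_M, S_L):
    # Eq. (13): h[n] = 1 offloads task n to the MFC, h[n] = 0 runs it locally
    return sum(hn * sm + (1 - hn) * sl for hn, sm, sl in zip(h, S_M, S_L))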
We assume that time is divided into time slots t_s ∈ {1, 2, ..., T}. We consider two priority queues for scheduling tasks, which act as follows:
Tasks with high execution times arrive and are served at time slot t_s with arrival rate λ_1, denoted b_H(t). Tasks with low execution times arrive and are served at time slot t_s with arrival rate λ_2, denoted b_L(t). We consider an execution-time threshold denoted τ. Also, we consider a threshold value t_w for the waiting time, equal to twice the time slot: t_w = 2 · t_s. The numbers of tasks entering the HTQ and LTQ queues at time slot t_s are denoted N_HT(t) and N_LT(t), respectively.
The arrival process is Poisson [27]. We need to determine the probability of n service requests in the MFC. We assume N denotes the maximum number of service requests that can arrive in the MFC, and μ is the service rate of each VM. The utilization is ρ = λ/μ, and P_0 denotes the probability of an empty system.
$$P_0 = \begin{cases} \dfrac{1-\rho}{1-\rho^{N+1}}, & \rho \ne 1 \\ \dfrac{1}{N+1}, & \rho = 1 \end{cases} \qquad (14)$$

$$P_n = \begin{cases} P_0 \, \rho^n = \rho^n \dfrac{1-\rho}{1-\rho^{N+1}}, & \rho \ne 1 \\ \dfrac{1}{N+1}, & \rho = 1 \end{cases} \qquad (15)$$

$$L = \begin{cases} \dfrac{\rho}{1-\rho} - \dfrac{(N+1)\rho^{N+1}}{1-\rho^{N+1}}, & \rho \ne 1 \\ \dfrac{N}{2}, & \rho = 1 \end{cases} \qquad (16)$$
Eqs. (14) and (15) determine the probability of n service requests with a maximum of N service requests. Eq. (16) denotes the total number of tasks in the system using Little's law [28]. The number of tasks in the waiting queue and the waiting time are calculated from the following formulas:

$$L_q = L - (1 - P_0) \qquad (17)$$

$$w_q = \frac{L_q}{\lambda} \qquad (18)$$
By Eqs. (17) and (18), the number of waiting tasks and the waiting time can be reduced by minimizing the number of service requests in the MFC.
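A minimal sketch of Eqs. (14) through (18), assuming the single-server finite-capacity (M/M/1/N) reading of the formulas (our reading, not the authors' code; the service rate and capacity below are assumed values):

def queue_metrics(lam, mu, N):
    rho = lam / mu                                    # utilization
    if rho == 1.0:
        P0 = 1.0 / (N + 1)                            # Eq. (14), rho = 1
        L = N / 2.0                                   # Eq. (16), rho = 1
    else:
        P0 = (1 - rho) / (1 - rho ** (N + 1))         # Eq. (14)
        L = rho / (1 - rho) \
            - (N + 1) * rho ** (N + 1) / (1 - rho ** (N + 1))  # Eq. (16)
    Lq = L - (1 - P0)                                 # Eq. (17): waiting tasks
    wq = Lq / lam                                     # Eq. (18): waiting time
    Pn = [P0 * rho ** n for n in range(N + 1)]        # Eq. (15): P(n requests)
    return P0, Pn, L, Lq, wq

# example with the paper's arrival rate lam = 250 and assumed mu = 300, N = 500
P0, Pn, L, Lq, wq = queue_metrics(250.0, 300.0, 500)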
4 Proposed approach for offloading and scheduling
The offloading and scheduling algorithms run on the MD and MFC sides, respectively. On the MFC side, resource allocation is performed according to the available resources. As a result, our proposed model works with three algorithms in three steps:
1. An optimal offloading step with GKOA on the MD side.
2. A dynamic scheduling step with HLPS on the MFC side.
3. A resource allocation step that selects the best resources on the MFC side.
We obtain all data from a real environment; we then evaluate and analyze the data and results in RStudio. All tasks are collected from several Android applications. The energy consumption coefficient of each computing cycle is obtained from the checkBatteryStatus function in the Android code. Execution time is the time a task takes to consume the threshold amount of energy. Offloaded tasks enter the priority queues on the MFC side. Tasks are also run under other queue disciplines, namely FCFS, RR, and SJF. FCFS is a simple queue that serves tasks in the order they arrive in the ready queue. RR is a scheduling queue in which each task is assigned a time slot. SJF is a scheduling queue that selects the task with the smallest execution time to execute next [8, 9].
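For reference, the three baseline disciplines can be sketched as follows (a toy Python example with hypothetical execution times, not the evaluation code):

from collections import deque

tasks = [("t1", 7), ("t2", 2), ("t3", 5)]            # (name, execution time)

fcfs = [name for name, _ in tasks]                   # FCFS: serve in arrival order
sjf = [name for name, _ in sorted(tasks, key=lambda t: t[1])]  # SJF: shortest first

def round_robin(tasks, slot=3):
    # RR: each task runs for one time slot, then requeues if unfinished
    q, order = deque(tasks), []
    while q:
        name, remaining = q.popleft()
        order.append(name)
        if remaining > slot:
            q.append((name, remaining - slot))
    return order                                     # here: t1, t2, t3, t1, t3, t1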
Algorithm 1 GKOA
Input: tasks {1, 2, ..., n}, e.w[1..n], e[1..n], W
Output: X
x[i] ← 0 for all i
maxvalue ← 0
Calculate p_i according to Eq. (3)
Sort the tasks in decreasing order of p_i:
for i = n; i > 0; i − − do
  for k = 1; k ≤ i; k + + do
    if p_{k−1} < p_k then
      Swap(e[k − 1], e[k])
      Swap(e.w[k − 1], e.w[k])
      Swap(p_{k−1}, p_k)
    end if
  end for
end for
for i = 1 to n do
  if e.w[i] ≤ W then
    x[i] = 1
    W = W − e.w[i]
    maxvalue = maxvalue + e[i]
  else
    x[i] = W / e.w[i]
    maxvalue = maxvalue + x[i] · e[i]
    W = 0
  end if
end for
return X
If the tasks are already sorted in decreasing order of p_i, the greedy loop takes O(n) time; otherwise, sorting dominates and the total time to compute X is O(n log n).
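A runnable Python rendering of Algorithm 1, under our reading of it as a fractional greedy knapsack (variable names follow the paper; the task data below are hypothetical):

def gkoa(ew, e, W):
    """ew[i]: energy weight of task i, e[i]: its benefit, W: knapsack capacity."""
    n = len(ew)
    p = [e[i] / ew[i] for i in range(n)]              # Eq. (3): profit density
    order = sorted(range(n), key=lambda i: p[i], reverse=True)
    x, remaining, value = [0.0] * n, W, 0.0
    for i in order:
        if ew[i] <= remaining:                        # take the whole task
            x[i] = 1.0
            remaining -= ew[i]
            value += e[i]
        elif remaining > 0:                           # take the largest feasible fraction
            x[i] = remaining / ew[i]
            value += x[i] * e[i]
            remaining = 0.0
    return x, value                                   # x[i] = 1: offload task i

# hypothetical example: three tasks competing for capacity W = 200
x, value = gkoa(ew=[60.0, 90.0, 80.0], e=[120.0, 90.0, 100.0], W=200.0)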
The detailed procedure of HLPS is depicted in Algorithm 2. The output X of Algorithm 1 is one of the inputs of Algorithm 2.
Algorithm 2 HLPS
Input: X, τ, w_q, t_i^M, t_s and I_n(d_n, s_n, c_n)
Output: N_HT, N_LT
N_HT(t) ← 0
N_LT(t) ← 0
Calculate t_i^M according to Eq. (5)
for i = 1 to n do
  if t_i^M ≤ τ then
    LTQ ← task_i
    N_LT(t) = N_LT(t) + b_L(t)
  else
    HTQ ← task_i
    N_HT(t) = N_HT(t) + b_H(t)
  end if
  while (N_HT(t) + N_LT(t)) ≠ 0 do
    for all tasks in LTQ and HTQ do
      if t_i^M(t) > t_s then /** t_s is the time slot **/
        waiting queue ← task_i
        t_i^M(t) = t_i^M(t) − t_s
        L_q(t) ← L_q(t) + 1
      else
        N_LT(t) = N_LT(t) − b_L(t)
        N_HT(t) = N_HT(t) − b_H(t)
      end if
    end for
  end while
  for j = 1 to L_q do
    if t_i^M(t) ≤ τ then
      LTQ ← task_i
      N_LT(t) = N_LT(t) + b_H(t) + b_L(t)
      L_q(t) ← L_q(t) − 1
    else
      HTQ ← task_i
      N_HT(t) = N_HT(t) + b_H(t)
    end if
  end for
  while (j ≤ L_q) do
    Calculate w_q(j)(t) according to Eq. (18)
    if w_q(j)(t) ≥ t_w or w_q(j)(t) ≥ w_q(j + 1)(t) then
      Promote the priority of the task
      if t_i^M(t) ≤ τ then
        LTQ ← task_i
        N_LT(t) = N_LT(t) + b_H(t) + b_W(t)
        L_q(t) ← L_q(t) − 1
      else
        HTQ ← task_i
        N_HT(t) = N_HT(t) + b_W(t)
      end if
    end if
  end while
end for
return HTQ and LTQ
We discuss different situations of the two queues, HTQ and LTQ, as follows. At the beginning, if there are no tasks in the HTQ and LTQ, then N_HT(t) = 0 and N_LT(t) = 0. If the execution time of a task is greater than τ, it is sent to the HTQ; otherwise, it is sent to the LTQ. A time slot t_s is assigned to every task in the HTQ and LTQ. If a task is not completed within t_s, it is transferred to the waiting queue. The remaining time of a waiting task is compared with the threshold τ and the appropriate queue is selected. At time t + 1:

$$N_{HT}(t+1) = N_{HT}(t) + b_H(t+1), \quad N_{LT}(t+1) = N_{LT}(t) + b_L(t+1) \qquad (19)$$

We apply a policy to reduce the waiting time of tasks in the waiting queue. If the waiting time of a task exceeds t_w, or the waiting time of the i-th task is greater than that of the next one, the task is upgraded to a higher priority. Tasks that come from the waiting queue with w_q ≥ t_w are denoted b_W(t). At time t = 2 · t_s:

$$N_{HT}(2t_s) = N_{HT}(2t_s - 1) + b_H(2t_s) + b_W(2t_s), \quad N_{LT}(2t_s) = N_{LT}(2t_s - 1) + b_L(2t_s) + b_W(2t_s) \qquad (20)$$

In this case, some tasks coming from the HTQ are transferred to the LTQ if their remaining execution time is lower than τ. At time t + k, we assume that all tasks in the HTQ are completed or added to the waiting queue. For all tasks in the waiting queue with t_n^M ≤ τ:

$$N_{HT}(t+k) = 0, \quad N_{LT}(t+k) = N_{LT}((t+k) - 1) + b_L(t+k) + b_H(t+k) \qquad (21)$$
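The queue dynamics of Eqs. (19) through (21) can be sketched per time slot as follows (our simplified Python rendering; the waiting-time promotion rule w_q ≥ t_w of Algorithm 2 is omitted for brevity, and the task values are hypothetical):

def hlps_slot(tasks, tau, ts):
    """One HLPS time slot. tasks: list of (name, remaining execution time)."""
    finished, waiting = [], []
    for name, rem in tasks:                  # every queued task receives one slot
        if rem <= ts:
            finished.append(name)            # completed within the slot
        else:
            waiting.append((name, rem - ts)) # L_q grows; remaining time shrinks
    # returning tasks are re-classified against tau, so a long task whose
    # remainder has become small gains the higher LTQ priority (Eq. (21))
    ltq = [t for t in waiting if t[1] <= tau]
    htq = [t for t in waiting if t[1] > tau]
    return finished, ltq, htq

# hypothetical: threshold tau = 3 and slot ts = 3
done, ltq, htq = hlps_slot([("t1", 7), ("t2", 2), ("t3", 5)], tau=3, ts=3)
# done = ["t2"]; t1 (4 left) re-enters the HTQ, t3 (2 left) re-enters the LTQ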
The steps of resource allocation are depicted in Algorithm 3. The set of HTQ and LTQ tasks is the input of Algorithm 3. Each VM request consists of three parameters: CPU, RAM, and capacity. The collections of candidate VMs are V_1{1, ..., n_1}, V_2{1, ..., n_2}, and V_3{1, ..., n_3}, based on available CPU, RAM, and capacity, respectively. The intersection of the three sets contains the most suitable VMs. Choosing an available VM follows two policies:
1. All VMs are busy, i.e., V_1 ∩ V_2 ∩ V_3 = ∅: the task migrates to the cloud.
2. At least one VM is free, i.e., V_1 ∩ V_2 ∩ V_3 ≠ ∅: a VM is assigned according to the three parameters CPU, RAM, and capacity.
Task i requires storage, RAM, and a processor to execute, and each VM holds some of these resources. The ratio of a VM's available resources to the task's required resources must be 1 or larger; that is, all three conditions $C_{vm}^k / s_i \ge 1$, $cap_{vm}^k / d_i \ge 1$, and $RAM_{vm}^k / ram_i \ge 1$ must hold. Any available VM that satisfies these conditions can be assigned. Each VM that has free resources, including CPU, RAM, and capacity, allocates them to processing tasks. If no VM is available for the i-th task, it migrates to the cloud. As a result, delay-sensitive tasks are sent to the cloud if all virtual machines are busy.
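A sketch of this allocation policy (our rendering; the VM and task fields are hypothetical):

def allocate(task, vms):
    """task = (s_i, d_i, ram_i): required CPU, capacity, and RAM.
    vms: dict name -> {"cpu", "cap", "ram"} of free resources.
    Returns the chosen VM name, or None to migrate the task to the cloud."""
    s_i, d_i, ram_i = task
    # V1, V2, V3: VMs with sufficient CPU, capacity, and RAM respectively
    V1 = {k for k, vm in vms.items() if vm["cpu"] / s_i >= 1}
    V2 = {k for k, vm in vms.items() if vm["cap"] / d_i >= 1}
    V3 = {k for k, vm in vms.items() if vm["ram"] / ram_i >= 1}
    candidates = V1 & V2 & V3        # intersection: all three conditions hold
    if not candidates:               # policy 1: no suitable VM, go to the cloud
        return None
    # policy 2: pick a free VM by its three parameters (largest headroom here)
    return max(candidates,
               key=lambda k: vms[k]["cpu"] + vms[k]["cap"] + vms[k]["ram"])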
Figure 2 shows the offloading and scheduling flowchart in our proposed system
model.
5 Evaluation
In this section, we test numerical examples using the proposed methods to illustrate their advantages and potential applications in scheduling.
We evaluate our proposed work with the GKOA and HLPS algorithms. We evaluate the performance of the proposed model using one PC with a 10-core Intel(R) CPU at 3.6 GHz and 8 GB of total RAM. The operating system is 64-bit Windows 10 for each VM. We use two VMs as edge nodes in the MFC and 2 MDs. Each VM works with 2 MB of memory and a 5-core processor. The total hard drive is 1 TB, of which each VM uses 200 GB. The MD model is an SM-A720F with an 8-core CPU at 2 GHz, 2815 MB of total RAM, and 4G and Wi-Fi networking. The mobile operating system is Android version 8.0.0. The battery capacity is 3600 mAh, and the battery level was 90% at the beginning. The idle, active, and transmitting power levels are 79%, 1.5%, and 2.2%, respectively. We created a connection between the MFC and the MDs in our environment via the Wi-Fi network. We obtain all data from the real environment and analyze the data and results in RStudio with C code. To calculate the cost function φ(t), we set β = 0.5. The data arrival rates are λ = 250 and λ = 500, and the GKOA capacities are W = 200 and W = 500.
First, we evaluate the offloading problem with respect to Algorithm 1. We compare the proposed GKOA with the other schemes; we also implement the algorithm for the case where tasks are selected at random rather than sorted in decreasing order. Second, we analyze the scheduling algorithm on the MFC side according to Algorithm 2 and compare our proposed model with several benchmark algorithms: HLPS is compared with FCFS, RR, and SJF. In the end, we examine the resource allocation of Algorithm 3 and determine how many tasks migrate to the cloud. We compare the proposed GKOA (W = 500) with GKOA (W = 200), all local computing, all MFC offloading, and the PBP scheme of [36]. Figure 3a shows the energy consumption of local computing, MFC offloading, the PBP scheme, and GKOA for different numbers of mobile users with their tasks.
In addition, we compare the values with the PBP scheme of [36], where γ = 0.2. Energy consumption is lower with GKOA than with PBP for a high number of users, and lower with all MFC offloading than with all local computing. GKOA (W = 500) is better when the number of mobile users exceeds 20. Using GKOA is more beneficial than offloading everything to the MFC, especially for many mobile users, because in some cases local execution is more profitable than offloading. As shown in Fig. 3b, with the PBP algorithm (γ = 0.2 in [36]), the system overhead increases as the number of mobile users increases; GKOA performs better than the other modes, and its system overhead does not increase. The highest system overhead belongs to the local execution mode. When the number of users increases, the GKOA (W = 500) selection process takes longer. As shown in Fig. 3c, the cost of delays in local computing is low. Table 3 summarizes the advantage of GKOA in energy consumption, system overhead, and average delay cost over the three modes of local computing, all MFC offloading, and the PBP scheme.
Figure 4a compares the cost function for GKOA (W = 200), GKOA (W = 500), all local computing, and all MFC offloading. With W = 200, the GKOA cost function φ is the minimum, slightly lower than the cost function of all MFC offloading and local computing. When W increases to 500 in GKOA, the cost function is reduced further because more tasks can be loaded onto the MFC.
Fig. 3 Comparison of (a) energy consumption, (b) system overhead, and (c) average delay cost for different numbers of mobile users in four modes: GKOA, PBP scheme, local computing, and all MFC offloading
Table 3 The advantage of GKOA (W = 500) over the other modes in Fig. 3. LC: Local Computing, AOE: All Offloading Execution
Fig. 4 Comparison of some parameters for different numbers of mobile users in four schemes, GKOA (W = 200), GKOA (W = 500), local computing, and all MFC offloading, and details of the GKOA algorithm
The completion time of a user's application consists of several parts: the execution time on the mobile device, the waiting time for local execution, the waiting time for data transmission, the data transmission time, the waiting time for MFC execution, and the execution time on the cloud. For local execution, some of these parts, such as the waiting time for data transmission and the data transmission time, are 0. This parameter is optimal in our proposed GKOA algorithm as the number of mobile users increases; with W = 500, GKOA makes it more optimal than all the other schemes, because the waiting time on the mobile side is reduced relative to local computing for a large number of users, and the waiting time for data transmission is also reduced with W = 500. Details of the GKOA algorithm are shown in Fig. 4d. In GKOA (W = 200), the weight of the knapsack relative to the total weight is 189.2/200, with 314 selected tasks and a profit of P = 117.
Fig. 5 CPU level in five modes GKOA (W = 200), GKOA (W = 500), Random selection (W = 200),
Random selection (W = 500) and local computing in 1 ≤ t ≤ 60
In GKOA (W = 500), the weight of the knapsack relative to the total weight is 435/500, with 668 selected tasks and a profit of P = 320. These details are shown in Table 4.
We now show the state of some parameters in five schemes, GKOA (W = 200), GKOA (W = 500), Random selection (W = 200), Random selection (W = 500), and local computing, for 1 ≤ t ≤ 60. In Fig. 5, the CPU usage of the local computing scheme has the highest value, while the GKOA (W = 500) scheme shows a more optimal CPU level than the other schemes. The battery usage of the local computing scheme is higher than that of the other schemes because all tasks execute on the MD. As shown in Fig. 6, GKOA (W = 500), by selecting the best set of tasks for offloading, shows the least battery usage: its battery usage is about 7% at t = 60, versus 19% for the Random selection (W = 500) scheme. According to Figs. 7 and 8, RAM usage in local computing is high and the amount of data transmitted over the network is low because no tasks are offloaded. The RAM level in GKOA (W = 500) and Random selection (W = 200) is more optimal. The transmitted data on the network vary across schemes: all schemes have almost the same value at every t in [1, 60] except t_48 and t_54, where GKOA (W = 500) and Random selection (W = 500) transmit a larger amount of data because they select more data than GKOA (W = 200) and Random selection (W = 200).
Fig. 6 Battery level in five modes GKOA (W = 200), GKOA (W = 500), Random selection (W = 200), Random selection (W = 500) and local computing in 1 ≤ t ≤ 60

Fig. 7 RAM level in five modes GKOA (W = 200), GKOA (W = 500), Random selection (W = 200), Random selection (W = 500) and local computing in 1 ≤ t ≤ 60

In the proposed method, tasks are first selected by the optimal offloading algorithm for offloading to the fog. By selecting the set of tasks with the highest energy consumption, GKOA limits the input to the fog. In the fog layer, completion time is used to prioritize the two related queues. This optimal offloading and scheduling lets us use two criteria to prioritize tasks: we first prioritize tasks based on energy consumption, and among those we re-prioritize based on completion time. This selects the best set of tasks to offload to the fog layer and reduces the number of scheduling searches by limiting the inputs. On the other hand, this saves energy on the mobile device and reduces bandwidth consumption when sending to the fog.
Fig. 8 Transmission data on the network in five modes GKOA (W = 200), GKOA (W = 500), Random selection (W = 200), Random selection (W = 500) and local computing in 1 ≤ t ≤ 60

As shown in Fig. 9a, the maximum AWT is 0.17 ms for λ = 500 in FCFS. The maximum and minimum AWT in HLPS are 0.03 ms and 0 for λ = 500 and λ = 100, respectively; when λ = 100, all arriving tasks are scheduled. As shown in Fig. 9b, for 100 ≤ λ ≤ 500, the minimum number of scheduled tasks relative to the task arrival rate is 415 for λ = 500 in FCFS, and the maximum is 100 for λ = 100 in HLPS and SJF. In HLPS, the number of scheduled tasks at λ = 500 is 490. As shown in Fig. 9c, the maximum number of waiting tasks relative to the arrival rate is 85 for λ = 500 in FCFS; this value is 10 in HLPS. The minimum number of waiting tasks is 0 for λ = 100 in HLPS and SJF. When λ = 500, the optimal average response time is 505 ms in HLPS, as shown in Fig. 9d. As shown in Fig. 9e, the maximum delay is 341 ms and 304 ms for λ = 500 in FCFS and RR, respectively, while the optimal delay in HLPS is 0 and 100 ms for λ = 100 and λ = 500, respectively. As shown in Fig. 9f, when λ = 500, the optimal service level is 98% in HLPS, and the minimum service level is 83% and 84.8% in FCFS and RR, respectively. The SJF algorithm behaves similarly to HLPS in some cases.
Figure 10 shows the percentage of tasks completed in the MFC and migrated to the cloud for data arrival rates λ = 250 and λ = 500. The proposed method offers better performance than the other methods because most tasks complete in the MFC: 92.11% for λ = 250 and 90.88% for λ = 500. The SJF method, in turn, outperforms FCFS and RR with 80.68% for λ = 250 and 71.38% for λ = 500, because tasks with shorter execution times are more numerous in our executive scenario.
Selecting the fog layer instead of the cloud for offloading tasks speeds up their execution, because the fog layer is close to the user and avoids the high traffic of the cloud. Table 5 shows that using mobile fog computing for delay-sensitive applications is more suitable than using the cloud layer, owing to lower time delays and higher transfer speeds at all stages. These values vary for tasks with different data volumes, and fog removes the high time delay of the cloud at all stages, which is the advantage of using it. Reducing the request, processing, and response transfer times for a large set of tasks is very effective in increasing execution speed and reducing energy consumption, and satisfies the optimal value of the cost function. We considered two similar works, [49] and [24], for comparing request time, processing transfer time, and the other times, using the values stated in the available works.
Fig. 9 Comparison of the evaluation parameters with the lowest number of processors for each algorithm
The offloading and download times are the times to load tasks and their processing data to the fog or cloud and to return the response to the mobile device after processing and execution. In the cloud, these values are high, and the speed is consequently lower, due to the large number of users and the traffic. Of course, the similarity of the experimental conditions between our proposed method and the methods of [24] and [49] matters for the values of the evaluation parameters.
Table 5 Comparison of the proposed method of this research with the works in [49] and [24]
6 Conclusions
This paper investigated an offloading and scheduling model in the MD and MFC layers. Based on the proposed model, we investigated the optimization of resource consumption on the MD side and the optimization of delay, waiting time, and service level on the MFC side. We proposed an offloading policy that uses the greedy knapsack algorithm to choose the best set of tasks to offload to the MFC; GKOA obtains the best results for minimizing energy consumption. Through extensive evaluations, we showed that GKOA (W = 500) performs better than the other schemes, saving energy consumption by 59.35% on average: by 10.1% compared with GKOA (W = 200), by 21.35% compared with all MFC offloading, by 28.2% compared with local computing, and by 12.54% compared with the PBP scheme. GKOA (W = 500) performs optimally when the number of mobile users is higher than 20. In GKOA (W = 500), the weight of the knapsack is 435, with 668 selected tasks and a profit of P = 320. We use two priority queues for dynamic scheduling on the MFC side. The results indicate improvements in waiting time, waiting tasks, delay probability, service level, average response time, and the number of scheduled tasks over FCFS, RR, and SJF. Also, the results show that the tasks completed on the MFC amount to 92.11% for λ = 250 and 90.88% for λ = 500, with only about 10% of tasks migrating to the cloud. Our future work includes implementing a quick offloading and forecasting system to run a number of parallel VMs. Also, securing data offloading is one of the problems that needs to be addressed.
References
1. Mao Y, You C, Zhang J, Huang K, Letaief KB (2017) A survey on mobile edge computing: The
communication perspective. IEEE Commun. Surveys Tuts 19(4):2322–2358. https://doi.org/10.1109/
COMST.2017.2745201
2. Verma R, Chandra S (2022) Analysing the impact of security attributes in fog-iot environment using
ahp approach 481–491. https://doi.org/10.1109/JIOT.2018.2805263
3. Aghababaeipour A, Ghanbari S (2018) A new adaptive energy-aware job scheduling in cloud comput-
ing, in: International Conference on Soft Computing and Data Mining, Springer, pp. 308–317
4. Jazayeri F, Shahidinejad A, Ghobaei-Arani M (2021) Autonomous computation offloading and auto-
scaling the in the mobile fog computing: a deep reinforcement learning-based approach. Journal of
Ambient Intelligence and Humanized Computing 12(8):8265–8284. https://doi.org/10.1007/s12652-
020-02561-3
5. Guevara JC, da Fonseca NL (2021) Task scheduling in cloud-fog computing systems. Peer-to-Peer
Networking and Applications 14(2):962–977. https://doi.org/10.1007/s12083-020-01051-9
6. Singh PK, Verma RK, Sarkar JL (2019) Mcc and big data integration for various technological frame-
works 405–414. https://doi.org/10.1007/978981130224436
7. Chandak AV, Ray NK, Barik RK, Kumar V (2022) Performance analysis of task scheduling heuristics
in fog environment 857–863
8. Somula R, Nalluri S, NallaKaruppan M, Ashok S, Kannayaram G (2019) Analysis of cpu scheduling
algorithms for cloud computing 375–382. https://doi.org/10.1007/978-981131927340
9. Ghanbari S (2019) Priority-aware job scheduling algorithm in cloud computing: A multi-criteria
approach. Azerbaijan Journal of High Performance Computing 2(1):29–38
10. Liu L, Chang Z, Guo X (2018) Socially aware dynamic computation offloading scheme for fog comput-
ing system with energy harvesting devices. IEEE Internet of Things Journal 5(3):1869–1879. https://
doi.org/10.1109/JIOT.2018.2816682
11. Tang Z, Zhou X, Zhang F, Jia W, Zhao W (2018) Migration modeling and learning algorithms for
containers in fog computing. IEEE Transactions on Services Computing 12(5):712–725. https://doi.
org/10.1109/TSC.2018.2827070
12. Chen Y-A, Walters JP, Crago SP (2017) Load balancing for minimizing deadline misses and total
runtime for connected car systems in fog computing, In: 2017 IEEE International Symposium on
Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on
Ubiquitous Computing and Communications (ISPA/IUCC), IEEE, pp. 683–690. https://doi.org/10.
1109/ISPA/IUCC.2017.00107
13. Yang Y, Wang K, Zhang G, Chen X, Luo X, Zhou M-T (2018) Meets: Maximal energy efficient task
scheduling in homogeneous fog networks. IEEE Internet of Things Journal 5(5):4076–4087. https://
doi.org/10.1109/JIOT.2018.2846644
14. Al-Zinati M, Alrashdan R, Al-Duwairi B, Aloqaily M (2021) A re-organizing biosurveillance frame-
work based on fog and mobile edge computing. Multimedia Tools and Applications 80(11):16805–
16825. https://doi.org/10.1007/2Fs11042-020-09050-x
15. Deng S, Zhang C, Li C, Yin J, Dustdar S, Zomaya AY (2021) Burst load evacuation based on dispatching
and scheduling in distributed edge networks. IEEE Transactions on Parallel and Distributed Systems
32(8):1918–1932. https://doi.org/10.1109/TPDS.2021.3052236
16. Kosta S, Aucinas A, Hui P, Mortier R, Zhang X (2012) Thinkair: Dynamic resource allocation and
parallel execution in the cloud for mobile code offloading 945–953
17. Kazemi M, Ghanbari S, Kazemi M (2020) Divisible load framework and close form for scheduling in
fog computing systems, In: International Conference on Soft Computing and Data Mining, Springer,
pp. 323–333
18. Rezapour R, Asghari P, Javadi HHS, Ghanbari S (2021) Security in fog computing: A systematic
review on issues, challenges and solutions. Computer Science Review 41:100421
19. Wang K, Yang K, Magurawalage CS (2016) Joint energy minimization and resource allocation in c-ran
with mobile cloud. IEEE Transactions on Cloud Computing 6(3):760–770
20. Kchaou H, Kechaou Z, Alimi AM (2016) Towards an offloading framework based on big data analytics
in mobile cloud computing environments. Procedia Computer Science 53:292–297. https://doi.org/10.
1016/j.procs.2015.07.306
21. Yang L, Cao J, Cheng H, Ji Y (2015) Multi-user computation partitioning for latency-sensitive mobile
cloud applications. IEEE Transactions on Computers 64(8):2253–2266. https://doi.org/10.1109/TC.
2014.2366735
22. Wang H, Gong J, Zhuang Y, Shen H, Lach J (2017) Healthedge: Task scheduling for edge computing
with health emergency and human behavior consideration in smart homes, in: 2017 IEEE International
Conference on Big Data (Big Data), IEEE, pp. 1213–1222. https://doi.org/10.1109/NAS.2017.8026861
23. Samal P, Mishra P (2013) Analysis of variants in round robin algorithms for load balancing in cloud
computing. International Journal of computer science and Information Technologies 4(3):416–419.
https://doi.org/10.5120/12103-8221
24. Jin M, Wang H, Song L, Li Y, Zeng Y (2018) Man-machine dialogue system optimization based
on cloud computing. Personal and Ubiquitous Computing 22(5):937–942. https://doi.org/10.1007/
s00779-018-1157-y
25. Wang J, Peng J, Wei Y, Liu D, Fu J (2017) Adaptive application offloading decision and transmission
scheduling for mobile cloud computing. China Communications 14(3):169–181
26. Goudarzi M, Zamani M, ToroghiHaghighat A (2017) A genetic-based decision algorithm for multisite
computation offloading in mobile cloud computing. International Journal of Communication Systems
30(10):e3241
27. Abeywickrama R, Haviv M, Oz B, Ziedins I (2019) Strategic bidding in a discrete accumulating priority
queue. Operations Research Letters 47(3):162–167. https://doi.org/10.1016/j.orl.2019.02.004
28. Li N, Stanford DA, Taylor P, Ziedins I (2017) Nonlinear accumulating priority queues with equivalent
linear proxies. Operations Research 65(6):1712–1721. https://doi.org/10.1287/opre.2017.1613
29. Shi Y, Chen S, Xu X (2017) Maga: A mobility-aware computation offloading decision for distributed
mobile cloud computing. IEEE Internet of Things Journal 5(1):164–174. https://doi.org/10.1109/JIOT.
2017.2776252
30. Sabireen H, Neelanarayanan V (2021) A review on fog computing: Architecture, fog with iot, algo-
rithms and research challenges, vol 7. Elsevier, Amsterdam, pp 162–176
31. Mao Y, Zhang J, Letaief KB (2016) Dynamic computation offloading for mobile-edge computing with
energy harvesting devices. IEEE Journal on Selected Areas in Communications 34(12):3590–3605.
https://doi.org/10.1109/JSAC.2016.2611964
32. Sarkar S, Misra S (2016) Theoretical modeling of fog computing: a green computing paradigm to
support iot applications. Iet Networks 5(2):23–29. https://doi.org/10.1049/iet-net.2015.0034
33. Guo S, Wu D, Zhang H, Yuan D (2018) Resource modeling and scheduling for mobile edge comput-
ing: A service provider’s perspective. IEEE Access 6:35611–35623. https://doi.org/10.1109/ACCESS.
2018.2851392
34. Zhou W, Fang W, Li Y, Yuan B, Li Y, Wang T (2019) Markov approximation for task offloading and
computation scaling in mobile edge computing. Mobile Information Systems 12. https://doi.org/10.
1155/2019/8172698
35. Tseng C-W, Tseng F-H, Yang Y-T, Liu C-C, Chou L-D (2018) Task scheduling for edge computing
with agile vnfs on-demand service model toward 5g and beyond, Wireless Communications and Mobile
Computing
36. Yang Y, Ma Y, Xiang W, Gu X, Zhao H (2018) Joint optimization of energy consumption and packet
scheduling for mobile edge computing in cyber-physical networks. IEEE Access 6:15576–15586.
https://doi.org/10.1109/ACCESS.2018.2810115
37. Wu Y, Ni K, Zhang C, Qian LP, Tsang DH (2018) Noma-assisted multi-access mobile edge computing:
A joint optimization of computation offloading and time allocation. IEEE Transactions on Vehicular
Technology 67(12):12244–12258
38. Zhang J, Hu X, Ning Z, Ngai EC-H, Zhou L, Wei J, Cheng J, Hu B (2017) Energy-latency tradeoff
for energy-aware offloading in mobile edge computing networks. IEEE Internet of Things Journal
5(4):2633–2645. https://doi.org/10.1109/JIOT.2017.2786343
39. Yi C, Cai J, Su Z (2019) A multi-user mobile computation offloading and transmission scheduling
mechanism for delay-sensitive applications. IEEE Transactions on Mobile Computing 19(1):29–43.
https://doi.org/10.1109/TMC.2019.2891736
40. Dai Y, Xu D, Maharjan S, Zhang Y (2018) Joint computation offloading and user association in
multi-task mobile edge computing. IEEE Transactions on Vehicular Technology 67(12):12313–12325.
https://doi.org/10.1109/TVT.2018.2876804
41. Chen X, Jiao L, Li W, Fu X (2015) Efficient multi-user computation offloading for mobile-edge cloud
computing. IEEE/ACM transactions on networking 24(5):2795–2808. https://doi.org/10.1109/TNET.
2015.2487344
42. Chen Y, Zhang N, Zhang Y, Chen X (2018) Dynamic computation offloading in edge computing
for internet of things. IEEE Internet of Things Journal 6(3):4242–4251. https://doi.org/10.1109/JIOT.
2018.2875715
43. Ning Z, Dong P, Kong X, Xia F (2018) A cooperative partial computation offloading scheme for mobile
edge computing enabled internet of things. IEEE Internet of Things Journal 6(3):4804–4814. https://
doi.org/10.1109/JIOT.2018.2868616
44. Tianze L, Muqing W, Min Z, Wenxing L (2017) An overhead-optimizing task scheduling strategy for
ad-hoc based mobile edge computing. IEEE Access 5:5609–5622. https://doi.org/10.1109/ACCESS.
2017.2678102
45. Morabito R, Beijar N (2016) Enabling data processing at the network edge through lightweight vir-
tualization technologies, In: 2016 IEEE International Conference on Sensing, Communication and
Networking (SECON Workshops), IEEE, pp. 1–6. https://doi.org/10.1109/TNET.2015.2487344
46. Zhang K, Mao Y, Leng S, Zhao Q, Li L, Peng X, Pan L, Maharjan S, Zhang Y (2016) Energy-efficient
offloading for mobile edge computing in 5g heterogeneous networks. IEEE access 4:5896–5907.
https://doi.org/10.1109/ACCESS.2016.2597169
47. Ali Z, Jiao L, Baker T, Abbas G, Abbas ZH, Khaf S (2019) A deep learning approach for energy
efficient computational offloading in mobile edge computing. IEEE Access 7:149623–149633. https://
doi.org/10.1109/ACCESS.2019.2947053
48. Zhang Y, Niyato D, Wang P (2015) Offloading in mobile cloudlet systems with intermittent connectiv-
ity. IEEE Transactions on Mobile Computing 14(12):2516–2529. https://doi.org/10.1109/TMC.2015.
2405539
49. Fowler S, Häll CH, Yuan D, Baravdish G, Mellouk A (2014) Analysis of vehicular wireless channel
communication via queueing theory model, in: 2014 IEEE International Conference on Communica-
tions (ICC), IEEE, pp. 1736–1741. https://doi.org/10.1109/ICC.2014.6883573
50. Tong L, Li Y, Gao W (2016) A hierarchical edge cloud architecture for mobile computing. In: IEEE INFOCOM. https://doi.org/10.1109/INFOCOM.2016.7524340
51. Chen X, Pu L, Gao L, Wu W, Wu D (2017) Exploiting massive d2d collaboration for energy-
efficient mobile edge computing. IEEE Wireless communications 24(4):64–71. https://doi.org/10.
1007/s00779-018-1157-y
52. Salehan A, Deldari H, Abrishami S (2019) An online context-aware mechanism for computation
offloading in ubiquitous and mobile cloud environments. The Journal of Supercomputing 75(7):3769–
3809. https://doi.org/10.1007/s11227-019-02743-7
53. Lee T-D, Lee BM, Noh W (2018) Hierarchical cloud computing architecture for context-aware iot
services. IEEE Transactions on Consumer Electronics 64(2):222–230. https://doi.org/10.1109/TCE.
2018.2844724
54. Lu Y, Zhao D (2022) Providing impersonation resistance for biometric-based authentication scheme
in mobile cloud computing service. Computer Communications 182:22–30
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the
author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is
solely governed by the terms of such publishing agreement and applicable law.