IEEE INTERNET OF THINGS JOURNAL, VOL. 8, NO. 16, AUGUST 15, 2021

A Novel Framework for Mobile-Edge Computing by Optimizing Task Offloading

Abdenacer Naouri, Hangxing Wu, Nabil Abdelkader Nouri, Sahraoui Dhelim, Member, IEEE, and Huansheng Ning, Senior Member, IEEE

Abstract—With the emergence of mobile computing offloading paradigms, such as mobile-edge computing (MEC), many Internet of Things (IoT) applications can take advantage of the computing power of end devices to perform local tasks without relying on a centralized server. Computation offloading is a promising technique that helps to prolong a device's battery life and reduce the execution time of computing tasks. Many previous works have discussed task offloading to the cloud. However, these schemes do not differentiate between types of application tasks, and it is not reasonable to offload all application tasks to the cloud. Some application tasks with low computing and high communication cost are more suitable for execution on the end devices. On the other hand, most resources on the end devices are idle and can be used to process such tasks. In this article, a three-layer task offloading framework named DCC is proposed, which consists of the device layer, cloudlet layer, and cloud layer. In DCC, tasks with high computing requirements are offloaded to the cloudlet and cloud layers, whereas tasks with low computing and high communication cost are executed on the device layer. Hence, DCC avoids transmitting large amounts of data to the cloud and can effectively reduce the processing delay. We introduce a greedy task graph partition offloading algorithm, in which the task scheduling process is guided by the device computing capabilities, following a greedy optimization approach that minimizes the task communication cost. To show the effectiveness of the proposed framework, we implemented a facial recognition system as a use-case scenario. Furthermore, experiment and simulation results show that DCC achieves high performance when compared with state-of-the-art computation offloading techniques.

Index Terms—Cloud computing, cloudlet computing, cluster formation, communication tasks, computation offloading, dynamic mobile cloudlet.

Manuscript received November 11, 2020; revised January 21, 2021; accepted February 28, 2021. Date of publication March 8, 2021; date of current version August 6, 2021. This work was supported by the National Natural Science Foundation of China under Grant 61872038. (Corresponding author: Huansheng Ning.)
Abdenacer Naouri, Hangxing Wu, Sahraoui Dhelim, and Huansheng Ning are with the School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China, and also with the Beijing Engineering Research Center for Cyberspace Data Analysis and Applications, Beijing, China (e-mail: ninghuansheng@ustb.edu.cn).
Nabil Abdelkader Nouri is with the Department of Mathematics and Computer Science, University of Djelfa, Djelfa 17000, Algeria.
Digital Object Identifier 10.1109/JIOT.2021.3064225
2327-4662 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

I. INTRODUCTION

WITH the tremendous growth of the Internet of Things (IoT), the number of objects connected to the IoT network is on the scale of billions. Most mega cities around the world, such as Beijing and New York, have recently been equipped with thousands of smart objects, including cameras, sensors, and actuators. These objects sense the environment and react to real-time situations by gathering data from different sources, which requires sending huge amounts of data continuously. Analyzing such tremendous generated data, such as the video streams from smart cameras, or the augmented-reality and facial recognition data in IoT applications, requires high computing resources that can be provided by cloud computing [1]. The cloud can be described as a remote data center consisting of a collection of supercomputing nodes that share resources with each other, forming an intensive computing resource associated with smart management software, such as a software-defined network (SDN). Devices that are unable to complete computing tasks locally offload their computing tasks to the cloud. However, due to the large gap between the cloud and end devices, the network can suffer from connectivity delay, which is not suitable for latency-sensitive real-time applications, in addition to the back-traffic load that can overload the network. In order to reduce the network delays and the resulting massive traffic through the network, edge computing (EC) [2] was suggested as a solution to these issues, enabling computing at the edge of the network. However, shifting the computation from the cloud to the edge requires intelligent supervision. Furthermore, researchers suggested mobile EC (MEC) [3], which is considered a variant of EC adapted to mobile networks.

Some MEC architectures and offloading strategies have been investigated in [4]–[8] for remote task computing; these deal separately with different minimization or maximization objectives, such as minimizing the energy consumption or the execution delay, or maximizing the offloaded task ratio or the system profit within computing devices or fog nodes [7], [8]. Kao et al. [9] targeted the execution time within IoT devices and introduced a fully polynomial-time approximation scheme to reduce the application task execution delay. Also, Habak et al. [10] and Sundar and Liang [11] proposed the femtocloud system, which provides a dynamic, self-configuring, multidevice mobile cloud out of a cluster of mobile devices. Moreover, Sundar and Liang [11] identified an optimal scheduling decision for a mobile application comprising dependent tasks, such that the communication and execution cost is minimized subject to an application deadline. However, most of these works did not consider application task dependency and the communication burden: the IoT devices at the edge do not differentiate application tasks and offload entire applications to the cloud using
traditional offloading strategies, which cause a large volume of data to be transmitted and, subsequently, network congestion. In fact, offloading tasks to the cloud is reasonable when the edge or fog computing nodes cannot fulfill the application needs; high computing demands may exceed the capability of these nodes because of their insufficient resources. Offloading tasks with low computing and high communication cost to the cloud, however, is not reasonable, due to the unstable device connections with the cloud and the long distance between devices and cloud. It is better to process these tasks locally within the edge and nearby devices to reduce the communication traffic and execution time. Furthermore, there are still some problems that have not been addressed in previous works. First, the resources at the edge may not fulfill all users' requests due to the enormous number of offloaded tasks. Second, the mobility of mobile devices still stands as a barrier to efficient offloading. Third, choosing a suitable task execution location remains a challenge. In order to solve these problems, we propose a three-tier MEC architecture, as shown in Fig. 1, which consists of a device layer, a cloudlet layer, and a cloud layer (DCC).

Based on the observation that a large number of computing resources in local devices are idle in practice, a large number of high-communication tasks can be handled locally if these idle computing resources are utilized. As a result, the processing delay of these tasks can be reduced, and we avoid sending large amounts of data to the cloud. Therefore, the device layer in DCC is defined by forming dynamic mobile cloudlets from devices in the same area. The advantages of DCC are as follows. First, high-communication tasks in DCC can be handled locally within the nearby computing nodes, and high-computing tasks can be sent to the cloud. Second, mobile devices jointly form a cloudlet, which makes their link to the cloud more reliable. Third, some tasks are processed locally, which eases the pressure on the cloud. Finally, a suitable task execution location can be chosen by adopting an optimal scheme in DCC.

Our contributions can be summarized as follows.
1) We propose a novel three-layer task offloading framework named DCC, which consists of the device layer, cloudlet layer, and cloud layer.
2) We propose a greedy task graph partition offloading algorithm, in which the task scheduling process is guided by the device computing capabilities, following a greedy optimization approach that minimizes the task communication cost.
3) We implement a facial recognition system based on the proposed framework as a use-case scenario.

The remainder of this article is organized as follows. Section II discusses previous related works. Section III describes the application model used for computing and offloading tasks with the proposed architecture and outlines the proposed resource allocation and task partition algorithms. Section IV analyses the partition algorithm's performance and the experiment results. Finally, we conclude this article in Section V.

Fig. 1. DCC computing architecture.

II. RELATED WORK

Recently, offloading computing has shown wide application in several IoT domains, with different architectures and different policies, where application tasks are executed remotely on other devices due to insufficient device resources, with regard to the application execution time and device energy. Different works have explored this area, and various static and dynamic offloading frameworks and architectures have been proposed. Chun et al. [12] and Wei et al. [13] introduced the CloneCloud and MAUI frameworks, which aim to improve battery life and device performance by offloading application components to cloud servers, both targeting a single server for offloading. In CloneCloud, tasks are executed in a cloned image of the device's system; it combines static program analysis with a profiling program to select the offloaded components. In MAUI, tasks are executed based on method annotations and static program analysis. However, a single server may not have enough communication and computing resources. In these cases, and in other situations where there are significant latency limitations, frameworks with concurrent offloading on multiple servers have been suggested for task distribution among a cluster of servers with specific processing and communication capabilities. In [14], a framework named ThinkAir was proposed to fix the drawbacks of the two previously mentioned frameworks. In particular, ThinkAir introduces new ways of resource management and simultaneous task execution. It focuses on the cloud's elasticity and scalability and improves the capacity of mobile cloud computing by using several virtual machine (VM) images for parallel process execution.
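Frameworks such as MAUI and ThinkAir, as described above, decide at run time whether a marked method should run locally or on a server by combining method annotations with profiling. The sketch below imitates that decision rule in Python; the `@offloadable` decorator, the cost model, and all numbers are hypothetical illustrations, not the actual APIs of either framework.

```python
# Sketch of annotation-driven offloading in the spirit of MAUI/ThinkAir:
# a method marked @offloadable runs remotely only when a profiler-based
# cost estimate says offloading pays off. All figures are invented.

def offloadable(input_size_kb, cycles):
    """Mark a method as a candidate for remote execution."""
    def wrap(fn):
        def run(*args, profiler, **kwargs):
            local_cost = cycles / profiler["local_speed"]              # s on device
            remote_cost = (input_size_kb / profiler["bandwidth_kbps"]  # upload time
                           + cycles / profiler["server_speed"])        # server time
            site = "remote" if remote_cost < local_cost else "local"
            return site, fn(*args, **kwargs)
        return run
    return wrap

@offloadable(input_size_kb=800, cycles=5e9)
def detect_faces(frame):
    return f"faces({frame})"

# Hypothetical profile: 1-GHz device, ~4-Mb/s uplink, much faster server.
profiler = {"local_speed": 1e9, "bandwidth_kbps": 4000, "server_speed": 2e10}
site, result = detect_faces("frame-42", profiler=profiler)
print(site, result)  # offloading wins for this heavy, small-input method
```

A fuller model would also account for result-download time and transmission energy, which is exactly the information the profilers in these frameworks collect.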
Concerning the offloading process, Wang and Li [15], Castro-Orgaz and Hager [16], and Inag et al. [17] proposed several strategies. Most concentrate on independent task scheduling and on single metrics, such as energy or application execution time. Castro-Orgaz and Hager [16] studied the task scheduling problem to reduce energy consumption, assuming that the cloud and the mobile devices cannot run simultaneously, although exploiting parallelism between the cloud and the mobile devices can quickly improve the application execution time. For instance, heuristics [18] and genetic algorithms [19] were introduced to reduce the application task execution time while respecting the task execution order. Sundar and Liang [11] targeted the device's energy, proposing a task allocation strategy for heterogeneous devices and the mobile cloud. Furthermore, offloading of dependent tasks has been discussed in [9] and [11]. Kao et al. [9] targeted the reduction of execution time by introducing a fully polynomial-time approximation scheme to reduce the overall latency while offloading dependent tasks to multiple devices. Kao and Krishnamachari [20] suggested deterministic and probabilistic delay-constrained task partitioning algorithms to optimize the total remote computational cost and mobile energy usage of all application tasks with delay constraints in polynomial time, where tasks are structured in the form of a tree for the sake of simplicity. However, both [21] and [20] assumed that the devices have infinite capacity. In contrast, our proposed task dependency offloading model regards the device resource constraints, where a device can serve more than one user and perform more than one task according to its ability.

III. SYSTEM ARCHITECTURE MODEL AND PROBLEM FORMULATION

Cloud computing offers easy accessibility and cooperation, but cloud centralization can expose it to a bottleneck problem due to the massive network traffic generated by users' offloading requests, and it also adds delay to the network due to the large gap between the users and the cloud resources. For a self-driving car, for example, it is critical to have the shortest possible time from collecting data through sensors to making a decision and then acting on it. Hence, enabling computing at the network edge by placing computing resources within the edge network can improve the system response time. Despite the advantages brought by the edge, its resource limitation remains challenging. For that reason, we introduce the DCC architecture, which consists of different layers that collaborate with each other to mitigate the network burden as much as possible and increase the system performance.

Fig. 1 presents our proposed architecture, which consists of three tiers (DCC): the device layer consists of a set of heterogeneous devices, the cloudlet layer consists of a set of computing resources provided close to the user in the form of servers, and the last tier represents the cloud servers. Usually, edge devices play a limited role, sending data and information to the cloud and receiving processed information from it. In our case, we exploit the idle edge device resources for executing tasks to reduce the traffic of the network backbone and the response time. This requires a smart offloading task policy to explore the edge device resources efficiently. Some offloading computing policies have been proposed regarding the energy and device computation cost. However, these policies assign the offloaded tasks directly to the fog or to the cloud without considering the communication cost. This motivates us to investigate an offloading policy that reduces the communication and computation cost in an efficient manner within our proposed DCC architecture.

Allowing communication between devices can alleviate the communication burden at the edge network, where tasks are executed locally within devices instead of being sent to cloudlet or cloud servers. Let N be the number of computing nodes and M the number of computing tasks to be assigned. Our objective is to provide an optimal collaboration strategy regarding the communication cost, focusing on delegating the M correlated computational tasks to the appropriate computing nodes. In addition, the location of devices on the network can affect the task execution process due to their mobility; hence, a management process is required to avoid task execution failure.

Fig. 2. Different offloading cases through the DCC.

A. Overview of DCC

The DCC computing system provides computing resources at different levels for users. When a user initiates its offloading process, the application tasks are distinguished into different types, and each task is offloaded to the corresponding computing node according to its features and the node's ability. Remote application tasks can be classified into two main types: tasks with high computing and low communication cost (computation tasks), which are suitable to be offloaded to the cloud, and tasks with low computing and high communication cost (communication tasks), which are preferably executed at the nearby computing devices, whether in the device layer or the edge layer. Fig. 2 illustrates different cases of the offloading process within the DCC system. In case 1, user A initiates the offloading process, starting by offloading communication tasks to its nearby computing nodes in the corresponding cluster. Communication tasks that cannot be handled within the cluster are offloaded to neighboring clusters through the edge node, as shown in case 2. In case 3, computation tasks are offloaded toward the edge nodes due to their high computation resource needs. Case 4 reflects the tasks that cannot be performed in the current edge node
due to the edge node's resource limitation; these are preferably offloaded toward the neighboring edge nodes.

B. Dynamic Cloudlet Formation

The creation of mobile cloudlets in the form of clusters helps to reduce energy consumption during the task offloading process, especially when considering scalability and robustness [22]. As a result, when the devices' arrangement is suitably adjusted, the network lifetime is prolonged (without the need to replace the nodes' batteries). Centralized and distributed cluster formation approaches have been proposed in [23]. Each one has advantages and disadvantages: the centralized approach provides an optimal bound over the distributed schemes, while the distributed approach is resistant to network topology changes but consumes significant edge device energy, which is critical; in contrast, the centralized approach is not scalable but consumes little energy. We relied on the centralized approach, where two important issues related to the clustering process for task offloading should be addressed. The first: which cluster nodes are suitable for the current task execution according to the task features (i.e., soft and hard deadlines)? The second: where should tasks be executed for optimal execution, inter- or intra-cluster? Note that users can perform similar tasks giving the same outcome; therefore, allowing caching among clusters can improve the application response time.

Cloudlet formation is the process of finding the most adequate clustering of the intermediate servers, and it has been explored in previous works in different areas with different networks. The K-means algorithm, introduced in [24], is one of the common algorithms used for clustering: it partitions a data set into K clusters using the mean Euclidean distance. We adjusted the K-means algorithm to fit our task offloading scenario, where the cluster head (CH) is responsible for task offloading outside the cluster (i.e., offloading tasks toward the edge server), and the cluster members (CMs) receive and send their tasks among each other and toward the CH. In order to prolong the cluster lifetime and to shield the network from the drawbacks of CH mobility, each node in the cluster can act as a CH when the current CH is out of service or under a certain threshold. In addition, the K-means clustering algorithm strongly depends on the initial centroids, where an inappropriate centroid distribution can lead to low performance. Hence, in order to ensure an effective cluster formation, we determine the initial K cluster centroids based on node locations; the formulation is presented in Fig. 3.

Fig. 3. Dynamic cloudlet formation using the K-means clustering algorithm.

1) We define an initial centroid Center_j over all network nodes N, where X_i refers to the location of node i in the network:

$$\mathrm{Center}_j = \frac{\sum_{i=1}^{N} X_i}{N}. \qquad (1)$$

2) We select the next centroids according to the initial centroid Center_j edges (−max, max) in such a way that the Euclidean distance is maximized.

3) After determining the initial K centroids, the eligible nodes join the cluster with the nearest centroid Center_j by minimizing the following function:

$$\sum_{j=1}^{k}\sum_{i=1}^{n} \left\| x_i^{(j)} - \mathrm{Center}_j \right\|^2. \qquad (2)$$

To this end, the K cluster centers are designated in such a way that the limitations of the basic algorithm, such as an inappropriate distribution of the nodes, are covered.

In order to select the appropriate computing nodes that will serve as CHs, the corresponding cloudlet server calculates the residual energy and computing capacity of each cluster S, S = [S_1, S_2, ..., S_k].

1) Calculate the average node computing energy of each cluster:

$$e_{avg}(S_i(j)) = \frac{\sum_{j=1}^{|S_i(j)|} e(j)_r}{|S_i(j)|}, \quad \forall j \in 1, 2, \ldots, |S_i| \qquad (3)$$

where e(j)_r and e_avg(S_i(j)) are the residual energy of the jth member node and the average cluster energy of S_i, respectively.

2) Calculate the average node computing capacity of each cluster:

$$c_{avg}(S_i(j)) = \frac{\sum_{j=1}^{|S_i(j)|} c(j)_r}{|S_i(j)|}, \quad \forall j \in 1, 2, \ldots, |S_i|. \qquad (4)$$

3) After determining the computation and energy averages, the potential cluster head nodes are expressed as follows:

$$CH\left(c_j, e_j\right) = \begin{cases} \mathrm{Max}(j), & \text{if } c_j > c_{avg} \text{ and } e_j > e_{avg} \\ 0, & \text{otherwise.} \end{cases} \qquad (5)$$

C. User Mobility Offloading Model

Note that the offloading decision is affected by node mobility. For this reason, we assign to each user a random way-point (RWP) mobility model as defined in [25], which is widely used to describe node movement. In this model, the mobility trajectory of node i is represented by TR_i = (X_i X_{i−1}, t_i, v_i), where |X_i − X_{i−1}| refers to the distance crossed during t_i at the variant speed v_i. Two mobile nodes can communicate if they are in the same range, |X_i(t) − X_j(t)| < R,
Fig. 4. User mobility model.

where X_i(t) = {X_1(t), X_2(t), ..., X_n(t)} represents the node locations at instant t and R represents the transmission range, as shown in Fig. 4.

To realize successful and efficient offloading, the communication time (CT) is regarded as the main factor in completing the task operation. Since the relation between computing nodes is not stable, we introduce the CT CT_j(t), which indicates the link duration between node j and node i within cluster S_i, represented by

$$CT_j(t) = \{t : t_{finish} - t_{start}\} \qquad (6)$$

$$t_{start} = \inf_{0 \le t \le \tau} \left\{ t : \left| X_i(t) - X_j(t) \right| \le R \right\}$$

$$t_{finish} = \inf_{0 \le t \le \tau} \left\{ t : \forall t' \ge t \cap t' \le \tau, \ \left| X_i(t') - X_j(t') \right| > R \right\}. \qquad (7)$$

D. System Model Description

We define global and local middleware that work as an SDN controller to fulfill the users' offloading requests at the edge and to monitor the network and mobile device states. The global middleware (GMDW) monitors and redirects the traffic through the whole network between the static cloudlet nodes (SCs), while the local middleware (LMDW) focuses on managing the local explored area. Fig. 5 shows the architecture model, where all mobile devices can subscribe to profit from the SC resources. In this architecture, users notify their states, including location, storage capacity, energy level, and CPU, to the corresponding SC. The GMDW can enhance the network throughput by offloading data to the less overloaded servers across the network according to the variation of user density in different areas. The LMDW can control and track users due to their mobility.

Fig. 5. DCC computing model for IoT devices.

Fig. 5 displays the global and local middleware (GMDW, LMDW) components. First, the GMDW consists of a static cloudlet profiler (SCP), which includes information concerning the status of the supervised SC nodes (server load, location, processing power, etc.), and a static cloudlet network profiler (SCNP), which monitors the network traffic over the various SCs available in the network, while a static cloudlet request resolver (SCRR) is in charge of solving the offloaded heavy tasks. In the end, the task offloading decision (TOD) component is responsible for TODs among the SCs based on the gathered information (SCs, network states, and the offloaded requests).

Second, the LMDW consists of task and resource management components, in which all collected information concerns the edge device states and the offloaded tasks. The Resource Manager involves a resource classifier (RC) and a Profiler: the RC classifies the available resources on the network using the Profiler metadata, while the Profiler is responsible for monitoring the local network and device states. In addition, the Task Manager includes a Task Classifier and a Solver, where the Task Classifier ranks the tasks according to their features and constraints, and the Solver takes responsibility for sending and receiving the offloading requests toward nodes according to the scheduling decision. Based on the resource and task information provided by the previously mentioned components, a local task offloader (LTO) offloads the computing tasks to the suitable devices using the partition task and resource allocation algorithms, which will be discussed in the next sections, taking into account the task deadlines, their dependencies, and the devices' connectivity.

E. Optimal Decision of Task Offloading

Not all application tasks are suitable for remote execution; binary and partial offloading strategies have been introduced in [26]–[28], as shown in Fig. 6. Binary offloading requires a task to be executed as a whole, either locally at the device or remotely on the MEC server. Partial offloading allows a task to be partitioned into two parts, one executed locally and one executed remotely. In practice, binary offloading is easier to implement and suitable for simple tasks that are not partitionable, while partial offloading is favorable for complex tasks that include multiple methods, such as augmented reality and face recognition applications.
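The binary/partial distinction can be made concrete with a toy cost model. In the sketch below, binary offloading compares the total cost of running the whole application locally versus remotely, while partial offloading lets each task pick the cheaper side (here the task, rather than a sub-task split, is taken as the unit of partitioning); all costs are invented for illustration.

```python
# Contrast of binary vs. partial offloading on a toy cost model.
# Costs are illustrative; the "remote" figure is assumed to already
# include transfer overhead.

tasks = {                       # task: (local_cost, remote_cost) in seconds
    "capture":   (0.1, 0.9),    # camera access is cheap locally
    "detect":    (2.0, 0.4),    # heavy compute favors the MEC server
    "recognize": (3.0, 0.5),
}

# Binary offloading: the whole application runs on one side.
binary_cost = min(sum(l for l, r in tasks.values()),
                  sum(r for l, r in tasks.values()))

# Partial offloading: each task independently picks the cheaper side.
partial_plan = {t: ("local" if l <= r else "remote")
                for t, (l, r) in tasks.items()}
partial_cost = sum(min(l, r) for l, r in tasks.values())

print(binary_cost, partial_cost, partial_plan)
```

As expected, the partial plan keeps the camera-bound task local and ships the two compute-heavy stages out, and its total cost is never worse than the binary one.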
Fig. 6. Different offloading computing strategies.

1) Task Specification: We represent a task flow by a directed acyclic graph G_t = (V_M, E), where V_M = {T_1, T_2, T_3, ..., T_M} represents the set of computing tasks and E represents the communication links between tasks. A task T_i depends on task T_{i−1} if there is an edge between them. We assume that each computing node can serve more than one task at a time based on its available resources, and can offload its computing tasks over a single hop. Each task T_i of V_M is associated with the variables (w_i, i_i, o_i, τ_i, e_i), where w_i represents the computation workload, (i_i, o_i) represent the input and output data sizes, τ_i represents the task execution deadline, and e_i represents the energy. These parameters are related to the application's nature: a real-time application (e.g., face recognition or augmented reality) requires a hard deadline (i.e., tasks execute respecting their τ_i), compared with a soft-deadline application, such as networking apps, where tasks can accept latency with a relatively small τ_i. The task parameters can be forecast or estimated based on the device profiler, as mentioned in [21] and [29], which facilitates the execution process and the assessment of the communication cost resulting from the task interactions.

Fig. 7. Application task flow topology, including communication and computation tasks; the vertex weight refers to the task workload, and the edge weight refers to the ratio of the output data size to the input data size.

2) Task Classification: According to [30], the application tasks can be classified according to the execution mode: tasks that can run locally, tasks that must be executed remotely on the cloud, and those that can be executed in both modes. Hence, we distinguish the application tasks into three categories: Local, Shared, and Heavy tasks. Local tasks, denoted by T_u, refer to unoffloadable tasks (tasks that use local device components such as the camera, GPS, or user interface). Shared tasks, denoted by T_s, refer to communication tasks that require low computation and high communication cost and can be executed in both modes, either remotely or locally, based on the device's resources. Heavy tasks, denoted by T_h, refer to computation tasks that require high computation and low communication cost and can be executed remotely at the edge within cloudlet nodes or in the cloud. It was pointed out in [31] that shared tasks constitute the majority of network traffic; therefore, an optimal offloading of T_s can enhance the network performance.

3) Task Offloading Computing Model: In order to ensure efficient cooperative offloading within the DCC system, we evaluate the energy consumption and task execution time for local and remote task execution.

1) Local computing: the execution time and energy consumption for local computation can be expressed as follows:

$$T(i,j) = \omega(i,j)/f_j \quad \text{and} \quad E(i,j) = T(i,j) \cdot P_j \qquad (8)$$

where f_j is the device CPU frequency, P_j represents the computation energy of device j for task i, and ω(i,j) represents the workload of task i on device j.

2) Remote computing: tasks can be executed remotely on the edge server or at neighbor devices. We therefore discuss the communication overhead in terms of execution time and energy consumption for both cases. For a task i executed by device j, the execution time for both cases (at the nearby mobile device or at the edge node) includes input data uploading, cloudlet execution, and result downloading:

$$T(i) = \frac{in_i}{U} + \frac{\omega_i}{f} + \frac{out_i}{D}. \qquad (9)$$

The energy cost is

$$e(i) = \frac{in_i}{U} \cdot p_s + \frac{\omega_i}{f} \cdot p_j + \frac{out_i}{D} \cdot p_r \qquad (10)$$

where p_s and p_r are the transmission energies for uploading and downloading data.

4) Cooperation Task Execution Strategy: In this section, we describe the functioning of our proposed offloading algorithm and its offloading phases. When a resource constraint occurs in a device, the designated nodes offload their computing tasks to their neighbor nodes within the current dynamic mobile cloudlet, or to other computing nodes via the CHs according to the central scheduler's decision. Tasks can be executed in parallel or sequentially based on their correlation, as shown in Fig. 7, where nodes I and T indicate the initiation and termination of the application, respectively, and the intermediate nodes indicate tasks that either can or cannot be executed remotely. For instance, the tasks in Fig. 7 are assigned to nearby computing nodes according to certain execution
Authorized licensed use limited to: Univ of Science and Tech Beijing. Downloaded on March 28,2022 at 10:44:49 UTC from IEEE Xplore. Restrictions apply.
NAOURI et al.: NOVEL FRAMEWORK FOR MOBILE-EDGE COMPUTING BY OPTIMIZING TASK OFFLOADING 13071
paths, such as P1 (T3, T7, T9) assigned to N1, P2 (T4, T6, T8) assigned to N2, and (T2, T5) assigned to N3. Based on the execution flow, we partition our task graph between I and T in terms of communication usage according to the heaviest execution paths, which is considered a near-optimal execution strategy, and we denote the communication cost by CC_{i,j}, where

CC_{i,j} = in_{i,j} / B_{upload} + out_{i,j} / B_{download}.    (11)

Besides that, we represent the TOD in two major cases, as illustrated in Fig. 8.

Fig. 8. Tasks dependency cases.

Case 1: N1 offloads tasks to N2 and N3. N2 executes the task offloaded by N1 and sends the output results to N3 so that it can resume its task execution. Distributing tasks among the cluster nodes should be done in an efficient manner to reduce the communication overhead, energy consumption, and delay. Hence, we propose a greedy task partition algorithm that seeks a near-optimal allocation to achieve efficient offloading according to the available computing resources.

Case 2: N1 offloads tasks to Nc and waits for the result to come back before resuming its execution. It is not practical to rely on a single edge server in dense networks due to the enormous number of requests from edge devices. When the edge server is unable to handle the assigned tasks, it offloads part of them to its nearby edge servers, where smart selection methods are required; this is out of the scope of the current work and can be considered as future work.

Since the application may include dependent and independent tasks, we focus on the dependent communication tasks (shared tasks) that burden the network due to their high interaction cost. We minimize the interaction between nodes by mapping tasks with high interaction onto the same computing node, as illustrated in Fig. 7. This reduces the communication cost and the task execution time and extends the network lifetime. Consequently, we define the application overall execution time Time_app within the computing cluster as follows:

Time_app = Time^n_i + Time^e_i    (12)

where Time^n_i indicates the task execution time at nearby devices and Time^e_i indicates the execution time at the edge nodes.

Time^n_{i, Nd∈Qj} = Σ_{i=1..|Qj|} Σ_{j=1..M} [ C(δ^{u,d}_{i,j}) + UD(δ^{u,d}_{i,j}) + W(δ^{u,d}_{i,j}) ].    (13)

N_d denotes the node selected to perform the assigned task δ^{u,d}_{i,j} of node N_u, where C(δ^{u,d}_{i,j}) represents the computing time of task j at node i, UD(δ^{u,d}_{i,j}) represents the data upload and download time, and W(δ^{u,d}_{i,j}) denotes the waiting time for receiving data.

More formally, our proposed task scheduling allocation algorithm can be defined as the following 0–1 integer linear programming (ILP) problem, with decision variable d_{i,j}, where d_{i,j} = 1 indicates that the jth task is assigned to the remote node i:

minimize Σ_{i=1..|Q|} Σ_{j=1..|M|} CC_{i,j} · d_{i,j}    (14)

subject to Σ_{j=1..|M|} d_{i,j} T_{i,j} ≤ CT_i(τ), i = 1, 2, ..., |M|    (15)

Σ_{i=1..|Q|} d_{i,j} c(δ^u_i) ≤ c_i(τ), i = 1, 2, ..., |Q|    (16)

d_{i,j} ∈ {0, 1}, i = 1, ..., |Q|; j = 1, ..., |M|.    (17)

5) Centralized Task Scheduling Algorithm Description: Note that the response time of nearby computing nodes to deliver the output task result could vary depending on the computing capacity, connection time, and network condition. For that, we define an election strategy to evaluate the computing nodes' eligibility within each mobile cloudlet.

1) Qualified Nodes Election Process: Node mobility could be a barrier during the offloading process, leading to task failure. Therefore, we set task computation and CT barriers to keep the task execution time within the task deadline among the cluster Si nodes. Moreover, we maintain appropriate energy e_j to support the task completion time. Note that the CT, which can be obtained using a prediction model [29], should be reasonable to meet the task needs, while the exchange time (ET) is defined to ensure the successful data transfer between the originator and the neighbor nodes. Hence, the selection process must satisfy the following condition:

CT_j > C(δ^{u,d}_{i,j}), ET_j > D(δ^{u,d}_{i,j}), and e_j > E.

2) Task Graph Partition: After determining the eligible node sets Q within the mobile cloudlets, we introduce a greedy task graph offloading partition algorithm for the acyclic task graph Gt. The algorithm takes Gt as input, which includes task nodes represented as vertices and communication links between nodes represented as edges. Note that the communication cost between tasks at the same node is negligible.

a) Tasks classification: First, we distinguish tasks into different sets of unoffloadable tasks (local), computation tasks (heavy), and communication
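For a toy instance, the 0–1 program (14)–(17) can be checked by exhaustive search, which is useful as a reference when validating a heuristic against the exact optimum. The sketch below is an illustrative brute-force check with made-up costs, times, and budgets, not the scheduler used in the paper; it enumerates every task-to-node assignment, discards the ones that violate the per-node time budget, and keeps the cheapest feasible one.

```python
from itertools import product

# CC[i][j]: communication cost of placing task j on node i (objective (14)).
CC = [[4, 2, 3],
      [1, 5, 2]]
T = [[2, 2, 2],          # T[i][j]: execution time of task j on node i
     [3, 3, 3]]
CT = [4, 6]              # CT[i]: time budget of node i (constraint (15))

best_cost, best_assign = None, None
for assign in product(range(len(CC)), repeat=len(CC[0])):  # one node per task
    load = [0] * len(CC)
    for j, i in enumerate(assign):
        load[i] += T[i][j]
    if any(load[i] > CT[i] for i in range(len(CC))):       # infeasible
        continue
    cost = sum(CC[i][j] for j, i in enumerate(assign))
    if best_cost is None or cost < best_cost:
        best_cost, best_assign = cost, assign

print(best_cost, best_assign)  # -> 5 (1, 0, 1)
```

Here the unconstrained cheapest choice per task already fits the budgets, so the optimum assigns tasks 0 and 2 to node 1 and task 1 to node 0. The search is exponential in the number of tasks, which is precisely why the paper resorts to a greedy partition for realistic graph sizes.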
tasks (shared). Unoffloadable tasks execute on the device due to their special features, while the remaining tasks execute remotely. Apart from this, the unoffloadable and computation tasks can each be regarded as one vertex whose communication and computation costs are aggregated separately. Let Gts represent the resulting graph after merging the initial graph.

b) Classification of eligible nodes: Once the eligible nodes have been identified, they are sorted from the highest to the lowest weight according to their computational weight.

c) Edge cut partition: This step aims to assign the tasks of the resulting graph Gts to the selected eligible nodes such that the communication cost is minimized.

Algorithm 1: Greedy Task Graph Partition Algorithm

Input: Cluster node Ns, Originator Ni, Task graph Gt = <VM, E>, VM = application tasks, E = dependency links
Output: Tasks mapped among our DCC: edge tasks (i.e., cloudlet server), unoffloadable tasks (i.e., locally), shared tasks (i.e., neighbor devices)
1  initialization
2  Create a queue PrimaryTasks.
3  PrimaryTasks := S successors in AGD.
4  S := DCC Asc nodes in cluster Sk according to their resources.
5  for each node i of Sk do
6    if CTi(t) > C(δi^{u,d}) and ETi(t) > D(δi^{u,d}) then
7      if node i has enough energy then
8        Add node i to the eligible node list LENi.
9  for each node j of LENk do
10   while not Finish do
11     Ei := Max(Path); if CurrentTask != TerminateTask and CurrentTask is nonExplored then
12       if Cj > C(δi) then
13         Assign Taski to Nodej
14         Taski is explored
15       else
16         i := k (last explored node)
17         break
18     else
19       if CurrentTask == TerminateTask or CurrentTask isExplored then
20         Remove Ti from the PrimaryTasks queue.
21         UPDATE(PrimaryTasks).
22         CurrentTask := PrimaryTasks(1).
23         if Cj > C(δi) then
24           Assign Taski to Nodej.
25     if PrimaryTasks isEmpty then
26       Finish := True

IV. EXPERIMENTAL AND SIMULATIONS EVALUATION

To evaluate our proposed partitioning algorithm, we conducted our experiment scenario over a large number of simulations, where we assign each task node a computation weight and each edge a communication weight. We have implemented the GTGP algorithm using MATLAB, which can serve as proof of its feasibility and utility. The algorithm assigns the target tasks to the mobile devices with the highest relative computing capacity, respecting device connection and energy constraints. For instance, Fig. 9 shows the partition result, in which the start task and the terminate task methods are assumed to be unoffloadable tasks. Tasks D, G, I, J are assigned to device 1, tasks C, F are assigned to device 2, and tasks B, E, H, K are assigned to device 3. Mobile devices 1 and 3 receive the resulting output data from device 2 to resume the execution of their tasks, with the following edge costs (E-H and E-I).

Fig. 9. Task flow partition result.

A. Performance of GTGP Algorithm

We compared the overall communication cost of mobile devices under random and uniform offloading strategies to our proposed GTGP algorithm under different numbers of tasks, i.e., "different application size," as shown in Fig. 10. Compared to the other two algorithms, the GTGP algorithm shows the lowest communication cost, because GTGP exploits the device computing resources completely and executes the correlated tasks with high interaction cost on the same device, while the random offloading algorithm runs them separately on different devices, which may add more delay due to the resulting transmission. We observed that the uniform offloading strategy can give performance similar to our algorithm for small-scale applications. This relates to the task graph bifurcation, where the probability of following the same execution path is high at the start-up phase, while as the application size scales, the communication cost increases and the strategies diverge from each other.

1) Computation and Communication Energy Consumption: In Fig. 11, we compared the performance of our proposed algorithm during the offloading process to the random and uniform algorithms over different shared tasks, i.e., tasks with low workload and high interaction, along two axes. The first axis represents the energy consumption resulting from data transmission, i.e., task interactions, and the second axis represents the energy consumption for task execution. Random and uniform offloading strategies usually work
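The eligibility filter (lines 5-8 of Algorithm 1) and the path-wise greedy placement can be sketched as follows. This is a simplified Python re-creation of the idea, not the authors' MATLAB implementation: the capacities, per-task costs, connection/exchange times, and the energy threshold E_MIN are invented for the example, which reuses the P1/P2/P3 path grouping described in the text.

```python
E_MIN = 10  # minimum residual energy to accept a node (assumed threshold)

def eligible(nodes, c_task, d_task):
    """Keep nodes whose connection time, exchange time, and energy all
    exceed the task's compute time, transfer time, and E_MIN, mirroring
    the qualified-nodes election condition."""
    return [n for n in nodes
            if n["ct"] > c_task and n["et"] > d_task and n["energy"] > E_MIN]

def assign_paths(paths, task_cost, nodes):
    """Greedily map each execution path (heaviest first) onto the node
    with the most remaining capacity, keeping a path's tasks together so
    their mutual communication stays on one device; a path that fits no
    node is simply left unplaced in this sketch."""
    remaining = {n["id"]: n["cap"] for n in nodes}
    placement = {}
    for path in sorted(paths, key=lambda p: sum(task_cost[t] for t in p),
                       reverse=True):
        need = sum(task_cost[t] for t in path)
        node = max(remaining, key=remaining.get)  # most capable node left
        if remaining[node] >= need:
            for t in path:
                placement[t] = node
            remaining[node] -= need
    return placement

nodes = eligible([{"id": "N1", "cap": 7, "ct": 9, "et": 9, "energy": 20},
                  {"id": "N2", "cap": 5, "ct": 9, "et": 9, "energy": 15},
                  {"id": "N3", "cap": 4, "ct": 9, "et": 9, "energy": 12}],
                 c_task=5, d_task=3)
cost = {"T2": 1, "T3": 2, "T4": 2, "T5": 2, "T6": 2, "T7": 2, "T8": 1, "T9": 2}
paths = [["T3", "T7", "T9"], ["T4", "T6", "T8"], ["T2", "T5"]]
# With these numbers the heaviest path lands on the most capable node:
# T3/T7/T9 -> N1, T4/T6/T8 -> N2, T2/T5 -> N3, as in the Fig. 7 example.
print(assign_paths(paths, cost, nodes))
```

Keeping whole paths on one node is what removes the inter-task edges from the cut, which is the source of the communication-cost savings reported for GTGP below.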
Fig. 10. Communication cost during the offloading process.

Fig. 11. Average energy consumption during the offloading process.

Fig. 12. Average network utilization.

Fig. 13. Tasks failed due to the link instability.

on assigning tasks separately to the different available computing nodes, which can be practical for independent tasks but increases the communication costs for dependent tasks. Our GTGP algorithm shows low data-transmission energy consumption compared with the two previously mentioned strategies, while showing high energy consumption for the computing process. We note that the energy gain brought by decreasing the communication cost can inversely affect the devices' energy due to the task computing burden placed on the device. However, the energy consumed by computing can be acceptable in case the communication cost is very high.

B. Performance of DCC Architecture

The application tasks could be executed in a cooperative manner where multiple heterogeneous devices share their resources within the proposed computing system DCC. In this section, we show the advantages of the DCC layers, separately and together, as a function of network usage, task failure ratio, and task completion delay.

1) Network Usage: We executed our application within the DCC layers as shown in Fig. 12, where the application consists of high computation and communication tasks. High computation tasks should be offloaded to the cloudlet or cloud nodes, which is clear due to the surrounding devices' resource limitation, and the high communication tasks should be offloaded to the nearby devices, according to the scheduler decision. Fig. 12 shows the offloading process of communication tasks, where relying only on the cloud consumes more bandwidth compared to the cloudlet and devices, while offloading tasks to the nearby nodes shows the lowest consumed bandwidth.

2) Task Failure Ratio: In our experiment, we adopted a clustering schema to form a dynamic mobile cloudlet where each mobile within the cloudlet is able to receive and send tasks. When a mobile node intends to offload its computing tasks, it starts by offloading tasks to eligible nodes within the same cloudlet and then offloads outside the mobile cloudlet according to the task scheduler. Note that offloading outside the mobile cloudlet is carried out through the cloudlet server to avoid extra routing cost, which is considered a challenge in a dynamic network. Fig. 13 shows the task failure ratio over all offloaded tasks, wherewith 100 tasks, the task failure ratio is
Fig. 14. Average task completion delay.

Fig. 15. Application call graph.

0.2 with the mobile cloudlet and 0.45 without the mobile cloudlet. We observe that the ratio of offloaded tasks that fail via the mobile cloudlet is the lowest across various application scales, and this relies on the reliability of the connection given by the mobile cloudlet to the cloudlet server.

3) Task Completion Delay: Fig. 14 shows the average delay of our proposed scheduling algorithm over different architectures (Cloud-Only, CloudAndCloudlet, DCC). At the beginning of the offloading process, the average completion delay is almost identical for the different architectures; then it starts to increase significantly, wherewith Cloud-Only the delay increases to 9.5 s with 400 devices and to 900 s with 1000 devices, and for CloudAndCloudlet the delay changes from 9.5 s with 400 devices to 400.3 s with 1000 devices. Hence, this difference relies on the computing nodes' location. Although offloading with CloudAndCloudlet showed acceptable results compared to the cloud, it is still limited in dense networks. However, when the cloud and cloudlet nodes are unable to fulfill the users' offloading requests, integrating the devices layer with the cloud and cloudlet layers showed efficient results, where the delay reaches (2 s, 5 s) with (400, 500) devices, respectively, and 400 s with 1000 devices.

C. Experiment Evaluation

To explore the feasibility of our experiment, we have built a software-defined proof of concept for our DCC architecture (SDDCC). We chose a facial recognition system as a scenario for our experiment due to its intensive computing tasks. It consists of smart mobile phones and a cloudlet server. The smart mobile phones act as IP mobile cameras. The cloudlet server performs automatic face recognition or verification of an individual from a digital image or a video frame from a live streaming source. It can be used primarily in crowded areas, such as airports, stadiums, and army areas. A face recognition application is a good demonstration of the experiment results due to its task diversity, as it consists of different computation and communication tasks, such as the capture frames task, face extraction task, face detection task, face feature extraction, face matching, etc.

1) Experiment Setup: We defined a central scheduler within our specified SDN to determine optimum task scheduling, using the input information provided by the network and edge devices. Based on the node resources, eligible computing nodes within the formed mobile clusters are selected to share their resources, subject to the constraints mentioned in the previous sections. In addition, note that the central scheduler is familiar with the entire local network, including device profiles. Furthermore, in order to evaluate our proposed GTGP algorithm's performance, we analyzed a face-recognition application using the application profiler in [32] to select sub computing tasks, including dependent and independent tasks. Fig. 15 represents the face recognition task call graph that describes the application process. In the experiment scenario, we implement a compute-intensive face recognition mobile application using the OpenCV library and NDN using JNI functions and determine its corresponding tasks according to [32], where the application includes 19 tasks. Moreover, we build central middleware using Java that monitors the whole network and performs device profiling. We run the computing tasks in the proposed framework, which contains infrastructure-based static cloudlets and dynamic mobile cloudlets.

2) Experiment Configuration: The application and scenario experiments are based on the environment depicted in Table I. We have fixed the number of mobile devices between 10 and 30 devices. These devices have a processing frequency of 300-600 MHz, respectively, and the targeted application includes 19 tasks. A server node acts as a cloudlet node located one hop away from the mobile devices, with a computing capacity of 0.5-3 GHz and a quad-core CPU.

3) Application Response Time: To get an accurate estimate of the application response time, we run the face recognition application tasks within various architectures: locally at the device or remotely with CloudAndCloudlet and the DCC. It can be seen clearly in Fig. 16 that the response execution time with CloudAndCloudlet and the DCC is significantly reduced compared to the local device execution. It takes more than 15 min to process and detect 200 frames of streaming on mobile devices, while it takes less than 1 min with the other remote offloading strategies. We observe that at the beginning, the remote offloading strategies have identical response times
and then the response time starts increasing with video size scaling. We note that offloading over the DCC showed the lowest response time compared to CloudAndCloudlet and Cloudlet-only, confirming that offloading through the DCC can improve the system performance.

TABLE I
EXPERIMENT PARAMETER SETTING

Fig. 16. Response time within different architectures.

4) Energy Consumption Within Different Offloading Strategies: Fig. 17 shows the average communication energy consumed within the different computing clusters using the random and uniform offloading strategies and the GTGP offloading algorithm. In order to measure the communication energy, we select random devices within clusters 1 and 2 that have insufficient energy, each cluster consisting of ten devices at most. The results showed that the energy usage of the GTGP offloading algorithm is the lowest compared to the two other strategies. This relies on the task assignment strategy during the offloading process. Using the random/uniform offloading strategies, tasks can be assigned equitably/inequitably to the surrounding computing nodes, which consumes more resources. With our GTGP algorithm, the assignment among computing clusters is carried out based on the devices' capacities and their locations, which can reduce energy consumption. We note that the energy consumed by cluster 1 is higher than that of the other two clusters, since the randomly selected nodes tend to offload their computing tasks to nodes within the same cluster rather than to nodes in other clusters.

Fig. 17. Cluster energy usage.

V. CONCLUSION

In this article, we have evaluated the task and resource allocation problem of multiple tasks for a single application within the DCC environment, taking into account the task dependencies and user mobility, and we have introduced a greedy task graph partition (GTGP) offloading algorithm, where the task scheduling process is assisted according to the device computing capabilities, following a greedy optimization approach to minimize the tasks' communication cost. Over trace-driven and randomized simulations, the results show that our GTGP algorithm effectively outperforms the random and uniform offloading strategies. In addition, we built a framework that works as an SDN, managing the offloading process in a centralized manner. Moreover, we implemented a compute-intensive mobile application to operate in the DCC architecture, which includes infrastructure-based cloudlets, mobile cloudlets, and the cloud. The experiment results also show that the performance of our proposed mechanism is excellent.

REFERENCES

[1] P. Srivastava and R. Khan, "A review paper on cloud computing," Int. J. Adv. Res. Comput. Sci. Softw. Eng., vol. 8, no. 6, p. 17, Jun. 2018.
[2] H. Bangui, S. Rakrak, S. Raghay, and B. Buhnova, "Moving to the edge-cloud-of-things: Recent advances and future research directions," Electronics, vol. 7, no. 11, p. 309, Jun. 2018.
[3] Y. He, J. Ren, G. Yu, and Y. Cai, "D2D communications meet mobile edge computing for enhanced computation capacity in cellular networks," IEEE Trans. Wireless Commun., vol. 18, no. 3, pp. 1750-1763, Mar. 2019.
[4] D. Kovachev and R. Klamma, "Framework for computation offloading in mobile cloud computing," Int. J. Interact. Multimedia Artif. Intell., vol. 1, no. 7, pp. 6-15, 2012.
[5] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: The communication perspective," IEEE Commun. Surveys Tuts., vol. 19, no. 4, pp. 2322-2358, 4th Quart., 2017.
[6] F. Messaoudi, A. Ksentini, and P. Bertin, "On using edge computing for computation offloading in mobile network," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Jul. 2017, pp. 1-7.
[7] Q. Tang, L. Chang, K. Yang, K. Wang, J. Wang, and P. K. Sharma, "Task number maximization offloading strategy seamlessly adapted to UAV scenario," Comput. Commun., vol. 151, pp. 19-30, Feb. 2020.
[8] H. Yuan, J. Bi, M. Zhou, J. Zhang, and W. Zhang, "Profit-maximized task offloading with simulated-annealing-based migrating birds optimization in hybrid cloud-edge systems," in Proc. IEEE Int. Conf. Syst. Man Cybern. (SMC), Oct. 2020, pp. 1218-1223. [Online]. Available: https://ieeexplore.ieee.org/document/9283467/
[9] Y.-H. Kao, B. Krishnamachari, M.-R. Ra, and F. Bai, "Hermes: Latency optimal task assignment for resource-constrained mobile computing," IEEE Trans. Mobile Comput., vol. 16, no. 11, pp. 3056-3069, Nov. 2017.
[10] K. Habak, M. Ammar, K. A. Harras, and E. Zegura, "Femto clouds: Leveraging mobile devices to provide cloud service at the edge," in Proc. 8th Int. Conf. Cloud Comput., New York, NY, USA, 2015, pp. 9-16.
[11] S. Sundar and B. Liang, "Communication augmented latest possible scheduling for cloud computing with delay constraint and task dependency," in Proc. IEEE Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), Sep. 2016, pp. 1009-1014.
[12] B. G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, "CloneCloud: Elastic execution between mobile device and cloud," in Proc. 6th Conf. Comput. Syst. (EuroSys), 2011, pp. 301-314.
[13] X. Wei et al., "MVR: An architecture for computation offloading in mobile edge computing," in Proc. IEEE 1st Int. Conf. Edge Comput. (EDGE), 2017, pp. 232-235.
[14] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, "ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading," in Proc. IEEE INFOCOM, Orlando, FL, USA, 2012, pp. 945-953.
[15] C. Wang and Z. Li, "Parametric analysis for adaptive computation offloading," in Proc. ACM SIGPLAN Conf. Progr. Lang. Design Implement. (PLDI), vol. 1, 2004, pp. 119-130.
[16] O. Castro-Orgaz and W. H. Hager, "Computation of steady transcritical open channel flows," in Shallow Water Hydraulics. Cham, Switzerland: Springer, 2019, pp. 183-200.
[17] Y. Inag, M. Demirci, and S. Ozemir, "Implementation of an SDN based IoT network model for efficient transmission of sensor data," in Proc. 4th Int. Conf. Comput. Sci. Eng. (UBMK), 2019, pp. 682-687.
[18] M. Jia, J. Cao, and L. Yang, "Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing," in Proc. IEEE Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), 2014, pp. 352-357.
[19] M.-A. H. Abdel-Jabbar, I. Kacem, and S. Martin, "Unrelated parallel machines with precedence constraints: Application to cloud computing," in Proc. IEEE 3rd Int. Conf. Cloud Netw. (CloudNet), Nov. 2014, pp. 438-442.
[20] Y. H. Kao and B. Krishnamachari, "Optimizing mobile computational offloading with delay constraints," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Feb. 2014, pp. 2289-2294.
[21] E. Cuervo et al., "MAUI: Making smartphones last longer with code offload," in Proc. 8th Int. Conf. Mobile Syst. Appl. Services, 2010, pp. 49-62.
[22] D. Mazza, D. Tarchi, and G. E. Corazza, "A cluster based computation offloading technique for mobile cloud computing in smart cities," in Proc. IEEE Int. Conf. Commun. (ICC), Jul. 2016, p. 6.
[23] L. Xiang, B. Li, and B. Li, "Coalition formation towards energy-efficient collaborative mobile computing," in Proc. Int. Conf. Comput. Commun. Netw. (ICCCN), Las Vegas, NV, USA, Oct. 2015, pp. 1-8.
[24] N. Shi, X. Liu, and Y. Guan, "Research on k-means clustering algorithm: An improved k-means clustering algorithm," in Proc. 3rd Int. Symp. Intell. Inf. Technol. Security Informatics (IITSI), 2010, pp. 63-67.
[25] S. Deng, L. Huang, J. Taheri, and A. Y. Zomaya, "Computation offloading for service workflow in mobile cloud computing," IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 12, pp. 3317-3329, Dec. 2015.
[26] Y. Lan, X. Wang, C. Wang, D. Wang, and Q. Li, "Collaborative computation offloading and resource allocation in cache-aided hierarchical edge-cloud systems," Electronics, vol. 8, no. 12, p. 1430, 2019.
[27] Y. He, J. Ren, G. Yu, and Y. Cai, "D2D communications meet mobile edge computing for enhanced computation capacity in cellular networks," IEEE Trans. Wireless Commun., vol. 18, no. 3, pp. 1750-1763, Mar. 2019.
[28] H. Wu, W. Knottenbelt, K. Wolter, and Y. Sun, "An optimal offloading partitioning algorithm in mobile cloud computing," in Proc. Int. Conf. Quantitative Eval. Syst. Cham, Switzerland: Springer, 2016, pp. 311-328.
[29] L. Luo and B. E. John, "Predicting task execution time on handheld devices using the keystroke-level model," in Proc. Int. Conf. Human Factors Comput. Syst., 2005, pp. 1605-1608.
[30] A. Khanna, A. Kero, and D. Kumar, "Mobile cloud computing architecture for computation offloading," in Proc. 2nd Int. Conf. Next Gener. Comput. Technol. (NGCT), Oct. 2017, pp. 639-643.
[31] A. Khanna, A. Kero, and D. Kumar, "Mobile cloud computing architecture for computation offloading," in Proc. 2nd Int. Conf. Next Gener. Comput. Technol. (NGCT), Oct. 2016, pp. 639-643.
[32] Y. Zhang, H. Liu, L. Jiao, and X. Fu, "To offload or not to offload: An efficient code partition algorithm for mobile cloud computing," in Proc. 1st IEEE Int. Conf. Cloud Netw. (CLOUDNET), 2012, pp. 80-86.

Abdenacer Naouri received the B.S. degree in computer science from the University of Djelfa, Djelfa, Algeria, in 2011, and the M.Sc. degree in networking and distributed systems from the University of Laghouat, Laghouat, Algeria, in 2016. He is currently pursuing the Ph.D. degree with the University of Science and Technology Beijing, Beijing, China. His current research interests include cloud computing, smart communication, machine learning, Internet of Vehicles, and Internet of Things.

Hangxing Wu received the B.S. degree in automation control from Chang'an University, Xi'an, China, in 2001, and the Ph.D. degree in control science and technology from Northwestern Polytechnical University, Xi'an, in 2008. He is currently an Associate Professor with the School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China. His current research interests include flow control and congestion control, high speed networks, data center networks, and mobile edge computation.

Nabil Abdelkader Nouri received the Engineering degree in computer sciences from the University of Laghouat, Laghouat, Algeria, in 2003, and the magister degree in networking and distributed systems from the University of Bejaia, Bejaia, Algeria, in 2007. His current research interests include wireless networking design, Internet of Things, performance evaluation, fog computing, and optimization.

Sahraoui Dhelim (Member, IEEE) received the B.S. degree in computer science from the University of Djelfa, Djelfa, Algeria, in 2012, the master's degree in networking and distributed systems from the University of Laghouat, Laghouat, Algeria, in 2014, and the Ph.D. degree in computer science and technology from the University of Science and Technology Beijing, Beijing, China, in 2020. His current research interests include social computing, personality computing, user modeling, interest mining, recommendation systems, and intelligent transportation systems.

Huansheng Ning (Senior Member, IEEE) received the B.S. degree from Anhui University, Hefei, China, in 1996, and the Ph.D. degree from Beihang University, Beijing, China, in 2001. He is currently a Professor and the Vice Director of the School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing. He is a Visiting Chair Professor with Ulster University, Coleraine, U.K. He has presided over many research projects, including the Natural Science Foundation of China and the National High Technology Research and Development Program of China (863 Project). He has published more than 150 journal/conference papers and authored five books. His current research focuses on the Internet of Things and general cyberspace. Prof. Ning is the Founder and the Chair of the Cyberspace and Cybermatics International Science and Technology Cooperation Base. He serves as an Area Editor for the IEEE INTERNET OF THINGS JOURNAL from 2020 to 2022 and in editorial roles for some other journals.