A Novel Framework For Mobile-Edge Computing by Optimizing Task Offloading

Abstract—With the emergence of mobile computing offloading paradigms, such as mobile-edge computing (MEC), many Internet of Things (IoT) applications can take advantage of the computing power of end devices to perform local tasks without relying on a centralized server. Computation offloading is becoming a promising technique that helps to prolong the device's battery life and to reduce the execution time of computing tasks. Many previous works have discussed task offloading to the cloud. However, these schemes do not differentiate between types of application tasks, and it is not reasonable to offload all application tasks to the cloud. Some application tasks with low computing and high communication cost are more suitable to be executed on the end devices. On the other hand, most resources on the end devices are idle and can be used to process such tasks. In this article, a three-layer task offloading framework named DCC is proposed, which consists of the device layer, cloudlet layer, and cloud layer. In DCC, tasks with high computing requirements are offloaded to the cloudlet layer and cloud layer, whereas tasks with low computing and high communication cost are executed on the device layer. Hence, DCC avoids transmitting large amounts of data to the cloud and can effectively reduce the processing delay. We introduce a greedy task graph partition offloading algorithm, in which the task scheduling process is guided by the device computing capabilities, following a greedy optimization approach to minimize the task communication cost. To show the effectiveness of the proposed framework, we have implemented a facial recognition system as a use-case scenario. Furthermore, experiment and simulation results show that DCC achieves high performance compared to state-of-the-art computation offloading techniques.

Index Terms—Cloud computing, cloudlet computing, cluster formation, communication tasks, computation offloading, dynamic mobile cloudlet.

Manuscript received November 11, 2020; revised January 21, 2021; accepted February 28, 2021. Date of publication March 8, 2021; date of current version August 6, 2021. This work was supported by the National Natural Science Foundation of China under Grant 61872038. (Corresponding author: Huansheng Ning.)

Abdenacer Naouri, Hangxing Wu, Sahraoui Dhelim, and Huansheng Ning are with the School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China, and also with the Beijing Engineering Research Center for Cyberspace Data Analysis and Applications, Beijing, China (e-mail: ninghuansheng@ustb.edu.cn).

Nabil Abdelkader Nouri is with the Department of Mathematics and Computer Science, University of Djelfa, Djelfa 17000, Algeria.

Digital Object Identifier 10.1109/JIOT.2021.3064225

2327-4662 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

I. INTRODUCTION

With the tremendous growth of the Internet of Things (IoT), the number of objects connected to the IoT network is in the scale of billions. Most mega cities around the world, such as Beijing and New York, have recently been equipped with thousands of smart objects, including cameras, sensors, and actuators. These objects sense the environment and react to real-time situations by gathering data from different sources, which requires sending a huge amount of data continuously. Analyzing such tremendous generated data, such as video streams from smart cameras, or augmented reality and facial recognition data in IoT applications, requires high computing resources that can be provided by cloud computing [1]. Cloud computing can be described as a remote data center consisting of a collection of supercomputing nodes that share resources with each other, forming an intensive computing resource associated with smart management software, such as a software-defined network (SDN). Devices that are unable to complete computing tasks locally offload their computing tasks to the cloud. However, due to the large gap between the cloud and end devices, the network could suffer from connectivity delay, which is not suitable for latency-sensitive real-time applications, in addition to the backhaul traffic that could overload the network. In order to reduce the network delays and the massive resulting traffic through the network, edge computing (EC) [2] was suggested as a solution, since it enables computing at the edge of the network. However, shifting the computation from the cloud to the edge requires intelligent supervision. Furthermore, researchers suggested mobile EC (MEC) in [3], which is considered a variant of EC adapted to mobile networks.

Some MEC architectures and offloading strategies have been investigated in [4]–[8] for remote task computing, each dealing separately with different minimization or maximization objectives, such as minimizing the energy consumption or the execution delay, or maximizing the offloaded task ratio or the system profit within the computing devices or the fog nodes [7], [8]. Kao et al. [9] targeted the execution time within IoT devices and introduced a fully polynomial-time approximation scheme to reduce the application task execution delay. Also, Habak et al. [10] proposed the femtocloud system, which provides a dynamic, self-configuring, and multidevice mobile cloud out of a cluster of mobile devices. Moreover, Sundar and Liang [11] identified an optimal scheduling decision for a mobile application comprising dependent tasks, such that the communication and execution cost is minimized subject to an application deadline. However, most of these works did not consider the application task dependency and the communication burden: the IoT devices at the edge do not differentiate application tasks and offload the entire set of application tasks to the cloud using
13066 IEEE INTERNET OF THINGS JOURNAL, VOL. 8, NO. 16, AUGUST 15, 2021
traditional offloading strategies, which causes a large volume of data to be transmitted and, subsequently, network congestion. In fact, offloading tasks to the cloud is reasonable when the edge or fog computing nodes cannot fulfill the application needs: high computing demands may exceed the capability of those nodes because of their insufficient resources. Offloading tasks with low computing and high communication cost to the cloud, however, is not reasonable, due to the unstable device connection with the cloud and the long distance between devices and the cloud. It is better to process these tasks locally within the edge and nearby devices to reduce the communication traffic and execution time. Furthermore, there are still some problems that have not been addressed in previous works. First, the resources at the edge may not fulfill all users' requests due to the enormous number of offloaded tasks. Second, the mobility of mobile devices still stands as a barrier to achieving efficient offloading. Third, choosing the suitable task execution location remains a challenge. In order to solve the problems mentioned above, we propose a three-tier MEC architecture, as shown in Fig. 1, which consists of a device layer, cloudlet layer, and cloud layer (DCC).

Based on the observation that a large number of computing resources in local devices are idle in practice, a large number of high-communication tasks can be handled locally if these idle computing resources are utilized. As a result, the processing delay of these tasks can be reduced, and we avoid sending large amounts of data to the cloud. Therefore, the device layer in DCC is defined by forming dynamic mobile cloudlets from devices in the same area. The advantages of DCC are as follows. First, high communication tasks in DCC can be

II. RELATED WORK

Recently, computation offloading has shown wide application in several IoT domains, with different architectures and different policies, where application tasks are executed remotely on other devices due to insufficient device resources, with respect to the application execution time and device energy. Different works have explored this area, and various static and dynamic offloading frameworks and architectures have been proposed. Chun et al. [12] and Wei et al. [13] introduced the CloneCloud and MAUI frameworks, which aim to improve battery life and device performance by offloading application components to cloud servers; both target a single server for offloading. In CloneCloud, tasks are executed in a cloned image of the device's system; it combines static program analysis with a profiling program to select the offloaded components. In MAUI, tasks are executed based on method annotations and static program analysis. However, a single server may not have enough communication and computing resources. In these cases, and in other situations where there are significant latency limitations, frameworks with concurrent offloading on multiple servers have been suggested for task distribution among a cluster of servers with specific processing and communication capabilities. In [14], a framework named ThinkAir has been proposed to fix the drawbacks of the two previously mentioned frameworks. In particular, ThinkAir introduces new mechanisms for resource management and simultaneous task execution. It focuses on the cloud's elasticity and scalability and improves the capacity of mobile cloud computing by using several virtual machine (VM) images for parallel process execution.
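The layer-selection principle behind DCC (compute-heavy tasks go up to the cloudlet or cloud layer; communication-heavy but compute-light tasks stay on the device layer) can be sketched as follows. This is an illustrative reading only: the `Task` fields and the threshold values are assumptions for the example, not values or an API from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_cost: float  # required computation (illustrative unit, e.g., CPU cycles)
    comm_cost: float     # data to transfer (illustrative unit, e.g., bytes)

def choose_layer(task, compute_threshold=100.0, comm_threshold=50.0):
    """Illustrative DCC placement rule:
    - high computing requirement      -> cloudlet/cloud layers
    - low computing, high communication -> device layer (nearby idle devices)
    - otherwise                        -> local execution on the originating device
    """
    if task.compute_cost > compute_threshold:
        return "cloudlet/cloud"
    if task.comm_cost > comm_threshold:
        return "device layer"
    return "local"

tasks = [Task("face_detect", 500, 10), Task("frame_upload", 5, 200), Task("ui", 1, 1)]
placement = {t.name: choose_layer(t) for t in tasks}
```

Here `face_detect` would be pushed upward, `frame_upload` would stay on the device layer, and `ui` would run locally; the real framework replaces the fixed thresholds with the profiling and classification components described later.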
CT_j(t) = { t : t_finish − t_start }    (6)

t_start = inf_{0≤t≤τ} { t : ||X_i(t) − X_j(t)|| ≤ R }

t_finish = inf_{0≤t≤τ} { t : ∀ t' ≥ t ∩ t' ≤ τ, ||X_i(t') − X_j(t')|| > R }.    (7)

Fig. 5. DCC computing model for IoT devices.

D. System Model Description

We define a global middleware (GMDW) and a local middleware (LMDW) that work as SDN controllers to fulfill the users' offloading requests at the edge and to monitor the network and mobile device states. The GMDW monitors and redirects the traffic across the whole network between the static cloudlet nodes (SC), while the LMDW focuses on managing the local explored area. Fig. 5 shows the architecture model, where all mobile devices can subscribe to profit from the SC resources. In this architecture, users notify their states, including location, storage capacity, energy level, and CPU, to the corresponding SC. The GMDW can enhance the network throughput by offloading data to the less overloaded servers across the network according to the users' density variation in different areas. The LMDW can control and track users despite their mobility.

Fig. 5 also displays the components of the global and local middleware (GMDW, LMDW). First, the GMDW consists of a static cloudlet profiler (SCP), which holds information concerning the status of the supervised SC nodes (server load, location, processing power, etc.), and a static cloudlet network profiler (SCNP), which monitors the network traffic over the various SCs available in the network, while a static cloudlet request resolver (SCRR) is in charge of resolving the offloaded heavy tasks. Finally, a task offloading decision (TOD) component is responsible for TODs among the SCs based on the gathered information (SC states, network states, and the offloaded requests).

Second, the LMDW consists of task and resource management components, in which all collected information concerns the edge device states and the offloaded tasks. The Resource Manager involves a resource classifier (RC) and a Profiler: the RC classifies the available resources on the network using the Profiler metadata, while the Profiler is responsible for monitoring the local network and device states. In addition, the Task Manager includes a Task Classifier and a Solver, where the Task Classifier ranks the tasks according to their features and constraints, and the Solver takes responsibility for sending and receiving the offloading requests toward nodes according to the scheduling decision. Based on the resource and task information provided by the previously mentioned components, a local task offloader (LTO) offloads the computing tasks to the suitable devices using task partition and resource allocation algorithms, which will be discussed in the next sections, taking into account the task deadlines, their dependencies, and the devices' connectivity.

E. Optimal Decision of Task Offloading

Since not all application tasks are suitable for remote execution as a whole, binary and partial offloading strategies have been introduced in [26]–[28], as shown in Fig. 6. Binary offloading requires a task to be executed as a whole, either locally at the device or remotely on the MEC server. Partial offloading allows a task to be partitioned into two parts, some executed locally and some executed remotely. In practice, binary offloading is easier to implement and suitable for simple tasks that are not partitionable, while partial offloading is favorable for complex tasks that include multiple methods, such as augmented reality and face recognition applications.
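The contact time defined in (6) and (7) can be read operationally: it is the length of the interval during which the distance between the positions of nodes i and j stays within radius R. A minimal discrete-time sketch, under the assumption that each trace is a list of (x, y) positions sampled at unit time steps over [0, τ] (the trace format and the "last contact instant" reading of t_finish are simplifications, not the paper's exact model):

```python
import math

def contact_time(trace_i, trace_j, R):
    """Discrete analogue of (6)-(7). trace_i, trace_j: lists of (x, y)
    positions sampled at unit time steps over [0, tau].
    Returns (t_start, t_finish, CT): t_start is the first instant where the
    distance is <= R, t_finish the last instant still within R (after which
    the distance stays > R until tau), and CT = t_finish - t_start.
    Returns None if the nodes never come within range R."""
    dist = [math.dist(p, q) for p, q in zip(trace_i, trace_j)]
    t_start = next((t for t, d in enumerate(dist) if d <= R), None)
    if t_start is None:
        return None  # the nodes never meet within [0, tau]
    t_finish = t_start
    for t in range(t_start, len(dist)):
        if dist[t] <= R:
            t_finish = t
    return t_start, t_finish, t_finish - t_start
```

For example, a stationary node at the origin and a node passing by at distances 10, 3, 1, 2, 9 with R = 3 are in contact from t = 1 to t = 3, giving a contact time of 2 time steps.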
Fig. 8. Tasks dependency cases.

paths, such as P1 (T3, T7, T9) assigned to N1, P2 (T4, T6, T8) assigned to N2, and (T2, T5) assigned to N3. Based on the execution flow, we partition our task graph between I and T in terms of communication usage according to the heaviest execution paths, which is considered a near-optimal execution strategy, and we denote the communication cost by CC_{i,j}, where

CC_{i,j} = in_{i,j} / B_upload + out_{i,j} / B_download.    (11)

Besides that, we represent the TOD in two major cases, as illustrated in Fig. 8.

Case 1: N1 offloads tasks to N2 and N3. N2 executes the task offloaded by N1 and sends the output results to N3 to resume its task execution. Distributing tasks among the cluster nodes should be done in an efficient manner to reduce the communication overhead, energy consumption, and delay. Hence, we propose a greedy task partition algorithm that seeks a near-optimal allocation to achieve efficient offloading according to the available computing resources.

Case 2: N1 offloads tasks to Nc and waits for the result back to resume its execution. It is not practical to rely on a single edge server in dense networks due to the enormous number of requests resulting from the edge devices. When the edge server is unable to handle the assigned tasks, it offloads part of them to its nearby edge servers, where smart selection methods are required, which is out of the scope of the current work and can be considered as future work.

Since the application may include dependent and independent tasks, we focus on the dependent communication tasks (shared tasks) that burden the network due to their high interaction cost. We minimize the interaction between nodes by mapping tasks with high interactions onto the same computing node, as illustrated in Fig. 7. This reduces the communication cost and task execution time, and extends the network lifetime. Consequently, we define the application overall execution time Time_app within the computing cluster as follows:

Time_app = Time_{n_i} + Time_{e_i}    (12)

where Time_{n_i} indicates the task execution time at nearby devices and Time_{e_i} indicates the execution time at the edge nodes.

Time_{n_i, N_d∈Q_j} = Σ_{i=1}^{M} Σ_{j=1}^{|Q_j|} [ C(δ^{u,d}_{i,j}) + UD(δ^{u,d}_{i,j}) + W(δ^{u,d}_{i,j}) ].    (13)

N_d denotes the selected node to perform the assigned task δ^{u,d}_{i,j} of node N_u, where C(δ^{u,d}_{i,j}) represents the computing time of task j at node i, UD(δ^{u,d}_{i,j}) represents the data upload and download time, and W(δ^{u,d}_{i,j}) denotes the waiting time for receiving data.

More formally, our proposed task scheduling and allocation algorithm can be defined as the following 0–1 integer linear programming (ILP) problem with decision variable d_{i,j}, where d_{i,j} = 1 indicates that the jth task is assigned to the remote node i:

minimize   Σ_{i=1}^{|Q|} Σ_{j=1}^{|M|} CC_{i,j} · d_{i,j}    (14)

subject to Σ_{j=1}^{|M|} d_{i,j} T_{i,j} ≤ CT_i(τ),  i = 1, 2, ..., |M|    (15)

           Σ_{i=1}^{|Q|} d_{i,j} c(δ^u_i) ≤ c_i(τ),  i = 1, 2, ..., |Q|    (16)

           d_{i,j} ∈ {0, 1},  i = 1, ..., |Q|;  j = 1, ..., |M|.    (17)

5) Centralized Task Scheduling Algorithm Description: Note that the response time of nearby computing nodes to deliver the output task result could vary depending on the computing capacity, connection time, and network condition. For that reason, we define an election strategy to evaluate the computing nodes' eligibility within each mobile cloudlet.

1) Qualified Nodes Election Process: Node mobility could be a barrier during the offloading process, leading to task failure. Therefore, we set task computation and CT thresholds to keep the task execution time within the task deadline among the cluster S_i nodes. Moreover, we require adequate energy e_j to support the task completion time. Note that the CT, which can be obtained using a prediction model [29], should be sufficient to meet the task needs, while the exchange time (ET) is defined to ensure the successful data transfer between the originator and the neighbor nodes. Hence, the selection process must satisfy the following condition:

CT_j > C(δ^{u,d}_{i,j}),  ET_j > D(δ^{u,d}_{i,j}),  and  e_j > E.

2) Task Graph Partition: After determining the eligible node sets Q within the mobile cloudlets, we introduce a greedy task graph offloading partition algorithm for the acyclic task graph G_t. The algorithm takes G_t as input, with task nodes represented as vertices and communication links between nodes represented as edges. Note that the communication cost between tasks at the same node is negligible.

a) Tasks classification: First, we distinguish tasks into different sets of unoffloadable tasks (Local), computation tasks (heavy), and communication
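A minimal sketch of the greedy idea described above: each offloadable task is placed on the eligible node that currently minimizes its added communication cost from (11), preferring a node that already hosts one of its dependencies, since intra-node communication is treated as negligible. The dictionary-based data structures, the capacity counter, and the assumption that tasks arrive in dependency (topological) order are simplifications for illustration, not the paper's exact GTGP algorithm:

```python
def comm_cost(task, B_up, B_down):
    # Eq. (11): CC = in / B_upload + out / B_download
    return task["in"] / B_up + task["out"] / B_down

def greedy_partition(task_graph, nodes):
    """task_graph: dict task -> {"in": data in, "out": data out, "deps": [tasks]},
    iterated in topological order (dependencies before dependents).
    nodes: dict node -> {"B_up": .., "B_down": .., "capacity": task slots}.
    Greedily assign each task to the node minimizing its transfer cost;
    co-location with an already-placed dependency costs nothing."""
    assignment = {}
    for task, spec in task_graph.items():
        dep_nodes = {assignment[d] for d in spec["deps"] if d in assignment}
        best, best_cost = None, float("inf")
        for name, node in nodes.items():
            if node["capacity"] <= 0:
                continue  # node has no free task slots
            cost = 0.0 if name in dep_nodes else comm_cost(spec, node["B_up"], node["B_down"])
            if cost < best_cost:
                best, best_cost = name, cost
        assignment[task] = best
        nodes[best]["capacity"] -= 1
    return assignment
```

With a fast node n1 and a slow node n2, a task and its dependent both land on n1: the first because n1's bandwidth makes its transfer cost lowest, the second because co-location with its dependency is free.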
TABLE I
EXPERIMENT PARAMETER SETTING

Fig. 16. Response time within different architectures.

and then it starts increasing as the video size scales. We note that offloading over DCC showed the lowest response time compared to CloudAndCloudlet and Cloudlet only, confirming that offloading through DCC can improve the system performance.

4) Energy Consumption Within Different Offloading Strategies: Fig. 17 shows the average communication energy consumed within the different computing clusters using the random and uniform offloading strategies and the GTGP offloading algorithm. In order to measure the communication energy, we select random devices within clusters 1 and 2 that have insufficient energy; each cluster consists of at most ten devices. The results show that the energy usage of the GTGP offloading algorithm is the lowest compared to the two other strategies. This is due to the task assignment strategy during the offloading process. With the random/uniform offloading strategies, tasks are assigned equitably/inequitably to the surrounding computing nodes, which consumes more resources, whereas with our GTGP algorithm the assignment among computing clusters is made based on the devices' capacities and their locations, which reduces energy consumption. We note that the energy consumed by cluster 1 is higher than in the other two clusters, since the randomly selected nodes tend to offload their computing tasks within the same cluster rather than to nodes in other clusters.

V. CONCLUSION

In this article, we have addressed the task and resource allocation problem of multiple tasks of a single application within the DCC environment, taking into account the task dependencies and user mobility, and we have introduced a greedy task graph partition (GTGP) offloading algorithm, in which the task scheduling process is guided by the device computing capabilities, following a greedy optimization approach to minimize the task communication cost. Over trace-driven and randomized simulations, the results show that our GTGP algorithm effectively outperforms the random and uniform offloading strategies. In addition, we built a framework that works as an SDN controller managing the offloading process in a centralized manner. Moreover, we implemented a compute-intensive mobile application to operate in the DCC architecture, which includes infrastructure-based cloudlets, mobile cloudlets, and the cloud. The experiment results also show that the performance of our proposed mechanism is excellent.

REFERENCES

[1] P. Srivastava and R. Khan, "A review paper on cloud computing," Int. J. Adv. Res. Comput. Sci. Softw. Eng., vol. 8, no. 6, p. 17, Jun. 2018.
[2] H. Bangui, S. Rakrak, S. Raghay, and B. Buhnova, "Moving to the edge-cloud-of-things: Recent advances and future research directions," Electronics, vol. 7, no. 11, p. 309, Jun. 2018.
[3] Y. He, J. Ren, G. Yu, and Y. Cai, "D2D communications meet mobile edge computing for enhanced computation capacity in cellular networks," IEEE Trans. Wireless Commun., vol. 18, no. 3, pp. 1750–1763, Mar. 2019.
[4] D. Kovachev and R. Klamma, "Framework for computation offloading in mobile cloud computing," Int. J. Interact. Multimedia Artif. Intell., vol. 1, no. 7, pp. 6–15, 2012.
[5] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: The communication perspective," IEEE Commun. Surveys Tuts., vol. 19, no. 4, pp. 2322–2358, 4th Quart., 2017.
[6] F. Messaoudi, A. Ksentini, and P. Bertin, "On using edge computing for computation offloading in mobile network," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Jul. 2017, pp. 1–7.
[7] Q. Tang, L. Chang, K. Yang, K. Wang, J. Wang, and P. K. Sharma, "Task number maximization offloading strategy seamlessly adapted to UAV scenario," Comput. Commun., vol. 151, pp. 19–30, Feb. 2020.
[8] H. Yuan, J. Bi, M. Zhou, J. Zhang, and W. Zhang, "Profit-maximized task offloading with simulated-annealing-based migrating birds optimization in hybrid cloud-edge systems," in Proc. IEEE Int. Conf. Syst. Man Cybern. (SMC), Oct. 2020, pp. 1218–1223. [Online]. Available: https://ieeexplore.ieee.org/document/9283467/
[9] Y.-H. Kao, B. Krishnamachari, M.-R. Ra, and F. Bai, "Hermes: Latency optimal task assignment for resource-constrained mobile computing," IEEE Trans. Mobile Comput., vol. 16, no. 11, pp. 3056–3069, Nov. 2017.
[10] K. Habak, M. Ammar, K. A. Harras, and E. Zegura, "Femto clouds: Leveraging mobile devices to provide cloud service at the edge," in Proc. 8th Int. Conf. Cloud Comput., New York, NY, USA, 2015, pp. 9–16.
[11] S. Sundar and B. Liang, "Communication augmented latest possible scheduling for cloud computing with delay constraint and task dependency," in Proc. IEEE Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), Sep. 2016, pp. 1009–1014.
[12] B. G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, "CloneCloud: Elastic execution between mobile device and cloud," in Proc. 6th Conf. Comput. Syst. (EuroSys), 2011, pp. 301–314.
[13] X. Wei et al., "MVR: An architecture for computation offloading in mobile edge computing," in Proc. IEEE 1st Int. Conf. Edge Comput. (EDGE), 2017, pp. 232–235.
[14] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, "ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading," in Proc. IEEE INFOCOM, Orlando, FL, USA, 2012, pp. 945–953.
[15] C. Wang and Z. Li, "Parametric analysis for adaptive computation offloading," in Proc. ACM SIGPLAN Conf. Progr. Lang. Design Implement. (PLDI), vol. 1, 2004, pp. 119–130.
[16] O. Castro-Orgaz and W. H. Hager, "Computation of steady transcritical open channel flows," in Shallow Water Hydraulics. Cham, Switzerland: Springer, 2019, pp. 183–200.
[17] Y. Inag, M. Demirci, and S. Ozemir, "Implementation of an SDN based IoT network model for efficient transmission of sensor data," in Proc. 4th Int. Conf. Comput. Sci. Eng. (UBMK), 2019, pp. 682–687.
[18] M. Jia, J. Cao, and L. Yang, "Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing," in Proc. IEEE Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), 2014, pp. 352–357.
[19] M.-A. H. Abdel-Jabbar, I. Kacem, and S. Martin, "Unrelated parallel machines with precedence constraints: Application to cloud computing," in Proc. IEEE 3rd Int. Conf. Cloud Netw. (CloudNet), Nov. 2014, pp. 438–442.
[20] Y. H. Kao and B. Krishnamachari, "Optimizing mobile computational offloading with delay constraints," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Feb. 2014, pp. 2289–2294.
[21] E. Cuervo et al., "MAUI: Making smartphones last longer with code offload," in Proc. 8th Int. Conf. Mobile Syst. Appl. Services, 2010, pp. 49–62.
[22] D. Mazza, D. Tarchi, and G. E. Corazza, "A cluster based computation offloading technique for mobile cloud computing in smart cities," in Proc. IEEE Int. Conf. Commun. (ICC), Jul. 2016, p. 6.
[23] L. Xiang, B. Li, and B. Li, "Coalition formation towards energy-efficient collaborative mobile computing," in Proc. Int. Conf. Comput. Commun. Netw. (ICCCN), Las Vegas, NV, USA, Oct. 2015, pp. 1–8.
[24] N. Shi, X. Liu, and Y. Guan, "Research on k-means clustering algorithm: An improved k-means clustering algorithm," in Proc. 3rd Int. Symp. Intell. Inf. Technol. Security Informatics (IITSI), 2010, pp. 63–67.
[25] S. Deng, L. Huang, J. Taheri, and A. Y. Zomaya, "Computation offloading for service workflow in mobile cloud computing," IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 12, pp. 3317–3329, Dec. 2015.
[26] Y. Lan, X. Wang, C. Wang, D. Wang, and Q. Li, "Collaborative computation offloading and resource allocation in cache-aided hierarchical edge-cloud systems," Electronics, vol. 8, no. 12, p. 1430, 2019.
[27] Y. He, J. Ren, G. Yu, and Y. Cai, "D2D communications meet mobile edge computing for enhanced computation capacity in cellular networks," IEEE Trans. Wireless Commun., vol. 18, no. 3, pp. 1750–1763, Mar. 2019.
[28] H. Wu, W. Knottenbelt, K. Wolter, and Y. Sun, "An optimal offloading partitioning algorithm in mobile cloud computing," in Proc. Int. Conf. Quantitative Evaluation Syst. Cham, Switzerland: Springer, 2016, pp. 311–328.
[29] L. Luo and B. E. John, "Predicting task execution time on handheld devices using the keystroke-level model," in Proc. Int. Conf. Human Factors Comput. Syst., 2005, pp. 1605–1608.
[30] A. Khanna, A. Kero, and D. Kumar, "Mobile cloud computing architecture for computation offloading," in Proc. 2nd Int. Conf. Next Gener. Comput. Technol. (NGCT), Oct. 2017, pp. 639–643.
[31] A. Khanna, A. Kero, and D. Kumar, "Mobile cloud computing architecture for computation offloading," in Proc. 2nd Int. Conf. Next Gener. Comput. Technol. (NGCT), Oct. 2016, pp. 639–643.
[32] Y. Zhang, H. Liu, L. Jiao, and X. Fu, "To offload or not to offload: An efficient code partition algorithm for mobile cloud computing," in Proc. 1st IEEE Int. Conf. Cloud Netw. (CLOUDNET), 2012, pp. 80–86.

Abdenacer Naouri received the B.S. degree in computer science from the University of Djelfa, Djelfa, Algeria, in 2011, and the M.Sc. degree in networking and distributed systems from the University of Laghouat, Laghouat, Algeria, in 2016. He is currently pursuing the Ph.D. degree with the University of Science and Technology Beijing, Beijing, China. His current research interests include cloud computing, smart communication, machine learning, Internet of Vehicles, and Internet of Things.

Hangxing Wu received the B.S. degree in automation control from Chang'an University, Xi'an, China, in 2001, and the Ph.D. degree in control science and technology from Northwestern Polytechnical University, Xi'an, in 2008. He is currently an Associate Professor with the School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China. His current research interests include flow control and congestion control, high-speed networks, data center networks, and mobile edge computation.

Nabil Abdelkader Nouri received the Engineering degree in computer sciences from the University of Laghouat, Laghouat, Algeria, in 2003, and the magister degree in networking and distributed systems from the University of Bejaia, Bejaia, Algeria, in 2007. His current research interests include wireless networking design, Internet of Things, performance evaluation, fog computing, and optimization.

Sahraoui Dhelim (Member, IEEE) received the B.S. degree in computer science from the University of Djelfa, Djelfa, Algeria, in 2012, the master's degree in networking and distributed systems from the University of Laghouat, Laghouat, Algeria, in 2014, and the Ph.D. degree in computer science and technology from the University of Science and Technology Beijing, Beijing, China, in 2020. His current research interests include social computing, personality computing, user modeling, interest mining, recommendation systems, and intelligent transportation systems.

Huansheng Ning (Senior Member, IEEE) received the B.S. degree from Anhui University, Hefei, China, in 1996, and the Ph.D. degree from Beihang University, Beijing, China, in 2001. He is currently a Professor and the Vice Director of the School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing. He is a Visiting Chair Professor with Ulster University, Coleraine, U.K. He has presided over many research projects, including the Natural Science Foundation of China and the National High Technology Research and Development Program of China (863 Project). He has published more than 150 journal/conference papers and authored five books. His current research focuses on the Internet of Things and general cyberspace. Prof. Ning is the Founder and the Chair of the Cyberspace and Cybermatics International Science and Technology Cooperation Base. He serves as an Area Editor for the IEEE INTERNET OF THINGS JOURNAL from 2020 to 2022, and in editor roles for some other journals.