
Open access

A Survey on Edge Intelligence and Lightweight Machine Learning Support for Future Applications and Services

Published: 22 June 2023

Abstract

As the number of devices connected to the Internet has grown, so too has the intensity of the tasks that these devices need to perform. Modern networks increasingly perform computationally intensive tasks on low-power devices and low-end hardware. Current architectures and platforms tend towards centralized and resource-rich cloud computing approaches to address these deficits. However, edge computing presents a much more viable and flexible alternative. Edge computing refers to a distributed and decentralized network architecture in which demanding tasks such as image recognition, smart city services, and high-intensity data processing can be distributed over a number of integrated network devices. In this article, we provide a comprehensive survey of emerging edge intelligence applications, lightweight machine learning algorithms, and their support for future applications and services. We start by analyzing the rise of cloud computing, discuss its weak points, and identify situations in which edge computing provides advantages over traditional cloud computing architectures. We then outline the structure of the survey: the first section identifies opportunities and domains for edge computing growth, the second identifies algorithms and approaches that can be used to enhance edge intelligence implementations, and the third analyzes situations in which edge intelligence can be enhanced using any of the aforementioned algorithms or approaches; it is in this third section that lightweight machine learning approaches are detailed. A more in-depth analysis and discussion of future developments follows. The primary aim of this article is to ensure that appropriate approaches, particularly lightweight machine learning approaches, are applied adequately to artificial intelligence implementations in edge systems.

1 Introduction

Modern software depends on artificial intelligence (AI) algorithms for a wide array of purposes. From image filters to efficient route planning, artificial intelligence impacts nearly every aspect of our day-to-day lives in contemporary society. Artificial intelligence algorithms, generally speaking, perform some sort of reasoning based on a given set of data. Programs that leverage artificial intelligence require a great deal of data to be provided and analyzed, through which an approach to reason about that information can be developed, tested, and implemented. In some cases, this may be as simple as playing and winning a simple two-player game against a real human through a simple terminal interface. In other cases, this may require analysis of complex real-world imagery, prior knowledge from previous encounters, and more to determine a way forward. Algorithms take a wide array of approaches to solving problems: some leverage mathematical principles, while others make use of statistical measures and machine learning. The growth of more complicated, computationally intensive algorithms, including deep learning and recurrent models, requires that time-efficient and space-saving hardware be available to and accessible by users at any time to ensure that the needs of their software are being met.
Historically, computing occurred with a single computer and several terminals interacting with it. All computing was centralized around a single device that numerous agents could converse with. As developments in hardware and software occurred, computing moved from the centralized to the localized. Personal computers allowed more tasks to be performed in a shorter amount of time. Instead of queuing tasks from several users to be performed in sequence, the personal computer allowed users to process tasks more immediately and directly. These developments required many new networks of computers to be developed and interconnected. Difficulties in keeping machines in sync with one another emerged from these changes, however. Thus, servers took on the role of providing resources or commands to allow multiple clients to communicate more effectively with one another. Server outages and downtime would impact work, which eventually led to the development of cloud computing services. With a reliable framework to keep resources and tasks operational, more work can be done on the part of the user or the machine. Applications have since grown to demand serious amounts of resources and processor power, much of it devoted to tools such as artificial intelligence and machine learning.
Cloud computing has been used to perform the task of making artificial intelligence systems, as well as other systems, reliable, secure, and responsive to users. Cloud computing traditionally operates in a restricted private platform through which users can create, manage, and deploy any variety of applications they desire. These platforms are Internet enabled, meaning that both developers and users can access them from anywhere. Further, cloud providers often have several data centers across the globe to make access simple and fast from any location. Many also provide different service packages and hardware tiers, creating convenient customization tools for developers to ensure that their applications are running optimally in all cases.
Cloud computing has also enabled the growth of industry. Any company using artificial intelligence to leverage its business can benefit from the power that cloud computing provides. By allowing companies and other entities to make software applications and services capable of leveraging the benefits of constant uptime, prolonged versioning support, and data visualization and analysis, great headway has been made in an effort to create automated data reports. Many devices are now equipped with any number of sensors and transmitters that not only record measurements or readings but also transmit them to other locations for batch analysis.
While the use of cloud computing has been effective for some of this analysis, there are still issues. One prominent case in particular involves governments transmitting data to privately owned computers [1]. To address these issues, some groups have explored the notion of performing these tasks within their ever-growing network of Internet of Things (IoT) devices. By combining approaches and being able to perform this analysis within their network, goals of increased performance, lower latency, and quicker responses can be achieved. This has brought about the development of edge computing. Edge computing serves to bring the potentialities of cloud architectures to a more localized environment.
Edge computing, like cloud computing, has dedicated platforms in which certain applications, processes, or data can be managed, stored, or controlled. Edge computing benefits from being localized in proximity to the user, leading to several differences from traditional cloud computing architecture. Due to network delays, cloud computing traditionally receives batches of data at once in large quantities. Edge computing, being locally implemented, can operate closer to where data are generated, working with each reading or record as it arrives rather than with a massive batch. As such, edge computing creates a number of new challenges. Traditional artificial intelligence structures and algorithms are designed to handle large quantities of data quickly. Edge computing projects need to perform comparably and return similar results when they are supplied with only small numbers of records at a time. This has led to some developments that focus on algorithm and implementation structure in edge computing systems that differ strongly from cloud architectures. As shown in Figure 1, edge computing allows for decisions to be handled at a local level in the edge, without constant need to access cloud services.
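The difference between cloud-style batch analysis and edge-style per-record processing can be illustrated with an online (streaming) statistic. The sketch below is our illustration, not drawn from any surveyed work: it uses Welford's algorithm, which updates a mean and variance one reading at a time, so an edge node never needs to store or transmit the raw batch.

```python
# Illustrative sketch: a running statistic an edge node might maintain
# per sensor reading, versus a cloud-style pass over the full batch.
# Welford's algorithm updates mean and variance one record at a time.

class OnlineStats:
    """Running mean/variance over a stream of readings."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / self.n if self.n > 1 else 0.0


readings = [20.1, 20.4, 19.8, 21.0, 20.6]

# Edge style: consume each reading as it arrives.
stats = OnlineStats()
for r in readings:
    stats.update(r)

# Cloud style: the same quantities computed from the full batch.
batch_mean = sum(readings) / len(readings)
batch_var = sum((r - batch_mean) ** 2 for r in readings) / len(readings)

assert abs(stats.mean - batch_mean) < 1e-9
assert abs(stats.variance - batch_var) < 1e-9
```

Both paths yield identical results, but the streaming version holds only three numbers in memory regardless of how many readings arrive, which is the property that makes per-record processing attractive on low-end edge hardware.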
Fig. 1. Cloud-edge interactions architecture.
At this point, the question arises as to whether edge computing can compete with the current capabilities of cloud architectures. Simply put, this depends on the specific task at hand. Edge computing structures are very effective at mobilizing data because components communicate over short, low-latency links. Smaller data samples can be transported quickly, which allows analysis to proceed at a rapid pace. These and many other differences have led to the need for algorithms to be adapted to deal more effectively and efficiently with the differing amounts of data at any point in the network.
Edge intelligence is the result of the movement to localize the artificial intelligence support of cloud computing capabilities. It is also supported by the emerging communication technologies such as 5G and the future 6G technologies [2, 3]. By bringing the technology of cloud computing into a more controlled environment that users can leverage for their own ends, edge computing enhances the provisions of a traditional distributed cloud architecture. This edge computing growth has spurred the development of more efficient, lightweight means by which to run artificial intelligence algorithms and applications. In some cases, this can even entail the dissemination of data across a wide array of localized devices and recompilation of them at the end of the process [4]. Other approaches have searched for a means by which to reduce the overhead and runtime of the process by managing data more efficiently. Regardless of the approach, the growth of edge computing systems and edge intelligence has led to some major developments in the field. With such a wide array of approaches to implementing these algorithms in an edge environment, the difficulty of mapping developments, tracking changes, and deciding where to search for further developments can be overwhelming.
Current research on Edge Intelligence technologies focuses on two main aspects, as defined by Deng et al. [5]: intelligent wireless networking and the edge’s intelligence itself. Intelligent wireless networking refers to the underlying structure that the edge system must use in order to function efficiently, covering factors such as network management, resource handling, and communication process optimization. The intelligence side, meanwhile, focuses on exploiting advanced machine learning (ML) models in edge services and edge resource management to improve the quality of service for users. This includes the techniques used for data collection and preparation at the edge, the training of AI models, the implementation of edge intelligence models, and the deployment of these models at edge servers [5]. More recent research also focuses on exploiting edge resource capabilities to enable emerging Federated Learning systems and to improve user data security and privacy [6]. This is expanded upon in Section 3.

1.1 Current Edge Intelligence Surveys

With the recent growth of edge systems, it comes as no surprise that other surveys on the topic have appeared in recent years. Many of the currently existing surveys in edge computing are rather similar. The majority of these surveys have provided general background information on edge computing systems as compared with the current state of affairs in distributed computing. Many then move into a more specific subdomain in which edge systems can be applied, including one that focuses on mobile devices and Mobile Edge Computing (MEC) by Abbas et al. [7, 8].

1.2 Contributions of This Survey

This survey adheres to the general structure by describing the general background of edge systems but differentiates itself from other surveys by focusing on not only edge computing but also edge intelligence, lightweight machine learning, and the implementation specifically needed to ensure that edge intelligence is viable.
Edge systems and lightweight machine learning algorithms complement each other well, as each plays to the strengths of the other. Lightweight machine learning allows powerful decision-making processes to run on systems with lower-end hardware, while edge systems can use their many endpoints and sensors to generate data for the lightweight models to consume. Together, the two allow powerful machine learning to be used on edge systems.
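To make the "lightweight" idea concrete, the sketch below shows our own minimal illustration of the pattern: a model whose weights are hypothetical stand-ins for parameters trained offline (for example, in the cloud), while only the cheap inference step, a handful of multiply-adds, runs on the edge device.

```python
# Illustrative sketch only: a lightweight model in the spirit described
# above. WEIGHTS and BIAS are hypothetical values standing in for a
# model trained offline; inference here needs no ML libraries at all.
import math

WEIGHTS = [0.8, -0.5, 0.3]  # hypothetical, learned offline
BIAS = -0.1


def predict(features):
    """Logistic-regression inference: a few multiply-adds plus a sigmoid."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))


def classify(features, threshold=0.5):
    """Binary decision suitable for a low-power endpoint."""
    return predict(features) >= threshold
```

Inference cost is linear in the number of features and uses no matrix libraries, which is why models of this shape (and their quantized or pruned deep-learning counterparts) fit on sensor-class hardware.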
We devote an entire section to algorithms and approaches that could be used to improve upon the deficiencies that exist within current edge applications and instances. Lightweight machine learning models are presented and discussed with reference to how they can be utilized within edge systems. We also explore optimizations for existing edge intelligence systems and describe how these could serve to further develop the field of edge computing.

1.3 Organization of the Survey

Addressing and working to resolve conflicts in the current space of edge computing is the purpose of our research. In the following survey, we will discuss edge intelligence as it has developed, optimizations of edge intelligence and how these systems could be enhanced or improved, and where future developments hope to shape the field.
The structure of the discourse and a brief summarization of the analysis follows:

1.3.1 Section 1: Introduction.

This introduction presents a brief overview of the necessary background regarding the advent of edge intelligence, without going into great depth. It also provides an outline of the entire survey.

1.3.2 Section 2: Growth of Edge Computing and Edge Intelligence.

Section 2 elaborates more specifically on the background of each of these elements, citing further information on how each element has grown and changed, the challenges that have been or may need to be faced, and where current developments presently stand. This section identifies fields and domains that edge intelligence developments may serve to enhance. It also describes, without a great deal of analysis, how edge intelligence can and has recently met its goals.

1.3.3 Section 3: The Survey.

The entire survey comprises Section 3 of the article. The survey intends to be a holistic, though by no means entirely comprehensive, overview of the growth of edge intelligence in different facets. In the survey, we focus on exploring other research into edge intelligence or other developments in which edge intelligence has proven to be an effective solution to a specific problem or challenge. The first subdivision of the survey analyzes proposals in which edge intelligence–based solutions are put forth as an appropriate response to a situation. In some cases, these articles have not implemented edge intelligence in any practical manner; they simply discuss edge intelligence solutions or describe a means by which a new approach to edge intelligence can be implemented. However, the majority of the articles’ authors have worked with the implementation of edge intelligence in some way. The second subdivision discusses practical and sound edge intelligence approaches that have been implemented and demonstrated to some capacity with regard to a specific domain of study. This section focuses on the results of these experiments and highlights how they may be unique or unforeseen implementations of edge intelligence systems. The following subdivision focuses on Lightweight Machine Learning architectures and how to leverage them to improve the performance of edge systems. The final subdivision describes challenges that have been encountered or challenges that have been theorized by other authors. This subsection discusses them briefly and segues into the next section.

1.3.4 Section 4: Findings, Analysis, Thoughts, Future Directions, Suggestions and Advice for Future Research.

Section 4, following the survey, presents a culmination of our understanding of the current state of affairs as it pertains to edge intelligence. In this section, we put forth our own analysis of the most pressing developments in the domain of edge intelligence. Each of the most significant developments from the survey is revisited, and the implications of each are discussed in further detail. We also put forth our own representation of which attributes of the domain are the most promising for future research in the field. It is of note that this section is representative of our own understanding of the field and that claims and assertions therein, though based on the facts presented, may not necessarily be unbiased, as each of the authors has a unique outlook and understanding of what may benefit the field most. Following this is Section 5, with a conclusion that reasserts the initial claims put forth, provides another opportunity to discuss some of the findings, and terminates the article after highlighting significant observations.

2 The Growth of Edge Intelligence

This section consists of a discussion of Edge Intelligence and its growth. We first describe the more recent changes and issues that have led to the need for growth in Edge Intelligence. This is followed by a discussion of how Edge Intelligence can work to resolve the challenges faced with contemporary approaches.

2.1 Cloud Computing and Its Pitfalls

As technology has developed, hardware and network resource needs have changed. Research in recent years has funnelled hundreds of thousands of dollars into searching for methods to enhance modern cloud computing approaches. A fair selection of this research explains that with current trends in hardware and software products, many more products are being directly connected to the Internet [9]. This growth suggests that current trends in cloud computing are unsatisfactory. Qi and Tao remark that “different factors, such as the network unavailability, overfull bandwidth, and latency time, restrict its [cloud-based smart manufacturing’s] availability for high-speed and low-latency real-time applications.” Simply put, modern cloud architectures are rapidly becoming insufficient for current hardware and software.
Being able to keep up to date with modern trends in the technology industry is a critical responsibility of any current or future software developer. When traditional approaches fail to accommodate modern technological practices, the approach is more often than not left by the wayside as superior alternatives become better known. Being able to adapt and respond to changes is the hallmark of a forward-thinking group. Implementing these changes as necessary is a critical component of staying competitive in the modern economic system.
Growth and change in hardware and software are not new. This is an ongoing challenge that providers and companies must be able to adapt to. Suresh et al. have described in great detail how changes in hardware, focusing on the Internet of Things, have occurred in recent years [10]. This review details historical developments in the IoT domain, with prominent statistics provided to document where and when changes in the field have occurred.
The Internet of Things is a primary driving force in the growth of intelligent edge computing systems. Devices that can read data and process it themselves, or transmit data to a local machine for processing before it is uploaded, are a major factor for consideration. Being able to leverage existing hardware for computational tasks is beneficial to all parties involved in the system. Processing data with machine learning or artificial intelligence approaches is the current goal for these technologies.
Recent papers have explored performing high-end artificial intelligence experiments on low-end technology. Hassan et al. describe an experiment in which cell phones are used to manage human–computer interaction activities that work with several different feature clustering algorithms [11]. Generally speaking, these algorithms can take time on lower-end devices. The advent of this sort of experimentation, combined with a series of costly efforts pertaining to traditional cloud computing paradigms, makes for a difficult set of challenges to overcome.
As has been examined, a great many issues arising from the growth and change in hardware and software limit the effectiveness of cloud computing. Beyond this rapid change in user needs, cloud computing is also rapidly increasing in cost of ownership with greatly reduced return on investment [12]. If businesses cannot maintain profits from cloud computing systems, more cost-effective, widely scalable approaches that meet the needs of modern corporate entities should be researched and implemented.
Corporate entities need to be able to maintain their bottom line while working with new technologies. It is incredibly difficult for organizations to evaluate the potential for loss or growth with a new technology. As this study shows, reduction of costs is a significant incentive to jump-start the switch to edge computing architectures.
In addition to the costs of cloud computing, there are plenty of other issues that make it difficult to implement in businesses. Specifically for smaller businesses, the overhead required to set up and maintain these systems can be detrimental when trying to implement them [13]. This is drastically impacted by the lack of non-technical support available for those running smaller industries, making other alternatives more appealing.
Hu et al. discuss both the advantages and disadvantages of cloud computing. One prominent point they make is that transferring data between cloud providers can be incredibly difficult [14]. As such, consumers who subscribe to one particular cloud service provider are almost certainly locked in. With little recourse to change providers, consumers can potentially be bullied or driven bankrupt through aggressive monopolistic practices on the part of providers. This possibility can discourage the use of cloud services.
Both of these discussions seemingly assert that, while the service that cloud computing provides is desirable among consumers and clients, there is a serious lack of understanding between service providers and corporations in regard to the exact needs of the user. Being able to bridge this gap, edge intelligence manages to present itself as a strong, workable solution to resolve these issues.
It is important to realize that there are serious challenges with modern cloud computing paradigms. Though the technology and structure of cloud services are strained by the current state of affairs, there are solutions that can be developed to alleviate that stress. Growth and change are necessary to ensure that experiments can be run quickly and effectively. Research now needs to move onward from cloud structures and explore new potentialities provided by edge computing.

2.2 Growth Potential for Edge Intelligence

Edge computing and its potential uses of artificial intelligence have presented themselves as a prominent alternative to traditional cloud computing paradigms. Edge computing is a means by which computer applications and hardware can work amongst each other in a decentralized architecture. Edge intelligence combines the distributed structure of edge computing with the modern need for artificial intelligence, machine learning, and other more complex algorithms. The goal of an intelligent edge computing system is to perform all of the resource-heavy tasks required of modern AI and ML software applications in a resource-limited environment, such as those found in IoT environs or on other devices with limited computing power [15]. Figure 2 shows the relation between edge networking and machine learning and how their interaction derives the concept of Edge Intelligence.
Fig. 2. The relation between edge networking and machine learning, and how their interaction derives the concept of Edge Intelligence.
Edge intelligence is only a recent development, and it can be conceptually difficult to grasp. Edge intelligence operations can occur in many different structures: some handle data locally for analysis at large, whereas others determine which data to ignore and which to send onward for further analysis. To understand edge intelligence in its entirety, it is worth examining several studies of its implementation.
Deng et al. describe the rise of edge intelligence in great detail [16]. This article focuses on the critical components necessary for an intelligent edge system while also presenting a way forward for future developments in the domain. An additional focus of this article is the division of AI into two groups: those to be used to promote the growth of edge intelligence and those to be used within edge systems themselves. The generalist approach of this article lays the groundwork for understanding edge intelligence as a whole.
Though edge intelligence is prominently concerned with artificial intelligence, there are several other prominent uses for edge networking systems. Any system in which work is distributed over a plurality of machines can provide even more insight into the approaches that edge systems need to operate at peak capacity.
Beyond artificial intelligence, Sahni et al. [17] work with edge systems, describing the potential for edge devices to be combined in a distributed system to perform low-level operations and decision-making tasks. This approach is notable for its parallels to the goal of intelligent edge systems. By distributing decision-making of any kind among components of a system, a final classification, grouping, or output can be generated. Both lower-level operations and application-level tasks can benefit from the approach that Sahni et al. describe.
Low-level systems are still the cornerstone of any architecture. Even with recent developments, being able to effectively manage these networks in an intelligent edge environment is crucial. Still, new developments are also worthy of study for their ability to contribute to edge intelligence. Developments in remote sensing, real-time decision-making, and other such approaches are critical to ensuring the success of certain modern applications.
Newer developments, including ride-sharing services and the increasingly discussed autonomous vehicles, have been researched heavily for their ability to impact regular commutes. One potentiality emerging from this is the need for self-piloting vehicles to communicate with one another. Edge intelligence can present itself as a distributed decision-making solution for these situations, as Yuan et al. describe [18]. In their proposed solution, an edge base station would be leveraged to coordinate content delivery among all devices communicating between each other.
The significance of the work of Yuan et al. [18] lies in the notion that data from a larger network can be subdivided and disseminated downward from a higher chain of command. While this is one single approach, there are several others. Some of these approaches include the idea that the reverse is also true, that data from a lower level can be filtered back to the top level. This notion is crucially important for data collectors and other low-end components. Figure 3 provides an approach comparable to that proposed by Yuan et al. [18], which outlines a traditional data workflow using a cloud computing architecture. Figure 4 also depicts an approach comparable to that proposed by Yuan et al. [18], which highlights a potential implementation using edge devices to limit which data are processed, reducing overall throughput costs.
Fig. 3. An approach comparable to that proposed by Yuan et al. [18], which outlines a traditional data workflow using a cloud computing architecture.
Fig. 4. An approach comparable to that proposed by Yuan et al. [18], which highlights a potential implementation using edge devices to limit which data are processed, reducing overall throughput costs.
Remote sensing devices in IoT implementations can produce incredibly large amounts of data. Contemporary approaches to handling big data involve collection and analysis of large datasets in one unit. Kolomvatsos and Anagnostopoulos suggest that IoT devices can be incorporated into edge intelligence networks to decide whether to analyze and sort data at the local level or remotely [19]. In this manner, only the most prominent data can be extracted to create a more optimized analysis.
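The local-versus-remote decision described above can be sketched in a few lines. The code below is our own illustration, not the algorithm of Kolomvatsos and Anagnostopoulos: an edge node keeps a local running baseline and forwards a reading for remote analysis only when it deviates notably, so only the most prominent data leave the device.

```python
# Illustrative sketch (not the surveyed authors' method): an edge node
# forwards a reading upstream only when it deviates from a local running
# baseline by more than a threshold; unremarkable readings stay local.

def filter_stream(readings, threshold=2.0):
    """Return the subset of readings worth sending for remote analysis."""
    forwarded = []
    baseline = None
    for x in readings:
        if baseline is None:
            baseline = x  # first reading seeds the baseline
            continue
        if abs(x - baseline) > threshold:
            forwarded.append(x)  # prominent: send upstream
        # Update the local baseline either way (exponential moving average).
        baseline = 0.9 * baseline + 0.1 * x
    return forwarded


stream = [10.0, 10.2, 9.9, 15.5, 10.1, 10.0, 3.2]
print(filter_stream(stream))  # only the outliers 15.5 and 3.2 are forwarded
```

Even this toy filter forwards two of seven readings, illustrating how edge-side triage can cut upstream bandwidth while still surfacing the data most worth remote analysis.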
IoT devices are a leading force in the growth of intelligent edge systems. Because of their potential to serve as both data collection and data processing tools, a number of different developments and applications of edge intelligence exist. Determining which approach best fits a given task is critical, but this question has seen little development to date. Thus, exploring all of its facets is necessary for any exploration into edge intelligence [20].
IoT also has the potential to make use of intelligent edge systems as they pertain to services, as Huang et al. describe [21]. The authors describe IoT and edge devices being used to process more timely requests from clients. These services make use of trends and commonalities as analyzed through the system to determine which processes are to be handled locally and which are needed on a more remote level.
As demonstrated, it becomes quite clear that contemporary approaches reliant on cloud computing are simply not effective in certain environments. IoT and other low-end computing power systems are incompatible with cloud architectures. Intelligent edge systems have been put forth as a solution to these pitfalls. Their scalability and potential as an effective means by which to adapt to modern trends indicates that they present an ideal solution to move beyond the cloud. As such, we now move into the survey of applications of edge intelligence in recent years to highlight how edge intelligence can be incorporated en masse.

3 The Survey

Edge intelligence applications are a steadily growing field of interest. The IoT has largely contributed to the amount of data being produced and processed as consumers replace traditional home appliances with smart appliances, which not only serve their original purpose but can also collect data. Decentralizing the computing infrastructure allows for more data to be collected, as sensors can be placed anywhere. Edge systems leverage their large number of sensors by using them to feed into different applications. Often these systems utilize machine learning models and large amounts of data to train those models. Endpoints on the edge largely serve two major functions: user interfaces and data collection. A purely data-collecting endpoint may be something like a security camera or thermometer, whereas a smart refrigerator may be an endpoint that both collects data and serves as a user interface into an application. Edge intelligence aims to leverage this massive data collection to better train machine learning models for a variety of purposes.
This section contains a critical discussion of intelligent edge computing systems. We begin by describing the potential use cases for edge intelligence systems in detail, discussing what the problem presented is and how edge intelligence can solve the problem. We also discuss situations in which edge intelligence systems have been used and the lessons learned from those implementations. The second section describes situations in which mathematical approaches to edge intelligence can be used to enhance or optimize any approach for networking, problem solving, or other similar circumstances. The final section describes problems in the field of edge intelligence and where future research is needed to best address these problems, potentially with approaches described in the second section.

3.1 Potential Edge Intelligence Applications

Explorations into the use of edge intelligence as a solution to problems previewed thus far have yielded several different results. This portion of the survey presents an in-depth review of such problems and how edge intelligence has worked to resolve them. This section discusses where edge intelligence has been explored as a solution to a problem and looks further into practical applications or other systems where edge intelligence has been used.
The intelligence aspect of edge intelligence focuses on the implementations and algorithms that edge systems currently use and possible future avenues to explore. Currently, edge intelligence is being used for multiple types of systems, including brain tumor diagnoses [22], Smart City applications [23], temperature prediction for agriculture [24], text recognition from images [25], and unmanned vehicle operations [26].
Zhou et al. describe the current state of affairs in regard to edge intelligence [27]. The authors describe developments in the history of edge intelligence and focus on areas in which growth of edge computing may provide a path forward in artificial intelligence domains.
Neto et al. [23] provide a theoretical framework for a way to create a “Smart City.” The smart city as a concept refers to an edge network with sensors to automate things such as traffic, traffic enforcement, parking, and facial recognition, as shown in Figure 5. They first identify that a smart city will need to take in massive amounts of data. This poses the first problem they aim to address with their Multilevel Information Distributed Processing Architecture (MELINDA), which theoretically breaks down huge data streams into objects of interest before processing. Data are sent to the processing facility where MELINDA filters raw video data (in their proposed application) into events of interest, frames of video that may be useful for the model. A second software component within this MELINDA architecture then identifies the object of interest within that frame of video. The data are given environmental context and sent to another processing node for decision-making. By sending the massive data streams through layers that cut them down into objects of interest, the data become manageable [23]. Figure 5 presents a potential application of smart cities/autonomous vehicles: optimizing traffic flow based on predicted behavior of vehicles.
Fig. 5.
Fig. 5. A potential application of Smart Cities/Autonomous Vehicles: optimizing traffic flow based on predicted behavior of vehicles.
Ahmed et al. use a federated learning system to perform deep learning on mobile devices using smaller local models to make predictions for the user, feeding the information from those tasks to a server. The server then takes the data and updates a larger model with the information, creating a more robust and precise network. The server is where the larger deep learning model is stored, whereas the endpoints have smaller, similar versions. This server then pushes out the updated weights for the models onto endpoints so that they may more accurately predict future tasks [28].
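The server-side aggregation step in such a federated scheme can be sketched as a weighted average of the clients' local model weights, in the style of federated averaging; the function name and toy data below are illustrative assumptions, not taken from [28].

```python
def federated_average(client_weights, client_sizes):
    """Aggregate per-client model weights into a global model.

    client_weights: list of weight vectors (one list of floats per client).
    client_sizes:   number of local training samples behind each update,
                    used to weight the average.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Two clients: one update backed by 1 local sample, one by 3.
clients = [[0.0, 1.0], [1.0, 0.0]]
sizes = [1, 3]
print(federated_average(clients, sizes))  # -> [0.75, 0.25]
```

The server would then push the averaged weights back out to the endpoints, so no raw user data ever leaves a device.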
Li et al. discuss the growth of machine learning as of late, with focus on deep neural networks and their growth in software applications [29]. Mobile applications, with limited storage and computing power, find it difficult to leverage these capabilities. As such, the authors suggest the implementation of an accessible edge intelligence interface with partitioning features to reduce latency. A prototype developed on lightweight hardware indicates a degree of success in such an approach.
Christensen et al. describe a system built on edge intelligence to create and deploy network resources automatically as needed by the network [30]. The system, called OpenEdge, involves three components that provide for network abstraction, authentication, and broadband termination. The system enables participants to decide which resources specifically are exploited and the time frame for doing so. The article suggests that edge intelligence can be incorporated into networks with minimal impact to traditional networking architectures, and can even serve to reduce costs through monitoring usage.
Maier et al. [31] discuss the potential for edge intelligence to assist in the growth of fiber-wireless networks. Present approaches to increasing the potential of optical networks focus more on increasing capacity of the network. Maier et al. suggest using edge computing networks to route requests through the network in a more efficient, orderly manner with the goal of reducing latency across the entire network.
Al-Rakhami et al. suggest using Docker, a virtualization software, to enable a more manageable, widely distributed set of tools to be distributed among different hardware devices and software applications in the edge computing environment [32]. They focus on the use of human activity identification to highlight the effectiveness of this approach, which uses support vector machines (SVM) to identify the behavior exhibited in these situations.
The capacity for edge networks to operate in a manner capable of running highly responsive and demanding processes is described by Dai et al. [33]. Their approach focuses on exploring the potential for edge computing systems to enhance these capabilities in the context of multi-user wireless networks. In these systems, processes or users can be remotely engaged from a multitude of devices or situations. The approach highlighted in this article reportedly provides a great deal of improvement over its contemporaries.
Ren et al. propose a generative-coding group evolution algorithm that shows potential for edge devices to become more prominent in the industrial environment [34]. A simple grouping strategy is proposed, which provides an almost-optimized solution in a small number of situations. Enhancing this approach or adapting it to generalized situations could provide significant help in addressing similar edge intelligence problems.
Combining the growth of both Blockchain and edge intelligence technologies, Doku and Rawat propose a solution to identify relevant datapoints when operating in a distributed machine learning environment [35]. A “Proof of Common Interest” metric is described that can be applied to a machine learning model through which decisions can be made on novel datapoints. The model can then be distributed among members of the network to enhance the rate at which decisions can be made among other components.
Meloni et al. discuss the potential for smart grids and their monitoring of several different smart devices to maintain an accurate representation of a network’s current state [36]. Through the use of an edge computing architecture, the authors suggest that the flexibility provided by edge computing allows for a more effective and efficient approach to be instantiated. Case studies are analyzed that provide a more thorough analysis of the situation.
Modern personal assistant software (such as Apple’s Siri) are operated largely on a cloud-based infrastructure. Kang et al. [37] highlight the potential performance enhancements that could be achieved by migrating similar services to an edge computing environment. The study put forth describes 8 intelligent applications, with results including reduced latency, reduced energy consumption, and higher data center throughput.
Chen et al. propose “green edge intelligence,” an enhanced approach to operating artificial intelligence algorithms in a mobile edge computing environment [38]. The approach highlights the effectiveness of using a cache, which could potentially be shared among edge nodes in an edge computing environment to reduce the workload across the network. The goal of the authors is for researchers to focus more efforts on developing potential applications of green edge intelligence.
The identification and understanding of smart-home activities is the focus of an article by Zhang et al. [39] on the potential use of open-source software in edge computing. The discussion looks for low-cost solutions to identify activities in smart homes equipped with IoT technology. Promising when compared with other approaches, preliminary results suggest that these approaches can be explored in more detail and applied in other contexts.
A text discussing the potential of mobile solutions devotes an entire chapter to analyzing the potential for IoT-enabled houses to provide a SmartLiving lifestyle. Rahman et al. discuss their primary concern, teaching automated systems the process of reasoning [40]. The authors strongly suggest that an implementation based on edge computing can significantly enable reasoning among the many multifaceted utilities in a SmartLiving environment.
A review of edge computing by Sittón-Candanedo et al. also proposes a modular, tiered method for handling edge-driven IoT devices [41]. Using Blockchain technology, their approach provides a greater degree of flexibility and security than other contemporary methods. In its implementation, the proposed solution lowers the costs necessary for data throughput.
Unmanned aerial vehicles (UAVs), more commonly known as drones, are another new development that could hinge on advancements in the edge computing industry. Allowing UAVs to communicate directly with one another or with a remote base station could enable a larger and safer variety of uses. Chung et al. describe an approach to using UAVs to monitor offshore wind farms [42]. Using UAVs to monitor wind conditions, the positioning of turbines can be optimized to increase power output by substantial margins.
In a manner similar to that of UAVs, autonomous, self-driving vehicles face a similar need to communicate amongst each other. Dai et al. suggest that enhanced artificial intelligence algorithms operating in these vehicles may be capable of addressing some of the current deficits [43]. By caching certain information at various points in the network, autonomous vehicles are able to make more effective decisions in a shorter amount of time.
The topics discussed in this section briefly highlight the great variety of domains in which edge intelligence can be implemented. The sheer breadth of its potential use cases and its improvements upon existing architectures suggest that further enhancements in edge intelligence should be explored. Making the technology more efficient or otherwise more useful should allow for more robust and effective implementation in a greater number of domains. Thus, improvements on the theoretical level must also be considered.

3.2 Theoretical Systems and Optimizations

In this portion of the survey, we focus on proposals that are more theoretical in nature, such as algorithms, system design, and other optimization practices. This section can be used as a springboard for developments into the subsequent section in which issues are presented. The goal is to promote ideas that can be used to solve the problems occurring in edge intelligence applications. Current edge systems suffer from large energy usage and inefficient or bulky algorithms. Intelligent networking aims to solve these problems to make edge systems usable and fast. Edge systems leverage many microservices, spinning up machines as needed and connecting to various different processing centers to use the data that endpoints have collected. Problems arise when these processing centers are busy or have a long queue from other endpoints. Zhou et al. propose an algorithm for an efficient way to send microservice requests to endpoints in order to optimize edge performance [44]. Before sending a request, this algorithm polls processing centers and receives information regarding response time, current system load, and more in order to more strategically decide which processing center to send data to [44]. Processing nodes may lie dormant when not in use, or endpoints may overload individual processing nodes with too many requests.
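As a rough sketch of this idea, a dispatcher can poll each processing center for its metrics and pick the one with the lowest estimated completion time; the scoring formula and field names here are illustrative assumptions, not the algorithm from [44].

```python
def pick_processing_center(centers):
    """Pick the center with the lowest estimated completion time.

    centers: list of dicts holding the metrics a poll might return:
      'rtt'   - round-trip network latency to the center (ms)
      'queue' - number of requests already waiting
      'rate'  - requests the center serves per ms
    """
    def estimated_time(c):
        # time in flight + time spent waiting behind the current queue
        return c['rtt'] + c['queue'] / c['rate']
    return min(centers, key=estimated_time)

centers = [
    {'name': 'A', 'rtt': 5.0,  'queue': 40, 'rate': 2.0},   # 5 + 20 = 25
    {'name': 'B', 'rtt': 20.0, 'queue': 2,  'rate': 1.0},   # 20 + 2 = 22
]
print(pick_processing_center(centers)['name'])  # -> B
```

Note that the nearer center A loses here: its short network path is outweighed by its long queue, which is precisely the tradeoff such polling is meant to capture.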
Al-Rakhami et al. explore the advantages of using a containerization technology such as Docker to create on-demand processing for the data produced by the edge system. Creating processing only as needed saves on resources and provides for edge systems to remain lightweight when not in use [32].
Wang et al. look into the possibility of using edge computing to optimize social virtual reality (VR) applications on all devices. They propose an optimization problem and model that encompasses the most pressing problems related to using edge technology for social VR. ITEM, the algorithm they use to solve this optimization problem, iteratively constructs graphs and uses well-known max-flow algorithms to solve smaller graph cut problems. ITEM outperforms other algorithms in most preliminary test scenarios [45].
Li and Lan discuss how to determine execution cost of edge computing and how to effectively assign tasks in order to minimize total execution cost. The authors model their problem as a multichoice game and implement the Shapley value, which is commonly used in similar cooperative game theory problems. The proposed solution shows promise by outperforming other similar solutions to this problem. The analysis of various simulations shows that their algorithm works well in heterogeneous edge networks but might fall short in other networks [46].
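To make the game-theoretic machinery concrete, the Shapley value assigns each player the average of its marginal contributions over all orders in which players could join the coalition. The toy two-player cost game below is hypothetical and not drawn from [46].

```python
from itertools import permutations

def shapley_values(players, cost):
    """Shapley value of each player in a cooperative cost game.

    cost: function mapping a frozenset of players to that coalition's
    cost. Averages each player's marginal cost over all join orders.
    """
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            values[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in values.items()}

# Hypothetical edge-task cost game: executing alone costs 6 per task,
# but sharing one edge node costs only 10 for both together.
def cost(coalition):
    return {frozenset(): 0, frozenset('a'): 6,
            frozenset('b'): 6, frozenset('ab'): 10}[coalition]

print(shapley_values(['a', 'b'], cost))  # -> {'a': 5.0, 'b': 5.0}
```

Because the game is symmetric, each task is charged 5, splitting the cooperation savings fairly; this is the kind of cost-sharing fairness that motivates Shapley-based task assignment.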
In an attempt to find a solution to the AI service placement problem in a multi-user mobile edge computing system, Lin et al. [47] proposed a mixed-integer linear programming problem that looks to minimize computation time and energy consumption of users. By deriving various expressions that represent the optimal allocation of resources while still maintaining low complexity, the authors were able to find a solution that uses various search-based algorithms to efficiently solve the linear program stated earlier. The algorithm can even be scaled up to be used on larger networks. The results of many simulations of running the algorithm lead them to believe that this solution performs exceptionally and outperforms many other algorithms.
Zhou et al. [48] examine a UAV-enabled wireless powered mobile edge computing system and a related power minimization problem. As this type of problem is difficult to solve outright, an alternative solution is proposed: an optimization algorithm based on sequential convex optimization. According to the results of the simulations, this approach is efficient and preferable to similar algorithms for this problem.
Wang et al. propose a power allocation algorithm, to be used with edge intelligence, that prioritizes maximizing learning instead of communication throughput. The proposed learning-centric power allocation (LCPA) algorithm involves allocating radio resources and makes use of an error classification model. This algorithm is different from many well-known algorithms and is shown to outperform other power allocation algorithms [49].
Huang et al. propose an edge intelligence framework for creating IoT applications. The main goal of this new framework is to streamline data analytics in a more reliable fashion. The authors accomplished this by creating annotation-based programming primitives that developers can use to build learning capabilities and implement a user activity recognition system on edge devices. Using their proposed edge intelligence framework will increase performance without sacrificing accuracy of activity recognition [50].
Mobile edge computing is another area for which many are looking to optimize performance. Bouet et al. look into optimizing resource space and capacity of MEC servers [51]. They formulate a mixed-integer linear programming problem that takes into account MEC server size, operation area, and number while still meeting all MEC goals. Their proposed solution to this problem is a graph-based algorithm that involves forming MEC clusters.
In the field of health care, Sodhro et al. wish to improve quality of service (QoS) using edge computing. The Window-based Rate Control Algorithm (w-RCA) is their proposed algorithm to optimize QoS [52]. When compared to a conventional battery smoothing algorithm (BSA) and a baseline using an MPEG-4 encoder, w-RCA is shown to outperform both when optimizing QoS in remote health care applications. It is also important to note that w-RCA achieves better results while using smaller buffer sizes than the other two approaches.
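The general principle behind window-based rate control can be illustrated with a simple sliding-window smoother that bounds the variability a bursty source (such as a medical video encoder) passes to the network. This is only a sketch of the windowing idea, not the w-RCA of [52].

```python
from collections import deque

def windowed_rate(samples, window):
    """Smooth an instantaneous bit-rate series with a sliding window.

    Returns, for each sample, the mean over the last `window` samples,
    flattening bursts so the transmitted rate stays near the average.
    """
    buf = deque(maxlen=window)
    smoothed = []
    for s in samples:
        buf.append(s)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

rates = [8, 0, 8, 0, 8, 0]          # bursty source, e.g. alternating frames
print(windowed_rate(rates, 2))      # -> [8.0, 4.0, 4.0, 4.0, 4.0, 4.0]
```

After the first sample, the smoothed series sits at the long-run mean of 4, illustrating how a window trades a small buffering delay for a far steadier rate.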
Edge computing also shows promise for improving mobile vehicles. Luo et al. investigated how the 5G-enabled vehicular ad hoc network (5G-VANET) and edge computing can be optimized for many vehicle applications. They propose a prefetching scheme to deal with rapidly changing data and employ common graph theory algorithms to efficiently handle the data. Overall, the proposed scheme efficiently optimizes the 5G-VANET [53].
Liu et al. study the problem of large-volume data dissemination that would be used for automated driving assisted by cellular networks. First, models are examined that take into account the high variability in vehicle mobility. The data dissemination problem is then formulated into an NP-hard optimization problem. The authors do not stop there; they then propose a low-complexity dynamic programming solution and run many simulations displaying the effectiveness of this solution [54].
Liu et al. propose a blockchain-based framework for video streaming on mobile edge computing devices [55]. The proposed framework uses an adaptive block size while also using two different offloading models. The authors also formulate an optimization problem that takes into account the adaptive block size, resource allocation, and offload scheduling. Simulations show that the proposed method solves the problem quite efficiently.
An et al. propose the use of edge intelligence to disperse Hypertext Transfer Protocol (HTTP) anomaly detection across multiple nodes rather than centralizing it on a single server [56]. The proposed framework efficiently and reliably identifies anomalies in HTTP traffic of the IoT. In addition to the framework, a data processing algorithm is proposed to reduce redundant HTTP data and to more effectively classify the anomalies. This framework and the proposed algorithms show an increase in accuracy and speed when detecting unknown anomalies.
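Redundancy reduction of this kind can be sketched by hashing each record on a few normalized fields and keeping only the first occurrence. The field set chosen here is an illustrative assumption, not the algorithm from [56].

```python
import hashlib

def deduplicate_requests(requests):
    """Drop redundant HTTP records before anomaly classification.

    Two records are treated as redundant if they agree on method,
    path, and status code; only the first of each group is kept.
    """
    seen = set()
    unique = []
    for req in requests:
        key = hashlib.sha256(
            '|'.join([req['method'], req['path'], str(req['status'])])
            .encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(req)
    return unique

logs = [
    {'method': 'GET',  'path': '/index', 'status': 200},
    {'method': 'GET',  'path': '/index', 'status': 200},   # redundant
    {'method': 'POST', 'path': '/login', 'status': 401},
]
print(len(deduplicate_requests(logs)))  # -> 2
```

Because each edge node only forwards unique records, the downstream classifier sees far less traffic without losing any distinct behavior to inspect.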
Lodhi et al. look at the current state of edge intelligence and note major flaws that need to be addressed in the development of these systems. They propose that edge systems need to address the computational gap, cost, latency, scalability, and security [57]. The computational gap refers to finding ways to make low-power edge endpoints capable of performing high-level machine learning tasks. Cost refers to the massive amount of infrastructure that must be created for an edge system to function: many endpoints must be deployed, data centers and processing facilities created, and algorithms, databases, and communication systems built to connect them all. The cost of creating this infrastructure is reflected in the complexity of the system and is not always a worthwhile investment for many implementations. Some applications require real-time responses from collected data. Latency refers to the time it takes for data to be collected, processed, and sent back to an endpoint for use; the goal is to minimize that time by keeping as much processing on endpoints as possible, since travel time between processing centers, databases, and endpoints can cause serious delays. Scalability aims to make edge systems that are comprehensive without being bloated or consuming network resources unnecessarily; the ideal edge system scales indefinitely as endpoints, processing facilities, and databases are added as needed. Security is a massive concern when discussing edge computing. The nature of the technology requires some data to be sent over networks. Ideally, endpoints would discard unnecessary data, clean useful data, and encrypt it before sending it to be processed. Data being transmitted must be secured, as it can be potentially sensitive information. However, edge systems are especially vulnerable due to the massive number of endpoints on the system.
Every endpoint becomes an attack surface; therefore, every endpoint must be meticulously designed for security and constantly updated to prevent attacks. This is both expensive and resource intensive. Figure 6 provides a summary of practical and theoretical benefits of edge intelligence to exploit big data.
Fig. 6.
Fig. 6. Summary of practical/theoretical benefits of edge intelligence to exploit big data.
Federated edge learning, first proposed by Konečný et al. in 2015, defines a system for edge computing in which edge endpoints can share a model between them without sharing local data, allowing for training to occur on nodes simultaneously and securely [58, 59]. Zhou et al. propose a system for IoT machine learning model sharing between endpoints [60]. While not explicitly federated edge learning, their proposed system mirrors it closely. They propose to alleviate security concerns by using encryption to hide model parameters between endpoints and granular access control systems to regulate who may access the edge system. Federated edge learning systems come with data quality challenges when model sharing [61]: one endpoint’s bad data can poison the performance of the model. Zhaohang et al. discuss the problem in greater detail, introducing the concepts of device heterogeneity and statistical heterogeneity [62]. Device heterogeneity recognizes that not every device performs in the same way: some edge endpoints may be running different hardware, some may have less energy available, some have poor network connections, and so on. All of these factors can affect their overall performance on the network. Zhaohang et al. propose an optimization algorithm in which a “staleness function” is implemented to identify nodes that are underperforming and have them process smaller jobs or take them off the network [62]. Statistical heterogeneity arises when a single system takes in data from multiple edge sensors whose contexts differ entirely. The example of statistical heterogeneity they use is an edge network deployed both in a school and in an office using video sensors, taking in children’s and adults’ data and running them through the same models when the features of these groups should be separated [62].
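One simple form such a staleness function might take discounts a client update by its age in global training rounds, so that slow or poorly connected devices fade from the aggregate rather than poisoning it. The polynomial discount below is an illustrative assumption, not the function proposed in [62].

```python
def staleness_weight(current_round, update_round, alpha=0.5):
    """Weight for a client update trained at `update_round` but
    arriving at global round `current_round`.

    Fresh updates get weight 1.0; stale ones decay polynomially as
    (1 + staleness) ** -alpha, so underperforming devices contribute
    less instead of dragging the shared model backward.
    """
    staleness = current_round - update_round
    return (1 + staleness) ** -alpha

print(staleness_weight(10, 10))  # fresh update:      1.0
print(staleness_weight(10, 7))   # three rounds late: 0.5
```

A server could multiply each incoming update by this weight during aggregation, or drop devices whose weight falls below a threshold, mirroring the idea of assigning smaller jobs to lagging nodes.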
When building an edge network, physical hardware is an important design decision made for the edge endpoints and processing nodes. Li et al. analyze multiple edge endpoint hardware configurations and calculate their performances [63]. They identify that physical distance is an outlying factor that will affect network latency: data have to travel from point A to point B, and longer distances from edge to processing node cause latency. Model performance is directly tied to the actual physical hardware that it is run on. Edge endpoints can run some ML models effectively, but it comes at the cost of energy and time. The highest-performing endpoints have cloud-based processing systems, which introduce both security risks and network latency.
This section shows that many areas can be studied and optimized by using edge computing. Even within systems that already use edge computing, these proposed algorithms and solutions can drastically improve the efficiency and reliability of edge networks and systems. Formulating new problems and optimizing different aspects of a given scenario allows for steady improvement in edge computing and the implementation of edge intelligence. One such optimization is the implementation of machine learning models into edge systems to leverage their strengths and give them more power. Lightweight machine learning is a technique that is especially useful in this area.

3.3 Machine Learning and Lightweight Machine Learning

Machine learning is one of the most prevalent technologies available. However, despite its broad capabilities, it faces major challenges. Let us look into them individually.

3.4 Challenges in Machine Learning

3.4.1 Data Collection.

Machine learning tasks require a large dataset in order to perform effectively. Sufficiently large datasets allow for these algorithms to perform powerful tasks such as clustering similar data or predicting future values from past precedent. However, data for data’s sake is pointless and may even be harmful to the operation of a predictor. Machine learning models require data that are relevant and useful to making predictions [64]. Data must be aggregated onto one system in order to be fed to a model, taking care to remove any bias that may exist in the data. Biased data fed into a model will produce biased results. When the volume of data is insufficient, an algorithm may need to wait for more data to be generated in order to create a proper model [64].

3.4.2 Data Quality.

The quality of data plays an important role in any machine learning task. These models are susceptible to small errors and inconsistencies in the data. Lack of data quality takes different forms, such as missing data, incomplete data, inconsistent data, and duplicate data [65]. Data must be cleaned and corrected before being fed into a machine learning model. If the data are introduced raw and untouched, the machine learning model’s performance may not be as powerful as anticipated. Therefore, data quality must be closely monitored.

3.4.3 Data Inefficiency.

Machine learning requires datasets to be sufficiently robust in order to draw conclusions. Incomplete datasets cause problems as there may not be enough information for the model to draw relationships between data. As machine learning deals with large datasets, a small error in the dataset may lead to a huge error in a system’s output. In this case, the model’s performance deteriorates and cannot be used for other tasks [66].

3.4.4 Investment Returns.

Every project costs something. Machine learning deals with large datasets and intense computations. This means that data will have to be collected, cleaned, and processed, all of which comes at some cost. Further, there is no way of knowing whether the model being built will produce meaningful results [67]. The usefulness of the model can be seen only after the processing is complete. Every machine learning investigation is a gamble, and sometimes the algorithms simply do not perform well enough to be worth implementing.

3.4.5 Neural Networks.

Neural networks must be engineered before they are trained. They take large amounts of time to learn and are expensive, requiring dedicated machines with powerful GPUs for processing [68]. The computational complexity of a neural network depends largely on two things: the size of the input data and the number of layers that the network uses to draw conclusions. Because of this, organizations must be cognizant of the cost of neural networks, when they are useful, and when they are too expensive to use [69].
The discussion of machine learning highlights many pitfalls in its use. It requires massive structural support, data collection and cleaning, and proper model selection with feature engineering, and it can be both monetarily and temporally expensive. Despite these pitfalls, machine learning is an incredibly powerful tool, and there are ways to optimize it. In the next section, we discuss a growing field of machine learning focused on streamlining the algorithms.

3.5 Lightweight Machine Learning

Lightweight machine learning algorithms aim to maximize the learning potential and power of machine learning algorithms while minimizing the resources needed to support those models. These models will allow for machine learning to be run on machines with less powerful hardware, and will allow for more experimental models to be developed [70].
On edge networks specifically, many of the machines we want to use for machine learning are inexpensive and physically small. As a result, their hardware is far less powerful than that of a fully equipped machine learning server. Edge systems have many devices with limited computational capacity, so they require machine learning models to be fast, memory efficient, and energy efficient.
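One common way to meet these constraints is post-training quantization, which maps 32-bit float weights to 8-bit integers and shrinks model storage roughly fourfold. The minimal symmetric scheme below is a sketch of the idea, not any particular framework's implementation.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map float weights onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.0, -1.0, 0.4, 1.0]            # float32 weights (4 bytes each)
quantized, scale = quantize_int8(weights)  # int8 weights (1 byte each)
print(quantized)                           # -> [0, -127, 51, 127]
recovered = dequantize(quantized, scale)   # close to, not equal to, weights
```

The recovered weights differ slightly from the originals; quantization trades a small amount of accuracy for much smaller, faster models, which is usually a worthwhile exchange on constrained edge hardware.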
Lightweight machine learning relies on compression techniques that make models smaller, faster, and more efficient. These models do not require supercomputers and do not take nearly as much time to run as regular machine learning models. Let us look into a few of the lightweight machine learning frameworks.

3.5.1 TensorFlow Lite.

TensorFlow Lite is a production-ready, cross-platform framework for deploying machine learning models on mobile devices and embedded systems. It can be deployed on Android, iOS, Linux, and other platforms used in edge computing. TensorFlow Lite is built to deploy machine learning models on edge devices. Its models feature low latency, allowing edge systems where endpoints may have poor network connectivity to utilize the system. TensorFlow Lite is built with privacy-preserving features, making the system secure while still being portable. It can be used for image, text, speech, audio, and various other content-generation systems. It is one of the leading machine learning mobile frameworks and is used by Google Photos, YouTube, Uber, Hike, and more [71].

3.5.2 Caffe2.

Caffe2 is a lightweight deep learning framework built for both production and research. It is a scalable, fast framework that enables real-time on-device machine learning. Currently, implementation is focused on mobile devices. It is a cross-platform framework with support for distributed systems. The framework allows for fast training of models, as models can be run on edge endpoints rather than in data processing facilities. Caffe2 is used by Facebook for machine vision, translation, speech, and ranking [72].

3.5.3 MXNet.

MXNet is an open-source deep learning framework used to define, train, and deploy neural networks. It is a scalable, efficient framework that can be easily debugged. MXNet comes packaged with prewritten optimization and supports multiple languages, including C++, Python, R, Java, Julia, JavaScript, Scala, Go, and Perl. As it supports multiple languages, it reduces the necessity to adopt new languages. MXNet models are memory efficient and, therefore, very portable. Edge systems can use this framework to train models and then deploy them on edge devices. The framework is scalable to multiple machines and multiple GPUs. MXNet was developed to handle large-scale machine learning model creation and aims to lighten the load of deploying and training models across systems. MXNet comes with tool kits for computer vision, natural language processing, and time series. MXNet includes MXBoard, a visualization interface that allows developers to monitor how a model is training, how losses are converging, and how gradients are getting updated. It also comes with TVM, a deep learning compiler stack that allows developers to optimize models for inference machines using full-scale GPUs, edge devices, and IoT devices [73].

3.5.4 Lasagne.

Lasagne is an open-source lightweight library used for building and training neural networks in Theano. The Lasagne library supports both convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, along with combinations of the two. It allows multiple input-output structures for training models. Lasagne is designed to be easy to use and understand. Used mostly for research, this library implements the most commonly used layer types. Layer declaration is simple and straightforward, and it is easy to change or extend a model architecture [74].

3.6 Concerns and Issues with Edge Intelligence

This final subsection describes the many problems and challenges that have been encountered within edge intelligent systems. In this section, we discuss the challenges that have limited the growth of edge intelligence in regard to domain, implementation specification, and other areas. The objective here is to address these issues using the information for improvement presented in the previous subsection.
Highlighting the increasing use of IoT-enabled devices, Plastiras et al. provide an exceptional overview of some of the growing challenges in edge computing [75]. Some specific items they point to include the high computational requirements of decision-making tasks and algorithms in these devices. The efficiency and enhancement approaches described in the previous section may be able to improve upon these areas. Additionally, the large number of devices involved in edge networks, and potential for a large number of tasks to be performed at once, require a serious amount of consideration for optimization. Efficiently handling numerous disparate devices is likely a critical component to enhance in any edge computing system.
One particular instance that could be analyzed in more depth is the success of edge intelligence systems implemented with user-driven recommendation systems. Su et al. [76] focus on information to provide tourist experiences to users from other locations. Improvements in this area could focus on applying more efficient algorithms in systems with sparse datasets or finding more unique information to use with users who have otherwise dissimilar data profiles.
Efficiency of tasks in edge intelligence systems is a large concern in many areas. Jia et al. [77] describe some of the challenges in working with extracting unique user-derived vocal profiles from auditory inputs (such as those provided to automatic phone systems) to more accurately and effectively respond to the client. The approach used in this study seems to have provided a great deal of success in making these enhancements operational in an edge-based system. The results of their experiment are highly accurate and generated in extremely short times. Thus, any study working to improve upon this would need to yield vastly higher accuracy in a comparable or better time frame.
Roman et al. provide a unique outlook on the potential security and privacy implications produced by the growth of edge intelligence systems [78]. As edge intelligence grows and has the potential to move into more domains, so too does the potential for its malicious or unintended uses. As such, it is important to evaluate IoT systems in a manner similar to that of Roman et al. in an effort to reduce the potential for indirect harm to users or clients.
Huh and Seo have produced a survey focusing on the challenges that 5G networks present to edge intelligence [79]. While many systems can and do achieve efficiency in these regards, the nature of mobile networks and mobile edge computing needs to be addressed more fully in order to achieve maximum efficiency. In some cases, it might prove useful for mobile edge computing to have a buffer system in place to store data temporarily when data are being produced while the network is unavailable.
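To make the buffering suggestion concrete, the sketch below implements a minimal store-and-forward buffer. The class, its capacity policy (drop oldest when full), and the send interface are our own illustrative inventions, not drawn from the surveyed work:

```python
from collections import deque

class StoreAndForwardBuffer:
    """Hold readings while the network is down; flush when it returns.

    Hypothetical sketch: the names and the drop-oldest capacity policy are
    illustrative assumptions, not taken from any surveyed system.
    """

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)  # oldest entries drop when full

    def record(self, reading):
        # Always buffer locally; transmission is decoupled from production.
        self.buffer.append(reading)

    def flush(self, send):
        """Attempt to transmit buffered readings via `send(reading) -> bool`."""
        sent = 0
        while self.buffer:
            if not send(self.buffer[0]):   # network still unavailable
                break
            self.buffer.popleft()
            sent += 1
        return sent

buf = StoreAndForwardBuffer(capacity=3)
for r in [1, 2, 3, 4]:                 # capacity 3: the oldest reading drops
    buf.record(r)
delivered = buf.flush(lambda r: True)  # network back up: everything sends
print(delivered, list(buf.buffer))     # -> 3 []
```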
Yee et al. focus on another prominent privacy risk that edge intelligence implementations will likely have to address, that of facial recognition and its potential implications [80]. Facial recognition enables a company or application to accurately and securely identify users but with the potential for serious impact on the privacy of these users. Beyond privacy concerns, there are also technical and efficiency concerns in implementing facial recognition algorithms. While Yee et al. touch on the latter, the former remains a concern that would need to be expanded upon.
The necessity of responsive, effective solutions in emergency situations is described by Wang et al. [81]. Their analysis of Mission Cognitive Wireless Emergency Networks (MCWENs) highlights how edge networks can effectively and efficiently support these critical applications. While this article provides a detailed analysis and some application examples, the assured efficiency of these applications and their ability to support and enable these systems should be further researched for the benefit of the general public in emergency situations.
As described in Section 3.1, the automotive industry, with specific focus on autonomous vehicles, is a major field in which intelligent edge systems may operate. Pan et al. describe an approach to making these systems more efficient [82]. Their approach and the simulations conducted indicate a comparable error rate with traditional methods, which is a great achievement. Practical applications, however, need to account for other unpredictable hazards in real-world environments, such as weather, obstructions, pedestrians, or other external interference that can be difficult to capture in a simulation.
Gill et al. provide a thorough analysis of the many instances in which current cloud computing paradigms operate [83]. The authors focus on how Blockchain, the IoT, and AI could potentially operate in the cloud environment. While edge intelligence is not the primary focus, the information presented and concerns expressed are all still valid when viewed from the intelligent edge perspective.
The general sentiment expressed suggests that edge computing architectures can be and often are successful at replicating the efforts of traditional cloud systems. By focusing on applying more efficient algorithms to these systems, we can more accurately identify situations in which a greater operational efficiency can work towards a more productive, safe, and effective computing system. As such, applying algorithms and concepts detailed in the previous subsection to the problems and challenges detailed in this subsection can serve as a greater and more effective solution for organizations and businesses alike. The discussion of this notion continues in the following section.
Table 1 summarizes the sets of potential edge intelligence applications through the literature and outlines the period of publications. The table is divided into sections corresponding with those of the survey itself, Potential Edge Intelligence Applications, Theoretical Systems and Optimizations, and Concerns and Issues with Edge Intelligence.
| Framework | Developed By | Languages | Platforms |
|---|---|---|---|
| TensorFlow Lite | Google | Python, C++, CUDA | Linux, Mac OS, Windows, Android |
| Caffe2 | Berkeley Vision and Learning Center | C++ | Linux, Mac OS, Windows |
| MXNet | Apache | C++, Python, R, Java, Julia, JavaScript, Scala, Go, Perl | Linux, Mac OS, Windows, Cloud |
| Lasagne | Montreal Institute for Learning Algorithms | Python | Linux, Mac OS, Windows, Android |

Table 1. Lightweight Frameworks

| Application | Reference IDs | Year |
|---|---|---|
| Potential Edge Intelligence Applications | Zhou et al. [27], Li et al. [29], Christensen et al. [30], Neto et al. [23], Ahmed et al. [28], Li et al. [29], Maier and Ebrahimzadeh [31], Al-Rakhami et al. [32], Dai et al. [33], Ren et al. [34], Doku and Rawat [35], Meloni et al. [36], Kang et al. [37], Chen et al. [38], Zhang et al. [39], Rahman et al. [40], Sittón-Candanedo et al. [41], Chung et al. [42], Dai et al. [43] | 2017–2020 |
| Theoretical Systems and Optimizations | Wang et al. [45], Li et al. [29], Lin et al. [47], Zehong et al. [47], Zhou et al. [48], Wang et al. [49], Huang et al. [50], Bouet et al. [2018], Sodhro et al. [52], Luo et al. [53], Liu et al. [55], An et al. [2], Lodhi et al. [57], Konečný et al. [58], Zhou et al. [27], Li et al. [63] | 2018–2020 |
| Concerns and Issues with Edge Intelligence | Plastiras et al. [75], Su et al. [76], Jia et al. [77], Roman et al. [78], Huh et al. [79], Yee et al. [80], Wang et al. [49], Pan et al. [82], Gill et al. [83] | 2013–2020 |

3.7 Future Edge Intelligence Applications

A number of other practical functions for edge intelligence are currently being implemented across several of the industries mentioned previously. Some of these implementations are described in this section.
ADEPOS [84] is a machine learning algorithm for edge computing. The algorithm combines the Extreme Learning Machine (ELM) framework with boundary methods to detect anomalies in hardware at the edge layer. Their application was geared towards machine health to reduce downtime and maintenance costs by using IoT sensors to monitor machinery in its new state and be able to detect an anomaly that indicates the machinery requires maintenance.
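To make the ELM component concrete, the following minimal sketch fits a single random-hidden-layer network with a closed-form least-squares solve. It illustrates the ELM principle only; the network sizes and data are invented, and this is not the ADEPOS algorithm itself, which couples ELM with boundary methods.

```python
import numpy as np

# Minimal Extreme Learning Machine sketch: a single hidden layer with random,
# untrained input weights; only the output weights are fit, via least squares.
rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Fit a simple nonlinear function as a sanity check.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
model = elm_fit(X, y)
err = np.mean((elm_predict(model, X) - y) ** 2)
print(f"training MSE = {err:.2e}")
```

Because training reduces to a single least-squares solve, ELMs are attractive on low-power edge hardware where iterative backpropagation would be too costly.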
The authors of [85] discuss the use of machine learning models to detect security anomalies in an IoT network. Their model efficiently reconstructs inputs that resemble normal network traffic and poorly reconstructs input attacks or anomalous inputs. They use this reconstruction error as a classifier to identify normal versus abnormal network traffic.
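The reconstruction-error classifier described above can be sketched with a linear (PCA-style) autoencoder; the linear model, data dimensions, and threshold rule below are our illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" traffic: samples concentrated near a 2-D subspace of 10-D space.
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

# Fit a linear autoencoder (top principal components) on normal data only.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:2]  # encoder/decoder: project onto the top-2 subspace

def reconstruction_error(x):
    z = (x - mean) @ components.T          # encode
    x_hat = z @ components + mean          # decode
    return float(np.sum((x - x_hat) ** 2))

threshold = max(reconstruction_error(x) for x in normal)  # calibrated on normal data

anomaly = rng.normal(size=10) * 5          # off-subspace input reconstructs poorly
print(reconstruction_error(normal[0]) <= threshold,
      reconstruction_error(anomaly) > threshold)  # -> True True
```

The key property is the same one the authors exploit: the model is trained only on normal traffic, so anomalous inputs are exactly those it fails to reconstruct.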
The authors of [86] present a series of techniques for mapping and adapting machine learning workloads to Field Programmable Gate Arrays (FPGAs). FPGAs are limited in their capabilities due to memory and energy constraints, but they are relatively inexpensive devices that are a good match for IoT usage. The authors present techniques for utilizing Deep Neural Networks with these resource-limited FPGAs, using pruning and quantization to reduce the size of the DNN and improve resource utilization.
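The pruning and quantization steps can be illustrated generically; the toy numpy sketch below compresses one dense layer by magnitude pruning followed by linear 8-bit quantization, and is not the authors' FPGA tool flow.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(64, 64)).astype(np.float32)  # one dense DNN layer

# 1. Magnitude pruning: zero out the 80% of weights with the smallest |w|.
threshold = np.quantile(np.abs(weights), 0.8)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# 2. Linear 8-bit quantization of the surviving weights: store int8 values
#    plus a single floating-point scale factor.
scale = np.abs(pruned).max() / 127.0
q = np.round(pruned / scale).astype(np.int8)
dequantized = q.astype(np.float32) * scale        # recovered at inference time

sparsity = float((pruned == 0).mean())
max_err = float(np.abs(dequantized - pruned).max())
print(f"sparsity={sparsity:.2f}, max dequantization error={max_err:.4f}")
```

Sparse int8 storage of this kind maps well onto the limited on-chip memory of an FPGA, at the cost of a bounded per-weight quantization error of at most half the scale factor.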
The authors of [87] discuss the heavy burden on processing power required for data preprocessing and training in machine learning. The authors rightly note that IoT devices are less likely to have the computing resources needed to perform machine learning tasks. In order to meet this need, the authors introduce Transparent Learning (TL). TL is a framework that moves training tasks away from the IoT devices out to an edge device or edge servers. Since data are being sent to an edge node or server for computing, an efficient cache system must be utilized to optimize performance. TL does just this, and the results show that it decreases the time cost and accuracy rate as compared with the TensorFlow framework.
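Since TL's performance hinges on its cache, the following minimal least-recently-used cache (an illustrative stand-in, not TL's actual implementation) shows how repeated requests for the same preprocessed batch avoid recomputation:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny least-recently-used cache: a stand-in for the kind of edge-side
    cache TL relies on, not the framework's actual implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get_or_compute(self, key, compute):
        if key in self.store:
            self.store.move_to_end(key)       # mark as recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1
        value = compute(key)                  # e.g., preprocess a data batch
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used
        return value

cache = LRUCache(capacity=2)
preprocess = lambda batch_id: batch_id * 10   # stand-in for costly preprocessing
for batch in [1, 2, 1, 3, 1]:                 # batch 2 is evicted when 3 arrives
    cache.get_or_compute(batch, preprocess)
print(cache.hits, cache.misses)  # -> 2 3
```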
The author of [88] elaborates on some of the methodologies behind edge computing when working with data streams – in which information is pushed constantly from the bottom rather than pulled selectively by the top. A practical system is highlighted, which takes information from the pipeline and simultaneously sends it to a real-time view and develops it into training data.
The authors of [89] propose a framework for deploying DNNs on resource-limited hardware called Edgent. Edgent utilizes adaptive partitioning of DNN resources between the device and the edge server to cover the shortcomings of either individually. The other specialized approach is right-sizing via early exit, which allows for a reduction in latency by allowing output to be pulled when certain intermediate nodes are reached.
The emerging federated learning and its capacity to support large-scale model learning without the need to move the data into a centralized data center is now the driving force for future applications of edge intelligence in transportation, smart cities, and other IoT-based applications and services [90].
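The core aggregation step of federated learning (federated averaging) can be sketched in a few lines; the linear model, synthetic client data, and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])

# Each client holds private data; only model updates ever leave the device.
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    clients.append((X, y))

w = np.zeros(2)                      # global model
for _round in range(50):
    updates = []
    for X, y in clients:
        local = w.copy()
        for _ in range(5):           # a few local gradient steps per round
            grad = 2 * X.T @ (X @ local - y) / len(y)
            local -= 0.05 * grad
        updates.append(local)
    w = np.mean(updates, axis=0)     # server averages client models (FedAvg)

print(np.round(w, 1))
```

The global model converges to the underlying parameters even though no client data were ever centralized, which is precisely the property that makes federated learning attractive for transportation and smart city deployments.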

4 Findings, Analysis, Thoughts, Future Directions, Suggestions and Advice for Future Research

In this section, we analyze the findings generated from the survey in more detail. We have decided to divide this section into three parts. The first portion discusses the individual domains in which edge intelligence can be used. The second subsection details how more efficient algorithms should be used to more effectively and efficiently manage intelligent edge systems. The last subsection discusses potential privacy concerns as they pertain to edge intelligence. This section provides a springboard for future research in the field to improve intelligent edge systems, focusing on how and where to apply the information that was presented in the survey prior.

4.1 Expanding Edge Intelligence in New and Existing Domains

As discussed in Section 3.1, there are a variety of different systems currently using edge intelligence. In this section, we take a further look at each of these to identify where additional improvements can be made. Our hope is to inspire additional research in these domains. Likewise, we also more thoroughly identify domains in which novel research could be performed with regard to edge intelligence.
As reiterated throughout this article, there are many open issues in current edge intelligence implementations across the board. The biggest focus in edge intelligence is optimizing approaches with respect to the limited storage and computational power of different mobile or edge network devices. Topics pertaining to this are discussed in the following section, and the remainder of this section discusses other concerns that may be more domain specific.
Many edge networks utilize the architecture to effectively and efficiently distribute relevant information between systems. Current approaches do so in one major way: identifying which data points are the most important to transfer from one point on the network to another, for example, determining whether a given data point is more or less useful for observing general trends in the system under study.
Another prominent area for improvement is the distribution of edge systems. Generally speaking, edge systems require a large number of devices to be interacting with one another. As such, it can become very complicated in situations with many devices interacting or where devices may be entering and leaving a network frequently, such as with cell phones and wireless hotspots. To address these needs, it is critical to ensure that the edge devices can be connected and disconnected from the network quickly and effortlessly, and that new devices can be incorporated into the network with similar ease. Determining how to package these drivers and other software products in order to facilitate this is another critical area for edge intelligence research.
The potential to monitor metrics on edge networks is also quite useful. Any improvements in this regard would provide great benefit overall. Specifically, if a framework were created to enable rapid deployment and access to edge network usage and operational metrics, not only would there be a great movement forward in the community, but this would also greatly expand the potential for the efficiency of other enhancements to be monitored in these systems.
Moving on from more general approaches, there are also a number of specific instances in which enhancements in edge systems could be useful in bettering these systems, autonomous vehicles being one application with great potential. Enhancing the speed at which these systems can communicate with one another and better equipping them to handle the numerous and ever-changing hazards of a given roadway situation is crucial to the safety of human passengers in autonomous vehicles and people outside these vehicles in their path.
Autonomous unmanned aerial vehicles can benefit from enhanced edge systems as well by operating cooperatively to reduce the potential for collisions, engage in synchronized motion, and more. The number of conflicting variables when operating in three-dimensional space necessitates that these operational decisions be made quickly and efficiently. Advancements that improve the response times of these devices increase the potential for higher, much safer utilization of these devices.
By identifying specific areas in which research can be furthered in prominent domains, we believe that improvements in these areas can and will jumpstart large growth in the applications of edge intelligence. Any movement that greatly increases the use of edge intelligence will increase public awareness of the technology and identify other opportunities for improvement. Trial and error testing in small, well-regulated environments will be necessary to ensure safe operations before any large and widespread adoption of the technology occurs.

4.2 How Algorithmic Improvements Enhance Edge Systems

There are many proposed algorithms and solutions to the multitude of problems involving edge systems. As edge technologies continue to be implemented, even more problems will arise. Whether it is optimizing resource allocation within an edge network or improving the quality of service of an application, algorithms are the backbone of great edge system implementations. By continuously studying and developing new algorithmic solutions to problems, edge technologies will continue to grow and improve.
Many of the proposed algorithms and problems within the realm of edge computing are built off of already known algorithms used to solve problems unrelated to edge systems. This is one of the major reasons that studying these problems and developing new algorithms is important. Even if an algorithm can be used only in a very specific situation, others may use that algorithm as a basis for finding the solution to a different problem.
One of the biggest fields of study that algorithmic solutions come from is graph theory. Many scenarios within edge networks significantly resemble graph-like problems that could be approached with a method such as the graph-clustering solution mentioned in Section 3.2. Because of this, many choose to implement parts of if not full algorithms that are well known to be very efficient and reliable. Although not every problem can be solved using these graph-based methods, they are a great starting point.
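As a concrete instance of this graph framing, the sketch below groups edge nodes into clusters by computing connected components over a list of links; the topology is invented for illustration:

```python
from collections import defaultdict

def connected_components(links):
    """Group nodes into clusters: nodes reachable from one another share a cluster."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, cluster = [start], set()
        while stack:                      # iterative depth-first search
            node = stack.pop()
            if node in cluster:
                continue
            cluster.add(node)
            stack.extend(graph[node] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

# Invented topology: two isolated groups of edge devices behind two gateways.
links = [("cam1", "gw1"), ("cam2", "gw1"), ("sensor1", "gw2"), ("sensor2", "gw2")]
clusters = connected_components(links)
print(sorted(sorted(c) for c in clusters))
```

Decomposing a network this way is often the first step before applying heavier graph-clustering or task-placement algorithms to each cluster independently.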
Edge networks are constantly handling data. In some cases, such as mobile edge devices, the data points and nodes within the network are rapidly changing. Continuously changing data require efficient and reliable data processing through new algorithms that optimize data analysis. Even if an algorithm improves only part of the process, it still provides a starting point and shows that the final outcome can be improved.
Data analysis is not the only aspect of edge systems that benefits from algorithmic improvements. Resource use is another major aspect of edge systems that many are attempting to optimize. Whether it is minimizing block-size in a blockchain-enabled edge framework, controlling buffer size, or optimizing resource allocation within a network, all of these operations are improved by the use of better algorithms. Each different application has variables and resources that need to be taken into account. Thus, a new algorithm is needed to handle different situations.
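One simple shape such an allocation algorithm can take is a greedy heuristic that places tasks in order of priority per unit of resource demanded; the tasks, capacities, and tie-breaking rule below are invented for illustration, and the method is a heuristic rather than an optimal allocator.

```python
# Greedy resource allocation sketch: assign tasks to edge nodes in order of
# priority per unit of CPU demanded, while respecting node capacity.

tasks = [  # (name, cpu_demand, priority) -- invented values
    ("video_infer", 4, 9),
    ("log_upload", 1, 2),
    ("sensor_agg", 2, 6),
    ("model_sync", 3, 4),
]
capacity = {"node_a": 5, "node_b": 3}

assignment = {}
for name, cpu, priority in sorted(tasks, key=lambda t: t[2] / t[1], reverse=True):
    # Place the task on the node with the most free capacity that still fits it.
    candidates = [n for n, free in capacity.items() if free >= cpu]
    if candidates:
        node = max(candidates, key=capacity.get)
        capacity[node] -= cpu
        assignment[name] = node

print(assignment, capacity)
```

Note that the highest-priority task (`video_infer`) goes unplaced once smaller tasks fragment the capacity, which is exactly the kind of shortcoming that motivates better allocation algorithms for each new application's mix of variables and resources.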
Edge computing and edge intelligence can only go so far on improvements to edge infrastructure and hardware alone. Edge systems will continue to evolve by recognizing problems, formulating solutions, and optimizing aspects of systems. Optimized quality of service, efficiency, and reliability are all outcomes of the many algorithmic improvements made within the domain of edge technologies. Continuing to propose new algorithms and improving upon old algorithms is one of the major keys to the continued growth of edge computing.

4.3 Privacy Concerns and Managing User Security in Edge Environments

As with any modern technological change, information privacy is a serious concern. Edge networks need to appropriately handle information derived from intelligent edge systems. Whether the data being transferred are entered by users or generated from other information that users provide, the potential for this information to be transmitted to inappropriate parties exists and, as such, needs to be handled carefully. Regardless of the type of information transferred, the security of the edge network needs to be ensured. As edge networks often rely on devices frequently joining and leaving the network, it is imperative that the authenticity and veracity of any new devices are confirmed before being fully incorporated into the network. This approach could take on any number of forms, one of which being simply a limited sub-network that new devices are localized into before accessing the primary network.
The security of each element of the network needs to be ensured as well. If one device has lax security practices, it jeopardizes the security of the whole network. Devices should be routinely examined, both physically and virtually, to ensure their integrity. When inconsistencies are identified, there should be a standard operational practice that reduces or entirely removes the edge network’s dependency on the device. This would necessarily have to be done without significant impact on the performance of the task that the edge network performs. Information propagated through edge networks also needs to be encrypted so that users with access to an individual component of the network do not have access to elements they should not be viewing. The interconnectedness of an edge network increases dependency on each component and, as such, reliance on those who have access to the system. A malicious agent at any one point should not be able to interfere with the entire operation, and encryption schemes are only one such approach.
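As one concrete building block for the integrity requirements above, messages relayed between edge nodes can carry a message authentication code so that tampering at an intermediate hop is detectable. The sketch uses Python's standard library; the single shared key is a simplifying assumption (real deployments would provision per-device keys):

```python
import hmac
import hashlib

# Sketch: the two endpoints share a secret key (a simplifying assumption).
shared_key = b"edge-network-demo-key"

def seal(payload):
    """Attach an HMAC-SHA256 tag so any in-transit modification is detectable."""
    tag = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

payload, tag = seal(b'{"sensor": "cam1", "reading": 42}')
tampered = payload.replace(b"42", b"99")       # a malicious intermediate hop

print(verify(payload, tag), verify(tampered, tag))  # -> True False
```

Authentication codes address integrity only; confidentiality of the payload still requires encryption, and key distribution across devices that frequently join and leave the network remains the hard part.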
The impacts of outside influences also need to be addressed. Should an interfering signal be transmitted to a network, there should be minimal impact on the performance of the operation at hand and zero negative impact on the outcome of the operation. Likewise, there should not be any additional transmissions beyond those necessary. Information should stay within the network until it reaches its intended destination — no duplication to outside parties should be permitted.
While privacy is not the main concern of this article, it is still crucial to an operational system. A system can be useful only if it is trustworthy; keeping the privacy of users is paramount in keeping their trust. With edge networks requiring a great deal of remote interactions with disparate units, maintaining privacy becomes crucial to maintaining a functional system.

5 Conclusion

In this survey, the growth of artificial intelligence as a tool within intelligent edge systems is investigated. The history of edge intelligence and its growth as a replacement for cloud computing architectures is explored, and the potential use cases for edge systems and their capacity to outperform cloud architectures are described in detail. The survey engages with different research tasks pertaining to edge intelligence applications: we highlight different situations in which edge intelligence is used, show how artificial intelligence and algorithmic efficiency concerns can be used to improve upon edge systems, and explore specific instances in which edge systems and edge intelligence can be improved.
The division of the survey is intended to be a fluid discussion indicating where, how, and why improvements to edge systems can be implemented. By describing present implementations, we shed light on the current state of affairs and how edge systems are currently operating, noting any deficiencies. The second section presents algorithms that these approaches traditionally use, how they work, and opportunities to improve on the performance of these algorithms in different edge applications. In the third section, detailing the shortcomings of edge systems, we hope that the previous two subsections provide a way forward toward addressing these issues. We further evaluate present systems in the subsequent section, identifying more generally domains that could be improved upon and reasons for doing so. We also touch on how and why algorithmic improvements can occur in these systems. Finally, we discuss privacy, a critical component of any system but even more so in an edge-based network. In closing, we hope the reader keeps in mind the necessity of security in all systems, especially those in which information is distributed and recompiled between numerous devices.
There is no financial/personal interest or belief on the part of the authors that could affect the objectivity of the submitted research results. No conflicts of interest exist.

References

[1]
Samuel Tweneboah-Koduah, Barbara Endicott-Popovsky, and Anthony Tsetse. 2014. Barriers to government cloud adoption. International Journal of Managing Information Technology 6 (2014), 1–16. DOI:
[2]
Venkatraman Balasubramanian, Safa Otoum, Moayad Aloqaily, Ismaeel Al Ridhawi, and Yaser Jararweh. 2020. Low-latency vehicular edge: A vehicular infrastructure model for 5G. Simulation Modelling Practice and Theory 98 (2020), 101968.
[3]
Haitham Al-Obiedollah, Haythem Bany Salameh, Sharief Abdel-Razeq, Ali Hayajneh, Kanapathippillai Cumanan, and Yaser Jararweh. 2022. Energy-efficient opportunistic multi-carrier NOMA-based resource allocation for beyond 5G (B5G) networks. Simulation Modelling Practice and Theory 116 (2022), 102452.
[4]
Jihong Park, Sumudu Samarakoon, Mehdi Bennis, and Mérouane Debbah. 2019. Wireless network intelligence at the edge. (2019). arxiv:cs.IT/1812.02858
[5]
Shuiguang Deng, Hailiang Zhao, Weijia Fang, Jianwei Yin, Schahram Dustdar, and Albert Y. Zomaya. 2020. Edge intelligence: The confluence of edge computing and artificial intelligence. IEEE Internet of Things Journal 7, 8 (2020), 7457–7469.
[6]
Syreen Banabilah, Moayad Aloqaily, Eitaa Alsayed, Nida Malik, and Yaser Jararweh. 2022. Federated learning review: Fundamentals, enabling technologies, and future applications. Information Processing & Management 59, 6 (2022), 103061.
[7]
N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie. 2018. Mobile edge computing: A survey. IEEE Internet of Things Journal 5, 1 (2018), 450–465. DOI:
[8]
Yaser Jararweh, Mohammad Alsmirat, Mahmoud Al-Ayyoub, Elhadj Benkhelifa, Ala’ Darabseh, Brij Gupta, and Ahmad Doulat. 2017. Software-defined system support for enabling ubiquitous mobile edge computing. Comput. J. 60, 10 (2017), 1443–1457.
[9]
Q. Qi and F. Tao. 2019. A smart manufacturing service system based on edge computing, fog computing, and cloud computing. IEEE Access 7 (2019), 86769–86777. DOI:
[10]
Priya Suresh, J. Vijay Daniel, Velusamy Parthasarathy, and R. H. Aswathy. 2014. A state of the art review on the Internet of Things (IoT) history, technology and fields of deployment. In 2014 International Conference on Science Engineering and Management Research (ICSEMR’14). IEEE, Chennai, 1–8.
[11]
Mohammed Mehedi Hassan, Md Zia Uddin, Amr Mohamed, and Ahmad Almogren. 2018. A robust human activity recognition system using smartphone sensors and deep learning. Future Generation Computer Systems 81 (2018), 307–313.
[12]
L. L. Dhirani, T. Newe, and S. Nizamani. 2018. Can IoT escape cloud QoS and cost pitfalls. In 12th International Conference on Sensing Technology (ICST’18). 65–70. DOI:
[13]
Anca Apostu, Florina Puican, Geanina Ularu, George Suciu, Gyorgy Todoran, et al. 2013. Study on advantages and disadvantages of cloud computing–the advantages of telemetry applications in the cloud. Recent Advances in Applied Computer Science and Digital Services 118 (2013).
[14]
Fei Hu, Meikang Qiu, Jiayin Li, Travis Grant, Drew Taylor, Seth McCaleb, Lee Butler, and Richard Hamner. 2011. A review on cloud computing: Design challenges in architecture and security. Journal of Computing and Information Technology 19, 1 (2011), 25–55.
[15]
Adil Adeel, Mazhar Ali, Abdul Nasir Khan, Tauqeer Khalid, Faisal Rehman, Yaser Jararweh, and Junaid Shuja. 2022. A multi-attack resilient lightweight IoT authentication scheme. Transactions on Emerging Telecommunications Technologies 33, 3 (2022), e3676.
[16]
S. Deng, H. Zhao, W. Fang, J. Yin, S. Dustdar, and A. Y. Zomaya. 2020. Edge intelligence: The confluence of edge computing and artificial intelligence. IEEE Internet of Things Journal 7, 8 (2020), 7457–7469. DOI:
[17]
Y. Sahni, J. Cao, S. Zhang, and L. Yang. 2017. Edge mesh: A new paradigm to enable distributed intelligence in Internet of Things. IEEE Access 5 (2017), 16441–16458. DOI:
[18]
Q. Yuan, H. Zhou, J. Li, Z. Liu, F. Yang, and X. S. Shen. 2018. Toward efficient content delivery for automated driving services: An edge computing solution. IEEE Network 32, 1 (2018), 80–86. DOI:
[19]
K. Kolomvatsos and C. Anagnostopoulos. 2018. In-network decision making intelligence for task allocation in edge computing. In 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI’18). Volos, 655–662. DOI:
[20]
Xianwei Li, Liang Zhao, Keping Yu, Moayad Aloqaily, and Yaser Jararweh. 2021. A cooperative resource allocation model for IoT applications in mobile edge computing. Computer Communications 173 (2021), 183–191.
[21]
Z. Huang, K. Lin, and C. Shih. 2016. Supporting edge intelligence in service-oriented smart IoT applications. In 2016 IEEE International Conference on Computer and Information Technology (CIT’16). Nadi, 492–499. DOI:
[22]
Linjuan Ma and Fuquan Zhang. 2021. End-to-end predictive intelligence diagnosis in brain tumor using lightweight neural network. Applied Soft Computing 111 (2021), 107666.
[23]
Aluizio Rocha Neto, Thiago P. Silva, Thais Batista, Flávia C. Delicato, Paulo F. Pires, and Frederico Lopes. 2020. Leveraging edge intelligence for video analytics in smart city applications. Information 12, 1 (2020), 14.
[24]
Miguel A. Guillén, Antonio Llanes, Baldomero Imbernón, Raquel Martínez-España, Andrés Bueno-Crespo, Juan-Carlos Cano, and José M. Cecilia. 2021. Performance evaluation of edge-computing platforms for the prediction of low temperatures in agriculture using deep learning. The Journal of Supercomputing 77, 1 (2021), 818–840.
[25]
Ghulam Jillani Ansari, Jamal Hussain Shah, Muhammad Attique Khan, Muhammad Sharif, Usman Tariq, and Tallha Akram. 2021. A non-blind deconvolution semi pipelined approach to understand text in blurry natural images for edge intelligence. Information Processing & Management 58, 6 (2021), 102675.
[26]
Yusen Zhang, Bao Li, and Yusong Tan. 2021. Making AI available for everyone at anywhere: A survey about edge intelligence. In Journal of Physics: Conference Series, Vol. 1757. IOP Publishing, 012076.
[27]

Published In

Journal of Data and Information Quality, Volume 15, Issue 2
June 2023, 363 pages
ISSN: 1936-1955
EISSN: 1936-1963
DOI: 10.1145/3605909

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 22 June 2023
      Online AM: 25 January 2023
      Accepted: 18 December 2022
      Revised: 11 December 2022
      Received: 28 March 2022
      Published in JDIQ Volume 15, Issue 2


      Author Tags

      1. Edge intelligence
      2. lightweight machine learning
      3. cloud computing
      4. artificial intelligence
      5. edge computing
      6. network services
      7. quality of service

      Qualifiers

      • Survey


Cited By

      • (2025) Edge AI: A Taxonomy, Systematic Review and Future Directions. Cluster Computing 28, 1 (2025). DOI: 10.1007/s10586-024-04686-y. Online publication date: 1-Feb-2025.
      • (2024) A Systematic Mapping Study of UAV-Enabled Mobile Edge Computing for Task Offloading. IEEE Access 12 (2024), 101936–101970. DOI: 10.1109/ACCESS.2024.3431922.
      • (2024) Process-aware security monitoring in industrial control systems: A systematic review and future directions. International Journal of Critical Infrastructure Protection 47 (2024), 100719. DOI: 10.1016/j.ijcip.2024.100719. Online publication date: Dec-2024.
      • (2024) Edge AI for Internet of Medical Things: A literature review. Computers and Electrical Engineering 116 (2024), 109202. DOI: 10.1016/j.compeleceng.2024.109202. Online publication date: May-2024.
      • (2024) Modeling, Simulating, and Evaluating Complex End-to-End Edge Intelligence Systems. In IoT Edge Intelligence (2024), 3–35. DOI: 10.1007/978-3-031-58388-9_1. Online publication date: 4-Jun-2024.
      • (2023) Fortified Edge 3.0: A Lightweight Machine Learning based Approach for Security in Collaborative Edge Computing. In 2023 OITS International Conference on Information Technology (OCIT), 450–455. DOI: 10.1109/OCIT59427.2023.10430911. Online publication date: 13-Dec-2023.
