Future Internet, Volume 16, Issue 10 (October 2024) – 37 articles

Cover Story (view full-size image): The study introduces a new model for defining and executing performance tests used during IT system design and operation. By establishing clear objectives, test types, and implementation methods, the model streamlines the preparation, execution, and replication of performance tests. It divides the testing process into layers, allowing specialized teams to independently handle different test components, thereby accelerating test implementation and reducing costs. In addition, the proposed solution facilitates communication between developers and testers by introducing an unambiguous and precise test description. The model was validated in both laboratory and production environments, and proved useful in identifying system bottlenecks and enabling rapid performance optimization.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
28 pages, 3396 KiB  
Review
Internet of Things and Distributed Computing Systems in Business Models
by Albérico Travassos Rosário and Ricardo Raimundo
Future Internet 2024, 16(10), 384; https://doi.org/10.3390/fi16100384 - 21 Oct 2024
Viewed by 886
Abstract
The integration of the Internet of Things (IoT) and Distributed Computing Systems (DCS) is transforming business models across industries. IoT devices allow immediate monitoring of equipment and processes, mitigating lost time and enhancing efficiency. In this case, manufacturing companies use IoT sensors to monitor machinery, predict failures, and schedule maintenance. Also, automation via IoT reduces manual intervention, resulting in boosted productivity in smart factories and automated supply chains. IoT devices generate a vast amount of data, which businesses analyze to gain insights into customer behavior, operational inefficiencies, and market trends. In turn, Distributed Computing Systems process this data, providing actionable insights and enabling advanced analytics and machine learning for future trend predictions. While IoT facilitates personalized products and services by collecting data on customer preferences and usage patterns, enhancing satisfaction and loyalty, IoT devices also support new customer interactions, like wearable health devices, and enable subscription-based and pay-per-use models in transportation and utilities. Conversely, real-time monitoring enhances security, as distributed systems quickly respond to threats, ensuring operational safety. It also aids regulatory compliance by providing accurate operational data. In this way, this study, through a Bibliometric Literature Review (LRSB) of 91 screened pieces of literature, aims at ascertaining to what extent the aforementioned capacities, overall, enhance business models in terms of efficiency and effectiveness. The study concludes that those systems altogether leverage businesses, promoting competitive edge, continuous innovation, and adaptability to market dynamics. In particular, the integration of IoT and Distributed Computing Systems into business models brings numerous advantages: it develops smart infrastructures (e.g., smart grids); edge computing that allows data processing closer to the data source (e.g., autonomous vehicles); predictive analytics that helps businesses anticipate issues (e.g., foreseeing equipment failures); personalized services (e.g., e-commerce platforms giving personalized recommendations to users); and enhanced security with a reduced risk of centralized attacks (e.g., blockchain technology). Future research avenues are suggested. Full article
(This article belongs to the Collection Information Systems Security)
Figure 1: PRISMA 2020 flow diagram of the literature search and screening process [7,8].
Figure 2: Documents by year. Source: Scopus platform output.
Figure 3: Literature by Geography.
Figure 4: Trend in citations ranging from 2014 to 2024.
Figure 5: A web of keywords.
Figure 6: A Web of Related Keywords.
Figure 7: A snapshot of co-citations.
Figure 8: IoT evolution since 1999 [14].
Figure 9: Business model canvas [5].
4 pages, 935 KiB  
Correction
Correction: Vasilas et al. Beat the Heat: Syscall Attack Detection via Thermal Side Channel. Future Internet 2024, 16, 301
by Teodora Vasilas, Claudiu Bacila and Remus Brad
Future Internet 2024, 16(10), 383; https://doi.org/10.3390/fi16100383 - 21 Oct 2024
Viewed by 330
Abstract
In the original publication [...] Full article
Figure 6: Reproduced results for ls and chmod commands on D-151.
Figure 7: Reproduced results for ls and chmod commands on D-152.
Figure 10: Results with moving a small file as a noise, selecting CPU affinity.
Figure A4: Results obtained with keystrokes as a noise, selecting CPU affinity.
Figure A5: Results obtained with mathematical computations as a noise, selecting CPU affinity.
22 pages, 2551 KiB  
Review
A Performance Benchmark for the PostgreSQL and MySQL Databases
by Sanket Vilas Salunke and Abdelkader Ouda
Future Internet 2024, 16(10), 382; https://doi.org/10.3390/fi16100382 - 19 Oct 2024
Viewed by 711
Abstract
This study highlights the necessity for efficient database management in continuous authentication systems, which rely on large-scale behavioral biometric data such as keystroke patterns. A benchmarking framework was developed to evaluate the PostgreSQL and MySQL databases, minimizing repetitive coding through configurable functions and variables. The methodology involved experiments assessing select and insert queries under primary and complex conditions, simulating real-world scenarios. Our quantified results show PostgreSQL’s superior performance in select operations. In primary tests, PostgreSQL’s execution time for 1 million records ranged from 0.6 ms to 0.8 ms, while MySQL’s ranged from 9 ms to 12 ms, indicating that PostgreSQL is about 13 times faster. For select queries with a where clause, PostgreSQL required 0.09 ms to 0.13 ms compared to MySQL’s 0.9 ms to 1 ms, making it roughly 9 times more efficient. Insert operations were similar, with PostgreSQL at 0.0007 ms to 0.0014 ms and MySQL at 0.0010 ms to 0.0030 ms. In complex experiments with simultaneous operations, PostgreSQL maintained stable performance (0.7 ms to 0.9 ms for select queries during inserts), while MySQL’s performance degraded significantly (7 ms to 13 ms). These findings underscore PostgreSQL’s suitability for environments requiring low data latency and robust concurrent processing capabilities, making it ideal for continuous authentication systems. Full article
(This article belongs to the Special Issue Distributed Storage of Large Knowledge Graphs with Mobility Data)
Figure 1: Continuous authentication architecture.
Figure 2: Benchmarking framework block diagram.
Figure 3: Configuration file variables.
Figure 4: Database benchmarking activity diagram.
Figure 5: Select query execution time of MySQL for primary experiment one.
Figure 6: Select query execution time of PostgreSQL for primary experiment one.
Figure 7: Select query comparison of MySQL and PostgreSQL for primary experiment one.
Figure 8: Select operation with where condition query execution time of MySQL for primary experiment two.
Figure 9: Select operation with where condition query execution time of PostgreSQL for primary experiment two.
Figure 10: Select operation with where condition query comparison of MySQL and PostgreSQL for primary experiment two.
Figure 11: Insert query execution time of MySQL for primary experiment three.
Figure 12: Insert query execution time of PostgreSQL for primary experiment three.
Figure 13: Insert query comparison of MySQL and PostgreSQL for primary experiment three.
Figure 14: Select query execution time of MySQL with insert operation in parallel.
Figure 15: Select query execution time of PostgreSQL with insert operation in parallel.
Figure 16: Select query comparison of MySQL and PostgreSQL with insert operation in parallel.
Figure 17: Select operation with where query execution time of MySQL with insert operation in parallel.
Figure 18: Select operation with where query execution time of PostgreSQL with insert operation in parallel.
Figure 19: Select operation with where query comparison of MySQL and PostgreSQL with insert operation in parallel.
Figure 20: Insert query execution time of MySQL with select operation in parallel.
Figure 21: Insert query execution time of PostgreSQL with select operation in parallel.
Figure 22: Insert query comparison of MySQL and PostgreSQL with select operation in parallel.
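For readers who want the flavor of this kind of measurement, the sketch below times a query through Python's DB-API against PostgreSQL. It is a minimal sketch, not the authors' framework: the `keystrokes` table, connection settings, and repeat count are illustrative assumptions.

```python
# Minimal query-timing sketch in the spirit of the benchmark above.
import time
import statistics
import psycopg2  # pip install psycopg2-binary

def time_query(conn, sql, params=None, repeats=30):
    """Run a query repeatedly and return the median execution time in ms."""
    timings = []
    with conn.cursor() as cur:
        for _ in range(repeats):
            start = time.perf_counter()
            cur.execute(sql, params or ())
            cur.fetchall()  # force the result set to be materialized
            timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)

if __name__ == "__main__":
    conn = psycopg2.connect(dbname="benchmark", user="bench", password="bench")
    # 'keystrokes' is a hypothetical table of behavioral biometric samples.
    print("select:", time_query(conn, "SELECT * FROM keystrokes LIMIT 1000"))
    print("filtered:", time_query(
        conn, "SELECT * FROM keystrokes WHERE user_id = %s", (42,)))
    conn.close()
```

Reporting the median over repeated runs, as here, damps warm-up and caching effects that would otherwise distort single-shot timings.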
20 pages, 1607 KiB  
Article
Securing the Edge: CatBoost Classifier Optimized by the Lyrebird Algorithm to Detect Denial of Service Attacks in Internet of Things-Based Wireless Sensor Networks
by Sennanur Srinivasan Abinayaa, Prakash Arumugam, Divya Bhavani Mohan, Anand Rajendran, Abderezak Lashab, Baoze Wei and Josep M. Guerrero
Future Internet 2024, 16(10), 381; https://doi.org/10.3390/fi16100381 - 19 Oct 2024
Viewed by 766
Abstract
The security of Wireless Sensor Networks (WSNs) is of the utmost importance because of their widespread use in various applications. Protecting WSNs from harmful activity is a vital function of intrusion detection systems (IDSs). An innovative approach to WSN intrusion detection (ID) utilizing the CatBoost classifier (Cb-C) and the Lyrebird Optimization Algorithm (LOA) is presented in this work. Cb-C excels at handling the imbalanced datasets that are typical in ID settings. The lyrebird’s remarkable capacity to imitate the sounds of its surroundings served as inspiration for the LOA, a metaheuristic optimization algorithm. The WSN-DS dataset, acquired from Prince Sultan University in Saudi Arabia, is used to assess the suggested method. Among the models presented, LOA-Cb-C produces the highest accuracy, 99.66%, and correspondingly the lowest error value, 0.34%, of the methods discussed in this article. Experimental results reveal that the suggested strategy improves WSN-IoT security over the existing methods in terms of detection accuracy and the false alarm rate. Full article
Figure 1: IoT-based Wireless Sensor Network: the basic structure [1].
Figure 2: Flowchart of the LOA.
Figure 3: Flowchart of the CatBoost classifier.
Figure 4: Flow diagram of the proposed LOA-Cb-C.
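The general pattern of wrapping CatBoost in a metaheuristic hyperparameter search can be sketched as below. The LOA itself is not reproduced (its update rules are not given in the abstract); a plain random-search loop stands in for it, and the synthetic imbalanced dataset is an illustrative substitute for WSN-DS.

```python
# Sketch: metaheuristic-style hyperparameter search around CatBoost.
import numpy as np
from catboost import CatBoostClassifier  # pip install catboost
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)  # imbalanced, like ID traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
best_acc, best_params = 0.0, None
for _ in range(10):  # each iteration = one candidate solution
    params = {"depth": int(rng.integers(4, 10)),
              "learning_rate": float(rng.uniform(0.01, 0.3)),
              "iterations": int(rng.integers(100, 400))}
    model = CatBoostClassifier(**params, verbose=False, random_seed=0)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    if acc > best_acc:
        best_acc, best_params = acc, params
print(best_acc, best_params)
```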
19 pages, 693 KiB  
Article
Collision Avoidance Adaptive Data Rate Algorithm for LoRaWAN
by Rachel Kufakunesu, Gerhard P. Hancke and Adnan M. Abu-Mahfouz
Future Internet 2024, 16(10), 380; https://doi.org/10.3390/fi16100380 - 19 Oct 2024
Viewed by 671
Abstract
Long-Range Wide-Area Network (LoRaWAN) technology offers efficient connectivity for numerous end devices over a wide coverage area in the Internet of Things (IoT) network, enabling the exchange of data over the Internet between even the smallest Internet-connected devices and systems. One of LoRaWAN’s hallmark features is the Adaptive Data Rate (ADR) algorithm. ADR is a resource allocation function which dynamically adjusts the network’s data rate, airtime, and energy dissipation to optimise its performance. The allocation of spreading factors plays a critical role in determining the throughput of the end device and its robustness to interference. However, in practical deployments, LoRaWAN networks experience considerable interference, severely affecting the packet delivery ratio, energy utilisation, and general network performance. To address this, we present a novel ADR framework, SSFIR-ADR, which utilises randomised spreading factor allocation to minimise energy consumption and packet collisions while maintaining optimal network performance. We implement a LoRa network composed of a single gateway that connects a large number of end nodes to a network server. In terms of energy use, packet delivery rate, and interference rate (IR), our simulation implementation outperforms LoRaWAN’s legacy ADR scheme for a range of application data intervals. Full article
Figure 1: Standard ADR model ED side.
Figure 2: The LoRaWAN network scenario.
Figure 3: The SF Map (Algorithm SSFIR-ADR2).
Figure 4: Data interval vs. Total consumed energy.
Figure 5: Number of EDs vs. Total consumed energy.
Figure 6: Data interval vs. UL-PDR.
Figure 7: Number of EDs vs. UL-PDR.
Figure 8: Data interval vs. CPSR.
Figure 9: Number of EDs vs. CPSR.
Figure 10: Data interval vs. Interference/collision rate.
Figure 11: Number of EDs vs. Interference/collision rate.
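The core idea of randomised spreading factor (SF) allocation can be illustrated in a few lines. This is a toy sketch, not SSFIR-ADR itself: the demodulation-floor SNR limits are commonly cited LoRa values, and the fallback rule is an assumption.

```python
# Toy sketch: pick a random SF from the set the link budget allows, so that
# devices spread across SFs instead of piling onto the lowest feasible one.
import random

# Rough demodulation-floor SNR limits (dB) per SF, commonly cited for LoRa.
SNR_LIMIT = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def feasible_sfs(snr_db):
    """SFs whose demodulation floor lies below the measured SNR."""
    return [sf for sf, limit in SNR_LIMIT.items() if snr_db > limit]

def assign_sf(snr_db, rng=random):
    """Randomised allocation: any feasible SF, not always the fastest one."""
    options = feasible_sfs(snr_db)
    return rng.choice(options) if options else 12  # fall back to most robust

if __name__ == "__main__":
    random.seed(1)
    for snr in (-5.0, -12.0, -19.0):
        print(snr, "->", assign_sf(snr))
```

Spreading devices across SFs in this way reduces the chance that many nodes transmit with the same SF at once, which is the main source of same-channel collisions the abstract targets.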
52 pages, 18006 KiB  
Review
A Survey of the Real-Time Metaverse: Challenges and Opportunities
by Mohsen Hatami, Qian Qu, Yu Chen, Hisham Kholidy, Erik Blasch and Erika Ardiles-Cruz
Future Internet 2024, 16(10), 379; https://doi.org/10.3390/fi16100379 - 18 Oct 2024
Viewed by 3698
Abstract
The metaverse concept has been evolving from static, pre-rendered virtual environments to a new frontier: the real-time metaverse. This survey paper explores the emerging field of real-time metaverse technologies, which enable the continuous integration of dynamic, real-world data into immersive virtual environments. We examine the key technologies driving this evolution, including advanced sensor systems (LiDAR, radar, cameras), artificial intelligence (AI) models for data interpretation, fast data fusion algorithms, and edge computing with 5G networks for low-latency data transmission. This paper reveals how these technologies are orchestrated to achieve near-instantaneous synchronization between physical and virtual worlds, a defining characteristic that distinguishes the real-time metaverse from its traditional counterparts. The survey provides a comprehensive insight into the technical challenges and discusses solutions to realize responsive dynamic virtual environments. The potential applications and impact of real-time metaverse technologies across various fields are considered, including live entertainment, remote collaboration, dynamic simulations, and urban planning with digital twins. By synthesizing current research and identifying future directions, this survey provides a foundation for understanding and advancing the rapidly evolving landscape of real-time metaverse technologies, contributing to the growing body of knowledge on immersive digital experiences and setting the stage for further innovations in this transformative field. Full article
Figure 1: An illustration of the 7-layer metaverse architecture.
Figure 2: Metaverse technologies.
Figure 3: Real-time metaverse hierarchical system.
Figure 4: Metaverse architecture.
Figure 5: Real-time metaverse in a closed-loop system.
Figure 6: Structures of computing in the network.
Figure 7: A general 5G cellular network architecture.
Figure 8: Immersive metaverse technologies.
Figure 9: Interoperability of the metaverse.
Figure 10: Metaverse applications: bandwidth versus latency.
Figure 11: Security challenges associated with the metaverse.
21 pages, 3530 KiB  
Systematic Review
A Systematic Review and Multifaceted Analysis of the Integration of Artificial Intelligence and Blockchain: Shaping the Future of Australian Higher Education
by Mahmoud Elkhodr, Ketmanto Wangsa, Ergun Gide and Shakir Karim
Future Internet 2024, 16(10), 378; https://doi.org/10.3390/fi16100378 - 18 Oct 2024
Viewed by 775
Abstract
This study explores the applications and implications of blockchain technology in the Australian higher education system, focusing on its integration with artificial intelligence (AI). By addressing critical challenges in credential verification, administrative efficiency, and academic integrity, this integration aims to enhance the global competitiveness of Australian higher education institutions. A comprehensive review of 25 recent research papers quantifies the benefits, challenges, and prospects of blockchain adoption in educational settings. Our findings reveal that 52% of the reviewed papers focus on systematic reviews, 28% focus on application-based studies, and 20% combine both approaches. The keyword analysis identified 287 total words, with “blockchain” and “education” as the most prominent themes. This study highlights blockchain’s potential to improve credential management, academic integrity, administrative efficiency, and funding mechanisms in education. However, challenges such as technical implementation (24%), regulatory compliance (32%), environmental concerns (28%), and data security risks (40%) must be addressed to achieve widespread adoption. This study also discusses critical prerequisites for successful blockchain integration, including infrastructure development, staff training, regulatory harmonisation, and the incorporation of AI for personalised learning. Our research concludes that blockchain, when strategically implemented and combined with AI, has the potential to transform the Australian higher education system, significantly enhancing its integrity, efficiency, and global competitiveness. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
Figure 1: Initial search criteria and results from the Scopus database, after applying relevant filters. Source: Scopus.
Figure 2: Proportions of research documents by type.
Figure 3: Proportions of research documents by subject area. Source: Scopus.
Figure 4: PRISMA.
Figure 5: Word cloud.
Figure 6: Heatmap.
Figure 7: Mind map of blockchain integration with AI in education.
20 pages, 1850 KiB  
Article
An IoT-Enhanced Traffic Light Control System with Arduino and IR Sensors for Optimized Traffic Patterns
by Kian Raheem Qasim, Noor M. Naser and Ahmed J. Jabur
Future Internet 2024, 16(10), 377; https://doi.org/10.3390/fi16100377 - 18 Oct 2024
Viewed by 1044
Abstract
Traffic lights play an important role in efficient traffic management, especially in crowded cities. Optimizing traffic helps to reduce crowding, save time, and ensure the smooth flow of traffic. Metaheuristic algorithms have a proven ability to optimize smart traffic management systems. This paper investigates the effectiveness of two metaheuristic algorithms: particle swarm optimization (PSO) and grey wolf optimization (GWO). In addition, we propose a hybrid PSO-GWO method for optimizing traffic light control using IoT-enabled data from sensors. In this study, we aimed to enhance the movement of traffic, minimize delays, and improve overall traffic precision. Our results demonstrate that the hybrid PSO-GWO method outperforms the individual PSO and GWO algorithms, achieving superior traffic movement precision (0.925173), greater delay reduction (0.994543), and higher throughput improvement (0.89912) than the standalone methods. PSO excels in reducing wait times (0.7934), while GWO shows reasonable performance across a range of metrics. The hybrid approach leverages the power of both the PSO and GWO algorithms, proving to be the most effective solution for smart traffic management. This research highlights the value of hybrid optimization techniques and the Internet of Things (IoT) in developing traffic control systems. Full article
Figure 1: Schematic diagram of hardware.
Figure 2: Schematic diagram of the LDR and laser.
Figure 3: Fitness values across iterations for the three proposed methods.
Figure 4: Precision of traffic movement for the three proposed methods.
Figure 5: Reduction in delay when using the three proposed methods.
Figure 6: Average reduction in waiting time for the three proposed methods.
Figure 7: Throughput improvement for the three proposed methods.
Figure 8: Best objective function results of the three proposed methods.
Figure 9: Result of cumulative traffic movement over time for three methods.
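A minimal PSO loop for green-time allocation, of the kind such a system would run, is sketched below. The fitness function is a hypothetical stand-in for the paper's IoT-derived delay model; the bounds, inertia, and acceleration coefficients are illustrative assumptions.

```python
# Minimal PSO sketch for allocating green time (seconds) across 4 approaches.
import numpy as np

def fitness(green_times, demand=np.array([0.9, 0.6, 0.7, 0.4])):
    """Toy delay proxy: squared mismatch between green share and demand share."""
    share = green_times / green_times.sum()
    return float(((share - demand / demand.sum()) ** 2).sum())

def pso(n_particles=20, dims=4, iters=100, lo=10.0, hi=60.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dims))   # candidate green times
    v = np.zeros_like(x)                           # particle velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep timings in bounds
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

print("green times (s):", pso().round(1))
```

A PSO-GWO hybrid like the paper's would replace part of this velocity update with GWO's leader-guided position update; the swarm bookkeeping stays the same.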
28 pages, 1126 KiB  
Article
Internet of Things Adoption in Technology Ecosystems Within the Central African Region: The Case of Silicon Mountain
by Godlove Suila Kuaban, Valery Nkemeni, Onyeka J. Nwobodo, Piotr Czekalski and Fabien Mieyeville
Future Internet 2024, 16(10), 376; https://doi.org/10.3390/fi16100376 - 16 Oct 2024
Viewed by 723
Abstract
The Internet of Things (IoT) has emerged as a transformative technology with the potential to revolutionize various sectors and industries worldwide. Despite its global significance, the adoption and implementation of IoT technologies in emerging technology ecosystems within the Central African region still need to be studied and explored. This paper presents a case study of the Silicon Mountain technology ecosystem, located in Fako division of the southwest region of Cameroon, focusing on the barriers and challenges to adopting and integrating IoT technologies within this emerging tech ecosystem. Through a survey-based approach, we investigate the factors influencing IoT adoption in the Silicon Mountain tech ecosystem, including technological, economic, social, and regulatory factors. Our study reveals key insights into the current state of IoT adoption, opportunities for growth and innovation, and IoT adoption challenges. Key among the challenges identified for impeding IoT uptake were issues related to standardization and financial resources, labor shortage in the industry, educational and knowledge gaps, market challenges, government policies, security and data privacy concerns, and inadequate power supply. Based on our findings, we provide recommendations for policymakers, industry stakeholders, and academic institutions to promote and facilitate the widespread adoption of IoT technologies in Silicon Mountain and the Central African region at large. Full article
Figure 1: Respondents’ affiliation within the Silicon Mountain technology ecosystem.
Figure 2: Respondents’ field of specialization.
Figure 3: Respondents’ identification of potential areas for the implementation of the Internet of Things in the Central African sub-region.
14 pages, 2710 KiB  
Article
SPDepth: Enhancing Self-Supervised Indoor Monocular Depth Estimation via Self-Propagation
by Xiaotong Guo, Huijie Zhao, Shuwei Shao, Xudong Li, Baochang Zhang and Na Li
Future Internet 2024, 16(10), 375; https://doi.org/10.3390/fi16100375 - 16 Oct 2024
Viewed by 574
Abstract
Due to the existence of low-textured areas in indoor scenes, some self-supervised depth estimation methods have specifically designed sparse photometric consistency losses and geometry-based losses. However, some of the loss terms cannot supervise all the pixels, which limits the performance of these methods. Some approaches introduce an additional optical flow network to provide dense correspondence supervision, but this overloads the loss function. In this paper, we propose to perform depth self-propagation based on feature self-similarities, where high-accuracy depths are propagated from supervised pixels to unsupervised ones. The enhanced self-supervised indoor monocular depth estimation network is called SPDepth. Since depth self-similarities are significant in a local range, a local window self-attention module is embedded at the end of the network to propagate depths within a window. The depth of a pixel is weighted using its feature correlation scores with other pixels in the same window. The effectiveness of the self-propagation mechanism is demonstrated in experiments on the NYU Depth V2 dataset. The root-mean-squared error of SPDepth is 0.585 and the δ1 accuracy is 77.6%. Zero-shot generalization studies are also conducted on the 7-Scenes dataset and provide a more comprehensive analysis of the application characteristics of SPDepth. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision)
Figure 1: The areas being supervised in P²Net [1]: (a) RGB image; (b) pixels supervised by the patch-based photometric loss; (c) pixels supervised by the plane fitting loss; (d) all the pixels being supervised. The pixels in black are supervised.
Figure 2: The pipeline of SPDepth. Features with a resolution half the size of the training images are encoded from the target image. The features are then up-sampled to the same size as the image. The depths from DepthCNN and up-sampled features F are together input to the self-propagation module. Based on self-attention, the depths are weighted with the self-similarity scores to perform propagation.
Figure 3: Details of the self-propagation module. The projected key features are split into feature windows. The window attention scores are calculated through the self-attention operation. The depths are then weighted in respective windows.
Figure 4: Zeros are padded on both sides of the input to form windows at the edges. The number of zeros for each dimension equals the window radius. Both key feature windows and depth windows are partitioned in this way.
Figure 5: Visualization results on the NYU Depth V2 dataset. (a) RGB image; (b) P²Net [1]; (c) our SPDepth; (d) ground truth.
Figure 6: Visualization of generalized results in the scene Stairs of the 7-Scenes dataset. (a) RGB image; (b) P²Net [1]; (c) our SPDepth; (d) ground truth.
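The windowed attention-weighted propagation described above can be sketched compactly in PyTorch. This is a minimal sketch of the idea, not the authors' exact module: the tensor shapes, zero padding, and softmax temperature are assumptions consistent with the figure captions.

```python
# Sketch: each pixel's depth becomes an attention-weighted mix of depths in
# its local window, with weights from feature (self-)similarity.
import torch
import torch.nn.functional as F

def propagate_depth(feat, depth, window=7):
    """feat: (B, C, H, W) features; depth: (B, 1, H, W) initial depths."""
    B, C, H, W = feat.shape
    pad = window // 2  # zero padding at edges, as in the paper's Figure 4
    # Gather the window of keys/depths around every pixel.
    keys = F.unfold(feat, window, padding=pad).view(B, C, window * window, H * W)
    vals = F.unfold(depth, window, padding=pad).view(B, 1, window * window, H * W)
    query = feat.view(B, C, 1, H * W)
    # Dot-product similarity between the centre pixel and each window position.
    scores = (query * keys).sum(dim=1) / C ** 0.5   # (B, k*k, H*W)
    attn = scores.softmax(dim=1)
    out = (attn.unsqueeze(1) * vals).sum(dim=2)     # (B, 1, H*W)
    return out.view(B, 1, H, W)

feat = torch.randn(1, 16, 24, 32)
depth = torch.rand(1, 1, 24, 32)
print(propagate_depth(feat, depth).shape)  # torch.Size([1, 1, 24, 32])
```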
37 pages, 2626 KiB  
Article
A Survey of Security Strategies in Federated Learning: Defending Models, Data, and Privacy
by Habib Ullah Manzoor, Attia Shabbir, Ao Chen, David Flynn and Ahmed Zoha
Future Internet 2024, 16(10), 374; https://doi.org/10.3390/fi16100374 - 15 Oct 2024
Viewed by 3388
Abstract
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, enabling decentralized model training across multiple devices while preserving data privacy. However, the decentralized nature of FL introduces significant security challenges, making it vulnerable to various attacks targeting models, data, and privacy. This survey provides a comprehensive overview of the defense strategies against these attacks, categorizing them into data and model defenses and privacy attacks. We explore pre-aggregation, in-aggregation, and post-aggregation defenses, highlighting their methodologies and effectiveness. Additionally, the survey delves into advanced techniques such as homomorphic encryption and differential privacy to safeguard sensitive information. The integration of blockchain technology for enhancing security in FL environments is also discussed, along with incentive mechanisms to promote active participation among clients. Through this detailed examination, the survey aims to inform and guide future research in developing robust defense frameworks for FL systems. Full article
(This article belongs to the Special Issue Privacy and Security Issues with Edge Learning in IoT Systems)
Figure 1: Paper distribution.
Figure 2: Overview of FL.
Figure 3: Types of FL.
Figure 4: Types of attacks in FL.
Figure 5: Types of defense strategies.
Figure 6: Visual representation of detect and remove defense strategy.
Figure 7: Visual representation of adversarial training.
Figure 8: Model pruning.
Figure 9: Visual representation of Byzantine robust aggregation techniques.
Figure 10: Visual representation of robust client selection.
Figure 11: Visual representation of data and update analysis.
Figure 12: Visual representation of blockchain.
Figure 13: Visual representation of Incentivized Federated Learning.
Figure 14: Visual representation of regularization.
Figure 15: Visual representation of homomorphic encryption.
Figure 16: Visual representation of knowledge distillation.
Figure 17: Visual representation of Secure Multi-party Computation.
Figure 18: Visual representation of split learning.
Figure 19: Visual representation of perturbing gradients.
Figure 20: Visual representation of differential privacy.
Figure 21: Visual representation of Trusted Execution Environments (TEEs).
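One of the surveyed privacy defenses, differentially private update perturbation, reduces to two steps: clip each client update to bound its sensitivity, then add calibrated Gaussian noise. The sketch below shows that mechanic only; the clip norm and noise multiplier are illustrative values, not recommendations from the survey.

```python
# Sketch: clip-and-noise sanitization of a client's model update.
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
client_update = rng.normal(size=10)
print(dp_sanitize(client_update, rng=rng))
```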
15 pages, 956 KiB  
Article
Healthiness and Safety of Smart Environments through Edge Intelligence and Internet of Things Technologies
by Rafiq Ul Islam, Pasquale Mazzei and Claudio Savaglio
Future Internet 2024, 16(10), 373; https://doi.org/10.3390/fi16100373 - 14 Oct 2024
Viewed by 726
Abstract
Smart environments exploit rising technologies like the Internet of Things (IoT) and edge intelligence (EI) to achieve unprecedented effectiveness and efficiency in every task, including air sanitization. The latter represents a key preventative measure, made even more evident by the COVID-19 pandemic, to significantly reduce disease transmission and create healthier and safer indoor spaces for the sake of their occupants. Therefore, in this paper, we present an IoT-based system aimed at the continuous monitoring of air quality and, through EI techniques, at the proactive activation of ozone lamps, while ensuring safety in sanitization. Indeed, these devices are extremely effective at killing viruses and bacteria but, due to ozone toxicity, they must be properly controlled with advanced technologies to protect occupants from dangerous exposure as well as to ensure system reliability, operational efficiency, and regulatory compliance. Full article
Figure 1: System architecture.
Figure 2: UV-C lamp effectiveness vs. lifetime (hours) [27].
Figure 3: The ozonizer machine CRJ O3-UV-500.
Figure 4: The Node-RED flow.
Figure 5: Snapshot of the PostgreSQL database for collecting the sensors’ data.
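The safety logic implied above boils down to a simple rule: run the lamp only when the room is empty and ozone stays below a limit. The sketch below is a hypothetical illustration of that rule; the threshold, sensor inputs, and command strings are placeholders, not values from the paper or any regulation.

```python
# Sketch of the edge-side safety rule: safety always wins over sanitization.
OZONE_SAFETY_PPM = 0.05   # illustrative threshold, not a regulatory value

def lamp_command(occupancy_detected: bool, ozone_ppm: float,
                 sanitization_requested: bool) -> str:
    if occupancy_detected or ozone_ppm >= OZONE_SAFETY_PPM:
        return "OFF"      # never expose occupants or exceed the limit
    return "ON" if sanitization_requested else "OFF"

assert lamp_command(True, 0.01, True) == "OFF"   # room occupied
assert lamp_command(False, 0.20, True) == "OFF"  # ozone too high
assert lamp_command(False, 0.01, True) == "ON"   # safe to sanitize
```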
17 pages, 1040 KiB  
Article
Enhancing Heart Disease Prediction with Federated Learning and Blockchain Integration
by Yazan Otoum, Chaosheng Hu, Eyad Haj Said and Amiya Nayak
Future Internet 2024, 16(10), 372; https://doi.org/10.3390/fi16100372 - 14 Oct 2024
Viewed by 912
Abstract
Federated learning offers a framework for developing local models across institutions while safeguarding sensitive data. This paper introduces a novel approach for heart disease prediction using the TabNet model, which combines the strengths of tree-based models and deep neural networks. Our study utilizes the Comprehensive Heart Disease and UCI Heart Disease datasets, leveraging TabNet’s architecture to enhance data handling in federated environments. Horizontal federated learning was implemented using the federated averaging algorithm to securely aggregate model updates across participants. Blockchain technology was integrated to enhance transparency and accountability, with smart contracts automating governance. The experimental results demonstrate that TabNet achieved the highest balanced metrics score of 1.594 after 50 epochs, with an accuracy of 0.822 and an epsilon value of 6.855, effectively balancing privacy and performance. The model also demonstrated strong accuracy with only 10 iterations on aggregated data, highlighting the benefits of multi-source data integration. This work presents a scalable, privacy-preserving solution for heart disease prediction, combining TabNet and blockchain to address key healthcare challenges while ensuring data integrity. Full article
Figure 1: Horizontal federated learning system overview.
Figure 2: Decision tree with DNN architecture.
Figure 3: TabNet model architecture.
Figure 4: Feature transformer architecture.
Figure 5: Attentive transformer structure.
Figure 6: Training accuracy and loss on the UCI dataset over multiple epochs.
Figure 7: Training accuracy and loss on the Cleveland dataset over multiple epochs.
Figure 8: Training accuracy and loss on the aggregated dataset.
Figure 9: Testing accuracy of the UCI dataset across various epochs.
Figure 10: Testing accuracy of the Cleveland dataset across various epochs.
Figure 11: Testing accuracy of the aggregated dataset.
Figure 12: Testing of UCI data in terms of balanced metrics and accuracy.
Figure 13: Testing of Cleveland data in terms of balanced metrics and epsilon.
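The federated averaging step named in the abstract has a compact core: the server averages client parameter vectors weighted by their local sample counts. The sketch below shows that aggregation only, with weights as plain NumPy arrays; the blockchain and smart-contract layer is out of scope here.

```python
# Minimal FedAvg sketch: sample-size-weighted average of client weights.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter arrays."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                 # (n_clients, n_params)
    coeffs = np.array(client_sizes, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three hospitals with different dataset sizes, as in horizontal FL.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]
print(fed_avg(clients, sizes))  # pulled toward the largest client's weights
```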
18 pages, 20092 KiB  
Article
Multi-Source Data Fusion for Vehicle Maintenance Project Prediction
by Fanghua Chen, Deguang Shang, Gang Zhou, Ke Ye and Guofang Wu
Future Internet 2024, 16(10), 371; https://doi.org/10.3390/fi16100371 - 14 Oct 2024
Viewed by 604
Abstract
Ensuring road safety is heavily reliant on the effective maintenance of vehicles. Accurate predictions of maintenance requirements can substantially reduce ownership costs for vehicle owners. Consequently, this field has attracted increasing attention from researchers in recent years. However, existing studies primarily focus on predicting a limited number of maintenance needs, predominantly based solely on vehicle mileage and driving time. This approach often falls short, as it does not comprehensively monitor the overall health condition of vehicles, thus posing potential safety risks. To address this issue, we propose a deep fusion network model that utilizes multi-source data, including vehicle maintenance record data and vehicle base information data, to provide comprehensive predictions for vehicle maintenance projects. To capture the relationships among various maintenance projects, we create a correlation representation using the maintenance project co-occurrence matrix. Furthermore, building on the correlation representation, we propose a deep fusion network that employs the attention mechanism to efficiently merge vehicle mileage and vehicle base information. Experiments conducted on real data demonstrate the superior performance of our proposed model relative to competitive baseline models in predicting vehicle maintenance projects. Full article
Figure 1: The number of common maintenance projects with different annual mileage.
Figure 2: An overview of the MsDFN model. It is mainly composed of two modules: (1) Maintenance Project Correlation Representation; (2) Multi-source Data Deep Fusion Network.
Figure 3: The distributions of the vehicle maintenance records: (a) distribution of vehicles by number of maintenance visits; (b) distribution of records by number of maintenance projects.
Figure 4: Accuracy of predictions of MsDFN and baseline with varying training data.
Figure 5: The performance under different dimensions of E and U combinations.
Figure 6: Parameter sensitivity analysis of δ.
Figure 7: Correlation of frequency and attention weights for each group of projections: (a) Group I; (b) Group II; (c) Group III; (d) Group IV.
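The co-occurrence representation mentioned in the abstract can be built directly from maintenance records: count how often two projects appear in the same record, then row-normalize. The sketch below assumes toy project names and records; the paper's normalization details may differ.

```python
# Sketch: maintenance-project co-occurrence matrix, row-normalized.
import numpy as np

projects = ["oil_change", "brake_pads", "air_filter", "tires"]
index = {p: i for i, p in enumerate(projects)}
records = [["oil_change", "air_filter"],
           ["brake_pads", "tires"],
           ["oil_change", "brake_pads", "air_filter"]]

co = np.zeros((len(projects), len(projects)))
for record in records:
    for a in record:
        for b in record:
            if a != b:
                co[index[a], index[b]] += 1  # count joint occurrences

row_sums = co.sum(axis=1, keepdims=True)
correlation = np.divide(co, row_sums, out=np.zeros_like(co),
                        where=row_sums > 0)  # avoid divide-by-zero rows
print(correlation.round(2))
```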
31 pages, 5936 KiB  
Article
Advanced Optimization Techniques for Federated Learning on Non-IID Data
by Filippos Efthymiadis, Aristeidis Karras, Christos Karras and Spyros Sioutas
Future Internet 2024, 16(10), 370; https://doi.org/10.3390/fi16100370 - 13 Oct 2024
Viewed by 886
Abstract
Federated learning enables model training on multiple clients locally, without the need to transfer their data to a central server, thus ensuring data privacy. In this paper, we investigate the impact of Non-Independent and Identically Distributed (non-IID) data on the performance of federated training, where we find a reduction in accuracy of up to 29% for neural networks trained in environments with skewed non-IID data. Two optimization strategies are presented to address this issue. The first strategy focuses on applying a cyclical learning rate to determine the learning rate during federated training, while the second strategy develops a sharing and pre-training method on augmented data in order to improve the efficiency of the algorithm in the case of non-IID data. By combining these two methods, experiments show that the accuracy on the CIFAR-10 dataset increased by about 36% while achieving faster convergence by reducing the number of required communication rounds by a factor of 5.33. The proposed techniques lead to improved accuracy and faster model convergence, thus representing a significant advance in the field of federated learning and facilitating its application to real-world scenarios. Full article
(This article belongs to the Special Issue Distributed Storage of Large Knowledge Graphs with Mobility Data)
Figure 1: Example of application of the above augmentation techniques to a random CIFAR-10 image.
Figure 2: Illustration of the proposed methodology architecture.
Figure 3: MNIST IID vs. MNIST non-IID with fixed learning rate.
Figure 4: Fashion MNIST IID vs. Fashion MNIST non-IID with fixed learning rate.
Figure 5: CIFAR-10 IID vs. CIFAR-10 non-IID with fixed learning rate.
Figure 6: Learning rate range test for MNIST.
Figure 7: MNIST non-IID with fixed learning rate vs. MNIST non-IID with cyclical learning rate.
Figure 8: Learning rate range test for Fashion MNIST.
Figure 9: Fashion MNIST non-IID with fixed learning rate vs. Fashion MNIST non-IID with CLR.
Figure 10: Learning rate range test for CIFAR-10.
Figure 11: CIFAR-10 non-IID with fixed learning rate vs. CIFAR-10 non-IID with CLR.
Figure 12: CIFAR-10 Fixed LR vs. CIFAR-10 CLR vs. CIFAR-10 CLR + PreTrained.
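A standard triangular cyclical learning rate, of the kind the first strategy applies per communication round, is easy to sketch. The bounds and step size below are illustrative; the paper derives its actual values from the learning-rate range tests shown in its figures.

```python
# Sketch: triangular cyclical learning rate (CLR) schedule.
def cyclical_lr(step, base_lr=1e-4, max_lr=1e-2, step_size=20):
    """LR ramps base -> max -> base over every 2*step_size steps."""
    cycle_pos = step % (2 * step_size)
    frac = cycle_pos / step_size
    if frac > 1.0:
        frac = 2.0 - frac       # descending half of the cycle
    return base_lr + (max_lr - base_lr) * frac

for round_idx in range(0, 80, 10):  # one value per communication round
    print(round_idx, f"{cyclical_lr(round_idx):.5f}")
```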
29 pages, 6269 KiB  
Article
Malware Detection Based on API Call Sequence Analysis: A Gated Recurrent Unit–Generative Adversarial Network Model Approach
by Nsikak Owoh, John Adejoh, Salaheddin Hosseinzadeh, Moses Ashawa, Jude Osamor and Ayyaz Qureshi
Future Internet 2024, 16(10), 369; https://doi.org/10.3390/fi16100369 - 13 Oct 2024
Viewed by 1278
Abstract
Malware remains a major threat to computer systems, with a vast number of new samples being identified and documented regularly. Windows systems are particularly vulnerable to malicious programs like viruses, worms, and trojans. Dynamic analysis, which involves observing malware behavior during execution in a controlled environment, has emerged as a powerful technique for detection. This approach often focuses on analyzing Application Programming Interface (API) calls, which represent the interactions between the malware and the operating system. Recent advances in deep learning have shown promise in improving malware detection accuracy using API call sequence data. However, the potential of Generative Adversarial Networks (GANs) for this purpose remains largely unexplored. This paper proposes a novel hybrid deep learning model combining Gated Recurrent Units (GRUs) and GANs to enhance malware detection based on API call sequences from Windows portable executable files. We evaluate our GRU–GAN model against other approaches like Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Gated Recurrent Unit (BiGRU) on multiple datasets. Results demonstrated the superior performance of our hybrid model, achieving 98.9% accuracy on the most challenging dataset. It outperformed existing models in resource utilization, with faster training and testing times and low memory usage. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
Figure 1: Flowchart of the proposed method.
Figure 2: The architecture of the proposed GRU–GAN model.
Figure 3: Accuracy and validation loss of the BiLSTM and BiGRU models on dataset 1.
Figure 4: Discriminator accuracy and validation loss of the GRU–GAN model on dataset 1.
Figure 5: Accuracy and validation loss of the BiLSTM and BiGRU models on dataset 2.
Figure 6: Discriminator accuracy and validation loss of the GRU–GAN model on dataset 2.
Figure 7: Confusion matrix of the BiLSTM, BiGRU, and GRU–GAN models on datasets 1, 2 and 3.
Figure 8: Evaluation metrics results of the BiLSTM, BiGRU, and GRU–GAN models on the three datasets.
Figure 9: ROC curve results of the BiLSTM, BiGRU, and GRU–GAN models on the three datasets.
Figure 10: Prediction performance of the BiLSTM, BiGRU, and GRU–GAN models on dataset 1.
Figure 11: Prediction performance of the BiLSTM, BiGRU, and GRU–GAN models on dataset 2.
Figure 12: Prediction performance of the BiLSTM, BiGRU, and GRU–GAN models on dataset 3.
Figure 13: Computation resources used by the BiLSTM, BiGRU, and GRU–GAN models.
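A GRU-based discriminator over API-call sequences, the kind of component a GRU–GAN pairs with a generator, can be sketched as below. The vocabulary size, embedding and hidden dimensions, and the single-logit head are illustrative assumptions, not the paper's architecture.

```python
# Sketch: GRU discriminator scoring API-call ID sequences.
import torch
import torch.nn as nn

class GRUDiscriminator(nn.Module):
    def __init__(self, vocab_size=300, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # real-vs-generated logit

    def forward(self, api_ids):                # api_ids: (batch, seq_len) int64
        emb = self.embed(api_ids)
        _, h = self.gru(emb)                   # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))         # (batch, 1) logits

model = GRUDiscriminator()
batch = torch.randint(1, 300, (8, 100))        # 8 sequences of 100 API calls
print(model(batch).shape)                      # torch.Size([8, 1])
```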
38 pages, 2305 KiB  
Review
Towards Ensemble Feature Selection for Lightweight Intrusion Detection in Resource-Constrained IoT Devices
by Mahawish Fatima, Osama Rehman, Ibrahim M. H. Rahman, Aisha Ajmal and Simon Jigwan Park
Future Internet 2024, 16(10), 368; https://doi.org/10.3390/fi16100368 - 12 Oct 2024
Viewed by 639
Abstract
The emergence of smart technologies and the wide adoption of the Internet of Things (IoT) have revolutionized various sectors, yet they have also introduced significant security challenges due to the extensive attack surface they present. In recent years, many efforts have been made to minimize the attack surface. However, most IoT devices are resource-constrained with limited processing power, memory storage, and energy sources. Such devices lack the sufficient means for running existing resource-hungry security solutions, which in turn makes it challenging to secure IoT networks from sophisticated attacks. Feature Selection (FS) approaches in Machine Learning-enabled Intrusion Detection Systems (IDSs) have gained considerable attention in recent years for having the potential to detect sophisticated cyber-attacks while adhering to the resource limitations in IoT networks. Accordingly, several researchers have proposed FS-enabled IDSs for IoT networks with a focus on lightweight security solutions. This work presents a comprehensive study discussing FS-enabled lightweight IDS tailored for resource-constrained IoT devices, with a special focus on the emerging Ensemble Feature Selection (EFS) techniques, portraying a new direction for the research community to explore. The research aims to pave the way for the effective design of futuristic FS/EFS-enabled lightweight IDS for IoT networks, addressing the critical need for robust security measures in the face of resource limitations. Full article
Figure 1: Two-phase review methodology adopted for the survey on FS-enabled lightweight IDS in IoT.
Figure 2: Generic IoT architecture, encompassing end devices and gateways as resource-constrained components.
Figure 3: IoT Prominent Application Domains.
Figure 4: Feature selection procedure.
Figure 5: Percentage of features selected from the total number of available features in the datasets used in the research [31,32,103,111–120,123,124,128,129].
Figure 6: Ensemble feature selection.
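The EFS idea the survey highlights, combining several feature rankers and keeping the features most of them agree on, is sketched below. The choice of scorers, the top-k cutoff, and the majority threshold are illustrative assumptions, not recommendations from the survey.

```python
# Sketch: ensemble feature selection by majority vote over three rankers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_pos = MinMaxScaler().fit_transform(X)   # chi2 requires non-negative inputs

k = 8
rankings = []
for scorer in (lambda X, y: chi2(X, y)[0],
               lambda X, y: f_classif(X, y)[0],
               mutual_info_classif):
    scores = scorer(X_pos, y)
    rankings.append(set(np.argsort(scores)[-k:]))   # top-k feature indices

votes = np.zeros(X.shape[1], dtype=int)
for top in rankings:
    for idx in top:
        votes[idx] += 1
selected = np.where(votes >= 2)[0]                   # kept by 2 of 3 rankers
print("selected features:", selected)
```

Voting across heterogeneous rankers tends to keep the compact, stable feature subsets that lightweight IDS deployments need.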
22 pages, 3942 KiB  
Article
Countering Social Media Cybercrime Using Deep Learning: Instagram Fake Accounts Detection
by Najla Alharbi, Bashayer Alkalifah, Ghaida Alqarawi and Murad A. Rassam
Future Internet 2024, 16(10), 367; https://doi.org/10.3390/fi16100367 - 11 Oct 2024
Viewed by 2588
Abstract
An online social media platform such as Instagram has become a popular communication channel that millions of people are using today. However, this media also becomes an avenue where fake accounts are used to inflate the number of followers on a targeted account. Fake accounts tend to alter the concepts of popularity and influence on the Instagram media platform and significantly impact the economy, politics, and society, which is considered cybercrime. This paper proposes a framework to classify fake and real accounts on Instagram based on a deep learning approach called the Long Short-Term Memory (LSTM) network. Experiments and comparisons with existing machine and deep learning frameworks demonstrate considerable improvement in the proposed framework. It achieved a detection accuracy of 97.42% and 94.21% on two publicly available Instagram datasets, with F-measure scores of 92.17% and 89.55%, respectively. Further experiments on the Twitter dataset reveal the effectiveness of the proposed framework by achieving an impressive accuracy rate of 99.42%. Full article
Figure 1: Popular social networks.
Figure 2: Social spam categories.
Figure 3: DL as a subfield of AI [18].
Figure 4: Conceptual framework.
Figure 5: LSTM cell structure [48].
Figure 6: Results on dataset 1: (a) loss over 50 iterations; (b) accuracy over 50 iterations; (c) loss over 100 iterations; (d) accuracy over 100 iterations.
Figure 7: Results on dataset 2: (a) loss over 50 iterations; (b) accuracy over 50 iterations; (c) loss over 100 iterations; (d) accuracy over 100 iterations.
Figure 8: Results on dataset 3: (a) loss over 50 iterations; (b) accuracy over 50 iterations; (c) loss over 100 iterations; (d) accuracy over 100 iterations.
27 pages, 14786 KiB  
Article
New Model for Defining and Implementing Performance Tests
by Marek Bolanowski, Michał Ćmil and Adrian Starzec
Future Internet 2024, 16(10), 366; https://doi.org/10.3390/fi16100366 - 10 Oct 2024
Viewed by 636
Abstract
The article proposes a new model for defining and implementing performance tests used in the process of designing and operating IT systems. By defining the objectives, types, topological patterns, and methods of implementation, a coherent description of test preparation and execution is achieved, facilitating the interpretation of results and enabling straightforward replication of test scenarios. The model was used to develop and implement performance tests in a laboratory environment and in a production system. The proposed division of the testing process into layers, correlated with the test preparation steps, separates quasi-independent areas that can be handled by isolated teams of engineers. Such an approach accelerates the implementation of performance tests and can help optimize their cost.
(This article belongs to the Special Issue Internet of Things Technology and Service Computing)
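To give a concrete flavour of such a layered, declarative test description, the sketch below encodes objectives, test types, and a dependency matrix linking them (Figures 1 and 2 show the authors' matrices); every name and value in it is an assumption for illustration, not the authors' notation:

```python
# Purely illustrative encoding of a layered test description.
from dataclasses import dataclass

# dependency_matrix[objective][test_type] -> does this test type apply?
dependency_matrix = {
    "max_cps":        {"load": True, "stress": True,  "soak": False},
    "max_throughput": {"load": True, "stress": True,  "soak": True},
}

@dataclass
class TestDefinition:
    objective: str
    test_type: str
    topology: str        # reference to a topological pattern
    duration_s: int

def plan(objective: str) -> list[TestDefinition]:
    """Expand one objective into concrete, replicable test definitions."""
    return [TestDefinition(objective, t, "generator->DUT->collector", 300)
            for t, applies in dependency_matrix[objective].items() if applies]

print(plan("max_cps"))
```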
Figure 1. Dependency matrix of objectives and test types.
Figure 2. Dependency matrix of objectives and test types for a file server.
Figure 3. Topological pattern of performance test execution.
Figure 4. Examples of elements of the sets G, N, C.
Figure 5. Steps in the test definition process.
Figure 6. Template of the test topology for Scenario 1.
Figure 7. Connections per second (CPS) versus time on the test web server.
Figure 8. Report on HTTP client–server communication for test t0 in Scenario 1.
Figure 9. Throughput versus time on the test web server for test t1 in Scenario 1.
Figure 10. Report on HTTP client–server communication for test t1 in Scenario 1.
Figure 11. Template of the test topology for Scenario 2.
Figure 12. CPS versus time on the e-learning platform.
Figure 13. Number of concurrent connections versus time on the e-learning platform.
Figure 14. Report on HTTP client–server communication for test t0 in Scenario 2 (maximum possible CPS).
Figure 15. Resource load of the Moodle virtual machine versus time for test t0 in Scenario 2 (maximum possible CPS).
Figure 16. Resource load of the database virtual machine versus time for test t0 in Scenario 2 (maximum possible CPS).
Figure 17. CPS versus time.
Figure 18. Number of concurrent connections versus time for test t0 in Scenario 2.
Figure 19. Report on HTTP client–server communication for test t0 in Scenario 2 (CPS limited to 400).
Figure 20. Resource load of the Moodle virtual machine versus time for test t0 in Scenario 2 (CPS limited to 400).
Figure 21. Resource load of the database virtual machine versus time for test t0 in Scenario 2 (CPS limited to 400).
Figure 22. Number of concurrent connections versus time for test t2 in Scenario 2.
Figure 23. Resource load of the Moodle virtual machine versus time for test t2 in Scenario 2.
Figure 24. Resource load of the database virtual machine versus time for test t2 in Scenario 2.
Figure 25. Throughput versus time on the test web server for test t1 in Scenario 2.
Figure 26. Report on HTTP client–server communication for test t1 in Scenario 2.
Figure 27. Resource load of the Moodle virtual machine versus time for test t1 in Scenario 2.
Figure 28. Resource load of the database virtual machine versus time for test t1 in Scenario 2.
29 pages, 778 KiB  
Review
Large Language Models Meet Next-Generation Networking Technologies: A Review
by Ching-Nam Hang, Pei-Duo Yu, Roberto Morabito and Chee-Wei Tan
Future Internet 2024, 16(10), 365; https://doi.org/10.3390/fi16100365 - 7 Oct 2024
Viewed by 7466
Abstract
The evolution of network technologies has significantly transformed global communication, information sharing, and connectivity. Traditional networks, relying on static configurations and manual interventions, face substantial challenges such as complex management, inefficiency, and susceptibility to human error. The rise of artificial intelligence (AI) has begun to address these issues by automating tasks like network configuration, traffic optimization, and security enhancement. Despite their potential, integrating AI models in network engineering encounters practical obstacles, including complex configurations, heterogeneous infrastructure, unstructured data, and dynamic environments. Generative AI, particularly large language models (LLMs), represents a promising advancement, with capabilities extending to natural language processing tasks like translation, summarization, and sentiment analysis. This paper provides a comprehensive review of the transformative role of LLMs in modern network engineering. In particular, it addresses gaps in the existing literature by focusing on LLM applications in network design and planning, implementation, analytics, and management, and it discusses current research efforts, challenges, and future opportunities as a guide for networking professionals and researchers. The main goal is to facilitate the adoption and advancement of AI and LLMs in networking, promoting more efficient, resilient, and intelligent network systems.
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
Figure 1. The comprehensive life cycle of network systems, highlighting the four primary stages in network engineering: network design and planning, network implementation, network analytics, and network management. Each stage involves critical tasks such as resource scheduling, deployment, traffic analysis, and security protection. These stages are interconnected, ensuring seamless management of the network from initial planning to ongoing optimization and protection. Network intelligence is pivotal in automating processes and improving the efficiency, accuracy, and reliability of network tasks across all stages.
Figure 2. The traditional encoder–decoder transformer model architecture.
Figure 3. The timeline of LLM development from 2018 to June 2024, showcasing key advancements and notable models.
Figure 4. Iterative balancing of resource allocations using AIMD: (a) a visual representation of the AIMD algorithm; (b) an illustration of the convergence of AIMD resource allocation based on Perron–Frobenius theory. The bold arrow represents the Perron–Frobenius right eigenvector of a non-negative matrix, illustrating its convergence towards the fairness line.
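As a toy companion to the AIMD dynamic described in Figure 4, the following sketch simulates additive increase and multiplicative decrease for three users sharing one capacity; the capacity, gain, and back-off values are assumptions:

```python
# Toy AIMD simulation: every user additively increases its allocation while
# total demand is under capacity, and multiplicatively backs off otherwise.
import numpy as np

capacity, alpha, beta = 100.0, 1.0, 0.5
x = np.array([5.0, 40.0, 80.0])   # arbitrary starting allocations

for _ in range(1000):
    if x.sum() > capacity:
        x = beta * x              # multiplicative decrease on congestion
    else:
        x = x + alpha             # additive increase otherwise

print(x.round(1))                 # allocations approach an equal fair share
```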
16 pages, 731 KiB  
Article
Stance Detection in the Context of Fake News—A New Approach
by Izzat Alsmadi, Iyad Alazzam, Mohammad Al-Ramahi and Mohammad Zarour
Future Internet 2024, 16(10), 364; https://doi.org/10.3390/fi16100364 - 6 Oct 2024
Viewed by 841
Abstract
Online social networks (OSNs) are inundated with an enormous daily influx of news shared by users worldwide. Information can originate from any OSN user and spread quickly, making the task of fact-checking news both time-consuming and resource-intensive. To address this challenge, researchers are exploring machine learning techniques to automate fake news detection. This paper focuses specifically on detecting the stance of content producers, that is, whether they support or oppose the subject of the content. Our study aims to develop and evaluate advanced text-mining models that leverage pre-trained language models enhanced with meta features derived from headlines and article bodies. We sought to determine whether incorporating a cosine distance feature could improve model prediction accuracy. After analyzing and assessing several previous competition entries, we identified three key tasks for achieving high accuracy: (1) a multi-stage approach that integrates classical and neural network classifiers, (2) the extraction of additional text-based meta features from the headline and article body columns, and (3) the utilization of recent pre-trained embeddings and transformer models.
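For example, the cosine distance meta feature between a headline and an article body can be sketched with TF-IDF vectors; the sample strings below are illustrative only:

```python
# Cosine distance between headline and body as a stance meta feature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

headlines = ["Band rips up reported $800M reunion contract"]
bodies = ["The band's frontman turned down a reported reunion offer."]

vec = TfidfVectorizer().fit(headlines + bodies)   # shared vocabulary
H, B = vec.transform(headlines), vec.transform(bodies)
cos_dist = 1.0 - cosine_similarity(H, B).diagonal()  # one value per pair
print(cos_dist)
```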
Figure 1. Talos ensemble.
Figure 2. Talos ensemble.
Figure 3. Provided data description.
Figure 4. Distribution of the outcome variable.
Figure 5. Training data by stance.
Figure 6. Pipeline of the proposed method.
Figure 7. Feature correlation.
Figure 8. Classifier scores and training durations (training stage).
23 pages, 273 KiB  
Article
Combating Web Tracking: Analyzing Web Tracking Technologies for User Privacy
by Kyungmin Sim, Honyeong Heo and Haehyun Cho
Future Internet 2024, 16(10), 363; https://doi.org/10.3390/fi16100363 - 5 Oct 2024
Viewed by 3945
Abstract
Behind everyday websites, a hidden shadow world tracks the behavior of Internet users. Web tracking analyzes online activity based on collected data and delivers content tailored to users’ interests. It gathers vast amounts of information for various purposes, ranging from sensitive personal data to seemingly minor details such as IP addresses, devices, browsing histories, settings, and preferences. While Web tracking is largely a legitimate technology, the increase in illegal user tracking, data breaches, and the unlawful sale of data has become a growing concern. As a result, the demand for technologies that can detect and prevent Web trackers is greater than ever. This paper provides an overview of Web tracking technologies, relevant research, and website measurement tools designed to identify web-based tracking. It also explores technologies for preventing Web tracking and discusses potential directions for future research.
(This article belongs to the Special Issue Security and Privacy Issues in the Internet of Cloud)
30 pages, 585 KiB  
Article
Decoding Urban Intelligence: Clustering and Feature Importance in Smart Cities
by Enrico Barbierato and Alice Gatti
Future Internet 2024, 16(10), 362; https://doi.org/10.3390/fi16100362 - 5 Oct 2024
Viewed by 2755
Abstract
The rapid urbanization trend underscores the need for effective management of city resources and services, making the concept of smart cities increasingly important. This study leverages the IMD Smart City Index (SCI) dataset to analyze and rank smart cities worldwide. Our research has a dual objective: first, we aim to apply a set of unsupervised learning models to cluster cities based on their smartness indices; second, we aim to employ supervised learning models such as random forest, support vector machines (SVMs), and others to determine the importance of the various features that contribute to a city’s smartness. Our findings reveal that smart living was the most critical factor, with an importance of 0.259014, while smart mobility and smart environment also played significant roles, with importances of 0.170147 and 0.163159, respectively. While the clustering provides insights into the similarities and groupings among cities, the feature importance analysis elucidates the critical factors that drive these classifications. The integration of these two approaches demonstrates that understanding the similarities between smart cities is of limited utility without a clear comprehension of the importance of the underlying features. This holistic approach provides a comprehensive understanding of what makes a city ‘smart’ and offers a robust framework for policymakers to enhance urban living standards.
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)
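A compact sketch of this two-step analysis, unsupervised clustering followed by supervised feature importance, might look like the following; the placeholder data and feature names are assumptions, not the SCI dataset:

```python
# Cluster cities on smartness indices, then rank feature importance.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((118, 6))        # 118 cities x 6 indices (placeholder data)
features = ["living", "mobility", "environment", "people", "economy", "gov"]

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Fit a classifier on the cluster labels to see which indices drive them.
rf = RandomForestClassifier(random_state=0).fit(X, labels)
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```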
Figure 1. Percentage of urban population between 1950 and 2050.
Figure 2. Visual representation of the work’s structure.
Figure 3. k-means clustering for k = 5.
Figure 4. Dendrogram of smart city indices.
Figure 5. Gaussian mixture model (GMM) clustering of smart city indices.
Figure 6. Self-organizing map (SOM) clustering of smart city indices with convex hulls.
29 pages, 344 KiB  
Article
Decentralized Web3 Reshaping Internet Governance: Towards the Emergence of New Forms of Nation-Statehood?
by Igor Calzada
Future Internet 2024, 16(10), 361; https://doi.org/10.3390/fi16100361 - 4 Oct 2024
Viewed by 2381
Abstract
This article explores how decentralized Web3 is reshaping Internet governance by enabling the emergence of new forms of nation-statehood and redefining traditional concepts of state sovereignty. Based on fieldwork conducted in Silicon Valley since August 2022, it systematically addresses the following research question: how is decentralized Web3 reshaping Internet governance and influencing the rise of new nation-statehood paradigms? It compares three emerging paradigms around Web3: (i) Network States (Srinivasan), envisioning digital entities rooted in crypto-libertarian principles; (ii) Network Sovereignties (De Filippi), emphasizing communal governance aligned with digital commons; and (iii) Algorithmic Nations (Calzada), drawing on Arendtian thought and demonstrating how communities, such as indigenous and stateless groups as well as e-diasporas, can attain self-determination through data sovereignty. The article contributes a unique conceptual analysis of these paradigms, grounded in fieldwork action research in Silicon Valley, in response to evolving technologies and their potential to reshape Internet governance. It argues that decentralized Web3 provides a transformative vision for Internet governance but requires careful evaluation to ensure that it promotes inclusivity and equity, and it advocates for a hybrid approach that balances global and local dynamics, emphasizing the need for solidarity, digital justice, and an internationalist perspective in shaping future Internet governance protocols.
15 pages, 469 KiB  
Article
Employing Huber and TAP Losses to Improve Inter-SubNet in Speech Enhancement
by Jeih-Weih Hung, Pin-Chen Huang and Li-Yin Li
Future Internet 2024, 16(10), 360; https://doi.org/10.3390/fi16100360 - 4 Oct 2024
Viewed by 587
Abstract
In this study, improvements are made to Inter-SubNet, a state-of-the-art speech enhancement method. Inter-SubNet is a single-channel speech enhancement framework that enhances the sub-band spectral model by integrating global spectral information, such as cross-band relationships and patterns. Despite its success, one crucial aspect Inter-SubNet probably overlooks is the unequal perceptual weighting of different spectral regions by the human ear, since it employs MSE as its loss function. In addition, MSE loss raises a potential convergence concern during model learning due to gradient explosion. Hence, we propose further enhancing Inter-SubNet by either integrating a perceptual loss with the MSE loss or modifying the MSE loss directly in the learning process. Among the various types of perceptual loss, we adopt the temporal acoustic parameter (TAP) loss, which provides detailed estimates of low-level acoustic descriptors, thereby offering a comprehensive evaluation of speech signal distortion. In addition, we leverage the Huber loss, a combination of L1 and L2 (MSE) losses, to avoid the potential convergence issue in training Inter-SubNet. Evaluation on the VoiceBank-DEMAND database and task shows that Inter-SubNet with the modified loss functions improves speech enhancement performance. Specifically, replacing MSE loss with Huber loss results in increases of 0.057 and 0.38 in the WB-PESQ and SI-SDR metrics, respectively, while integrating TAP loss with MSE loss yields improvements of 0.115 and 0.196 in the WB-PESQ and CSIG metrics.
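The Huber loss that replaces MSE here is easy to state in code; delta is an assumed hyperparameter, and PyTorch’s built-in torch.nn.HuberLoss implements the same piecewise form:

```python
# Huber loss: quadratic (L2) near zero error, linear (L1) beyond delta,
# which bounds the gradient and avoids MSE's gradient-explosion concern.
import torch

def huber(pred: torch.Tensor, target: torch.Tensor, delta: float = 1.0):
    err = pred - target
    quad = 0.5 * err ** 2                      # L2 region, |err| <= delta
    lin = delta * (err.abs() - 0.5 * delta)    # L1 region, |err| > delta
    return torch.where(err.abs() <= delta, quad, lin).mean()

# Sanity check against the built-in implementation.
p, t = torch.randn(8), torch.randn(8)
assert torch.allclose(huber(p, t), torch.nn.HuberLoss(delta=1.0)(p, t))
```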
Figure 1. The flowchart of Inter-SubNet (using the MSE of the cIRM as the loss).
Figure 2. The flowchart of the SubInter module in Inter-SubNet.
22 pages, 2856 KiB  
Article
An Intrusion Detection System for 5G SDN Network Utilizing Binarized Deep Spiking Capsule Fire Hawk Neural Networks and Blockchain Technology
by Nanavath Kiran Singh Nayak and Budhaditya Bhattacharyya
Future Internet 2024, 16(10), 359; https://doi.org/10.3390/fi16100359 - 3 Oct 2024
Viewed by 694
Abstract
The advent of 5G heralds unprecedented connectivity with high throughput and low latency for network users. Software-defined networking (SDN) plays a significant role in fulfilling these requirements, but it poses substantial security challenges due to its inherently centralized management strategy. Moreover, SDN confronts limitations in handling malicious traffic under 5G’s extensive data flow. To deal with these issues, this paper presents a novel intrusion detection system (IDS) designed for 5G SDN networks, leveraging the advanced capabilities of binarized deep spiking capsule fire hawk neural networks (BSHNN) and blockchain technology, operating across multiple layers. Initially, a lightweight encryption algorithm (LEA) is used at the data acquisition layer to authenticate mobile users via trusted third parties. This is followed by optimal switch selection using the mud-ring algorithm in the switch layer, while the data flow rules are secured by employing blockchain technology incorporating searchable encryption algorithms within the blockchain plane. The domain controller layer utilizes the BSHNN for real-time data packet classification, while the smart controller layer uses enhanced adapting hidden attribute-weighted Naive Bayes (EAWNB) to identify suspicious packets during data transmission. The experimental results show that the proposed technique outperforms state-of-the-art approaches in terms of accuracy (98.02%), precision (96.40%), detection rate (96.41%), authentication time (16.2 s), throughput, delay, and packet loss ratio.
Figure 1. Flow diagram of the proposed work.
Figure 2. Flow chart of the mud-ring algorithm.
Figure 3. Workflow of the BSHNN approach.
Figure 4. Network simulation environment.
Figure 5. Simulation results for the BSHNN method: (a) detection rate; (b) authentication time.
Figure 6. BSHNN method simulation results: (a) delay; (b) throughput.
Figure 7. BSHNN approach simulation results: (a) packet loss ratio; (b) accuracy.
Figure 8. BSHNN method simulation results: (a) recall; (b) precision.
42 pages, 1312 KiB  
Article
Mobility–Multihoming Duality
by Ryo Yanagida and Saleem Noel Bhatti
Future Internet 2024, 16(10), 358; https://doi.org/10.3390/fi16100358 - 1 Oct 2024
Viewed by 642
Abstract
In modern Internet-based communication, especially mobile systems, a mobile node (MN) will commonly have more than one possibility for Internet Protocol (IP) connectivity. For example, an MN such as a smartphone may be associated with an IEEE 802.11 network at a site while also connected to a cellular base station for 5G. In such a scenario, the smartphone might only be able to utilise the IEEE 802.11 network, not making use of the cellular connectivity simultaneously. Currently, IP does not allow applications and devices to easily utilise multiple IP connectivity opportunities (multihoming for the MN) without implementing special mechanisms to manage them. We demonstrate how the use of the Identifier Locator Network Protocol (ILNP), realised as an extension to IPv6, can enable mobility with multihoming using a duality mechanism that treats mobility and multihoming as the same logical concept. We present a network layer solution that does not require any modification to transport protocols, can be implemented using existing application programming interfaces (APIs), and can work for any application. We have evaluated our approach using an implementation in Linux and a testbed. The testbed consisted of commercial equipment to demonstrate that our approach can be used over existing network infrastructure, requiring only normal unicast routing for IPv6.
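The Deficit Round-Robin (DRR) load sharing mentioned in Figure 5 below can be sketched generically as follows; the quanta and packet sizes are invented, and this is not the authors’ kernel implementation:

```python
# Generic Deficit Round-Robin across active interfaces: each interface is
# credited a quantum of bytes per round and sends packets while it has credit.
from collections import deque

quantum = 1500                        # bytes credited per round per interface
queues = {"aa": deque([1500, 500]),   # queued packet sizes per interface
          "bb": deque([1000, 1000]),
          "cc": deque([700])}
deficit = {ifname: 0 for ifname in queues}

while any(queues.values()):
    for ifname, q in queues.items():
        if not q:
            continue
        deficit[ifname] += quantum
        while q and q[0] <= deficit[ifname]:
            deficit[ifname] -= q.popleft()   # "transmit" the head packet
            print(f"sent packet via {ifname}")
```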
Figure 1. Comparison of the IPv6 unicast address format with the ILNP unicast addressing format. The L64 value has the same syntax and semantics as the IPv6 routing prefix. The NID value has the same syntax as the IPv6 Interface Identifier, but different semantics. The NID–L64 pairing is an Identifier Locator Vector (IL-V), which can be used in the same way as an IPv6 address.
Figure 2. An example of a Locator Update (LU) handshake for a Mobile Node (MN). The MN discovers a new L64 value via an IPv6 Router Advertisement (RA) message. It updates its local ILNP Communication Cache (ILCC) and sends an LU message to the Correspondent Node (CN). The CN updates its own ILCC and sends an LU-ack (acknowledgement) to the MN.
Figure 3. The ILNP IPv6 extension header as in RFC6744.
Figure 4. The ILNP LU message structure based on the message format from RFC6743.
Figure 5. A flowchart describing UDP/TCP packet processing with the ILNP mobility–multihoming duality mechanism with Deficit Round-Robin (DRR) load sharing. Overall, the existing IPv6 packet-processing code path has been re-used and modified. Grey boxes indicate unmodified processes with respect to IPv6 packet processing; orange boxes indicate modifications of existing IPv6 packet-processing logic; green boxes are the additional logic and processing for ILNP.
Figure 6. A scenario diagram describing host movement for the mobility–multihoming duality evaluation. There are four IPv6 networks, aa–dd, connected via four routers, R1–R4. Network dd is used to connect R1, R2, and R3 to R4, and it represents connectivity over the Internet between the MN and the CN. The arrow labelled 1 is the first movement the MN carries out: moving from network aa on R1 to network cc on R3. The arrow labelled 2 shows the second movement, where the MN returns from network cc back to network aa.
Figure 7. A timeline diagram showing an example of a mobility–multihoming duality scenario. The MN has three interfaces, and the CN has one interface. The MN starts communication only on network aa, and the MN and the CN begin a communication session using a single interface on both sides. The MN activates interface 2, receives L64 (IPv6 prefix) bb, sends an LU, and sets the new L64 as ACTIVE. The CN responds with an LU-ack, acknowledging the new set of L64 values that the MN now has. The MN continues to activate another interface, which is also signalled to the CN. The MN then lists the first interface in the net.ilnp6.disabled_interface sysctl list, triggering another LU and state changes in the ILCC. After the CN acknowledges the removal of the first interface, the MN removes the interface.
Figure 8. The procedure for a data collection run. Both the CN and the MN initially have only a single interface (i/f) enabled. iperf2 is started with bi-directional data transfer. Additional interfaces are enabled at the MN until all interfaces are enabled; then interfaces are disabled at the MN until only a single interface remains enabled. This is repeated so that all additional interfaces have been enabled/disabled twice.
Figure 9. Packet delivery statistics of TCP and UDP over ILNP. The y axis is in the range 0.00–0.01, i.e., 0.00–1.00%; the box around the median value is invisible, as the results were consistent across runs and the rate remained near zero. In all cases, both misordering and loss were very low and very consistent across multiple runs. The different transmission characteristics and behaviours of TCP and UDP mean these metrics are not directly comparable. (a) TCP misordering ratio based on sequence numbers (data packets) and acknowledgement numbers received; negligible misordering was observed. (b) TCP duplicate ratio based on sequence numbers (data packets) and acknowledgement numbers sent and received; no significant numbers of duplicate packets were observed. (c) UDP packet statistics observed in mobility–multihoming duality scenarios with iperf2 UDP over ILNP; both the misordering and loss ratios remained low or nil, with little to no variation across runs.
Figure 10. Throughput for TCP and UDP over ILNP; due to the different protocols and their characteristics, these are not directly comparable to each other. (a) Throughput in the mobility–multihoming duality scenarios with iperf2 TCP flows over ILNP: across all delay scenarios, throughput remained near the 10 Mbps target with little variation (y axis range 9.0–11.0 Mbps; the median remained near 10.2 Mbps). (b) Throughput in mobility–multihoming duality scenarios with iperf2 UDP flows over ILNP: throughput remained consistent at around the 10 Mbps target with very few exceptions (y axis range 9.0–11.0 Mbps; the median remained near 10.1 Mbps).
Figure 11. Throughput and sequence numbers in a typical mobility–multihoming duality iperf2 TCP evaluation, received at the CN. In each column, the top graph is throughput (faceted top to bottom as network aa, bb, cc, and aggregate); aggregate throughput was consistent, with the expected throughput observed at the respective source/destination IL-V on each network. The vertical dashed line shows the Locator Update (LU) message event. The lower graph is the TCP sequence number progression, which increased consistently, indicating a consistent flow and delivery of packets.
Figure 12. Throughput and sequence numbers in a typical mobility–multihoming duality iperf2 TCP evaluation, received at the MN (same layout and behaviour as Figure 11).
Figure 13. Throughput and sequence numbers in a typical mobility–multihoming duality iperf2 UDP evaluation, received at the CN (same layout as Figure 11, with the iperf2 sequence number progression in the lower graph).
Figure 14. Throughput and sequence numbers in a typical mobility–multihoming duality iperf2 UDP evaluation, received at the MN (same layout as Figure 13).
Figure 15. Box plot showing an MP-TCP flow for 20 runs with no added delay on the path. While the individual interfaces may exhibit ‘bursty’ behaviour due to the way the multipath congestion control algorithm distributes traffic, it satisfies the target load requirement of 10 Mbps.
Figure 16. Typical MP-TCP behaviour on the same testbed as for the ILNP evaluation. The distribution of the throughput is uneven, and changes to throughput on the individual interfaces are ‘bursty’. The vertical line shows the protocol-level signalling (the MP-TCP-specific multipath control plane protocol) to add or remove connectivity received at the respective IPv6 addresses. (a) Throughput facet plot of the MP-TCP flow received at the MN: the top three plots show the throughput received at the addresses of the respective three interfaces at the MN, and the bottom plot shows the aggregate throughput. (b) Throughput facet plot of the MP-TCP flow received at the CN: the top three plots show the throughput received from the addresses of the respective three interfaces at the MN, and the bottom plot shows the aggregate throughput.
25 pages, 2369 KiB  
Article
A Secure Key Exchange and Authentication Scheme for Securing Communications in the Internet of Things Environment
by Ali Peivandizadeh, Haitham Y. Adarbah, Behzad Molavi, Amirhossein Mohajerzadeh and Ali H. Al-Badi
Future Internet 2024, 16(10), 357; https://doi.org/10.3390/fi16100357 - 30 Sep 2024
Viewed by 982
Abstract
In today’s digital age, the Internet of Things (IoT) is growing rapidly and, thanks to its wide range of services and network coverage, occupies a special place in the current technology era. Its applications include electronic health, smart residential complexes, and the dense web of connections that makes inner-city infrastructure smart. A critical issue in such networks is the sheer number of elements they comprise and, consequently, the heavy data exchange at the network level. With the increasing deployment of the IoT, a wide range of challenges arises, especially in establishing network security: ensuring the confidentiality of the data being exchanged, maintaining the privacy of network nodes, protecting the identity of network nodes, and implementing the security policies required to deal with a wide range of network cyber threats. A fundamental element in the security of IoT networks is the authentication process, wherein nodes validate each other’s identities to ensure the establishment of secure communication channels. In this study, we propose a security protocol focused on reinforcing these security characteristics and safeguarding IoT nodes. By utilizing the security features provided by Elliptic Curve Cryptography (ECC) and employing the Elliptic Curve Diffie–Hellman (ECDH) key-exchange mechanism, we designed a protocol for authenticating nodes and establishing encryption keys for every communication session within the Internet of Things. To substantiate the effectiveness and resilience of the proposed protocol against attacks and network vulnerabilities, we conducted both formal and informal evaluations. Furthermore, our results demonstrate that the protocol has low computational and communication demands, which makes it especially well suited for IoT nodes operating under resource constraints.
(This article belongs to the Section Cybersecurity)
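The ECDH core of such a scheme can be sketched with the Python cryptography package; this shows only the bare key agreement and session-key derivation, not the paper’s full registration and authentication protocol:

```python
# Ephemeral ECDH on P-256, then HKDF to derive a symmetric session key.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party (e.g., IoT node and gateway) generates an EC key pair.
node_priv = ec.generate_private_key(ec.SECP256R1())
gw_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key; the
# resulting shared secrets are identical.
shared_node = node_priv.exchange(ec.ECDH(), gw_priv.public_key())
shared_gw = gw_priv.exchange(ec.ECDH(), node_priv.public_key())
assert shared_node == shared_gw

# Never use the raw secret directly; derive a session key from it.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"iot-session").derive(shared_node)
```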
Figure 1. Internet of Things communication.
Figure 2. Registration phase.
Figure 3. Process of the authentication and key-agreement phase.
Figure 4. AVISPA results: (A) ATSE; (B) OFMC.
35 pages, 6382 KiB  
Article
Blockchain-Driven Generalization of Policy Management for Multiproduct Insurance Companies
by Abraham Romero and Roberto Hernandez
Future Internet 2024, 16(10), 356; https://doi.org/10.3390/fi16100356 - 30 Sep 2024
Viewed by 1786
Abstract
This article presents a Blockchain-based solution for the management of multipolicies in insurance companies, introducing a standardized policy model to streamline operations and enhance collaboration between entities. The model ensures uniform policy management, providing the scalability and flexibility to adapt to new market demands. The solution leverages Merkle trees for secure data management, with each policy represented by an independent Merkle tree, enabling updates and additions without altering existing policies. The architecture, implemented on a private Ethereum network using Hyperledger Besu and Tessera, ensures secure and transparent transactions, robust dispute resolution, and fraud prevention mechanisms. The validation phase demonstrated the model’s efficiency in reducing data redundancy and ensuring the consistency and integrity of policy information. Additionally, the system’s technical management has been simplified, operational redundancies have been eliminated, and privacy is enhanced.
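A minimal sketch of the per-policy Merkle tree idea follows; the policy fields and the duplicate-last-node padding rule are assumptions for illustration:

```python
# One independent Merkle tree per policy: hash the fields as leaves and
# fold pairs of hashes until a single root remains; the root can then be
# anchored on-chain while the field data stays off-chain.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical policy fields for one multi-product policy.
policy_fields = [b"holder:0xabc", b"coverage:auto", b"premium:420", b"term:12m"]
print(merkle_root(policy_fields).hex())
```

Updating one field changes only that policy’s root, so other policies and their on-chain anchors are left untouched.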
Figure 1. Client use cases.
Figure 2. Architecture of the current comprehensive management system model.
Figure 3. Policy modeling.
Figure 4. Leaf node as a policy.
Figure 5. Tree root as a policy.
Figure 6. Blockchain architecture design.
Figure 7. Merkle example.
Figure 8. Blockchain example.
Figure 9. Technology stack.
Figure 10. ibftConfigFile.
Figure 11. Besu project structure definition.
Figure 12. Tessera operation.
Figure 13. Tessera node configuration.
Figure 14. Anchoring protocol.
Figure 15. UML class diagram modeling a multi-product insurance policy.
Figure 16. UML smart-contract representation.
Figure 17. Smart-contract operations sequence diagram.
Figure 18. DAPP design.
Figure A1. DAPP login.
Figure A2. DAPP initial view.
Figure A3. Contract Policy operation.
Figure A4. Contract Policy operation, successive step.
Figure A5. Blockchain transaction log.
Figure A6. View policy action.
23 pages, 2906 KiB  
Article
Multi-User Optimal Load Scheduling of Different Objectives Combined with Multi-Criteria Decision Making for Smart Grid
by Yaarob Al-Nidawi, Haider Tarish Haider, Dhiaa Halboot Muhsen and Ghadeer Ghazi Shayea
Future Internet 2024, 16(10), 355; https://doi.org/10.3390/fi16100355 - 29 Sep 2024
Viewed by 3466
Abstract
Load balancing between the required power demand and the available generation capacity is the main task of demand response in a smart grid, and matching the objectives of users and utilities is the main gap that should be addressed in this context. In this paper, multi-user optimal load scheduling is proposed to benefit both utility companies and users. Different objectives are considered to form a multi-objective artificial hummingbird algorithm (MAHA); the cost of energy consumption, the peak of load, and user inconvenience are the main objectives considered in this work. A hybrid multi-criteria decision-making approach is used to select the dominant solutions: a method based on the removal effects of criteria (MEREC) derives appropriate weights for the various criteria, and the Vlse Kriterijumska Optimizacija Kompromisno Resenje (VIKOR) method then finds the best load-scheduling solution from the set of Pareto-front solutions produced by MAHA. Multiple pricing schemes are applied in this work, namely the time of use (ToU) and adaptive consumption level pricing scheme (ACLPS), to test the proposed system under different pricing rates. Furthermore, non-cooperative and cooperative user working schemes are considered to avoid creating a new peak load period when shifting user load from peak to off-peak times in pursuit of minimum energy cost. The results demonstrate 81% cost savings for the proposed method in the cooperative mode under ACLPS and 40% savings under ToU; peak savings for the same mode of operation are about 68% and 64% for ACLPS and ToU, respectively. The findings of this work have been validated against other related contributions to examine the significance of the proposed technique. The analyses conclude that the presented approach realizes remarkable savings in peak power and energy cost while maintaining an acceptable level of customer inconvenience.
(This article belongs to the Section Smart System Infrastructure and Applications)
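The VIKOR compromise-ranking step can be sketched as follows; the decision matrix and weights are invented for illustration (in the paper, the weights come from MEREC):

```python
# VIKOR over a set of Pareto-front alternatives (rows) scored on benefit
# criteria (columns): compute group utility S, individual regret R, and
# the compromise index Q; the lowest Q is the best compromise solution.
import numpy as np

def vikor(scores: np.ndarray, weights: np.ndarray, v: float = 0.5):
    best, worst = scores.max(axis=0), scores.min(axis=0)
    dist = weights * (best - scores) / (best - worst)  # weighted distance
    S, R = dist.sum(axis=1), dist.max(axis=1)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return np.argsort(Q)                               # best first

# Hypothetical alternatives scored on (cost saving, peak saving, comfort).
pareto = np.array([[0.81, 0.68, 0.70],
                   [0.40, 0.64, 0.90],
                   [0.60, 0.50, 0.95]])
print(vikor(pareto, np.array([0.4, 0.35, 0.25])))
```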
Figure 1. AHA algorithm flowchart.
Figure 2. Power consumption without and with scheduling under ToU and ACLPS for the first user.
Figure 3. Power consumption without and with scheduling under ToU and ACLPS for the second user.
Figure 4. Power consumption without and with scheduling under ToU and ACLPS for the third user.
Figure 5. Power consumption without and with scheduling under ToU and ACLPS for the fourth user.
Figure 6. Power consumption without and with scheduling under ToU and ACLPS for the fifth user.
Figure 7. Power consumption without and with scheduling under ToU and ACLPS for all users.