Search Results (177)

Search Parameters:
Keywords = big data virtualization

26 pages, 8281 KiB  
Review
Research Progress of Automation Ergonomic Risk Assessment in Building Construction: Visual Analysis and Review
by Ruize Qin, Peng Cui and Jaleel Muhsin
Buildings 2024, 14(12), 3789; https://doi.org/10.3390/buildings14123789 - 27 Nov 2024
Viewed by 353
Abstract
In recent years, the increasing demand for worker safety and workflow efficiency in the construction industry has drawn considerable attention to the application of automated ergonomic technologies. To gain a comprehensive understanding of the current research landscape in this field, this study conducts an in-depth visual analysis of the literature on automated ergonomic risk assessment published between 2001 and 2024 in the Web of Science database using CiteSpace and VOSviewer. The analysis systematically reviews key research themes, collaboration networks, keywords, and citation patterns. Building on this, an SWOT analysis is employed to evaluate the core technologies currently widely adopted in the construction sector. By focusing on the integrated application of wearable sensors, artificial intelligence (AI), big data analytics, virtual reality (VR), and computer vision, this research highlights the significant advantages of these technologies in enhancing worker safety and optimizing construction processes. It also delves into potential challenges related to the complexity of these technologies, high implementation costs, and concerns regarding data privacy and worker health. While these technologies hold immense potential to transform the construction industry, future efforts will need to address these challenges through technological optimization and policy support to ensure broader adoption. Full article
Figure 1: Data collection process.
Figure 2: Annual publication trends of articles from 2001 to 2024 (October).
Figure 3: Top 10 subject categories of the Web of Science for automated ergonomic risk from 2001 to 2024.
Figure 4: Analysis of published journals.
Figure 5: Analysis of published journals (2001–2024).
Figure 6: Keyword clustering diagram for the research of automated ergonomic risk evaluation.
Figure 7: Visual representation of the rule compliance module outcomes [65].
Figure 8: Principles of wearable sensor technology [68].
Figure 9: Computer vision-based motion capture [74].
Figure 10: Wearable sensor model diagram [75].
26 pages, 5734 KiB  
Article
Big Data Analysis of ‘VTuber’ Perceptions in South Korea: Insights for the Virtual YouTuber Industry
by Hyemin Kim and Jungho Suh
Journal. Media 2024, 5(4), 1723-1748; https://doi.org/10.3390/journalmedia5040105 - 15 Nov 2024
Viewed by 752
Abstract
The global VTuber market is experiencing rapid growth, with VTubers extending beyond mere content creators to be utilized in various fields such as social interaction, public relations, and health. VTubers have the potential to expand the existing content market and contribute to increasing economic and public value. This study aims to investigate the perception of VTubers in South Korea and to provide insights that can contribute to the global activation of the VTuber entertainment industry. For this purpose, unstructured data on VTubers from the past three years, during which interest in VTubers has significantly grown in South Korea, was collected. A total of 57,891 samples were gathered from Naver, Daum, and Google, of which 50 highly relevant data points between VTubers and users were selected for analysis. First, key terms such as ‘Broadcast’, ‘YouTube’, ‘Live’, ‘Game’, ‘Youtuber’, ‘Japan’, ‘Character’, ‘Video’, ‘Sing’, ‘Virtual’, ‘Woowakgood’, ‘Fan’, ‘Idol’, ‘Korea’, ‘Twitch’, ‘IsegyeIdol’, ‘Communication’, ‘Worldview’, ‘VTuberIndustry’, ‘Contents’, ‘AfricaTV’, ‘Nijisanji’, and ‘Streamer’ were extracted. Second, CONCOR analysis revealed four clusters: ‘Famous VTubers’, ‘Features of VTubers’, ‘VTuber Industry’, and ‘VTuber Platforms’. Based on these findings, the study offers various academic and practical implications regarding VTubers in South Korea and explores the potential for global growth in the VTuber industry. Full article
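The key-term extraction step described above — pulling high-frequency terms from unstructured text before clustering — can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: the tokenizer, stopword list, and sample documents are assumptions, and a real study of Korean portals would need morphological analysis before counting.

```python
import re
from collections import Counter

def key_terms(docs, stopwords=frozenset({"the", "a", "an", "and", "of"}), top_n=20):
    """Rank the most frequent non-stopword tokens across a corpus."""
    counts = Counter()
    for doc in docs:
        # Lowercase and split on simple word boundaries (illustrative tokenizer).
        for tok in re.findall(r"[a-z']+", doc.lower()):
            if tok not in stopwords:
                counts[tok] += 1
    return [term for term, _ in counts.most_common(top_n)]
```

The ranked terms would then feed a co-occurrence matrix for a CONCOR-style clustering step.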
Figure 1: Trend graph of ‘VTuber’ mentions on Google Trends.
Figure 2: Time series analysis of data collection for ‘VTubers’.
Figure 3: Centrality analysis and ego network density analysis for ‘VTuber’.
Figure 4: CONCOR analysis network of 50 nodes for ‘VTuber’.
Figure 5: The four groups and clusters among the groups.
31 pages, 4535 KiB  
Article
Prediction of Attention Groups and Big Five Personality Traits from Gaze Features Collected from an Outlier Search Game
by Rachid Rhyad Saboundji, Kinga Bettina Faragó and Violetta Firyaridi
J. Imaging 2024, 10(10), 255; https://doi.org/10.3390/jimaging10100255 - 16 Oct 2024
Viewed by 750
Abstract
This study explores the intersection of personality, attention and task performance in traditional 2D and immersive virtual reality (VR) environments. A visual search task was developed that required participants to find anomalous images embedded in normal background images in 3D space. Experiments were conducted with 30 subjects who performed the task in 2D and VR environments while their eye movements were tracked. Following an exploratory correlation analysis, we applied machine learning techniques to investigate the predictive power of gaze features on human data derived from different data collection methods. Our proposed methodology consists of a pipeline of steps for extracting fixation and saccade features from raw gaze data and training machine learning models to classify the Big Five personality traits and attention-related processing speed/accuracy levels computed from the Group Bourdon test. The models achieved above-chance predictive performance in both 2D and VR settings despite visually complex 3D stimuli. We also explored further relationships between task performance, personality traits and attention characteristics. Full article
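A minimal sketch of how fixation and saccade segments can be derived from raw gaze samples with a velocity threshold (the common I-VT approach), followed by simple aggregate features. The sampling rate, threshold, and coordinate units here are illustrative assumptions, not the paper's actual settings.

```python
def segment_gaze(samples, hz=120.0, vel_thresh=30.0):
    """Label each inter-sample interval as 'fixation' or 'saccade'
    using a simple velocity threshold (I-VT). Samples are (x, y) in degrees."""
    dt = 1.0 / hz
    labels = []
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt  # deg/s
        labels.append("fixation" if velocity < vel_thresh else "saccade")
    return labels

def fixation_features(labels, hz=120.0):
    """Aggregate labels into example features: fixation count and mean duration (s)."""
    runs, current = [], 0
    for lab in labels + ["saccade"]:  # sentinel closes a trailing fixation run
        if lab == "fixation":
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if not runs:
        return {"n_fixations": 0, "mean_fix_dur": 0.0}
    return {"n_fixations": len(runs), "mean_fix_dur": sum(runs) / len(runs) / hz}
```

Features like these would then enter the feature-selection and nested cross-validation stages the abstract describes.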
Figure 1: Schematic representation of our proposed classification pipeline. The process starts with the collection of personal data and the recording of game logs. We then extract gaze features from both 2D and VR data sets. In the VR/2D sessions' box (lower left corner), T refers to the total number of features. As the next step of the pipeline, we apply feature selection within nested cross-validation to classify personality traits and attention groups.
Figure 2: Example screenshot from the NipgBoard interface. On the left, various display settings and dimension reduction options can be selected. Correctly selected items have a green overlay and incorrect selections a red overlay, as shown in the upper left corner. In the middle, sample images from the MVTec Anomaly Detection dataset can be seen in the 3D projector panel after the PCA application. These grouped image sets represent the bottle, hazelnut, transistor, leather and tile categories. On the right side, the enlarged version of the currently selected image is presented, and below it the timer, outlier counter, and F1 score are shown as text.
Figure 3: Screenshot from the HTC Vive Pro Eye headset's VR view. In the upper left corner, the displayed gloves represent the player's hand in the virtual environment; the small squares in the field of view are samples from the MVTec Anomaly Detection dataset. At the top, a green-overlayed cluster of images (bottles) can be seen, as both outliers have been found there. In the bottom right corner, a red-overlayed image (tile category) is an incorrect selection. From left to right, the displayed numbers in the middle are the outlier counter, the timer, and the current F1 score. The small transparent circle in the middle represents the target of the participant's gaze.
Figure 4: Illustrative screenshots from the outlier search game in the virtual reality environment. The top row shows the OHF mode, and the bottom row presents the THF technique. A blue grid appears due to proximity, alerting the person wearing the VR headset where the edges of the safe play area are in the real world. The screenshots show changes in ambient light conditions and instances where participants observe objects very close and far away.
Figure 5: Flowchart of the experimental design with the three main phases: introduction, training, and data collection. Each stage contains the list of official tests and questionnaires, the time schedule, and the order of tasks for the conductors and participants.
Figure 6: Score distributions for Big Five personality traits calculated from the BFI-2 test, measured on a 5-point Likert scale. The values are scores between 0 and 100 for each trait.
Figure 7: Calculation of performance metrics based on Group Bourdon test scores. On the left, the score distribution can be seen for the processing speed (groups/minute), which shows how many groups of points the participant observed during the given time. The right side shows the processing accuracy rate, which indicates the proportion of correct selections, taking into account both missed groups and incorrect selections.
27 pages, 6340 KiB  
Article
Design and Evaluation of Real-Time Data Storage and Signal Processing in a Long-Range Distributed Acoustic Sensing (DAS) Using Cloud-Based Services
by Abdusomad Nur and Yonas Muanenda
Sensors 2024, 24(18), 5948; https://doi.org/10.3390/s24185948 - 13 Sep 2024
Viewed by 891
Abstract
In cloud-based Distributed Acoustic Sensing (DAS) sensor data management, we are confronted with two primary challenges. First, the development of efficient storage mechanisms capable of handling the enormous volume of data generated by these sensors poses a challenge. To solve this issue, we design and implement a pipeline system that efficiently sends the big data to DynamoDB, fully exploiting the low latency of the DynamoDB data storage system, for a benchmark DAS scheme performing continuous monitoring over a 100 km range at a meter-scale spatial resolution. We employ the DynamoDB functionality of Amazon Web Services (AWS), which allows highly expandable storage capacity with access latency of a few tens of milliseconds. The different stages of DAS data handling are performed in a pipeline, and the scheme is optimized for high overall throughput with reduced latency suitable for concurrent, real-time event extraction as well as minimal storage of raw and intermediate data. In addition, the scalability of the DynamoDB-based data storage scheme is evaluated for linear and nonlinear variations of the number of batches of access and a wide range of data sample sizes corresponding to sensing ranges of 1–110 km. The results show latencies of 40 ms per batch of access with low standard deviations of a few milliseconds, and the latency per sample decreases with increasing sample size, paving the way toward the development of scalable, cloud-based data storage services integrating additional post-processing for more precise feature extraction. The technique greatly simplifies DAS data handling in key application areas requiring continuous, large-scale measurement schemes. In addition, the processing of raw traces in a long-distance DAS for real-time monitoring requires the careful design of computational resources to guarantee the requisite dynamic performance. 
Second, we focus on the design of a system for the performance evaluation of cloud computing systems for diverse computations on DAS data. This system aims to unveil valuable insights into the performance metrics and operational efficiency of computations on the data in the cloud, providing a deeper understanding of the system's performance, identifying potential bottlenecks, and suggesting areas for improvement. To achieve this, we employ the CloudSim framework. The analysis reveals that the processing time decreases significantly with more capable virtual machines (VMs), influenced by the Processing Elements (PEs) and Million Instructions Per Second (MIPS). The results also show that, although a larger number of computations is required as the fiber length increases, with a subsequent increase in processing time, the overall speed of computation is still suitable for continuous real-time monitoring. We also see that VMs with lower performance in terms of processing speed and number of CPUs have more inconsistent processing times than those with higher performance, while not incurring significantly higher prices. Additionally, the impact of VM parameters on computation time is explored, highlighting the importance of resource optimization in DAS system design for efficient performance. The study also observes a notable trend in processing time, showing a significant decrease for every additional 50,000 columns processed as the length of the fiber increases. This finding underscores the efficiency gains achieved with larger computational loads, indicating improved system performance and capacity utilization as the DAS system processes more extensive datasets. Full article
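The batched-write stage of such a pipeline can be sketched as below. The batch size of 25 matches DynamoDB's BatchWriteItem per-request limit; the writer callable is injected so the sketch runs without AWS credentials, and all names are illustrative rather than the paper's implementation.

```python
import time

def write_in_batches(items, write_batch, batch_size=25):
    """Send items to storage in fixed-size batches and record per-batch latency.

    `write_batch` is any callable taking a list of items; with boto3 it could
    wrap puts inside table.batch_writer() (hypothetical wiring, not shown).
    """
    latencies = []
    for i in range(0, len(items), batch_size):
        start = time.perf_counter()
        write_batch(items[i:i + batch_size])          # one BatchWriteItem-sized request
        latencies.append(time.perf_counter() - start)  # latency per batch of access
    latency_per_sample = sum(latencies) / max(len(items), 1)
    return latencies, latency_per_sample
```

Recording both per-batch and per-sample latency mirrors the two metrics the abstract reports for 1–110 km sensing ranges.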
Figure 1: Experimental setup of a distributed vibration sensor using a ϕ-OTDR scheme in direct detection [8].
Figure 2: Intrusion detection using a ϕ-OTDR sensor [16].
Figure 3: Block diagram of the developed system.
Figure 4: Schematic representation of the connection of the DAS sensor system to DynamoDB.
Figure 5: Steps to use CloudSim.
Figure 6: Block diagram of simulation flow for the basic scenario.
Figure 7: Schematic representation of the implementation of processing of DAS data in CloudSim.
Figure 8: Latency per batch of DynamoDB access for a sample number of batches used to write trace samples.
Figure 9: Latency per batch of DynamoDB access used to write trace samples, with the number of batches scaling as 2^n for each index n.
Figure 10: (a) Total latency of DynamoDB access; (b) latency per sample for varying trace sample sizes in the range of 5000–550,000 samples, corresponding to 1–110 km sensing distances.
Figure 11: Analysis of processing time and cloudlet utilization for differential operations in the DAS sensing system for (a) a single cycle, and (b) a series of 10 consecutive cycles, of measurement in a 110 km long optical sensing fiber. Note that the number of cloudlets increases for each cloudlet ID on the horizontal axis.
Figure 12: Processing time versus cloudlets for the FFT operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber. Note that the number of cloudlets increases for each cloudlet ID on the horizontal axis.
Figure 13: Mean processing time for each virtual machine in differential operations for (a) a single cycle, and (b) a series of 10 consecutive cycles, of measurement in a 110 km optical fiber.
Figure 14: Mean processing time for each VM for the FFT operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
Figure 15: Standard deviation and variance of processing time for virtual machines in differential operations for (a) a single cycle, and (b) a sequence of 10 cycles, of measurement in a 110 km optical fiber.
Figure 16: Standard deviation and variance for VMs based on processing time for the differential operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
Figure 17: Processing time for incremental data (each additional 50,000 rows) during (a) the differential operation, and (b) the Fast Fourier Transform (FFT) operation, in a 110 km long optical fiber.
Figure 18: Processing time and cloudlet utilization for differential operations when (a) varying only the Million Instructions Per Second (MIPS) of the virtual machines (VMs), and (b) varying only the Processing Elements (PEs) of the VMs, during a single cycle in a 110 km long optical fiber.
Figure 19: Processing time versus cloudlets for the differential operation when (a) varying only the MIPS of the VMs, and (b) varying only the PEs of the VMs, for 10 cycles of measurement in a 110 km fiber.
Figure 20: Processing time versus cost for (a) the differential operation, and (b) the FFT operation, for 10 cycles of measurement in a 110 km fiber.
Figure 21: Cost of processing versus cloudlets for the differential operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
Figure 22: Cost of processing versus cloudlets for the FFT operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
15 pages, 5431 KiB  
Article
A Semi-Supervised Approach for Partial Discharge Recognition Combining Graph Convolutional Network and Virtual Adversarial Training
by Yi Zhang, Yang Yu, Yingying Zhang, Zehuan Liu and Mingjia Zhang
Energies 2024, 17(18), 4574; https://doi.org/10.3390/en17184574 - 12 Sep 2024
Viewed by 525
Abstract
With the digital transformation of the grid, partial discharge (PD) recognition using deep learning (DL) and big data has become essential for intelligent transformer upgrades. However, labeling on-site PD data poses challenges, even necessitating the removal of covers for internal examination, which makes it difficult to train DL models. To reduce the reliance of DL models on labeled PD data, this study proposes a semi-supervised approach for PD fault recognition by combining the graph convolutional network (GCN) and virtual adversarial training (VAT). The approach introduces a novel PD graph signal to effectively utilize phase-resolved partial discharge (PRPD) information by integrating numerical data and region correlations of PRPD. Then, GCN autonomously extracts features from PD graph signals and identifies fault types, while VAT learns from unlabeled PD samples and improves the robustness during training. The approach is validated using test and on-site data. The results show that the approach significantly reduces the demand for labeled samples and that its PD recognition rates have increased by 6.14% to 14.72% compared with traditional approaches, which helps to reduce the time and labor costs of manually labeling on-site PD faults. Full article
(This article belongs to the Section F6: High Voltage)
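The graph-convolution step at the core of such an approach can be sketched with NumPy: one GCN layer propagates node features over the symmetrically normalized adjacency matrix with self-loops. This is a generic sketch of the standard GCN propagation rule, not the paper's exact network; the adjacency, feature, and weight shapes are illustrative.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # D^-1/2
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)                      # ReLU
```

In the semi-supervised setting the abstract describes, VAT would add a loss term that penalizes output changes under small adversarial perturbations of unlabeled inputs; that training loop is not shown here.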
Figure 1: The granularity window scan and nodes.
Figure 2: The topological connections among nodes.
Figure 3: Diagram of the semi-supervised PD recognition approach combining GCN and VAT.
Figure 4: Flowchart of the proposed semi-supervised PD recognition approach.
Figure 5: PD test platform.
Figure 6: Discharge fault models.
Figure 7: PD data preprocessing.
Figure 8: Four types of PRPD.
Figure 9: Feature visualization results for varying GCN layers.
Figure 10: Comparison of iteration curves with supervised methods.
Figure 11: Results with varying numbers of unlabeled samples.
Figure 12: Results of the ablation experiment.
Figure 13: Preprocessing of on-site PRPD images.
21 pages, 10483 KiB  
Article
Evading Cyber-Attacks on Hadoop Ecosystem: A Novel Machine Learning-Based Security-Centric Approach towards Big Data Cloud
by Neeraj A. Sharma, Kunal Kumar, Tanzim Khorshed, A B M Shawkat Ali, Haris M. Khalid, S. M. Muyeen and Linju Jose
Information 2024, 15(9), 558; https://doi.org/10.3390/info15090558 - 10 Sep 2024
Viewed by 701
Abstract
The growing industry and its complex and large information sets require Big Data (BD) technology and its open-source frameworks (Apache Hadoop) to (1) collect, (2) analyze, and (3) process the information. This information usually ranges in size from gigabytes to petabytes of data. However, processing this data involves web consoles and communication channels which are prone to intrusion from hackers. To resolve this issue, a novel machine learning (ML)-based security-centric approach has been proposed to evade cyber-attacks on the Hadoop ecosystem while considering the complexity of Big Data in Cloud (BDC). An Apache Hadoop-based management interface “Ambari” was implemented to address the variation and distinguish between attacks and activities. The analyzed experimental results show that the proposed scheme effectively (1) blocked the interface communication and retrieved the performance measured data from (2) the Ambari-based virtual machine (VM) and (3) BDC hypervisor. Moreover, the proposed architecture was able to provide a reduction in false alarms as well as cyber-attack detection. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
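A toy sketch of one ingredient of such detection: flagging windows of VM performance telemetry as suspicious when mean CPU load exceeds a threshold. This is an illustrative baseline only, not the paper's ML classifier; the feature, window size, and threshold are assumptions.

```python
def flag_attack_windows(cpu_load, window=5, threshold=0.9):
    """Flag sliding windows whose mean CPU load (0.0-1.0) exceeds a threshold.

    Returns one boolean per window position; a run of True values would be a
    candidate attack interval for an ML classifier to confirm.
    """
    flags = []
    for i in range(len(cpu_load) - window + 1):
        flags.append(sum(cpu_load[i:i + window]) / window > threshold)
    return flags
```

An ML-based scheme like the one proposed would replace this single threshold with a model trained on many such features to cut false alarms.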
Figure 1: BD gaps and loopholes. Here, MapReduce and HDFS refer to the big data analysis model that processes data sets using a parallel algorithm on computer clusters and the Hadoop Distributed File System, respectively.
Figure 2: Graphical abstract of BDC and security vulnerabilities.
Figure 3: BDC—ingredients and basis. In this figure, SaaS, PaaS, and IaaS are the acronyms of software as a service, platform as a service, and infrastructure as a service, respectively.
Figure 4: Hadoop ecosystem—an infrastructure. Here, HDFS is the acronym of Hadoop Distributed File System.
Figure 5: Experimental design.
Figure 6: Ambari-based web interface before the attack.
Figure 7: Ambari-based web interface during an attack.
Figure 8: Attack performed on VM port 8080 with Java LOIC.
Figure 9: Hadoop VM performance graph—generated attack using Java LOIC [28].
Figure 10: Hadoop VM attack—running RTDoS (Rixer) on default HTTP port 80.
Figure 11: Hadoop VM during RTDoS attack (Rixer)—CPU performance and trends [24].
Figure 12: Graphical presentation—an ML-driven workflow.
Figure 13: Percentage-based comparative analysis. From left to right, the comparison is made between references [77,78,79,80,81] and the proposed PART algorithm, respectively.
17 pages, 2377 KiB  
Review
Overview of the Research Status of Intelligent Water Conservancy Technology System
by Qinghua Li, Zifei Ma, Jing Li, Wengang Li, Yang Li and Juan Yang
Appl. Sci. 2024, 14(17), 7809; https://doi.org/10.3390/app14177809 - 3 Sep 2024
Cited by 1 | Viewed by 1064
Abstract
The digital twin is a new trend in the development of the smart water conservancy industry, and this paper clarifies the main research content of intelligent water conservancy. It first summarizes and organizes the relevant system architectures for smart water conservancy and proposes a digital-twin-based smart water conservancy framework, highlighting the virtual–real interaction and symbiosis of the water conservancy twin platform. Secondly, it analyzes the current status of intelligent water conservancy technologies: integrated “sky, air, ground and water” monitoring, big data and artificial intelligence, model platform technology, knowledge graphs, and security technology. From an application perspective, it reviews the research progress of each technology in water security, water resources, and hydraulic engineering. Although the construction of smart water conservancy has made remarkable progress, it still faces many challenges, such as data governance, technology integration and innovation, and standardization. In view of these challenges, this paper puts forward a series of countermeasures and looks forward to the future development direction of intelligent water conservancy. Full article
Figure 1: Progress of smart water conservancy digital twin.
Figure 2: Overall framework of intelligent water conservancy.
Figure 3: Triangle model of water conservancy twin “cloud–edge–end”.
Figure 4: Basic construction process of knowledge graph.
16 pages, 3104 KiB  
Article
Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric Analysis of Research Hotspots and Trends over the Past 12 Years
by Guangxi Zuo, Ruoyu Wang, Cheng Wan, Zhe Zhang, Shaochong Zhang and Weihua Yang
Healthcare 2024, 12(13), 1266; https://doi.org/10.3390/healthcare12131266 - 26 Jun 2024
Viewed by 1982
Abstract
Background: Virtual reality (VR), widely used in the medical field, may affect future medical training and treatment. Therefore, this study examined VR’s potential uses and research directions in medicine. Methods: Citation data were downloaded from the Web of Science Core Collection database (WoSCC) to evaluate VR in medicine in articles published between 1 January 2012 and 31 December 2023. These data were analyzed using CiteSpace 6.2.R2 software. Present limitations and future opportunities were summarized based on the data. Results: A total of 2143 related publications from 86 countries and regions were analyzed. The country with the highest number of publications is the USA, with 461 articles. The University of London has the most publications among institutions, with 43 articles. The burst keywords represent the research frontier from 2020 to 2023, such as “task analysis”, “deep learning”, and “machine learning”. Conclusion: The number of publications on VR applications in the medical field has been increasing steadily year by year. The USA is the leading country in this area, while the University of London stands out as the most productive and most influential institution. Currently, there is a strong focus on integrating VR and AI to address complex issues such as medical education and training, rehabilitation, and surgical navigation. Looking ahead, the future trend involves integrating VR, augmented reality (AR), and mixed reality (MR) with the Internet of Things (IoT), wireless sensor networks (WSNs), big data analysis (BDA), and cloud computing (CC) technologies to develop intelligent healthcare systems within hospitals or medical centers. Full article
Show Figures

Figure 1. A frame flow diagram showing the specific selection criteria and bibliometric analysis steps for the study of VR in medicine between 2012 and 2023.
Figure 2. The annual number of publications on VR in medicine between 2012 and 2023.
Figure 3. Collaboration of countries or regions that contributed to publications on VR in medicine between 2012 and 2023 (Section 3.1).
Figure 4. Cooperation of institutions that contributed to publications on VR in medicine between 2012 and 2023.
Figure 5. Category-based clusters of publications on VR in medicine between 2012 and 2023. The symbol denotes a cluster.
Figure 6. Keywords with the strongest citation bursts for publications on VR in medicine between 2012 and 2023.
31 pages, 4012 KiB  
Review
Towards a Software-Defined Industrial IoT-Edge Network for Next-Generation Offshore Wind Farms: State of the Art, Resilience, and Self-X Network and Service Management
by Agrippina Mwangi, Rishikesh Sahay, Elena Fumagalli, Mikkel Gryning and Madeleine Gibescu
Energies 2024, 17(12), 2897; https://doi.org/10.3390/en17122897 - 13 Jun 2024
Cited by 2 | Viewed by 1732
Abstract
Offshore wind farms are growing in complexity and size, expanding deeper into maritime environments to capture stronger and steadier wind energy. Like other domains in the energy sector, the wind energy domain is continuing to digitalize its systems by embracing Industry 4.0 technologies such as the Industrial Internet of Things (IIoT), virtualization, and edge computing to monitor and manage its critical infrastructure remotely. Adopting these technologies creates dynamic, scalable, and cost-effective data-acquisition systems. At the heart of these data-acquisition systems is a communication network that facilitates data transfer between communicating nodes. Given the challenges of configuring, managing, and troubleshooting large-scale communication networks, this review paper explores the adoption of the state-of-the-art software-defined networking (SDN) and network function virtualization (NFV) technologies in the design of next-generation offshore wind farm IIoT–Edge communication networks. While SDN and NFV technologies present a promising solution to address the challenges of these large-scale communication networks, this paper discusses the SDN/NFV-related performance, security, reliability, and scalability concerns, highlighting current mitigation strategies. Building on these mitigation strategies, the concept of resilience (that is, the ability to recover from component failures, attacks, and service interruptions) is given special attention. The paper highlights the self-X (self-configuring, self-healing, and self-optimizing) approaches that build resilience in the software-defined IIoT–Edge communication network architectures. These resilience approaches enable the network to autonomously adjust its configuration, self-repair during stochastic failures, and optimize performance in response to changing conditions. 
The paper concludes that resilient software-defined IIoT–Edge communication networks will play a big role in guaranteeing seamless next-generation offshore wind farm operations by facilitating critical, latency-sensitive data transfers. Full article
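The self-healing behavior the abstract describes — a network that autonomously recovers from component failures — can be illustrated with a minimal failover sketch. This is not the paper's mechanism but a toy illustration: the path list, node names, and health-check callback are all hypothetical, and a real SDN controller would recompute paths from a live topology graph rather than iterate over a static preference list.

```python
def self_heal(paths, link_ok):
    """Return the most-preferred path whose links all pass the health
    check; fail over to the next candidate when any link is down."""
    for path in paths:                    # candidate paths, ordered by preference
        links = zip(path, path[1:])       # consecutive node pairs along the path
        if all(link_ok(a, b) for a, b in links):
            return path
    return None                           # no healthy path left: escalate an alarm
```

For example, with a primary path through a tower access switch and a backup through the nacelle switch, a failed tower link makes the function fall over to the backup path without operator intervention.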
Show Figures

Figure 1. The evolution of network automation for network and service management (1960 to 2050 and beyond).
Figure 2. The seven-layer Internet of Things World Forum reference model, adapted from [34] and classified into Data-in-Motion (Layer 1 to Layer 3) and Data-at-Rest (Layer 5 to Layer 7).
Figure 3. Architecture of a next-generation offshore wind farm data-acquisition system leveraging Industry 4.0 IIoT, Edge, and virtualization technologies using a publish/subscribe model.
Figure 4. Comparison of traditional networks with software-defined networks, applicable to both switch and router networks.
Figure 5. Software-defined networking (SDN) architecture, where SDNC is the SDN controller at the control plane and FD is the forwarding device at the data plane.
Figure 6. Software-defined IIoT–Edge network for the next-generation offshore wind farm data-acquisition system (see Figure 3), connecting the wind turbine generator access network (nacelle and tower access switches) to the pico data center Ethernet switches.
Figure 7. Considerations for adopting software-defined networking for industrial OT networks.
Figure 8. ISA/IEC 62443 Industrial Control System cyber kill chain, adapted from [126,127].
Figure 9. An illustration of a system’s capacity to recover from disruptions within time [0, t) [144].
Figure 10. A self-managing software-defined IIoT–Edge network tailored for next-generation offshore wind farms (see Figure 6), incorporating the European Telecommunications Standards Institute (ETSI) zero-touch network and service management framework [156].
17 pages, 1373 KiB  
Article
Chip-Level Defect Analysis with Virtual Bad Wafers Based on Huge Big Data Handling for Semiconductor Production
by Jinsik Kim and Inwhee Joe
Electronics 2024, 13(11), 2205; https://doi.org/10.3390/electronics13112205 - 5 Jun 2024
Viewed by 1213
Abstract
Semiconductors continue to shrink in die size because of benefits like cost savings, lower power consumption, and improved performance. However, this reduction leads to more defects due to increased inter-cell interference. Among the various defect types, customer-found defects are the most costly. Thus, finding the root cause of customer-found defects has become crucial to the quality of semiconductors. Traditional methods involve analyzing the pathways of many low-yield wafers. Yet, because of the extremely limited number of customer-found defects, obtaining significant results is difficult. Products undergo rigorous testing and selection before they are provided to customers, so the field defect rate is very low. However, since the timing of defect occurrence varies depending on the environment in which the product is used, the quantity of defective samples is often quite small. Unfortunately, with such a low number of samples, typically 10 or fewer, it becomes impossible to investigate the root cause of wafer-level defects using conventional methods. This paper introduces a novel approach to finding the root cause of these rare defective chips for the first time in the semiconductor industry. Defective wafers are identified using rare customer-found chips and chip-level EDS (Electrical Die Sorting) data, and these newly identified defective wafers are termed vBADs (virtual bad wafers). The performance of root cause analysis is dramatically improved with vBADs. However, the chip-level analysis presented here demands substantial computing power. Therefore, an MPP (massively parallel processing) architecture is implemented and optimized to handle large volumes of chip-level data within a large-scale infrastructure capable of managing big data. This allows for a chip-level defect analysis system that can recommend the relevant EDS test and identify the root cause in real time even with a single defective chip. 
The experimental results demonstrate that the proposed root cause search can reveal the hidden cause of a single defective chip by amplifying it with 90 vBADs, and system performance improves by a factor of 61. Full article
(This article belongs to the Section Industrial Electronics)
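The vBAD idea — amplifying one rare customer-found chip into many statistically similar "virtual bad" wafers — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' algorithm: the z-score rule for picking the critical EDS test item, the tolerance band, and the data layout are all assumptions made for the sketch.

```python
import statistics

def select_critical_test_item(defect_chip, population):
    """Pick the EDS test item on which the defective chip is the
    strongest outlier versus the chip population (largest |z-score|)."""
    best_item, best_z = None, 0.0
    for item, value in defect_chip.items():
        values = population[item]
        sigma = statistics.stdev(values) or 1e-9   # guard against zero spread
        z = abs(value - statistics.mean(values)) / sigma
        if z > best_z:
            best_item, best_z = item, z
    return best_item

def find_vbads(defect_chip, wafers, item, tolerance=1.0):
    """Label as virtual bad wafers (vBADs) the wafers holding chips whose
    measurement on the critical test item falls near the defect's value."""
    target = defect_chip[item]
    return [wid for wid, chips in wafers.items()
            if any(abs(v - target) <= tolerance for v in chips[item])]
```

With vBADs in hand, conventional low-yield root-cause analysis (comparing the process pathways of bad versus good wafers) can run on a usefully large sample instead of a handful of chips.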
Show Figures

Figure 1. Overall procedure of semiconductor production. FAB (fabrication) and EDS (Electrical Die Sorting) are the most important steps in the semiconductor field. Customer-found defective chips are analyzed to find the root cause and prevent recurrence.
Figure 2. Potential latent reliability defect and killer defect. The potential latent defect (middle) is not detected at t = 0 but will fail after being embedded in the customer’s electronic device.
Figure 3. Potential latent reliability defect in an EDS wafer map. Red chips are defective chips. A normal white chip surrounded by defective chips can be a reliability-defective chip.
Figure 4. Functional scheme for selection of a critical EDS test item and latent defect discovery.
Figure 5. High-level system architecture.
Figure 6. Distributed and asynchronous architecture. (a) Synchronous job execution. (b) Distributed and asynchronous job execution.
Figure 7. Visualizations for defective chip analysis. (a) The scatter chart of chips shows that a defective chip’s measurement value is in the outlier range. (b) The wafer map chart shows that the defective chip is in the line patterns.
Figure 8. Similar defective wafer discovery. Using the selected test item and the defective chip’s measurement value, similar wafers can be found.
Figure 9. Root cause analysis with vBADs (virtual bad wafers).
Figure 10. Root cause search performance increased by vBADs.
Figure 11. System parameters affecting performance. (a) Data size affects performance: 50 GB on average, 200 s elapsed. (b) The number of test items affects performance: 2200 test items, 120 s elapsed.
Figure 12. Performance improvement by MPP node count.
28 pages, 915 KiB  
Review
Enhancing Food Integrity through Artificial Intelligence and Machine Learning: A Comprehensive Review
by Sefater Gbashi and Patrick Berka Njobeh
Appl. Sci. 2024, 14(8), 3421; https://doi.org/10.3390/app14083421 - 18 Apr 2024
Cited by 3 | Viewed by 3466
Abstract
Herein, we examined the transformative potential of artificial intelligence (AI) and machine learning (ML) as new fronts in addressing some of the pertinent challenges posed by food integrity to human and animal health. In recent times, AI and ML, along with other Industry 4.0 technologies such as big data, blockchain, virtual reality, and the internet of things (IoT), have found profound applications within nearly all dimensions of the food industry with a key focus on enhancing food safety and quality and improving the resilience of the food supply chain. This paper provides an accessible scrutiny of these technologies (in particular, AI and ML) in relation to food integrity and gives a summary of their current advancements and applications within the field. Key areas of emphasis include the application of AI and ML in quality control and inspection, food fraud detection, process control, risk assessments, prediction, and management, and supply chain traceability, amongst other critical issues addressed. Based on the literature reviewed herein, the utilization of AI and ML in the food industry has unequivocally led to improved standards of food integrity and consequently enhanced public health and consumer trust, as well as boosting the resilience of the food supply chain. While these applications demonstrate significant promise, the paper also acknowledges some of the challenges associated with the domain-specific implementation of AI in the field of food integrity. The paper further examines the prospects and orientations, underscoring the significance of overcoming the obstacles in order to fully harness the capabilities of AI and ML in safeguarding the integrity of the food system. Full article
(This article belongs to the Special Issue Food Safety and Microbiological Hazards)
Show Figures

Figure 1. Artificial intelligence vs. machine learning.
Figure 2. Dimensions of AI and ML in food integrity.
24 pages, 26431 KiB  
Review
When Taekwondo Meets Artificial Intelligence: The Development of Taekwondo
by Min-Chul Shin, Dae-Hoon Lee, Albert Chung and Yu-Won Kang
Appl. Sci. 2024, 14(7), 3093; https://doi.org/10.3390/app14073093 - 7 Apr 2024
Viewed by 2904
Abstract
This study explores the comprehensive understanding of taekwondo, the application of fourth industrial revolution technologies in various kinds of sports, the development of taekwondo through artificial intelligence (AI), and essential technology in the fourth industrial revolution, while suggesting advanced science directions through a literature review. Literature was sourced from six electronic databases (three English and three Korean) covering January 2016 to August 2023. The literature indicated cases of sports converging with fourth industrial revolution technologies, such as the game of go, golf, table tennis, soccer, American football, skiing, archery, and fencing. These sports use not only big data but also virtual reality and augmented reality. Taekwondo is a traditional martial art that originated in the Republic of Korea and gradually became a globally recognized sport. Because taekwondo competition analysis relies on researchers manually recording events, it is very time-consuming, and the scope of the analysis varies with each researcher’s tendencies. This study presented the development of an AI taekwondo performance improvement analysis and evaluation system and a metaverse-based virtual taekwondo pumsae/fighting coaching platform through an AI-based motion tracking analysis method. Full article
Show Figures

Figure 1. High-precision shooting machine and vision-based heart rate measurement equipment.
Figure 2. Virtual taekwondo, selected as a 2023 Olympic e-sports event.
Figure 3. 3D modeling and rigging.
Figure 4. Example of taekwondo sparring in real time in a metaverse.
Figure 5. DensePose solution from Facebook.
Figure 6. Landmarks (key points).
Figure 7. Key point extraction by Google.
Figure 8. System concept diagram.
Figure 9. Target system configuration.
Figure 10. Example of a proprietary CNN architecture for performance improvement.
Figure 11. Technical development objectives of 3D reconstruction.
Figure 12. A technical review of 3D reconstruction.
Figure 13. Example of a 3D data creation architecture.
Figure 14. Expected process for 3D human motion features.
Figure 15. A technical review of 3D human motion features.
Figure 16. Proposed example of a taekwondo XR training simulator (left: analyzing a hypothetical player’s data to simulate a match against oneself in advance and predict the outcome of the game; center: providing the actual championship stadium in a virtual environment; right: analyzing individual athletes’ athletic abilities by learning their movements with artificial intelligence).
Figure 17. Proposed example of athletic performance analysis (left table: the athlete’s age, height, weight, waist circumference, shoulder width, chest circumference, arm length, hip circumference, leg length, thigh circumference, calf circumference, ankle circumference, foot size, jumping force, and rotational force; right table: ranking (grade) of athletes’ athletic ability, with endurance, quickness, kick, punch, defense, attack, etc. digitized and graphed).
Figure 18. Proposed example of real-time taekwondo sparring in a metaverse (left table: the blue-corner player’s name, nationality, totals, and probability of winning; center: comparative analysis of endurance, quickness, attack, defense, kick, and punch with schematic data; right table: the red-corner player’s name, nationality, totals, and odds of winning).
Figure 19. Proposed example of a pumsae motion evaluation program (left table: real-time schematic of the player’s posture-deduction factors; right table: schematic of the deductions and their counts).
Figure 20. Example of a metaverse-based taekwondo XR training simulator (panel layout as in Figure 16).
19 pages, 6024 KiB  
Article
A Hardware-Based Orientation Detection System Using Dendritic Computation
by Masahiro Nomura, Tianqi Chen, Cheng Tang, Yuki Todo, Rong Sun, Bin Li and Zheng Tang
Electronics 2024, 13(7), 1367; https://doi.org/10.3390/electronics13071367 - 4 Apr 2024
Cited by 1 | Viewed by 1158
Abstract
Studying how objects are positioned is vital for improving technologies like robots, cameras, and virtual reality. In our earlier papers, we introduced a bio-inspired artificial visual system for orientation detection, demonstrating its superiority over traditional systems with higher recognition rates, greater biological resemblance, and increased resistance to noise. In this paper, we propose a hardware-based orientation detection system (ODS). The ODS is implemented by a multiple dendritic neuron model (DNM), and a neuronal pruning scheme for the DNM is proposed. After performing the neuronal pruning, only the synapses in the direct and inverse connections states are retained. The former can be realized by a comparator, and the latter can be replaced by a combination of a comparator and a logic NOT gate. For the dendritic function, the connection of synapses on dendrites can be realized with logic AND gates. Then, the output of the neuron is equivalent to a logic OR gate. Compared with other machine learning methods, this logic circuit circumvents floating-point arithmetic and therefore requires very little computing resources to perform complex classification. Furthermore, the ODS can be designed based on experience, so no learning process is required. The superiority of ODS is verified by experiments on binary, grayscale, and color image datasets. The ability to process data rapidly owing to advantages such as parallel computation and simple hardware implementation allows the ODS to be desirable in the era of big data. It is worth mentioning that the experimental results are corroborated with anatomical, physiological, and neuroscientific studies, which may provide us with a new insight for understanding the complex functions in the human brain. Full article
(This article belongs to the Section Artificial Intelligence)
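The logic-circuit mapping the abstract describes — pruned synapses as comparators (plus a NOT gate for inverse connections), synapses on a dendrite combined with logic AND, and the soma as a logic OR over dendrites — translates directly into code. The 2×2 receptive field and the 0.5 threshold below are illustrative choices for the sketch, not values from the paper.

```python
def synapse(x, theta, inverse=False):
    """Pruned DNM synapse: a comparator (x > theta), optionally
    followed by a logic NOT for inverse connections."""
    out = x > theta
    return (not out) if inverse else out

def dendrite(synapse_outputs):
    """Synapses along one dendrite combine with logic AND."""
    return all(synapse_outputs)

def soma(dendrite_outputs):
    """The neuron output is the logic OR of its dendrites."""
    return any(dendrite_outputs)

def detects_horizontal(pixels):
    """Toy two-dendrite detector over a 2x2 receptive field
    (p00, p01, p10, p11): fires when either row is uniformly bright."""
    p00, p01, p10, p11 = pixels
    top = dendrite([synapse(p00, 0.5), synapse(p01, 0.5)])
    bottom = dendrite([synapse(p10, 0.5), synapse(p11, 0.5)])
    return soma([top, bottom])
```

Because every operation is a comparator or a logic gate, no floating-point arithmetic or training is needed, which is exactly what makes the hardware implementation cheap.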
Show Figures

Figure 1. Visual system: (A) flowchart of the visual system; (B) organization of the retina; (C) dendritic neuron.
Figure 2. Main components of the DNM: (A) architectural description of the DNM; (B) four kinds of synapses; (C) six cases of connection states; (D) the logic gate represented by each connection state.
Figure 3. Hardware implementation of the ODS-01: (A) schematic model of an L2/3 pyramidal neuron [84]; (B) hardware implementation of the DNM; (C) flowchart of the ODS-01; (D) hardware implementation of the ODS-01.
Figure 4. Inhibition scheme and ODS-03: (A) an example of receptive field scanning; (B) activation intensity of neurons without the inhibition scheme (ODS-01); (C) activation intensity of neurons with the inhibition scheme (ODS-02); (D) flowchart of ODS-03.
Figure 5. Description of (A) dataset-01, (B) dataset-02, and (C) dataset-03.
Figure 6. Description of (A) dataset-04 and (B) dataset-05.
Figure 7. Description of (A) dataset-06, (B) dataset-07, and (C) dataset-08.
Figure 8. Description of dataset-09.
Figure 9. Training curves of CNNs: (A–D) CNN-04 with one to four convolutional layers; (E–H) CNN-30 with one to four convolutional layers.
Figure 10. Comparison of shaded error bars on the datasets with noise.
17 pages, 18006 KiB  
Article
Multi-IRS-Assisted mmWave UAV-BS Network for Coverage Extension
by Sota Yamamoto, Jin Nakazato and Gia Khanh Tran
Sensors 2024, 24(6), 2006; https://doi.org/10.3390/s24062006 - 21 Mar 2024
Cited by 3 | Viewed by 1617
Abstract
In the era of Industry 5.0, advanced technologies like artificial intelligence (AI), robotics, big data, and the Internet of Things (IoT) offer promising avenues for economic growth and solutions to societal challenges. Digital twin technology is important for real-time three-dimensional space reproduction in this transition, and unmanned aerial vehicles (UAVs) can support it. While recent studies have explored the potential applications of UAVs in nonterrestrial networks (NTNs), bandwidth limitations have restricted their utility. This paper addresses these constraints by integrating millimeter wave (mmWave) technology into UAV networks for high-definition video transmission. Specifically, we focus on coordinating intelligent reflective surfaces (IRSs) and UAV networks to extend coverage while maintaining virtual line-of-sight (LoS) conditions essential for mmWave communication. We present a novel approach for integrating IRS into Beyond 5G/6G networks to enhance high-speed communication coverage. Our proposed IRS selection method ensures optimal communication paths between UAVs and user equipment (UE). We perform numerical analysis in a realistically modeled 3D urban environment to validate our approach. Our results demonstrate significant improvements in the received SNR for multiple UEs upon the introduction of IRSs, and they confirm the feasibility of coverage extension in mmWave UAV networks. Full article
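The IRS selection step can be sketched as picking, per UE, the reflecting surface that yields the best received SNR among those with a (virtual) line-of-sight path to both the UAV and the UE. The free-space path-loss model, carrier frequency, transmit power, and geometry below are stand-in assumptions for illustration, not the paper's link budget, which would also account for IRS reflection gain and the urban blockage model.

```python
import math

def path_snr_db(distance_m, tx_power_dbm=30.0, freq_hz=28e9, noise_dbm=-85.0):
    """Received SNR under free-space path loss (a stand-in link budget)."""
    fspl_db = 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55
    return tx_power_dbm - fspl_db - noise_dbm

def select_irs(ue, uav, irs_list, los):
    """Pick the IRS giving the highest SNR over the reflected
    UAV -> IRS -> UE path, considering only IRSs with line of sight
    to both ends; returns (None, -inf) when no such path exists."""
    best, best_snr = None, float("-inf")
    for irs in irs_list:
        if not (los(uav, irs) and los(irs, ue)):
            continue  # mmWave needs an unblocked path via the IRS
        snr = path_snr_db(math.dist(uav, irs) + math.dist(irs, ue))
        if snr > best_snr:
            best, best_snr = irs, snr
    return best, best_snr
```

Under this model, the IRS minimizing the total reflected path length wins, which matches the intuition that shorter virtual-LoS detours preserve more of the mmWave link budget.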
Show Figures

Figure 1. Proposed architecture overview.
Figure 2. System model of IRS-assisted UAV networks.
Figure 3. Actual environment map data around the Tokyo Metropolitan Government Office in Google Earth.
Figure 4. A 3D simulation environment around the Tokyo Metropolitan Government Office.
Figure 5. The sequence of replication.
Figure 6. Positions of UEs, buildings, IRSs, and the UAV.
Figure 7. UE distribution and pairing with IRSs on a weekday in July 2024: (a) 6:00; (b) 12:00; (c) 18:00; (d) 0:00.
Figure 8. UE received SNR without IRSs.
Figure 9. UE received SNR with IRSs.
Figure 10. UE received SNR when using IRSs and a neighboring BS.
Figure 11. CDF of UE received SNR.
Figure 12. Time vs. number of IRSs used/SNR-improved UE ratio.
21 pages, 4943 KiB  
Article
Unveiling Insights: A Bibliometric Analysis of Artificial Intelligence in Teaching
by Malinka Ivanova, Gabriela Grosseck and Carmen Holotescu
Informatics 2024, 11(1), 10; https://doi.org/10.3390/informatics11010010 - 25 Feb 2024
Cited by 5 | Viewed by 3961
Abstract
The penetration of intelligent applications in education is rapidly increasing, posing questions of many kinds to the educational community. This paper analyzes and outlines the influence of artificial intelligence (AI) on teaching practice, an essential problem given AI’s growing utilization and pervasiveness on a global scale. A bibliometric approach is applied to draw the “big picture” from bibliographic data gathered from the scientific databases Scopus and Web of Science. Data on relevant publications matching the query “artificial intelligence and teaching” over the past 5 years were collected and processed through Biblioshiny in the R environment in order to establish a descriptive structure of the scientific production, determine the impact of scientific publications, trace collaboration patterns, and identify key research areas and emerging trends. The results show recent growth in scientific production, an indicator of increased interest in the investigated topic by researchers, who mainly work in collaborative teams, some spanning different countries and institutions. The identified key research areas include techniques used in educational applications, such as artificial intelligence, machine learning, and deep learning. Additionally, there is a focus on applicable technologies like ChatGPT, learning analytics, and virtual reality. The research also explores the context of application for these techniques and technologies in various educational settings, including teaching, higher education, active learning, e-learning, and online learning. Based on our findings, the trending research topics can be encapsulated by terms such as ChatGPT, chatbots, AI, generative AI, machine learning, emotion recognition, large language models, convolutional neural networks, and decision theory. 
These findings offer valuable insights into the current landscape of research interests in the field. Full article
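The trend-topic extraction this paper performs with Biblioshiny boils down to tallying author keywords per publication year and ranking them. A minimal sketch of that counting step (in Python rather than R, with a toy record layout assumed for illustration):

```python
from collections import Counter, defaultdict

def keyword_trends(records):
    """Tally author keywords per publication year (case-folded) --
    the counting step behind a trend-topics chart."""
    per_year = defaultdict(Counter)
    for rec in records:
        for kw in rec["keywords"]:
            per_year[rec["year"]][kw.lower()] += 1
    return per_year

def top_keywords(per_year, year, n=3):
    """Most frequent keywords for one year."""
    return [kw for kw, _ in per_year[year].most_common(n)]
```

Case-folding merges variants like "ChatGPT" and "chatgpt"; a fuller pipeline would also merge synonyms and stem plurals before counting, as bibliometric tools typically do.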
Show Figures

Figure 1. The PRISMA process for document collection.
Figure 2. Annual scientific production for the period 2018–2023 according to Scopus and Web of Science.
Figure 3. Countries’ production over time according to (a) Scopus, (b) Web of Science.
Figure 4. The most relevant affiliations according to (a) Scopus, (b) Web of Science.
Figure 5. The most cited countries (total citations) according to (a) Scopus, (b) Web of Science.
Figure 6. Country collaboration networks according to (a) Scopus, (b) Web of Science.
Figure 7. Institution collaboration networks based on data from (a) Scopus, (b) Web of Science.
Figure 8. Collaboration among the most influential authors: (a) Scopus, (b) Web of Science.
Figure 9. Most frequent author keywords in 2023 bibliometric data from Scopus and Web of Science.
Figure 10. Trend topics according to (a) Scopus, (b) Web of Science.
Figure 11. Co-occurrence network of author keywords based on bibliometric data from (a) Scopus, (b) Web of Science.
Figure 12. Summarized information considering the main objectives.