Search Results (10,738)

Search Parameters:
Keywords = network architecture

34 pages, 1636 KiB  
Review
Deciphering the Functions of Raphe–Hippocampal Serotonergic and Glutamatergic Circuits and their Deficits in Alzheimer’s Disease
by Wanting Yu, Ruonan Zhang, Aohan Zhang and Yufei Mei
Int. J. Mol. Sci. 2025, 26(3), 1234; https://doi.org/10.3390/ijms26031234 (registering DOI) - 30 Jan 2025
Abstract
Subcortical innervation of the hippocampus by the raphe nucleus is essential for emotional and cognitive control. The two major afferents from raphe to hippocampus originate from serotonergic and glutamatergic neurons, of which the serotonergic control of hippocampal inhibitory network, theta activity, and synaptic plasticity have been extensively explored in the growing body of literature, whereas those of glutamatergic circuits have received little attention. Notably, both serotonergic and glutamatergic circuits between raphe and hippocampus are disrupted in Alzheimer’s disease (AD), which may contribute to initiation and progression of behavioral and psychological symptoms of dementia. Thus, deciphering the mechanism underlying abnormal raphe–hippocampal circuits in AD is crucial to prevent dementia-associated emotional and cognitive symptoms. In this review, we summarize the anatomical, neurochemical, and electrophysiological diversity of raphe nuclei as well as the architecture of raphe–hippocampal circuitry. We then elucidate subcortical control of hippocampal activity by raphe nuclei and their role in regulation of emotion and cognition. Additionally, we present an overview of disrupted raphe–hippocampal circuits in AD pathogenesis and analyze the available therapies that can potentially be used clinically to alleviate the neuropsychiatric symptoms and cognitive decline in AD course. Full article
(This article belongs to the Special Issue Dysfunctional Neural Circuits and Impairments in Brain Function)
17 pages, 9846 KiB  
Article
Probabilistic Forecasting of Provincial Regional Wind Power Considering Spatio-Temporal Features
by Gang Li, Chen Lin and Yupeng Li
Energies 2025, 18(3), 652; https://doi.org/10.3390/en18030652 - 30 Jan 2025
Abstract
Accurate prediction of regional wind power generation intervals is an effective support tool for the economic and stable operation of provincial power grid. However, it involves a large amount of high-dimensional meteorological and historical power generation information related to massive wind power stations in a province. In this paper, a lightweight model is developed to directly obtain probabilistic predictions in the form of intervals. Firstly, the input features are formed through a fused image generation method of geographic and meteorological information as well as a power aggregation strategy, which avoids the extensive and tedious data processing process prior to modeling in the traditional approach. Then, in order to effectively consider the spatial meteorological distribution characteristics of regional power stations and the temporal characteristics of historical power, a parallel prediction network architecture of a convolutional neural network (CNN) and long short-term memory (LSTM) is designed. Meanwhile, an efficient channel attention (ECA) mechanism and an improved quantile regression-based loss function are introduced in the training to directly generate prediction intervals. The case study shows that the model proposed in this paper improves the interval prediction performance by at least 12.3% and reduces the deterministic prediction root mean square error (RMSE) by at least 19.4% relative to the benchmark model. Full article
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)
Figures: (1) Structure of the forecasting model; (2) Mechanism of image generation; (3) The structure of ECA; (4) Study area; (5) Various meteorological images generated at a given moment (deeper color indicates a greater value of the meteorological variable); (6) Distribution of predicted values between intervals; (7) Comparison of QR-LIFF and IQR-LIFF continuous prediction; (8) Deterministic prediction results; (9) Sensitivity analysis of penalty coefficients.
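The interval forecasts above are trained with an improved quantile-regression-based loss; the paper's exact formulation is not reproduced here, but a minimal pinball-loss sketch in Python (with made-up quantile levels and toy power values) illustrates the mechanism such losses build on.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Standard pinball (quantile) loss for a single quantile level q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy regional power series (MW) and hypothetical lower/upper quantile forecasts.
y_true = np.array([120.0, 135.0, 150.0, 110.0])
y_lo   = np.array([100.0, 120.0, 140.0, 100.0])   # 10% quantile forecast
y_hi   = np.array([140.0, 160.0, 170.0, 125.0])   # 90% quantile forecast

loss = pinball_loss(y_true, y_lo, 0.1) + pinball_loss(y_true, y_hi, 0.9)
print(f"interval (pinball) loss: {loss:.3f}")
```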
38 pages, 2336 KiB  
Article
End-to-End Power Models for 5G Radio Access Network Architectures with a Perspective on 6G
by Bhuvaneshwar Doorgakant, Tulsi Pawan Fowdur and Mobayode O. Akinsolu
Mathematics 2025, 13(3), 466; https://doi.org/10.3390/math13030466 - 30 Jan 2025
Abstract
5G, the fifth-generation mobile network, is predicted to significantly increase the traditional trajectory of energy consumption. It now uses four times as much energy as 4G, the fourth-generation mobile network. As a result, compared to previous generations, 5G’s increased cell density makes energy efficiency a top priority. The objective of this paper is to formulate end-to-end power consumption models for three different 5G radio access network (RAN) deployment architectures, namely the 5G distributed RAN, the 5G centralized RAN with dedicated hardware and the 5G Cloud Centralized-RAN. The end-to-end modelling of the power consumption of a complete 5G system is obtained by combining the power models of individual components such as the base station, the core network, front-haul, mid-haul and backhaul links, as applicable for the different architectures. The authors considered the deployment of software-defined networking (SDN) at the 5G Core network and gigabit passive optical network as access technology for the backhaul network. This study examines the end-to-end power consumption of 5G networks across various architectures, focusing on key dependent parameters. The findings indicate that the 5G distributed RAN scenario has the highest power consumption among the three models evaluated. In comparison, the centralized 5G and 5G Cloud C-RAN scenarios consume 12% and 20% less power, respectively, than the Centralized RAN solution. Additionally, calculations reveal that base stations account for 74% to 78% of the total power consumption in 5G networks. These insights helped pioneer the calculation of the end-to-end power requirements of different 5G network architectures, forming a solid foundation for their sustainable implementation. Furthermore, this study lays the groundwork for extending power modeling to future 6G networks. Full article
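The end-to-end model described above combines per-component power models into a system total. As an illustration only, the short Python sketch below sums hypothetical component powers for one deployment; all wattage figures are placeholders, not values from the paper.

```python
# Illustrative-only additive end-to-end power budget for a RAN deployment.
# All wattage figures below are placeholders, not values from the paper.
components_w = {
    "base_stations": 7400.0,   # radio units + baseband processing
    "fronthaul":      300.0,
    "midhaul":        200.0,
    "backhaul":       450.0,   # e.g., GPON access for backhaul
    "core_network":  1100.0,   # SDN-based 5G core
}

total_w = sum(components_w.values())
for name, p in components_w.items():
    print(f"{name:14s} {p:8.1f} W  ({100 * p / total_w:5.1f} %)")
print(f"{'total':14s} {total_w:8.1f} W")
```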
27 pages, 2655 KiB  
Article
Research and Development of an IoT Smart Irrigation System for Farmland Based on LoRa and Edge Computing
by Ying Zhang, Xingchen Wang, Liyong Jin, Jun Ni, Yan Zhu, Weixing Cao and Xiaoping Jiang
Agronomy 2025, 15(2), 366; https://doi.org/10.3390/agronomy15020366 - 30 Jan 2025
Abstract
In response to the current key issues in the field of smart irrigation for farmland, such as the lack of data sources and insufficient integration, a low degree of automation in drive execution and control, and over-reliance on cloud platforms for analyzing and calculating decision making processes, we have developed nodes and gateways for smart irrigation. These developments are based on the EC-IOT edge computing IoT architecture and long range radio (LoRa) communication technology, utilizing STM32 MCU, WH-101-L low-power LoRa modules, 4G modules, high-precision GPS, and other devices. An edge computing analysis and decision model for smart irrigation in farmland has been established by collecting the soil moisture and real-time meteorological information in farmland in a distributed manner, as well as integrating crop growth period and soil properties of field plots. Additionally, a mobile mini-program has been developed using WeChat Developer Tools that interacts with the cloud via the message queuing telemetry transport (MQTT) protocol to realize data visualization on the mobile and web sides and remote precise irrigation control of solenoid valves. The results of the system wireless communication tests indicate that the LoRa-based sensor network has stable data transmission with a maximum communication distance of up to 4 km. At lower communication rates, the signal-to-noise ratio (SNR) and received signal strength indication (RSSI) values measured at long distances are relatively higher, indicating better communication signal quality, but they take longer to transmit. It takes 6 s to transmit 100 bytes at the lowest rate of 0.268 kbps to a distance of 4 km, whereas, at 10.937 kbps, it only takes 0.9 s. The results of field irrigation trials during the wheat grain filling stage have demonstrated that the irrigation amount determined based on the irrigation algorithm can maintain the soil moisture content after irrigation within the suitable range for wheat growth and above 90% of the upper limit of the suitable range, thereby achieving a satisfactory irrigation effect. Notably, the water content in the 40 cm soil layer has the strongest correlation with changes in crop evapotranspiration, and the highest temperature is the most critical factor influencing the water requirements of wheat during the grain-filling period in the test area. Full article
(This article belongs to the Section Water Use and Irrigation)
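As a rough illustration of the edge-decision-plus-MQTT pattern described above, the following Python sketch applies a simple threshold rule to a soil-moisture reading and publishes the decision with paho-mqtt; the suitable-moisture range, topic name, and broker address are all assumptions, not details from the paper.

```python
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical suitable soil-moisture range (volumetric %) for the current growth stage.
SUITABLE_RANGE = (22.0, 30.0)

def edge_decision(soil_moisture: float) -> dict:
    """Toy edge-side rule: open the valve when moisture drops below the lower bound."""
    lower, upper = SUITABLE_RANGE
    return {
        "soil_moisture": soil_moisture,
        "valve_open": soil_moisture < lower,
        "target_upper": upper,
    }

decision = edge_decision(soil_moisture=19.5)

# Topic name and broker address are made up for this sketch.
publish.single(
    topic="farm/plot01/irrigation/decision",
    payload=json.dumps(decision),
    hostname="broker.example.com",
    port=1883,
)
```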
27 pages, 1548 KiB  
Article
An Intrusion Detection System over the IoT Data Streams Using eXplainable Artificial Intelligence (XAI)
by Adel Alabbadi and Fuad Bajaber
Sensors 2025, 25(3), 847; https://doi.org/10.3390/s25030847 - 30 Jan 2025
Abstract
The rise in intrusions on network and IoT systems has led to the development of artificial intelligence (AI) methodologies in intrusion detection systems (IDSs). However, traditional AI or machine learning (ML) methods can compromise accuracy due to the vast, diverse, and dynamic nature of the data generated. Moreover, many of these methods lack transparency, making it challenging for security professionals to make predictions. To address these challenges, this paper presents a novel IDS architecture that uses deep learning (DL)-based methodology along with eXplainable AI (XAI) techniques to create explainable models in network intrusion detection systems, empowering security analysts to use these models effectively. DL models are needed to train enormous amounts of data and produce promising results. Three different DL models, i.e., customized 1-D convolutional neural networks (1-D CNNs), deep neural networks (DNNs), and pre-trained model TabNet, are proposed. The experiments are performed on seven different datasets of TON_IOT. The CNN model for the network dataset achieves an impressive accuracy of 99.24%. Meanwhile, for the six different IoT datasets, in most of the datasets, the CNN and DNN achieve 100% accuracy, further validating the effectiveness of the proposed models. In all the datasets, the least-performing model is TabNet. Implementing the proposed method in real time requires an explanation of the predictions generated. Thus, the XAI methods are implemented to understand the essential features responsible for predicting the particular class. Full article
(This article belongs to the Section Internet of Things)
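The paper's customized 1-D CNN is not specified in the abstract; the sketch below is a generic minimal 1-D CNN classifier in Keras trained on random stand-in data, intended only to show the kind of model such an IDS trains before applying XAI tools.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder dimensions: 40 tabular features per flow record, 10 traffic classes.
n_features, n_classes = 40, 10

model = tf.keras.Sequential([
    layers.Input(shape=(n_features, 1)),        # each record treated as a 1-D sequence
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; the paper uses the TON_IoT datasets instead.
X = np.random.rand(256, n_features, 1).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
# A post hoc XAI tool (e.g., SHAP) can then be applied to model.predict for explanations.
```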
23 pages, 2415 KiB  
Article
Framework for Apple Phenotype Feature Extraction Using Instance Segmentation and Edge Attention Mechanism
by Zichong Wang, Weiyuan Cui, Chenjia Huang, Yuhao Zhou, Zihan Zhao, Yuchen Yue, Xinrui Dong and Chunli Lv
Agriculture 2025, 15(3), 305; https://doi.org/10.3390/agriculture15030305 - 30 Jan 2025
Abstract
A method for apple phenotypic feature extraction and growth anomaly identification based on deep learning and natural language processing technologies is proposed in this paper, aiming to enhance the accuracy of apple quality detection and anomaly prediction in agricultural production. This method integrates instance segmentation, edge perception mechanisms, attention mechanisms, and multimodal data fusion to accurately extract an apple’s phenotypic features, such as its shape, color, and surface condition, while identifying potential anomalies which may arise during the growth process. Specifically, the edge transformer segmentation network is employed to combine deep convolutional networks (CNNs) with the Transformer architecture, enhancing feature extraction and modeling long-range dependencies across different regions of an image. The edge perception mechanism improves segmentation accuracy by focusing on the boundary regions of the apple, particularly in the case of complex shapes or surface damage. Additionally, the natural language processing (NLP) module analyzes agricultural domain knowledge, such as planting records and meteorological data, providing insights into potential causes of growth anomalies and enabling more accurate predictions. The experimental results demonstrate that the proposed method significantly outperformed traditional models across multiple metrics. Specifically, in the apple phenotypic feature extraction task, the model achieved exceptional performance, with accuracy of 0.95, recall of 0.91, precision of 0.93, and mean intersection over union (mIoU) of 0.92. Furthermore, in the growth anomaly identification task, the model also performed excellently, with a precision of 0.93, recall of 0.90, accuracy of 0.91, and mIoU of 0.89, further validating its efficiency and robustness in handling complex growth anomaly scenarios. The method’s integration of image data with agricultural knowledge provides a comprehensive approach to both apple quality detection and growth anomaly prediction, offering reliable decision support for agricultural production. The proposed method, by integrating image data with agricultural domain knowledge, provides precise decision support for agricultural production, not only improving the efficiency and accuracy of apple quality detection but also offering reliable technical assurance for agricultural economic analysis. Full article
Figures: (1) Image acquisition scheme and examples; (2) Data preprocessing: original image, horizontal flip, perspective transformation, rotation, translation, and center crop; (3) Flowchart of the whole proposed method, where the Agriculture Knowledge Data block integrates domain-specific agricultural knowledge; (4) Architecture of the edge transformer segmentation network; (5) Flowchart of the edge attention mechanism.
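For reference, mean intersection over union (mIoU), the segmentation metric reported above, can be computed from labelled masks as in this small NumPy sketch with toy three-class masks.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int) -> float:
    """Mean intersection-over-union for integer-labelled segmentation masks."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny toy masks: 0 = background, 1 = apple, 2 = surface damage.
target = np.array([[0, 1, 1], [0, 1, 2], [0, 0, 2]])
pred   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 2]])
print(f"mIoU = {mean_iou(pred, target, n_classes=3):.3f}")
```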
30 pages, 4418 KiB  
Article
Towards an Energy Consumption Index for Deep Learning Models: A Comparative Analysis of Architectures, GPUs, and Measurement Tools
by Sergio Aquino-Brítez, Pablo García-Sánchez, Andrés Ortiz and Diego Aquino-Brítez
Sensors 2025, 25(3), 846; https://doi.org/10.3390/s25030846 - 30 Jan 2025
Abstract
The growing global demand for computational resources, particularly in Artificial Intelligence (AI) applications, raises increasing concerns about energy consumption and its environmental impact. This study introduces a newly developed energy consumption index that evaluates the energy efficiency of Deep Learning (DL) models, providing a standardized and adaptable approach for various models. Convolutional neural networks, including both classical and modern architectures, serve as the primary case study to demonstrate the applicability of the index. Furthermore, the inclusion of the Swin Transformer, a state-of-the-art and modern non-convolutional model, highlights the adaptability of the framework to diverse architectural paradigms. This study analyzes the energy consumption during both training and inference of representative DL architectures, including AlexNet, ResNet18, VGG16, EfficientNet-B3, ConvNeXt-T, and Swin Transformer, trained on the Imagenette dataset using TITAN XP and GTX 1080 GPUs. Energy measurements are obtained using sensor-based tools, including OpenZmeter (v2) with integrated electrical sensors. Additionally, software-based tools such as CarbonTracker (v1.2.5) and CodeCarbon (v2.4.1) retrieve energy consumption data from computational component sensors. The results reveal significant differences in energy efficiency across architectures and GPUs, providing insights into the trade-offs between model performance and energy use. By offering a flexible framework for comparing energy efficiency across DL models, this study advances sustainability in AI systems, supporting accurate and standardized energy evaluations applicable to various computational settings. Full article
(This article belongs to the Special Issue Sensor Application for Smart and Sustainable Energy Management)
Figures: (1) Images from the Imagenette dataset (an ImageNet subset with 10 categories); (2) Comparison of the deep learning architectures (AlexNet, VGG16, ResNet18, EfficientNet-B3, Swin-T, ConvNeXt-T); (3) Schematic diagram of the OpenZmeter (ARM board, AC/DC converter, LiPo battery, and electrical sensors); (4) CodeCarbon energy measurement process (energy E in kWh estimated from RAM, CPU, and GPU sensors; carbon intensity C in kgCO2eq/kWh); (5) Carbontracker estimation process, with emissions computed as CO2eq = C × E; (6) Experimental framework for evaluating performance and energy consumption; (7) Energy consumption (kWh) during training on TITAN Xp and GTX 1080 Ti GPUs, comparing OpenZmeter, CodeCarbon, and Carbontracker; (8) Average execution times during training; (9) CO2 emissions during training (CodeCarbon); (10, 11) Active power consumption during training on TITAN Xp and GTX 1080 Ti (OpenZmeter); (12) Kappa–Energy Index on both GPUs, comparing OpenZmeter and CodeCarbon; (13, 14) Energy consumption by component (RAM, CPU, GPU) during training and inference (CodeCarbon).
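A minimal sketch of the kind of software-based energy/emissions measurement used above, assuming CodeCarbon's EmissionsTracker API; the workload and the numbers in the final lines are placeholders.

```python
from codecarbon import EmissionsTracker  # pip install codecarbon

def dummy_workload():
    # Stand-in for a real training loop (e.g., one epoch of ResNet18 on Imagenette).
    return sum(i * i for i in range(2_000_000))

tracker = EmissionsTracker()   # samples CPU/GPU/RAM power while the code runs
tracker.start()
dummy_workload()
emissions_kg = tracker.stop()  # estimated kg CO2eq for the tracked block
print(f"estimated emissions: {emissions_kg:.6f} kg CO2eq")

# The underlying relation such tools use (see the Carbontracker figure caption):
# CO2eq = C * E, with C the carbon intensity (kgCO2eq/kWh) and E the energy (kWh).
energy_kwh, carbon_intensity = 0.35, 0.25          # placeholder values
print(f"CO2eq = {carbon_intensity * energy_kwh:.3f} kg")
```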
22 pages, 5791 KiB  
Article
Vibration Analysis Using Multi-Layer Perceptron Neural Networks for Rotor Imbalance Detection in Quadrotor UAV
by Ba Tarfi Salem Abdullah Salem, Mohd Na’im Abdullah, Faizal Mustapha, Nur Shahirah Atifah Kanirai and Mazli Mustapha
Drones 2025, 9(2), 102; https://doi.org/10.3390/drones9020102 - 30 Jan 2025
Abstract
Rotor imbalance in quadrotor UAVs poses a critical challenge, compromising flight stability, increasing maintenance demands, and reducing overall operational efficiency. Traditional vibration analysis methods, such as Fast Fourier Transform (FFT) and wavelet analysis, often struggle with non-stationary signals and real-time data processing, limiting their effectiveness under dynamic UAV operating conditions. To address these challenges, this study develops a machine learning-based vibration analysis system using a Multi-Layer Perceptron (MLP) neural network for real-time rotor imbalance detection. The system integrates Micro-Electro-Mechanical Systems (MEMS) sensors for vibration data acquisition, preprocessing techniques for noise reduction and feature extraction, and an optimized MLP architecture tailored to high-dimensional vibration data. Experimental validation was conducted under controlled flight scenarios, collecting a comprehensive dataset of 800 samples representing both balanced and imbalanced rotor conditions. The optimized MLP model, featuring five hidden layers, achieved a Root Mean Squared Error (RMSE) of 0.1414 and a correlation coefficient (R2) of 0.9224 on the test dataset, demonstrating high accuracy and reliability. This study highlights the potential of MLP-based diagnostics to enhance UAV reliability, safety, and operational efficiency, providing a scalable and effective solution for rotor imbalance detection in dynamic environments. The findings offer significant implications for improving UAV performance in addition to minimizing downtime in various industrial and commercial applications. Full article
Figures: (1) Flowchart of the research; (2) UAV frame configuration; (3) Quadcopter arm and MEMS sensor placement; (4) Data processing flowchart; (5) Multi-Layer Perceptron architecture with five hidden layers; (6–9) Vibration acceleration of balanced and imbalanced rotors for arms 1–4 across the X, Y, and Z axes; (10) Cross-entropy performance of training, validation, and testing; (11) Confusion matrices for the ANN; (12) Regression plots for training, validation, testing, and overall datasets with correlation coefficients; (13) Gradient dynamics and validation checks during training.
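A rough sketch of the evaluation pipeline described above, using scikit-learn: an MLP with five hidden layers (the layer widths here are arbitrary) fitted on synthetic stand-in vibration features and scored with RMSE and R².

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for preprocessed vibration features (the paper uses 800 MEMS samples).
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 12))                    # 12 features per sample is an assumption
y = (0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=800) > 0).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Five hidden layers as in the paper; the layer widths below are placeholders.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64, 32, 32, 16), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)

pred = mlp.predict(X_te)
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
print(f"RMSE = {rmse:.4f}, R^2 = {r2_score(y_te, pred):.4f}")
```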
19 pages, 10260 KiB  
Article
Improving the Seismic Impedance Inversion by Fully Convolutional Neural Network
by Liurong Tao, Zhiwei Gu and Haoran Ren
J. Mar. Sci. Eng. 2025, 13(2), 262; https://doi.org/10.3390/jmse13020262 - 30 Jan 2025
Abstract
Applying deep neural networks (DNNs) to broadband seismic wave impedance inversion is challenging, especially in generalizing from synthetic to field data, which limits the exploitation of their nonlinear mapping capabilities. While many research studies are about advanced and enhanced architectures of DNNs, this article explores how variations in input data affect DNNs and consequently enhance their generalizability and inversion performance. This study introduces a novel data pre-processing strategy based on histogram equalization and an iterative testing strategy. By employing a U-Net architecture within a fully convolutional neural network (FCN) exclusively trained on synthetic and monochrome data, including post-stack profile, and 1D linear background impedance profiles, we successfully achieve broadband impedance inversion for both new synthetic data and marine seismic data by integrating imaging profiles with background impedance profiles. Notably, the proposed method is applied to reverse time migration (RTM) data from the Ceduna sub-basin, located in offshore southern Australia, significantly expanding the wavenumber bandwidth of the available data. This demonstrates its generalizability and improved inversion performance. Our findings offer new insights into the challenges of seismic data fusion and promote the utilization of deep neural networks for practical seismic inversion and outcomes improvement. Full article
(This article belongs to the Special Issue Modeling and Waveform Inversion of Marine Seismic Data)
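The pre-processing strategy above is based on histogram equalization; a minimal NumPy implementation applied to a toy low-contrast "section" is sketched below.

```python
import numpy as np

def histogram_equalize(img: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Map image intensities through the empirical CDF (classic histogram equalization)."""
    flat = img.ravel()
    hist, bin_edges = np.histogram(flat, bins=n_bins, density=True)
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]                                    # normalize to [0, 1]
    equalized = np.interp(flat, bin_edges[:-1], cdf)  # look up each value in the CDF
    return equalized.reshape(img.shape)

# Toy "seismic section": low-contrast random amplitudes.
section = np.random.normal(loc=0.0, scale=0.2, size=(128, 128))
print("before:", section.min(), section.max())
eq = histogram_equalize(section)
print("after :", eq.min(), eq.max())                  # spread roughly uniformly over [0, 1]
```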
14 pages, 9996 KiB  
Article
Road Extraction from Remote Sensing Images Using a Skip-Connected Parallel CNN-Transformer Encoder-Decoder Model
by Linger Gui, Xingjian Gu, Fen Huang, Shougang Ren, Huanhuan Qin and Chengcheng Fan
Appl. Sci. 2025, 15(3), 1427; https://doi.org/10.3390/app15031427 - 30 Jan 2025
Abstract
Extracting roads from remote sensing images holds significant practical value across fields like urban planning, traffic management, and disaster monitoring. Current Convolutional Neural Network (CNN) methods, praised for their robust local feature learning enabled by inductive biases, deliver impressive results. However, they face challenges in capturing global context and accurately extracting the linear features of roads due to their localized receptive fields. To address these shortcomings of traditional methods, this paper proposes a novel parallel encoder architecture that integrates a CNN Encoder Module (CEM) with a Transformer Encoder Module (TEM). The integration combines the CEM’s strength in local feature extraction with the TEM’s ability to incorporate global context, achieving complementary advantages and overcoming limitations of both Transformers and CNNs. Furthermore, the architecture also includes a Linear Convolution Module (LCM), which uses linear convolutions tailored to the shape and distribution of roads. By capturing image features in four specific directions, the LCM significantly improves the model’s ability to detect and represent global and linear road features. Experimental results demonstrate that our proposed method achieves substantial improvements on the German-Street Dataset and the Massachusetts Roads Dataset, increasing the Intersection over Union (IoU) of road class by at least 3% and the overall F1 score by at least 2%. Full article
(This article belongs to the Special Issue Deep Learning and Digital Image Processing)
Figures: (1) Overall architecture: a multi-level encoder with parallel CEM and TEM, and the LCM connected to the corresponding encoder levels via skip connections; (2) Encoder module structure (CEM, TEM, and the TEM attention mechanism); (3) The LCM's four-directional convolutions for contextual information, with skip connections to retain key pixel data; (4) Visual comparison of methods on the German-Street dataset (red boxes mark ground-truth misidentifications); (5) Visual comparison on the Massachusetts Roads dataset (red boxes mark areas where the proposed method gives better road-extraction continuity); (A1) Existing challenges in road extraction: complex environments such as trees and shadows, subtle inter-class differences between roads and background objects, and category imbalance.
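The reported metrics, road-class IoU and F1, can be computed from binary masks as in this small NumPy sketch with toy masks.

```python
import numpy as np

def road_iou_f1(pred: np.ndarray, target: np.ndarray):
    """IoU and F1 of the road (positive) class for binary masks."""
    tp = np.logical_and(pred == 1, target == 1).sum()
    fp = np.logical_and(pred == 1, target == 0).sum()
    fn = np.logical_and(pred == 0, target == 1).sum()
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

target = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0]])
pred   = np.array([[0, 1, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0]])
iou, f1 = road_iou_f1(pred, target)
print(f"road IoU = {iou:.3f}, F1 = {f1:.3f}")
```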
17 pages, 3114 KiB  
Article
Real-Time Communication Aid System for Korean Dysarthric Speech
by Kwanghyun Park and Jungpyo Hong
Appl. Sci. 2025, 15(3), 1416; https://doi.org/10.3390/app15031416 - 30 Jan 2025
Abstract
Dysarthria is a speech disorder characterized by difficulties in articulation and vocalization due to impaired control of the articulatory system. Around 30% of individuals with speech disorders have dysarthria, facing significant communication challenges. Existing assistive tools for dysarthria either require additional manipulation or only provide word-level speech support, limiting their ability to support effective communication in real-world situations. Thus, this paper proposes a real-time communication aid system that converts sentence-level Korean dysarthric speech to non-dysarthric normal speech. The proposed system consists of two main parts in cascading form. Specifically, a Korean Automatic Speech Recognition (ASR) model is trained with dysarthric utterances using a conformer-based architecture and the graph transducer network–connectionist temporal classification algorithm, significantly enhancing recognition performance over previous models. Subsequently, a Korean Text-To-Speech (TTS) model based on Jointly Training FastSpeech2 and HiFi-GAN for end-to-end Text-to-Speech (JETS) is pipelined to synthesize high-quality non-dysarthric normal speech. These models are integrated into a single system on an app server, which receives 5–10 s of dysarthric speech and converts it to normal speech after 2–3 s. This can provide a practical communication aid for people with dysarthria. Full article
Figures: (1) Flowchart of the existing state-of-the-art Korean dysarthric ASR model; (2) Structural diagram of the proposed real-time dysarthria communication aid system; (3) Flowchart of the learning process for the Korean dysarthric ASR model; (4) Flowchart of the learning process for the conformer encoder layer block; (5) Flowchart of the learning process in GTN-CTC; (6) Flowchart of the learning process in JETS; (7, 8) Visualization of the system output for two Korean dysarthric speech inputs ("At dawn, it thundered and rained"; "I ate bibimbap with a gentle breeze from the sea"), with gray areas marking silent segments caused by dysarthric speech disfluencies.
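A structural sketch of the cascaded ASR-to-TTS pipeline described above; the asr_transcribe and tts_synthesize functions here are hypothetical placeholders, since the abstract does not expose the models' actual interfaces.

```python
import time

# Hypothetical stand-ins for the paper's conformer-based ASR and JETS-based TTS models;
# the real model loading/inference APIs are not described in the abstract.
def asr_transcribe(dysarthric_wav: bytes) -> str:
    return "saebyeoge cheondungi chigo biga naeryeosseoyo"   # placeholder transcription

def tts_synthesize(text: str) -> bytes:
    return b"RIFF...WAV..."                                   # placeholder waveform bytes

def communication_aid(dysarthric_wav: bytes) -> bytes:
    """Cascade: dysarthric speech -> text -> non-dysarthric synthetic speech."""
    start = time.time()
    text = asr_transcribe(dysarthric_wav)        # step 1: Korean dysarthric ASR
    normal_wav = tts_synthesize(text)            # step 2: Korean TTS (JETS-style)
    print(f"converted in {time.time() - start:.2f} s (paper reports ~2-3 s on its server)")
    return normal_wav

communication_aid(b"\x00" * 16000)
```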
18 pages, 3827 KiB  
Article
A Novel Virtual Navigation Route Generation Scheme for Augmented Reality Car Navigation System
by Yu-Chen Lin, Yu-Ching Chan and Ming-Chih Lin
Sensors 2025, 25(3), 820; https://doi.org/10.3390/s25030820 - 30 Jan 2025
Abstract
This paper develops a novel virtual navigation route generation scheme for an augmented reality (AR) car navigation system based on the generative adversarial network–long short-term memory network (GAN–LSTM) framework with an integrated camera and GPS module. Unlike the present AR car navigation systems, the virtual navigation route is “autonomously” generated in captured images rather than superimposed on the image utilizing the pre-rendered 3D content, such as an arrow or trajectory, which not only provide a more authentic and correct AR effect to the user but also correctly guide the driver earlier when driving in complex road traffic environments. First, an evolved fully convolutional network architecture which uses a top-view image through an inverse perspective mapping scheme as input is utilized to obtain a more accurate semantic segmentation result for the lane markings in the traffic scene. Next, according to the above segmentation result and known location information from path planning, an AR Navigation-Nets based on an LSTM framework is proposed to predict the global relationship codes of the virtual navigation route. Simultaneously, the discriminator is utilized to evaluate the generated virtual navigation route that can approximate the real-world vehicle trajectory. Finally, the virtual navigation route can be superimposed on the original image with the correct ratio and position through an IPM process. Full article
(This article belongs to the Section Navigation and Positioning)
Figures: (1) Commercial automotive navigation system (Garmin): a virtual guidance path superimposed on the electronic map with voice guidance, and electronic maps combined with virtual reality; (2) AR navigation systems: Google's Live View, Phiar's app, and Narzt's system showing a hidden exit where other vehicles or terrain restrict the driver's view; (3) Overview of the proposed augmented reality navigation system; (4) Flowchart of route planning and coordinate data acquisition; (5) Dataset construction from bird's-eye-view images and verification on the original perspective; (6) Results for lane marking detection; (7) Diagram of the proposed AR Navigation-Nets; (8) Navigation route position grid-based prediction with classification on a stacked LSTM architecture; (9) Diagram of the proposed discriminator; (10) Demonstration of the proposed AR navigation system in challenging traffic scenes (straight and curved multi-lane roads, highway exit and merge guidance, freeway on-ramp entry, and early lane-change guidance before a turn); (11) Comparison of non-adversarial and adversarial training strategies for the generator.
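The inverse perspective mapping (IPM) step mentioned above can be sketched with OpenCV's homography utilities; the source/destination point correspondences below are made up, whereas a real system would derive them from camera calibration.

```python
import cv2
import numpy as np

# Synthetic camera frame; in practice this is a dash-camera image of the road ahead.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Four points on the road plane in the image and where they should land in the top view.
# These correspondences are invented for this sketch.
src = np.float32([[220, 300], [420, 300], [620, 470], [20, 470]])
dst = np.float32([[160, 0], [480, 0], [480, 480], [160, 480]])

H = cv2.getPerspectiveTransform(src, dst)          # homography for the road plane
top_view = cv2.warpPerspective(frame, H, (640, 480))
print(top_view.shape)

# The inverse mapping projects content generated in the top view back onto the image,
# which is how a virtual route drawn in the top view can be overlaid at the right scale.
H_inv = np.linalg.inv(H)
reprojected = cv2.warpPerspective(top_view, H_inv, (640, 480))
```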
32 pages, 11003 KiB  
Article
Upgrading a Low-Cost Seismograph for Monitoring Local Seismicity
by Ioannis Vlachos, Marios N. Anagnostou, Markos Avlonitis and Vasileios Karakostas
GeoHazards 2025, 6(1), 4; https://doi.org/10.3390/geohazards6010004 - 29 Jan 2025
Abstract
The use of a dense network of commercial high-cost seismographs for earthquake monitoring is often financially unfeasible. A viable alternative to address this limitation is the development of a network of low-cost seismographs capable of monitoring local seismic events with a precision comparable to that of high-cost instruments within a specified distance from the epicenter. The primary aim of this study is to compare the performance of an advanced, contemporary low-cost seismograph with that of a commercial, high-cost seismograph. The proposed system is enhanced through the integration of a 24-bit analog-to-digital converter board and an optimized architecture for a low-noise signal amplifier employing active components for seismic signal detection. To calibrate and assess the performance of the low-cost seismograph, an installation was deployed in a region of high seismic activity in Evgiros, Lefkada Island, Greece. The low-cost system was co-located with a high-resolution 24-bit commercial digitizer, equipped with a broadband (30 s—50 Hz) seismometer. An uninterrupted dataset was collected from the low-cost system over a period of more than two years, encompassing 60 local events with magnitudes ranging from 0.9 to 3.2, epicentral distances from 5.71 km to 23.45 km, and focal depths from 1.83 km to 19.69 km. Preliminary findings demonstrate a significant improvement in the accuracy of earthquake magnitude estimation compared to the initial configuration of the low-cost seismograph. Specifically, the proposed system achieved a mean error of ±0.087 when benchmarked against the data collected by the high-cost commercial seismograph. These results underscore the potential of low-cost seismographs to serve as an effective and financially accessible solution for local seismic monitoring. Full article
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Low-cost seismograph setup, (<b>b</b>) High-cost seismograph used as reference REFTEK-130, (<b>c</b>) High-cost seismometer Guralp CMG-40T Intermediate Sensor.</p>
Full article ">Figure 2
<p>Block diagram of the Low-Cost System setup. The seismic signal produces ground motion that is sensed from the 4.5Hz geophone at the upper left side of the schematic. Signal is entered to the input of the Low-Noise Preamplifier (grey box) via High-Pass Filter then amplified and passed through a Low-Pass Filter. The signal is then fed into the input of the Datalogger-Digitizer (red box) and converted from analog to digital form with a resolution of 24 bits. Global position system board (Gps) and Real-time Clock board (Rtc) provide maximum time accuracy to the data recorded with a sampling rate of 200 sps with timestamp by the Microcomputer Board. The data are stored on the Sd-Card in chunks of five minutes and then transmitted via an Internet connection to the CMODLab server of Ionian University. A step-down converter circuit board is used to supply the appropriate voltage for the system. A battery charger-maintainer and a 12V DC battery with a capacity of 55 Ah are used for grid operation, ensuring 24/7 uninterruptible operation even during power failures lasting for days.</p>
Full article ">Figure 3
<p>(<b>a</b>) The geophone vertical sensor used in this system is the Geospace GS-11D model, featuring an internal frequency of 4.5 Hz, an internal resistance of 380 ohms, and a sensitivity of 32 V/m/s. When no shunt resistor is connected in parallel to its two pins, the damping factor is ζ = 0.34. The sensor is utilized both with and without its housing, depending on the specific application requirements. (<b>b</b>) Specifications of the used seismic velocity sensor (geophone model Geospace GS-11D, resonance frequency 4.5 Hz, internal resistor 380 ohm. (<b>c</b>) The manufacturer’s sensor response curve (output versus frequency chart) illustrates the geophone’s performance for different damping factors (ζ). Curve A represents a damping factor of ζ = 0.34 (no shunt resistor in parallel, with a sensitivity of 32 V/m/s). Curve B corresponds to a damping factor of ζ = 0.5 (shunt resistor in parallel equal to 4420 ohms, sensitivity of 29.4 V/m/s). Curve C represents a damping factor of ζ = 0.707 (shunt resistor in parallel equal to 1740 ohms, with a sensitivity of 27.2 V/m/s). According to [<a href="#B30-geohazards-06-00004" class="html-bibr">30</a>], the low-cost 4.5 Hz geophone sensor is a highly suitable choice for recording earthquake signals within a short radial distance from the point of installation.</p>
Full article ">Figure 4
<p>(<b>a</b>) Schematic diagram of the low noise preamplifier using the operational amplifier (ADA4528-1) as active component. (<b>b</b>) Noise level response simulation of the signal preamplifier with a higher noise level equal to 2.29 μV/√Hz at the frequency of 2.29 Hz at the output of the circuit (out 1).</p>
Full article ">Figure 5
<p>The frequency response simulation of the signal preamplifier demonstrates a linear response across a frequency spectrum of 28 Hz, with a lower frequency limit of 0.5 Hz and an upper-frequency limit of 28.5 Hz.</p>
Full article ">Figure 6
<p>(<b>a</b>) Installation point of the low-cost seismograph near the high-cost seismograph inside the old school of Evgiros village. (<b>b</b>) Installation location Evgiros Village at Lefkada Island (light blue polygon dot) of both systems, with the locations and the magnitudesof the sixty (60) recording seismic events located at Lefkada island (Part of Greek territory map). (<b>c</b>) The seismicity of Lefkada Island from January 2019 to December 2024 is visualized for seismic events with magnitudes of M &lt; 2 or higher, represented by different color dots and dot sizes correspond to the events magnitudes. Seismicity data catalog created from Aristotle University of Thessaloniki. and it is based on Geophysics Department of the Aristotle University of Thessaloniki, the bulletin of the Geodynamic Institute of the National observatory of Athens. In affection, more earthquakes are analysed and located using all the available data of the recordings of the seismological stations located in the Ionian Islands (<b>d</b>).</p>
Full article ">Figure 6 Cont.
<p>(<b>a</b>) Installation point of the low-cost seismograph near the high-cost seismograph inside the old school of Evgiros village. (<b>b</b>) Installation location Evgiros Village at Lefkada Island (light blue polygon dot) of both systems, with the locations and the magnitudesof the sixty (60) recording seismic events located at Lefkada island (Part of Greek territory map). (<b>c</b>) The seismicity of Lefkada Island from January 2019 to December 2024 is visualized for seismic events with magnitudes of M &lt; 2 or higher, represented by different color dots and dot sizes correspond to the events magnitudes. Seismicity data catalog created from Aristotle University of Thessaloniki. and it is based on Geophysics Department of the Aristotle University of Thessaloniki, the bulletin of the Geodynamic Institute of the National observatory of Athens. In affection, more earthquakes are analysed and located using all the available data of the recordings of the seismological stations located in the Ionian Islands (<b>d</b>).</p>
Full article ">Figure 6 Cont.
<p>(<b>a</b>) Installation point of the low-cost seismograph near the high-cost seismograph inside the old school of Evgiros village. (<b>b</b>) Installation location Evgiros Village at Lefkada Island (light blue polygon dot) of both systems, with the locations and the magnitudesof the sixty (60) recording seismic events located at Lefkada island (Part of Greek territory map). (<b>c</b>) The seismicity of Lefkada Island from January 2019 to December 2024 is visualized for seismic events with magnitudes of M &lt; 2 or higher, represented by different color dots and dot sizes correspond to the events magnitudes. Seismicity data catalog created from Aristotle University of Thessaloniki. and it is based on Geophysics Department of the Aristotle University of Thessaloniki, the bulletin of the Geodynamic Institute of the National observatory of Athens. In affection, more earthquakes are analysed and located using all the available data of the recordings of the seismological stations located in the Ionian Islands (<b>d</b>).</p>
Full article ">Figure 7
<p>Earthquakes along with their frequency spectrum responses, recorded from the low-cost system located at Greece, Lefkas Island, Evgiros village (Lat:38.621° N, Long:20.656° E) as they presented in <a href="#app1-geohazards-06-00004" class="html-app">Appendix A</a>—<a href="#geohazards-06-00004-t0A1" class="html-table">Table A1</a>: (<b>a</b>) Mag = 1.35, Epicentral distance = 15.42 Km, Focal Depth = 6.95 Km; (<b>b</b>) Mag = 1.48, Epicentral distance = 17.54 Km, Focal Depth = 2.73 Km; (<b>c</b>) Mag = 1.91, Epicentral distance = 14.37 Km, Focal Depth = 6.17 Km, (<b>d</b>) Mag = 2.27, Epicentral distance = 13.86 Km, Focal Depth = 10.77 Km; (<b>e</b>) Mag = 2.23, Epicentral distance = 11.82 Km, Focal Depth = 5.53 Km; (<b>f</b>) Mag = 2.1, Epicentral distance = 7.22 Km, Focal Depth = 7.25 Km; (<b>g</b>) Mag = 2.74, Epicentral distance = 16.63 Km, Focal Depth = 19.69 Km; (<b>h</b>) Mag = 2.84, Epicentral distance = 12.07 Km, Focal Depth = 5.68 Km; (<b>i</b>) Mag = 3.22, Epicentral distance = 15.36 Km, Focal Depth = 4.44 Km; (<b>j</b>) Mag = 2.97, Epicentral distance = 13.97 Km, Focal Depth = 5.14 Km (<a href="#app1-geohazards-06-00004" class="html-app">Appendix A</a>—<a href="#geohazards-06-00004-t0A3" class="html-table">Table A3</a>). The spectral amplitudes are attenuated in frequencies above 18-20 Hz. The earthquakes do not include vibrations in frequency ranges above 15 to 20 Hz.</p>
Full article ">Figure 7 Cont.
<p>Earthquakes along with their frequency spectrum responses, recorded from the low-cost system located at Greece, Lefkas Island, Evgiros village (Lat:38.621° N, Long:20.656° E) as they presented in <a href="#app1-geohazards-06-00004" class="html-app">Appendix A</a>—<a href="#geohazards-06-00004-t0A1" class="html-table">Table A1</a>: (<b>a</b>) Mag = 1.35, Epicentral distance = 15.42 Km, Focal Depth = 6.95 Km; (<b>b</b>) Mag = 1.48, Epicentral distance = 17.54 Km, Focal Depth = 2.73 Km; (<b>c</b>) Mag = 1.91, Epicentral distance = 14.37 Km, Focal Depth = 6.17 Km, (<b>d</b>) Mag = 2.27, Epicentral distance = 13.86 Km, Focal Depth = 10.77 Km; (<b>e</b>) Mag = 2.23, Epicentral distance = 11.82 Km, Focal Depth = 5.53 Km; (<b>f</b>) Mag = 2.1, Epicentral distance = 7.22 Km, Focal Depth = 7.25 Km; (<b>g</b>) Mag = 2.74, Epicentral distance = 16.63 Km, Focal Depth = 19.69 Km; (<b>h</b>) Mag = 2.84, Epicentral distance = 12.07 Km, Focal Depth = 5.68 Km; (<b>i</b>) Mag = 3.22, Epicentral distance = 15.36 Km, Focal Depth = 4.44 Km; (<b>j</b>) Mag = 2.97, Epicentral distance = 13.97 Km, Focal Depth = 5.14 Km (<a href="#app1-geohazards-06-00004" class="html-app">Appendix A</a>—<a href="#geohazards-06-00004-t0A3" class="html-table">Table A3</a>). The spectral amplitudes are attenuated in frequencies above 18-20 Hz. The earthquakes do not include vibrations in frequency ranges above 15 to 20 Hz.</p>
Full article ">Figure 7 Cont.
<p>Earthquakes along with their frequency spectrum responses, recorded from the low-cost system located at Greece, Lefkas Island, Evgiros village (Lat:38.621° N, Long:20.656° E) as they presented in <a href="#app1-geohazards-06-00004" class="html-app">Appendix A</a>—<a href="#geohazards-06-00004-t0A1" class="html-table">Table A1</a>: (<b>a</b>) Mag = 1.35, Epicentral distance = 15.42 Km, Focal Depth = 6.95 Km; (<b>b</b>) Mag = 1.48, Epicentral distance = 17.54 Km, Focal Depth = 2.73 Km; (<b>c</b>) Mag = 1.91, Epicentral distance = 14.37 Km, Focal Depth = 6.17 Km, (<b>d</b>) Mag = 2.27, Epicentral distance = 13.86 Km, Focal Depth = 10.77 Km; (<b>e</b>) Mag = 2.23, Epicentral distance = 11.82 Km, Focal Depth = 5.53 Km; (<b>f</b>) Mag = 2.1, Epicentral distance = 7.22 Km, Focal Depth = 7.25 Km; (<b>g</b>) Mag = 2.74, Epicentral distance = 16.63 Km, Focal Depth = 19.69 Km; (<b>h</b>) Mag = 2.84, Epicentral distance = 12.07 Km, Focal Depth = 5.68 Km; (<b>i</b>) Mag = 3.22, Epicentral distance = 15.36 Km, Focal Depth = 4.44 Km; (<b>j</b>) Mag = 2.97, Epicentral distance = 13.97 Km, Focal Depth = 5.14 Km (<a href="#app1-geohazards-06-00004" class="html-app">Appendix A</a>—<a href="#geohazards-06-00004-t0A3" class="html-table">Table A3</a>). The spectral amplitudes are attenuated in frequencies above 18-20 Hz. The earthquakes do not include vibrations in frequency ranges above 15 to 20 Hz.</p>
Figure 8. (a) Plot of the 60 analyzed seismic events from the dataset: significant magnitudes (in R), as recorded by the high-cost system (EVGI), on the x-axis versus the frequency of seismic event occurrences on the y-axis. (b) Plot of the 60 analyzed events: amplitude velocities (in millimeters per second), recorded by the low-cost system, on the x-axis versus the frequency of occurrences on the y-axis. (c) Plot of the 60 analyzed events: epicentral distances (in kilometers), recorded by the high-cost system (EVGI), on the x-axis versus the frequency of occurrences on the y-axis. (d) Plot of the 60 analyzed events: hypocentral distances (in kilometers), calculated using the Pythagorean theorem from the high-cost system (EVGI) records (Appendix A, Table A1), on the x-axis versus the frequency of occurrences on the y-axis. (e) Plot of the 60 analyzed events: focal depths (in kilometers), as recorded by the high-cost system (EVGI), on the x-axis versus the frequency of occurrences on the y-axis.
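The hypocentral distances in panel (d) above are obtained from the epicentral distance and focal depth via the Pythagorean theorem. A minimal sketch of that calculation follows; the function name is illustrative, and the example values are taken from event (g) in the Figure 7 caption.

```python
import math

def hypocentral_distance_km(epicentral_km: float, focal_depth_km: float) -> float:
    """Slant (hypocentral) distance from epicentral distance and focal depth,
    using the right-triangle (Pythagorean) relation described in the caption."""
    return math.sqrt(epicentral_km ** 2 + focal_depth_km ** 2)

# Event (g) from Figure 7: epicentral distance = 16.63 km, focal depth = 19.69 km
print(round(hypocentral_distance_km(16.63, 19.69), 2))  # -> 25.77 km
```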
Figure 9. Magnitude correlation using epicentral distance (high-cost seismograph on the x-axis vs. low-cost seismograph system on the y-axis).
Figure 10. Magnitude correlation using hypocentral distance (high-cost seismograph on the x-axis vs. low-cost seismograph system on the y-axis).
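Figures 9 and 10 plot, for each event, the magnitude reported by the high-cost station against the magnitude estimated by the low-cost system. A least-squares linear fit is one straightforward way to quantify such a correlation; the sketch below assumes paired magnitude estimates are available, and the arrays shown are placeholders rather than the study's data.

```python
import numpy as np

# Paired magnitude estimates for the same events (placeholder values only,
# not the study's data): high-cost station vs. low-cost system.
mag_high = np.array([1.35, 1.91, 2.27, 2.74, 3.22])
mag_low = np.array([1.40, 1.85, 2.30, 2.65, 3.10])

# Least-squares fit mag_low ~ a * mag_high + b, plus the Pearson correlation.
a, b = np.polyfit(mag_high, mag_low, 1)
r = np.corrcoef(mag_high, mag_low)[0, 1]
print(f"slope = {a:.2f}, intercept = {b:.2f}, r = {r:.2f}")
```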
Figure 11. Comparison of the maximum, minimum, and mean magnitude errors for epicentral and hypocentral distances presented in Table 1.
Figure 12. Comparison of the mean magnitude error for focal depths of 5–10 km and 10–15 km, combined with epicentral distances of 0–10 km and 10–23.45 km.
19 pages, 945 KiB  
Article
Graph Neural Network Learning on the Pediatric Structural Connectome
by Anand Srinivasan, Rajikha Raja, John O. Glass, Melissa M. Hudson, Noah D. Sabin, Kevin R. Krull and Wilburn E. Reddick
Tomography 2025, 11(2), 14; https://doi.org/10.3390/tomography11020014 - 29 Jan 2025
Viewed by 202
Abstract
Purpose: Sex classification is a major benchmark of previous work in learning on the structural connectome, a naturally occurring brain graph that has proven useful for studying cognitive function and impairment. While graph neural networks (GNNs), specifically graph convolutional networks (GCNs), have gained [...] Read more.
Purpose: Sex classification is a major benchmark of previous work in learning on the structural connectome, a naturally occurring brain graph that has proven useful for studying cognitive function and impairment. While graph neural networks (GNNs), specifically graph convolutional networks (GCNs), have recently gained popularity for their effectiveness in learning on graph data, achieving strong performance in adult sex classification tasks, their application to pediatric populations remains unexplored. We seek to characterize the capacity of GNN models to learn connectomic patterns on pediatric data through an exploration of training techniques and architectural design choices. Methods: Two datasets were utilized: an adult BRIGHT dataset (N = 147 Hodgkin’s lymphoma survivors and N = 162 age-similar controls) and a pediatric Human Connectome Project in Development (HCP-D) dataset (N = 135 healthy subjects). Two GNN models (GCN simple and GCN residual), a deep neural network (multi-layer perceptron), and two standard machine learning models (random forest and support vector machine) were trained. Architecture exploration experiments were conducted to evaluate the impact of network depth, pooling techniques, and skip connections on the ability of GNN models to capture connectomic patterns. Models were assessed across a range of metrics including accuracy, AUC score, and adversarial robustness. Results: GNNs outperformed other models across both populations. Notably, adult GNN models achieved 85.1% accuracy in sex classification on unseen adult participants, consistent with prior studies. Both extending the adult models to the pediatric dataset and training directly on the smaller pediatric dataset yielded sub-optimal performance. Using adult data to augment pediatric models, the best GNN achieved comparable accuracy across unseen pediatric (83.0%) and adult (81.3%) participants. Adversarial sensitivity experiments showed that the simple GCN remained the most robust to perturbations, followed by the multi-layer perceptron and the residual GCN. Conclusions: These findings underscore the potential of GNNs in advancing our understanding of sex-specific neurological development and disorders and highlight the importance of data augmentation in overcoming challenges associated with small pediatric datasets. Further, they highlight relevant tradeoffs in the design landscape of connectomic GNNs. For example, while the simpler GNN model tested exhibits marginally worse accuracy and AUC scores in comparison to the more complex residual GNN, it demonstrates a higher degree of adversarial robustness. Full article
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
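The abstract above contrasts a simple GCN with a residual GCN for graph-level classification on connectomes. The following is a minimal sketch of the residual variant, assuming PyTorch Geometric is available; the layer widths, depth, and class name are illustrative rather than taken from the paper, and the "simple" variant described would omit the residual addition inside the loop.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class ResidualGCN(torch.nn.Module):
    """Sketch of a GCN with skip connections for graph-level classification (e.g., sex)."""
    def __init__(self, in_dim: int, hidden: int = 64, n_classes: int = 2, depth: int = 3):
        super().__init__()
        self.input_proj = torch.nn.Linear(in_dim, hidden)
        self.convs = torch.nn.ModuleList([GCNConv(hidden, hidden) for _ in range(depth)])
        self.readout = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        h = self.input_proj(x)
        for conv in self.convs:
            h = h + F.relu(conv(h, edge_index))  # residual (skip) connection
        h = global_mean_pool(h, batch)           # pool node features to one embedding per graph
        return self.readout(h)                   # class logits per graph
```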
38 pages, 6502 KiB  
Article
Advanced Hybrid Models for Air Pollution Forecasting: Combining SARIMA and BiLSTM Architectures
by Sabina-Cristiana Necula, Ileana Hauer, Doina Fotache and Luminița Hurbean
Electronics 2025, 14(3), 549; https://doi.org/10.3390/electronics14030549 - 29 Jan 2025
Viewed by 210
Abstract
This study explores a hybrid forecasting framework for air pollutant concentrations (PM10, PM2.5, and NO2) that integrates Seasonal Autoregressive Integrated Moving Average (SARIMA) models with Bidirectional Long Short-Term Memory (BiLSTM) networks. By leveraging SARIMA’s strength in linear [...] Read more.
This study explores a hybrid forecasting framework for air pollutant concentrations (PM10, PM2.5, and NO2) that integrates Seasonal Autoregressive Integrated Moving Average (SARIMA) models with Bidirectional Long Short-Term Memory (BiLSTM) networks. By leveraging SARIMA’s strength in linear and seasonal trend modeling and addressing nonlinear dependencies using BiLSTM, the framework incorporates Box-Cox transformations and Fourier terms to enhance variance stabilization and seasonal representation. Additionally, attention mechanisms are employed to prioritize temporal features, refining forecast accuracy. Using five years of daily pollutant data from Romania’s National Air Quality Monitoring Network, the models were rigorously evaluated across short-term (1-day), medium-term (7-day), and long-term (30-day) horizons. Metrics such as RMSE, MAE, and MAPE revealed the hybrid models’ superior performance in capturing complex pollutant dynamics, particularly for PM2.5 and PM10. The configuration combining SARIMA with BiLSTM, Fourier terms, and attention demonstrated consistent improvements in predictive accuracy and interpretability, with attention mechanisms proving effective for extreme values and long-term dependencies. This study highlights the benefits of combining statistical preprocessing with advanced neural architectures, offering a robust and scalable solution for air quality forecasting. The findings provide valuable insights for environmental policymakers and urban planners, emphasizing the potential of hybrid models for improving air quality management and decision-making in dynamic urban environments. Full article
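The hybrid design described above fits SARIMA to capture linear and seasonal structure and lets a BiLSTM model what SARIMA misses. One common way to realize such a combination is to train the BiLSTM on SARIMA's in-sample residuals and sum the two forecasts; the sketch below assumes statsmodels and Keras, the (seasonal) orders, window length, and layer sizes are placeholders rather than the paper's configuration, and the Box-Cox transformation, Fourier terms, and attention mechanism are omitted for brevity.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from tensorflow import keras

def fit_hybrid(y: np.ndarray, order=(1, 1, 1), seasonal=(1, 0, 1, 7), window=14):
    # 1) Linear/seasonal component: SARIMA fitted to the raw pollutant series.
    sarima = SARIMAX(y, order=order, seasonal_order=seasonal).fit(disp=False)
    resid = y - sarima.fittedvalues

    # 2) Nonlinear component: BiLSTM trained on sliding windows of SARIMA residuals.
    X = np.array([resid[i:i + window] for i in range(len(resid) - window)])[..., None]
    target = resid[window:]
    bilstm = keras.Sequential([
        keras.layers.Input(shape=(window, 1)),
        keras.layers.Bidirectional(keras.layers.LSTM(32)),
        keras.layers.Dense(1),
    ])
    bilstm.compile(optimizer="adam", loss="mse")
    bilstm.fit(X, target, epochs=20, verbose=0)

    # Final forecast = SARIMA forecast + BiLSTM estimate of the next residual.
    return sarima, bilstm
```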