Search Results (4,754)

Search Parameters:
Keywords = temporal features

19 pages, 4203 KiB  
Article
Sensitivity of Spiking Neural Networks Due to Input Perturbation
by Haoran Zhu, Xiaoqin Zeng, Yang Zou and Jinfeng Zhou
Brain Sci. 2024, 14(11), 1149; https://doi.org/10.3390/brainsci14111149 (registering DOI) - 16 Nov 2024
Abstract
Background. To investigate the behavior of spiking neural networks (SNNs), sensitivity to input perturbation serves as an effective metric for assessing the influence on the network output. However, existing methods fall short in evaluating the sensitivity of SNNs featuring biologically plausible leaky integrate-and-fire (LIF) neurons due to the intricate neuronal dynamics during the feedforward process. Methods. This paper first defines the sensitivity of a temporal-coded spiking neuron (SN) as the deviation between the perturbed and unperturbed output under a given input perturbation with respect to overall inputs. The sensitivity algorithm for an entire SNN is then derived iteratively from the sensitivity of each individual neuron. Instead of the actual firing time, the desired firing time is employed to derive a more precise analytical expression of the sensitivity. Moreover, the expectation of the membrane potential difference is used to quantify the magnitude of the input deviation. Results/Conclusions. The theoretical results achieved with the proposed algorithm are in reasonable agreement with the simulation results obtained with extensive input data. The sensitivity varies monotonically with changes in most parameters, providing valuable insights for choosing appropriate values to construct the network. The exception is the number of time steps: the sensitivity exhibits a piecewise decreasing tendency with respect to it, with the length and starting point of each piece contingent on the specific parameter values of the neuron.
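The sensitivity notion above can be illustrated with a brute-force Monte Carlo counterpart: simulate an LIF neuron with and without a bounded input perturbation and measure the first-spike-time deviation. This is a hedged sketch, not the paper's analytical algorithm; the neuron parameters (`tau`, `v_th`), the constant drive, and the perturbation bound are all illustrative values.

```python
import numpy as np

def lif_spike_times(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron and return its spike times.
    `inputs` is a 1-D array of input current per time step."""
    v, times = 0.0, []
    for t, i_t in enumerate(inputs):
        v += dt * (-v / tau + i_t)   # leaky integration of the membrane potential
        if v >= v_th:                # threshold crossing -> spike, then reset
            times.append(t * dt)
            v = 0.0
    return np.array(times)

def empirical_sensitivity(inputs, eps=0.05, n_trials=200, seed=0):
    """Mean first-spike-time deviation under random input perturbations
    bounded by `eps` -- a Monte Carlo stand-in for analytical sensitivity."""
    rng = np.random.default_rng(seed)
    base = lif_spike_times(inputs)
    devs = []
    for _ in range(n_trials):
        perturbed = inputs + rng.uniform(-eps, eps, size=inputs.shape)
        spikes = lif_spike_times(perturbed)
        if len(base) and len(spikes):
            devs.append(abs(spikes[0] - base[0]))
    return float(np.mean(devs)) if devs else float("nan")

inp = np.full(100, 0.08)             # constant drive, hypothetical units
s = empirical_sensitivity(inp)
```

Sweeping `eps` or `tau` in such a simulation is one way to cross-check the monotonic trends the abstract reports.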
26 pages, 2737 KiB  
Article
Multiscale Spatiotemporal Variation Analysis of Regional Water Use Efficiency Based on Multifractals
by Tong Zhao, Yanan Wang, Yulu Zhang, Qingyun Wang, Penghai Wu, Hui Yang, Zongyi He and Junli Li
Remote Sens. 2024, 16(22), 4269; https://doi.org/10.3390/rs16224269 (registering DOI) - 16 Nov 2024
Viewed by 61
Abstract
Understanding the complex variations in water use efficiency (WUE) is critical for optimizing agricultural productivity and resource management. Traditional analytical methods often fail to capture the nonlinear and multiscale variations inherent in WUE, where multifractal theory offers distinct advantages. Given its limited application in WUE studies, this paper analyzes the spatiotemporal characteristics and influencing factors of the WUE in Anhui Province from 2001 to 2022 using a multifractal, multiscale approach. The results indicated that the WUE exhibited significant interannual variation, peaking in summer, especially in August (2.4552 gC·mm⁻¹·m⁻²), with the monthly average showing an inverted "V" shape. Across different spatial and temporal scales, the WUE displayed clear multifractal characteristics. Temporally, the variation in fractal features between years was not prominent, while inter-seasonal variation was most complex in August. Spatially, the most distinct multifractal patterns were observed in hilly and mountainous areas, particularly in regions with brown soil distribution. Rainfall was identified as the primary natural driver of regional WUE changes. This study aims to promote the sustainable use of water resources while ensuring the stability of agricultural production within protected farmlands.
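Multifractal characteristics like those reported above are typically quantified via generalized dimensions D_q. As a hedged, self-contained sketch (the paper's exact estimator is not specified here), the following applies box-counting to a synthetic binomial cascade rather than to WUE data; the cascade parameter `p` and the number of levels are illustrative.

```python
import numpy as np

def binomial_cascade(p=0.7, levels=12):
    """Deterministic binomial measure on [0, 1], a textbook multifractal:
    each cell repeatedly splits its mass into fractions p and 1 - p."""
    mu = np.array([1.0])
    for _ in range(levels):
        nxt = np.empty(2 * mu.size)
        nxt[0::2] = p * mu          # left child of each cell
        nxt[1::2] = (1 - p) * mu    # right child
        mu = nxt
    return mu

def generalized_dimension(mu, q):
    """Estimate D_q from the scaling of the partition function
    chi_q(eps) = sum_i mu_i(eps)^q over dyadic box sizes eps."""
    n = mu.size
    log_eps, log_chi = [], []
    k = 1
    while k < n:
        boxes = mu.reshape(n // k, k).sum(axis=1)   # coarse-grain into boxes of k cells
        log_eps.append(np.log(k / n))
        log_chi.append(np.log((boxes ** q).sum()))
        k *= 2
    tau = np.polyfit(log_eps, log_chi, 1)[0]        # mass exponent tau(q)
    return tau / (q - 1)

mu = binomial_cascade()
d0 = generalized_dimension(mu, 1e-3)   # near the capacity dimension, close to 1
d2 = generalized_dimension(mu, 2.0)    # correlation dimension, < 1 for a multifractal
```

A monofractal series would give a flat D_q spectrum; the spread between d0 and d2 is the kind of signature the abstract calls "clear multifractal characteristics".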
19 pages, 8125 KiB  
Article
A Hybrid Deep Learning Framework for OFDM with Index Modulation Under Uncertain Channel Conditions
by Md Abdul Aziz, Md Habibur Rahman, Rana Tabassum, Mohammad Abrar Shakil Sejan, Myung-Sun Baek and Hyoung-Kyu Song
Mathematics 2024, 12(22), 3583; https://doi.org/10.3390/math12223583 (registering DOI) - 15 Nov 2024
Viewed by 233
Abstract
Index modulation (IM) is considered a promising approach for fifth-generation wireless systems due to its spectral efficiency and reduced complexity compared to conventional modulation techniques. However, IM faces difficulties in environments with unpredictable channel conditions, particularly in accurately detecting index values and dynamically adjusting index assignments. Deep learning (DL) offers a potential solution by improving detection performance and resilience through the learning of intricate patterns in varying channel conditions. In this paper, we introduce a robust detection method based on a hybrid DL (HDL) model designed specifically for orthogonal frequency-division multiplexing with IM (OFDM-IM) in challenging channel environments. The proposed HDL detector leverages a one-dimensional convolutional neural network (1D-CNN) for feature extraction, followed by a bidirectional long short-term memory (Bi-LSTM) network to capture temporal dependencies. Before feeding data into the network, the channel matrix and received signals are preprocessed using domain-specific knowledge. We evaluate the bit error rate (BER) performance of the proposed model using different optimizers and equalizers, then compare it with other models. We also evaluate throughput and spectral efficiency across varying signal-to-noise ratio (SNR) levels. Simulation results demonstrate that the proposed hybrid detector surpasses traditional and other DL-based detectors, underscoring its effectiveness for OFDM-IM under uncertain channel conditions.
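To make the index-detection problem concrete, here is a hedged toy baseline, not the paper's HDL network: one of a few subcarriers is active per symbol and carries BPSK, and the receiver picks the active index by maximum energy under AWGN. Subcarrier count, SNR values, and the energy-detector rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_im_detection(snr_db=10.0, n_sym=2000, n_sub=4):
    """Toy index-modulation link: one of n_sub subcarriers is active per
    symbol and carries a BPSK bit; the receiver detects the active index by
    maximum magnitude, then the sign of that sample gives the bit."""
    noise_std = 10 ** (-snr_db / 20)
    idx = rng.integers(0, n_sub, n_sym)            # index-selection bits
    bits = rng.integers(0, 2, n_sym)               # BPSK bit on the active carrier
    x = np.zeros((n_sym, n_sub))
    x[np.arange(n_sym), idx] = 2.0 * bits - 1.0    # +/-1 on the active subcarrier
    y = x + noise_std * rng.normal(size=x.shape)   # AWGN channel
    idx_hat = np.abs(y).argmax(axis=1)             # detect the active index
    bits_hat = (y[np.arange(n_sym), idx_hat] > 0).astype(int)
    return (idx_hat != idx).mean(), (bits_hat != bits).mean()

ie_hi, be_hi = simulate_im_detection(snr_db=15.0)  # benign channel
ie_lo, be_lo = simulate_im_detection(snr_db=0.0)   # harsh channel
```

The sharp degradation of this classical detector at low SNR is exactly the regime where the abstract argues a learned detector pays off.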
22 pages, 4599 KiB  
Article
Radar Echo Extrapolation Based on Translator Coding and Decoding Conditional Generation Adversarial Network
by Xingang Mou, Yuan He, Wenfeng Li and Xiao Zhou
Appl. Sci. 2024, 14(22), 10550; https://doi.org/10.3390/app142210550 (registering DOI) - 15 Nov 2024
Viewed by 200
Abstract
In response to the shortcomings of current spatiotemporal prediction models, which frequently encounter difficulties in temporal feature extraction and in forecasting medium- to high-intensity echo regions over extended sequences, this study presents a novel model for radar echo extrapolation that combines a translator encoder-decoder architecture with a spatiotemporal dual-discriminator conditional generative adversarial network (STD-TranslatorNet). First, an image reconstruction network is established as the generator, combining a temporal attention unit (TAU) with an encoder-decoder framework. Within this architecture, both intra-frame static attention and inter-frame dynamic attention mechanisms derive attention weights across image channels, effectively capturing the temporal evolution of time-series images. This enhances the network's capacity to comprehend local spatial features alongside global temporal dynamics, while the encoder-decoder configuration further strengthens feature extraction through image reconstruction. Second, a spatiotemporal dual discriminator is designed to capture both the temporal correlations and the spatial attributes of the generated image sequences, effectively steering the generator's output and increasing the realism of the produced images. Lastly, a composite multi-loss function is proposed to improve the network's ability to model the intricate spatiotemporal evolution of radar echo data, enabling a more comprehensive assessment of the quality of the generated images and strengthening the network's robustness. Experimental results on the standard radar echo dataset (SRAD) show that the proposed radar echo extrapolation technique performs strongly, with average per-frame critical success index (CSI) and probability of detection (POD) metrics increasing by 6.9% and 7.6%, respectively, compared to prior methods.
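The CSI and POD scores quoted above are standard contingency-table metrics computed on thresholded echo maps. A minimal sketch follows; the 35 dBZ threshold and the tiny example maps are illustrative, not from the paper.

```python
import numpy as np

def csi_pod(pred, truth, thresh=35.0):
    """Critical Success Index and Probability of Detection for radar echo
    maps thresholded at `thresh` dBZ (standard verification metrics)."""
    p = pred >= thresh
    t = truth >= thresh
    hits = np.logical_and(p, t).sum()
    misses = np.logical_and(~p, t).sum()
    false_alarms = np.logical_and(p, ~t).sum()
    total = hits + misses + false_alarms
    csi = hits / total if total else float("nan")
    pod = hits / (hits + misses) if (hits + misses) else float("nan")
    return csi, pod

truth = np.array([[40.0, 30.0], [50.0, 20.0]])   # observed reflectivity, dBZ
pred = np.array([[42.0, 36.0], [33.0, 18.0]])    # extrapolated reflectivity
csi, pod = csi_pod(pred, truth)                  # 1 hit, 1 miss, 1 false alarm
```

Here CSI = 1/3 and POD = 1/2: the one missed high-echo cell and the one false alarm each pull the scores down, which is why CSI is the stricter of the two.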
14 pages, 1028 KiB  
Article
Person Identification Using Temporal Analysis of Facial Blood Flow
by Maria Raia, Thomas Stogiannopoulos, Nikolaos Mitianoudis and Nikolaos V. Boulgouris
Electronics 2024, 13(22), 4499; https://doi.org/10.3390/electronics13224499 - 15 Nov 2024
Viewed by 249
Abstract
Biometrics play an important role in modern access control and security systems. The need for novel biometrics to complement traditional ones has been at the forefront of research. The Facial Blood Flow (FBF) biometric trait, recently proposed by our team, is a spatio-temporal representation of facial blood flow, constructed using motion magnification from facial areas where skin is visible. By design, the FBF does not need information from the eyes, nose, or mouth, and therefore yields a versatile biometric of great potential. In this work, we evaluate the effectiveness of novel temporal partitioning and Fast Fourier Transform-based features that capture the temporal evolution of facial blood flow. These new features, along with a "time-distributed" Convolutional Neural Network-based deep learning architecture, are experimentally shown to improve FBF-based person identification compared to our previous efforts. This study provides further evidence of the FBF's potential for use in biometric identification.
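The two feature families described above, per-sub-clip temporal averages and low-frequency FFT magnitudes, can be sketched as below. This is a hedged illustration: the number of sub-clips, the number of retained frequency bins, and the flattened per-pixel layout are assumptions, not the paper's exact configuration.

```python
import numpy as np

def temporal_features(clip, n_parts=5, n_freq=8):
    """Split a per-frame signal (frames x pixels) into n_parts sub-clips,
    average each sub-clip over time, and keep the first n_freq FFT
    magnitude bins per pixel as frequency-domain features."""
    frames = clip.shape[0] - clip.shape[0] % n_parts          # trim to a multiple
    parts = clip[:frames].reshape(n_parts, frames // n_parts, -1)
    averages = parts.mean(axis=1)                             # (n_parts, pixels)
    spectrum = np.abs(np.fft.rfft(clip, axis=0))[:n_freq]     # (n_freq, pixels)
    return averages, spectrum

clip = np.random.default_rng(2).normal(size=(120, 64))        # 120 frames, 64-pixel region
avg, spec = temporal_features(clip)
```

Each facial region (forehead, left, right) would yield one such feature pair, which the "time-distributed" CNN then consumes.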
Figures:
Figure 1: The proposed person identification system based on facial blood flow (FBF).
Figure 2: (a) Original face image. (b) Face detection using [32]. (c) Active Appearance Model (AAM) fit using [34]. The three control points (two from the AAM and another inferred from the other two) are highlighted. (d) Detection of the forehead region using two control points. (e,f) Detection of the left and right facial regions using the left and right control points, respectively. (The subject in this figure has agreed to have his image included in the paper for demonstration purposes.)
Figure 3: Snapshots from the three extracted areas for the subject in Figure 2. It is clear that these areas do not contain any traditional facial biometric traits.
Figure 4: Division of a video clip into five sub-clips. Temporal averages are calculated for each sub-clip.
Figure 5: Temporal features for the FBF biometric extracted from the forehead region. The averaged image template [26] is shown for subjects A and B. Lighter colors represent greater values, while darker colors represent smaller values (best seen in color).
Figure 6: Temporal features for the FBF biometric extracted from the forehead region. The proposed temporal averages of (1) are shown for subjects A and B. Lighter colors represent greater values, while darker colors represent smaller values (best seen in color).
Figure 7: Frequency-domain features for the FBF biometric extracted from the forehead region of subject A. (a) DCT features, (b) FFT features calculated using (3).
Figure 8: The proposed CNN structure with "time-distributed" 2D convolutions used in the convolutional layers of the network. Conv2D refers to a "time-distributed" 2D convolutional layer; ReLU and Softmax refer to the corresponding activation functions; Dropout refers to Dropout regularization [40]; BN refers to Batch Normalization. The numbers and sizes of the filters are indicated at the top of the respective level.
Figure 9: Two ensemble methods to combine the features from the three facial regions of interest. "Pipeline" refers to the architecture of Figure 8 without the fully connected (Dense) layers.
Figure 10: The evolution of the loss function and accuracy over 30 epochs for the proposed "time-distributed" VGG network with the FFT features and the Ensemble 2 strategy.
Figure 11: The confusion matrix for the proposed "time-distributed" VGG network with the FFT features and the Ensemble 2 strategy.
25 pages, 2899 KiB  
Article
Learning Omni-Dimensional Spatio-Temporal Dependencies for Millimeter-Wave Radar Perception
by Hang Yan, Yongji Li, Luping Wang and Shichao Chen
Remote Sens. 2024, 16(22), 4256; https://doi.org/10.3390/rs16224256 - 15 Nov 2024
Viewed by 262
Abstract
Reliable environmental perception is a prerequisite for autonomous driving. Cameras and LiDAR are sensitive to illumination and weather conditions, while millimeter-wave radar avoids these issues. Existing models rely heavily on image-based approaches, which may not fully characterize radar sensor data or efficiently exploit them for perception tasks. This paper rethinks how radar signals are modeled and proposes a novel U-shaped multilayer perceptron network (U-MLPNet) that aims to enhance the learning of omni-dimensional spatio-temporal dependencies. Our method involves innovative signal processing techniques, including a 3D CNN for spatio-temporal feature extraction and an encoder-decoder framework with cross-shaped receptive fields specifically designed to capture the sparse and non-uniform characteristics of radar signals. We conducted extensive experiments on a diverse dataset of urban driving scenarios to characterize the sensor's performance in multi-view semantic segmentation and object detection tasks. U-MLPNet achieves competitive performance against state-of-the-art (SOTA) methods, improving mAP by 3.0% and mDice by 2.7% in RD segmentation, and AR and AP by 1.77% and 2.03%, respectively, in object detection. These improvements advance radar-based perception for autonomous vehicles, potentially enhancing their reliability and safety across diverse driving conditions.
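The mDice score reported for RD segmentation is a per-class Dice coefficient averaged over classes. A minimal sketch follows; the exact averaging convention (here: average over classes that appear in either map) is an assumption, and the tiny label maps are illustrative.

```python
import numpy as np

def mean_dice(pred, truth, n_classes):
    """Mean Dice over classes: Dice_c = 2 |P_c ∩ T_c| / (|P_c| + |T_c|),
    skipping classes absent from both prediction and ground truth."""
    scores = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        denom = p.sum() + t.sum()
        if denom:
            scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))

truth = np.array([[0, 0, 1], [0, 2, 2]])   # ground-truth class labels
pred = np.array([[0, 1, 1], [0, 2, 0]])    # predicted class labels
m = mean_dice(pred, truth, 3)
```

With these maps each class scores 2/3, so the mean is also 2/3; a 2.7% mDice gain like the one quoted corresponds to a small but consistent shift in this quantity across the test set.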
Figures:
Figure 1: The complete millimeter-wave radar signal collection and preprocessing pipeline. First, the received and transmitted signals are mixed to generate raw ADC data. These signals are then subjected to various forms of FFT algorithms, resulting in the RA view, RD view, and RAD tensor, which are the RF signals prepared for further processing.
Figure 2: Overall framework of our U-MLPNet. The left part represents the multi-view encoder, the middle part is the latent space, and the right part is the dual-view decoder. The skip connections between the encoder and decoder effectively maintain the disparities between different perspectives and balance model performance. The latent space contains the U-MLP module, which can efficiently fuse multi-scale, multi-view global and local spatio-temporal features.
Figure 3: Radar RF features. The top row illustrates the CARRADA dataset with RGB images and RA, RD, and AD views arranged from left to right. The bottom row shows the echo of the CRUW dataset, with RGB images on the left and RA images on the right.
Figure 4: Overall framework of our U-MLP. The left side represents the encoder, while the right side represents the decoder. The encoder employs a lightweight MLP to extract meaningful radar features. The decoder progressively integrates these features and restores resolution in a stepwise manner.
Figure 5: The receptive field of U-MLP. The original receptive field, the receptive field proposed in this paper, and the equivalent guard band are displayed from left to right. Feature points, the guard band, and feature regions are distinguished by orange, a blue diagonal grid, and light blue, respectively.
Figure 6: Visual comparison of RA views for various algorithms on the CARRADA dataset. The pedestrian category is annotated in red, the car category in blue, and the cyclist category in green.
Figure 7: Visual comparison of RD views for various algorithms on the CARRADA dataset. The pedestrian category is highlighted in red, the car category in blue, and the cyclist category in green. (a-h) RGB images, RF images, ground truth (GT), U-MLPNet, TransRadar, PeakConv, TMVA-Net, and MVNet, respectively.
Figure 8: Polar plot of RD views for various algorithms on the CARRADA dataset across different categories. Each line represents the mIoU of a specific algorithm across these categories, with higher values indicating superior performance.
Figure 9: Visual comparison of RA views for various algorithms on the CRUW dataset. The pedestrian category is annotated in red, the car category in blue, and the cyclist category in green.
Figure 10: Qualitative testing on a nighttime dataset to evaluate the performance and robustness of U-MLPNet in complex environments.
16 pages, 966 KiB  
Article
A Diachronic Agent-Based Framework to Model MaaS Programs
by Maria Nadia Postorino and Giuseppe M. L. Sarnè
Urban Sci. 2024, 8(4), 211; https://doi.org/10.3390/urbansci8040211 - 15 Nov 2024
Viewed by 206
Abstract
In recent years, mobility as a service (MaaS) has been regarded as an opportunity to shift from private transport modes, particularly owned cars, towards shared travel solutions. Although many aspects of MaaS have been explored in the literature, issues such as platform implementation, travel solution generation, and the user's role in making the system effective still require more research. This paper extends and improves a previous study by the authors, providing more details and experiments. It proposes a diachronic network model for representing the travel services available in a given MaaS platform, using an agent-based approach to simulate the interactions between travel operators and travelers. In particular, the diachronic network model captures both the spatial and temporal features of the available transport services, while the agent-based framework represents how shared services might be used and what effects, in terms of modal split, could be expected. The final aim is to provide insights for designing the architecture of an agent-based MaaS platform in which transport operators share their data to offer seamless travel opportunities to travelers. The results obtained for a simulated test case are promising; in particular, there are interesting findings concerning the traffic congestion boundary values that would move users towards shared travel solutions.
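The congestion "boundary value" idea can be illustrated with the discrete choice machinery such frameworks typically embed: a multinomial logit split between a private car and a MaaS bundle, where car utility degrades with congestion. This is a hedged toy, with uncalibrated, illustrative coefficients rather than the paper's model.

```python
import numpy as np

def logit_shares(utilities):
    """Multinomial logit choice probabilities (numerically stabilized)."""
    e = np.exp(utilities - np.max(utilities))
    return e / e.sum()

def mode_split(congestion):
    """Toy utilities for car vs. a MaaS bundle: the car's utility loses a
    congestion-delay term, the bundle's utility stays fixed.
    `congestion` is a level in [0, 1]; coefficients are illustrative."""
    u_car = 1.0 - 2.0 * congestion
    u_maas = 0.2
    return logit_shares(np.array([u_car, u_maas]))

free_flow = mode_split(0.0)[1]   # MaaS share with no congestion
jammed = mode_split(0.9)[1]      # MaaS share under heavy congestion
```

Scanning `congestion` for the level at which the MaaS share crosses 50% gives a boundary value in the spirit of the abstract's finding.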
Figures:
Figure 1: Overview of the methodological approach.
Figure 2: Diachronic network: representation of transport supply for scheduled services.
Figure 3: The agent-based structure, including users' choices modeled by discrete choice models.
Figure 4: Multi-layer structure in the proposed framework.
Figure 5: Percentage variations of users' choices in the simulated MaaS context.
22 pages, 4879 KiB  
Article
Research on Medium- and Long-Term Hydropower Generation Forecasting Method Based on LSTM and Transformer
by Guoyong Zhang, Haochuan Li, Lingli Wang, Weiying Wang, Jun Guo, Hui Qin and Xiu Ni
Energies 2024, 17(22), 5707; https://doi.org/10.3390/en17225707 - 14 Nov 2024
Viewed by 414
Abstract
Hydropower generation is influenced by various factors such as precipitation, temperature, and installed capacity, with hydrometeorological factors exhibiting significant temporal variability. This study proposes a provincial hydropower generation forecasting method based on a Transformer and SE-Attention. In the model, the outputs of the Transformer and SE-Attention modules are fed into an LSTM layer to capture long-term data dependencies. The SE-Attention module is reintroduced to enhance the model's focus on important temporal features, and a linear layer maps the hidden state of the last time step to the final output. The proposed Transformer-LSTM-SE model was tested on provincial hydropower generation data from Yunnan, Sichuan, and Chongqing. The experimental results demonstrate that this model achieves high accuracy and stability in medium- and long-term hydropower forecasting at the provincial level, with an average accuracy improvement of 33.79% over the LSTM model and 24.30% over the Transformer-LSTM model.
(This article belongs to the Section F: Electrical Engineering)
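The SE-Attention ("squeeze-and-excitation") module named above can be sketched in a few lines: globally average over time to "squeeze" each channel, pass the result through a small bottleneck MLP to "excite" per-channel gates, then rescale the sequence. This is a generic SE block with random stand-in weights, not the paper's trained module; shapes and the reduction ratio are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_attention(x, w1, w2):
    """Squeeze-and-excitation over the channel axis of a (time, channels)
    sequence: global-average squeeze, two-layer excitation, channel rescale."""
    squeeze = x.mean(axis=0)                                  # (channels,)
    gates = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))       # bottleneck MLP + sigmoid
    return x * gates                                          # broadcast channel-wise rescale

rng = np.random.default_rng(3)
seq = rng.normal(size=(24, 8))       # 24 time steps, 8 feature channels
w1 = rng.normal(size=(2, 8))         # reduction to 2 hidden units (ratio 4)
w2 = rng.normal(size=(8, 2))
out = se_attention(seq, w1, w2)
```

Because the gates are sigmoids in (0, 1), the block can only attenuate channels, which is how it steers the downstream LSTM toward the informative temporal features.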
Figures:
Figure 1: Flowchart summarizing the research methodology.
Figure 2: Monthly hydropower generation trends: (a) Yunnan Province, (b) Sichuan Province, (c) Chongqing Municipality (2003-2023).
Figure 3: Structure of the Transformer-LSTM-SE model.
Figures 4-7: Comparison between actual and predicted hydropower generation in Sichuan Province using the BP, LSTM, Transformer-LSTM, and Transformer-LSTM-SE models, respectively, each with (a) the actual-versus-predicted comparison and (b) a correlation analysis between predicted and actual values.
Figures 8-11: The same comparisons and correlation analyses for Yunnan Province.
Figures 12-15: The same comparisons and correlation analyses for Chongqing Municipality.
Figure 16: Monthly hydropower generation forecast comparison: (a) Yunnan Province, (b) Sichuan Province, (c) Chongqing Municipality (2023).
28 pages, 9113 KiB  
Article
A Multi Source Data-Based Method for Assessing Carbon Sequestration of Urban Parks from a Spatial–Temporal Perspective: A Case Study of Shanghai Century Park
by Yiqi Wang, Jiao Yu, Weixuan Wei and Nannan Dong
Land 2024, 13(11), 1914; https://doi.org/10.3390/land13111914 - 14 Nov 2024
Viewed by 258
Abstract
As urbanization accelerates globally, urban areas have become major sources of greenhouse gas emissions. In this context, urban parks are crucial as significant carbon sinks. Using Shanghai Century Park as a case study, this study develops an applicable and reliable workflow to accurately assess the carbon sequestration capacity of urban parks from a spatial-temporal perspective. First, a random forest model is employed for biotope classification and mapping in the park based on multi-source data, including raw spectral bands, vegetation indices, and texture features. Next, the Net Primary Productivity and biomass of different biotope types are calculated, enabling dynamic monitoring of the park's carbon sequestration capacity from 2018 to 2023. The study also explores the main factors influencing changes in carbon sequestration capacity from a management perspective. The findings reveal that: (1) multi-source imagery data enhance the accuracy of biotope mapping, with winter imagery proving more precise for classification; (2) from 2018 to 2023, Century Park's carbon sequestration capacity showed a fluctuating upward trend, with significant variation in the sequestration abilities of different biotope types within the park; and (3) renovation and construction work affecting biotope types significantly impacted the park's carbon sequestration capacity. Finally, the study proposes optimization strategies focused on species selection and layout, planting density, and park management.
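The vegetation indices fed to a random-forest biotope classifier are simple band ratios. As a hedged sketch (the paper does not list its exact indices here), the following computes NDVI and SAVI from red and near-infrared reflectance; the sample reflectance values are illustrative stand-ins for vegetation, bare soil, and water pixels.

```python
import numpy as np

def vegetation_indices(red, nir):
    """NDVI and SAVI from red and near-infrared reflectance bands,
    typical per-pixel features for a biotope classifier."""
    ndvi = (nir - red) / (nir + red + 1e-12)        # normalized difference
    savi = 1.5 * (nir - red) / (nir + red + 0.5)    # soil-adjusted, L = 0.5
    return ndvi, savi

red = np.array([0.08, 0.30, 0.05])   # vegetation, bare soil, water (illustrative)
nir = np.array([0.50, 0.35, 0.02])
ndvi, savi = vegetation_indices(red, nir)
```

Dense vegetation drives NDVI toward 1 and water below 0, which is why stacking such indices on top of raw spectral bands sharpens class boundaries for the random forest.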
Figures:
Figure 1: Location of the study area: (a) China; (b) Pudong New District, Shanghai; (c) Century Park.
Figure 2: Flowchart of methodology.
Figure 3: Schematic diagram of the Random Forest model for image classification.
Figure 4: Accuracy assessment of classification results from four RF models (top: overall accuracy; bottom: Kappa coefficient).
Figure 5: Schematic diagram of biotope type transitions in Century Park from 2018 to 2020.
Figure 6: Schematic diagram of biotope type transitions in Century Park from 2020 to 2023.
Figure 7: NPP (top) and biomass (bottom) maps of Century Park in 2023.
Figure 8: Variation in CS capacity of Century Park from 2018 to 2023.
Figure 9: Variation in NPP per unit area for landscape zones in Century Park from 2018 to 2023.
Figure 10: Variation in biomass per unit area for landscape zones in Century Park from 2018 to 2023.
Figure 11: Spatial distribution of renovation and construction work in Century Park.
Figure 12: Variation in area of five biotopes and CS capacity in the Lakeside Scenic Zone.
Figure 13: Variation in area of five biotopes and CS capacity in the Scenic Forest Zone.
Figure 14: Variation in area of five biotopes and CS capacity in the Golf Course Zone.
Figure A1: Biotope maps of Century Park from 2018 to 2023.
20 pages, 25584 KiB  
Article
LIDeepDet: Deepfake Detection via Image Decomposition and Advanced Lighting Information Analysis
by Zhimao Lai, Jicheng Li, Chuntao Wang, Jianhua Wu and Donghua Jiang
Electronics 2024, 13(22), 4466; https://doi.org/10.3390/electronics13224466 - 14 Nov 2024
Viewed by 245
Abstract
The proliferation of AI-generated content (AIGC) has empowered non-experts to create highly realistic Deepfake images and videos using user-friendly software, posing significant challenges to the legal system, particularly in criminal investigations, court proceedings, and accident analyses. The absence of reliable Deepfake verification methods threatens the integrity of legal processes. In response, researchers have explored deep forgery detection, proposing various forensic techniques. However, the swift evolution of deep forgery creation and the limited generalizability of current detection methods impede practical application. We introduce a new deep forgery detection method that utilizes image decomposition and lighting inconsistency. By exploiting inherent discrepancies in imaging environments between genuine and fabricated images, this method extracts robust lighting cues and mitigates disturbances from environmental factors, revealing deeper-level alterations. A crucial element is the lighting information feature extractor, designed according to color constancy principles, to identify inconsistencies in lighting conditions. To address lighting variations, we employ a face material feature extractor using Pattern of Local Gravitational Force (PLGF), which selectively processes image patterns with defined convolutional masks to isolate and focus on reflectance coefficients, rich in textural details essential for forgery detection. Utilizing the Lambertian lighting model, we generate lighting direction vectors across frames to provide temporal context for detection. This framework processes RGB images, face reflectance maps, lighting features, and lighting direction vectors as multi-channel inputs, applying a cross-attention mechanism at the feature level to enhance detection accuracy and adaptability. 
Experimental results show that our proposed method performs exceptionally well and is widely applicable across multiple datasets, underscoring its importance in advancing deep forgery detection. Full article
(This article belongs to the Special Issue Deep Learning Approach for Secure and Trustworthy Biometric System)
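The method rests on separating scene lighting from face material (reflectance). A generic way to illustrate that decomposition is a single-scale Retinex split; note this is only an illustrative stand-in, not the paper's PLGF extractor or its color-constancy-based lighting module, and it assumes SciPy is available. The synthetic "patch" below is invented for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=15.0, eps=1e-6):
    """Split an intensity image into a smooth illumination estimate and a
    reflectance (material) residual: log R = log I - log(I * G_sigma).
    A generic decomposition, not the paper's PLGF feature extractor."""
    img = img.astype(np.float64) + eps
    illumination = gaussian_filter(img, sigma=sigma) + eps
    reflectance = np.log(img) - np.log(illumination)
    return illumination, reflectance

# Synthetic patch: fine material texture modulated by a smooth lighting ramp.
rng = np.random.default_rng(1)
texture = rng.uniform(0.4, 0.6, size=(64, 64))
lighting = np.linspace(0.5, 1.5, 64)[None, :] * np.ones((64, 1))
patch = texture * lighting

illum, refl = single_scale_retinex(patch)
# The illumination estimate absorbs the smooth ramp, leaving the
# texture-like residual in the reflectance map.
```

Forgery cues such as the over-rendered or inconsistent regions described above are easier to spot in the reflectance channel precisely because the environmental lighting component has been factored out.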
Figures:
Figure 1: Imaging process of digital image.
Figure 2: Process of image generation using generative adversarial networks.
Figure 3: Architecture of the proposed method.
Figure 4: Illustration of artifacts in deep learning-generated faces. The right-most image shows over-rendering around the nose area.
Figure 5: Illustration of inconsistent iris colors in generated faces.
Figure 6: Visualization of illumination maps for real images and four forgery methods from the FF++ database.
Figure 7: Face material map after illumination normalization. Abnormal traces in the eye and mouth regions are more noticeable.
Figure 8: Visualization of face material maps for the facial regions in real images and four forgery methods from the FF++ database for the same frame.
Figure 9: Three-dimensional lighting direction vector.
Figure 10: Two-dimensional lighting direction vector.
Figure 11: Calculation process of lighting direction.
Figure 12: Calculation of the angle of lighting direction.
Figure 13: Comparison of lighting direction angles between real videos and their corresponding Deepfake videos.
19 pages, 3317 KiB  
Article
Multi-Step Parking Demand Prediction Model Based on Multi-Graph Convolutional Transformer
by Yixiong Zhou, Xiaofei Ye, Xingchen Yan, Tao Wang and Jun Chen
Systems 2024, 12(11), 487; https://doi.org/10.3390/systems12110487 - 13 Nov 2024
Viewed by 440
Abstract
The growth of motorized vehicles and the inefficient use of parking spaces have exacerbated parking difficulties in cities. To effectively improve the utilization rate of parking spaces, it is necessary to accurately predict future parking demand. This paper proposes a deep learning model based on a multi-graph convolutional Transformer, which captures geographic spatial features through a Multi-Graph Convolutional Network (MGCN) module and mines temporal feature patterns using a Transformer module to accurately predict future multi-step parking demand. The model was validated using historical parking transaction volume data from all on-street parking lots in Nanshan District, Shenzhen, from September 2018 to March 2019, and its superiority was verified through comparative experiments with benchmark models. The results show that in the multi-step parking demand prediction task the MGCN–Transformer model achieves an MAE of 0.26, an RMSE of 0.42, and an R2 of 95.93%, demonstrating superior predictive accuracy compared to the other benchmark models. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
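The MGCN component combines graph convolutions computed over several adjacency definitions (for example, a road-distance graph and a demand-correlation graph) and sums their outputs. A minimal NumPy sketch of one such layer follows; the graph sizes, random adjacencies, and weight matrices are all hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalize an adjacency with self-loops:
    D^-1/2 (A + I) D^-1/2, the standard GCN propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_graph_conv(X, adjs, weights):
    """One multi-graph convolution: run a graph convolution per
    adjacency definition and sum the results."""
    return sum(normalized_adj(A) @ X @ W for A, W in zip(adjs, weights))

rng = np.random.default_rng(0)
n_lots, in_dim, out_dim = 6, 4, 8                 # hypothetical sizes
X = rng.normal(size=(n_lots, in_dim))             # per-lot features, one step
A_dist = (rng.random((n_lots, n_lots)) > 0.5).astype(float)  # distance graph
A_corr = (rng.random((n_lots, n_lots)) > 0.5).astype(float)  # correlation graph
A_dist = np.maximum(A_dist, A_dist.T)
A_corr = np.maximum(A_corr, A_corr.T)
Ws = [rng.normal(size=(in_dim, out_dim)) for _ in range(2)]

H = np.tanh(multi_graph_conv(X, [A_dist, A_corr], Ws))
print(H.shape)  # (6, 8)
```

In the full model, the spatially mixed features produced this way for each time step are then fed to the Transformer module, which attends across the temporal axis.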
Figures:
Figure 1: The trend of changes in the number of automobiles and parking spaces in China.
Figure 2: The distribution of on-street parking lots in Nanshan District.
Figure 3: Parking demand characteristics under different weather conditions.
Figure 4: Characteristics of the weekly parking demand distribution for Parking Lot 1 from 1 September to 7 September 2018.
Figure 5: MGCN–Transformer model.
Figure 6: Transformer model.
Figure 7: Prediction results for parking lot No. 1.
Figure 8: Training error results for parking lots in the region.
Figure 9: MGCN–Transformer ablation experiment results.
24 pages, 9643 KiB  
Article
Analysis of the Spatial-Temporal Characteristics and Driving Factors of Cultivated Land Fragmentation Under the Expansion of Urban and Rural Construction Land: A Case Study of Ezhou City
by Ke Feng, Haoran Gao, Liping Qu and Jian Gong
Land 2024, 13(11), 1905; https://doi.org/10.3390/land13111905 - 13 Nov 2024
Viewed by 310
Abstract
A systematic understanding of the spatial-temporal evolution patterns of cultivated land fragmentation (CLF), its driving factors, and its relationship with the expansion of urban and rural construction land is essential for identifying strategies to mitigate CLF in rapidly urbanizing regions. This study combined landscape fragmentation with ownership fragmentation, analyzing CLF through three dimensions: resource endowment, spatial concentration, and convenience of utilization, with eight selected indicators. By comparing village-level data from 2013 to 2022, we explored the key drivers of CLF and its conflicts with urban and rural construction land expansion. The findings indicate a clear spatial variation in village-level CLF in Ezhou, characterized by low fragmentation in the northwest and northeast, and high fragmentation in the southwest and central regions. This pattern is in contrast to Ezhou’s economic development, which decreased progressively from east to north and south. Over the study period, village-level CLF in Ezhou evolved from being primarily moderately and relatively severely fragmented to predominantly severely and relatively severely fragmented, with an overall declining trend and more pronounced polarization. At the same time, the CLF within the village region demonstrated notable spatial clustering features, with a rapid increase observed between 2013 and 2022. It was also discovered that CLF is driven by various factors, with the main influences being the proportion of construction land, land use intensity, and population density. Cultivated land is the main source of both urban construction land (UCL) and rural construction land (RCL), with average contribution rates of 46.47% and 62.62%, respectively. This research offers empirical evidence for rapid urbanization and serves as a critical reference for rural revitalization and coordinated urban–rural development, with potential guidance for future policy formulation and implementation. 
Full article
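The spatial-clustering analysis above relies on Moran's I. As a reference point, global Moran's I can be computed directly from a value vector and a spatial weight matrix; the chain-of-villages example below is synthetic and chosen only so the result is easy to verify by hand.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x under spatial weight matrix W:
    I = (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    W = W / W.sum()                      # divide out S0 up front
    z = x - x.mean()
    return n * np.sum(W * np.outer(z, z)) / np.sum(z ** 2)

# Rook-style neighbours on a 1-D chain of 6 villages.
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0

clustered = np.array([1, 1, 1, 9, 9, 9], dtype=float)
print(morans_i(clustered, W))  # 0.6: similar values cluster together
```

Positive values indicate spatial clustering of similar fragmentation levels (the pattern the study reports strengthening between 2013 and 2022); values near zero indicate spatial randomness, and negative values indicate dispersion. The local Moran's I maps in the figures decompose the same statistic village by village.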
Figures:
Figure 1: Current land use status of Ezhou City.
Figure 2: Evaluation map of Ezhou City for resource endowment (a), spatial concentration (b), convenience of utilization (c), and CLFI (d) in 2013.
Figure 3: Evaluation map of Ezhou City for resource endowment (a), spatial concentration (b), convenience of utilization (c), and CLFI (d) in 2022.
Figure 4: Annual average change rates of CLFI from 2013 to 2022 and their spatial distribution.
Figure 5: The local Moran's I map for Ezhou City in 2013: resource endowment (a), spatial concentration (b), convenience of utilization (c), and CLFI (d).
Figure 6: The local Moran's I map for Ezhou City in 2022: resource endowment (a), spatial concentration (b), convenience of utilization (c), and CLFI (d).
Figure 7: Significance of the driving factors for cultivated land fragmentation in Ezhou City.
Figure 8: Spatial characteristics of key driving factors of CLF for 2013 and 2022: population density (a,d), land use intensity (b,e), and proportion of built-up land (c,f).
Figure 9: Chord diagram of land use changes in Ezhou from 2013 to 2022.
Figure 10: Expansion changes in UCL, RCL, and cultivated land from 2013 to 2022.
13 pages, 766 KiB  
Review
Application of Muscle Synergies for Gait Rehabilitation After Stroke: Implications for Future Research
by Jaehyuk Lee, Kimyung Kim, Youngchae Cho and Hyeongdong Kim
Neurol. Int. 2024, 16(6), 1451-1463; https://doi.org/10.3390/neurolint16060108 - 13 Nov 2024
Viewed by 218
Abstract
Background/Objective: Muscle synergy analysis based on machine learning has significantly advanced our understanding of the mechanisms underlying the central nervous system motor control of gait and has identified abnormal gait synergies in stroke patients through various analytical approaches. However, discrepancies in experimental conditions and computational methods have limited the clinical application of these findings. This review seeks to integrate the results of existing studies on the features of muscle synergies in stroke-related gait abnormalities and provide clinical and research insights into gait rehabilitation. Methods: A systematic search of Web of Science, PubMed, and Scopus was conducted, yielding 10 full-text articles for inclusion. Results: By comprehensively reviewing the consistencies and differences in the study outcomes, we emphasize the need to segment the gait cycle into specific phases (e.g., weight acceptance, push-off, foot clearance, and leg deceleration) during the treatment process of gait rehabilitation and to develop rehabilitation protocols aimed at restoring normal synergy patterns in each gait phase and fractionating reduced synergies. Conclusions: Future research should focus on validating these protocols to improve clinical outcomes and introducing indicators to assess abnormalities in the temporal features of muscle synergies. Full article
(This article belongs to the Special Issue Treatment Strategy and Mechanism of Acute Ischemic Stroke)
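Muscle synergies are conventionally extracted by non-negative matrix factorization (NMF) of EMG envelopes. The sketch below uses plain NumPy with Lee–Seung multiplicative updates on synthetic data; published pipelines add preprocessing (rectification, low-pass filtering, amplitude normalization) and select the synergy count via criteria such as variance accounted for (VAF), all of which is omitted here.

```python
import numpy as np

def extract_synergies(M, n_syn, n_iter=500, seed=0):
    """Factor an EMG envelope matrix M (muscles x time) as M ~ W @ H,
    with W the synergy vectors (muscles x n_syn) and H their activation
    coefficients over time (n_syn x time), via Lee-Seung multiplicative
    updates, which keep both factors non-negative."""
    rng = np.random.default_rng(seed)
    W = rng.random((M.shape[0], n_syn))
    H = rng.random((n_syn, M.shape[1]))
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic check: 8 muscles, 200 samples, built from 3 known synergies.
rng = np.random.default_rng(1)
W_true = rng.random((8, 3))
H_true = rng.random((3, 200))
M = W_true @ H_true

W, H = extract_synergies(M, n_syn=3)
vaf = 1 - np.sum((M - W @ H) ** 2) / np.sum(M ** 2)  # variance accounted for
print(f"VAF: {vaf:.3f}")  # close to 1 when 3 synergies suffice
```

The review's point about temporal features maps onto H: two patients may share similar synergy vectors W yet differ markedly in when each synergy activates across the gait cycle.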
Figures:
Figure 1: PRISMA flowchart for study inclusion/exclusion.
19 pages, 16510 KiB  
Article
Mapping Crop Types for Beekeepers Using Sentinel-2 Satellite Image Time Series: Five Essential Crops in the Pollination Services
by Navid Mahdizadeh Gharakhanlou, Liliana Perez and Nico Coallier
Remote Sens. 2024, 16(22), 4225; https://doi.org/10.3390/rs16224225 - 13 Nov 2024
Viewed by 309
Abstract
Driven by the widespread adoption of deep learning (DL) in crop mapping with satellite image time series (SITS), this study was motivated by the recent success of temporal attention-based approaches in crop mapping. To meet the needs of beekeepers, this study aimed to develop DL-based classification models for mapping five essential crops in pollination services in Quebec province, Canada, by using Sentinel-2 SITS. Due to the challenging task of crop mapping using SITS, this study employed three DL-based models, namely one-dimensional temporal convolutional neural networks (CNNs) (1DTempCNNs), one-dimensional spectral CNNs (1DSpecCNNs), and long short-term memory (LSTM). Accordingly, this study aimed to capture expert-free temporal and spectral features, specifically targeting temporal features using 1DTempCNN and LSTM models, and spectral features using the 1DSpecCNN model. Our findings indicated that the LSTM model (macro-averaged recall of 0.80, precision of 0.80, F1-score of 0.80, and ROC of 0.89) outperformed both 1DTempCNNs (macro-averaged recall of 0.73, precision of 0.74, F1-score of 0.73, and ROC of 0.85) and 1DSpecCNNs (macro-averaged recall of 0.78, precision of 0.77, F1-score of 0.77, and ROC of 0.88) models, underscoring its effectiveness in capturing temporal features and highlighting its suitability for crop mapping using Sentinel-2 SITS. Furthermore, applying one-dimensional convolution (Conv1D) across the spectral domain demonstrated greater potential in distinguishing land covers and crop types than applying it across the temporal domain. This study contributes to providing insights into the capabilities and limitations of various DL-based classification models for crop mapping using Sentinel-2 SITS. Full article
(This article belongs to the Section AI Remote Sensing)
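The distinction between the 1DTempCNN and 1DSpecCNN views is simply which axis of a pixel's time-by-band matrix the Conv1D filter slides along. A toy NumPy illustration of the two orientations follows; the data are random and the single hand-picked kernel stands in for a bank of learned filters.

```python
import numpy as np

# One SITS pixel sample: T time steps x B spectral bands.
rng = np.random.default_rng(0)
T, B = 20, 10
x = rng.normal(size=(T, B))

kernel = np.array([0.25, 0.5, 0.25])  # one hypothetical Conv1D filter

# 1DTempCNN view: slide the kernel along the TIME axis, per band.
temporal = np.stack(
    [np.convolve(x[:, b], kernel, mode="valid") for b in range(B)], axis=1)

# 1DSpecCNN view: slide the same kernel along the SPECTRAL axis, per date.
spectral = np.stack(
    [np.convolve(x[t, :], kernel, mode="valid") for t in range(T)], axis=0)

print(temporal.shape)  # (18, 10): time shrinks, bands preserved
print(spectral.shape)  # (20, 8): bands shrink, time preserved
```

The study's finding that the spectral orientation outperformed the temporal one (while LSTM handled the temporal axis best) amounts to a claim about which of these two mixing directions carries more class-discriminative structure in Sentinel-2 SITS.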
Figures:
Figure 1: The flowchart of the research methodology.
Figure 2: Geographic location of the study area with a true-color median composite of Sentinel-2 satellite imagery generated for 1–10 April 2021.
Figure 3: The macro-average of the F1-score for the 100 designed architectures on the validation dataset for (a) 1DTempCNN, (b) 1DSpecCNN, and (c) LSTM models.
Figure 4: The 1DTempCNN architecture with optimal performance.
Figure 5: The 1DSpecCNN architecture with optimal performance.
Figure 6: The LSTM architecture with optimal performance.
Figure 7: (a) The ground reference map; and (b) the LSTM-provided map of land cover and crop type across the entire study area.
Figure A1: Confusion matrix of the top-performing DL model (i.e., LSTM) in predicting land cover and crop type on the test dataset.
17 pages, 3221 KiB  
Article
Dynamic Spatio-Temporal Hypergraph Convolutional Network for Traffic Flow Forecasting
by Zhiwei Ye, Hairu Wang, Krzysztof Przystupa, Jacek Majewski, Nataliya Hots and Jun Su
Electronics 2024, 13(22), 4435; https://doi.org/10.3390/electronics13224435 - 12 Nov 2024
Viewed by 343
Abstract
Graph convolutional networks (GCN) are an important research method for intelligent transportation systems (ITS), but they also face the challenge of how to describe the complex spatio-temporal relationships between traffic objects (nodes) more effectively. Although most predictive models are designed based on graph convolutional structures and have achieved effective results, they have certain limitations in describing the high-order relationships between real data. The emergence of hypergraphs breaks this limitation. A dynamic spatio-temporal hypergraph convolutional network (DSTHGCN) model is proposed in this paper. It models the dynamic characteristics of traffic flow graph nodes and the hyperedge features of hypergraphs simultaneously, achieving collaborative convolution between graph convolution and hypergraph convolution (HGCN). On this basis, a hyperedge outlier removal mechanism (HOR) is introduced during the process of node information propagation to hyper-edges, effectively removing outliers and optimizing the hypergraph structure while reducing complexity. Through in-depth experimental analysis on real-world datasets, this method has better performance compared to other methods. Full article
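A standard hypergraph convolution, the building block that the proposed model extends with dynamic node and hyperedge features, propagates node features through a node-hyperedge incidence matrix. A minimal NumPy sketch with uniform hyperedge weights is shown below; the sensor counts, hyperedge groupings, and projection matrix are invented for illustration and this is not the full DSTHGCN layer.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer with uniform edge weights:
    X' = Dv^-1/2 H De^-1 H^T Dv^-1/2 X Theta,
    where H is the node-hyperedge incidence matrix, Dv the node-degree
    matrix, and De the hyperedge-degree matrix."""
    Dv = np.diag(1.0 / np.sqrt(H.sum(axis=1)))  # node degrees
    De = np.diag(1.0 / H.sum(axis=0))           # hyperedge degrees
    return Dv @ H @ De @ H.T @ Dv @ X @ Theta

# 5 sensors, 2 hyperedges {0,1,2} and {2,3,4}, e.g., two intersections
# whose member sensors interact as groups rather than pairwise.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))        # per-sensor traffic features
Theta = rng.normal(size=(3, 4))    # learnable projection

out = hypergraph_conv(X, H, Theta)
print(out.shape)  # (5, 4)
```

Unlike a pairwise edge, each column of H lets a whole group of sensors exchange information in one step, which is the higher-order relationship the abstract argues plain graph convolution cannot express; the paper's HOR mechanism would additionally prune outlier nodes from each column before this propagation.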
Figures:
Figure 1: Schematic diagram of two types of intersection structures in a traffic network.
Figure 2: Graph structure and hypergraph structure.
Figure 3: The overall framework of the model.
Figure 4: Hyperedge outlier mechanism.
Figure 5: PeMSD4 and PeMSD8 datasets sensor distribution maps.
Figure 6: Performance comparison of different forecasting models across multiple datasets at various time steps.
Figure 7: Comparison of predictive curves for DSTHGCN with ground truth and Graph WaveNet on PeMSD4 at Nodes #1, #69, #123, and #196.
Figure 8: Hyperparameter study on PeMSD4 and PeMSD8.