Search Results (9,943)

Search Parameters:
Keywords = feature points

25 pages, 31509 KiB  
Article
Expanding Open-Vocabulary Understanding for UAV Aerial Imagery: A Vision–Language Framework to Semantic Segmentation
by Bangju Huang, Junhui Li, Wuyang Luan, Jintao Tan, Chenglong Li and Longyang Huang
Drones 2025, 9(2), 155; https://doi.org/10.3390/drones9020155 (registering DOI) - 19 Feb 2025
Abstract
The open-vocabulary understanding of UAV aerial images plays a crucial role in enhancing the intelligence level of remote sensing applications, such as disaster assessment, precision agriculture, and urban planning. In this paper, we propose an innovative open-vocabulary model for UAV images, which combines vision–language methods to achieve efficient recognition and segmentation of unseen categories by generating multi-view image descriptions and feature extraction. To enhance the generalization ability and robustness of the model, we adopted Mixup technology to blend multiple UAV images, generating more diverse and representative training data. To address the limitations of existing open-vocabulary models in UAV image analysis, we leverage the GPT model to generate accurate and professional text descriptions of aerial images, ensuring contextual relevance and precision. The image encoder utilizes a U-Net with Mamba architecture to extract key point information through edge detection and partition pooling, further improving the effectiveness of feature representation. The text encoder employs a fine-tuned BERT model to convert text descriptions of UAV images into feature vectors. Three key loss functions were designed: Generalization Loss to balance old and new category scores, semantic segmentation loss to evaluate model performance on UAV image segmentation tasks, and Triplet Loss to enhance the model’s ability to distinguish features. The Comprehensive Loss Function integrates these terms to ensure robust performance in complex UAV segmentation tasks. Experimental results demonstrate that the proposed method has significant advantages in handling unseen categories and achieving high accuracy in UAV image segmentation tasks, showcasing its potential for practical applications in diverse aerial imagery scenarios. Full article
Figures:
Figure 1. The architecture of the open-vocabulary model for UAV aerial images: multi-view image generation and description, image encoding and feature extraction, vision–language alignment of visual features with their textual descriptions, and classification and segmentation heads for recognizing and segmenting unseen categories.
Figure 2. The detailed architecture of the Image Encoder, which combines a U-Net framework with Mamba-based State Space Model modules and dual-path processing to extract and normalize features from complex UAV aerial images.
Figure 3. Visualization results on the UAV-City dataset.
Figure 4. Grad-CAM visualization on UAV-City.
Figure 5. Visualization of original and processed images before and after applying grid feature grouping.
Figure 6. Visualization of training loss and mIoU metrics comparison on the UAV-City dataset.
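The abstract above credits Mixup with diversifying the UAV training data. As a rough illustration of that general augmentation (not the authors' implementation; the Beta parameter alpha, the tensor shapes, and the target format are assumptions), a minimal sketch:

```python
import numpy as np

def mixup(img_a, img_b, tgt_a, tgt_b, alpha=0.4, rng=None):
    """Blend two samples and their soft targets with a Beta-sampled ratio (standard Mixup)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                    # mixing coefficient in (0, 1)
    mixed_img = lam * img_a + (1.0 - lam) * img_b
    mixed_tgt = lam * tgt_a + (1.0 - lam) * tgt_b   # works for one-hot vectors or label maps
    return mixed_img, mixed_tgt, lam

# Toy usage: two fake 256x256 RGB aerial tiles with two-class one-hot targets.
rng = np.random.default_rng(0)
a, b = rng.random((256, 256, 3)), rng.random((256, 256, 3))
ta, tb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
img, tgt, lam = mixup(a, b, ta, tb, rng=rng)
```

A single coefficient is drawn per pair, so the same blend ratio applies to the images and to their targets.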
15 pages, 10730 KiB  
Article
An Efficient Forest Smoke Detection Approach Using Convolutional Neural Networks and Attention Mechanisms
by Quy-Quyen Hoang, Quy-Lam Hoang and Hoon Oh
J. Imaging 2025, 11(2), 67; https://doi.org/10.3390/jimaging11020067 - 19 Feb 2025
Abstract
This study explores a method of detecting smoke plumes effectively as an early sign of a forest fire. Convolutional neural networks (CNNs) have been widely used for forest fire detection; however, they have not been customized or optimized for smoke characteristics. This paper proposes a CNN-based forest smoke detection model featuring a novel backbone architecture that can increase detection accuracy and reduce computational load. Since the proposed backbone detects the smoke plume through different views using kernels of varying sizes, it can better detect smoke plumes of different sizes. By decomposing the traditional square-kernel convolution into depth-wise convolutions with coordinate kernels, it can not only better extract the features of a smoke plume spreading along the vertical dimension but also reduce the computational load. An attention mechanism was applied to allow the model to focus on important information while suppressing less relevant information. The experimental results show that the model outperforms other popular detectors, achieving a detection accuracy of up to 52.9 average precision (AP) while significantly reducing the number of parameters and giga floating-point operations (GFLOPs). Full article
Figures:
Figure 1. The architecture of the forest fire detection model.
Figure 2. Reduction in the number of parameters by using different sized kernels.
Figure 3. The smoke features tend to be vertically distributed through the layers.
Figure 4. The proposed Backbone structure for forest fire detection.
Figure 5. CBAM architecture.
Figure 6. The Neck architecture.
Figure 7. The Head architecture.
Figure 8. Qualitative test results for 15 forest fire images numbered 1 to 15, with the class name and confidence value given at the top of each bounding box.
Figure 9. Heat maps of the images to which different models are applied.
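The backbone described above decomposes a square kernel into depth-wise convolutions with coordinate kernels to track vertically spreading smoke while cutting parameters. A hedged PyTorch sketch of that general idea, interpreting the coordinate kernels as a 1×k plus k×1 depthwise pair (an assumption; the channel count and k are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class DepthwiseStripConv(nn.Module):
    """Replace a k x k depthwise convolution with a 1 x k followed by a k x 1 depthwise pair."""
    def __init__(self, channels: int, k: int = 7):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels)
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels)

    def forward(self, x):
        return self.vertical(self.horizontal(x))

x = torch.randn(1, 64, 128, 128)
strip = DepthwiseStripConv(64, k=7)
square = nn.Conv2d(64, 64, 7, padding=3, groups=64)        # depthwise 7x7 baseline for comparison
print(strip(x).shape)                                       # torch.Size([1, 64, 128, 128])
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(strip), "parameters vs", count(square))         # 2*C*(k + 1) vs C*(k*k + 1)
```

For a depthwise layer this drops the per-channel weight count from k² to 2k, which is where the parameter saving comes from.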
18 pages, 33036 KiB  
Article
Three-Dimensional Magnetotelluric Forward Modeling Using Multi-Task Deep Learning with Branch Point Selection
by Fei Deng, Hongyu Shi, Peifan Jiang and Xuben Wang
Remote Sens. 2025, 17(4), 713; https://doi.org/10.3390/rs17040713 - 19 Feb 2025
Abstract
Magnetotelluric (MT) forward modeling is a key technique in magnetotelluric sounding, and deep learning has been widely applied to MT forward modeling. In three-dimensional (3-D) problems, although existing methods can predict forward modeling results with high accuracy, they often use multiple networks to simulate multiple forward modeling parameters, resulting in low efficiency. We apply multi-task learning (MTL) to 3-D MT forward modeling to achieve simultaneous inference of apparent resistivity and impedance phase, effectively improving overall efficiency. Furthermore, through comparative analysis of feature map differences in various decoder layers of the network, we identify the optimal branching point for multi-task learning decoders. This enhances the feature extraction capabilities of the network and improves the prediction accuracy of forward modeling parameters. Additionally, we introduce an uncertainty-based loss function to dynamically balance the learning weights between tasks, addressing the shortcomings of traditional loss functions. Experiments demonstrate that compared with single-task networks and existing multi-task networks, the proposed network (MT-FeatureNet) achieves the best results in terms of Structural Similarity Index Measure (SSIM), Mean Relative Error (MRE), and Mean Absolute Error (MAE). The proposed multi-task learning model not only improves the efficiency and accuracy of 3-D MT forward modeling but also provides a novel approach to the design of multi-task learning network structures. Full article
Figures:
Figure 1. Comparison between (a) single-task networks and (b) multi-task networks.
Figure 2. U-Net single-task network model.
Figure 3. U-Net single-task network model.
Figure 4. Feature maps of layers (a) A, (b) B, and (c) C.
Figure 5. Loss per epoch comparison.
Figure 6. Comparison of single anomaly results.
Figure 7. Comparison of the results of the two anomalies.
Figure 8. Comparison of the results of the three anomalies.
Figure 9. Loss per epoch comparison.
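The uncertainty-based loss mentioned above balances the apparent-resistivity and impedance-phase tasks by learning a weight per task instead of fixing it by hand. A common homoscedastic-uncertainty formulation is sketched below; whether this exact form matches the paper's loss is an assumption, and the toy tensors are placeholders for the two network outputs:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Total loss = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2) is learned per task."""
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s   # low-uncertainty tasks get larger weight
        return total

# Toy usage: MSE on apparent-resistivity and impedance-phase predictions.
criterion = nn.MSELoss()
weighting = UncertaintyWeightedLoss(num_tasks=2)
loss_rho = criterion(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32))
loss_phi = criterion(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32))
total_loss = weighting([loss_rho, loss_phi])   # backpropagation also updates log_vars
```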
17 pages, 9626 KiB  
Article
Semantic Segmentation of Distribution Network Point Clouds Based on NF-PTV2
by Long Han, Bin Song, Shaocheng Wu, Deyu Nie, Zhenyang Chen and Linong Wang
Electronics 2025, 14(4), 812; https://doi.org/10.3390/electronics14040812 - 19 Feb 2025
Abstract
An on-site survey is the primary task of working live in distribution networks. However, the traditional manual on-site survey method is neither intuitive nor efficient. The application of 3D point cloud technology has opened up new avenues for on-site surveys in live working on distribution networks. This paper focused on the application of the Point Transformer V2 (PTV2) model in the segmentation of distribution network point clouds. Given its limited boundary discrimination and feature extraction abilities when processing distribution network point clouds, an improved Non-local Focal Loss-Point Transformer V2 (NF-PTV2) model was proposed. With PTV2 as its core, this model incorporated Non-Local attention to capture long-distance feature dependencies, thereby compensating for the PTV2 model’s shortcomings in extracting features of large-scale objects with complex features. Simultaneously, the Focal Loss function was introduced to address the issue of class imbalance and enhance the model’s learning ability for small, complex samples. The experimental results demonstrated that the overall accuracy (OA) of this model on the distribution network dataset reached 93.28%, the mean intersection over union (mIoU) reached 81.58%, and the mean accuracy (mAcc) reached 87.21%. In summary, the NF-PTV2 model proposed in this article demonstrated good performance in the point cloud segmentation task of the distribution network and can accurately identify various objects, which, to some extent, overcomes the limitations of the PTV2 model. Full article
Figures:
Figure 1. Non-Local attention.
Figure 2. The framework of the NF-PTV2 model; the red parts represent the modules that are unique to the NF-PTV2 model and are not included in the PTV2 model.
Figure 3. Segmentation results of the NF-PTV2 model on the distribution network point cloud dataset: (a) the point cloud segmented by the NF-PTV2 model; (b) the manually labeled point cloud, with circles marking the areas where the two differ.
Figure 4. Segmentation results of models embedded with an attention mechanism on the distribution network point cloud dataset.
Figure 5. Classification of various models in Area 1 to Area 4, with circles marking the areas where the results of the models differ.
Figure 6. Test results of various models in Area 3 and Area 4 of the test set, with circles marking the areas where the results of the models differ.
Figure 7. The overall structure of the proposed method.
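Focal Loss, introduced above to handle class imbalance in the distribution network point clouds, down-weights well-classified points so that training concentrates on small, hard classes. A minimal multi-class sketch (the γ value, class count, and optional per-class weights are illustrative assumptions, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0, alpha=None):
    """Multi-class focal loss: mean over points of -alpha_c * (1 - p_t)^gamma * log(p_t)."""
    log_p = F.log_softmax(logits, dim=-1)                      # (N, C)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-probability of the true class
    pt = log_pt.exp()
    weight = (1.0 - pt) ** gamma                               # easy points (pt near 1) contribute little
    if alpha is not None:                                      # optional per-class weight tensor of shape (C,)
        weight = weight * alpha[targets]
    return -(weight * log_pt).mean()

logits = torch.randn(1024, 6)              # e.g. six distribution-network classes (assumed count)
labels = torch.randint(0, 6, (1024,))
loss = focal_loss(logits, labels, gamma=2.0)
```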
16 pages, 4674 KiB  
Article
Wave Attenuation by Australian Temperate Mangroves
by Ruth Reef and Sabrina Sayers
J. Mar. Sci. Eng. 2025, 13(2), 382; https://doi.org/10.3390/jmse13020382 - 19 Feb 2025
Viewed by 83
Abstract
Wave attenuation by natural coastal features is recognised as a soft engineering approach to shoreline protection from storm surges and destructive waves. The effectiveness of wave energy dissipation is determined, in part, by vegetation structure, extent, and distribution. Mangroves line ca. 15% of the world’s coastlines, primarily in tropical and subtropical regions but also extending into temperate climates, where mangroves are shorter and multi-stemmed. Using wave loggers deployed across mangrove and non-mangrove shorelines, we studied the wave attenuating capacity and the drag coefficient (CD) of temperate Avicennia marina mangrove forests of varying structure in Western Port, Australia. The structure of the vegetation obstructing the flow path was represented along each transect in a three-dimensional point cloud derived from overlapping uncrewed aerial vehicle (UAV) images and structure-from-motion (SfM) algorithms. The wave attenuation coefficient (b) calculated from a fitted exponential decay model at the vegetated sites was on average 0.011 m−1 relative to only 0.009 m−1 at the unvegetated site. We calculated a CD for this forest type that ranged between 2.7 and 4.9, which is within the range of other pencil-rooted species such as Sonneratia sp. but significantly lower than prop-rooted species such as Rhizophora spp. Wave attenuation efficiency significantly decreased with increasing water depth, highlighting the dominance of near-bed friction on attenuation in this forest type. The UAV-derived point cloud did not describe the vegetation (especially near-bed) in sufficient detail to accurately depict the obstacles. We found that a temperate mangrove greenbelt of just 100 m can decrease incoming wave heights by close to 70%, indicating that, similarly to tropical and subtropical forests, temperate mangroves significantly attenuate incoming wave energy under normal sea conditions. Full article
(This article belongs to the Section Coastal Engineering)
Figures:
Graphical abstract.
Figure 1. The bathymetry and geographical context of Western Port, Victoria, with the locations of the wave-logger transects and an estimation of mangrove cover [30].
Figure 2. Long-term average wind speeds and directions from two weather stations in Western Port (June 1990 to August 2024), a map of the weather stations and sampling locations, and an image of the seaward edge of the Avicennia marina mangrove forest at Stony Point.
Figure 3. UAV point cloud-derived bed slope and vegetation height distribution along the seaward-to-landward transect at Stony Point (SP), the vegetated site at Pioneer Bay (PBV), Hastings, and the non-vegetated site at Pioneer Bay (PBNV), with logger positions marked; the logger at 40 m indicates the appearance of pneumatophores at the vegetated sites.
Figure 4. The change in significant wave height (Hs) over seaward-to-landward distance at each site, as a percent of the incoming Hs (Hs0), for each 20 min high-tide burst, with the start of the vegetation at 40 m and subsequent loggers at 20 m intervals.
Figure 5. The spread of attenuation coefficient b from each individual burst period at each vegetated site (PBV and SP); larger b values indicate higher attenuation of incoming wave height.
Figure 6. The effect of incoming significant wave period, incoming significant wave height, the number of aboveground points, wind speed, and incoming water depth on wave attenuation coefficient b at the different sites, plus the landward distance at which significant wave height is reduced by 50%.
Figure 7. Box plots of the calculated dimensionless drag coefficient (CD) at each site and the number of subaqueous points counted across the 20 m wide and 100 m long wave-logger transects, based on the UAV-derived point cloud and the depth during the burst.
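The attenuation coefficient b reported above comes from fitting an exponential decay of significant wave height with distance into the forest, Hs(x) = Hs0 · exp(-b·x). The sketch below fits that model to synthetic logger readings (not the study's data) and sanity-checks the 100 m figure quoted in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, hs0, b):
    """Exponential decay of significant wave height with distance into the forest."""
    return hs0 * np.exp(-b * x)

# Synthetic wave-logger readings at 0, 40, 60, 80, and 100 m along a transect.
x = np.array([0.0, 40.0, 60.0, 80.0, 100.0])
hs = 0.30 * np.exp(-0.011 * x) + np.random.default_rng(1).normal(0.0, 0.003, x.size)
(hs0_fit, b_fit), _ = curve_fit(decay, x, hs, p0=(0.3, 0.01))
print(f"fitted b = {b_fit:.4f} per metre")

# Sanity check of the 100 m figure: with b = 0.011 per metre,
# exp(-0.011 * 100) ~ 0.33 of the incoming height remains, i.e. a reduction of roughly 67%,
# consistent with the "close to 70%" reduction reported for a 100 m greenbelt.
print(np.exp(-0.011 * 100.0))
```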
21 pages, 4968 KiB  
Article
PE-DOCC: A Novel Periodicity-Enhanced Deep One-Class Classification Framework for Electricity Theft Detection
by Zhijie Wu and Yufeng Wang
Appl. Sci. 2025, 15(4), 2193; https://doi.org/10.3390/app15042193 - 19 Feb 2025
Viewed by 128
Abstract
Electricity theft, emerging as one of the severe cyberattacks in smart grids, causes significant economic losses. Due to the powerful expressive ability of deep neural networks (DNN), supervised and unsupervised DNN-based electricity theft detection (ETD) schemes have experienced widespread deployment. However, existing works have the following weak points: Supervised DNN-based schemes require abundant labeled anomalous samples for training, and even worse, cannot detect unseen theft patterns. To avoid the extensively labor-consuming activity of labeling anomalous samples, unsupervised DNNs-based schemes aim to learn the normality of time-series and infer an anomaly score for each data instance, but they fail to capture periodic features effectively. To address these challenges, this paper proposes a novel periodicity-enhanced deep one-class classification framework (PE-DOCC) based on a periodicity-enhanced transformer encoder, named Periodicformer encoder. Specifically, within the encoder, a novel criss-cross periodic attention is proposed to capture both horizontal and vertical periodic features. The Periodicformer encoder is pre-trained by reconstructing partially masked input sequences, and the learned latent representations are then fed into a one-class classification for anomaly detection. Extensive experiments on real-world datasets demonstrate that our proposed PE-DOCC framework outperforms state-of-the-art unsupervised ETD methods. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
Figures:
Figure 1. Proposed framework integrating unsupervised representation learning with one-class classification.
Figure 2. The overall architecture of the proposed Periodicformer encoder.
Figure 3. Criss-cross periodic attention (left) and the corresponding row and column autocorrelation for a single head h (right).
Figure 4. Autocorrelation of (a) a normal sample and (b) an abnormal sample.
Figure 5. Example of daily electricity consumption and its tampered data corresponding to six types of FDI attacks.
Figure 6. Confusion matrices of different methods: (a) OCSVM; (b) autoencoder + OCSVM; (c) autoencoder + iForest; (d) autoencoder + LOF; (e) Periodicformer encoder + OCSVM; (f) Periodicformer encoder + iForest; (g) Periodicformer encoder + LOF.
Figure 7. Comparison of training time with and without GPU acceleration.
Figure 8. ROC curves of the proposed method and other methods, with the area under each curve (AUC).
Figure 9. Performance of the proposed method and its variants: (a) F1 scores; (b) AUC; (c) recall; (d) FPR of PE-DOCC and its variants.
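In the framework above, latent representations from the pre-trained Periodicformer encoder are fed to a one-class classifier fitted only on normal consumption records. A generic sketch of that final stage with scikit-learn's One-Class SVM standing in for the classifier (the latent dimensionality, nu, and the synthetic latents are assumptions; the encoder itself is omitted):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
z_normal = rng.normal(0.0, 1.0, size=(5000, 64))              # pretend encoder latents of normal customers
z_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 64)),      # 50 normal-looking test latents
                    rng.normal(3.0, 1.0, size=(50, 64))])     # 50 shifted latents mimicking theft patterns

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(z_normal)
scores = clf.decision_function(z_test)    # lower (more negative) scores => more anomalous
flags = clf.predict(z_test)               # +1 normal, -1 anomaly
print((flags[-50:] == -1).mean())         # fraction of injected outliers flagged
```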
25 pages, 5170 KiB  
Article
An MGRN1-Based Biomarker Combination Accurately Predicts Melanoma Patient Survival
by José Sánchez-Beltrán, Javier Soler Díaz, Cecilia Herraiz, Conchi Olivares, Sonia Cerdido, Pablo Cerezuela-Fuentes, José Carlos García-Borrón and Celia Jiménez-Cervantes
Int. J. Mol. Sci. 2025, 26(4), 1739; https://doi.org/10.3390/ijms26041739 - 18 Feb 2025
Viewed by 117
Abstract
With ever-increasing incidence and high metastatic potential, cutaneous melanoma is the deadliest skin cancer. Risk prediction based on the Tumor-Node-Metastasis (TNM) staging system has medium accuracy with intermediate IIB-IIIB stages, as roughly 25% of patients with low-medium-grade TNM, and hence a favorable prognostic, undergo an aggressive disease with short survival and around 15% of deaths arise from metastases of thin, low-risk lesions. Therefore, reliable prognostic biomarkers are required. We used genomic and clinical information of melanoma patients from the TCGA-SKCM cohort and two GEO studies for discovery and validation of potential biomarkers, respectively. Neither mutation nor overexpression of major melanoma driver genes provided significant prognostic information. Conversely, expression of MGRN1 and the melanocyte-specific genes MLANA, PMEL, and TYRP1 provided a simple 4-gene signature identifying with high-sensitivity (>80%), low-medium TNM patients with adverse outcomes. Transcriptomic analysis of tumors with this signature, or from low-medium-grade TNM patients with poor outcomes, revealed comparable dysregulation of an inflammatory response, cell cycle progression, and DNA damage/repair programs. A functional analysis of MGRN1-knockout cells confirmed these molecular features. Therefore, the simple MGRN1-MLANA-PMEL-TYRP1 combination of biomarkers complemented TNM staging prognostic accuracy and pointed to the dysregulation of immunological responses and genomic stability as determinants of a melanoma outcome. Full article
(This article belongs to the Section Molecular Oncology)
Figures:
Figure 1. Relationship of TNM stage and survival of SKCM patients: overall survival stratified by TNM stage and by favorable (TNM-F) versus unfavorable (TNM-NF) grouping, Kaplan–Meier analyses, and differential gene expression with Hallmark enrichment between TNM-F subgroups with favorable or adverse outcomes.
Figure 2. Differential expression and correlation with patient survival of potential prognostic biomarkers (TCGA SKCM versus GTEx normal skin, with Kaplan–Meier curves for the higher and lower 33% expression groups of each gene).
Figure 3. Effect of MGRN1 expression on the transcriptome of melanoma: expression versus overall survival, differential gene expression and Hallmark gene set enrichment, and genomic scar features (mutational burden, ploidy, and wGII).
Figure 4. Effect of MGRN1 expression on the transcriptome of human MM cells: differentially enriched gene sets in MGRN1-KO cells, downregulation of the anti-inflammatory cytokine IL10, and an increased burden of DNA breaks shown by γH2AX labeling and alkaline comet assays.
Figure 5. Identification of potential prognostic biomarkers in TNM-F melanoma patients: overall survival plots and Kaplan–Meier curves for the higher and lower 33% expression groups of the indicated genes, and differential expression between favorable and adverse outcome groups.
Figure 6. Increased accuracy of outcome prediction for TNM-F melanoma patients by combinations of biomarkers: survival analyses, Kaplan–Meier curves, and ROC curves for two- and three-gene combinations.
Figure 7. An MGRN1-based 4-gene (4g) signature accurately predicts adverse outcomes in low-medium stage TNM melanoma patients, including validation with the GSE19234 [47] and GSE65904 [46] cohorts and analysis of sensitivity and false discovery rate.
Figure 8. Comparable gene expression patterns in TNM-F melanomas with low or high 4g signature expression and long or short OS, including macrophage infiltration fractions and genomic scar features.
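The 4-gene signature above labels a patient 4g-L33 or 4g-H33 when MGRN1, MLANA, PMEL, and TYRP1 all fall in the lower or upper expression tercile simultaneously. A hedged pandas sketch of just that stratification step (the data frame, column names, and survival values are hypothetical; the survival statistics used in the paper are omitted):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
genes = ["MGRN1", "MLANA", "PMEL", "TYRP1"]
expr = pd.DataFrame(rng.lognormal(size=(300, 4)), columns=genes)   # toy normalized expression
expr["os_days"] = rng.integers(100, 4000, size=300)                # hypothetical overall survival

low_cut = expr[genes].quantile(1 / 3)
high_cut = expr[genes].quantile(2 / 3)
sig_low = (expr[genes] <= low_cut).all(axis=1)    # all four genes in the lower tercile (4g-L33)
sig_high = (expr[genes] >= high_cut).all(axis=1)  # all four genes in the upper tercile (4g-H33)

print("4g-L33 patients:", int(sig_low.sum()), "median OS:", expr.loc[sig_low, "os_days"].median())
print("4g-H33 patients:", int(sig_high.sum()), "median OS:", expr.loc[sig_high, "os_days"].median())
```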
22 pages, 9369 KiB  
Article
Study on Mechanism of Visual Comfort Perception in Urban 3D Landscape
by Miao Zhang, Tao Shen, Liang Huo, Shunhua Liao, Wenfei Shen and Yucai Li
Buildings 2025, 15(4), 628; https://doi.org/10.3390/buildings15040628 - 18 Feb 2025
Viewed by 119
Abstract
Landscape visual evaluation is a key method for assessing the value of visual landscape resources. This study aims to enhance the visual environment and sensory quality of urban landscapes by establishing standards for the visual comfort of urban natural landscapes. Using line-of-sight and multi-factor analysis algorithms, the method assesses spatial visibility and visual exposure of building clusters in the core urban areas of Harbin, identifying areas and viewpoints with high visual potential. Focusing on the viewpoints of landmark 3D models and the surrounding landscape’s visual environment, the study uses the city’s sky, greenery, and water features as key visual elements for evaluating the comfort of urban natural landscapes. By integrating GIS data, big data street-view photos, and image semantic recognition, spatial analysis algorithms extract both objective and subjective visual values at observation points, followed by mathematical modeling and quantitative analysis. The study explores the coupling relationship between objective physical visual values and subjective perceived visibility. The results show that 3D visual analysis effectively reveals the relationship between landmark buildings and surrounding landscapes, providing scientific support for urban planning and contributing to the development of a more distinctive and attractive urban space. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
Figures:
Figure 1. Map of the study area.
Figure 2. Technical roadmap for comprehensive landscape visual analysis.
Figure 3. (a) Digital Elevation Model analysis; (b) viewshed analysis of the city.
Figure 4. Harbin city land use type map (the red-circled area is the building complex of the study area).
Figure 5. (a) Traffic accessibility analysis map; (b) traffic factor influence map; (c) POI data influence; (d) green space influence factor.
Figure 6. (a) Flood Control Monument model and surrounding buildings (post-modeling); (b) Saint Sophia Cathedral and surrounding buildings (post-modeling).
Figure 7. Comparison of viewpoint selection based on street-view images and models: (a) Flood Control Monument model; (b) Saint Sophia Cathedral; the viewpoints F1–F5 and S1–S5 correspond one-to-one across perspectives.
Figure 8. Skyline analysis: (a) F1–F5 analysis diagram; (b) S1–S5 analysis diagram; (c) skyline radar chart for F1–F5; (d) skyline radar chart for S1–S5.
Figure 9. Multi-viewpoint street scenes of the Flood Control Memorial Tower (circled in yellow) and Saint Sophia Cathedral (circled in red) from four perspectives.
Figure 10. Multi-view visibility analysis of 3D models: (a) view directions of F1–F5; (b) view directions of S1–S5.
Figure 11. Percentage statistical chart of the comprehensive analysis of visual evaluation factors.
Figure 12. Hierarchical model diagram for the statistical analysis of visual factors.
Figure 13. Visual landscape control elements diagram (viewpoints 1–23 shown in red).
Figure 14. Spatial distribution analysis of urban landscape elements.
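On the perception side, the workflow above derives sky, greenery, and water proportions at each observation point from semantically segmented street-view photos. A minimal sketch of that per-image ratio computation (the label IDs and the toy label map are assumptions; any semantic segmentation model could supply the input):

```python
import numpy as np

# Hypothetical label IDs in a segmented street-view image (they depend on the model used).
SKY, VEGETATION, WATER = 2, 8, 21

def visual_element_ratios(label_map: np.ndarray) -> dict:
    """Fraction of pixels belonging to each visual-comfort element."""
    total = label_map.size
    return {
        "sky_view_factor": float((label_map == SKY).sum()) / total,
        "green_view_index": float((label_map == VEGETATION).sum()) / total,
        "water_view_index": float((label_map == WATER).sum()) / total,
    }

# Toy 512x1024 label map standing in for a segmented street-view panorama.
labels = np.random.default_rng(0).integers(0, 30, size=(512, 1024))
print(visual_element_ratios(labels))
```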
Full article ">
22 pages, 9277 KiB  
Article
LRNTRM-YOLO: Research on Real-Time Recognition of Non-Tobacco-Related Materials
by Chunjie Zhang, Lijun Yun, Chenggui Yang, Zaiqing Chen and Feiyan Cheng
Agronomy 2025, 15(2), 489; https://doi.org/10.3390/agronomy15020489 - 18 Feb 2025
Viewed by 134
Abstract
The presence of non-tobacco-related materials can significantly compromise the quality of tobacco. To accurately detect non-tobacco-related materials, this study introduces a lightweight and real-time detection model derived from the YOLOv11 framework, named LRNTRM-YOLO. Initially, due to the sub-optimal accuracy in detecting diminutive non-tobacco-related materials, the model was augmented by incorporating an additional layer dedicated to enhancing the detection of small targets, thereby improving the overall accuracy. Furthermore, an attention mechanism was incorporated into the backbone network to focus on the features of the detection targets, thereby improving the detection efficacy of the model. Simultaneously, for the introduction of the SIoU loss function, the angular vector between the bounding box regressions was utilized to define the loss function, thus improving the training efficiency of the model. Following these enhancements, a channel pruning technique was employed to streamline the network, which not only reduced the parameter count but also expedited the inference process, yielding a more compact model for non-tobacco-related material detection. The experimental results on the NTRM dataset indicate that the LRNTRM-YOLO model achieved a mean average precision (mAP) of 92.9%, surpassing the baseline model by a margin of 4.8%. Additionally, there was a 68.3% reduction in the parameters and a 15.9% decrease in floating-point operations compared to the baseline model. Comparative analysis with prominent models confirmed the superiority of the proposed model in terms of its lightweight architecture, high accuracy, and real-time capabilities, thereby offering an innovative and practical solution for detecting non-tobacco-related materials in the future. Full article
(This article belongs to the Special Issue Robotics and Automation in Farming)
Figures:
Figure 1. Data collection environment.
Figure 2. Image acquisition system.
Figure 3. Overall technical route.
Figure 4. Examples of small non-tobacco-related materials: (a) sample image; (b) enlarged display of the feather in (a).
Figure 5. Details of adding a small target detection layer; the area delineated by the red box represents the detailed process of enhancement.
Figure 6. Schematic diagram of the principle of CPCA.
Figure 7. Schematic diagram of SIoU, where b^gt is the ground truth box and b is the predicted box.
Figure 8. Loss function curve of the model on the training set.
Figure 9. Comparison before and after pruning.
Figure 10. Comparison of the different pruning strategies.
Figure 11. Visualization of the detection results: (a–c) detection results of YOLOv11n; (d–f) detection results of LRNTRM-YOLO; the yellow shapes indicate missed or erroneous detections.
Figure 12. Raspberry Pi 5.
Figure 13. Visualization of the detection results; the non-tobacco-related materials detected in each image are (a) a label paper, (b) a feather, (c) a hemp rope, (d) a weed, (e) a rubber ring and a label paper, and (f) plastic and a hemp rope, with different material types labeled in different colors.
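Channel pruning is credited above with the 68.3% parameter reduction. One widely used criterion ranks channels by the absolute value of their BatchNorm scale factors and drops those below a global threshold (network-slimming style); whether LRNTRM-YOLO uses this exact criterion is an assumption. A sketch of the channel-selection step:

```python
import torch
import torch.nn as nn

def bn_prune_mask(model: nn.Module, prune_ratio: float = 0.5):
    """Return a keep/drop mask per BatchNorm layer based on a global |gamma| threshold."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: (m.weight.detach().abs() > threshold)
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}

# Toy backbone: the masks say which channels to keep when rebuilding the slimmer network.
toy = nn.Sequential(nn.Conv2d(3, 32, 3), nn.BatchNorm2d(32), nn.ReLU(),
                    nn.Conv2d(32, 64, 3), nn.BatchNorm2d(64), nn.ReLU())
masks = bn_prune_mask(toy, prune_ratio=0.5)
print({k: int(v.sum()) for k, v in masks.items()})   # channels kept per BatchNorm layer
```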
11 pages, 998 KiB  
Article
Pediatric Pleural Effusion and Pneumococcal Vaccination Trends in the Pre- and Post-COVID Era: A Single-Centre Retrospective Study
by Denisa Lavinia Atanasiu, Maria Mitrica, Luciana Petrescu, Oana Falup-Pecurariu, Laura Bleotu, Raluca Ileana Lixandru, David Greenberg and Alexandra Grecu
Children 2025, 12(2), 242; https://doi.org/10.3390/children12020242 - 18 Feb 2025
Viewed by 179
Abstract
Background/Objectives: Pleural effusion represents an accumulation of fluid in the pleural cavity, frequently associated with pneumonia. There has been a gradual increase in cases among children in recent years, with a notable rise during the post-pandemic period, potentially due to immune debt, decreased vaccination coverage, and changes in pathogen dynamics. Methods: We enrolled 66 children with pleural effusion treated at the Children’s Emergency Clinical Hospital, Brasov, between January 2019 and September 2024. We analyzed the data on demographics, symptoms, vaccination status, hospitalization, and treatments to assess the trends in the incidence and clinical features. Results: The median age was 5 years (ranging from 3 months to 17 years). Most patients were male (57.5%) from rural areas (34.8%). Only 40.9% fulfilled the vaccination schedule of Romania. We observed a rise in hospitalizations in the last two years, with 16 cases in 2023 and 15 in 2024, and most were being admitted in April (15.5%). Patients mainly had severe (36%) and medium (26%) acute respiratory failure. S. pneumoniae was the most common isolate with two cases each of serotype 1, 14, and 23A, and one case each of serotype 3, 31, and 34, followed by H. influenzae and P. aeruginosa. Treatment was mostly with ceftriaxone (69.6%), Vancomycin (63.6%), Meropenem (53.0%), and Teicoplanin (25.7%). Some children required thoracic drainage (34.8%). Complications like pneumothorax (16.6%), polyserositis (4.5%), and pneumomediastinum (3.0%) were found. Conclusions: The rise in pleural effusion cases may be influenced by various factors, such as changes in pathogen behavior or host immune responses following the pandemic. Further research is needed to understand these potential mechanisms. The emergence of non-PCV20 strains and the common occurrence of serotype 3 infections point out the need to study serotype trends and evaluate whether expanding vaccine programs could be beneficial. Full article
(This article belongs to the Section Pediatric Infectious Diseases)
Figures:
Figure 1. Distribution of pleural effusion cases by month of the year.
Figure 2. Clinical features observed in the study population.
Figure 3. Bacteria results from cultures and PCR.
Figure 4. Complications following pleural effusion.
23 pages, 8895 KiB  
Article
Automated 3D Image Processing System for Inspection of Residential Wall Spalls
by Junjie Wang, Yunfang Pang and Xinyu Teng
Appl. Sci. 2025, 15(4), 2140; https://doi.org/10.3390/app15042140 - 18 Feb 2025
Viewed by 92
Abstract
Continuous spalling exposure can weaken the performance of structures. Therefore, the development of methods for detecting wall spall damage remains essential in the field of Structural Health Monitoring. Currently, researchers mainly rely on 2D information for spall detection and predominantly use manual data collection methods in the complex environment of residential buildings, which are usually inefficient. To address this challenge, an automated 3D image processing system for wall spalls is proposed in this study. First, UGV path planning was performed in order to collect information about the surrounding environmental defects. Second, to address the shortcomings of RandLA-Net, a dynamic enhanced dual-branch structure is established based on which consistency constraints are introduced, a lightweight attention module is added, and the loss function is optimized in order to enhance the ability of the model in extracting feature information of the point cloud. Finally, spalls are quantitatively evaluated to determine the damage to buildings. The results show that the Randla-Spall achieves 94.71% Recall and 84.20% mIoU on the test set, improved by 4.25% and 5.37%. An integrated process using a lightweight device is achieved in this study, which is capable of efficiently extracting and quantifying spalling defects and provides valuable references for SHM. Full article
(This article belongs to the Section Civil Engineering)
Figures:
Figure 1. Residential wall spall inspection system.
Figure 2. Process of residential wall spall inspections.
Figure 3. Example of test scenario selection.
Figure 4. Data acquisition UGV.
Figure 5. Example of path planning and dataset creation.
Figure 6. Comparison of dense reconstruction.
Figure 7. Dataset structure.
Figure 8. RandLA-Spall network architecture.
Figure 9. Example of data enhancement.
Figure 10. RandLA-Spall residual module.
Figure 11. CBAM module structure: (a) CAM, (b) SAM, and (c) CBAM.
Figure 12. Comparison of experimental indicators.
Figure 13. Semantic segmentation results.
Figure 14. Larger-scale structures semantic segmentation results.
Figure 15. Comparison of segmentation indicators.
Figure 16. Comparison of ablation experiment indicators.
Figure 17. Sample example.
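The Recall and mIoU figures quoted above are standard confusion-matrix metrics over the point cloud classes. For reference, a small sketch of how they are typically computed (not the authors' evaluation code; the three-class setup is assumed for illustration):

```python
import numpy as np

def confusion_matrix(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Rows index ground-truth classes, columns index predicted classes."""
    idx = gt * num_classes + pred
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_recall(pred, gt, num_classes):
    cm = confusion_matrix(pred, gt, num_classes)
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(0) + cm.sum(1) - tp + 1e-9)   # per-class intersection over union
    recall = tp / (cm.sum(1) + 1e-9)                 # per-class recall (row totals = ground truth)
    return iou.mean(), recall.mean()

rng = np.random.default_rng(0)
gt = rng.integers(0, 3, 100000)                      # e.g. wall / spall / other (assumed classes)
pred = np.where(rng.random(100000) < 0.9, gt, rng.integers(0, 3, 100000))
print(miou_and_recall(pred, gt, 3))
```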
26 pages, 1293 KiB  
Review
Moving on to Greener Pastures? A Review of South Africa’s Housing Megaproject Literature
by Louis Lategan, Brian Fisher-Holloway, Juanee Cilliers and Sarel Cilliers
Sustainability 2025, 17(4), 1677; https://doi.org/10.3390/su17041677 - 18 Feb 2025
Viewed by 121
Abstract
South Africa is a leader in the scholarship on green urbanism in the Global South, but academic progress has not translated to broad implementation. Notably, government-subsidized housing projects have produced peripheral developments featuring low build quality, conventional gray infrastructure, and deficient socio-economic and environmental amenities. Declining delivery and increasing informal settlement spawned a 2014 shift to housing megaprojects to increase output and improve living conditions, socio-economic integration, and sustainability. The shift offered opportunities for a normative focus on greener development mirrored in the discourse surrounding project descriptions. Yet, the level of enactment has remained unclear. In reflecting on these points, this paper employs environmental justice as a theoretical framework and completes a comprehensive review of the academic literature on housing megaprojects and the depth of their greener development commitments. A three-phase, seven-stage review protocol retrieves the relevant literature, and bibliometric and qualitative content analyses identify publication trends and themes. Results indicate limited scholarship on new megaprojects with sporadic and superficial references to greener development, mostly reserved for higher-income segments and private developments. In response, this paper calls for more determined action to launch context-aware and just greener megaprojects and offers corresponding guidance for research and practice of value to South Africa and beyond. Full article
Figures:
Figure 1. Environmental justice framework for South African housing megaprojects.
Figure 2. Review approach followed.
Full article ">
16 pages, 7597 KiB  
Article
Torque/Speed Equilibrium Point Monitoring of an Aircraft Hybrid Electric Propulsion System Through Accelerometric Signal Processing
by Vincenzo Niola, Chiara Cosenza, Enrico Fornaro, Pierangelo Malfi, Francesco Melluso, Armando Nicolella, Sergio Savino and Mario Spirto
Appl. Sci. 2025, 15(4), 2135; https://doi.org/10.3390/app15042135 - 18 Feb 2025
Viewed by 170
Abstract
The present work proposes a new torque/speed equilibrium point monitoring technique for an aircraft Hybrid Electric Propulsion System (HEPS) through an accelerometric-signal-based approach. Sampled signals were processed using statistical indexes, filtering, and a feature reduction and selection algorithm to train a classification Feedforward Neural Network. A supervised Machine Learning model was developed to classify the HEPS operating modes characterized by an Internal Combustion Engine as a single propulsor or by combining the latter with an Electric Machine used as a motor or a generator. The abnormal changes in the torque/speed equilibrium point were detected by the monitoring index built by computing the Root Mean Square on the value identified by the classifier. The procedure was validated through experimental tests that demonstrated its validity. Full article
(This article belongs to the Special Issue Fault Diagnosis and Detection of Machinery)
Figures:
Figure 1: Hybrid electric architecture for aircraft application.
Figure 2: Propeller torque curve.
Figure 3: Training, validation, and testing dataset partition workflow.
Figure 4: Model training, optimization, and testing workflow.
Figure 5: ICE mode test: (a) angular speed, (b) total torque, (c) x-axis vibrational signal, (d) y-axis vibrational signal, (e) z-axis vibrational signal, and (f) combined signal.
Figure 6: Parameter test trends: (a) angular speed, (b) total torque, and (c) EM torque.
Figure 7: Raw and filtered index trends: (a) RMS, (b) kurtosis, (c) crest factor.
Figure 8: PCA analysis results: (a) total explained variance and (b) single PC component explained variance.
Figure 9: Loss curves: (a) last FFN loss curve and (b) Bayesian Optimization loss curve.
Figure 10: Confusion matrix.
Figure 11: Final test parameter trends with the equilibrium point outside the ±5% limits highlighted in red: (a) propeller torque, (b) Electric Machine torque, and (c) angular speed.
Figure 12: Final test results with the equilibrium point outside the ±5% limits highlighted in red: (a) propeller torque curve trend in the angular speed–torque curve plane and (b) RMS trend.
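The monitoring logic described in the abstract above pairs windowed vibration statistics with a classifier whose operating-mode outputs are smoothed into an RMS-based index. Below is a minimal Python sketch of that idea; the window handling, the exact feature set, the mode encoding, and the placeholder `classifier` object are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vibration_features(window: np.ndarray) -> np.ndarray:
    """Statistical indexes of one accelerometer window: RMS, kurtosis, crest factor."""
    rms = np.sqrt(np.mean(window ** 2))
    centered = window - window.mean()
    kurtosis = np.mean(centered ** 4) / (np.mean(centered ** 2) ** 2 + 1e-12)
    crest_factor = np.max(np.abs(window)) / (rms + 1e-12)
    return np.array([rms, kurtosis, crest_factor])

def monitoring_index(mode_values, span: int = 10) -> np.ndarray:
    """Sliding RMS of the classifier's predicted operating-mode values."""
    mode_values = np.asarray(mode_values, dtype=float)
    out = np.empty(len(mode_values) - span + 1)
    for i in range(len(out)):
        out[i] = np.sqrt(np.mean(mode_values[i:i + span] ** 2))
    return out

# Hypothetical usage (the classifier object and mode encoding are assumptions):
# windows = [...]                                # accelerometer signal split into fixed-length windows
# X = np.vstack([vibration_features(w) for w in windows])
# modes = classifier.predict(X)                  # e.g. 1 = ICE only, 2 = ICE + EM motor, 3 = ICE + EM generator
# index = monitoring_index(modes)
# A step change in 'index' would flag a shift of the torque/speed equilibrium point.
```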
15 pages, 3675 KiB  
Article
Automatic Annotation of Map Point Features Based on Deep Learning ResNet Models
by Yaolin Zhang, Zhiwen Qin, Jingsong Ma, Qian Zhang and Xiaolong Wang
ISPRS Int. J. Geo-Inf. 2025, 14(2), 88; https://doi.org/10.3390/ijgi14020088 - 17 Feb 2025
Viewed by 171
Abstract
Point feature cartographic label placement is a key problem in automatic map labeling. Prior research typically addresses either label conflicts or label overlaps, without fully considering and resolving both types of issue together. In this study, we apply machine learning to the automatic placement of point feature labels, since label placement relies heavily on expert knowledge and is therefore well suited to neural networks trained to reproduce expert decisions. We trained a ResNet model on a large set of correctly labeled map images; the trained model then predicted the proper label location for a given unlabeled point feature. We assessed the outcomes both quantitatively and qualitatively, contrasting the ResNet model's output with expert manual placement and the conventional Maplex automatic placement method. In this evaluation, the ResNet model reached a test set accuracy of 97.08%, demonstrating its ability to place point feature labels correctly. This study offers a workable solution to the label overlap and conflict problem while also improving the map's esthetic quality and the clarity of the presented information. Full article
Figures:
Figure 1: Workflow for the automatic annotation of point features using ResNet. (A) Acquiring and preprocessing the map data; (B) training and testing the model; (C) evaluating model quality. The arrows represent the order of the workflow.
Figure 2: Point feature label candidate location models: schematic diagram of the 8-position model.
Figure 3: Original map data of Xuzhou City, Jiangsu Province. The orange area features represent residential land and facilities, and the blue area features represent drainage (area). The blue line features represent drainage (line), and the other colored line features represent the various levels of roads.
Figure 4: Automatic clipping of point feature label candidate location maps: (a) the size of the text box indicated by the point feature annotation and (b) automatic cropping of candidate position maps for the 8 orientations around point features. The numbers 1–8 represent the clipping order, i.e., the position priority.
Figure 5: Data are converted to grayscale, with brightness adjusted to reflect the priority order. The arrows represent the processing order: (a) graying images; (b) adding data priority.
Figure 6: ResNet model structure diagram.
Figure 7: Convergence of the training loss and training accuracy.
Figure 8: Text labeling performed with (a) ResNet, (b) Maplex, (c) ResNet, and (d) Maplex.
Figure 9: Examples of label placement performed by ResNet: (a) label conflict; (b) label overlaps with other point features; (c) label overlaps the river.
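The 8-position model described in the entry above casts label placement as an image classification problem: a ResNet scores a map crop around an unlabeled point and returns one of eight candidate label positions. The following is a minimal PyTorch sketch of that formulation; the ResNet-18 backbone, the crop layout, the checkpoint name, and the 0-7 position encoding are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_POSITIONS = 8  # candidate label positions in the 8-position model

def build_position_classifier() -> nn.Module:
    """ResNet-18 backbone with an 8-way classification head (a stand-in for the trained ResNet)."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, NUM_POSITIONS)
    return net

def predict_label_position(net: nn.Module, crop: torch.Tensor) -> int:
    """crop: (3, H, W) tensor holding the grayscale map patch around the unlabeled
    point, replicated to 3 channels. Returns the predicted position index (0-7)."""
    net.eval()
    with torch.no_grad():
        logits = net(crop.unsqueeze(0))  # add a batch dimension -> (1, NUM_POSITIONS)
        return int(logits.argmax(dim=1).item())

# Hypothetical usage:
# net = build_position_classifier()
# net.load_state_dict(torch.load("position_resnet.pt"))  # assumed checkpoint name
# best_position = predict_label_position(net, crop_tensor)
```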
25 pages, 13237 KiB  
Article
A High-Precision Virtual Central Projection Image Generation Method for an Aerial Dual-Camera
by Xingzhou Luo, Haitao Zhao, Yaping Liu, Nannan Liu, Jiang Chen, Hong Yang and Jie Pan
Remote Sens. 2025, 17(4), 683; https://doi.org/10.3390/rs17040683 - 17 Feb 2025
Viewed by 198
Abstract
Aerial optical cameras are the primary means of capturing high-resolution images for large-scale mapping products. To improve aerial photography efficiency, multiple cameras are often combined to generate large-format virtual central projection images. This paper presents a high-precision method for directly transforming the raw images obtained from a dual-camera system mounted at an oblique angle into virtual central projection images, thereby enabling the construction of low-cost, large-format aerial camera systems. The method begins with adaptive sub-block partitioning of the overlapping regions of the raw images to extract evenly distributed feature points, followed by iterative relative orientation to improve accuracy and reliability. A global projection transformation matrix is then constructed, and the sigmoid function is employed as a weighted distance function for image stitching. The results demonstrate that the proposed method produces more evenly distributed feature points, higher relative orientation accuracy, and greater reliability. Simulation analysis of image overlap indicates that when the overlap exceeds 7%, stitching accuracy can be better than 1.25 μm. The aerial triangulation results demonstrate that the virtual central projection images satisfy the criteria for producing 1:1000 scale mapping products. Full article
Figures:
Figure 1: Imaging principle of ALC 2000. (a) Design model. (b) Geometry model.
Figure 2: Workflow of virtual image generation.
Figure 3: Diagram of adaptive sub-block partitioning with pre-transformation in the overlapping region.
Figure 4: Flowchart of the feature point extraction and matching algorithm.
Figure 5: Flowchart of the relative orientation calibration algorithm.
Figure 6: Distance weight curve of the overlapping region.
Figure 7: Coverage range of experimental flight routes in Hefei and typical data.
Figure 8: SIFT based on adaptive sub-block partitioning with pre-transformation: (a) city; (b) forest; (c) farmland.
Figure 9: Distribution and point density analysis of corresponding feature points in the overlapping region using different methods: (a) direct SIFT extraction in the overlap; (b) direct SURF extraction in the overlap; (c) direct AKAZE extraction in the overlap; (d) direct ORB extraction in the overlap; (e) LoFTR extraction based on adaptive sub-block partitioning with pre-transformation; (f) SIFT extraction based on adaptive sub-block partitioning with pre-transformation (ASPPT-SIFT, ours).
Figure 10: Relative orientation calibration results: (a) angle φ; (b) angle ω; (c) angle κ.
Figure 11: Relative orientation accuracy assessment results: (a) angle φ accuracy; (b) angle ω accuracy; (c) angle κ accuracy.
Figure 12: Disparity distance root mean square error.
Figure 13: Simulation data of different image overlaps.
Figure 14: Relative orientation calibration accuracy of different overlaps.
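The stitching step in the entry above weights each pixel in the overlap by a sigmoid of its distance to the seam, so the two images fade into each other smoothly instead of joining along a hard edge. The short Python sketch below illustrates that kind of sigmoid-weighted blend; the steepness value, the signed-distance convention, and the assumption of already co-registered overlap patches are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def sigmoid_weight(dist_to_seam: np.ndarray, steepness: float = 0.05) -> np.ndarray:
    """Sigmoid of the signed distance to the seam: close to 0 deep on image A's side,
    close to 1 deep on image B's side, with a smooth transition across the overlap."""
    return 1.0 / (1.0 + np.exp(-steepness * dist_to_seam))

def blend_overlap(img_a: np.ndarray, img_b: np.ndarray,
                  dist_to_seam: np.ndarray) -> np.ndarray:
    """Weighted blend of two co-registered overlap patches of identical shape.
    dist_to_seam holds the per-pixel signed distance (negative on A's side,
    positive on B's side)."""
    w = sigmoid_weight(dist_to_seam)
    if img_a.ndim == 3:              # broadcast the weights over color channels
        w = w[..., None]
    return (1.0 - w) * img_a.astype(float) + w * img_b.astype(float)
```

A steeper sigmoid confines the transition to a narrow band near the seam, while a gentler one spreads the blend over more of the overlap; either way the weights sum to one at every pixel.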