Search Results (10,399)

Search Parameters:
Keywords = multiple features

17 pages, 1370 KiB  
Article
FL-YOLOv8: Lightweight Object Detector Based on Feature Fusion
by Ying Xue, Qijin Wang, Yating Hu, Yu Qian, Long Cheng and Hongqiang Wang
Electronics 2024, 13(23), 4653; https://doi.org/10.3390/electronics13234653 - 25 Nov 2024
Abstract
In recent years, anchor-free object detectors have become predominant in deep learning. The YOLOv8 model, a widely used and influential real-time anchor-free detector, efficiently detects objects across multiple scales. However, its generalization performance is limited, the feature fusion within the neck module relies heavily on the structural design and dataset size, and small objects remain particularly difficult to localize and detect. To address these issues, we propose the FL-YOLOv8 object detector, built on YOLOv8s. First, we introduce the FSDI module in the neck, which enhances semantic information across all layers and incorporates rich detail features through straightforward layer-hopping connections; the module integrates high-level and low-level information to improve detection accuracy and efficiency. We also optimize the model structure and construct the LSCD module in the detection head; adopting a lightweight shared-convolution detection head reduces the model's parameters and computation by 19% and 10%, respectively. Our model achieves a comprehensive performance of 45.5% on the COCO benchmark dataset, surpassing the baseline by 0.8 percentage points. To further validate the method, experiments were also performed on a domain-specific urine sediment dataset (FCUS22), and the category-level detection results further confirm the effectiveness of the FL-YOLOv8 object detection algorithm. Full article
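To make the lightweight shared-convolution detection head (LSCD) idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: one depthwise-separable block is reused across all pyramid levels, so its parameters are counted once rather than per level. The channel sizes, layer choices, and class count are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SharedConvHead(nn.Module):
    """Minimal sketch of a lightweight shared-convolution detection head:
    one conv block is reused across all pyramid levels, so its parameters
    are shared instead of duplicated per level."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # Shared stem: applied identically to every feature level.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1, groups=in_channels),  # depthwise
            nn.Conv2d(in_channels, in_channels, 1),                                 # pointwise
            nn.SiLU(),
        )
        # Separate 1x1 prediction layers for boxes and classes.
        self.box_pred = nn.Conv2d(in_channels, 4, 1)
        self.cls_pred = nn.Conv2d(in_channels, num_classes, 1)

    def forward(self, features):
        # `features` is a list of multi-scale maps, e.g. P3/P4/P5 from the neck.
        outputs = []
        for feat in features:
            x = self.shared(feat)
            outputs.append((self.box_pred(x), self.cls_pred(x)))
        return outputs

# Toy usage: three pyramid levels with 256 channels each.
feats = [torch.randn(1, 256, s, s) for s in (80, 40, 20)]
head = SharedConvHead(256, num_classes=80)
preds = head(feats)
print([tuple(p[0].shape) for p in preds])
```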
24 pages, 349 KiB  
Article
“Settler Maintenance” and Migrant Domestic Worker Ecologies of Care
by Rachel C. Lee, Abraham Encinas and Lesley Thulin
Humanities 2024, 13(6), 164; https://doi.org/10.3390/h13060164 - 25 Nov 2024
Abstract
Oral histories of Latina domestic workers in the United States feature hybrid narratives combining accounts of illness and “toxic discourse”. We approach domestic workers’ illnesses and disabilities in a capacious, extra-medical context that registers multiple axes of precarity (economic, racial, and migratory). We are naming this context “settler maintenance”. Riffing on the specific and general valences of “maintenance” (i.e., as a synonym for cleaning work, and as a term for the practices and ideologies involved in a structure’s upkeep), this term has multiple meanings. First, it describes U.S. domestic workers’ often-compulsory use of hazardous chemical agents that promise to remove dirt speedily, yet that imperil domestic workers’ health. The use of these chemicals perpetuates two other, more abstract kinds of settler maintenance: (1) the continuation of socioeconomic hierarchies between immigrant domestic workers and settler employers, and (2) the continuation of (white) settlers’ extractive relationship to the land qua private property. To challenge this logic of settler maintenance, which is predicated on a lack of care for care workers, Latina domestic workers have developed alternative forms of care via lateral networks and political activism. Full article
(This article belongs to the Special Issue Care in the Environmental Humanities)
21 pages, 4501 KiB  
Article
A Deep Learning-Based Method for Bearing Fault Diagnosis with Few-Shot Learning
by Yang Li, Xiaojiao Gu and Yonghe Wei
Sensors 2024, 24(23), 7516; https://doi.org/10.3390/s24237516 - 25 Nov 2024
Abstract
To tackle the issue of limited sample data in small-sample fault diagnosis for rolling bearings using deep learning, we propose a fault diagnosis method based on a KANs-CNN network. First, the raw vibration signals are converted into two-dimensional time-frequency images via a continuous wavelet transform. Next, a CNN combined with KANs is used for feature extraction; the nonlinear activation of KANs helps extract deep and complex features from the data. An FAN module is added after the CNN-KANs output; it can employ various feature aggregation strategies, such as weighted averaging, max pooling, and addition aggregation, to combine information from multiple feature levels. To further address the small-sample issue, diffusion networks are used to generate additional data from the limited bearing and tool samples, thereby increasing the dataset size and enhancing fault diagnosis accuracy. Experimental results demonstrate that, under small-sample conditions, this method achieves higher accuracy than other approaches. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures
Figure 1: Methodology of the proposed research work.
Figure 2: The detailed differences between MLP and KAN.
Figure 3: A schematic representation of a spline curve.
Figure 4: Spline interpolation, showcasing different orders: (a) cubic interpolation, (b) 7th-order interpolation, (c) 5th-order interpolation, and (d) linear spline.
Figure 5: The noise addition process in the diffusion network (displayed every 20 steps).
Figure 6: The structure of the KANs-CNN network.
Figure 7: The fault diagnosis flowchart.
Figure 8: Time-frequency diagram of the bearing after wavelet transform.
Figure 9: Loss function, accuracy, and confusion matrix of the KANs-CNN experiment on the augmented bearing dataset using DDPM.
Figure 10: Comparison of time-frequency maps of the bearing dataset before and after using the diffusion network.
Figure 11: In comparative experiments with other methods, KANs-CNN achieved the highest average accuracy.
Figure 12: Comparison of confusion matrices for fault diagnosis accuracy among the DDPM, GAN, and VAE generative models.
Figure 13: Time-frequency maps of the tool after wavelet transformation.
Figure 14: Comparison of confusion matrices for fault diagnosis accuracy among the DDPM, GAN, and VAE generative models.
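The preprocessing step described in the abstract, converting raw 1-D vibration signals into 2-D time-frequency images with a continuous wavelet transform, can be sketched with PyWavelets as follows. The synthetic signal, sampling rate, and Morlet wavelet are assumptions; the exact CWT settings of the paper are not given here.

```python
import numpy as np
import pywt

# Synthetic vibration signal: a 2 kHz carrier plus a small impulsive component.
fs = 12_000                     # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 2000 * t) + 0.5 * (np.sin(2 * np.pi * 120 * t) > 0.99)

# Continuous wavelet transform with a Morlet wavelet -> 2-D scalogram.
scales = np.arange(1, 129)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)      # shape: (len(scales), len(signal))

# The scalogram can then be resized and stacked as an image batch for the CNN.
print(scalogram.shape, freqs.min(), freqs.max())
```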
24 pages, 25933 KiB  
Article
Accurate Paddy Rice Mapping Based on Phenology-Based Features and Object-Based Classification
by Jiayi Zhang, Lixin Gao, Miao Liu, Yingying Dong, Chongwen Liu, Raffaele Casa, Stefano Pignatti, Wenjiang Huang, Zhenhai Li, Tingting Tian and Richa Hu
Remote Sens. 2024, 16(23), 4406; https://doi.org/10.3390/rs16234406 - 25 Nov 2024
Abstract
Highly accurate rice cultivation distribution and area extraction are essential to food security. Inner Mongolia, whose slogan is “from scientific rice to world rice”, is an essential national rice production base. However, high-quality, high-resolution rice mapping products are still scarce across the Inner Mongolia Autonomous Region, which hinders rational planning of farmland resources, maintaining food security, and promoting sustainable growth of the local agricultural economy. In this study, a rice backscattering intensity difference index was constructed from the vertically polarized backscatter intensity of Sentinel-1, and a phenology differential index was derived from the spectral indices of two critical rice phenological phases in Sentinel-2 images. Other spectral features, including spectral indices, tasseled cap, and texture features, were computed, and simple non-iterative clustering (SNIC) was used to achieve image segmentation. These variables served as input features for the random forest (RF) algorithm. Results reveal that applying RF with the SNIC segmentation algorithm to combined optical and synthetic aperture radar data is an effective way to extract rice in mid-latitude regions. The overall accuracy and kappa coefficient are 0.98 and 0.967, respectively, and the accuracy for rice is 0.99, as verified against empirical data. These results meet the requirements of regional rice cultivation assessment and area monitoring. Furthermore, owing to its resilience against longitude-associated influences, the model discerns rice across diverse regions and multiple years, achieving an R2 of 0.99. This capability significantly bolsters efforts to improve regional food security and the pursuit of sustainable development. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Show Figures
Figure 1: (a) The study area and the tiles covering Hinggan League; (b) paddy rice sample points.
Figure 2: The methodological workflow of the research.
Figure 3: Sentinel-2 characteristic vegetation index time-series curve and S-G filtering effect plot (left); Sentinel-1 VH backscatter time-series curve and S-G filtering effect plot (right).
Figure 4: Time trends of some indices from optical data (top), backscattering (center), and during the different growth phases of rice (bottom), covering the whole growth period.
Figure 5: Statistical results of physical difference indices for different land cover types: (a) NDVI and NDWI differences for different features; (b) REI (“outliers” denote abnormal values); (c) VH differences for different features; (d) SREI.
Figure 6: Comparison among segmentation results from different seed pixel pitches.
Figure 7: Contribution of feature variables ((A) Tqx, (B) Wlht, (C) Zltq, (D) Keqyyzq, (E) Keqyyqq, (F) Synthesize).
Figure 8: Classification results of different combinations: (a) comparison of rice extraction results from different method combinations; (b) comparison of rice extraction areas from the four combinations with official statistical data.
Figure 9: Comparison of classification results using different algorithms (five selected regions from top to bottom: Zltq, Keqyyqq, Keqyyzq, Tqx, Wlht) [59].
Figure 10: Rice identification results of Hinggan League (bottom left: Keqyyzq; top left: Keqyyqq; center: Wlht; top right: Zltq; bottom right: Tqx).
Figure 11: Differences in phenological curves of rice-planting cities in Inner Mongolia represented by time series of NDVI and NDWI: (a) NDVI and NDWI values in 2016; (b) rice cultivation area extraction results for 2016.
Figure 12: Differences in phenological curves of rice-planting cities in Inner Mongolia: (a) annual variation in NDVI values; (b) annual variation in NDWI values.
Figure 13: Extraction of rice cultivation in other cities (from right to left: HulunBuir, Tongliao, Chifeng, Hohhot, Erdos; (a) Google satellite imagery; (b) extraction results).
Figure 14: Extracted rice area compared with official statistical data.
Figure 15: Comparison of rice mapping accuracy (F1 score) under two scenarios using ablation experiments.
Figure 16: Comparison of estimated rice planting area in Hinggan League in 2016 with statistical data from 2021.
Figure 17: Actual scene of HLBR and phenological curve (the red arrows from left to right represent the period of snowmelt and bare soil).
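The classification stage, object-level features fed to a random forest and scored with overall accuracy and the kappa coefficient, can be illustrated with a small scikit-learn sketch on synthetic data. The feature names and values below are placeholders; the SNIC segmentation and Sentinel-1/2 processing of the actual study are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Toy feature table standing in for per-object features: a phenology
# difference index (e.g. NDVI at heading minus NDVI at transplanting),
# a VH backscatter difference, and two extra spectral features.
n = 2000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)  # 1 = rice

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
print("feature importances:", rf.feature_importances_)
```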
10 pages, 2397 KiB  
Article
Clinical Characteristics and Prevalence of Celiac Disease in a Large Cohort of Type 1 Diabetes from Saudi Arabia
by Mohammed Hakami, Saeed Yafei, Abdulrahman Hummadi, Raed Abutaleb, Abdullah Khawaji, Yahia Solan, Turki Aljohani, Ali Jaber Alhagawy, Amer Al Ali, Shakir Bakkari, Morghma Adawi, Maram Saleh, Sayidah Zaylaee, Rashad Aref, Khaled Tahash, Ebrahim Haddad, Amnah Hakami, Mohammed Hobani and Ibrahem Abutaleb
Medicina 2024, 60(12), 1940; https://doi.org/10.3390/medicina60121940 - 25 Nov 2024
Abstract
Background and Objectives: The link between celiac disease (CD) and type 1 diabetes (T1D) has been well documented in the medical literature and is thought to be due to a shared genetic predisposition in addition to environmental triggers. This study aimed to determine the seroprevalence and biopsy-proven CD (BPCD) prevalence in individuals with T1D from Saudi Arabia and to identify their clinical characteristics and the impact on glycemic control. Materials and Methods: A total of 969 children and adolescents with confirmed T1D were investigated. Prospective and retrospective data were collected, including clinical, anthropometric, and biochemical data. Total IgA and anti-TTG IgA antibodies were screened to detect seropositive cases. Upper intestinal endoscopy and biopsy were performed to identify BPCD. Results: The seroprevalence of CD was 14.6% (141/969), while the BPCD prevalence was 7.5%. Females had a higher prevalence than males (17.8% vs. 9.8%, p < 0.001). The CD group had lower HbA1c and more frequent hypoglycemia than the seronegative group. Conclusions: This study highlights the high prevalence of CD in Saudi patients with T1D. CD has multiple effects on glycemic control, growth, and puberty in children and adolescents with T1D. We emphasize the importance of early screening for CD at the time of diabetes diagnosis and periodically thereafter, or if any atypical features present, especially anemia, growth delay, underweight, or frequent hypoglycemia. Full article
(This article belongs to the Section Endocrinology)
Show Figures
Figure 1: Marsh 3b with intraepithelial lymphocytosis (left) and marked villous atrophy (right).
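As a worked check of the reported seroprevalence, 141 seropositive cases out of 969 patients is about 14.6%, matching the abstract. The short sketch below also adds an illustrative 95% Wilson confidence interval, which the abstract itself does not report.

```python
from statsmodels.stats.proportion import proportion_confint

seropositive, total = 141, 969
prevalence = seropositive / total
low, high = proportion_confint(seropositive, total, alpha=0.05, method="wilson")

print(f"seroprevalence: {prevalence:.1%}")          # ~14.6%, matching the abstract
print(f"95% Wilson CI: {low:.1%} - {high:.1%}")     # illustrative, not reported in the paper
```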
19 pages, 4339 KiB  
Article
VDMNet: A Deep Learning Framework with Vessel Dynamic Convolution and Multi-Scale Fusion for Retinal Vessel Segmentation
by Guiwen Xu, Tao Hu and Qinghua Zhang
Bioengineering 2024, 11(12), 1190; https://doi.org/10.3390/bioengineering11121190 - 25 Nov 2024
Abstract
Retinal vessel segmentation is crucial for diagnosing and monitoring ophthalmic and systemic diseases. Optical Coherence Tomography Angiography (OCTA) enables detailed imaging of the retinal microvasculature, but existing methods for OCTA segmentation face significant limitations, such as susceptibility to noise, difficulty in handling class imbalance, and challenges in accurately segmenting complex vascular morphologies. In this study, we propose VDMNet, a novel segmentation network designed to overcome these challenges by integrating several advanced components. Firstly, we introduce the Fast Multi-Head Self-Attention (FastMHSA) module to effectively capture both global and local features, enhancing the network’s robustness against complex backgrounds and pathological interference. Secondly, the Vessel Dynamic Convolution (VDConv) module is designed to dynamically adapt to curved and crossing vessels, thereby improving the segmentation of complex morphologies. Furthermore, we employ the Multi-Scale Fusion (MSF) mechanism to aggregate features across multiple scales, enhancing the detection of fine vessels while maintaining vascular continuity. Finally, we propose Weighted Asymmetric Focal Tversky Loss (WAFT Loss) to address class imbalance issues, focusing on the accurate segmentation of small and difficult-to-detect vessels. The proposed framework was evaluated on the publicly available ROSE-1 and OCTA-3M datasets. Experimental results demonstrated that our model effectively preserved the edge information of tiny vessels and achieved state-of-the-art performance in retinal vessel segmentation across several evaluation metrics. These improvements highlight VDMNet’s superior ability to capture both fine vascular details and overall vessel connectivity, making it a robust solution for retinal vessel segmentation. Full article
(This article belongs to the Section Biosignal Processing)
Show Figures
Graphical abstract
Figure 1: The architecture of VDMNet, which is composed of encoder, decoder, and skip connections.
Figure 2: The proposed Fast Multi-Head Self-Attention mechanism: (a) encoder and (b) decoder. They share similar concepts, but (b) takes two inputs: the high-resolution features from skip connections in the encoder and the low-resolution features from the decoder.
Figure 3: Multi-Scale Fusion module.
Figure 4: Retinal vessel segmentation results of the proposed VDMNet and other segmentation networks. From top to bottom, the OCTA images of rows 1 and 3 come from ROSE-1, and rows 5 and 7 from OCTA-3M; rows 2, 4, 6, and 8 show the corresponding locally zoomed-in OCTA images, as well as the ground truth and segmentation results.
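The WAFT loss is described as a weighted asymmetric focal Tversky loss for handling class imbalance. The sketch below implements the standard focal Tversky loss in PyTorch as a reference point; it is not the authors' exact WAFT formulation, and the alpha, beta, and gamma values are illustrative.

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Standard focal Tversky loss for binary segmentation (sigmoid outputs).
    alpha > beta penalizes false negatives more, which favors thin vessels."""
    pred = torch.sigmoid(pred).reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1).float()

    tp = (pred * target).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)

    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()

# Toy usage on a 4-image batch of 64x64 logits and binary vessel masks.
logits = torch.randn(4, 1, 64, 64, requires_grad=True)
masks = (torch.rand(4, 1, 64, 64) > 0.9).float()
loss = focal_tversky_loss(logits, masks)
loss.backward()
print(float(loss))
```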
20 pages, 34564 KiB  
Article
Multi-Target Tracking with Multiple Unmanned Aerial Vehicles Based on Information Fusion
by Pengnian Wu, Yixuan Li and Dong Xue
Drones 2024, 8(12), 704; https://doi.org/10.3390/drones8120704 - 25 Nov 2024
Abstract
In high-altitude scenarios, targets tend to occupy a small number of pixels within the UAV’s field of view, resulting in substantial errors when identity recognition is attempted based solely on appearance features during multi-UAV joint tracking. Existing methodologies typically propose projecting multi-view data onto a single plane and leveraging distance information for identity association; however, their accuracy remains low as they are contingent on one-dimensional target information. To address this limitation, this paper introduces the UAVST-HM (UAV Swarm Tracking in High-altitude scenarios for Multiple targets) model, specifically designed to handle the characteristics of targets in the field of view of multiple UAVs at high altitudes. Initially, we develop techniques for extracting targets’ appearance, geometric, and distribution features. Subsequently, adaptive weights, calculated based on the mean of the respective features, are devised to amalgamate these diverse features, thereby constructing a cost matrix for cross-view target identity matching. This matrix is processed through the Hungarian algorithm, and multi-view target identity association is ultimately achieved via threshold filtering. On the MDMT dataset, our method enhances the MDA indicator, which assesses cross-view target identity matching, by 1.78 percentage points compared to the current state of the art. This significant enhancement substantially improves the overall efficacy of multi-UAV joint visual tracking from a high-altitude perspective. Full article
(This article belongs to the Section Drone Design and Development)
Show Figures
Figure 1: Multi-view target identity association matching model architecture. Target detection and tracking are conducted first; the image data from various viewpoints then undergo affine transformation onto a common plane; adaptive weighted fusion of features at different levels, based on the target scale, builds an identity matching cost matrix; finally, bipartite graph matching with threshold filtering yields the results.
Figure 2: Different view projection effects. In multi-view images with largely overlapping background regions, the corresponding target positions remain highly congruent after affine transformation onto a common plane, which provides valuable information for the identity matching cost matrix.
Figure 3: Regional target attention in different views. Observing the same area from different perspectives with self-attention mechanisms shows that significant regions often overlap to a certain degree.
Figure 4: Key modules for target feature extraction. The current feature extraction process fails to comprehensively utilize the appearance information of multiple targets within the region and does not enhance features specifically for small targets.
Figure 5: Spatial distribution and geometric features of targets across different views. In high-altitude scenes, the geometric distribution structures of multiple targets appear similar from different perspectives, so the spatial relationships between targets and their neighbors can be represented as vectors whose similarity determines cross-view identity associations.
Figure 6: MDA mean change curves across all sequences (carafe++bytetrack). The red curve mostly lies above the blue curve, showing that UAVST-HM achieves higher cross-view target identity matching accuracy than MIA-Net in the majority of scenarios.
Figure 7: MDA mean change curves across all sequences (autoassign++bytetrack). The red curve starts slightly above the blue curve, the two intertwine in the middle phase, and the red curve rises above again in the final phase, indicating higher matching accuracy for UAVST-HM in most scenarios.
Figure 8: Multi-view target identity association matching comparison. In the UAVST-HM results (first row), the same white vehicle is consistently labeled 25 across perspectives, successfully achieving identity association; in the MIA-Net results (second row), it receives two different labels, 25 and 99, indicating a failed association.
Figure 9: Multi-view target identity association matching comparison. In the UAVST-HM results (first row), the same light green vehicle is consistently labeled 673 across perspectives; in the MIA-Net results (second row), it receives two different labels, 734 and 741, indicating a failed association.
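The identity-association step, a fused cost matrix solved with the Hungarian algorithm followed by threshold filtering, can be sketched with SciPy as follows. The cost values and threshold are placeholders; the paper's adaptive feature weighting is not reproduced.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: rows = targets in view A, columns = targets in view B.
# In the paper this matrix fuses appearance, geometric, and distribution
# features with adaptive weights; here it is random for illustration only.
rng = np.random.default_rng(1)
cost = rng.uniform(0.0, 1.0, size=(4, 5))

row_idx, col_idx = linear_sum_assignment(cost)   # Hungarian algorithm

# Threshold filtering: keep only matches whose cost is low enough.
threshold = 0.4
matches = [(int(r), int(c)) for r, c in zip(row_idx, col_idx) if cost[r, c] < threshold]
print("accepted cross-view identity matches:", matches)
```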
19 pages, 10741 KiB  
Article
Electroencephalography-Based Motor Imagery Classification Using Multi-Scale Feature Fusion and Adaptive Lasso
by Shimiao Chen, Nan Li, Xiangzeng Kong, Dong Huang and Tingting Zhang
Big Data Cogn. Comput. 2024, 8(12), 169; https://doi.org/10.3390/bdcc8120169 - 25 Nov 2024
Abstract
Brain–computer interfaces, where motor imagery electroencephalography (EEG) signals are transformed into control commands, offer a promising solution for enhancing the standard of living for disabled individuals. However, the performance of EEG classification has been limited in most studies due to a lack of attention to the complementary information inherent at different temporal scales. Additionally, significant inter-subject variability in sensitivity to biological motion poses another critical challenge in achieving accurate EEG classification in a subject-dependent manner. To address these challenges, we propose a novel machine learning framework combining multi-scale feature fusion, which captures global and local spatial information from different-sized EEG segmentations, and adaptive Lasso-based feature selection, a mechanism for adaptively retaining informative subject-dependent features and discarding irrelevant ones. Experimental results on multiple public benchmark datasets revealed substantial improvements in EEG classification, achieving rates of 81.36%, 75.90%, and 68.30% for the BCIC-IV-2a, SMR-BCI, and OpenBMI datasets, respectively. These results not only surpassed existing methodologies but also underscored the effectiveness of our approach in overcoming specific challenges in EEG classification. Ablation studies further confirmed the efficacy of both the multi-scale feature analysis and adaptive selection mechanisms. This framework marks a significant advancement in the decoding of motor imagery EEG signals, positioning it for practical applications in real-world BCIs. Full article
Show Figures
Figure 1: Flowchart of the proposed framework.
Figure 2: The BCIC-IV-2a dataset paradigm.
Figure 3: The SMR-BCI dataset paradigm.
Figure 4: The OpenBMI dataset paradigm.
Figure 5: Ablation study classification G-mean (%) of different feature extraction cases on three datasets (FE: feature extraction).
Figure 6: T-SNE visualization for preprocessed datasets and multi-scale feature sets of Subject 3 from the BCIC-IV-2a dataset, Subject 3 from the SMR-BCI dataset, and Subject 33 from the OpenBMI dataset. Each point represents a feature vector corresponding to a trial in the MI-EEG dataset, colored by class: blue for the left hand, red for the right hand, and green for the feet.
Figure 7: Ablation study classification G-mean (%) of different feature selection cases on three datasets (FS: feature selection).
Figure 8: Heatmaps of the distribution across frequency bands and temporal windows of subject-specific features selected by MIC-Lasso-based feature selection on the BCIC-IV-2a, SMR-BCI, and OpenBMI datasets.
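The adaptive Lasso feature-selection idea, penalizing each feature in inverse proportion to an initial coefficient estimate, can be sketched with scikit-learn as below. The initial estimator, gamma, and alpha are assumptions; the paper's MIC-based weighting is not reproduced.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.datasets import make_classification

# Toy EEG-like feature matrix (e.g. band-power features from several scales).
X, y = make_classification(n_samples=200, n_features=60, n_informative=10, random_state=0)
y = y.astype(float)

# Step 1: initial estimate (ridge) to derive adaptive penalty weights.
init = Ridge(alpha=1.0).fit(X, y)
gamma = 1.0
weights = 1.0 / (np.abs(init.coef_) ** gamma + 1e-6)

# Step 2: solve a weighted Lasso by rescaling the columns.
X_scaled = X / weights
lasso = Lasso(alpha=0.05).fit(X_scaled, y)
coef = lasso.coef_ / weights            # map back to the original feature scale

selected = np.flatnonzero(coef != 0)
print(f"kept {selected.size} of {X.shape[1]} features:", selected[:10])
```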
25 pages, 7597 KiB  
Article
Effects of Urban Tree Species and Morphological Characteristics on the Thermal Environment: A Case Study in Fuzhou, China
by Tao Luo, Jia Jia, Yao Qiu and Ying Zhang
Forests 2024, 15(12), 2075; https://doi.org/10.3390/f15122075 - 25 Nov 2024
Viewed by 149
Abstract
Trees and their morphology can mitigate the urban heat island (UHI) effect, but the impacts of tree species and their two-dimensional (2D) and three-dimensional (3D) morphological characteristics on the thermal environment of residential spaces at the building scale have not been effectively evaluated. This research extracted tree data within a 50 m radius of sampling sites located in a residential area of a subtropical humid city, based on unmanned aerial vehicle (UAV) imagery and field measurements. It considered Ficus microcarpa L. f., Cinnamomum camphora (L.) J. Presl, and Alstonia scholaris (L.) R. Br. as three typical evergreen tree species and six quantitative indicators of trees, with the number of trees (N) serving as the fundamental indicator and mean canopy width (MCW), mean canopy height (MCH), mean tree height (MTH), canopy volume (CV), and mean canopy volume (MCV) as morphological characteristic indicators. We analyzed the impact of these six indicators on two thermal environment parameters, air temperature (AT) and relative humidity (RH), using correlation analysis and multiple linear regression analysis. Results showed that: (1) F. microcarpa, as a dominant local species, provided more than 65% of the tree canopy volume within the study area (50 m radius buffer zones), and its contribution to cooling and humidification effects was superior to that of C. camphora and A. scholaris. (2) The MTH and CV of F. microcarpa are the key factors influencing daytime AT and RH, respectively, with temporal fluctuation in impact intensity during the spring (May) daytime. (3) Among the three typical tree species, the MTH and N of F. microcarpa show the best cooling effect (adjusted R2 = 0.731, p < 0.05) during midday (13:00–14:00), while its CV and MTH have the best humidification effect (adjusted R2 = 0.748, p < 0.05) during the morning (9:00–10:00). The 2D and 3D morphological characteristic indicators effectively describe the impact and variation of tree species on the spring microclimate within small-scale residential spaces. This work provides new insights into the thermal benefits brought by the spatial growth features of trees at the building scale and offers a reference for urban residential areas in planning and management related to tree species selection, canopy maintenance, and the improvement of thermal comfort for inhabitants. Full article
Show Figures
Figure 1: (a) Range of the study area and sampling sites, (b) a set of sampling sites arranged along an urban street, (c) schematic of a 50 m radius spatial buffer around sampling sites, (d) identification and segmentation of tree crowns, (e) extraction of tree information within the 50 m buffer zone (including crowns that overlap with the buffer zone boundary).
Figure 2: Methodological flowchart used in this study.
Figure 3: Mean air temperature and relative humidity at three measurement periods during daytime (standard deviation bars represent the variability in the data).
Figure 4: (a) Spatial characteristics of mean air temperature during daytime, (b) spatial characteristics of mean relative humidity during daytime.
Figure 5: The proportion of N and CV of typical tree species within a 50 m radius of the sampling sites (N: number of trees; CV: canopy volume).
Figure 6: Quantitative features of morphological characteristic indicators of tree species within a 50 m radius of the sampling sites (the bottom and top of each box indicate the lower (Q1) and upper (Q3) quartiles, black triangular points represent outliers, and the solid black and red lines indicate the median and mean).
Figure 7: Spatial distribution of morphological characteristic indicators of three typical tree species.
Figure 8: Correlation matrix between morphological characteristic indicators of trees and AT and RH.
Figure 9: Multiple linear regression equations of morphological characteristic indicators of three typical tree species with AT and RH at different time periods.
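The analysis relating tree morphology indicators to air temperature via multiple linear regression can be sketched with statsmodels on synthetic site data, reporting the adjusted R2 metric used in the abstract. The coefficients and data below are invented for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Toy per-site table: tree indicators within a 50 m buffer and measured air temperature.
n_sites = 40
df = pd.DataFrame({
    "MTH": rng.uniform(5, 20, n_sites),    # mean tree height (m)
    "CV":  rng.uniform(50, 500, n_sites),  # canopy volume (m^3)
    "N":   rng.integers(5, 60, n_sites),   # number of trees
})
df["AT"] = 34 - 0.15 * df["MTH"] - 0.004 * df["CV"] + rng.normal(0, 0.3, n_sites)

# Multiple linear regression: AT ~ MTH + CV + N, reporting adjusted R^2.
X = sm.add_constant(df[["MTH", "CV", "N"]])
model = sm.OLS(df["AT"], X).fit()
print(model.params)
print("adjusted R^2:", round(model.rsquared_adj, 3))
```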
21 pages, 804 KiB  
Article
One-Dimensional Deep Residual Network with Aggregated Transformations for Internet of Things (IoT)-Enabled Human Activity Recognition in an Uncontrolled Environment
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Technologies 2024, 12(12), 242; https://doi.org/10.3390/technologies12120242 - 24 Nov 2024
Viewed by 332
Abstract
Human activity recognition (HAR) in real-world settings has gained significance due to the growth of Internet of Things (IoT) devices such as smartphones and smartwatches. Nonetheless, limitations such as fluctuating environmental conditions and intricate behavioral patterns have reduced the accuracy of current approaches. This research introduces a methodology employing a modified deep residual network, called 1D-ResNeXt, for IoT-enabled HAR in uncontrolled environments. We developed a comprehensive network that utilizes feature fusion and a multi-kernel block approach. The residual connections and the split–transform–merge technique mitigate accuracy degradation and reduce the number of parameters. We assessed the proposed model on three available datasets, mHealth, MotionSense, and Wild-SHARD, using accuracy, cross-entropy loss, and F1 score. The findings indicated substantial improvements in recognition performance, attaining 99.97% on mHealth, 98.77% on MotionSense, and 97.59% on Wild-SHARD, surpassing contemporary methodologies. Significantly, our model attained these outcomes with considerably fewer parameters (24,130–26,118) than other models, several of which exceeded 700,000 parameters. The 1D-ResNeXt model demonstrated outstanding effectiveness under various ambient circumstances, tackling a significant obstacle in practical HAR applications. The findings indicate that our modified deep residual network presents a viable approach for improving the dependability and usability of IoT-based HAR systems in dynamic, uncontrolled situations while preserving the computational effectiveness essential for IoT devices. The results significantly impact multiple sectors, including healthcare surveillance, intelligent residences, and customized assistive devices. Full article
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications)
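The split–transform–merge plus residual-connection design named in the abstract can be sketched as a 1-D ResNeXt-style block in PyTorch, where grouped convolution plays the role of the aggregated transformations. Channel counts, cardinality, and the input window shape are assumptions, not the paper's 1D-ResNeXt configuration.

```python
import torch
import torch.nn as nn

class ResNeXtBlock1D(nn.Module):
    """Minimal 1-D ResNeXt-style block: split-transform-merge via grouped
    convolution plus an identity (residual) connection."""
    def __init__(self, channels: int, cardinality: int = 8):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=cardinality),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.transform(x))   # residual connection

# Toy usage: a batch of 3-axis accelerometer windows projected to 32 channels.
window = torch.randn(8, 3, 128)                  # (batch, sensor axes, time steps)
stem = nn.Conv1d(3, 32, kernel_size=7, padding=3)
block = ResNeXtBlock1D(32, cardinality=8)
out = block(stem(window))
print(out.shape)                                 # torch.Size([8, 32, 128])
```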
19 pages, 3120 KiB  
Article
Optimized Fault Classification in Electric Vehicle Drive Motors Using Advanced Machine Learning and Data Transformation Techniques
by S. Thirunavukkarasu, K. Karthick, S. K. Aruna, R. Manikandan and Mejdl Safran
Processes 2024, 12(12), 2648; https://doi.org/10.3390/pr12122648 - 24 Nov 2024
Viewed by 443
Abstract
The increasing use of electric vehicles has made fault diagnosis in electric drive motors, particularly in variable speed drives (VSDs) using three-phase induction motors, a critical area of research. This article presents a fault classification model based on machine learning (ML) algorithms to identify various faults under six operating conditions: normal operating mode (NOM), phase-to-phase fault (PTPF), phase-to-ground fault (PTGF), overloading fault (OLF), over-voltage fault (OVF), and under-voltage fault (UVF). A dataset simulating real-world operating conditions, consisting of 39,034 instances and nine key motor features, was analyzed. Comprehensive data preprocessing steps, including missing value removal, duplicate detection, and data transformation, were applied to enhance the dataset’s suitability for ML models. Yeo–Johnson and Hyperbolic Sine transformations were used to reduce skewness and improve the normality of the features. Multiple ML algorithms, including CatBoost, Random Forest (RF) Classifier, AdaBoost, and quadratic discriminant analysis (QDA), were trained and evaluated using Bayesian optimization with cross-validation. The CatBoost model achieved the best performance, with an accuracy of 94.1%, making it the most suitable model for fault classification in electric vehicle drive motors. Full article
(This article belongs to the Section Energy Systems)
Show Figures
Figure 1: Proposed fault classification model.
Figure 2: Distribution of electric vehicle drive motor faults in the dataset.
Figure 3: Box plots before transformation: (a) rated torque in Nm, (b) k constant of proportionality, (c) time in s, (d) Ia in A, (e) Ib in A, (f) Ic in A, (g) Vab in V, (h) actual torque in Nm, (i) motor speed in rad/s.
Figure 4: Box plots after transformation for the same nine features.
Figure 5: Correlation heatmap of the electric vehicle drive motor faults dataset.
Figure 6: Confusion matrices of (a) CatBoost, (b) Random Forest classifier, (c) AdaBoost, and (d) quadratic discriminant analysis.
Figure 7: ROC curves for multi-class fault classification in EV drive motors showing AUC values across six fault classes for (a) CatBoost, (b) Random Forest classifier, (c) AdaBoost, and (d) quadratic discriminant analysis.
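The preprocessing-plus-classifier pipeline, a Yeo-Johnson transformation to reduce skewness followed by a tree-ensemble classifier evaluated with cross-validation, can be sketched with scikit-learn. Random Forest stands in for the paper's best-performing CatBoost model to keep the sketch dependency-light, and the data are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Toy stand-in for the motor dataset: 9 right-skewed features (currents, torque, speed).
X = rng.gamma(shape=2.0, scale=1.5, size=(3000, 9))
# Toy labels standing in for the six operating conditions (NOM, PTPF, PTGF, OLF, OVF, UVF).
y = X[:, :6].argmax(axis=1)

# Yeo-Johnson transformation to reduce skewness, then a tree-ensemble classifier.
pipe = make_pipeline(
    PowerTransformer(method="yeo-johnson"),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```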
22 pages, 9716 KiB  
Article
AFENet: An Attention-Focused Feature Enhancement Network for the Efficient Semantic Segmentation of Remote Sensing Images
by Jiarui Li and Shuli Cheng
Remote Sens. 2024, 16(23), 4392; https://doi.org/10.3390/rs16234392 - 24 Nov 2024
Viewed by 198
Abstract
The semantic segmentation of high-resolution remote sensing images (HRRSIs) faces persistent challenges in handling complex architectural structures and shadow occlusions, limiting the effectiveness of existing deep learning approaches. To address these limitations, we propose an attention-focused feature enhancement network (AFENet) with a novel encoder–decoder architecture. The encoder architecture combines ResNet50 with a parallel multistage feature enhancement group (PMFEG), enabling robust feature extraction through optimized channel reduction, scale expansion, and channel reassignment operations. Building upon this foundation, we develop a global multi-scale attention mechanism (GMAM) in the decoder that effectively synthesizes spatial information across multiple scales by learning comprehensive global–local relationships. The architecture is further enhanced by an efficient feature-weighted fusion module (FWFM) that systematically integrates remote spatial features with local semantic information to improve segmentation accuracy. Experimental results across diverse scenarios demonstrate that AFENet achieves superior performance in building structure detection, exhibiting enhanced segmentation connectivity and completeness compared to state-of-the-art methods. Full article
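A minimal PyTorch sketch of attention-weighted fusion of a deep semantic map with a shallow spatial map, loosely in the spirit of the feature-weighted fusion module (FWFM) described above, is given below. It is not the AFENet module itself; the gating design and channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Minimal sketch of attention-weighted fusion of a deep (semantic) and a
    shallow (spatial) feature map, loosely in the spirit of an FWFM-style module."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # global context
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),                      # per-channel fusion weights in [0, 1]
        )
        self.out = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, deep, shallow):
        deep = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear", align_corners=False)
        w = self.gate(torch.cat([deep, shallow], dim=1))
        return self.out(w * deep + (1 - w) * shallow)

# Toy usage: fuse a 1/16-resolution semantic map with a 1/4-resolution spatial map.
deep = torch.randn(1, 64, 32, 32)
shallow = torch.randn(1, 64, 128, 128)
fused = WeightedFusion(64)(deep, shallow)
print(fused.shape)   # torch.Size([1, 64, 128, 128])
```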
19 pages, 715 KiB  
Article
Applying Large Language Model to User Experience Testing
by Nien-Lin Hsueh, Hsuen-Jen Lin and Lien-Chi Lai
Electronics 2024, 13(23), 4633; https://doi.org/10.3390/electronics13234633 - 24 Nov 2024
Viewed by 184
Abstract
The maturation of internet usage environments has elevated User Experience (UX) to a critical factor in system success. However, traditional manual UX testing methods are hampered by subjectivity and lack of standardization, resulting in time-consuming and costly processes. This study explores the potential of Large Language Models (LLMs) to address these challenges by developing an automated UX testing tool. Our innovative approach integrates the Rapi web recording tool to capture user interaction data with the analytical capabilities of LLMs, utilizing Nielsen’s usability heuristics as evaluation criteria. This methodology aims to significantly reduce the initial costs associated with UX testing while maintaining assessment quality. To validate the tool’s efficacy, we conducted a case study featuring a tennis-themed course reservation system. The system incorporated multiple scenarios per page, allowing users to perform tasks based on predefined goals. We employed our automated UX testing tool to evaluate screenshots and interaction logs from user sessions. Concurrently, we invited participants to test the system and complete UX questionnaires based on their experiences. Comparative analysis revealed that varying prompts in the automated UX testing tool yielded different outcomes, particularly in detecting interface elements. Notably, our tool demonstrated superior capability in identifying issues aligned with Nielsen’s usability principles compared to participant evaluations. This research contributes to the field of UX evaluation by leveraging advanced language models and established usability heuristics. Our findings suggest that LLM-based automated UX testing tools can offer more consistent and comprehensive assessments. Full article
(This article belongs to the Special Issue Recent Advances of Software Engineering)
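The core loop, feeding captured interaction data to an LLM with Nielsen's usability heuristics as evaluation criteria, can be sketched as follows, assuming an OpenAI-style chat-completions client. The model name, prompt wording, and log format are placeholders, not the paper's actual tool or prompts.

```python
# Minimal sketch of prompting an LLM to review an interaction log against
# Nielsen's usability heuristics. Assumes the OpenAI Python SDK (v1-style client);
# the model name and log format are placeholders, not the paper's actual setup.
from openai import OpenAI

interaction_log = """
step 1: open /courses/tennis-101
step 2: click 'Reserve' (no visible confirmation)
step 3: submit form with empty date field -> unlabeled red border appears
"""

prompt = (
    "You are a UX evaluator. Review the interaction log below against "
    "Nielsen's 10 usability heuristics. For each violated heuristic, name it, "
    "cite the step, and suggest a fix.\n\n" + interaction_log
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",                       # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```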
26 pages, 2834 KiB  
Article
Hybrid Deep Learning and Machine Learning for Detecting Hepatocyte Ballooning in Liver Ultrasound Images
by Fahad Alshagathrh, Mahmood Alzubaidi, Samuel Gecík, Khalid Alswat, Ali Aldhebaib, Bushra Alahmadi, Meteb Alkubeyyer, Abdulaziz Alosaimi, Amani Alsadoon, Maram Alkhamash, Jens Schneider and Mowafa Househ
Diagnostics 2024, 14(23), 2646; https://doi.org/10.3390/diagnostics14232646 - 24 Nov 2024
Viewed by 253
Abstract
Background: Hepatocyte ballooning (HB) is a significant histological characteristic linked to the advancement of non-alcoholic fatty liver disease (NAFLD) and non-alcoholic steatohepatitis (NASH). Although clinicians now consider liver biopsy the most reliable method for identifying HB, its invasive nature and related dangers highlight the need for the development of non-invasive diagnostic options. Objective: This study aims to develop a novel methodology that combines deep learning and machine learning techniques to accurately identify and measure hepatobiliary abnormalities in liver ultrasound images. Methods: The research team expanded the dataset, consisting of ultrasound images, and used it for training deep convolutional neural networks (CNNs) such as InceptionV3, ResNet50, DenseNet121, and EfficientNetB0. A hybrid approach, combining InceptionV3 for feature extraction with a Random Forest classifier, emerged as the most accurate and stable method. A dual dichotomy classification approach was used to categorize images in two stages: healthy vs. sick, and then mild vs. severe ballooning. Features obtained from the CNNs were integrated with conventional machine learning classifiers such as Random Forest and Support Vector Machines (SVM). Results: The hybrid approach achieved an accuracy of 97.40%, an area under the curve (AUC) of 0.99, and a sensitivity of 99% for the ‘Many’ class during the third phase of evaluation. The dual dichotomy classification enhanced the sensitivity in identifying severe instances of HB. The cross-validation process confirmed the strength and reliability of the suggested models. Conclusions: These results indicate that this combination method can decrease the need for invasive liver biopsies by providing a non-invasive and precise alternative for early identification and monitoring of NAFLD and NASH. Subsequent research will prioritize the validation of these models using larger datasets from multiple centers to evaluate their generalizability and incorporation into clinical practice. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
Show Figures
Figure 1: Bar chart illustrating the class distribution in the initial and final datasets for hepatocyte ballooning detection, showing the increase in sample sizes for each class (None, Few, and Many) after dataset expansion and highlighting the persistent class imbalance despite efforts to mitigate it.
Figure 2: Visual representation of augmentation techniques applied to liver ultrasound images, showing original images (top row) and examples of offline (middle row) and online (bottom row) augmentations for each class (None, Few, and Many); each stage introduces subtle variations while preserving key diagnostic features.
Figure 3: Training and validation loss curves for the InceptionV3 model across ten folds; the blue line is the mean validation loss, the red line the mean training loss, and shaded areas indicate the range across folds, demonstrating the model's consistency and convergence behavior.
Figure 4: Schematic diagram of the feature extraction process using InceptionV3 as a feature extractor; the top row illustrates the preprocessing steps and the bottom row the feature extraction pipeline.
Figure 5: Flowchart of the dual dichotomy classification process: first distinguishing Normal from Abnormal cases, then classifying Abnormal cases into Few or Many balloon cells.
Figure 6: Validation AUC curves for InceptionV3, ResNet50, DenseNet121, and EfficientNetB0, with InceptionV3 showing superior and more stable discriminative capability across epochs.
Figure 7: Box plot of validation accuracies across ten folds for InceptionV3 and EfficientNetB0, demonstrating the superior and more consistent performance of InceptionV3.
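The hybrid approach described above, a frozen InceptionV3 used as a feature extractor feeding a Random Forest classifier, can be sketched with TensorFlow/Keras and scikit-learn. The images and labels below are random placeholders; the real pipeline uses curated liver ultrasound data and the dual dichotomy staging.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# Frozen InceptionV3 as a feature extractor (global-average-pooled bottleneck),
# followed by a classical Random Forest on the extracted features.
extractor = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3)
)
extractor.trainable = False

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return extractor.predict(x, verbose=0)       # shape (n, 2048)

# Toy stand-in for preprocessed ultrasound frames and ballooning labels.
images = np.random.rand(16, 299, 299, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=16)        # e.g. 0 = none/few, 1 = many

features = extract_features(images)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(features, labels)
print("training accuracy (toy data):", clf.score(features, labels))
```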
39 pages, 16475 KiB  
Article
Ring: A Lightweight and Versatile Cross-Platform Dynamic Programming Language Developed Using Visual Programming
by Mahmoud Samir Fayed and Yousef A. Alohali
Electronics 2024, 13(23), 4627; https://doi.org/10.3390/electronics13234627 - 23 Nov 2024
Viewed by 700
Abstract
New programming languages are often designed to keep up with technological advancements and project requirements while also learning from previous attempts and introducing more powerful expression mechanisms. However, most existing dynamic programming languages rely on English keywords and lack features that facilitate easy translation of language syntax. Additionally, maintaining multiple implementations of the same language for different platforms, such as desktops and microcontrollers, can lead to inconsistencies and fragmented features. Furthermore, they usually do not use visual programming to fully implement the compiler and virtual machine. In this research paper, we introduce Ring—a dynamically-typed language with a lightweight implementation. However, it boasts several advantages, including a rich and versatile standard library and direct support for classes and object-oriented programming. The Ring language offers customization features. For instance, it allows easy modification of the language syntax multiple times, enabling programming by writing code using Arabic, English, or other keywords. Additionally, the language permits the creation of domain-specific languages through new features that extend object-oriented programming, allowing for specialized languages resembling CSS or Supernova. In the era of the Internet of Things, instead of creating another language implementation to support microcontrollers, the same Ring implementation allows us to create projects and applications for desktops, the web, WebAssembly, Android, or Raspberry Pi Pico. The Ring Compiler and Virtual Machine are designed using the PWCT Visual Programming language based on ANSI C. The visual implementation is composed of 18,945 components that generate 24,743 lines of code, which increases the abstraction level by approximately 23.5% and hides unnecessary details. Full article
(This article belongs to the Section Computer Science & Engineering)
Show Figures
Figure 1: Some of the dynamic programming languages developed since 1960.
Figure 2: Using commands in the Supernova language to describe the application user interface.
Figure 3: The key features of the proposed dynamic language and environment.
Figure 4: The proposed system architecture.
Figure 5: Arabic syntax within a WebAssembly application developed using Ring.
Figure 6: Ring code to implement a simple domain-specific language.
Figure 7: Extending our DSL using inheritance and the GUI library.
Figure 8: Using declarative style in Ring for Raspberry Pi Pico programming.
Figure 9: The Ring IDE (code editor, form designer, etc.), developed using Ring itself.
Figure 10: Using PWCT to define the List structure, which uses a singleton cache.
Figure 11: Implementing the Ring language grammar using PWCT.
Figure 12: Using the VPL Compiler to get statistics about the visual representation.
Figure 13: Ring Virtual Machine implementation using PWCT.
Figure 14: Early users and the language used prior to Ring.
Figure 15: Feedback from students about the Ring language after a one-hour lecture.
Figure 16: Ring download statistics from SourceForge (grouped by operating system).
Figure 17: Ring download statistics from SourceForge (grouped by country).
Figure 18: A GUI application developed using the Ring language.
Figure 19: The GoldMagic800 game, a puzzle game developed using RingAllegro.
Figure 20: Visual implementation size for each module.
Figure 21: The loading time (LT) and code generation time (CGT) for each visual source file.
Figure 22: Code generation time (CGT) for large visual source files (measured in seconds).
Figure 23: Generated code size for the Ring Compiler/VM from 2016 to 2024.
Figure 24: Code size for the Lua Compiler/VM from 1993 to 2024.
Figure 25: Generated code size for the Ring Compiler/VM from Ring 1.0.0 to Ring 1.21.2.
Figure 26: Function call (100 M) benchmark for Ring, VFP, and Python.
Figure 27: Different frames from the waving cubes animation.