
Search Results (7,768)

Search Parameters:
Keywords = automatic processing

21 pages, 6531 KiB  
Article
Inverse Synthetic Aperture Radar Image Multi-Modal Zero-Shot Learning Based on the Scattering Center Model and Neighbor-Adapted Locally Linear Embedding
by Xinfei Jin, Hongxu Li, Xinbo Xu, Zihan Xu and Fulin Su
Remote Sens. 2025, 17(4), 725; https://doi.org/10.3390/rs17040725 - 19 Feb 2025
Abstract
Inverse Synthetic Aperture Radar (ISAR) images are extensively used in Radar Automatic Target Recognition (RATR) for non-cooperative targets. However, acquiring training samples for all target categories is challenging. Recognizing target classes without training samples is called Zero-Shot Learning (ZSL). When ZSL involves multiple modalities, it becomes Multi-modal Zero-Shot Learning (MZSL). To achieve MZSL, a framework is proposed for generating ISAR images with optical image aiding. The process begins by extracting edges from optical images to capture the structure of ship targets. These extracted edges are used to estimate the potential locations of the target’s scattering centers. Using the Geometric Theory of Diffraction (GTD)-based scattering center model, the edges’ ISAR images are generated from the scattering centers. Next, a mapping is established between the edges’ ISAR images and the actual ISAR images. Neighbor-Adapted Local Linear Embedding (NALLE) generates pseudo-ISAR images for the unseen classes by combining the edges’ ISAR images with the actual ISAR images from the seen classes. Finally, these pseudo-ISAR images serve as training samples, enabling the recognition of test samples. In contrast to the network-based approaches, this method requires only a limited number of training samples. Experiments based on simulated and measured data validate the effectiveness. Full article
(This article belongs to the Section Remote Sensing Image Processing)
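The NALLE step described above can be sketched as a standard locally-linear-embedding reconstruction: solve for neighbor weights in edge-ISAR space, then reuse those weights in actual-ISAR space. This is a minimal sketch; the function names and the regularization constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lle_weights(query, neighbors, reg=1e-3):
    """Solve for the weights (summing to one) that best reconstruct
    `query` from the rows of `neighbors`, as in locally linear embedding."""
    diffs = neighbors - query                   # (k, d) local differences
    G = diffs @ diffs.T                         # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(G))  # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                          # enforce the sum-to-one constraint

def synthesize_pseudo_isar(edge_query, edge_neighbors, isar_neighbors):
    """Reuse the reconstruction weights found in edge-image space to combine
    the seen classes' actual ISAR images into a pseudo-ISAR image."""
    w = lle_weights(edge_query, edge_neighbors)
    return w @ isar_neighbors
```

The appeal of this scheme, as the abstract notes, is that it needs no network training: only a small set of seen-class image pairs.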
17 pages, 1463 KiB  
Article
Interpretable Probabilistic Identification of Depression in Speech
by Stavros Ntalampiras
Sensors 2025, 25(4), 1270; https://doi.org/10.3390/s25041270 - 19 Feb 2025
Abstract
Mental health assessment is typically carried out via a series of conversation sessions with medical professionals, where the overall aim is the diagnosis of mental illnesses and well-being evaluation. Despite its arguable socioeconomic significance, national health systems fail to meet the increased demand for such services that has been observed in recent years. To assist and accelerate the diagnosis process, this work proposes an AI-based tool able to provide interpretable predictions by automatically processing the recorded speech signals. An explainability-by-design approach is followed, where audio descriptors related to the problem at hand form the feature vector (Mel-scaled spectrum summarization, Teager operator and periodicity description), while modeling is based on Hidden Markov Models adapted from an ergodic universal one following a suitably designed data selection scheme. After extensive and thorough experiments adopting a standardized protocol on a publicly available dataset, we report significantly higher results with respect to the state of the art. In addition, an ablation study was carried out, providing a comprehensive analysis of the relevance of each system component. Last but not least, the proposed solution not only provides excellent performance, but its operation and predictions are transparent and interpretable, laying out the path to close the usability gap existing between such systems and medical personnel. Full article
(This article belongs to the Special Issue Advances in Acoustic Sensors and Deep Audio Pattern Recognition)
Show Figures

Figure 1: MAP-based adaptation of the k-th component of model M using class-specific observations R.
Figure 2: The topologies, including the transition probabilities, of the HMMs constructed to address the Interview and Reading tasks.
Figure 3: Effect of the number of HMM states on the F1-score for the Reading and Interview tasks.
Figure 4: The probabilities output by the UBM-HMM on recordings representing both Healthy and Control subjects with respect to the Reading and Interview tasks.
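The adaptation of a universal model toward class-specific observations (Figure 1) follows the standard MAP relevance-factor update for a Gaussian mean: a data-count-weighted blend of the prior mean and the sample mean. The relevance factor `tau` and the function name are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def map_adapt_mean(mu0, obs, tau=10.0):
    """MAP adaptation of a Gaussian mean from a universal background model
    toward class-specific observations (rows of `obs`).
    `tau` is the relevance factor: larger values trust the prior more."""
    n = len(obs)
    xbar = obs.mean(axis=0)       # class-specific sample mean
    alpha = n / (n + tau)         # data weight grows with the observation count
    return alpha * xbar + (1 - alpha) * mu0
```

With few observations the adapted mean stays near the universal model; with many it converges to the class sample mean, which is what makes the scheme usable on small clinical datasets.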
25 pages, 5090 KiB  
Article
Research on Intelligent Verification of Equipment Information in Engineering Drawings Based on Deep Learning
by Zicheng Zhang and Yurou He
Electronics 2025, 14(4), 814; https://doi.org/10.3390/electronics14040814 - 19 Feb 2025
Abstract
This paper focuses on the crucial task of automatic recognition and understanding of table structures in engineering drawings and document processing. Given the importance of tables in information display and the urgent need for automated processing of tables in the digitalization process, an intelligent verification method is proposed. This method integrates multiple key techniques: YOLOv10 is used for table object recognition, achieving a precision of 0.891, a recall rate of 0.899, mAP50 of 0.922, and mAP50-95 of 0.677 in table recognition, demonstrating strong target detection capabilities; the improved LORE algorithm is adopted to extract table structures, breaking through the limitations of the original algorithm by segmenting large-sized images, with a table extraction accuracy rate reaching 91.61% and significantly improving the accuracy of handling complex tables; RapidOCR is utilized to achieve text recognition and cell correspondence, solving the problem of text-cell matching; for equipment name semantic matching, a method based on BERT is introduced and calculated using a comprehensive scoring method. Meanwhile, an improved cuckoo search algorithm is proposed to optimize the adjustment factors, avoiding local optima through sine optimization and the catfish effect. Experiments show the accuracy of equipment name matching in semantic similarity calculation approaches 100%. Finally, the paper provides a concrete system practice to prove the effectiveness of the algorithm. In conclusion, through experimental comparisons, this method exhibits excellent performance in table area location, structure recognition, and semantic matching and is of great significance and practical value in advancing table data processing technology in engineering drawings. Full article
(This article belongs to the Section Artificial Intelligence)
Show Figures

Figure 1: The framework of intelligent verification methods.
Figure 2: The framework of YOLOv10.
Figure 3: Illustration of the improved LORE algorithm.
Figure 4: First–last layer average pooling.
Figure 5: Improved cuckoo search algorithm flowchart.
Figure 6: Iteration curve of algorithm training effectiveness.
Figure 7: Display of recognition results.
Figure 8: Comparison of the recognition process of this paper's algorithm with the original LORE algorithm.
Figure 9: Iteration curves of functions F1–F3 for CS and ICS.
Figure 10: Iteration curves of the three algorithms.
Figure 11: Schematic diagram of the system process, model, and components.
Figure 12: Matching result system screenshot.
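For context, a plain cuckoo search, the baseline the paper improves on, can be sketched as follows: Lévy-flight moves toward the best nest, greedy replacement, and random abandonment of a fraction `pa` of nests. The sine-optimization and catfish-effect modifications are not reproduced, and all parameters are illustrative.

```python
import math
import numpy as np

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, bounds=(-5.0, 5.0), seed=0):
    """Minimal cuckoo-search sketch minimizing f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    best, best_val = nests[fit.argmin()].copy(), float(fit.min())
    beta = 1.5  # Lévy exponent; sigma from Mantegna's algorithm
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(iters):
        # Lévy-flight step, scaled toward the current best nest
        u = rng.normal(0.0, sigma, size=(n_nests, dim))
        v = rng.normal(size=(n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)
        new = np.clip(nests + 0.01 * step * (nests - best), lo, hi)
        new_fit = np.apply_along_axis(f, 1, new)
        better = new_fit < fit                  # greedy replacement
        nests[better], fit[better] = new[better], new_fit[better]
        # abandon a fraction pa of nests and rebuild them at random
        drop = rng.random(n_nests) < pa
        if drop.any():
            nests[drop] = rng.uniform(lo, hi, size=(int(drop.sum()), dim))
            fit[drop] = np.apply_along_axis(f, 1, nests[drop])
        if fit.min() < best_val:
            best, best_val = nests[fit.argmin()].copy(), float(fit.min())
    return best, best_val
```

The paper's improvements target exactly the weaknesses visible here: the fixed step scaling and purely random abandonment, which can stall in local optima.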
26 pages, 29509 KiB  
Article
MangiSpectra: A Multivariate Phenological Analysis Framework Leveraging UAV Imagery and LSTM for Tree Health and Yield Estimation in Mango Orchards
by Muhammad Munir Afsar, Muhammad Shahid Iqbal, Asim Dilawar Bakhshi, Ejaz Hussain and Javed Iqbal
Remote Sens. 2025, 17(4), 703; https://doi.org/10.3390/rs17040703 - 19 Feb 2025
Abstract
Mango (Mangifera indica L.), a key horticultural crop, particularly in Pakistan, has been primarily studied locally using low- to medium-resolution satellite imagery, usually focusing on a particular phenological stage. The large canopy size, complex tree structure, and unique phenology of mango trees further accentuate the intrinsic challenges posed by low-spatiotemporal-resolution data. The absence of mango-specific vegetation indices compounds the problem of accurate health classification and yield estimation at the tree level. To overcome these issues, this study utilizes high-resolution multi-spectral UAV imagery collected from two mango orchards in Multan, Pakistan, throughout the annual phenological cycle. It introduces MangiSpectra, an integrated two-staged framework based on Long Short-Term Memory (LSTM) networks. In the first stage, nine conventional and three mango-specific vegetation indices derived from UAV imagery were processed through fine-tuned LSTM networks to classify the health of individual mango trees. In the second stage, associated data such as the trees' age, variety, canopy volume, height, and weather data were combined with the predicted health classes for yield estimation through a decision tree algorithm. Three mango-specific indices, namely the Mango Tree Yellowness Index (MTYI), Weighted Yellowness Index (WYI), and Normalized Automatic Flowering Detection Index (NAFDI), were developed to measure the degree of canopy covered by flowers and to enhance the robustness of the framework. In addition, a Cumulative Health Index (CHI) derived from imagery analysis after every flight is also proposed for proactive orchard management. MangiSpectra outperformed the comparative benchmarks of AdaBoost and Random Forest in health classification by achieving 93% accuracy and AUC scores of 0.85, 0.96, and 0.92 for the healthy, moderate, and weak classes, respectively. Yield estimation accuracy was reasonable, with R² = 0.21 and RMSE = 50.18.
Results underscore MangiSpectra’s potential as a scalable precision agriculture tool for sustainable mango orchard management, which can be improved further by fine-tuning algorithms using ground-based spectrometry, IoT-based orchard monitoring systems, computer vision-based counting of fruit on control trees, and smartphone-based data collection and insight dissemination applications. Full article
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
Show Figures

Graphical abstract
Figure 1: The study area in Multan, Punjab, Pakistan. The main experimental site, Orchard 1 (outlined in red), covers 45 acres and contains 1305 trees; the validation site, Orchard 2 (outlined in yellow), spans 55 acres with 1833 trees.
Figure 2: Mango yield estimates across varieties, age groups, and health conditions.
Figure 3: Overview of the four-staged integrated MangiSpectra framework for tree-level health and yield estimation.
Figure 4: Unsegmented tree canopies of the same group of 12 mango trees over different flight dates in Orchard 1, showing the effect of underlying vegetation (left: RGB image; right: normalized GNDVI).
Figure 5: Cumulative trend of key vegetation indices across phenological stages at Orchard 1.
Figure 6: Progression of flowering from March to April 2024 as detected by the MTYI, WYI, and NAFDI on the canopy of the same tree.
Figure 7: Sample per-flight health classification and in-season farming intervention recommendations for Orchard 1 on 24 March 2024, during the flowering stage.
Figure 8: Utilization of the LSTM component within the MangiSpectra framework for health classification and yield estimation.
Figure 9: Key performance metrics of the LSTM model for tree health classification: training and test accuracy over epochs, confusion matrix, per-class ROC curves, and F1 score, accuracy, and class distribution.
Figure 10: Analysis of tree health in the orchard using the MangiSpectra framework: age distributions by health status, model accuracy comparison, and model agreement.
Figure 11: Spatial distribution of tree health as estimated by MangiSpectra for Orchard 1: healthy (green, 639 trees), moderate (yellow, 405 trees), and weak (red, 261 trees), over an interpolated health heat map.
Figure 12: Comparison of actual yield with MangiSpectra and Random Forest predictions, and correlation of estimated yield with tree age.
Figure 13: Spatial distribution of yield estimates in Orchard 1; circle sizes correspond to normalized yield and colors to the yield estimate from low (red) to high (green).
Figure 14: Spatial distribution of health classification over Orchard 2.
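The mango-specific indices are built on the familiar normalized-difference template behind indices such as GNDVI. Since the abstract does not publish the MTYI/WYI/NAFDI formulas, the sketch below shows only that generic template, plus how per-flight index summaries might be stacked into an LSTM-ready sequence; all names, bands, and shapes are assumptions.

```python
import numpy as np

def normalized_index(band_a, band_b, eps=1e-9):
    """Generic normalized-difference index, (a - b) / (a + b); the specific
    band choices for MTYI/WYI/NAFDI are not given in the abstract."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b + eps)

def flight_sequence(per_flight_index_maps):
    """Summarize each flight's index maps for one tree canopy into a
    (T, F) sequence (T flights, F indices) for an LSTM-style classifier."""
    return np.array([[float(np.mean(m)) for m in flight]
                     for flight in per_flight_index_maps])
```

Stacking the whole phenological cycle into one sequence per tree is what lets the LSTM exploit temporal trends rather than a single-stage snapshot.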
19 pages, 4816 KiB  
Article
Thickness Model of the Adhesive on Spacecraft Structural Plate
by Yanhui Guo, Peibo Li, Yanpeng Chen, Xinfu Chi and Yize Sun
Aerospace 2025, 12(2), 159; https://doi.org/10.3390/aerospace12020159 - 19 Feb 2025
Abstract
This paper establishes a physical model for the non-contact rotary screen coating process based on a spacecraft structural plate and proposes a theoretical expression for the adhesive thickness of the non-contact rotary screen coating. The thickness of the adhesive is a critical factor influencing the quality of the optical solar reflector (OSR) adhesion. The thickness of the adhesive layer depends on the equivalent fluid height and the ratio of the fluid flow rate to the squeegee speed below the squeegee. When the screen and fluid remain constant, the fluid flow rate below the squeegee depends on the pressure at the tip of the squeegee. The pressure is also a function related to the deformation characteristics and speed of the squeegee. Based on the actual geometric shape of the wedge-shaped squeegee, the analytical expression for the vertical displacement of the squeegee is obtained as the actual boundary of the flow field. The analytical expression for the deformation angle of the squeegee is used to solve the contact length between the squeegee and the rotary screen. It reduces the calculation difficulty compared with the previous method. Based on the theory of rheology and fluid mechanics, the velocity distribution of the fluid under the squeegee and the expression of the dynamic pressure at the tip of the squeegee were obtained. The dynamic pressure at the tip of the squeegee is a key factor for the adhesive to pass through the rotary screen. According to the continuity equation of the fluid, the theoretical thickness expression of the non-contact rotary screen coating is obtained. The simulation and experimental results show that the variation trend of coating thickness with the influence of variables is consistent. 
Experimental and simulation errors compared to theoretical values are less than 5%, which proves the rationality of the theoretical expression of the non-contact rotary screen coating thickness under the condition of considering the actual squeegee deformation. The existence of differences proves that a small part of the colloid remains on the rotary screen during the colloid transfer process. The expression parameterizes the rotary screen coating model and provides a theoretical basis for the design of automatic coating equipment. Full article
(This article belongs to the Section Astronautics & Space Science)
Show Figures

Figure 1: The actuator of the non-contact rotary screen coating process with measurement function.
Figure 2: The geometric model of the squeegee.
Figure 3: FEM contours of the deformed configurations of the squeegee.
Figure 4: Deformation curves of the lower edge of the wedge part for the FEM and the analytical solution at 40 N and 60 N, 75° and 85°.
Figure 5: Schematic of the flow-field division for the rotary screen printing process (X_aq: accumulated length of adhesive in front of the squeegee; V_p: adhesive flow speed under the squeegee; V_s: printing speed of the squeegee and rotary screen; H_d: adhesive layer thickness; H_s: rotary screen thickness).
Figure 6: Flow-field model in front of the squeegee.
Figure 7: Velocity distribution in the flow field under different non-dimensional pressure gradients.
Figure 8: Partial schematic diagram and flow area of the rotary screen.
Figure 9: Dynamic pressure at the squeegee tip as a function of force, angle, elastic modulus, and coating speed.
Figure 10: Deposited thickness in relation to the thickness under the squeegee.
Figure 11: Variations in thickness H_d with angle, force F, screen thickness H_s, and elastic modulus E for different power-law indices n.
Figure 12: Variation in viscosity with shear rate and the fitted equation.
Figure 13: CFD simulation of rotary screen printing: boundary conditions, grid division, phase volume fractions, total pressure field, and velocity streamlines.
Figure 14: Photo and scanning contours of rotary screen printing (coating test and scanned adhesive layer).
Figure 15: Comparison of the thickness H_d obtained by the approximate analytical solution, experiment, and CFD, varying force and angle.
Figure 16: Experimental and simulation errors compared to theoretical values.
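The continuity argument, deposited thickness as the ratio of fluid flow rate under the squeegee to printing speed, and the power-law rheology can be illustrated with a minimal sketch. The open-area scaling factor and all parameter values are assumptions for illustration, not the paper's full theoretical expression.

```python
def power_law_viscosity(shear_rate, K, n):
    """Apparent viscosity of a power-law (Ostwald-de Waele) fluid,
    eta = K * gamma**(n - 1); shear-thinning when n < 1."""
    return K * shear_rate ** (n - 1)

def deposited_thickness(flow_per_width, squeegee_speed, open_area_fraction=1.0):
    """Continuity-based estimate: the volume flux carried under the squeegee
    per unit width, divided by the printing speed, gives an equivalent
    wet-film height; the screen's open-area fraction scales how much of it
    is actually transferred through the mesh."""
    return open_area_fraction * flow_per_width / squeegee_speed
```

An open-area fraction below 1 models the abstract's observation that part of the colloid remains on the rotary screen during transfer.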
19 pages, 7568 KiB  
Article
Intelligent Analysis of Flow Field in Cleaning Chamber for Combine Harvester Based on YOLOv8 and Reasoning Mechanism
by Qinglin Li, Ruihai Wan, Zhaoyue Wu, Yuting Yan and Xihan Zhang
Appl. Sci. 2025, 15(4), 2200; https://doi.org/10.3390/app15042200 - 19 Feb 2025
Abstract
As the main working part of a combine harvester, the cleaning device determines the cleaning performance of the machine, and simulation of the flow field in the cleaning chamber has become an important part of its design. Currently, post-processing analysis of flow field simulations still relies on researchers' experience, which is difficult to describe and disseminate, so post-processing information cannot be obtained automatically. This paper presents an intelligent method for analyzing simulation result data based on an object detection algorithm and a reasoning mechanism. YOLOv8, a deep learning object detection algorithm, was selected to identify key-point data in the flow field of a cleaning chamber. First, the training dataset was constructed via scatter-plot drawing, data enhancement, random screening, and other techniques. Then, the flow field in the cleaning chamber was divided into six key areas by identifying the key points of the flow field; the reasonable wind velocity in each area was analyzed, and the grain cleaning results were obtained using a reasoning mechanism based on rules and examples. Finally, a system implementing this method was built in Python 3.10. With this method and system, the flow field characteristics in a cleaning chamber and the effect of wind on the cleaning result can be obtained automatically, given the physical properties of the crop, the geometric parameters of the cleaning chamber, and the working parameters of the machine. Full article
Show Figures

Figure 1

Figure 1
<p>Technical route of intelligent analysis of wind field characteristics in cleaning chamber.</p>
Full article ">Figure 2
<p>Cleaning device. (<b>A</b>) Combine harvester cleaning device; (<b>B</b>) simplified geometric model of the cleaning device, where a is the air inlet of the cleaning chamber, b is the upper airflow guide plate, c is the lower airflow guide plate, d is the louver sieve, e is the mesh sieve, and f is the air outlet of the cleaning chamber.</p>
Full article ">Figure 3
<p>Example diagram of wall classification. (<b>a</b>) Air outlet of cleaning chamber, (<b>b</b>) Air inlet of cleaning chamber, (<b>c</b>) little_wall of cleaning chamber, (<b>d</b>) middle_wall of cleaning chamber, (<b>e</b>) big_wall of cleaning chamber.</p>
Full article ">Figure 4
<p>Example diagram of grid partitioning. (<b>a</b>) Mesh sieve local, (<b>b</b>) louver sieve local, (<b>c</b>) cleaning chamber wall local, and (<b>d</b>) cleaning chamber overview.</p>
Full article ">Figure 5
<p>Key-cross-section scatter and key-point position diagram. (<b>a</b>) Slice thickness of 100 mm, (<b>b</b>) slice thickness of 20 mm, (<b>c</b>) slice thickness of 10 mm, (<b>d</b>) slice thickness of 2 mm, and (<b>e</b>) example diagram of key-point location, where ① is the front endpoint of the upper-sieve surface, ② is the rear endpoint of the upper-sieve surface, ③ is the front endpoint of the lower-sieve surface, and ④ is the rear endpoint of the lower-sieve surface.</p>
Full article ">Figure 6
<p>Schematic diagram of division of regional locations.</p>
Full article ">Figure 7
<p>Reasoning process of flow field detection area division in cleaning chamber. (<b>a</b>) Identification of key points in cleaning chamber and (<b>b</b>) division of cleaning chamber by key points.</p>
Full article ">Figure 8
<p>Post-processing conclusion reasoning process. Meaning of key variables: X is the position coordinate of the section, and the position information of the section. Csv-name indicates the name of the file uploaded by the user, which contains only the file name without the path. V<sub>①</sub> is the estimated wind velocity in the front section above the upper-sieve surface (area No. 1). V<sub>②</sub> is the estimated wind velocity in the middle section above the upper-sieve surface (area No. 2). V<sub>③</sub> is the estimated wind velocity in the rear section above the upper-sieve surface (area No. 3). V<sub>④</sub> is the estimated wind velocity of the whole section under the lower-sieve surface (area No. 4). V<sub>⑤</sub> is the estimated wind velocity near the inlet of the cleaning chamber (area No. 5). V<sub>⑥</sub> is the estimated wind velocity near the grain collection site in the cleaning chamber (area No. 6). EVA is the evaluation of the rationality of the airflow distribution on the upper-sieve surface. EVA<sub>①</sub> is the evaluation of the wind velocity in area No. 1. EVA<sub>②</sub> is the evaluation of the wind velocity in area No. 2. EVA<sub>③</sub> is the evaluation of wind velocity in area No. 3. COM<sub>1</sub> is the comparison result of wind velocity between area No. 1 and area No. 2. COM<sub>2</sub> is the comparison result of wind velocity between area No. 2 and area No. 3. COM<sub>3</sub> is the comparison result of wind velocity between area No. 1 and 1.5 times area No. 2. COM<sub>4</sub> is the result of the effect of wind velocity on particle fall in area No. 4. CON<sub>1</sub> is the influence result of wind velocity in area No. 1. CON<sub>2</sub> is the influence result of wind velocity in area No. 2. CON<sub>3</sub> is the influence result of wind velocity in region area No. 3. CON<sub>4</sub> is the influence result of the eddy in region area No. 5. JUD<sub>1</sub> is the result of the existence or absence of an eddy in area No. 5. 
JUD<sub>2</sub> is the result of the influence of wind velocity area on material transfer in area No. 5. JUD<sub>3</sub> is the result of the existence or absence of an eddy in area No. 5. JUD<sub>4</sub> is the result of the existence or absence of an eddy in area No. 6. JUD<sub>5</sub> is the result of influence of wind velocity on material transfer in area No. 6.</p>
Full article ">Figure 9
<p>The loss curves and mAP curves of the model training process. The <span class="html-italic">x</span>-axis is the number of iteration steps. (<b>a</b>) The loss curves and mAP curves of the YOLOv8n model training process, (<b>b</b>) the loss curves and mAP curves of the YOLOv8s model training process, (<b>c</b>) the loss curves and mAP curves of the YOLOv8m model training process, (<b>d</b>) the loss curves and mAP curves of the YOLOv8l model training process, and (<b>e</b>) the loss curves and mAP curves of the YOLOv8x model training process.</p>
Figure 9 Cont.">
Full article ">Figure 10
<p>Test results of YOLOv8l model.</p>
Full article ">
18 pages, 5623 KiB  
Article
Detection of Personality Traits Using Handwriting and Deep Learning
by Daniel Gagiu and Dorin Sendrescu
Appl. Sci. 2025, 15(4), 2154; https://doi.org/10.3390/app15042154 - 18 Feb 2025
Viewed by 172
Abstract
A number of studies have shown a link between handwriting and a person’s personality traits. Numerous fields require a psychological assessment of individuals and need to determine personality traits faster and more efficiently than with classic questionnaires or manual graphological analysis. The development of image processing and recognition algorithms based on machine learning and deep neural networks has led to a series of applications in the field of graphology. In the present study, a system for automatically extracting handwriting characteristics from written documents and correlating them with the Myers–Briggs type indicator is implemented. The system has a three-level architecture, the main level being formed by four convolutional neural networks. To train the networks, a database with different types of handwriting was created. The experimental results show an accuracy between 89% and 96% for handwriting feature recognition and between 83% and 91% for determining Myers–Briggs indicators. Full article
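The preprocessing stage behind such systems (see Figure 2, “Vertical projection”) typically segments handwriting by column-wise ink density. A minimal pure-Python sketch of that idea — the paper’s own pipeline is in MATLAB, and the function names and valley-splitting rule here are assumptions:

```python
def vertical_projection(binary_img):
    """Column-wise ink counts of a binary image (rows of 0/1, where 1 = ink)."""
    return [sum(col) for col in zip(*binary_img)]

def split_at_valleys(profile):
    """Split column indices into runs of non-empty columns (candidate characters)."""
    segments, start = [], None
    for i, value in enumerate(profile):
        if value > 0 and start is None:
            start = i                       # a new ink run begins
        elif value == 0 and start is not None:
            segments.append((start, i - 1))  # an ink run ends at a blank column
            start = None
    if start is not None:
        segments.append((start, len(profile) - 1))
    return segments
```

Each `(start, end)` column range would then be cropped and fed to one of the convolutional networks as a character or feature candidate.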
(This article belongs to the Special Issue Deep Learning for Signal Processing Applications-2nd Edition)
Show Figures

Figure 1
<p>Proposed architecture—overview.</p>
Full article ">Figure 2
<p>Vertical projection.</p>
Full article ">Figure 3
<p>Structure of deep convolutional network.</p>
Full article ">Figure 4
<p>Determining Myers–Briggs indicators.</p>
Full article ">Figure 5
<p>Software implementation (MATLAB R2021b) of CNN (1 input layer, 3 hidden layers, and 1 output layer).</p>
Full article ">Figure 6
<p>Training progress for baseline analysis.</p>
Full article ">Figure 7
<p>Handwriting sample text written by one of the subjects.</p>
Full article ">
11 pages, 3085 KiB  
Article
Development of a Practical Surface Image Flowmeter for Small-Sized Streams
by Kwonkyu Yu, Junhyeong Lee and Byungman Yoon
Water 2025, 17(4), 586; https://doi.org/10.3390/w17040586 - 18 Feb 2025
Viewed by 117
Abstract
The purpose of this study was to demonstrate the series of processes involved in designing, manufacturing, installing, and operating a practical Surface Image Flowmeter (SIF) system, complete with suitable hardware and software. By ‘practical’, we mean a system capable of automatically measuring discharges in a river 24 h a day, 365 days a year, at 2 min intervals. The equipment required for this practical SIF includes a CCTV camera, a water level gauge, a Linux-based PC for analysis, and lighting for night-time measurements. We also developed software to operate the system. Furthermore, we applied a coordinate transformation method using projective transformation to calculate the area of the measurement cross-section according to changes in water level and to adjust the positions of velocity analysis points within the image. The CCTV captured 20 s video clips every 2 min, which were then analyzed using the Spatio-Temporal Image Velocimetry (STIV) method. For the STIV method, measurement points were set at appropriate intervals on the measurement cross-section, and spatio-temporal images (STIs) were constructed at these points for analysis. The STIs were captured parallel to the main flow direction (perpendicular to the cross-section), and the resulting STIs were analyzed using the hybrid STIV method to calculate the discharge. When the constructed SIF system was tested in a steep-sloped channel at the Andong River Experiment Center, the velocity distribution showed a difference of less than 9% compared to measurements from a traditional current meter, and the discharge showed a difference of around 10% compared to measurements from an Acoustic Doppler Current Profiler (ADCP). Full article
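The coordinate transformation described above maps physical cross-section coordinates into image coordinates with a projective (homography) transform. A minimal sketch, assuming a known 3×3 matrix `H` — in practice it would be estimated from at least four surveyed ground control points:

```python
def apply_homography(H, x, y):
    """Map a physical point (x, y) into image pixel coordinates via a
    3x3 projective matrix H (homogeneous coordinates, then dehomogenize)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```

When the water level changes, the physical coordinates of the measurement cross-section are re-projected with this transform so that the velocity analysis points track the new water surface in the image.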
Show Figures

Figure 1
<p>Coordinate transform between physical coordinates and image coordinates [<a href="#B15-water-17-00586" class="html-bibr">15</a>].</p>
Full article ">Figure 2
<p>Variation in measurement points in the image according to the change in water level.</p>
Full article ">Figure 3
<p>Measurement points and main flow direction [<a href="#B14-water-17-00586" class="html-bibr">14</a>].</p>
Full article ">Figure 4
<p>Making an STI from an STV with image rotation [<a href="#B14-water-17-00586" class="html-bibr">14</a>].</p>
Full article ">Figure 5
<p>Comparison of results between C-STIV (correlation) and F-STIV (FFT) [<a href="#B16-water-17-00586" class="html-bibr">16</a>].</p>
Full article ">Figure 6
<p>Velocity distribution verification experiment results (Andong Experiment Station, 11–12 June 2024).</p>
Full article ">Figure 7
<p>Surface Image Flowmeter system installed at the Janghangcheon, Gyeongsangbuk-do, Korea.</p>
Figure 7 Cont.">
Full article ">
21 pages, 1850 KiB  
Review
Deep Learning for Automatic Detection of Volcanic and Earthquake-Related InSAR Deformation
by Xu Liu, Yingfeng Zhang, Xinjian Shan, Zhenjie Wang, Wenyu Gong and Guohong Zhang
Remote Sens. 2025, 17(4), 686; https://doi.org/10.3390/rs17040686 - 18 Feb 2025
Viewed by 179
Abstract
Interferometric synthetic aperture radar (InSAR) technology plays a crucial role in monitoring surface deformation and has become widely used in volcanic and earthquake research. With the rapid advancement of satellite technology, InSAR now generates vast volumes of deformation data. Deep learning has revolutionized data analysis, offering exceptional capabilities for processing large datasets. Leveraging these advancements, automatic detection of volcanic and earthquake deformation from extensive InSAR datasets has emerged as a major research focus. In this paper, we first introduce several representative deep learning architectures commonly used in InSAR data analysis, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and Transformer networks. Each architecture offers unique advantages for addressing the challenges of InSAR data. We then systematically review recent progress in the automatic detection and identification of volcanic and earthquake deformation signals from InSAR images using deep learning techniques. This review highlights two key aspects: the design of network architectures and the methodologies for constructing datasets. Finally, we discuss the challenges in automatic detection and propose potential solutions. This study aims to provide a comprehensive overview of the current applications of deep learning for extracting InSAR deformation features, with a particular focus on earthquake and volcanic monitoring. Full article
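The geometric data augmentation this review discusses (cf. Figure 4a) can be sketched with plain-Python patch operations — a toy stand-in for the image libraries such pipelines actually use:

```python
def rotate90(patch):
    """Rotate a 2D patch (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*patch[::-1])]

def hflip(patch):
    """Mirror a patch left-to-right."""
    return [row[::-1] for row in patch]

def augment(patch):
    """Geometric variants of one interferogram patch: identity, mirror,
    and the three non-trivial 90-degree rotations."""
    variants = [patch, hflip(patch)]
    rotated = patch
    for _ in range(3):
        rotated = rotate90(rotated)
        variants.append(rotated)
    return variants
```

Each labeled deformation patch thus yields five training samples, which matters when positive examples (real earthquakes or unrest episodes) are scarce.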
Show Figures

Figure 1
<p>InSAR data processing based on deep learning. (<b>a</b>) The primary deep learning architectures utilized in InSAR data processing, including CNNs, RNNs, GANs, and Transformers. (<b>b</b>) DL is applied to various stages of InSAR data processing, including deformation detection, atmospheric correction, phase filtering, and phase unwrapping.</p>
Full article ">Figure 2
<p>The main architectures of CNNs, RNNs, GANs, and Transformer networks. (<b>a</b>) CNNs primarily consist of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. (<b>b</b>) RNNs consist of input layers, recurrent hidden layers, and an output layer for sequence tasks. (<b>c</b>) GANs consist of a generator and a discriminator, which are trained together in a competitive manner. (<b>d</b>) Transformers consist of an encoder and a decoder, both using self-attention and feed-forward layers.</p>
Full article ">Figure 3
<p>Different learning processes between traditional machine learning and transfer learning. (<b>a</b>) Traditional machine learning approaches learn each task independently, starting from scratch. (<b>b</b>) Transfer learning utilizes knowledge gained from previous tasks and applies it to a target task.</p>
Full article ">Figure 4
<p>Data augmentation methods. (<b>a</b>) Geometric transformation-based data augmentation involves techniques like zoom, rotation, mirroring, and flipping to expand the training datasets. (<b>b</b>) Pixel-level transformation-based data augmentation modifies individual pixel values, such as brightness, contrast, and color, to enhance the datasets. (<b>c</b>) Filtering-based data augmentation involves applying filters like blurring, sharpening, and noise to diversify the training datasets. The original InSAR interferogram data were downloaded from the COMET-LiCS Sentinel-1 InSAR portal (<a href="https://comet.nerc.ac.uk/comet-lics-portal/" target="_blank">https://comet.nerc.ac.uk/comet-lics-portal/</a> (accessed on 1 December 2024)).</p>
Full article ">
19 pages, 1349 KiB  
Article
Effective Machine Learning Techniques for Non-English Radiology Report Classification: A Danish Case Study
by Alice Schiavone, Lea Marie Pehrson, Silvia Ingala, Rasmus Bonnevie, Marco Fraccaro, Dana Li, Michael Bachmann Nielsen and Desmond Elliott
AI 2025, 6(2), 37; https://doi.org/10.3390/ai6020037 - 17 Feb 2025
Viewed by 161
Abstract
Background: Machine learning methods for clinical assistance require a large number of annotations from trained experts to achieve optimal performance. Previous work in natural language processing has shown that it is possible to automatically extract annotations from the free-text reports associated with chest X-rays. Methods: This study investigated techniques to extract 49 labels in a hierarchical tree structure from chest X-ray reports written in Danish. The labels were extracted from approximately 550,000 reports by performing multi-class, multi-label classification using a method based on pattern-matching rules, a classic approach in the literature for solving this task. The performance of this method was compared to that of open-source large language models that were pre-trained on Danish data and fine-tuned for classification. Results: Methods developed for English were also applicable to Danish and achieved similar performance (a weighted F1 score of 0.778 on 49 findings). A small set of expert annotations was sufficient to achieve competitive results, even with an unbalanced dataset. Conclusions: Natural language processing techniques provide a promising alternative to human expert annotation when annotations of chest X-ray reports are needed. Large language models can outperform traditional pattern-matching methods. Full article
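The pattern-matching baseline described above (illustrated in the paper’s Figure 2) can be sketched as a regex labeler with simple negation scoping. The two rules and three Danish negation cues below are illustrative assumptions — the study’s actual rule set covers 49 findings and is far richer:

```python
import re

# Illustrative only: the real system uses many more rules and negation cues.
NEGATION_CUES = {"ingen", "ikke", "uden"}

RULES = {
    "Infiltrate": re.compile(r"infiltrat", re.IGNORECASE),
    "PleuralEffusion": re.compile(r"pleurae?ffusion", re.IGNORECASE),
}

def label_report(text):
    """Map each finding to 1 (positive mention), 0 (negated mention),
    or '-' (not mentioned)."""
    labels = {}
    for finding, pattern in RULES.items():
        match = pattern.search(text)
        if match is None:
            labels[finding] = "-"
            continue
        # Negation scope: cue words in the same clause, before the match.
        clause = re.split(r"[.,;]", text[: match.start()])[-1].lower()
        negated = any(tok in NEGATION_CUES for tok in clause.split())
        labels[finding] = 0 if negated else 1
    return labels
```

Restricting the negation check to the current clause keeps “Ingen pleuraeffusion. Infiltrat …” from letting the first sentence’s negation leak onto the second finding.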
(This article belongs to the Section Medical & Healthcare AI)
Show Figures

Figure 1
<p>A total of 547,758 Danish chest X-ray reports were collected from a major hospital network in Denmark, covering 11 sites. The reports were subdivided into two sets: the first was annotated by <span class="html-italic">RegEx</span> rules (<math display="inline"><semantics> <mrow> <mi>R</mi> <mi>E</mi> </mrow> </semantics></math>) and formed the <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math> (rule-based labels) dataset; the second and smaller set was manually labeled by expert human annotators, forming the <math display="inline"><semantics> <mrow> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math> set (human expert labels). The <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math> set was used to fine-tune a <span class="html-italic">BERT</span>-like model to annotate reports as one of forty-nine findings, as either a positive or negative mention or as a finding not mentioned. This model was then fine-tuned on the <math display="inline"><semantics> <mrow> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math> set. The <span class="html-italic">RegEx</span> and <span class="html-italic">BERT</span> models were then evaluated against a subset of the <math display="inline"><semantics> <mrow> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math> set that was not seen during training.</p>
Full article ">Figure 2
<p>Constructed example of two RegEx rules matching a Danish chest X-ray report. The Regex Labeler outputs a set of findings as being positively (1) or negatively (0) mentioned or not mentioned (-). In green, the <tt>Infiltrate</tt> rule matches a positive mention. In blue, the <tt>PleuralEffusion</tt> finding is mentioned and negated by the word “ingen”. Abnormalities that are not matched by any rule are assigned the “not mentioned” class.</p>
Full article ">Figure 3
<p>Distribution of the number of reports by the class assigned to each finding in the combined <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math> datasets for the <span class="html-italic">most frequent findings</span>. For most abnormalities, “not mentioned” was the most frequent class, except for <span class="html-italic">Infiltrate</span> and <span class="html-italic">PleuralEffusion</span>, for which negated mentions were more common.</p>
Full article ">Figure 4
<p>A large-scale dataset annotated with rule-based labels (<math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math>) was used to tune a BERT-like model to predict 49 findings in Danish chest X-ray reports. This model was then fine-tuned on a smaller set of different reports labeled by human expert annotators (<math display="inline"><semantics> <mrow> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math>).</p>
Full article ">Figure 5
<p>Distribution of macro F1 score across all 49 findings for the positive and negative mention classes for <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> <mo>→</mo> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>F1 scores of the most frequent findings. In square brackets, the label and class support in <math display="inline"><semantics> <mrow> <mi>H</mi> <msub> <mi>L</mi> <mrow> <mi>t</mi> <mi>e</mi> <mi>s</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, where <span class="html-italic">P</span> stands for positive mentions and <span class="html-italic">N</span> stands for negative mentions.</p>
Full article ">Figure 7
<p>Positive and negative mention macro F1 scores for each <span class="html-italic">k-fold</span> trained for the model ensembles, including the averages across folds and the scores of the models’ ensembles obtained through majority voting fold predictions.</p>
Full article ">Figure 8
<p>Data ablation study on <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> <mo>→</mo> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math>, showing F1 score for positive (•) and negative (×) mentions across five model training runs. Scores on <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math> are reported as <math display="inline"><semantics> <mrow> <mi>H</mi> <msubsup> <mi>L</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> <mrow> <mn>0</mn> <mo>%</mo> </mrow> </msubsup> </mrow> </semantics></math>, while <math display="inline"><semantics> <mrow> <mi>H</mi> <msubsup> <mi>L</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> <mrow> <mn>100</mn> <mo>%</mo> </mrow> </msubsup> </mrow> </semantics></math> refers to <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> <mo>→</mo> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math>. (<b>a</b>) Results on all findings and (<b>b</b>) on most frequent findings.</p>
Full article ">Figure A1
<p>Distribution of labels in the dataset (550,233 samples in total), including annotations from <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>B</mi> </mrow> </semantics></math> or <math display="inline"><semantics> <mrow> <mi>H</mi> <mi>L</mi> </mrow> </semantics></math> when available. In black, the most frequent findings.</p>
Full article ">
26 pages, 6862 KiB  
Article
Application of Anti-Collision Algorithm in Dual-Coupling Tag System
by Junpeng Cui, Muhammad Mudassar Raza, Renhai Feng and Jianjun Zhang
Electronics 2025, 14(4), 787; https://doi.org/10.3390/electronics14040787 - 17 Feb 2025
Viewed by 150
Abstract
Radio Frequency Identification (RFID) is a key component of automatic systems that address challenges in environment monitoring. However, tag collision remains an essential challenge in such applications due to high-density RFID deployments. This paper addresses RFID tag collision in large-scale, dense tag populations, particularly in industrial membrane contamination monitoring systems, and improves system performance by minimizing collision rates through an innovative anti-collision algorithm. This research improved the Predictive Framed Slotted ALOHA–Collision Tracking Tree (PRFSCT) algorithm by combining probabilistic and deterministic methods through dynamic frame length adjustment and multi-branch tree processes. After simulation and validation in MATLAB R2023a, we performed a hardware test with the RFM3200 and UHFReader18 passive tags. The method’s efficiency is evaluated through collision slot reduction, delay minimization, and enhanced throughput. For the same number of tags to identify, PRFSCT needs the fewest time slots among PRFSCT, Framed Slotted ALOHA (FSA), and Collision Tracking Tree (CTT); when identifying 500 tags, PRFSCT produces 225 collision slots, compared to approximately 715 for FSA and 883 for CTT. It demonstrates exceptional stability and adaptability under increased density while improving tag reading at distance. Full article
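The probabilistic half of such a hybrid scheme can be sketched as one Framed Slotted ALOHA frame plus a backlog-based frame-size update. The Schoute-style factor of about 2.39 tags per collision slot is a common textbook choice, not necessarily the paper’s exact rule:

```python
import random

def fsa_frame(n_tags, frame_size, rng):
    """Simulate one Framed Slotted ALOHA frame: every tag picks one slot
    uniformly at random. Returns (empty, singleton, collision) slot counts."""
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[rng.randrange(frame_size)] += 1
    empty = sum(1 for s in slots if s == 0)
    single = sum(1 for s in slots if s == 1)
    return empty, single, frame_size - empty - single

def next_frame_size(collision_slots):
    """Schoute's backlog estimate: about 2.39 unresolved tags per collision slot."""
    return max(1, round(2.39 * collision_slots))
```

In a PRFSCT-like hybrid, the deterministic half would then resolve each remaining collision slot with a CTT-style multi-branch tree split instead of re-running ALOHA on the whole population.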
(This article belongs to the Section Computer Science & Engineering)
Show Figures

Figure 1
<p>Integration of RFID-based membrane water purification system; (<b>a</b>) large-scale industrial membrane filtration setup; (<b>b</b>) RFID tags affixed to membrane housings.</p>
Full article ">Figure 2
<p>Block diagram of wireless sensing system.</p>
Full article ">Figure 3
<p>Collision model: (<b>a</b>) reader and tag collision model; (<b>b</b>) multi-reader collision model; (<b>c</b>) multi-label collision model.</p>
Full article ">Figure 4
<p>Schematic diagram of pure ALOHA algorithm collision.</p>
Full article ">Figure 5
<p>Schematic diagram of Slotted ALOHA algorithm collision.</p>
Full article ">Figure 6
<p>Schematic diagram of dynamic framed slotted ALOHA algorithm collision.</p>
Full article ">Figure 7
<p>QT algorithm collision diagram.</p>
Full article ">Figure 8
<p>Comparison of ALOHA algorithm throughput and collision rate: (<b>a</b>) throughput rate curve; (<b>b</b>) collision rate curve.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>Tree algorithm throughput comparison.</p>
Full article ">Figure 10
<p>Schematic diagram of Framed Slotted ALOHA algorithm collision.</p>
Full article ">Figure 11
<p>CTT algorithm collision diagram.</p>
Full article ">Figure 12
<p>PRFSCT algorithm flow chart.</p>
Full article ">Figure 13
<p>Simulation comparison of DFSA and Vogt algorithms: (<b>a</b>) number of collision time slots; (<b>b</b>) total number of time slots.</p>
Full article ">Figure 14
<p>Algorithm time slot number simulation: (<b>a</b>) total number of time slots; (<b>b</b>) number of collision time slots.</p>
Full article ">Figure 15
<p>Algorithms’ delay simulation.</p>
Full article ">Figure 16
<p>Matching capacitance change statistics.</p>
Full article ">Figure 17
<p>Placement of tags and readers: (<b>a</b>) 2 tags; (<b>b</b>) 4 tags; (<b>c</b>) 6 tags; (<b>d</b>) 8 tags; (<b>e</b>) 10 tags.</p>
Full article ">Figure 18
<p>Statistics of average tag reading times.</p>
Full article ">
15 pages, 3675 KiB  
Article
Automatic Annotation of Map Point Features Based on Deep Learning ResNet Models
by Yaolin Zhang, Zhiwen Qin, Jingsong Ma, Qian Zhang and Xiaolong Wang
ISPRS Int. J. Geo-Inf. 2025, 14(2), 88; https://doi.org/10.3390/ijgi14020088 - 17 Feb 2025
Viewed by 171
Abstract
Point feature cartographic label placement is a key problem in the automatic configuration of map labeling. Prior research addresses either label conflicts or label overlaps; it does not fully account for and resolve both types of issues. In this study, we apply machine learning techniques to the automatic placement of point feature labels, since label placement relies heavily on expert knowledge, a task well matched to neural networks’ ability to emulate human judgment. We trained ResNet using large amounts of well-labeled picture data. The trained model then predicted the proper label location for a given unlabeled point feature. We assessed the outcomes both quantitatively and qualitatively, contrasting the ResNet model’s output with that of the expert manual placement approach and the conventional Maplex automatic placement method. The ResNet model’s test set accuracy was 97.08%, demonstrating its ability to place point feature labels correctly. This study offers a workable solution to the label overlap and conflict problem. At the same time, it significantly enhances the map’s esthetic appeal and the clarity of the presented information. Full article
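The 8-position candidate model (cf. Figure 2) can be sketched as a greedy fallback: try each candidate box in priority order and keep the first that is free of conflicts. The offsets, priority order, and gap geometry below are assumptions for illustration, not the paper’s exact parameters:

```python
# Candidate offsets around the point, in an assumed priority order
# (upper-right first, as is conventional in the 8-position model).
OFFSETS = [(1, 1), (1, -1), (-1, 1), (-1, -1), (0, 1), (0, -1), (1, 0), (-1, 0)]

def candidate_boxes(px, py, w, h, gap=2):
    """The eight candidate label rectangles (x0, y0, x1, y1) around point (px, py)."""
    boxes = []
    for dx, dy in OFFSETS:
        cx = px + dx * (gap + w / 2)
        cy = py + dy * (gap + h / 2)
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

def overlaps(a, b):
    """Axis-aligned rectangle intersection test."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def place_label(px, py, w, h, obstacles):
    """Return the first candidate position free of every obstacle box, else None."""
    for box in candidate_boxes(px, py, w, h):
        if not any(overlaps(box, ob) for ob in obstacles):
            return box
    return None
```

A learned model such as ResNet replaces the fixed priority order with a per-point prediction, which is what lets it resolve conflicts the greedy rule cannot.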
Show Figures

Figure 1
<p>Workflow for the automatic annotation of point features using ResNet. (<b>A</b>) acquiring and preprocessing the map data; (<b>B</b>) training and testing the model; (<b>C</b>) evaluating model quality. The arrows represent the order of the workflow.</p>
Full article ">Figure 2
<p>Point feature label candidate location models: schematic diagram of the 8-position model.</p>
Full article ">Figure 3
<p>Original map data of Xuzhou City, Jiangsu province. The orange area features represent residential land and facilities, and the blue area features represent drainage (area). The blue line features represent drainage (line), and the line features in other colors represent the various levels of roads.</p>
Full article ">Figure 4
<p>Point feature label alternative location map automatic clipping: (<b>a</b>) the size of the text box indicated by the point feature annotation and (<b>b</b>) automatic cropping of alternate position maps for 8 orientations around point elements. The numbers 1–8 indicate the clipping order, i.e., the priority of each position.</p>
Full article ">Figure 5
<p>Data are grayed and added to reflect priority order by changing brightness. The arrows in the figure represent the processing order. (<b>a</b>) graying images; (<b>b</b>) adding data priority.</p>
Full article ">Figure 6
<p>ResNet model structure diagram.</p>
Full article ">Figure 7
<p>The convergence of the training loss and training accuracy.</p>
Full article ">Figure 8
<p>Text labeling performed with (<b>a</b>) ResNet, (<b>b</b>) Maplex, (<b>c</b>) ResNet, and (<b>d</b>) Maplex.</p>
Full article ">Figure 9
<p>Examples of label placement performed by ResNet: (<b>a</b>) label conflict; (<b>b</b>) label overlaps with other point features; and (<b>c</b>) label overlaps the river.</p>
Full article ">
29 pages, 2916 KiB  
Article
Advanced Digital Solutions for Food Traceability: Enhancing Origin, Quality, and Safety Through NIRS, RFID, Blockchain, and IoT
by Matyas Lukacs, Fruzsina Toth, Roland Horvath, Gyula Solymos, Boglárka Alpár, Peter Varga, Istvan Kertesz, Zoltan Gillay, Laszlo Baranyai, Jozsef Felfoldi, Quang D. Nguyen, Zoltan Kovacs and Laszlo Friedrich
J. Sens. Actuator Netw. 2025, 14(1), 21; https://doi.org/10.3390/jsan14010021 - 17 Feb 2025
Viewed by 146
Abstract
The rapid growth of the human population, the increase in consumer needs regarding food authenticity, and the sub-par synchronization between agricultural and food industry production necessitate the development of reliable track and tracing solutions for food commodities. The present research proposes a simple and affordable digital system that could be implemented in most production processes to improve transparency and productivity. The system combines non-destructive, rapid quality assessment methods, such as near infrared spectroscopy (NIRS) and computer/machine vision (CV/MV), with track and tracing functionalities revolving around the Internet of Things (IoT) and radio frequency identification (RFID). Meanwhile, authenticity is provided by a self-developed blockchain-based solution that validates all data and documentation “from farm to fork”. The system is introduced by taking certified Hungarian sweet potato production as a model scenario. Each element of the proposed system is discussed in detail individually and as a part of an integrated system, capable of automatizing most production flows while maintaining complete transparency and compliance with authority requirements. The results include the data and trust model of the system with sequence diagrams simulating the interactions between participants. The study lays the groundwork for future research and industrial applications combining digital tools to improve the productivity and authenticity of the agri-food industry, potentially increasing the level of trust between participants, most importantly for the consumers. Full article
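The paper’s blockchain component is self-developed and not specified here; the sketch below shows only the generic hash-chain idea behind such validation — “from farm to fork” records linked by SHA-256, so any tampered record breaks verification. The record fields are hypothetical:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's payload and its link to the parent."""
    payload = json.dumps(
        {"data": block["data"], "prev": block["prev"]}, sort_keys=True
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a chain where each block stores the previous block's hash."""
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev": prev}
        block["hash"] = block_hash(block)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify_chain(chain):
    """Recompute every hash and link; any edited record invalidates the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block):
            return False
        prev = block["hash"]
    return True
```

In the traceability setting, each block’s `data` would carry an RFID read event or a NIRS/CV quality measurement, so downstream participants can validate the full production history without trusting any single party.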
(This article belongs to the Topic Trends and Prospects in Security, Encryption and Encoding)
21 pages, 2934 KiB  
Article
Study of Particle Discharge from a Fluidized Bed: Experimental Investigation and Comparative Modeling Analysis
by Aisel Ajalova, Kaicheng Chen, Torsten Hoffmann and Evangelos Tsotsas
Processes 2025, 13(2), 562; https://doi.org/10.3390/pr13020562 - 17 Feb 2025
Viewed by 164
Abstract
Studying particle discharge rates in fluidized bed technology is important for optimizing continuous processes and improving product quality. This study investigates particle discharge, specifically the mass outflow rate, from a pilot-scale fluidized bed by means of experimental methods and mathematical modeling. The modeling uses various algebraic equations to predict the mass outflow rate and the time evolution of bed mass. Experiments in which these quantities were measured were conducted under different conditions, including varying mass inflow rates and process modes such as continuous and semi-batch. The results indicate that the mass outflow rate can be effectively modeled using existing equations from the literature, as well as a newly introduced equation, providing a comprehensive understanding of the holdup and discharge behavior of the fluidized bed. The newly introduced equation seems to perform better under transient conditions, being most appropriate for automatic control. Full article
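An algebraic bed-mass balance of the kind described can be integrated explicitly. This is a minimal sketch assuming a first-order outflow closure, Ṁ_out = k·M_bed, with a hypothetical discharge coefficient `k` — the paper compares several such closure equations, and this is only one candidate form:

```python
def simulate_bed_mass(m0, m_in, k, dt, steps):
    """Explicit-Euler integration of dM/dt = m_in - k*M
    (constant inflow, first-order outflow). Returns the mass history."""
    m, history = m0, [m0]
    for _ in range(steps):
        m_out = k * m               # assumed closure: outflow proportional to holdup
        m = m + dt * (m_in - m_out)
        history.append(m)
    return history
```

With constant inflow the holdup relaxes toward the steady state M* = m_in / k, which is the kind of transient behavior an automatic controller would track.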
(This article belongs to the Special Issue Green Particle Technologies: Processes and Applications)
Show Figures

Figure 1
<p>Particle size distribution and an image of the glass beads.</p>
Full article ">Figure 2
<p>Scheme of the pilot-scale fluidized bed plant.</p>
Full article ">Figure 3
<p>Gravimetric twin-screw feeder.</p>
Full article ">Figure 4
<p>Schematic of solids outlet from the fluidized bed.</p>
Full article ">Figure 5
<p>Schematic flow diagram of the mathematical modeling.</p>
Full article ">Figure 6
<p>Time evolution of measured <math display="inline"><semantics> <mrow> <msub> <mo>Ṁ</mo> <mrow> <mi>p</mi> <mo>,</mo> <mi>o</mi> <mi>u</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> (<b>left</b>) and <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mrow> <mi>b</mi> <mi>e</mi> <mi>d</mi> </mrow> </msub> </mrow> </semantics></math> (<b>right</b>) in Case 1.</p>
Full article ">Figure 7
<p>Time evolution of measured and modeled <math display="inline"><semantics> <mrow> <msub> <mo>Ṁ</mo> <mrow> <mi>p</mi> <mo>,</mo> <mi>o</mi> <mi>u</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mrow> <mi>b</mi> <mi>e</mi> <mi>d</mi> </mrow> </msub> </mrow> </semantics></math> for Case 2.</p>
Full article ">Figure 8
<p>Measured and modeled <math display="inline"><semantics> <mrow> <msub> <mo>Ṁ</mo> <mrow> <mi>p</mi> <mo>,</mo> <mi>o</mi> <mi>u</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mrow> <mi>b</mi> <mi>e</mi> <mi>d</mi> </mrow> </msub> </mrow> </semantics></math> for Case 5.</p>
Full article ">
27 pages, 1244 KiB  
Article
HYLR-FO: Hybrid Approach Using Language Models and Rule-Based Systems for On-Device Food Ordering
by Subhin Yang, Donghwan Kim and Sungju Lee
Electronics 2025, 14(4), 775; https://doi.org/10.3390/electronics14040775 - 17 Feb 2025
Abstract
Recent research has explored combining large language models (LLMs) with speech recognition for various services, but such applications require a strong network environment for quality service delivery. For on-device services, which do not rely on networks, resource limitations must be considered. This study proposes HYLR-FO, an efficient model that integrates a smaller language model (LM) with a rule-based system (RBS) to enable fast and reliable voice-based order processing in resource-constrained environments, approximating the performance of LLMs. By considering potential error scenarios and leveraging flexible natural language processing (NLP) and inference validation, this approach ensures both efficiency and robustness in order execution. Smaller LMs are used instead of LLMs to reduce resource usage. The LM transforms speech input, received via automatic speech recognition (ASR), into a consistent form that the RBS can process; the RBS then extracts the order and validates the extracted information. Experimental results on 5000 order data samples show that HYLR-FO achieves up to 86% accuracy, comparable to the 90% accuracy of LLMs, while processing up to 55 orders per second versus 1.14 orders per second for LLM-based approaches, a 48.25-fold speed improvement. This study demonstrates that HYLR-FO provides faster processing and accuracy similar to LLMs in resource-constrained on-device environments, with theoretical implications for optimizing LM efficiency in constrained settings and practical implications for real-time, low-resource AI applications. In particular, the design of HYLR-FO suggests its potential for efficient deployment in various commercial environments, achieving fast response times and low resource consumption with smaller models. Full article
(This article belongs to the Special Issue Machine/Deep Learning Applications and Intelligent Systems)
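The LM-to-RBS hand-off described in the abstract can be sketched as a two-stage pipeline: a normalizer maps free-form utterances to a canonical order phrase, and a rule-based extractor parses and validates it. Everything here is a hypothetical illustration under assumed names, a toy menu, and a toy rewrite rule standing in for the trained smaller LM; none of it is from the paper.

```python
import re

# Toy menu used for the validation step; purely illustrative.
MENU = {"americano", "latte", "sandwich"}

def normalize(utterance: str) -> str:
    """Stand-in for the smaller LM: map free-form speech text to a
    canonical 'order <qty> <item>' form via a toy rewrite rule."""
    text = utterance.lower().strip()
    return re.sub(r"(?:i'd like|please give me|can i get)\s+", "order ", text)

def extract_order(canonical: str):
    """Rule-based system: extract the order and validate it.

    Returns a dict on success, or None if parsing or validation fails.
    """
    m = re.match(r"order\s+(\d+)\s+(\w+)", canonical)
    if not m:
        return None
    qty, item = int(m.group(1)), m.group(2)
    # Inference validation: reject unknown items or non-positive counts.
    if item not in MENU or qty <= 0:
        return None
    return {"item": item, "qty": qty}

order = extract_order(normalize("I'd like 2 latte"))
```

Because the RBS only ever sees the canonical form, its rules stay small and fast, which is the design property the abstract attributes to HYLR-FO's throughput advantage.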
Figures (graphical abstract available)

Figure 1: Interaction and data flow among modules and the inference engine.
Figure 2: Workflow of user sentence processing and order completion.
Figure 3: Overview of HYLR-FO.
Figure 4: Comparison of the accuracy of HYLR-FO based on the number of parameters and the size of the training data.
Figure 5: Comparison of the accuracy and speed performances by scenario.