Search Results (72)

Search Parameters:
Keywords = AI feeding data

21 pages, 4009 KiB  
Article
Applying Acoustic Signals to Monitor Hybrid Electrical Discharge-Turning with Artificial Neural Networks
by Mehdi Soleymani and Mohammadjafar Hadad
Micromachines 2025, 16(3), 274; https://doi.org/10.3390/mi16030274 - 27 Feb 2025
Viewed by 133
Abstract
Artificial intelligence (AI) models have demonstrated their capabilities across various fields by performing tasks that are currently handled by humans. However, the training of these models faces several limitations, such as the need for sufficient data. This study proposes the use of acoustic signals as training data, as this method offers a simpler way to obtain a large dataset compared to traditional approaches. Acoustic signals contain valuable information about the process behavior. We investigated the feasibility of extracting useful features from acoustic data so that labels can be predicted separately by a multilabel classifier rather than jointly by a multiclass classifier. This study focuses on electrical discharge turning (EDT), a hybrid of electrical discharge machining (EDM) and turning and an intricate process with multiple influencing parameters. The sounds generated during EDT were recorded and used as training data. The sounds underwent preprocessing to examine the effects of the parameters used for feature extraction prior to feeding the data into the ANN model. The parameters investigated included the sample rate, the length of the FFT window, the hop length, and the number of mel-frequency cepstral coefficients (MFCC). The study aimed to determine the preprocessing parameters that yield the highest precision, recall, and F1 scores. The results revealed that instead of using the default values set in the Python packages, it is necessary to investigate the preprocessing parameters to find the optimal values for maximum classification performance. The promising results of the multilabel classification model showed that various aspects of a process can be detected simultaneously from a single data stream, which is very beneficial for monitoring. The results also indicated that the highest prediction scores could be achieved by setting the sample rate, length of the FFT window, hop length, and number of MFCC to 4500 Hz, 1024, 256, and 80, respectively.
(This article belongs to the Special Issue Future Prospects of Additive Manufacturing)
Figures:
Figure 1: EDT setup, the driver and controller, and the recording device applied for machining and signal recording.
Figure 2: Scanning Electron Microscope images of the machined surface.
Figure 3: Schematic of windows and the segmenting of frames.
Figure 4: Schematic of feature extraction and the ANN architecture, in which each hidden layer has a different color and each tick mark represents a selected label.
Figure 5: Accuracy diagrams for different sample rates (sr) and numbers of FFT points (n_fft).
Figure 6: Accuracy diagrams for different numbers of MFCC (n_mfcc) and hop lengths.
14 pages, 8539 KiB  
Article
Responsible Artificial Intelligence Hyper-Automation with Generative AI Agents for Sustainable Cities of the Future
by Daswin De Silva, Nishan Mills, Harsha Moraliyage, Prabod Rathnayaka, Sam Wishart and Andrew Jennings
Smart Cities 2025, 8(1), 34; https://doi.org/10.3390/smartcities8010034 - 17 Feb 2025
Viewed by 343
Abstract
Smart cities are Hyper-Connected Digital Environments (HCDEs) that transcend the boundaries of natural, human-made, social, virtual, and artificial environments. Human activities are no longer confined to a single environment, as our presence and interactions are represented and interconnected across HCDEs. The data streams and repositories of HCDEs provide opportunities for the responsible application of Artificial Intelligence (AI) that generates unique insights into the constituent environments and the interplay across constituents. The translation of data into insights poses several complex challenges originating in data generation and then propagating through the computational layers to decision outcomes. To address these challenges, this article presents the design and development of a Hyper-Automated AI framework with Generative AI agents for sustainable smart cities. The framework is empirically evaluated in the living lab setting of a ‘University City of the Future’. The developed AI framework is grounded in the core capabilities of acquisition, preparation, orchestration, dissemination, and retrospection, with an independent cognitive engine for hyper-automation of these AI capabilities using Generative AI. Hyper-automation output feeds into a human-in-the-loop process prior to decision-making outcomes. More broadly, this framework aims to provide a validated pathway for university cities of the future to take up the role of prototypes that deliver evidence-based guidelines for the development and management of sustainable smart cities.
Figures:
Figure 1: The Proposed Responsible AI Framework for Hyper-automation.
Figure 2: Schematic representation of the structure and function of the cognitive engine. The arrows in red are indicative of bi-directional information and instruction flows; for instance, the human agent engages the Council with information and instructions on complex tasks that are deconstructed and assigned to agents, with feedback loops to the human following execution and delivery.
Figure 3: Functional Codification: from active computing to retrieval and execution of pre-established code.
Figure 4: Implementation of the framework with a cognitive engine and agentic AI capabilities.
Figure 5: Human mobility prediction for indoor and outdoor activities.
Figure 6: Energy Consumption and Generation Forecasting for Time-based Decisions on Demand.
Figure 7: Evaluating the impact of events in the building on energy consumption.
Figure 8: PAMAP2 Results 1: Segmentation of High vs. Low Intensity Activities.
Figure 9: PAMAP2 Results 2: Incrementally Learned Sequential Flow of Activities.
30 pages, 5125 KiB  
Article
Application of Augmented Reality in Waterway Traffic Management Using Sparse Spatiotemporal Data
by Ruolan Zhang, Yue Ai, Shaoxi Li, Jingfeng Hu, Jiangling Hao and Mingyang Pan
Appl. Sci. 2025, 15(4), 1710; https://doi.org/10.3390/app15041710 - 7 Feb 2025
Viewed by 423
Abstract
The development of China’s digital waterways has led to the extensive deployment of cameras along inland waterways. However, the limited processing and utilization of digital resources hinder the ability to provide waterway services. To address this issue, this paper introduces a novel waterway perception approach based on an intelligent navigation marker system. By integrating multiple sensors into navigation markers, the fusion of camera video data and automatic identification system (AIS) data is achieved. The proposed enhanced one-stage object detection algorithm improves detection accuracy for small vessels in complex inland waterway environments, while an object-tracking algorithm ensures the stable monitoring of vessel trajectories. To mitigate AIS data latency, a trajectory prediction algorithm is employed, and region-based matching methods precisely align AIS data with the pixel coordinates detected in video feeds. Furthermore, an augmented reality (AR)-based traffic situational awareness framework is developed to dynamically visualize key information. Experimental results demonstrate that the proposed model significantly outperforms mainstream algorithms. It achieves exceptional robustness in detecting small targets and managing complex backgrounds, with data fusion accuracy ranging from 84.29% to 94.32% across multiple tests, thereby substantially enhancing the spatiotemporal alignment between AIS and video data.
Figures:
Figure 1: Overall architecture of the intelligent navigation marker system.
Figure 2: Multi-source data fusion framework: video data are first captured using network cameras, and the target detection algorithm is enhanced to address the specific characteristics of inland waterway targets. Following detection and tracking, the vessel positions are identified. Simultaneously, AIS data are collected through reception equipment, filtered, and processed using a trajectory prediction algorithm to ensure temporal alignment with the video data. Finally, a fusion module integrates the AIS data and video data for a comprehensive analysis.
Figure 3: One-stage object detection algorithm process: it includes feature extraction networks, multi-channel feature maps, and grid predictions of feature maps (such as object confidence and class probabilities). It also involves the parallel prediction of bounding boxes and class probabilities, followed by non-maximum suppression (NMS).
Figure 4: Range of video and AIS reception devices.
Figure 5: Division of the target detection range for waterway vessels.
Figure 6: Comparison of experimental results: (a) different algorithms' mAP@0.5 contrast; (b) different algorithms' precision contrast; (c) different algorithms' recall contrast.
Figure 7: Comparison of dehazing effects: (a) original foggy image; (b) image after dehazing.
Figure 8: Comparison of nighttime illumination enhancement effects: (a) original nighttime image; (b) image after illumination enhancement.
Figure 9: Comparison of ship detection in foggy channel environments: (a) ship detection on the original foggy image; (b) ship detection after image dehazing.
Figure 10: Comparison of ship detection in nighttime channel environments: (a) ship detection on the original nighttime image; (b) ship detection after image illumination enhancement.
Figure 11: Comparison of real trajectory and predicted trajectory.
Figure 12: Perspective area division: (a) the camera's field of view is divided into six smaller sub-regions; (b) the video matrix is also divided into six sub-regions according to the number of divisions made via the camera.
Figure 13: Pixel coordinate ship matching process: within the region set P2, there are a total of three ships; they are matched sequentially based on their distance from the set baseline.
Figure 14: Fusion effect display.
Figure 15: AR function display of waterway traffic situation awareness. By integrating hydrometeorological sensing equipment on the beacons, real-time local traffic environment data are provided. Display of the effects for three consecutive days separately: (a) 22 October; (b) 23 October; and (c) 23 October.
47 pages, 1743 KiB  
Review
Artificial Intelligence of Things (AIoT) Advances in Aquaculture: A Review
by Yo-Ping Huang and Simon Peter Khabusi
Processes 2025, 13(1), 73; https://doi.org/10.3390/pr13010073 - 1 Jan 2025
Viewed by 3481
Abstract
The integration of artificial intelligence (AI) and the internet of things (IoT), known as the artificial intelligence of things (AIoT), is driving significant advancements in the aquaculture industry, offering solutions to longstanding challenges related to operational efficiency, sustainability, and productivity. This review explores the latest research studies on AIoT within the aquaculture industry, focusing on real-time environmental monitoring, data-driven decision-making, and automation. IoT sensors deployed across aquaculture systems continuously track critical parameters such as temperature, pH, dissolved oxygen, salinity, and fish behavior. AI algorithms process these data streams to provide predictive insights into water quality management, disease detection, species identification, biomass estimation, and optimized feeding strategies, among others. While AIoT adoption in aquaculture is advantageous on various fronts, numerous challenges remain, including high implementation costs, data privacy concerns, and the need for scalable and adaptable AI models across diverse aquaculture environments. This review also highlights future directions for AIoT in aquaculture, emphasizing the potential for hybrid AI models, improved scalability for large-scale operations, and sustainable resource management.
(This article belongs to the Special Issue Transfer Learning Methods in Equipment Reliability Management)
Figures:
Figure 1: Conceptual framework of AIoT in aquaculture.
Figure 2: AIoT pipeline in aquaculture.
Figure 3: Applications of AIoT in aquaculture.
15 pages, 3320 KiB  
Article
Upcity: Addressing Urban Problems Through an Integrated System
by Andre A. F. Silva, Adao J. S. Porto, Bruno M. C. Belo and Cecilia A. C. Cesar
Sensors 2024, 24(24), 7956; https://doi.org/10.3390/s24247956 - 13 Dec 2024
Viewed by 901
Abstract
Current technologies could potentially solve many of the urban problems in today’s cities. Many cities already possess cameras, drones, thermometers, air pollution gauges, and other sensors. However, most of these have been designated for use in individual domains within City Hall, creating a maze of individual data domains that cannot connect to each other. This jumble of domains and stakeholders prevents collaboration and transparency. Cities need an integrated system in which data and dashboards can be shared by city administrators to better deal with urban problems that involve several sectors and to improve oversight. This paper presents a model of an integrative system to manage classes of problems within one administrative municipal domain. Our model contains the cyber-physical system’s elements: the physical object, the sensors and electronic devices attached to it, a database of collected problems, code running on the devices or remotely, and the human. We tested the model by using it on the recurring problem of potholes in city streets. An AI model for identifying potholes was integrated into applications available to citizens and operators so that they can feed the municipal system with images and the locations of potholes using their cell phone cameras. Preliminary results indicate that these sensors can detect potholes with an accuracy of 91% and 99%, depending on the detection equipment used. In addition, the dashboards provide the manager and the citizen with a transparent view of the problems’ progress and support for addressing them correctly.
(This article belongs to the Special Issue Advanced IoT Systems in Smart Cities: 2nd Edition)
Figures:
Figure 1: Context diagram of UpCity: urban problem treatment system.
Figure 2: System modeling with UML.
Figure 3: Pothole identification process.
Figure 4: Steps of the pothole identification process.
Figure 5: Citizen's dashboard.
Figure 6: Public infrastructure manager's dashboard.
20 pages, 26546 KiB  
Article
Synthetic Imaging Radar Data Generation in Various Clutter Environments Using Novel UWB Log-Periodic Antenna
by Deepmala Trivedi, Gopal Singh Phartiyal, Ajeet Kumar and Dharmendra Singh
Sensors 2024, 24(24), 7903; https://doi.org/10.3390/s24247903 - 11 Dec 2024
Viewed by 549
Abstract
In short-range microwave imaging, the collection of data in real environments for the purpose of developing target detection techniques is very cumbersome. At the same time, developing effective and efficient AI/ML-based techniques for target detection requires a sufficiently large dataset. Therefore, to complement labor-intensive and tedious experimental data collected in real cluttered environments, synthetic data generation via cost-efficient electromagnetic wave propagation simulations is explored in this article. To obtain realistic synthetic data, a 3-D model of an antenna, instead of a point source, is used to include the coupling effects between the antenna and the environment. A novel printed scalable ultra-wide band (UWB) log-periodic antenna with a tapered feed line is designed and incorporated in the simulation models. The proposed antenna has a highly directional radiation pattern with considerably high gain (more than 6 dBi) over the entire bandwidth. Synthetic data are generated for two different applications, namely through-the-wall imaging (TWI) and through-the-foliage imaging (TFI). After the generation of synthetic data, clutter removal techniques are also explored, and the results are analyzed in different scenarios. The analysis shows that the proposed UWB log-periodic antenna-based synthetic imagery is suitable as an alternative dataset for TWI and TFI application development, especially for training machine learning models.
(This article belongs to the Special Issue Microwave and Millimeter Wave Sensing and Applications)
Figures:
Figure 1: Methodology for generation of synthetic imaging radar data.
Figure 2: (a) Detailed geometry of the proposed log-periodic antenna; (b) 3-D model of the proposed log-periodic antenna.
Figure 3: Reflection coefficient for the proposed antennas.
Figure 4: Gains of the proposed antennas in a single direction (theta = 90°, phi = 90°): (a) Antenna_2; (b) Antenna_1.
Figure 5: Front-to-back ratio: (a) Antenna_2; (b) Antenna_1.
Figure 6: Far-field and current distribution of the proposed antennas: (a–e) Antenna_1 at 1.5, 2.5, 3.5, 4.5, and 5.5 GHz; (f–j) Antenna_2 at 0.7, 1.2, 1.7, 2.2, and 2.7 GHz; (k) annotations for reference.
Figure 7: Through-the-wall imaging environment (wall with target and Antenna_1).
Figure 8: Foliage environment with target and Antenna_2.
Figure 9: Through-the-wall imaging and post-processing at different target(s) locations. (a–c) Raw B-scans; (d–f) B-scans post SVD operation; (g–i) B-scans post REPC operation; (a,d,g) for a single target; (b,e,h) for two targets at the same range and different cross-range; (c,f,i) for two targets at different range and cross-range. Black circles in each image represent the targets' locations.
Figure 10: Through-the-wall imaging and post-processing with different walls. (a–d) Raw B-scans; (e–h) B-scans post SVD operation; (i–l) B-scans post REPC operation; (a,e,i) for a brick wall; (b,f,j) for a wood wall; (c,g,k) for a concrete wall; (d,h,l) for a glass wall. Black circles represent the targets' locations.
Figure 11: Real-time through-the-wall imaging and post-processing. (a,b) Raw B-scans; (c,d) B-scans post SVD operation; (e,f) B-scans post REPC operation; (a,c,e) for a single target; (b,d,f) for two targets at the same range and different cross-range. Black circles represent the targets' locations.
Figure 12: Foliage-penetrating radar imaging and post-processing with the antenna vertically oriented. (a–c) Raw B-scans; (d–f) B-scans post SVD operation; (g–i) B-scans post REPC operation; (j–l) B-scans post SVD operation on post-REPC data; (a,d,g,j) for a single target; (b,e,h,k) for two targets at the same range and different cross-range; (c,f,i,l) for two targets at different range and cross-range. Black circles represent the targets' locations.
Figure 13: Foliage-penetrating radar imaging and post-processing with the antenna horizontally oriented. (a–c) Raw B-scans; (d–f) B-scans post SVD operation; (g–i) B-scans post REPC operation; (j–l) B-scans post SVD operation on post-REPC data; (a,d,g,j) for a single target; (b,e,h,k) for two targets at the same range and different cross-range; (c,f,i,l) for two targets at different range and cross-range. Black circles represent the targets' locations.
Figure 14: Foliage-penetrating radar imaging and post-processing for different moisture content. (a–c) Raw B-scans; (d–f) B-scans post SVD operation; (g–i) B-scans post REPC operation; (j–l) B-scans post SVD operation on post-REPC data; (a,d,g,j) for dry foliage; (b,e,h,k) for moist foliage; (c,f,i,l) for wet foliage. Black circles represent the targets' locations.
Figure 15: Real-time foliage-penetrating radar imaging and post-processing with the antenna horizontally oriented. (a,b) Raw B-scans; (c,d) B-scans post SVD operation; (e,f) B-scans post REPC operation; (g,h) B-scans post SVD operation on post-REPC data; (a,c,e,g) for a single target; (b,d,f,h) for two targets at the same range and different cross-range. Black circles represent the targets' locations.
38 pages, 4777 KiB  
Article
Utility of Certain AI Models in Climate-Induced Disasters
by Ritusnata Mishra, Sanjeev Kumar, Himangshu Sarkar and Chandra Shekhar Prasad Ojha
World 2024, 5(4), 865-900; https://doi.org/10.3390/world5040045 - 8 Oct 2024
Viewed by 1002
Abstract
To address the current challenge of climate change at the local and global levels, this article discusses a few important water resources engineering topics, such as estimating the energy dissipation of water flowing over hilly areas through the provision of regulated stepped channels, predicting the removal of silt deposition in irrigation canals, and predicting groundwater levels. Artificial intelligence (AI) in water resource engineering is now one of the most active research topics. Accordingly, multiple AI tools such as Random Forest (RF), Random Tree (RT), M5P (M5 model trees), M5Rules, Feed-Forward Neural Networks (FFNNs), Gradient Boosting Machine (GBM), Adaptive Boosting (AdaBoost), and kernel-based Support Vector Machines (SVM-Pearson VII Universal Kernel, Radial Basis Function) are tested in the present study using various combinations of datasets. In several circumstances, including predicting the energy dissipation of stepped channels and silt deposition in rivers, the AI techniques outperformed the traditional approaches in the literature. Of all the models, the GBM model performed best in both fields during testing: for the energy dissipation of stepped channels it achieved a coefficient of determination (R2) of 0.998, a root mean square error (RMSE) of 0.00182, and a mean absolute error (MAE) of 0.0016, and for the sediment trapping efficiency of the vortex tube ejector an R2 of 0.997, an RMSE of 0.769, and an MAE of 0.531. On the other hand, the AI techniques could not adequately capture the diversity in the groundwater level datasets using field data from various stations. According to the current study, the AI tools work well in some fields of water resource engineering but have difficulty capturing the diversity of datasets in other domains.
Figures:
Figure 1: A graphical representation illustrating the correlation between the input target variables for predicting the energy dissipation of a stepped channel.
Figure 2: A graphical representation illustrating the correlation between the input target variables for predicting the sediment trapping efficiency of the vortex tube silt ejector.
Figure 3: Study area map.
Figure 4: The flow diagram of the current methodology.
Figure 5: Agreement diagram of observed and predicted ΔH/Hmax: (a) M5P; (b) M5Rules; (c) RF; (d) RT; (e) FFNN; (f) GBM; (g) AdaBoost; (h) SVM_PUK; (i) SVM_RBF.
Figure 6: Taylor's diagram of observed and predicted ΔH/Hmax: (a) AI model training; (b) AI model testing.
Figure 7: Distribution of relative errors of energy dissipation for all applied AI-based models in the (a) training phase and (b) testing phase.
Figure 8: Agreement diagram of observed and predicted trap efficiency: (a) M5P; (b) M5Rules; (c) RF; (d) RT; (e) FFNN; (f) GBM; (g) AdaBoost; (h) SVM_PUK; (i) SVM_RBF.
Figure 9: Taylor's diagram of observed and predicted trapping efficiency: (a) AI model training; (b) AI model testing.
Figure 10: Distribution of relative errors of trapping efficiency for all applied AI-based models in the (a) training phase and (b) testing phase.
Figure 11: Agreement diagram of observed and predicted GWL using AI models: (a) M5P; (b) M5Rules; (c) RF; (d) RT; (e) FFNN; (f) GBM; (g) AdaBoost; (h) SVM_PUK; (i) SVM_RBF.
Figure 12: Taylor's diagram of observed and predicted GWL: (a) AI model training; (b) AI model testing.
Figure 13: Distribution of relative errors of groundwater level for all applied AI-based models in the (a) training phase and (b) testing phase.
17 pages, 1579 KiB  
Article
AIDETECT2: A Novel AI-Driven Signal Detection Approach for beyond 5G and 6G Wireless Networks
by Bibin Babu, Muhammad Yunis Daha, Muhammad Ikram Ashraf, Kiran Khurshid and Muhammad Usman Hadi
Electronics 2024, 13(19), 3821; https://doi.org/10.3390/electronics13193821 - 27 Sep 2024
Cited by 1 | Viewed by 1094
Abstract
Artificial intelligence (AI) is revolutionizing multiple-input-multiple-output (MIMO) technology, making it a promising contender for the coming sixth-generation (6G) and beyond-fifth-generation (B5G) networks. However, the detection process in MIMO systems is highly complex and computationally demanding. To address this challenge, this paper presents an optimized AI-based signal detection method known as AIDETECT2, which is based on a feed-forward neural network (FFNN), for MIMO systems. The proposed AIDETECT2 network model demonstrates superior efficiency in signal detection in comparison with conventional and AI-based MIMO detection methods, particularly in terms of symbol error rate (SER) at various signal-to-noise ratios (SNR). This paper thoroughly explores various signal detection aspects using the FFNN, including the design of the system architecture, the preparation of data, the training process of the network model, and performance evaluation. Simulation results show that, at 20 dB SNR for the given MIMO scenarios, the proposed model achieves between 13.75% and 99.995% better SER than the best conventional method and between 56.52% and 97.69% better SER than benchmark AI-based MIMO detectors. The paper also presents a computational complexity analysis of different conventional and AI-based MIMO detectors. We believe that this optimized AI-based network model can serve as a comprehensive guide for deploying deep-learning (DL) neural networks for signal detection in the forthcoming 6G wireless networks.
Figures:
Figure 1: Mathematical architecture of the MIMO system model. The red dotted box signifies that the proposed work entails this block.
Figure 2: Mathematical architecture of the MIMO system model.
Figure 3: Schematic representation of AIDETECT.
Figure 4: AIDETECT2 block diagram.
Figure 5: Block diagram and architecture of the AIDETECT2 neural network model.
Figure 6: Comparison between outputs of conventional methods and AI models with AIDETECT2 for 2 × 2 MIMO systems.
Figure 7: Comparison between outputs of conventional methods and AI models with AIDETECT2 for 4 × 4 MIMO systems.
Figure 8: Comparison between outputs of conventional methods and AI models with AIDETECT2 for 8 × 8 MIMO systems.
Figure 9: Training RMSE and training loss of AIDETECT2 for the 8 × 8 MIMO system.
Figure 10: Comparison between outputs of AIDETECT2's neural network model for different numbers of hidden layers.
Figure 11: Computational complexity comparison in terms of FLOPs.
22 pages, 7434 KiB  
Article
AI-Based Prediction of Ultrasonic Vibration-Assisted Milling Performance
by Mohamed S. El-Asfoury, Mohamed Baraya, Eman El Shrief, Khaled Abdelgawad, Mahmoud Sultan and Ahmed Abass
Sensors 2024, 24(17), 5509; https://doi.org/10.3390/s24175509 - 26 Aug 2024
Viewed by 1504
Abstract
The current study aims to evaluate the performance of the ultrasonic vibration-assisted milling (USVAM) process when machining two materials with widely differing mechanical properties, specifically 7075 aluminium alloy and Ti-6Al-4V titanium alloy. Additionally, this study seeks to develop an AI-based model to predict the process performance based on experimental data for the different workpiece characteristics. In this regard, an ultrasonic vibratory setup was designed to provide vibration oscillations at 28 kHz frequency and 8 µm amplitude in the cutting feed direction for the two characterised materials of 7075 aluminium alloy (150 BHN) and Ti-6Al-4V titanium alloy (350 BHN). A series of slotting experiments were conducted using both conventional milling (CM) and USVAM techniques. The axial cutting force and machined slot surface roughness were evaluated for each method. Subsequently, Support Vector Regression (SVR) and artificial neural network (ANN) models were built, tested, and compared. The AI-based models were developed to analyse the experimental results and predict the process performance for both workpieces. The experiments demonstrated a significant reduction in cutting force, by up to 30%, and an improvement in surface roughness of approximately four times when using USVAM compared to CM for both materials. Validated against the experimental findings, the ANN model predicted the performance metrics more accurately, with an RMSE of 0.11 µm and 0.12 N for Al surface roughness and cutting force, respectively; for Ti, surface roughness and cutting force were predicted with an RMSE of 0.12 µm and 0.14 N, respectively. The results indicate that USVAM significantly enhances milling performance in terms of reduced cutting force and improved surface roughness for both 7075 aluminium alloy and Ti-6Al-4V titanium alloy. The ANN model proved to be an effective tool for predicting the outcomes of the USVAM process, offering valuable insights for optimising milling operations across different materials.
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
Figures:
Figure 1: (a) Transducer components: (1) front mass, (2) piezoelectric rings, (3) electrodes, (4) back mass, (5) steel bolt; (b) assembled transducer.
Figure 2: (a) Load and boundary conditions and (b) von Mises equivalent stress results from the finite element static analysis of the transducer.
Figure 3: Longitudinal mode shape of the transducer: (a) normalised displacement and (b) von Mises stress.
Figure 4: Longitudinal mode shape of the workpiece under the influence of the ultrasonic vibration-assisted milling device. (a,b) von Mises stress and normalised displacement, respectively, for the Al 7075 alloy workpiece; (c,d) von Mises stress and normalised displacement, respectively, for the Ti-6Al-4V titanium workpiece.
Figure 5: (a) A longitudinal path through the workpiece and the ultrasonic vibration-assisted milling device, (b) von Mises stress distribution along the path (Al 7075 workpiece), (c) normalised displacement distribution along the path (Al 7075 workpiece), (d) von Mises stress distribution along the path (Ti-6Al-4V workpiece) and (e) normalised displacement distribution along the path (Ti-6Al-4V workpiece).
Figure 6: (a) Iron powder distributed along the workpiece in the initial position and (b) iron powder concentrated at the nodal plane after applying vibration.
Figure 7: (a) Vibration system equipment, (b) complete setup with force-measurement system and ultrasonic components and (c) vibration direction during the milling process.
Figure 8: The process for selecting the number of hidden ANN layers, based on overall RMSE minimisation; two layers performed best.
Figure 9: A simplified representation of the ANN architecture built within the current study. Input neurons are coloured red, output neurons green, and hidden-layer neurons blue. For simplicity, this figure does not display weights, biases or the activation function.
Figure 10: Average axial milling force versus (a) DoC (N = 1000 rpm, f = 10 mm/min), (b) cutting feed (N = 3000 rpm, DoC = 0.1 mm) and (c) cutting speed (DoC = 0.1 mm, f = 10 mm/min).
Figure 11: Average surface roughness versus (a) DoC (N = 1000 rpm, f = 10 mm/min), (b) cutting feed (N = 3000 rpm, DoC = 0.1 mm) and (c) cutting speed (DoC = 0.1 mm, f = 10 mm/min).
Figure 12: Optical microscope images of the cutting tool in its original state, after USM and after CM for the aluminium and titanium alloys.
Figure 13: (a) The measurement method of the tool edge radius; (b) tool edge radius variation under different milling conditions and cutting lengths for the aluminium and titanium alloys.
Figure 14: SVR model for predicting Al (a) surface roughness and (b) cutting force.
Figure 15: ANN model for predicting Al (a) surface roughness and (b) cutting force.
Figure 16: SVR model for predicting Ti (a) surface roughness and (b) cutting force.
Figure 17: ANN model for predicting Ti (a) surface roughness and (b) cutting force.
37 pages, 7541 KiB  
Review
AI-Assisted Detection of Biomarkers by Sensors and Biosensors for Early Diagnosis and Monitoring
by Tomasz Wasilewski, Wojciech Kamysz and Jacek Gębicki
Biosensors 2024, 14(7), 356; https://doi.org/10.3390/bios14070356 - 22 Jul 2024
Cited by 9 | Viewed by 4741
Abstract
The steady progress in consumer electronics, together with improvement in microflow techniques, nanotechnology, and data processing, has led to implementation of cost-effective, user-friendly portable devices, which play the role of not only gadgets but also diagnostic tools. Moreover, numerous smart devices monitor patients’ health, and some of them are applied in point-of-care (PoC) tests as a reliable source of evaluation of a patient’s condition. Current diagnostic practices are still based on laboratory tests, preceded by the collection of biological samples, which are then tested in clinical conditions by trained personnel with specialistic equipment. In practice, collecting passive/active physiological and behavioral data from patients in real time and feeding them to artificial intelligence (AI) models can significantly improve the decision process regarding diagnosis and treatment procedures via the omission of conventional sampling and diagnostic procedures while also excluding the role of pathologists. A combination of conventional and novel methods of digital and traditional biomarker detection with portable, autonomous, and miniaturized devices can revolutionize medical diagnostics in the coming years. This article focuses on a comparison of traditional clinical practices with modern diagnostic techniques based on AI and machine learning (ML). The presented technologies will bypass laboratories and start being commercialized, which should lead to improvement or substitution of current diagnostic tools. Their application in PoC settings or as a consumer technology accessible to every patient appears to be a real possibility. Research in this field is expected to intensify in the coming years. Technological advancements in sensors and biosensors are anticipated to enable the continuous real-time analysis of various omics fields, fostering early disease detection and intervention strategies. The integration of AI with digital health platforms would enable predictive analysis and personalized healthcare, emphasizing the importance of interdisciplinary collaboration in related scientific fields.
(This article belongs to the Special Issue Microfluidic Biosensing Technologies for Point-of-Care Applications)
Figures:
Graphical abstract.
Figure 1: A schematic representation of (bio)sensor components for detecting biomarkers. ML- and AI-based data processing enables integration and combination of traditional biomarkers with digital ones to personalize healthcare. The acquired data can then be collected, distributed, and evaluated by clinicians and individual patients. Created with BioRender.com.
Figure 2: The key stages during the development of diagnostic tools based on sensors and biosensors.
Figure 3: Examples of devices for the detection and/or monitoring of traditional biomarkers.
Figure 4: (A) One-step multiplex analysis of breast cancer exosomes based on an electrochemical strategy assisted by AuNPs. Reproduced with permission from [112]. (B) Setup of the AI-coupled plasmonic infrared sensor for the detection of structural protein biomarkers in neurodegenerative diseases. Reproduced with permission from [130]. (C) Scheme of the multiplexed quantitative detection of biomarkers in sputum by a PoC paper-microfluidic electrochemical device [113]. (D) Example of a handheld LC diagnosis device based on a MIP sensor: a patient blows into the replaceable mouthpiece and the results are shown on his/her smartphone instantly; the mobile application graphs the data during the test, with an exploded view of the proposed lung cancer diagnosis handheld device. Reproduced with permission from [114]. (E) The construction and working process of the AuNPs@NIPAm-co-AAc microgel electrodes and the detection process of miRNA-21. Reproduced with permission from [111].
Figure 5: (A) Scheme of an electrical impedance cytometer. As cells pass from the inlet to the outlet in these biosensors, alterations in impedance are detected by a lock-in amplifier, which can simultaneously apply signals at various frequencies; subsequently, the data are recorded and analyzed using SVM. Reproduced with permission from [155]. (B) Interfacing 1D graphene nanoribbons with 2D MXene for the development of a pressure biosensor, trained using an ML algorithm. Reproduced with permission from [157]. (C) Schematic illustration of an angiotensin converting enzyme 2 (ACE2)-functionalized AgNR@SiO2 array for SARS-CoV-2 variant detection. Reproduced with permission from [162].
Figure 6: The scheme of analyzing proteomic data using BINNs. The first step is the creation of a BINN for each dataset by selecting relevant pathways from a database such as Reactome. BINNs are trained using protein quantities from each sample to distinguish between two subphenotypes. Subsequently, SHAP (a feature attribution method) is used to interpret the networks, providing feature importance values for biomarker identification. Reproduced with permission from [213]. Created with BioRender.com.
Figure 7: AI-assisted biomarker discovery compared to classic procedures.
19 pages, 15698 KiB  
Article
Enhancing Maritime Navigation with Mixed Reality: Assessing Remote Pilotage Concepts and Technologies by In Situ Testing
by Arbresh Ujkani, Pascal Hohnrath, Robert Grundmann and Hans-Christoph Burmeister
J. Mar. Sci. Eng. 2024, 12(7), 1084; https://doi.org/10.3390/jmse12071084 - 27 Jun 2024
Cited by 2 | Viewed by 1695
Abstract
In response to the evolving landscape of maritime operations, new technologies such as mixed reality (MR) are on the horizon that could enhance navigation safety and efficiency during remote assistance, e.g., in the remote pilotage use case. However, it is so far uncertain whether this technology can provide benefits in terms of usability and situational awareness (SA) compared with the screen-based visualizations established in maritime navigation. Thus, this paper initially tests and assesses novel approaches to pilotage in the congested maritime environment, integrating augmented reality (AR) for ship captains and virtual reality (VR) and desktop applications for pilots. The tested prototype employs AR glasses, notably the Hololens 2, to superimpose Automatic Identification System (AIS) data directly into the captain’s field of view, while pilots on land receive identical information alongside live 360-degree video feeds from cameras installed on the ship. Additional minimum functionalities include waypoint setting, bearing indicators, and voice communication. The efficiency and usability of these technologies are evaluated through in situ tests conducted with experienced pilots on a real ship, using the System Usability Scale, the Situational Awareness Rating Technique, and Simulator Sickness Questionnaires during the assessment. This includes a first indicative comparison of VR and desktop applications for the given use case.
Figures:
Figure 1: Conceptual system overview: for the user study, a WiFi connection was used instead of 4G/5G (adapted from [4]).
Figure 2: Shore-side desktop UI in split-screen mode.
Figure 3: Shore-side virtual reality user interface inside the 360° video environment with hand-tracking rig.
Figure 4: Ship information visualized in AR.
Figure 5: Layer menu and ship information as seen through the Hololens 2.
Figure 6: View through the Hololens 2.
Figure 7: The four phases of the testing procedure.
Figure 8: System Usability Scale for Test Persons A, B, and C.
Figure 9: SART questionnaire statistical results.
Figure 10: Feedback on the VR system regarding comfort and interactivity.
Figure 11: Feedback on the AR system regarding comfort and interactivity.
Figure 12: Feedback on the desktop system regarding comfort and interactivity.
14 pages, 573 KiB  
Article
Assessment of Water Intake among Chinese Toddlers: The Report of a Survey
by Yiding Zhuang, Zhencheng Xie, Minghan Fu, Hongliang Luo, Yitong Li, Ye Ding and Zhixu Wang
Nutrients 2024, 16(13), 2012; https://doi.org/10.3390/nu16132012 - 25 Jun 2024
Viewed by 1366
Abstract
Toddlerhood (aged 13~36 months) is a period of dietary transition, with water intake being significantly influenced by parental feeding patterns, cultural traditions, and the availability of beverages and food. Nevertheless, given the lack of applicable data, it is challenging to guide and evaluate the water intake of toddlers in China. In this study, our objectives were to assess the daily total water intake (TWI), evaluate the consumption patterns of the various beverages and food sources contributing to the TWI, determine the conformity of participants to the adequate intake (AI) recommendation of water released by the Chinese Nutrition Society, and analyze the various contributors to the daily total energy intake (TEI). The data for the assessment of water and dietary intake were obtained from the cross-sectional dietary intake survey of infants and young children (DSIYC, 2018–2019). A total of 1360 eligible toddlers were recruited in the analysis. The differences in related variables between the two age groups were compared by the Mann–Whitney U test and the Chi-Square test. The potential correlation between water and energy intake was examined utilizing age-adjusted partial correlation. Toddlers consumed a median daily TWI of 1079 mL, with 670 mL (62.3%, r = 0.752) derived from beverages and 393 mL (37.7%, r = 0.716) from foods. Plain water was the primary beverage source, contributing 300 mL (52.2%, r = 0.823), followed by milk and milk derivatives (MMDs) at 291 mL (45.6%, r = 0.595). Notably, only 28.4% of toddlers managed to reach the recommended AI value; these toddlers obtained more water from beverages than from foods. The median daily TEI of toddlers was 762 kcal, including 272 kcal from beverages (36.4%, r = 0.534) and 492 kcal from foods (63.6%, r = 0.894). Of this, the median daily energy intake from MMDs was 260 kcal, making up 94.6% of the energy intake from beverages (r = 0.959). As the first survey of the TWI of toddlers in China based on nationally representative data, this study calls for attention to the quality and quantity of water intake and for actions by both individuals and authorities to better guide parents. Additionally, revision of the reference value of TWI for Chinese toddlers is urgently required.
(This article belongs to the Topic Advances in Analysis of Food and Beverages)
Figures:
Figure 1: Percentages (%) of children aged 13~36 months according to compliance with the AI value of TWI set by the Chinese Nutrition Society, by age, segmented based on 50%AI, 75%AI, and 100%AI. AI: adequate intake. The chi-square test (χ²) was used to analyze the differences, yielding a chi-square value of 59.270 (p < 0.05), indicating a statistically significant difference between the two groups.
10 pages, 262 KiB  
Article
Usual Choline Intake of Australian Children 6–24 Months: Findings from the Australian Feeding Infants and Toddlers Study (OzFITS 2021)
by Zhixiao Li, Shao J. Zhou, Tim J. Green and Najma A. Moumin
Nutrients 2024, 16(12), 1927; https://doi.org/10.3390/nu16121927 - 18 Jun 2024
Viewed by 1494
Abstract
(1) Background: Despite the important role choline plays in child development, there are no data on dietary choline intake in early childhood in Australia. (2) Aim: In this cross-sectional study, we estimated the usual total choline intake and the proportion exceeding the Adequate Intake (AI) and determined the main dietary sources of choline in infants 6–12 months (n = 286) and toddlers 12–24 months (n = 475) of age. (3) Methods: A single 24-h food record with repeats collected during the 2021 Australian Feeding Infants and Toddlers Study (OzFITS 2021) was used to estimate dietary choline intake. (4) Results: The mean choline intake was 142 ± 1.9 mg/day in infants and 181 ± 1.2 mg/day in toddlers. Only 35% of infants and 23% of toddlers exceeded the AI for choline based on Nutrient Reference Values (NRVs) for Australia and New Zealand. Breastmilk was the leading source of choline, contributing 42% and 14% of total choline intake in infants and toddlers, respectively; however, egg consumers had the highest adjusted choline intakes and probability of exceeding the AI. (5) Conclusions: Findings suggest that choline intake may be suboptimal in Australian infants and toddlers. Further research to examine the impact of low choline intake on child development is warranted.
(This article belongs to the Special Issue Focus on Diet and Nutrition in Early Life of Infants)
18 pages, 4782 KiB  
Article
OnMapGaze and GraphGazeD: A Gaze Dataset and a Graph-Based Metric for Modeling Visual Perception Differences in Cartographic Backgrounds Used in Online Map Services
by Dimitrios Liaskos and Vassilios Krassanakis
Multimodal Technol. Interact. 2024, 8(6), 49; https://doi.org/10.3390/mti8060049 - 13 Jun 2024
Viewed by 1413
Abstract
In the present study, a new eye-tracking dataset (OnMapGaze) and a graph-based metric (GraphGazeD) for modeling visual perception differences are introduced. The dataset includes both experimental and analyzed gaze data collected during the observation of different cartographic backgrounds used in five online map services, including Google Maps, Wikimedia, Bing Maps, ESRI, and OSM, at three different zoom levels (12z, 14z, and 16z). The computation of the new metric is based on the utilization of aggregated gaze behavior data. Our dataset aims to serve as an objective ground truth for feeding artificial intelligence (AI) algorithms and developing computational models for predicting visual behavior during map reading. Both the OnMapGaze dataset and the source code for computing the GraphGazeD metric are freely distributed to the scientific community.
Figures:
Figure 1: Indicative samples of experimental visual stimuli.
Figure 2: A flowchart of the main experiment in the SR Research Experiment Builder environment.
Figure 3: Graph-based metric computation in three successive steps. For the illustrative example, an interval of 0.2 is selected.
Figure 4: Curve-fitting examples for modeling the graph-based metric. Blue lines represent the calculated values before the fitting process.
Figure 5: The components of the OnMapGaze dataset.
Figure 6: Aggregated statistical grayscale heatmaps produced for the highest-ranking visual stimuli.
Figure 7: An example of a higher-difference pair (on the left side) and a lower-difference pair (on the right side).
Figure 8: Fitted curves (hexic (sixth-degree) polynomial, left; rectangular hyperbola, middle; logistic function, right) corresponding to the highest (top) and lowest (bottom) values of R². Blue lines represent the calculated values before the fitting process.
16 pages, 18966 KiB  
Article
Monitoring Equipment Malfunctions in Composite Material Machining: Acoustic Emission-Based Approach for Abrasive Waterjet Cutting
by Ioan Alexandru Popan, Cosmin Cosma, Alina Ioana Popan, Vlad I. Bocăneț and Nicolae Bâlc
Appl. Sci. 2024, 14(11), 4901; https://doi.org/10.3390/app14114901 - 5 Jun 2024
Cited by 4 | Viewed by 1249
Abstract
This paper introduces an Acoustic Emission (AE)-based monitoring method designed for supervising the Abrasive Waterjet Cutting (AWJC) process, with a specific focus on the precision cutting of Carbon Fiber-Reinforced Polymer (CFRP). In industries dealing with complex CFRP components, such as the aerospace, automotive, or medical sectors, preventing cutting system malfunctions is very important. The proposed monitoring method addresses issues such as reductions or interruptions in the abrasive flow rate, clogging of the cutting head with abrasive particles, wear of the cutting system components, and drops in water pressure. Mathematical regression models were developed to predict the root mean square of the AE signal from key cutting parameters, namely the water pressure, abrasive mass flow rate, feed rate, and material thickness. Monitoring is conducted both at the cutting head and on the CFRP workpiece. The efficacy of the proposed monitoring method was validated through experimental tests, confirming its utility in maintaining precision and operational integrity in AWJC processes applied to CFRP materials. Integrating the proposed monitoring technique within the framework of digitalization and Industry 4.0/5.0 establishes the basis for advanced technologies such as Sensor Integration, Data Analytics and AI, Digital Twin Technology, Cloud and Edge Computing, MES and ERP Integration, and Human-Machine Interfaces. This integration enhances operational efficiency, quality control, and predictive maintenance in the AWJC process.
(This article belongs to the Special Issue Advancement in Smart Manufacturing and Industry 4.0)
Figures:
Graphical abstract.
Figure 1: The AWJC principle and kerf characteristics: (a) the AWJC principle, (b) the kerf geometry, and (c) the cut surface characteristics.
Figure 2: The proposed method for monitoring the Abrasive Waterjet Cutting process.
Figure 3: The experiment setup: (a) the Omax 2626 AWJ equipment; (b) the clamping system.
Figure 4: The AE signal acquisition setup: (a) the AE sensor installation, (b) the AE signal acquisition system architecture.
Figure 5: The CFRP samples cut during the experiment: (a) the experiment setup, (b) the cut CFRP specimens, (c) the generated kerf.
Figure 6: The main phases of the AE signal obtained during the experiment.
Figure 7: The frequency domain of the AE signal analyzed during the experiment (trial no. 3, P = 350 MPa, V = 2275 mm/min, Ma = 0.35 kg/min, and T = 3 mm): (a) the PSD of the AE signal measured at the cutting head, (b) the PSD of the AE signal measured at the CFRP workpiece.
Figure 8: The influence of the process parameters on the AE signal: (a) the AE signal measured at the cutting head, (b) the AE signal measured at the CFRP workpiece.
Figure 9: The AE_RMS signal analyzed in this scenario: (a) the AE signal measured at the CFRP workpiece, (b) the AE signal measured at the cutting head.
Figure 10: The kerf dimensions and the surface topography obtained in this scenario: (a) the normal AWJC process, (b) the AWJC process with an equipment malfunction.