Search Results (1,111)

Search Parameters:
Keywords = automatic identification system

23 pages, 7096 KiB  
Article
Kohonen Mapping of the Space of Vibration Parameters of an Intact and Damaged Wheel Rim Structure
by Arkadiusz Rychlik, Oleksandr Vrublevskyi and Daria Skonieczna
Appl. Sci. 2024, 14(23), 10937; https://doi.org/10.3390/app142310937 - 25 Nov 2024
Viewed by 267
Abstract
The research presented in this paper takes another step towards developing methods for automatic condition verification to detect structural damage to vehicle wheel rims. This study presents the utilisation of vibration spectra via Fast Fourier Transform (FFT) and a neural network’s learning capabilities for evaluating structural damage. Amplitude and time cycles of acceleration were analyzed as the structural response. These cycles underwent FFT analysis, leading to the identification of four diagnostic symptoms described by 20 features of the diagnostic signal, which in turn defined a condition vector. In the subsequent stage, the amplitude and frequency cycles served as input data for the neural network, and based on them, self-organizing maps (SOM) were generated. From these maps, a condition vector was defined for each of the four positions of the rim. Therefore, the technical condition of the wheel rim was determined based on the variance in condition parameter features, using reference frequencies of vibration spectra and SOM visualisations. The outcome of this work is a unique synergetic diagnostic system with innovative features, identifying the condition of a wheel rim through vibration and acoustic analysis along with neural network techniques in the form of Kohonen maps.
(This article belongs to the Section Acoustics and Vibrations)
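The paper's exact 20-feature condition vector is not given in this listing; the sketch below only illustrates the general pipeline the abstract describes (FFT of an acceleration record summarised into features, then a small Kohonen map) using NumPy and the third-party MiniSom package. The synthetic signals, band choices, and map size are illustrative assumptions, not the authors' configuration.

```python
# Sketch of an FFT-feature + SOM pipeline of the kind the abstract describes.
# Assumptions (not from the paper): synthetic signals, 4 band-energy features,
# a 10x10 Kohonen map built with MiniSom (pip install minisom).
import numpy as np
from minisom import MiniSom

FS = 10_000  # sampling rate [Hz], assumed

def fft_features(accel, fs=FS, bands=((0, 500), (500, 1000), (1000, 2000), (2000, 4000))):
    """Amplitude spectrum of an acceleration record, summarised as band energies."""
    spectrum = np.abs(np.fft.rfft(accel)) / len(accel)
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

# Synthetic "measurements" for four rim positions (0°, 90°, 180°, 270°)
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / FS)
records = [np.sin(2 * np.pi * (300 + 50 * k) * t) + 0.1 * rng.normal(size=t.size) for k in range(4)]
X = np.array([fft_features(r) for r in records])

# Self-organizing (Kohonen) map over the condition vectors
som = MiniSom(10, 10, input_len=X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=500)
for k, x in enumerate(X):
    print(f"position {k * 90}° maps to SOM node {som.winner(x)}")
```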
Show Figures

Figure 1. General view of the ZRTOK station for identification of the technical condition of wheel rims at the production stage, with highlights of its executive and measurement components. 1—the tested wheel rim; 2—framework with shaft for mounting the tested wheel rim; 3—wheel rim pressure nut; 4a, 4b—sensor of vibration acceleration; 5—shaft rotation angle sensor (encoder); 6—vibration inductor; 7—computing unit.
Figure 2. Scheme of the physical structure of the diagnostic station.
Figure 3. Graphical interpretation of the data acquisition process at the ZRTOK testing station: 1—the tested wheel rim; 2—framework with shaft for mounting the tested wheel rim; 3—wheel rim pressure nut; 4—sensor of vibration acceleration; 5—shaft rotation angle sensor (encoder); 6—vibration inductor; 7—wheel valve bore.
Figure 4. Dimensionality reduction with SOM.
Figure 5. Methodology flowchart.
Figure 6. General view of a wheel rim, size 9 × 15.3, utilized in duty vehicles and agricultural machines.
Figure 7. Spectra of vibration amplitudes for a new 9 × 15.3 wheel rim in serviceable condition (a) and unserviceable condition (b), obtained at the ZRTOK diagnostic station, with selected characteristic frequencies of the station highlighted and the tested wheel rim for four angles of measurements: 0°, 90°, 180°, 270°.
Figure 8. View of a rupture of the butt joint of the 9 × 15.3 wheel rim (vibration spectrum shown in Figure 6). 1—rupture of the butt joint of the wheel rim; 2—mark of an X-ray of the wheel rim disc.
Figure A1. Diagram of the dynamic interpretation of the system under consideration.
Figure A2. Structural diagram of a dynamic system described in terms of state variables.
Figure A3. Screen view of the Model28 program for identifying modal parameters of the wheel rim–shaft balancer.
15 pages, 7711 KiB  
Article
Development of Automated 3D LiDAR System for Dimensional Quality Inspection of Prefabricated Concrete Elements
by Shuangping Li, Bin Zhang, Junxing Zheng, Dong Wang and Zuqiang Liu
Sensors 2024, 24(23), 7486; https://doi.org/10.3390/s24237486 - 24 Nov 2024
Viewed by 492
Abstract
The dimensional quality inspection of prefabricated concrete (PC) elements is crucial for ensuring overall assembly quality and enhancing on-site construction efficiency. However, current practices remain heavily reliant on manual inspection, which results in high operator dependency and low efficiency. Existing Light Detection and Ranging (LiDAR)-based methods also require skilled professionals for scanning and subsequent point cloud processing, thereby presenting technical challenges. This study developed a 3D LiDAR system for the automatic identification and measurement of the dimensional quality of PC elements. The system consists of (1) a hardware system integrated with camera and LiDAR components to acquire 3D point cloud data and (2) a user-friendly graphical user interface (GUI) software system incorporating a series of algorithms for automated point cloud processing using PyQt5. Field experiments comparing the system’s measurements with manual measurements on prefabricated bridge columns demonstrated that the system’s average measurement error was approximately 5 mm. The developed system can provide a quick, accurate, and automated inspection tool for dimensional quality assessment of PC elements, thereby enhancing on-site construction efficiency.
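The listing's Figure 8 shows embedded rebar points being clustered with DBSCAN-style parameters (ε, MinPoints). A minimal sketch of that step is below, assuming scikit-learn's DBSCAN and a synthetic 2D slice of the end-face point cloud; the ε = 50 mm / min_samples = 5 values mirror the figure caption and are not a recommendation.

```python
# Sketch: cluster rebar points on a PC-column end face and measure centre spacing.
# Synthetic data; eps/min_samples follow the values shown in the article's Figure 8.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Four "rebars", each a small blob of points (units: mm), plus scattered noise.
centres = np.array([[100, 100], [100, 400], [400, 100], [400, 400]], dtype=float)
points = np.vstack([c + rng.normal(scale=5, size=(60, 2)) for c in centres] +
                   [rng.uniform(0, 500, size=(30, 2))])

labels = DBSCAN(eps=50, min_samples=5).fit_predict(points)  # -1 marks noise

# Estimated rebar centres = centroids of each cluster
found = np.array([points[labels == k].mean(axis=0) for k in sorted(set(labels)) if k != -1])
print("detected rebars:", len(found))

# Pairwise centre-to-centre spacing (compared against design spacing in practice)
d = np.linalg.norm(found[:, None, :] - found[None, :, :], axis=-1)
print("min spacing [mm]:", round(d[d > 0].min(), 1))
```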
Show Figures

Figure 1. Research framework.
Figure 2. The developed device: (a) the operating end of the device and (b) internal structure of the device.
Figure 3. Workflow of column end face automatic inspection system.
Figure 4. Software system of the proposed device.
Figure 5. Experiment in the field: (a) column production area, (b) column scanning based on the proposed system, and (c) basic information of bridge column.
Figure 6. Point cloud visualization of the column: (a) position of frontal scan, (b) rotation process of 2D LiDAR, (c) results from frontal scan, and (d) results from side-scan.
Figure 7. Point cloud processing and assessment of PC column.
Figure 8. Rebars clustering results under different parameters: (a) ε = 50, Minpoints = 5; (b) ε = 50, Minpoints = 10; (c) ε = 100, Minpoints = 10; (d) ε = 100, Minpoints = 5.
Figure 9. Absolute difference values of embedded rebar spacing.
Figure 10. Absolute difference values of embedded rebar length.
23 pages, 10144 KiB  
Article
A Fast Algorithm for Matching AIS Trajectories with Radar Point Data in Complex Environments
by Jialuo Xu, Ying Suo, Yuqing Jiang and Qiang Yang
Remote Sens. 2024, 16(23), 4360; https://doi.org/10.3390/rs16234360 - 22 Nov 2024
Viewed by 304
Abstract
In high-traffic port areas, vessel traffic-management systems (VTMS) are essential for managing ship movements and preventing collisions. However, inaccuracies and omissions in the Automatic Identification System (AIS), along with frequent false tracks generated by radar false alarms in complex environments, can compromise VTMS stability. To address the challenges of establishing consistent navigation and improving trajectory quality, this study introduces a novel method to directly identify AIS-matched trajectories from radar plots. This approach treats radar points as probability clouds, generating a multi-dimensional information layer by stacking these clouds after differential transformations based on AIS data. The resulting layer undergoes filtering and clustering to extract point sets that align with AIS data, effectively isolating matching trajectories. The algorithm, validated with simulated data, rapidly identifies target trajectories amid extensive interference without requiring strict parameter adjustments. In measured data, the algorithm rapidly provides matching trajectories, although further human judgment is still required due to the potential absence of true values in measured data.
(This article belongs to the Special Issue Innovative Applications of HF Radar (Second Edition))
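The core idea described in the abstract, subtracting the time-matched AIS position from every radar plot so that the true target's returns collapse toward a common offset (the radar's systematic bias), can be sketched in a few lines. Everything below is illustrative only: flat-earth metres instead of geodesic coordinates, synthetic plots, and a coarse 2D histogram standing in for the paper's stacked probability-cloud layers.

```python
# Sketch of the "difference transformation": radar plots minus the AIS position at
# the same timestamp. Plots belonging to the AIS vessel pile up near the (roughly
# constant) radar bias; clutter spreads out across the layer.
import numpy as np

rng = np.random.default_rng(2)
T = 30                                                    # number of scans
ais = np.cumsum(np.tile([60.0, 20.0], (T, 1)), axis=0)    # AIS track, metres
bias, noise = np.array([35.0, -20.0]), 8.0                # radar systematic error + jitter

true_plots = ais + bias + rng.normal(scale=noise, size=(T, 2))
clutter = rng.uniform(-2000, 2000, size=(T, 5, 2)) + ais[:, None, :]  # 5 false plots/scan

# Stack all plots per scan and subtract that scan's AIS position
diffs = np.concatenate([np.vstack([true_plots[t], clutter[t]]) - ais[t] for t in range(T)])

# Coarse 2-D histogram as a stand-in for the stacked probability-cloud layer
H, xe, ye = np.histogram2d(diffs[:, 0], diffs[:, 1], bins=80, range=[[-2000, 2000]] * 2)
i, j = np.unravel_index(H.argmax(), H.shape)
print("estimated radar bias ~", (round((xe[i] + xe[i + 1]) / 2), round((ye[j] + ye[j + 1]) / 2)))
```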
Show Figures

Figure 1. Traditional track-matching process.
Figure 2. Algorithm processing flow.
Figure 3. Illustrative diagram of timestamps in AIS and radar data.
Figure 4. When significant data loss occurs, vessels are more likely to follow the shortest-distance path rather than a spline-interpolated trajectory.
Figure 5. Interpolation strategy for continuous AIS data loss.
Figure 6. The result of linear interpolation does not lie on the geodesic.
Figure 7. Illustration of point stacking and difference transformation, where red points represent AIS data and blue points represent radar data: (a) how the layers of radar and AIS position information are stacked; (b) the result after applying difference transformation to each layer, positioning the AIS data at the zero point.
Figure 8. The curve fitting problem transforms into finding the optimal circle: (a) the curve fitting problem, with the search results located within the yellow ellipsoid; (b) the problem of finding a circle that “contains one point of each color”. The black dot represents the origin. Due to systematic errors in the radar, the center of the circle is typically not at the origin, and the search results are located within the yellow circle.
Figure 9. Visualization of the optimal solution search for Equation (18). The black dot represents the origin, which is the position of the AIS data after applying the difference transformation. The other colored dots represent radar track points at different time instances. (a) The intersection represents the fixation of the variable r. (b) When the set of shaded areas is reduced to a single element, it represents the optimal solution for the current scoring function.
Figure 10. Diffusion of radar plots from Figure 9 into a probability cloud based on the probability distribution in Equation (21), where the likelihood I is represented by intensity.
Figure 11. Illustration of Nearest-Neighbor Search.
Figure 12. Radar data and simulated data. The numbers above the points indicate the results of the i-th batch of radar scans at that specific time instance.
Figure 13. Simulated radar trace data, where points of the same color indicate they belong to the same batch of radar scans.
Figure 14. The superimposed results after differential transformation. After the differential transformation, the AIS data are positioned at the origin of the image.
Figure 15. Probability cloud of the first batch of data, with a search radius of 1. The brightness of overlapping points is not superimposed.
Figure 16. The result of filtering all layers, showing only unique tracks selected.
Figure 17. The algorithm’s selected trajectory points closely match the radar trajectory points. The numbers above the points indicate the results of the i-th batch of radar scans at that specific time instance. Some errors, however, are indistinguishable with only latitude and longitude. Introducing cog and v can help further screen these errors.
Figure 18. The positions of the tracks that meet the requirements after selecting different parameters when the random error is further increased. This method has a high tolerance for parameters.
Figure 19. Nearest-neighbor algorithm after adding two additional dimensions, effectively eliminating the erroneous selections mentioned in Figure 17.
Figure 20. Radar plot after difference transformation. A total of 29 different colors are used to represent 29 batches of radar plot data.
Figure 21. Clusters extracted from the L1 norm-based probability cloud, with blue cluster centers identified as true bias and each cluster represented by a distinct color. The centroids of the clusters are indicated by red cross marks.
Figure 22. Results of the algorithm, with the starting point shown in the bottom right.
Figure 23. Error analysis of both trajectories compared to the AIS trajectory.
Figure 24. Different thresholds selected for the filter may result in different outcomes. Similarly, different colors are used to represent different batches of radar plot data, with points of the same color indicating radar plots from the same batch.
Figure 25. The number of clusters varies across different regions, with each cluster represented by a distinct color. The centroids of the clusters are indicated by red cross marks.
Figure 26. When the systematic error of radar points is related to the coordinate system and rotation occurs locally, using a rotation matrix and displacement for fitting will yield better results.
20 pages, 13179 KiB  
Article
A Study on the Monitoring of Floating Marine Macro-Litter Using a Multi-Spectral Sensor and Classification Based on Deep Learning
by Youchul Jeong, Jisun Shin, Jong-Seok Lee, Ji-Yeon Baek, Daniel Schläpfer, Sin-Young Kim, Jin-Yong Jeong and Young-Heon Jo
Remote Sens. 2024, 16(23), 4347; https://doi.org/10.3390/rs16234347 - 21 Nov 2024
Viewed by 277
Abstract
Increasing global plastic usage has raised critical concerns regarding marine pollution. This study addresses the pressing issue of floating marine macro-litter (FMML) by developing a novel monitoring system using a multi-spectral sensor and drones along the southern coast of South Korea. Subsequently, a convolutional neural network (CNN) model was utilized to classify four distinct marine litter materials: film, fiber, fragment, and foam. Automatic atmospheric correction with the drone data atmospheric correction (DROACOR) method, which is specifically designed for currently available drone-based sensors, ensured consistent reflectance across altitudes in the FMML dataset. The CNN models exhibited promising performance, with precision, recall, and F1 score values of 0.9, 0.88, and 0.89, respectively. Furthermore, gradient-weighted class activation mapping (Grad-CAM), an object recognition technique, allowed us to interpret the classification performance. Overall, this study will shed light on successful FMML identification using multi-spectral observations for broader applications in diverse marine environments.
(This article belongs to the Special Issue Recent Progress in UAV-AI Remote Sensing II)
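The abstract and figure captions give the input shape (128 × 128 × 5 multi-spectral patches) and the four litter classes, but not the exact CNN-3 architecture. The PyTorch sketch below is a minimal stand-in with that input/output shape; the layer sizes are placeholders, not the authors' network.

```python
# Minimal CNN with the input/output shape quoted in the abstract:
# 5-band 128x128 patches -> 4 classes (film, fiber, fragment, foam).
# Layer sizes are placeholders, not the paper's CNN-3.
import torch
import torch.nn as nn

class LitterCNN(nn.Module):
    def __init__(self, bands=5, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),      # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 5, 128, 128)
        return self.classifier(self.features(x).flatten(1))

model = LitterCNN()
logits = model(torch.randn(8, 5, 128, 128))     # dummy batch of reflectance patches
print(logits.shape)                             # torch.Size([8, 4])
```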
Show Figures

Graphical abstract
Figure 1. The overall workflow shows the processes that led to the classification of FMML using drone-acquired data and deep learning models. We performed three steps: (1) FMML exploration; (2) data processing for the deep learning models; and (3) deep learning to process FMML classification and visualization.
Figure 2. The study location on Gadeok Island in South Korea and the data acquisition location of the drone surveys in the study area in drone-based imagery (red rectangle). Maps of the study area and a Pix4Dmapper image were used to illustrate the data acquisition.
Figure 3. FMML dataset of images captured by the drone in the study area.
Figure 4. CNN architecture for the classification of FMML. The training, validation, and test sets comprised FMML datasets as input. The input image size was 128 × 128 × 5. The output was labeled as film, fiber, fragment, and foam for the FMML. This network consisted of input, feature learning, classification, and output.
Figure 5. Reflectance analysis of flight altitude through atmospheric correction. (a) A multi-spectral image was obtained on 29 March 2023 (true color RGB; R: 668 nm; G: 560 nm; B: 475 nm; a 51 m flight altitude). Images for atmospheric correction were acquired at altitudes of 23, 51, 70, 101, 127, 146, and 170 m. (b) The image values for each altitude of the orange film buoy image before atmospheric correction were compared. (c) The reflectance for each altitude of the orange film buoy image using a DROACOR atmospheric correction processor were compared.
Figure 6. Spectra of all FMML lists in the dataset from the DROACOR-calculated reflectance.
Figure 7. A confusion matrix of the CNN-3 model (x-axis: recall; y-axis: precision). The green box indicates correct classification by the model, and the red box indicates incorrect classification.
Figure 8. Visualization of FMML using Grad-CAM on CNN-3 model. (a–d) Confident detections of FMML dataset labels. (e–h) Unconfident detections of FMML dataset labels.
Figure 9. The well-classified and misclassified results of each category in the CNN-3 Model. All the images are Micasense multi-spectral images of band five. (a–d) Classified as fiber. (e–h) Classified as film. (i–l) Classified as foam. (m–p) Classified as fragment. Green and red circles indicate well-classified and misclassified results, respectively.
14 pages, 2453 KiB  
Article
Dead Broiler Detection and Segmentation Using Transformer-Based Dual Stream Network
by Gyu-Sung Ham and Kanghan Oh
Agriculture 2024, 14(11), 2082; https://doi.org/10.3390/agriculture14112082 - 19 Nov 2024
Viewed by 353
Abstract
Improving productivity in industrial farming is crucial for precision agriculture, particularly in the broiler breeding sector, where swift identification of dead broilers is vital for preventing disease outbreaks and minimizing financial losses. Traditionally, the detection process relies on manual identification by farmers, which is both labor-intensive and inefficient. Recent advances in computer vision and deep learning have resulted in promising automatic dead broiler detection systems. In this study, we present an automatic detection and segmentation system for dead broilers that uses transformer-based dual-stream networks. The proposed dual-stream method comprises two streams that reflect the segmentation and detection networks. In our approach, the detection network supplies location-based features of dead broilers to the segmentation network, aiding in the prevention of live broiler mis-segmentation. This integration allows for more accurate identification and segmentation of dead broilers within the farm environment. Additionally, we utilized the self-attention mechanism of the transformer to uncover high-level relationships among the features, thereby enhancing the overall accuracy and robustness. Experiments indicated that the proposed approach achieved an average IoU of 88% on the test set, indicating its strong detection capabilities and precise segmentation of dead broilers.
(This article belongs to the Section Digital Agriculture)
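The listing's Figure 3 describes recalibrating encoded features with a transformer block built from layer normalization, multi-head attention, and an MLP. A generic pre-norm block of that shape, written with PyTorch's built-in nn.MultiheadAttention, is sketched below; the dimensions are illustrative and the dual-stream wiring of the paper is not reproduced.

```python
# Generic pre-norm transformer block (LayerNorm -> multi-head attention -> MLP),
# the kind of feature-recalibration block the listing's Figure 3 describes.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, dim=256, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):                                    # x: (batch, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]    # self-attention + residual
        return x + self.mlp(self.norm2(x))                   # MLP + residual

tokens = torch.randn(2, 196, 256)        # e.g. a 14x14 feature map flattened to tokens
print(TransformerBlock()(tokens).shape)  # torch.Size([2, 196, 256])
```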
Show Figures

Figure 1. Examples of dead broiler dataset. From top to bottom: images of dead broilers, GT masks, and Gaussian heatmaps centered around the locations of the dead broilers.
Figure 2. Proposed dual-stream network for the detection and segmentation of dead broiler.
Figure 3. Overview of the transformer block with multi-head attention. The figure illustrates the process of recalibrating encoded features using a transformer block, which includes layer normalization, multi-head attention, and an MLP (multi-layer perceptron).
Figure 4. Training and validation loss graph of our model.
Figure 5. Box plots display the distribution of performance metrics (IoU, Precision, Recall, and F-measure) for each segmentation method (U-Net, FCN, LinkNet, DeepLabV3, and the Proposed method).
Figure 6. Visualization results of the proposed method. Blue outlines indicate the predicted segmentation boundaries, while green outlines represent the ground truth (GT).
Figure 7. Comparison of segmentation and heatmap results. From left to right: original images, ground truth segmentation (GT-seg), output segmentation (Output-seg), ground truth heatmap (GT-heatmap), and output heatmap (Output-heatmap). Output heatmaps guide the model’s focus, enhancing segmentation accuracy.
19 pages, 4718 KiB  
Article
Spatiotemporal Analysis of Light Purse Seine Fishing Vessel Operations in the Arabian High Seas Based on Automatic Identification System Data
by Shenglong Yang, Linlin Yu, Keji Jiang, Xiumei Fan, Lijun Wan, Wei Fan and Heng Zhang
Appl. Sci. 2024, 14(22), 10692; https://doi.org/10.3390/app142210692 - 19 Nov 2024
Viewed by 346
Abstract
Understanding the dynamic spatial distribution and characteristics of fishing activities is crucial for fisheries management and sustainable development. In recent years, small pelagic fish and cephalopods in the Arabian Sea have become new targets for light purse seine fishing; however, there is a lack of publicly available reports. This study uses automatic identification system (AIS) data from January to May and October to December of 2021 to 2022 in the region between 58°–70° E and 10°–22° N to extract spatial distribution information through three methods. The results show that with a spatial resolution of 0.25° × 0.25°, the spatial similarity index between the fishing ground information extracted in 2022 and catch data was consistently above 0.60, reaching 0.76 in March 2021 and 0.79 in November 2022, while the spatial similarity index in March 2022 exceeded 0.71. The spatial distribution of fishing effort and kernel density was similar to that of the fishing grounds, and the fishing intensity information exhibited the highest spatiotemporal similarity with commercial catch data, making it more suitable as a substitute for fishery data. Therefore, effective international cooperation and efficient joint management mechanisms for fishing vessels are needed to enhance the regulatory oversight of fishing vessels in this region. Integrating AIS data with other technological methods is crucial for more effective monitoring and management of fishing vessels. The findings presented in this paper provide both quantitative and qualitative scientific support for resource conservation and sustainable development in the region.
(This article belongs to the Section Marine Science and Engineering)
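A minimal sketch of the fishing-effort gridding the abstract implies is given below: filter AIS records to presumed fishing speeds and accumulate time per 0.25° × 0.25° cell over the study region. The speed threshold (≤ 1.5 kn, as in the figure captions), the 10-minute report interval, and the synthetic records are assumptions.

```python
# Sketch: grid AIS-derived fishing effort (hours) on a 0.25-degree grid over the
# study region (58-70E, 10-22N), counting only low-speed records (<= 1.5 kn).
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
lon = rng.uniform(58, 70, n)
lat = rng.uniform(10, 22, n)
sog = rng.exponential(3.0, n)                 # speed over ground [kn], synthetic
dt_hours = np.full(n, 10 / 60)                # assume ~10 min between AIS reports

fishing = sog <= 1.5                          # threshold used in the article's figures
res = 0.25
lon_edges = np.arange(58, 70 + res, res)
lat_edges = np.arange(10, 22 + res, res)

effort, _, _ = np.histogram2d(lon[fishing], lat[fishing],
                              bins=[lon_edges, lat_edges],
                              weights=dt_hours[fishing])
print("total fishing hours:", round(effort.sum(), 1))
print("busiest cell (hours):", round(effort.max(), 1))
```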
Show Figures

Figure 1. (a) AIS vessel track point distribution in 2021; (b) AIS vessel track point distribution in 2022.
Figure 2. Spatiotemporal analysis workflow.
Figure 3. (a) Speed distribution of fishing vessels—all-day data; (b) speed distribution of fishing vessels—nighttime; (c) speed distribution of fishing vessels—daytime.
Figure 4. (a) Duration spatial distribution of all vessels in 2021; (b) duration spatial distribution of all vessels in 2022.
Figure 5. Mapping results of light purse seine vessel distribution (Bo0 = 0, speed ≤ 1.5 kn) using FE, KDE, and KDHSA in 2021.
Figure 6. Mapping results of light purse seine vessel distribution (Bo0 = 0, speed ≤ 1.5 kn) using FE, KDE, and KDHSA in 2022.
Figure 7. Average spatial similarity index under different spatial resolutions.
Figure 8. (a) Monthly distribution of fishing effort in 2021; (b) monthly distribution of fishing effort in 2022.
Figure 9. Mapping of monthly light purse seine vessel fishing efforts in 2021.
Figure 10. Mapping of monthly light purse seine vessel fishing efforts in 2022.
23 pages, 4387 KiB  
Article
Multisensor Feature Selection for Maritime Target Estimation
by Sun Choi and Jhonghyun An
Electronics 2024, 13(22), 4497; https://doi.org/10.3390/electronics13224497 - 15 Nov 2024
Viewed by 300
Abstract
This paper introduces a preprocessing and feature selection technique for maritime target estimation. Given the distinct challenges of the maritime environment and the use of multiple sensors, we propose a target estimation model designed to achieve high accuracy while minimizing computational costs through suitable data preprocessing and feature selection. The experimental results demonstrate excellent performance, with the mean square error (MSE) reduced by about 99%. This approach is expected to enhance vessel tracking in situations where vessel estimation sensors, such as the automatic identification system (AIS), are disabled. By enabling reliable vessel tracking, this technique can aid in the detection of illegal vessels.
(This article belongs to the Section Electrical and Autonomous Vehicles)
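The figure captions describe sweeping combinations of scalers, denoising thresholds, and regressors and scoring them by MSE. The scikit-learn sketch below shows the shape of such a comparison loop (scaler × regressor, scored by test MSE) on synthetic multisensor data; the sensor channels, models, and data are placeholders, not the paper's setup.

```python
# Sketch: grid over (scaler, regressor) combinations scored by mean squared error,
# mirroring the preprocessing/feature comparison described in the listing.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 12))                                 # 12 synthetic sensor channels
y = X[:, 0] * 3 - X[:, 3] + rng.normal(scale=0.2, size=500)    # synthetic "target distance"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scalers = {"MinMax": MinMaxScaler(), "Standard": StandardScaler(), "Normalizer": Normalizer()}
models = {"Ridge": Ridge(), "RandomForest": RandomForestRegressor(random_state=0)}

for s_name, scaler in scalers.items():
    for m_name, model in models.items():
        pipe = make_pipeline(scaler, model).fit(X_tr, y_tr)
        mse = mean_squared_error(y_te, pipe.predict(X_te))
        print(f"{s_name:10s} + {m_name:12s} MSE = {mse:.4f}")
```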
Show Figures

Figure 1. Overall stream from raw data input to target estimation output.
Figure 2. Data preprocessing.
Figure 3. Comparison of data before and after synchronization. The different colors in the graph represent each of the 20 experiments. (a) is the raw data, and (b) is the synchronized data, aligned to the point where the target is the closest.
Figure 4. Comparison of data before and after outlier handling. The different colors in the graph represent each of the 20 experiments. (a) shows the data with outliers and (b) shows the data after outlier removal, which allows for clearer visualization.
Figure 5. Results for various scaling methods. The different colors in the graph represent each of the 20 experiments. (a) shows the visualization of the original data without any applied scaling. (b–f) display the visualizations for each of the different scaling methods.
Figure 6. Feature selection.
Figure 7. STL decomposition example for magnetic field sensor data. The plots show the original, trend, seasonal, and residual components from the top, respectively. For residual-restricting thresholds in the last plot, the green dashed line is the quantile-based threshold which only restricts the top 5% and bottom 5% of residuals. The blue dashed line is the SR-based threshold.
Figure 8. Regression results for all preprocessing combinations before feature selection. Different marker shapes represent the 9 regressors, while color variations indicate the 5 scaling methods. Filled markers denote the SR denoising threshold, markers with black outlines represent QR, and empty markers indicate TS. Shapes indicate the models, with pentagons for XGB, circles for RandomForest, squares for GradientBoosting, triangles for DecisionTree, right-pointing triangles for Ridge, left-pointing triangles for Lasso, hexagons for SVR, diamonds for Linear, and inverted triangles for KNeighbors. Colors represent the scalers, where gray is MinMax, orange is Standard, green is Robust, purple is MaxAbs, and blue is Normalizer.
Figure 9. Results of sensor stability. The blue bar chart represents the mean correlation, and the red line chart represents the standard deviation.
Figure 10. Final regression results using the selected features, with Normalizer as the scaling method and TS as the denoising threshold.
Figure 11. Target estimation.
Figure 12. Comparison of average MSE between different feature selection methods. Hierarchical method shows the lowest MSE among other methods.
Figure 13. Comparison of the average MSE across different scaler methods and denoising thresholds. (a) shows the graph for 5 scaler methods, showing that the Normalizer yields the lowest average MSE. (b) shows the graph for 3 denoising thresholds, with the TS threshold achieving the lowest average MSE.
Figure 14. Visualization of each sensor. The different colors in the graph represent each of the 20 experiments. (a) shows data from the acoustic sensor, which was frequently selected as a key sensor in the hierarchical feature selection process. In contrast, (b) shows data from the specific energy sensor, which was identified as less important. By examining these visualizations of the actual sensor data, we can assess the validity of the feature selection results.
Figure 15. Qualitative results for target estimation using LSTM. The graph represents the distance between sensors and the target. The x-axis represents time, whereas the y-axis represents target distance. The blue line shows the original target distance. Gray, green, and fuchsia dashed lines represent denoised, existing feature selection, and proposed hierarchical feature selection, respectively.
19 pages, 4383 KiB  
Article
Classification of Ship Type from Combination of HMM–DNN–CNN Models Based on Ship Trajectory Features
by Dae-Woon Shin and Chan-Su Yang
Remote Sens. 2024, 16(22), 4245; https://doi.org/10.3390/rs16224245 - 14 Nov 2024
Viewed by 304
Abstract
This study proposes an enhanced ship-type classification model that employs a sequential processing methodology integrating hidden Markov model (HMM), deep neural network (DNN), and convolutional neural network (CNN) techniques. Four different ship types—fishing boat, passenger, container, and other ship—were classified using multiple ship trajectory features extracted from the automatic identification system (AIS) and small fishing vessel tracking system. For model optimization, both ship datasets were transformed into various formats corresponding to multiple models, incorporating data enhancement and augmentation approaches. Speed over ground, course over ground, rate of turn, rate of turn in speed, berth distance, latitude/longitude, and heading were used as input parameters. The HMM–DNN–CNN combination was obtained as the optimal model (average F-1 score: 97.54%), achieving individual classification performances of 99.03%, 97.46%, and 95.83% for fishing boats, passenger ships, and container ships, respectively. The proposed approach outperformed previous approaches in prediction accuracy, with further improvements anticipated when implemented on a large-scale real-time data collection system.
(This article belongs to the Special Issue Artificial Intelligence and Big Data for Oceanography)
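The model's inputs listed in the abstract (SOG, COG, ROT, berth distance, position, heading) are standard kinematic features. A sketch of deriving course-over-ground, speed, and rate-of-turn from consecutive AIS fixes is shown below, using a flat-earth approximation on a synthetic track; in practice SOG usually comes straight from the AIS message.

```python
# Sketch: derive COG, SOG and ROT from consecutive AIS fixes (flat-earth
# approximation, adequate over short distances; COG wrap-around is ignored).
import numpy as np

# Synthetic track: (t [s], lat [deg], lon [deg])
track = np.array([[0,   35.000, 129.000],
                  [60,  35.002, 129.001],
                  [120, 35.004, 129.003],
                  [180, 35.005, 129.006]])

t, lat, lon = track[:, 0], track[:, 1], track[:, 2]
dlat = np.diff(lat)
dlon = np.diff(lon) * np.cos(np.radians(lat[:-1]))   # shrink longitude by latitude

cog = (np.degrees(np.arctan2(dlon, dlat)) + 360) % 360   # course over ground [deg]
dist_nm = np.hypot(dlat, dlon) * 60                      # 1 deg of arc ~ 60 nautical miles
sog = dist_nm / (np.diff(t) / 3600)                      # speed over ground [kn]
rot = np.diff(cog) / (np.diff(t)[1:] / 60)               # rate of turn [deg/min]

print("COG [deg]:", cog.round(1))
print("SOG [kn]: ", sog.round(2))
print("ROT [deg/min]:", rot.round(2))
```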
Show Figures

Figure 1. Study area (red box) and trajectories of ships from 6 to 10 February 2021. The red dot indicates the location of the Korea Institute of Ocean Science and Technology, operating a monitoring station for merchant and fishing vessels. Blue and green lines depict the ship trajectories obtained from the AIS and V-Pass, respectively. Here, AIS = automatic identification system, and V-Pass = small fishing vessel tracking system.
Figure 2. Trajectories of different ship types from the training dataset shown in Table 1. (a) Fishing boat, (b) passenger ship, (c) container ship, and (d) other ship.
Figure 3. Overall workflow for ship type classification through combining of multiple models. Here, SOG = speed over ground, ROT = rate of turn, ROTS = rate of turn in speed, and COG = course over ground.
Figure 4. Structure of hierarchical HMM model for classifying fishing boat. (a) The position-based probability of fishing activity was derived from two observational parameters, SOG and ROT, at each time step. (b) Fishing/non-fishing state estimated by the stochastic method based on SOG (top) and ROT (bottom).
Figure 5. Flowchart for estimating the probability of the DNN model input values through filtering for passenger ship classification.
Figure 6. Flowchart for estimating the probability of the CNN model input values by thresholding and filtering for container ship classification. Here, CP = container pier, PM = pier masking, and NCP = non-container pier.
Figure 7. Case application of the HMM model and fishing boat trajectory feature analysis from the training dataset. (a) Labeling of classified trajectory into fishing (red circle) and non-fishing (blue circle). (b) Comparison of SOG and ROT distributions between fishing and non-fishing states.
Figure 8. Analysis of passenger ship trajectory features from the training dataset. (a) Example of a passenger ship trajectory on 10 February 2021. (b) Comparative analysis between passenger and other ship types based on the probability of parameters: berth distance, ROTS, and heading.
Figure 9. Pier masking area to classify container ships from the training dataset. Red and blue polygons display CP and NCP, respectively (left figure). A sample container ship berthed at CP on 10 February 2021 (green circle), intersecting the CP polygon and container ship trajectory points (right figure). Here, CP = container pier, and NCP = non-container pier.
Figure 10. Analysis of container ship trajectory features from the training dataset. (a) Container ship density map in log scale and main navigating direction (black arrows). (b) Comparative analysis between container ships and other ship types using the three RGB inputs, composed of ship trajectories (b-1, b-2), SOG (b-3, b-4), and COG (b-5, b-6), respectively.
Figure 11. Comparison of ground truth and model classification results for fishing boat (blue circle), passenger ship (green circle), and container ship (red circle).
Figure 12. Confusion matrices of HMM, DNN, and CNN models applied to the test dataset. (a) Fishing boats and other ships. (b) Passenger ships and other ships. (c) Container ships and other ships.
25 pages, 4557 KiB  
Article
Spatio-Temporal Transformer Networks for Inland Ship Trajectory Prediction with Practical Deficient Automatic Identification System Data
by Youan Xiao, Xin Luo, Tengfei Wang and Zijian Zhang
Appl. Sci. 2024, 14(22), 10494; https://doi.org/10.3390/app142210494 - 14 Nov 2024
Viewed by 439
Abstract
Inland waterways, characterized by their complex, narrow paths, see significantly higher traffic volumes compared to maritime routes, increasing the regulatory demands on traffic management. Predictive modeling of ship traffic flows, utilizing real AIS historical data, enhances route and docking planning for ships and port managers. This approach boosts transportation efficiency and safety in inland waterway navigation. Nevertheless, AIS data are flawed, marred by noise, disjointed paths, anomalies, and inconsistent timing between points. This study introduces a data processing technique to refine AIS data, encompassing segmentation, outlier elimination, missing point interpolation, and uniform interval resampling, aiming to enhance trajectory analysis reliability. Utilizing this refined data processing approach on ship trajectory data yields independent, complete motion profiles with uniform timing. Leveraging the Transformer model, denoted TRFM, this research integrates processed AIS data from the Yangtze River to create a predictive dataset, validating the efficacy of our prediction methodology. A comparative analysis with advanced models such as LSTM and its variants demonstrates TRFM’s superior accuracy, showcasing lower errors in multiple metrics. TRFM’s alignment with actual trajectories underscores its potential for enhancing navigational planning. This validation not only underscores the method’s precision in forecasting ship movements but also its utility in risk management and decision-making, contributing significantly to the advancement in maritime traffic safety and efficiency.
(This article belongs to the Special Issue Efficient and Innovative Goods Transportation and Logistics)
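The data-processing steps named in the abstract (segmentation on time gaps, outlier removal, interpolation, uniform-interval resampling) can be sketched with pandas; the gap and speed thresholds and the toy records below are assumptions, not values from the paper.

```python
# Sketch of the AIS cleaning steps named in the abstract: split on large time gaps,
# drop speed outliers, then resample each segment to a uniform interval.
import pandas as pd

ais = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:03", "2024-01-01 00:07",
                            "2024-01-01 02:00", "2024-01-01 02:05"]),
    "lat": [30.60, 30.61, 30.62, 30.80, 30.81],
    "lon": [114.20, 114.21, 114.23, 114.40, 114.42],
    "sog": [8.0, 8.5, 55.0, 7.5, 7.8],          # 55 kn is an obvious outlier
}).set_index("time")

ais = ais[ais["sog"] < 30]                      # outlier elimination (threshold assumed)

gap = ais.index.to_series().diff() > pd.Timedelta("30min")   # segmentation threshold assumed
ais["segment"] = gap.cumsum()

# Uniform 5-minute intervals within each segment, missing points interpolated
resampled = (ais.groupby("segment")[["lat", "lon", "sog"]]
                .resample("5min").mean().interpolate("linear"))
print(resampled)
```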
Show Figures

Figure 1. Architecture of ship trajectory prediction method based on TRFM model.
Figure 2. A representation of the trajectory prediction problem.
Figure 3. Trajectory segmentation.
Figure 4. Removal of anomalies and redundant points.
Figure 5. The Self-Attention calculation structure.
Figure 6. The multi-head attention layer.
Figure 7. The data processing effects at each step: (a) Trajectories after segmentation. (b) Trajectories after segmentation. (c) Trajectories after segmentation. (d) Trajectories after uniform time interval resampling.
Figure 8. Training loss curves for different models.
Figure 9. Bar charts of ADE and FDE for different models.
Figure 10. Comparison of predicted trajectories and actual trajectories for different models: (a) LSTM, (b) ATT-LSTM, (c) CNN-LSTM, (d) Bi-LSTM, (e) TRFM(DEC), (f) TRFM(ENC-DEC).
17 pages, 4873 KiB  
Article
An Ensemble Approach for Speaker Identification from Audio Files in Noisy Environments
by Syed Shahab Zarin, Ehzaz Mustafa, Sardar Khaliq uz Zaman, Abdallah Namoun and Meshari Huwaytim Alanazi
Appl. Sci. 2024, 14(22), 10426; https://doi.org/10.3390/app142210426 - 13 Nov 2024
Viewed by 378
Abstract
Automatic noise-robust speaker identification is essential in various applications, including forensic analysis, e-commerce, smartphones, and security systems. Audio files containing suspect speech often include background noise, as they are typically not recorded in soundproof environments. To this end, we address the challenges of noise robustness and accuracy in speaker identification systems. An ensemble approach is proposed combining two different neural network architectures including an RNN and DNN using softmax. This approach enhances the system’s ability to identify speakers even in noisy environments accurately. Using softmax, we combine voice activity detection (VAD) with a multilayer perceptron (MLP). The VAD component aims to remove noisy frames from the recording. The softmax function addresses these residual traces by assigning a higher probability to the speaker’s voice compared to the noise. We tested our proposed solution on the Kaggle speaker recognition dataset and compared it to two baseline systems. Experimental results show that our approach outperforms the baseline systems, achieving a 3.6% and 5.8% increase in test accuracy. Additionally, we compared the proposed MLP system with Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) classifiers. The results demonstrate that the MLP with VAD and softmax outperforms the LSTM by 23.2% and the BiLSTM by 6.6% in test accuracy.
(This article belongs to the Special Issue Advances in Intelligent Information Systems and AI Applications)
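The ensemble the abstract describes combines the softmax outputs of two networks (an RNN and a DNN). A minimal sketch of softmax-level fusion on placeholder logits is shown below; the equal averaging weights and the class count are assumptions.

```python
# Sketch: fuse two classifiers at the softmax level, as in the described ensemble
# (plain NumPy on placeholder logits; equal weights are an assumption).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(5)
n_speakers = 10
rnn_logits = rng.normal(size=(4, n_speakers))    # 4 utterances, 10 speaker classes
dnn_logits = rng.normal(size=(4, n_speakers))

fused = 0.5 * softmax(rnn_logits) + 0.5 * softmax(dnn_logits)
print("predicted speaker per utterance:", fused.argmax(axis=1))
```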
Show Figures

Figure 1. The proposed framework.
Figure 2. Illustration of recurrent neural network.
Figure 3. Illustration of deep neural network.
Figure 4. The proposed MLP classifier.
Figure 5. The LSTM network used.
Figure 6. The BiLSTM model.
Figure 7. The proposed framework compared with baselines in terms of spectrogram features.
Figure 8. The proposed framework compared with baselines in terms of MFCC features.
Figure 9. MLP model loss with different features.
Figure 10. MLP model validation loss.
Figure 11. MLP model accuracy.
Figure 12. MLP model validation accuracy.
Figure 13. Model accuracy of the three models.
Figure 14. Validation accuracy comparison.
Figure 15. Model loss of the three models.
Figure 16. Validation loss of the three models.
Figure 17. Model MSE of the three models.
Figure 18. Validation MSE of the three models.
29 pages, 4732 KiB  
Article
Environmental and Cost Assessments of Marine Alternative Fuels for Fully Autonomous Short-Sea Shipping Vessels Based on the Global Warming Potential Approach
by Harriet Laryea and Andrea Schiffauerova
J. Mar. Sci. Eng. 2024, 12(11), 2026; https://doi.org/10.3390/jmse12112026 - 9 Nov 2024
Viewed by 417
Abstract
This research paper presents an effective approach to reducing marine pollution and costs by determining the optimal marine alternative fuels framework for short-sea shipping vessels, with a focus on energy efficiency. Employing mathematical models in a Python environment, the analyses are tailored specifically for conventional and fully autonomous high-speed passenger ferries (HSPFs) and tugboats, utilizing bottom-up methodologies, ship operating phases, and the global warming potential approach. The study aims to identify the optimal marine fuel that offers the highest Net Present Value (NPV) and minimal emissions, aligning with International Maritime Organization (IMO) regulations and environmental objectives. Data from the ship’s Automatic Identification System (AIS), along with specifications and port information, were integrated to assess power, energy, and fuel consumption, incorporating parameters of proposed marine alternative fuels. This study examines key performance indicators (KPIs) for marine alternative fuels used in both conventional and autonomous vessels, specifically analyzing total mass emission rate (TMER), total global warming potential (TGWP), total environmental impact (TEI), total environmental damage cost (TEDC), and NPV. The results show that hydrogen (H2-Ren, H2-F) fuels and electric options produce zero emissions, while traditional fuels like HFO and MDO exhibit the highest TMER. Sensitivity and stochastic analyses identify critical input variables affecting NPV, such as fuel costs, emission costs, and vessel speed. Findings indicate that LNG consistently yields the highest NPV, particularly for autonomous vessels, suggesting economic advantages and reduced emissions. These insights are crucial for optimizing fuel selection and operational strategies in marine transportation and offer valuable guidance for decision-making and investment in the marine sector, ensuring regulatory compliance and environmental sustainability.
(This article belongs to the Special Issue Performance and Emission Characteristics of Marine Engines)
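NPV over a vessel's operating life is the standard discounted cash-flow sum; a small sketch with placeholder cost figures is given below. The capital, fuel, and emission costs, the 8% discount rate, and the 20-year lifetime are assumptions for illustration, not inputs from the study.

```python
# Sketch: Net Present Value of a fuel option as a discounted sum of yearly cash flows.
# NPV = -CAPEX + sum_t (revenue - fuel cost - emission cost) / (1 + r)^t
# All numbers below are placeholders, not values from the study.
def npv(capex, yearly_revenue, yearly_fuel_cost, yearly_emission_cost,
        discount_rate=0.08, lifetime_years=20):
    cash_flow = yearly_revenue - yearly_fuel_cost - yearly_emission_cost
    return -capex + sum(cash_flow / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))

print(f"LNG option NPV:  {npv(8e6, 2.5e6, 0.9e6, 0.1e6):,.0f} USD")
print(f"MDO option NPV:  {npv(6e6, 2.5e6, 1.2e6, 0.4e6):,.0f} USD")
```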
Show Figures

Figure 1. Flowchart of the data analysis process for marine alternative fuels in conventional and fully autonomous vessels.
Figure 2. A segment of navigation routes depicted on a map sourced from Google Maps [41]: (a) HSPF, (b) tugboat.
Figure 3. Results of the KPIs for the conventional and fully autonomous ships: HSPFs and tugboats.
Figure 4. Sensitivity analysis for conventional HSPF: (a) HFO, (b) MDO, (c) MGO, (d) H2-Ren, (e) H2-F, (f) Elec, (g) B20, (h) LNG, and (i) MeOH.
Figure 5. Sensitivity analysis for fully autonomous HSPF: (a) HFO, (b) MDO, (c) MGO, (d) H2-Ren, (e) H2-F, (f) Elec, (g) B20, (h) LNG, and (i) MeOH.
Figure 6. Sensitivity analysis for conventional tugboat: (a) HFO, (b) MDO, (c) MGO, (d) H2-Ren, (e) H2-F, (f) Elec, (g) B20, (h) LNG, and (i) MeOH.
Figure 7. Sensitivity analysis for fully autonomous tugboat: (a) HFO, (b) MDO, (c) MGO, (d) H2-Ren, (e) H2-F, (f) Elec, (g) B20, (h) LNG, and (i) MeOH.
Figure 8. Result of stochastic analysis for fully autonomous HSPF and tugboat powered by LNG fuel.
21 pages, 7007 KiB  
Article
LEM-Detector: An Efficient Detector for Photovoltaic Panel Defect Detection
by Xinwen Zhou, Xiang Li, Wenfu Huang and Ran Wei
Appl. Sci. 2024, 14(22), 10290; https://doi.org/10.3390/app142210290 - 8 Nov 2024
Viewed by 456
Abstract
Photovoltaic panel defect detection presents significant challenges due to the wide range of defect scales, diverse defect types, and severe background interference, often leading to a high rate of false positives and missed detections. To address these challenges, this paper proposes the LEM-Detector, an efficient end-to-end photovoltaic panel defect detector based on the transformer architecture. To address the low detection accuracy for Crack and Star crack defects and the imbalanced dataset, a novel data augmentation method, the Linear Feature Augmentation (LFA) module, specifically designed for linear features, is introduced. LFA effectively improves model training performance and robustness. Furthermore, the Efficient Feature Enhancement Module (EFEM) is presented to enhance the receptive field, suppress redundant information, and emphasize meaningful features. To handle defects of varying scales, complementary semantic information from different feature layers is leveraged for enhanced feature fusion. A Multi-Scale Multi-Feature Pyramid Network (MMFPN) is employed to selectively aggregate boundary and category information, thereby improving the accuracy of multi-scale target recognition. Experimental results on a large-scale photovoltaic panel dataset demonstrate that the LEM-Detector achieves a detection accuracy of 94.7% for multi-scale defects, outperforming several state-of-the-art methods. This approach effectively addresses the challenges of photovoltaic panel defect detection, paving the way for more reliable and accurate defect identification systems. This research will contribute to the automatic detection of surface defects in industrial production, ultimately enhancing production efficiency.
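The paper's Linear Feature Augmentation (LFA) module is not described in this listing, so it is not reproduced here. As a generic illustration of detection-style augmentation only, the sketch below flips an image horizontally and mirrors its bounding boxes; the image size and the crack-like box are made up.

```python
# Generic detection-style augmentation (horizontal flip of image + boxes).
# Illustration only; this is NOT the paper's Linear Feature Augmentation (LFA).
import numpy as np

def hflip(image, boxes):
    """image: (H, W, C) array; boxes: (N, 4) as [x_min, y_min, x_max, y_max]."""
    h, w = image.shape[:2]
    flipped = image[:, ::-1, :].copy()
    fb = boxes.copy().astype(float)
    fb[:, [0, 2]] = w - boxes[:, [2, 0]]          # mirror x coordinates, keep ordering
    return flipped, fb

img = np.zeros((512, 512, 3), dtype=np.uint8)
boxes = np.array([[40, 100, 90, 460]])            # a tall, thin "crack-like" defect box
aug_img, aug_boxes = hflip(img, boxes)
print(aug_boxes)                                  # [[422. 100. 472. 460.]]
```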
Show Figures

Figure 1. Common defect types in photovoltaic panels.
Figure 2. Overall framework of the LEM-Detector.
Figure 3. The architecture of the LFA.
Figure 4. Effects of data augmentation ((A): defect overlap; (B): small-scale Star crack; (C): small-scale Finger; (D): crack crossing the busbar).
Figure 5. The architecture of the EFEM.
Figure 6. The architecture of the CIA.
Figure 7. The proposed LEM-Detector achieves state-of-the-art performance when compared to existing prominent object detectors.
Figure 8. P-R curve of LEM-Detector.
Figure 9. Detection results of LEM-Detector.
Figure 10. Heatmaps of the feature extraction stage.
Figure 11. Heatmaps of the feature fusion stage.
14 pages, 6699 KiB  
Article
TPTrans: Vessel Trajectory Prediction Model Based on Transformer Using AIS Data
by Wentao Wang, Wei Xiong, Xue Ouyang and Luo Chen
ISPRS Int. J. Geo-Inf. 2024, 13(11), 400; https://doi.org/10.3390/ijgi13110400 - 7 Nov 2024
Viewed by 640
Abstract
The analysis of large amounts of vessel trajectory data can facilitate more complex traffic management and route planning, thereby reducing the risk of accidents. The application of deep learning methods in vessel trajectory prediction is becoming more and more widespread; however, due to the complexity of the marine environment, including the influence of geographical environmental factors, weather factors, and real-time traffic conditions, predicting trajectories in less constrained maritime areas is more challenging than in path network conditions. Ship trajectory prediction methods based on kinematic formulas work well in ideal conditions but struggle with real-world complexities. Machine learning methods avoid kinematic formulas but fail to fully leverage complex data due to their simple structure. Deep learning methods, which do not require preset formulas, still face challenges in achieving high-precision and long-term predictions, particularly with complex ship movements and heterogeneous data. This study introduces an innovative model based on the transformer structure to predict the trajectory of a vessel. First, by processing the raw AIS (Automatic Identification System) data, we provide the model with a more efficient input format and data that are both more representative and concise. Secondly, we combine convolutional layers with the transformer structure, using convolutional neural networks to extract local spatiotemporal features in sequences. The encoder and decoder structure of the traditional transformer structure is retained by us. The attention mechanism is used to extract the global spatiotemporal features of sequences. Finally, the model is trained and tested using publicly available AIS data. The prediction results on the field data show that the model can predict trajectories including straight lines and turns under the field data of complex terrain, and in terms of prediction accuracy, our model can reduce the mean squared error by at least 6 × 10⁻⁴ compared with the baseline model.
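The sliding-window preparation of trajectory sequences referred to in the listing (Figure 3) is a standard step: each window of past fixes becomes a model input and the following fixes become the prediction target. A sketch is below; the window and horizon lengths and the feature layout are chosen arbitrarily.

```python
# Sketch: build (input window, prediction target) pairs from one cleaned trajectory
# with a sliding window. Window/horizon sizes and the feature layout are assumed.
import numpy as np

def sliding_windows(traj, window=8, horizon=4):
    """traj: (T, F) array of per-fix features, e.g. [lat, lon, sog, cog]."""
    X, Y = [], []
    for start in range(len(traj) - window - horizon + 1):
        X.append(traj[start:start + window])                          # model input
        Y.append(traj[start + window:start + window + horizon, :2])   # future positions only
    return np.stack(X), np.stack(Y)

traj = np.column_stack([np.linspace(31.0, 31.2, 40),    # lat
                        np.linspace(121.0, 121.5, 40),  # lon
                        np.full(40, 9.0),               # sog
                        np.full(40, 45.0)])             # cog
X, Y = sliding_windows(traj)
print(X.shape, Y.shape)    # (29, 8, 4) (29, 4, 2)
```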
Figure 1
<p>Visualization of the raw AIS data.</p>
Full article ">Figure 2
<p>Distribution of trajectory lengths.</p>
Full article ">Figure 3
<p>The sliding window method.</p>
Full article ">Figure 4
<p>Visualization of the processed AIS data.</p>
Full article ">Figure 5
<p>Framework of the vessel trajectory prediction model.</p>
Full article ">Figure 6
<p>Prediction performance.</p>
Full article ">Figure 7
<p>Comparison between the predicted trajectory and the actual trajectory.</p>
Full article ">Figure 8
<p>Comparison between the predicted trajectory and the actual trajectory.</p>
Full article ">
25 pages, 976 KiB  
Article
HT-PGFV: Security-Aware Hardware Trojan Security Property Generation and Formal Security Verification Scheme
by Maoyuan Qin, Jiale Li, Jiaqi Yan, Zishuai Hao, Wei Hu and Baolong Liu
Electronics 2024, 13(21), 4286; https://doi.org/10.3390/electronics13214286 - 31 Oct 2024
Viewed by 471
Abstract
Property-driven hardware verification provides a promising way to uncover design vulnerabilities. However, developing security properties that check for highly concealed vulnerabilities remains a significant challenge. In this paper, we propose a scheme, called HT-PGFV, that automatically generates hardware Trojan security property assertions and formally verifies the security of Trojan-infected designs. In our scheme, we develop an automated assertion generation method that extracts hardware Trojan security properties from Trojan-infected designs through three main steps: feature-matching-based identification of Trojan-infected signals, influence-cone-analysis-based Trojan path identification, and information flow trace mining; the extracted properties are then formulated as SystemVerilog assertions. In addition, we develop a formal security verification method based on information flow analysis that verifies the hardware Trojan security properties and detects hardware Trojans violating information flow security policies by checking the security of information flows via our RT-level hardware information flow security models. The proposed method is demonstrated on several Trojan benchmarks from Trust-Hub. Experimental results show that our scheme can generate hardware Trojan security property assertions for Trojan-infected designs and detect information-leakage and functionality-change hardware Trojans activated by external inputs or internal conditions. Full article
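To illustrate the kind of feature matching the abstract describes for identifying Trojan-infected signals (large self-loop structures, wide fan-in trigger signals, and rarely executed branches; see Figures 5-7 below), here is a toy Python sketch over a hand-made signal dependency graph. The signal names, thresholds, and activation statistics are hypothetical; the actual scheme operates on CDFGs extracted from RTL and is followed by influence cone analysis and formal information flow verification.

```python
# Toy feature-matching pass over an RTL-like dependency graph.
# fanin: signal name -> list of source signals driving it.
# activation: fraction of simulation cycles in which the signal's
# controlling branch was taken (a stand-in for the paper's
# "branch execution probability" feature).

fanin = {
    "count_en":   ["clk", "rst"],
    "trig_state": ["din0", "din1", "din2", "din3",
                   "din4", "din5", "din6", "din7", "trig_state"],  # self-loop
    "leak_out":   ["trig_state", "key"],
    "data_out":   ["data_in", "count_en"],
}
activation = {"count_en": 0.48, "trig_state": 0.0002,
              "leak_out": 0.0002, "data_out": 0.51}

def candidate_triggers(fanin, activation,
                       min_fanin=6, max_activation=0.001):
    """Flag signals with a wide fan-in, a self-loop, and a branch that
    is almost never taken -- a crude stand-in for feature matching."""
    hits = []
    for sig, srcs in fanin.items():
        wide = len(srcs) >= min_fanin
        self_loop = sig in srcs
        rare = activation.get(sig, 1.0) <= max_activation
        if wide and self_loop and rare:
            hits.append(sig)
    return hits

print(candidate_triggers(fanin, activation))   # ['trig_state']
```

In the full scheme, each flagged signal would seed influence cone analysis and an information-flow property (formulated as a SystemVerilog assertion) rather than being reported directly.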
Figure 1
<p>The general flow of CDFG construction.</p>
Full article ">Figure 2
<p>Hardware information flow analysis.</p>
Full article ">Figure 3
<p>Hardware Trojan classification.</p>
Full article ">Figure 4
<p>The proposed scheme (HT-PGFV).</p>
Full article ">Figure 5
<p>Large self-loop structure.</p>
Full article ">Figure 6
<p>In-degree of trigger signal.</p>
Full article ">Figure 7
<p>Branch execution probability.</p>
Full article ">Figure 8
<p>Example of the RTL control and data flow graph.</p>
Full article ">Figure 9
<p>Influence cone analysis.</p>
Full article ">Figure 10
<p>AES-T400 design with a hardware Trojan.</p>
Full article ">Figure 11
<p>Trigger signal checking for AES-T400.</p>
Full article ">Figure 12
<p>(<b>a</b>) Formal verification result for AES-T400. (<b>b</b>) The wave of the counterexample.</p>
Full article ">Figure 13
<p>AES-T800 design with a hardware Trojan.</p>
Full article ">Figure 14
<p>Trigger signal checking for AES-T800.</p>
Full article ">Figure 15
<p>(<b>a</b>) Formal verification result for AES-T800. (<b>b</b>) The wave of the counterexample.</p>
Full article ">Figure 16
<p>AES-T900 design with a hardware Trojan.</p>
Full article ">Figure 17
<p>Trigger signal checking for AES-T900.</p>
Full article ">Figure 18
<p>AES-T2300 design with a hardware Trojan.</p>
Full article ">Figure 19
<p>Trigger signal checking for AES-T2300.</p>
Full article ">Figure 20
<p>(<b>a</b>) Formal verification result for AES-T2300. (<b>b</b>) The wave of the counterexample.</p>
Full article ">
30 pages, 8567 KiB  
Article
The Environmental Niche of the Light Purse Seine Fleet in the Northwest Pacific Ocean Based on Automatic Identification System Data
by Shenglong Yang, Lijun Wan, Linlin Yu, Jiashu Shi, Weifeng Zhou, Shengmao Zhang, Fei Wang, Zuli Wu, Yang Dai, Keji Jiang and Wei Fan
J. Mar. Sci. Eng. 2024, 12(11), 1944; https://doi.org/10.3390/jmse12111944 - 31 Oct 2024
Viewed by 438
Abstract
Ecosystem-based fisheries management requires high-precision fisheries information to support natural resource management, assessment, and marine spatial planning. This study utilizes Automatic Identification System (AIS) data from Chinese mainland light purse seine vessels collected from May to November between 2020 and 2022, along with the corresponding environmental data. By applying boosted regression trees (BRTs) and generalized additive models (GAMs), it establishes nonlinear relationships between fishing intensity and the predictor variables and explores the ecological and environmental drivers behind the spatial distribution of these vessels in the Northwest Pacific. The analysis identifies the key drivers and reveals distinct monthly preferences for different marine environments, with chlorophyll-a being the most influential factor. The predicted fishing effort closely matches the observed data, providing valuable information to support fisheries resource management and planning. Full article
(This article belongs to the Section Ocean Engineering)
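As a rough illustration of the boosted-regression-tree step described in the abstract, the sketch below fits shallow, slowly learned trees to synthetic gridded data and reports relative variable influence. The predictor names, effect sizes, and hyperparameters are assumptions for illustration only, not the authors' pipeline (which may, for instance, use R's gbm/dismo implementations).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for gridded predictors such as chlorophyll-a and
# sea surface temperature; names and effect sizes are made up.
rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 4))            # columns: [chl_a, sst, ssh, mld]
effort = (1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2
          + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, effort, random_state=0)

# A BRT analogue: many shallow trees, small learning rate, subsampling.
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.75,
                                random_state=0)
brt.fit(X_tr, y_tr)

print("R^2 on held-out grid cells:", round(brt.score(X_te, y_te), 3))
for name, imp in zip(["chl_a", "sst", "ssh", "mld"],
                     brt.feature_importances_):
    print(f"{name:6s} relative influence: {imp:.2f}")
```

The fitted relative influences play the same role as the "relative influences of spatial and environmental variables on the FE" reported in Figure 7 below.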
Figure 1
<p>The Chinese mainland light purse seine vessel track map from 2020 to 2022.</p>
Full article ">Figure 2
<p>(<b>A</b>) Spatial distribution map of the average fishing effort by Chinese mainland light purse seine vessels from 2020 to 2022. (<b>B</b>) Density curve of histograms of the average fishing effort from 2020 to 2022. (<b>C</b>) Q-Q diagram showing the cumulative quantile of fishing effort against the corresponding cumulative quantile of fishing areas.</p>
Full article ">Figure 3
<p>(<b>A</b>) Daily fishing effort distribution curve of Chinese mainland light purse seine vessels from 2020 to 2022. (<b>B</b>) Density curve of histograms of the average daily fishing effort from 2020 to 2022. (<b>C</b>) Q-Q diagram showing the cumulative quantile of fishing effort against the corresponding cumulative percentage of days.</p>
Full article ">Figure 4
<p>(<b>A</b>) VIF values for each environmental variable before removing variables. (<b>B</b>) VIF values for each environmental variable after removing variables.</p>
Full article ">Figure 5
<p>The linear correlation coefficients among the remaining variables after removing variables.</p>
Full article ">Figure 6
<p>(<b>A</b>) Boxplot of the average deviance explained by the BRT model for each month; (<b>B</b>) Boxplot of the average AUC values of the BRT model for each month.</p>
Full article ">Figure 7
<p>The relative influences of spatial and environmental variables on the FE.</p>
Full article ">Figure 8
<p>The potential activity distributions of Chinese mainland light seine vessels in the Northwest Pacific for 2022 compared with the actual distributions.</p>
Full article ">Figure A1
<p>The relationship between predictive deviance and number of trees for BRT model under different learning rate and tree complexity in each month.</p>
Full article ">Figure A1 Cont.
<p>The relationship between predictive deviance and number of trees for BRT model under different learning rate and tree complexity in each month.</p>
Full article ">Figure A1 Cont.
<p>The relationship between predictive deviance and number of trees for BRT model under different learning rate and tree complexity in each month.</p>
Full article ">Figure A2
<p>Influence of latitude and longitude on FE in each month.</p>
Full article ">Figure A2 Cont.
<p>Influence of latitude and longitude on FE in each month.</p>
Full article ">Figure A3
<p>Influence of important environmental variables on FE in each month.</p>
Full article ">Figure A3 Cont.
<p>Influence of important environmental variables on FE in each month.</p>
Full article ">Figure A4
<p>Q-Q Plot for the GAM model in each month.</p>
Full article ">Figure A4 Cont.
<p>Q-Q Plot for the GAM model in each month.</p>
Full article ">Figure A4 Cont.
<p>Q-Q Plot for the GAM model in each month.</p>
Full article ">Figure A4 Cont.
<p>Q-Q Plot for the GAM model in each month.</p>
Full article ">Figure A5
<p>Global Sensitivity Analysis Results.</p>
Full article ">Figure A5 Cont.
<p>Global Sensitivity Analysis Results.</p>
Full article ">Figure A5 Cont.
<p>Global Sensitivity Analysis Results.</p>
Full article ">