Search Results (602)

Search Parameters:
Keywords = ship extraction

17 pages, 10087 KiB  
Article
Numerical Analysis of Roll Hydrodynamic Coefficients of 2D Triangular Cylinder Using OpenFOAM
by Eunchong Hwang and Kyung-Kyu Yang
J. Mar. Sci. Eng. 2025, 13(3), 391; https://doi.org/10.3390/jmse13030391 - 20 Feb 2025
Abstract
Predicting the roll damping coefficient of a ship is a crucial factor in determining the dynamic stability of the vessel. However, a nonlinear analysis that considers the viscosity of the fluid is required to accurately estimate the roll damping coefficient. This study numerically analyzed the hydrodynamic coefficients related to the roll motion of ships, focusing on the eddy-making damping coefficient. A series of forced vibration tests were conducted on a two-dimensional triangular cylinder floating on the water surface. The overset method and the volume-of-fluid method were applied, and the governing equations were solved using the open-source software OpenFOAM v2106. Uncertainties in the grid size and time intervals were identified through the International Towing Tank Conference (ITTC) procedure, and the obtained hydrodynamic coefficients were compared with available experimental data and potential flow results. Additionally, eddy-making damping was extracted from the shed vortex for various excitation frequencies and amplitudes. The study found that the uncertainty in the roll damping coefficient was less than 8%, with eddy-making damping being the dominant factor influencing the results. Numerical results showed a good agreement with experimental data, with an average deviation of 4.4%, highlighting the importance of considering nonlinear effects at higher excitation amplitudes. Comparison with experimental data and empirical formulas revealed that the nonlinearity due to the excitation amplitude must be considered in empirical formulations. Full article
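For readers who want to see the kind of post-processing this entry describes, here is a minimal, hedged sketch of extracting added moment of inertia and damping from a forced-roll moment time history. It assumes the linear model M(t) = −a44·φ̈ − b44·φ̇ − c44·φ and a pure sinusoidal forcing; it is illustrative only and is not the authors' OpenFOAM workflow.

```python
import numpy as np

def roll_coefficients(t, moment, phi0, omega, c44=0.0):
    """Estimate added moment of inertia a44 and damping b44 from a forced-roll test
    phi(t) = phi0*sin(omega*t), assuming the linear model
    M(t) = -a44*phi_dd - b44*phi_d - c44*phi (illustrative, not the paper's exact scheme)."""
    T = t[-1] - t[0]                                            # should span whole forcing periods
    Ms = 2.0 * np.trapz(moment * np.sin(omega * t), t) / T      # component in phase with phi
    Mc = 2.0 * np.trapz(moment * np.cos(omega * t), t) / T      # component in phase with phi_dot
    a44 = (Ms + c44 * phi0) / (phi0 * omega**2)                 # inertia-related part
    b44 = -Mc / (phi0 * omega)                                  # damping-related part
    return a44, b44

# Synthetic check: build a moment signal with known coefficients and recover them.
t = np.linspace(0.0, 20 * np.pi, 20000)                         # ten forcing periods at omega = 1 rad/s
phi0, omega, a44_true, b44_true = 0.05, 1.0, 2.0, 0.5
M = -a44_true * (-phi0 * omega**2 * np.sin(omega * t)) - b44_true * (phi0 * omega * np.cos(omega * t))
print(roll_coefficients(t, M, phi0, omega))                     # ~ (2.0, 0.5)
```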
Show Figures

Figure 1: Schematic view of interpolation methods for overset mesh: (a) cell-volume weighting interpolation; (b) inverse distance weighting interpolation.
Figure 2: Comparison of added moment of inertia between interpolation methods for overset mesh.
Figure 3: Geometric specifications of the target cross-sectional shape.
Figure 4: Schematic view of computational domain and the corresponding boundary condition.
Figure 5: Generated grids in each region of computational domain.
Figure 6: Three grids near the vertex of the body for uncertainty analysis: (a) coarse grid (2.25 × 2.25); (b) medium grid (1.5 × 1.5); (c) fine grid (1.0 × 1.0).
Figure 7: Grid convergence test: (a) time histories of roll moment; (b) convergence of hydrodynamic coefficient depending on the size of time windows.
Figure 8: Total uncertainty of the numerical solutions: (a) roll into roll; (b) roll into sway.
Figure 9: Comparison of added moment of inertia and damping coefficients in roll into roll: (a) added moment of inertia coefficient; (b) damping coefficient. The dashed line shows the potential theory results [6] and the hollow symbol represents the experiment [6].
Figure 10: Comparison of added mass and damping coefficients in roll into sway: (a) added mass coefficient; (b) damping coefficient. The dashed line shows the potential theory results [6] and the hollow symbol represents the experiment [6].
Figure 11: Q-criterion contour and velocity vector around the rolling triangular cylinder, φ0 = 0.05 rad, ω′ = 1.594 (left column) and ω′ = 0.531 (right column): (a) t = t0; (b) t = t0 + T/3; (c) t = t0 + 2T/3.
Figure 12: Q-criterion contour and velocity vector around the rolling triangular cylinder, φ0 = 0.1 rad, ω′ = 1.594 (left column) and ω′ = 0.531 (right column): (a) t = t0; (b) t = t0 + T/3; (c) t = t0 + 2T/3.
Figure 13: Decomposition of damping components: (a) at φ0 = 0.05 rad; (b) at φ0 = 0.1 rad.
Figure 14: Extracted circumferential velocities: (a) schematic view of extracting the circumferential velocity and definition of r-direction; (b) extracted circumferential velocity distribution along r-direction at different time instants.
Figure 15: Comparison of experiment, empirical formula, and numerical eddy-making roll damping results: (a) at φ0 = 0.05 rad; (b) at φ0 = 0.1 rad. The solid line shows the empirical formula result [11] and the circle symbol represents the experiment [6].
21 pages, 6412 KiB  
Article
Inverse Synthetic Aperture Radar Image Multi-Modal Zero-Shot Learning Based on the Scattering Center Model and Neighbor-Adapted Locally Linear Embedding
by Xinfei Jin, Hongxu Li, Xinbo Xu, Zihan Xu and Fulin Su
Remote Sens. 2025, 17(4), 725; https://doi.org/10.3390/rs17040725 - 19 Feb 2025
Abstract
Inverse Synthetic Aperture Radar (ISAR) images are extensively used in Radar Automatic Target Recognition (RATR) for non-cooperative targets. However, acquiring training samples for all target categories is challenging. Recognizing target classes without training samples is called Zero-Shot Learning (ZSL). When ZSL involves multiple modalities, it becomes Multi-modal Zero-Shot Learning (MZSL). To achieve MZSL, a framework is proposed for generating ISAR images with optical image aiding. The process begins by extracting edges from optical images to capture the structure of ship targets. These extracted edges are used to estimate the potential locations of the target’s scattering centers. Using the Geometric Theory of Diffraction (GTD)-based scattering center model, the edges’ ISAR images are generated from the scattering centers. Next, a mapping is established between the edges’ ISAR images and the actual ISAR images. Neighbor-Adapted Local Linear Embedding (NALLE) generates pseudo-ISAR images for the unseen classes by combining the edges’ ISAR images with the actual ISAR images from the seen classes. Finally, these pseudo-ISAR images serve as training samples, enabling the recognition of test samples. In contrast to the network-based approaches, this method requires only a limited number of training samples. Experiments based on simulated and measured data validate the effectiveness. Full article
(This article belongs to the Section Remote Sensing Image Processing)
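The neighbour-weighted reconstruction step that the abstract attributes to NALLE can be illustrated with a generic LLE-style sketch. The patch shapes, neighbour count K, and regularisation value below are assumptions for illustration; this is not the published NALLE code.

```python
import numpy as np

def lle_style_reconstruction(query, edge_patches, isar_patches, K=3, reg=1e-3):
    """Reconstruct a pseudo-ISAR patch for `query` (an edge-ISAR patch of an unseen class)
    from its K nearest edge-ISAR patches of seen classes, reusing the same weights on the
    corresponding real ISAR patches. Illustrative LLE-style step, not the authors' code."""
    X = edge_patches.reshape(len(edge_patches), -1)        # (N, D) flattened edge-ISAR patches
    q = query.reshape(-1)                                  # (D,) flattened query patch
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:K]    # K nearest neighbours in edge space
    Z = X[idx] - q                                         # neighbourhood relative to the query
    C = Z @ Z.T + reg * np.trace(Z @ Z.T) * np.eye(K)      # regularised local Gram matrix
    w = np.linalg.solve(C, np.ones(K))
    w /= w.sum()                                           # sum-to-one reconstruction weights
    Y = isar_patches.reshape(len(isar_patches), -1)        # matching real ISAR patches
    return (w @ Y[idx]).reshape(query.shape)               # transfer the weights to the ISAR domain

# Toy usage with random 8x8 patches (shapes are assumptions for illustration only).
rng = np.random.default_rng(0)
edge_bank, isar_bank = rng.random((50, 8, 8)), rng.random((50, 8, 8))
pseudo = lle_style_reconstruction(rng.random((8, 8)), edge_bank, isar_bank, K=3)
```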
Show Figures

Figure 1: The flowchart of the proposed algorithm.
Figure 2: Optical image with its edges. (a) Optical image. (b) Edges of the optical image.
Figure 3: Process of NALLE generating a patch y_I^n from y_O^n (with K = 3), where the arrows indicate the process flow. To clearly distinguish between the optical and ISAR images, the left side shows patches from optical images in grayscale, while the right side shows patches from ISAR images in pseudo-color.
Figure 4: The 3D models, ISAR images, and optical images for simulated ships. (a–c) are the 3D model, ISAR image, and optical image of Target 1; (d–f) are the 3D model, ISAR image, and optical image of Target 2; (g–i) are the 3D model, ISAR image, and optical image of Target 3.
Figure 5: Output of each step in the proposed method for Target 3. (a) Optical image with an attitude similar to the ISAR image. (b) Grayscale image with pseudocolor after preprocessing. (c) ISAR image of edges generated by the scattering center models. (d) Pseudo-ISAR image synthesized by the NALLE algorithm. (e) ISAR image with an attitude similar to the optical image in (a).
Figure 6: Mean and standard deviation of the accuracy for the simulated data with varying K. (a) Mean accuracy. (b) Standard deviation of accuracy.
Figure 7: Mean and standard deviation of accuracy for the simulated data with varying patch sizes. (a) Mean accuracy for the unseen class. (b) Mean accuracy for all targets. (c) Standard deviation of accuracy for the unseen class. (d) Standard deviation of accuracy for all targets.
Figure 8: Optical and ISAR images of the measured ships. (a–c) are the optical images of barque, geared bulk carrier, and gearless bulk carrier, respectively; (d–f) are the ISAR images of barque, geared bulk carrier, and gearless bulk carrier, respectively.
Figure 9: Output of each step in the proposed method for the gearless bulk carrier. (a) Optical image with an attitude similar to the ISAR image. (b) Grayscale image with pseudocolor after preprocessing. (c) ISAR image of edges generated by the scattering center models. (d) Pseudo-ISAR image synthesized by the NALLE algorithm. (e) ISAR image with an attitude similar to the optical image in (a).
Figure 10: Mean and standard deviation of the accuracy for the measured data with varying K. (a) Mean accuracy. (b) Standard deviation of accuracy.
Figure 11: Mean and standard deviation of accuracy for the measured data with varying patch sizes. (a) Mean accuracy for the unseen class. (b) Mean accuracy for all targets. (c) Standard deviation of accuracy for the unseen class. (d) Standard deviation of accuracy for all targets.
18 pages, 6889 KiB  
Article
Machine Learning-Based Detection of Icebergs in Sea Ice and Open Water Using SAR Imagery
by Zahra Jafari, Pradeep Bobby, Ebrahim Karami and Rocky Taylor
Remote Sens. 2025, 17(4), 702; https://doi.org/10.3390/rs17040702 - 19 Feb 2025
Abstract
Icebergs pose significant risks to shipping, offshore oil exploration, and underwater pipelines. Detecting and monitoring icebergs in the North Atlantic Ocean, where darkness and cloud cover are frequent, is particularly challenging. Synthetic aperture radar (SAR) serves as a powerful tool to overcome these difficulties. In this paper, we propose a method for automatically detecting and classifying icebergs in various sea conditions using C-band dual-polarimetric images from the RADARSAT Constellation Mission (RCM) collected throughout 2022 and 2023 across different seasons from the east coast of Canada. This method classifies SAR imagery into four distinct classes: open water (OW), which represents areas of water free of icebergs; open water with target (OWT), where icebergs are present within open water; sea ice (SI), consisting of ice-covered regions without any icebergs; and sea ice with target (SIT), where icebergs are embedded within sea ice. Our approach integrates statistical features capturing subtle patterns in RCM imagery with high-dimensional features extracted using a pre-trained Vision Transformer (ViT), further augmented by climate parameters. These features are classified using XGBoost to achieve precise differentiation between these classes. The proposed method achieves a low false positive rate of 1% for each class and a missed detection rate ranging from 0.02% for OWT to 0.04% for SI and SIT, along with an overall accuracy of 96.5% and an area under curve (AUC) value close to 1. Additionally, when the classes were merged for target detection (combining SI with OW and SIT with OWT), the model demonstrated an even higher accuracy of 98.9%. These results highlight the robustness and reliability of our method for large-scale iceberg detection along the east coast of Canada. Full article
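A compressed sketch of the classification stage described above: precomputed ViT embeddings are concatenated with statistical and climate features and passed to XGBoost. Feature dimensions, labels, and hyperparameters are placeholders rather than the paper's settings, and the xgboost package is assumed to be installed.

```python
import numpy as np
from xgboost import XGBClassifier

# Placeholder feature blocks for N image patches (dimensions are illustrative assumptions):
N = 1000
vit_feats = np.random.rand(N, 768)    # embeddings from a pre-trained ViT, computed elsewhere
stat_feats = np.random.rand(N, 20)    # per-patch statistical descriptors of the RCM imagery
clim_feats = np.random.rand(N, 5)     # climate parameters attached to each patch
labels = np.random.randint(0, 4, N)   # 0 = OW, 1 = OWT, 2 = SI, 3 = SIT

X = np.hstack([vit_feats, stat_feats, clim_feats])   # fused feature vector per patch

# Gradient-boosted trees over the fused features (hyperparameters are not from the paper).
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X[:800], labels[:800])
probs = clf.predict_proba(X[800:])    # class probabilities for the held-out patches
```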
Show Figures

Figure 1: Distribution of targets over date and location.
Figure 2: These figures show four sample RGB images from the RCM dataset, where Red = HH, Green = HV, and Blue = (HH−HV)/2. (A,B) depict OW and SI, while (C,D) show icebergs in OW and SI. Only red circles highlight icebergs; other bright pixels represent clutter or sea ice.
Figure 3: Block diagram illustrating the proposed system.
Figure 4: The impact of despeckling on iceberg images in the HH channel from the SAR dataset, using mean, bilateral, and Lee filters.
Figure 5: (A) shows that feature #780 exhibits the most overlap and is considered a weak feature. (B) In contrast, feature #114 is the strongest feature, displaying the least overlap.
Figure 6: ROC curves for the evaluated models: (A) ViTFM, (B) StatFM, (C) ViTStatFM, and (D) ViTStatClimFM. The curves illustrate the classification performance across OW, OWT, SI, and SIT categories.
Figure 7: Confusion matrices depicting the classification performance of the hybrid model with climate features: (A) represents the classification performance across all four classes, (B) highlights the model's ability to distinguish between target-containing patches and those without targets, and (C) evaluates the classification of sea ice (SI and SIT) versus open water (OW and OWT).
Figure 8: Application of the proposed method to a calibrated RCM image acquired on 23 June 2023. (A) The RCM image overlaid on the Labrador coast. (B) Corresponding ice chart from the Canadian Ice Service for the same region and date. (C) Probability map for OW. (D) Probability map for SI. (E) Probability map for OWT. (F) Probability map for SIT.
Figure 9: An extracted section from the full RCM image captured on 23 June 2023, showing icebergs embedded in SI. Red triangles indicate ground truth points, while green circles represent model predictions.
Figure 10: Missed targets located near patch borders, illustrating boundary effects. (A) A missed target near the top-left patch border. (B) A missed target within a central region affected by boundary artifacts. (C) A missed target near the bottom-right patch border, highlighting prediction inconsistencies at patch edges.
35 pages, 37221 KiB  
Article
Target Ship Recognition and Tracking with Data Fusion Based on Bi-YOLO and OC-SORT Algorithms for Enhancing Ship Navigation Assistance
by Shuai Chen, Miao Gao, Peiru Shi, Xi Zeng and Anmin Zhang
J. Mar. Sci. Eng. 2025, 13(2), 366; https://doi.org/10.3390/jmse13020366 - 16 Feb 2025
Abstract
With the ever-increasing volume of maritime traffic, the risks of ship navigation are becoming more significant, making the use of advanced multi-source perception strategies and AI technologies indispensable for obtaining information about ship navigation status. In this paper, first, the ship tracking system was optimized using the Bi-YOLO network based on the C2f_BiFormer module and the OC-SORT algorithms. Second, to extract the visual trajectory of the target ship without a reference object, an absolute position estimation method based on binocular stereo vision attitude information was proposed. Then, a perception data fusion framework based on ship spatio-temporal trajectory features (ST-TF) was proposed to match GPS-based ship information with corresponding visual target information. Finally, AR technology was integrated to fuse multi-source perceptual information into the real-world navigation view. Experimental results demonstrate that the proposed method achieves a mAP0.5:0.95 of 79.6% under challenging scenarios such as low resolution, noise interference, and low-light conditions. Moreover, in the presence of the nonlinear motion of the own ship, the average relative position error of target ship visual measurements is maintained below 8%, achieving accurate absolute position estimation without reference objects. Compared to existing navigation assistance, the AR-based navigation assistance system, which utilizes ship ST-TF-based perception data fusion mechanism, enhances ship traffic situational awareness and provides reliable decision-making support to further ensure the safety of ship navigation. Full article
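The reference-free position estimate rests on standard binocular geometry: depth from disparity, then back-projection through the camera intrinsics. The sketch below shows only that textbook step with made-up camera parameters; it is not the paper's attitude-compensated method.

```python
import numpy as np

def stereo_to_camera_xyz(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a matched pixel (u, v) with a given disparity (pixels) into the left
    camera frame. Classic pinhole/stereo relations; all values below are illustrative."""
    Z = fx * baseline / disparity          # depth from disparity
    X = (u - cx) * Z / fx                  # lateral offset
    Y = (v - cy) * Z / fy                  # vertical offset
    return np.array([X, Y, Z])

# Example: a ship detected at pixel (1250, 540) with 18 px disparity, for an assumed
# 0.4 m baseline and 1400 px focal length (hypothetical camera, not the paper's rig).
p_cam = stereo_to_camera_xyz(1250, 540, 18.0, fx=1400, fy=1400, cx=960, cy=540, baseline=0.4)
print(p_cam)   # [X, Y, Z] in metres in the camera frame; rotating by the measured camera
               # attitude would express the point in a world/navigation frame.
```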
Show Figures

Figure 1: Organization diagram of the sections of this paper.
Figure 2: A perception data fusion framework based on ship ST-TF for ship AR navigation assistance.
Figure 3: The structure of the Bi-YOLO network.
Figure 4: (a) Details of a BiFormer block; (b) structure of the BRA.
Figure 5: (a) to (b) illustrate the Driving-Leaves binocular camera before and after calibration and stereo rectification, and (c) to (d) illustrate the Baymax binocular camera before and after calibration and stereo rectification.
Figure 6: Conceptual diagram of the binocular imaging process.
Figure 7: Illustration of coordinate system conversion.
Figure 8: Synchronization process of different sensor frequencies.
Figure 9: Asynchronous nonlinear ship trajectory sequence association based on the DTW algorithm.
Figure 10: Asynchronous ship trajectory association and joint data storage method.
Figure 11: The MASSs used in the experimental process.
Figure 12: The data samples from the FLShip dataset.
Figure 13: Training mAP@0.5 curves for Bi-YOLO and various object detection algorithms.
Figure 14: (a–f) respectively show the comparison of detection effects between YOLO11s and Bi-YOLO.
Figure 15: Tracking performance comparison of four state-of-the-art object trackers in Scene-2.
Figure 16: Tracking performance comparison of four state-of-the-art object trackers in Scene-4.
Figure 17: Tracking performance comparison of four state-of-the-art object trackers in Scene-6.
Figure 18: The visual position estimation results of the 'Roaring-Flame' MASS in Scene-1.
Figure 19: The visual position estimation result of the 'Baymax' MASS in Scene-2.
Figure 20: The visual position estimation results of the 'Baymax' MASS in Scene-3.
Figure 21: The AR navigation assistance effects of ships constructed at different timestamps in multiple scenes.
12 pages, 839 KiB  
Article
ISAR Image Quality Assessment Based on Visual Attention Model
by Jun Zhang, Zhicheng Zhao and Xilan Tian
Appl. Sci. 2025, 15(4), 1996; https://doi.org/10.3390/app15041996 - 14 Feb 2025
Abstract
The quality of ISAR (Inverse Synthetic Aperture Radar) images has a significant impact on the detection and recognition of targets. Therefore, ISAR image quality assessment is a fundamental prerequisite and primary link in the utilization of ISAR images. Previous ISAR image quality assessment methods typically extract hand-crafted features or use simple multi-layer networks to extract local features. Hand-crafted features and local features from networks usually lack the global information of ISAR images. Furthermore, most deep neural networks obtain feature representations by abridging the prediction quality score and the ground truth, neglecting to explore the strong correlations between features and quality scores in the stage of feature extraction. This study proposes a Gramin Transformer to explore the similarity and diversity of features extracted from different images, thus obtaining features containing quality-related information. The Gramin matrix of features is computed to obtain the score token through the self-attention layer. It prompts the network to learn more discriminative features, which are closely associated with quality scores. Despite the Transformer architecture’s ability to extract global information, the Channel Attention Block (CAB) can capture complementary information from different channels in an image, aggregating and mining information from these channels to provide a more comprehensive evaluation of ISAR images. ISAR images are formed from target scattering points with a background containing substantial silent noise, and the Inter-Region Attention Block (IRAB) is utilized to extract local scattering point features, which decide the clarity of target. In addition, extensive experiments are conducted on the ISAR image dataset (including space stations, ships, aircraft, etc.). The evaluation results of our method on the dataset are significantly superior to those of traditional feature extraction methods and existing image quality assessment methods. Full article
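The score-token idea is built on a Gram matrix of encoder features, i.e., all pairwise inner products between token vectors. A minimal sketch of that single step follows; the tensor shapes and the optional centring are assumptions, and the attention layer, CAB, and IRAB are omitted.

```python
import numpy as np

def gram_matrix(features):
    """Pairwise inner products between token/patch features.
    features: (num_tokens, dim) array from a Transformer encoder (shapes are assumptions)."""
    f = features - features.mean(axis=0, keepdims=True)   # optional centring for numerical stability
    return f @ f.T / f.shape[1]                           # (num_tokens, num_tokens) Gram matrix

tokens = np.random.rand(64, 192)       # e.g., 64 patch embeddings of width 192 (illustrative sizes)
G = gram_matrix(tokens)                # this matrix would feed a small attention layer that
                                       # produces the quality score token
```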
Show Figures

Figure 1: The architecture of the proposed model. The image is partitioned into 8 × 8-sized patches. Then, the linear projection layer performs a convolution operation on these patches to acquire patch embeddings, which can be processed by the Transformer encoder. The Gram–T model computes the Gram matrix of extracted features to obtain score tokens through one attention layer. Moreover, the CAB and IRAB strengthen interactions between channels and regions within the features output by the Transformer encoder. Finally, the prediction score from the IRAB is added to the score token from Gram–T to obtain the final score.
Figure 2: CAB module.
Figure 3: IRAB module.
Figure 4: Display of the results in the training process.
Figure 5: Example images from the ISAR image dataset. The first caption row below the images refers to the ground truth scores of the corresponding images, and the second row refers to prediction scores.
Figure 6: Attention heatmap analysis. The attention heatmap is on the left, and the original image is on the right. In our experiments, the blue color in the attention heatmap represents high weight, and the red color represents low weight.
20 pages, 12270 KiB  
Article
Spatial State Analysis of Ship During Berthing and Unberthing Process Utilizing Incomplete 3D LiDAR Point Cloud Data
by Ying Li and Tian-Qi Wang
J. Mar. Sci. Eng. 2025, 13(2), 347; https://doi.org/10.3390/jmse13020347 - 14 Feb 2025
Abstract
In smart ports, accurately perceiving the motion state of a ship during berthing and unberthing is essential for the safety and efficiency of the ship and port. However, in actual scenarios, the obtained data are not always complete, which impacts the accuracy of the ship’s motion state. This paper proposes a spatial visualization method to analyze a ship’s motion state in the incomplete data by introducing the GIS spatial theory. First, for the complete part under incomplete data, this method proposes a new technique named LGFCT to extract the key points of this part. Then, for the missing part under the incomplete data, this method applies the key point prediction technique based on the line features to extract the key points of this part. Note that the key points will be used to calculate the key parameters. Finally, spatial visualization and spatial-temporal tracking techniques are employed to spatially analyze the ship’s motion state. In summary, the proposed method not only spatially identifies a ship’s motion state for the incomplete data but also provides an intuitive visualization of a ship’s spatial motion state. The accuracy and effectiveness of the proposed method are verified through experimental data collected from a ship in Dalian Port, China. Full article
(This article belongs to the Section Ocean Engineering)
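As a rough illustration of extracting key points from the visible part of a hull, the sketch below projects the point cloud to the horizontal plane, takes the principal axis via PCA, and reads off the extreme points along it. This is a generic stand-in, not the paper's LGFCT or line-feature prediction technique, and the point cloud is synthetic.

```python
import numpy as np

def hull_key_points(points_xyz):
    """Estimate heading and fore/aft key points of the scanned part of a hull from a LiDAR
    point cloud of shape (N, 3). Generic PCA sketch, not the paper's LGFCT method."""
    xy = points_xyz[:, :2]                         # project onto the horizontal plane
    centre = xy.mean(axis=0)
    cov = np.cov((xy - centre).T)                  # 2x2 covariance of the planar points
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]          # principal (longitudinal) direction
    s = (xy - centre) @ axis                       # signed position of each point along that axis
    heading = np.degrees(np.arctan2(axis[1], axis[0]))
    return centre + s.min() * axis, centre + s.max() * axis, heading

# Synthetic slab of points standing in for one ship side seen by the LiDAR.
rng = np.random.default_rng(1)
cloud = np.column_stack([rng.uniform(0, 80, 5000), rng.uniform(0, 6, 5000), rng.uniform(0, 10, 5000)])
aft_pt, fwd_pt, heading_deg = hull_key_points(cloud)
```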
Show Figures

Figure 1: Schematic diagram for a ship's point cloud data collected by LiDAR: (a) complete data and (b) incomplete data.
Figure 2: The flowchart of the spatial state analysis of the ship berthing and unberthing processes with incomplete data.
Figure 3: Schematic diagram of the ellipse fitting technique for treating point cloud data: (a) space standard deviation ellipse technique and (b) LGFCT technique.
Figure 4: Schematic diagram of the rectangular fitting technique.
Figure 5: Schematic diagram of berthing and unberthing parameters when the missing part is under incomplete data: (a) the missing part is the ship's stern or (b) the missing part is the ship's bow.
Figure 6: Schematic diagram of the spatial-temporal tracking analysis: (a) trajectory point and (b) trajectory line.
Figure 7: The ship's unberthing process: (a) experiment scene and (b) point cloud data.
Figure 8: The ship's berthing process: (a) simulated scene and (b) point cloud data.
Figure 9: Point cloud data: (a) original point cloud data in the experiment, (b) preprocessed point cloud data in the experiment, (c) original point cloud data in the simulated scene, and (d) preprocessed point cloud data in the simulated scene.
Figure 10: Schematic diagram for the point cloud data obtained by longitudinal cutting: (a) the real experiment data for the ship's bow and (b) the simulated data for the ship's stern.
Figure 11: Schematic diagram of the motion state with the ship's bow as the complete part: (a) elliptical fitting and (b) the extracted key points.
Figure 12: Schematic diagram of the motion state with the ship's stern as the complete part: (a) rectangular fitting and (b) the extracted key points.
Figure 13: Schematic diagram of the key points for the missing part: (a) the real experiment data during the unberthing process and (b) the simulated data during the berthing process.
Figure 14: Comparison between reference solutions and calculated values for the missing part in the simulated data.
Figure 15: Parameters during the unberthing process: (a) distance, (b) velocity, and (c) departure angle of a ship relative to the shoreline.
Figure 16: Spatial visualization results for the line elements representing a ship.
Figure 17: Spatial visualization results for the line elements representing a ship after the reclassification process.
Figure 18: The spatial-temporal tracking results for the unberthing process.
24 pages, 687 KiB  
Article
MtAD-Net: Multi-Threshold Adaptive Decision Net for Unsupervised Synthetic Aperture Radar Ship Instance Segmentation
by Junfan Xue, Junjun Yin and Jian Yang
Remote Sens. 2025, 17(4), 593; https://doi.org/10.3390/rs17040593 - 9 Feb 2025
Abstract
In synthetic aperture radar (SAR) images, pixel-level Ground Truth (GT) is a scarce resource compared to Bounding Box (BBox) annotations. Therefore, exploring the use of unsupervised instance segmentation methods to convert BBox-level annotations into pixel-level GT holds great significance in the SAR field. However, previous unsupervised segmentation methods fail to perform well on SAR images due to the presence of speckle noise, low imaging accuracy, and gradual pixel transitions at the boundaries between targets and background, resulting in unclear edges. In this paper, we propose a Multi-threshold Adaptive Decision Network (MtAD-Net), which is capable of segmenting SAR ship images under unsupervised conditions and demonstrates good performance. Specifically, we design a Multiple CFAR Threshold-extraction Module (MCTM) to obtain a threshold vector by a false alarm rate vector. A Local U-shape Feature Extractor (LUFE) is designed to project each pixel of SAR images into a high-dimensional feature space, and a Global Vision Transformer Encoder (GVTE) is designed to obtain global features, and then, we use the global features to obtain a probability vector, which is the probability of each CFAR threshold. We further propose a PLC-Loss to adaptively reduce the feature distance of pixels of the same category and increase the feature distance of pixels of different categories. Moreover, we designed a label smoothing module to denoise the result of MtAD-Net. Experimental results on the dataset show that our MtAD-Net outperforms traditional and existing deep learning-based unsupervised segmentation methods in terms of pixel accuracy, kappa coefficient, mean intersection over union, frequency weighted intersection over union, and F1-Score. Full article
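The MCTM step — mapping a vector of false-alarm rates to a vector of candidate thresholds, which the learned probability vector then blends into one segmentation threshold — can be sketched with a basic global CFAR rule. The exponential clutter model (T = −μ·ln(Pfa)) is an illustrative assumption, not necessarily the paper's exact CFAR formulation.

```python
import numpy as np

def cfar_threshold_vector(image, pfa_vector):
    """Map a vector of false-alarm rates to intensity thresholds, assuming the background
    intensity is exponentially distributed with mean mu (illustrative clutter model):
    P(I > T) = exp(-T / mu)  =>  T = -mu * ln(Pfa)."""
    mu = np.mean(image)                          # crude background estimate over the whole chip
    return -mu * np.log(np.asarray(pfa_vector))

sar_chip = np.random.exponential(scale=1.0, size=(256, 256))   # synthetic SAR-like intensity chip
pfas = np.array([1e-1, 1e-2, 1e-3, 1e-4])
thresholds = cfar_threshold_vector(sar_chip, pfas)

# A learned probability vector p (e.g., from a GVTE-like encoder) would then blend them:
p = np.array([0.1, 0.2, 0.5, 0.2])
adaptive_T = p @ thresholds                      # inner product used as the segmentation threshold
mask = sar_chip > adaptive_T                     # binary ship/background decision before smoothing
```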
Show Figures

Figure 1: The overall architecture. In MCTM, thresholds corresponding to different false alarm rates are extracted. The LUFE module maps each pixel in the image to a high-dimensional feature space. The GVTE module employs a Vision Transformer structure to extract global features and maps these global features to the probabilities corresponding to the MCTM output thresholds. We use the designed loss function to update the weights of LUFE and GVTE. The inner product of the output vectors from the MCTM and GVTE modules serves as the segmentation threshold for the input image. After segmentation, the result undergoes smoothing through the label smoothing module to obtain the final segmentation result.
Figure 2: The image is divided into target region and noise region.
Figure 3: Illustration of the Local U-shape Feature Extractor. (a) The overall structure of the LUFE module. After the input image passes through three Down modules and three Up modules, the feature map of the image is obtained. (b) The specific structure of the Down module. (c) The specific structure of the Up module.
Figure 4: Illustration of the Global Vision Transformer Encoder. (a) The overall structure of the GVTE module. After Patch and Position Embedding, the input is fed into the Transformer Encoder, and the global features of the image are obtained. After MLP and SoftMax, the global features are mapped to the probabilities of each CFAR threshold. (b) The specific structure of the Transformer Encoder.
Figure 5: Schematic diagram of the role of the PLC-Loss function. (a) Distribution of different categories of pixels in the feature space before training. (b) Distribution of different categories of pixels in the feature space after training.
Figure 6: Qualitative analysis achieved by different unsupervised segmentation methods on a small-size ship in the SSDD dataset. (a) Input image, (b) GT, (c) CFAR, (d) OTSU, (e) Kim-Net, (f) PiCIE, (g) CDA-SAR, (h) IDUDL, (i) Ours. The false negative areas and false positive areas are highlighted in green and red.
Figure 7: Qualitative analysis achieved by different unsupervised segmentation methods on a small-size ship in the HRSID dataset. (a) Input image, (b) GT, (c) CFAR, (d) OTSU, (e) Kim-Net, (f) PiCIE, (g) CDA-SAR, (h) IDUDL, (i) Ours. The false negative areas and false positive areas are highlighted in green and red.
Figure 8: Qualitative analysis achieved by different unsupervised segmentation methods on a large ship in the SSDD dataset. (a) Input image, (b) GT, (c) CFAR, (d) OTSU, (e) Kim-Net, (f) PiCIE, (g) CDA-SAR, (h) IDUDL, (i) Ours. The false negative areas and false positive areas are highlighted in green and red.
Figure 9: Qualitative analysis achieved by different unsupervised segmentation methods on a large ship in the HRSID dataset. (a) Input image, (b) GT, (c) CFAR, (d) OTSU, (e) Kim-Net, (f) PiCIE, (g) CDA-SAR, (h) IDUDL, (i) Ours. The false negative areas and false positive areas are highlighted in green and red.
28 pages, 11323 KiB  
Article
Polarimetric SAR Ship Detection Using Context Aggregation Network Enhanced by Local and Edge Component Characteristics
by Canbin Hu, Hongyun Chen, Xiaokun Sun and Fei Ma
Remote Sens. 2025, 17(4), 568; https://doi.org/10.3390/rs17040568 - 7 Feb 2025
Abstract
Polarimetric decomposition methods are widely used in polarimetric Synthetic Aperture Radar (SAR) data processing for extracting scattering characteristics of targets. However, polarization SAR methods for ship detection still face challenges. The traditional constant false alarm rate (CFAR) detectors face sea clutter modeling and parameter estimation problems in ship detection, which is difficult to adapt to the complex background. In addition, neural network-based detection methods mostly rely on single polarimetric-channel scattering information and fail to fully explore the polarization properties and physical scattering laws of ships. To address these issues, this study constructed two novel characteristics: a helix-scattering enhanced (HSE) local component and a multi-scattering intensity difference (MSID) edge component, which are specifically designed to describe ship scattering characteristics. Based on the characteristic differences of different scattering components in ships, this paper designs a context aggregation network enhanced by local and edge component characteristics to fully utilize the scattering information of polarized SAR data. With the powerful feature extraction capability of a convolutional neural network, the proposed method can significantly enhance the distinction between ships and the sea. Further analysis shows that HSE is able to capture structural information about the target, MSID can increase ship–sea separation capability, and an HV channel retains more detailed information. Compared with other decomposition models, the proposed characteristic combination model performs well in complex backgrounds and can distinguish ship from sea more effectively. The experimental results show that the proposed method achieves a detection precision of 93.6% and a recall rate of 91.5% on a fully polarized SAR dataset, which are better than other popular network algorithms, verifying the reasonableness and superiority of the method. Full article
Show Figures

Figure 1: Ship scattering characteristics in four-component decomposition.
Figure 2: Enhancement comparison before (a) and after (b) the difference.
Figure 3: Structural diagram of context aggregation network based on local and edge component feature enhancement.
Figure 4: Scattering Structure Feature Extraction Network.
Figure 5: Detailed view of DCNblock module.
Figure 6: Structure of the CAM.
Figure 7: Low-Level Feature Guided Balanced Fusion Network for PolSAR.
Figure 8: Comparison of extracted characteristics from RADARSAT-2 data. (a1,a2) Pauli pseudocolor maps; (b1,b2) HSE; (c1,c2) MSID; (d1,d2) HH; and (e1,e2) HV.
Figure 9: Comparison of extracted characteristics from AIRSAR data. (a1,a2) Pauli pseudocolor maps; (b1,b2) HSE; (c1,c2) MSID; (d1,d2) HH; and (e1,e2) HV.
Figure 10: Comparison of extracted characteristics from UAVSAR data. (a1,a2) Pauli pseudocolor maps; (b1,b2) HSE; (c1,c2) MSID; (d1,d2) HH; and (e1,e2) HV.
Figure 11: 3D scatter plots of ship and sea characteristics. (a) Pauli pseudocolor map; (b) Pauli decomposition 3D scatter plot; (c) Freeman–Durden decomposition 3D scatter plot; and (d) proposed characteristics 3D scatter plot.
Figure 12: Distribution of target pixel sizes.
Figure 13: Comparison of ship detection results under different polarimetric characteristic combinations. Green rectangles indicate the ground truth, red rectangles indicate the detected results, blue circles indicate the false alarms, and orange circles indicate the missed detections. (a) Ground truth; (b) Pauli components; (c) Freeman–Durden components; (d) proposed method.
Figure 14: Comparison of feature maps under different backbone networks. (a) Pauli image; (b) feature map generated by the backbone network constructed with traditional convolutional blocks; (c) feature map generated by the proposed backbone network employing deformable convolutional blocks.
Figure 15: Comparison of ship detection results under different network modules. Green rectangles indicate the ground truth, red rectangles indicate the detected results, blue circles indicate the false alarms, and orange circles indicate the missed detections. (a) Ground truth; (b) CAM only; (c) DCNblock only; (d) both DCNblock and CAM.
Figure 16: Comparison of vessel detection results under different networks. Green rectangles indicate the ground truth, red rectangles indicate the detected results, blue circles indicate the false alarms, and orange circles indicate the missed detections. (a) Ground truth, (b) RetinaNet, (c) CenterNet, (d) Faster-RCNN, (e) YOLOv5, (f) YOLOv8, (g) MobileNet, (h) proposed method.
18 pages, 4135 KiB  
Article
Assessing the Impacts of the Israeli–Palestinian Conflict on Global Sea Transportation: From the View of Mass Tanker Trajectories
by Bing Zhang, Xiaohui Chen, Haiyan Liu, Lin Ye, Ran Zhang and Yunpeng Zhao
J. Mar. Sci. Eng. 2025, 13(2), 311; https://doi.org/10.3390/jmse13020311 - 7 Feb 2025
Abstract
Sea transportation plays a vital role in global trade, and studying the impact of emergencies on global sea transportation is essential to ensure the stability of trade. At present, the conflict between Palestine and Israel has attracted extensive attention worldwide. However, there is a lack of specific research on the impact of conflict on shipping, particularly on global shipping costs. By using the global vessel trajectory data of tankers from the Automatic Identification System (AIS) and taking the global sea transportation of large tankers as an example, this paper quantifies and visualizes the changes in global sea transportation before and after conflicts from a data-driven perspective. Firstly, the complete vessel trajectory, as well as the port of departure and the port of destination are extracted. Then, from the perspective of shipping cost and vessel traffic flow, we evaluate the vessel traffic flow changes caused by the conflict by using the route distance to replace the shipping costs and quantify the cost increase for the relevant countries caused by the vessel detour based on the shipping cost increment index. The research results show that after the outbreak of the conflict, the number of vessels passing through the Red Sea area has decreased significantly. About 3.1% of global vessels were affected, with global sea transportation costs of large tankers increasing by about 0.0825%. This study takes the Israeli–Palestinian conflict as an example and analyzes the impact of emergencies on the global sea transportation situation of tankers based on AIS data. The research results reveal the characteristics of international shipping to a certain extent and provide guidance for global sea transportation route planning. Full article
(This article belongs to the Special Issue Risk Assessment in Maritime Transportation)
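Because the study proxies shipping cost by route distance, the core computation reduces to summing great-circle legs along each extracted trajectory. A haversine sketch follows; the coordinates and the resulting cost-increment ratio are illustrative, not values from the paper.

```python
import numpy as np

def route_distance_km(lats, lons):
    """Great-circle length of a trajectory given arrays of latitudes/longitudes in degrees
    (haversine formula, Earth radius 6371 km)."""
    lat, lon = np.radians(lats), np.radians(lons)
    dlat, dlon = np.diff(lat), np.diff(lon)
    a = np.sin(dlat / 2) ** 2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2
    return float(np.sum(2 * 6371.0 * np.arcsin(np.sqrt(a))))

# Illustrative before/after comparison for one voyage (waypoints are made up):
suez_route = route_distance_km(np.array([25.0, 27.0, 30.0, 36.0]), np.array([55.0, 48.0, 32.5, 14.0]))
cape_route = route_distance_km(np.array([25.0, 12.0, -34.5, 36.0]), np.array([55.0, 45.0, 18.5, 14.0]))
cost_increment = (cape_route - suez_route) / suez_route   # relative detour cost proxy for this voyage
```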
Show Figures

Figure 1: Changes in vessel routes from the port of Abu Dhabi to the port of Odessa.
Figure 2: Vessel trajectory extraction results of the proposed algorithm of this paper.
Figure 3: Boxplot of shipping time before and after the Israeli–Palestinian conflict.
Figure 4: Changes in the traffic volume of oil tankers passing through the Bab-el-Mandeb Strait before and after the conflict.
Figure 5: The density maps of oil tanker trajectories for August and September 2023, as well as January and February 2024.
Figure 6: Chart of the change in the vessel traffic flow in the Bab-el-Mandeb Strait before and after the Israeli–Palestinian conflict.
Figure 7: Sankey diagrams of the shipping cost increments of various countries.
Figure 8: Changes in crude oil transportation flow of various countries from September 2023 to February 2024.
27 pages, 5726 KiB  
Article
RUL Prediction for Lithium Battery Systems in Fuel Cell Ships Based on Adaptive Modal Enhancement Networks
by Yifan Liu, Huabiao Jin, Xiangguo Yang, Telu Tang, Jiaxin Luo, Lei Han, Junting Lang and Weixin Zhao
J. Mar. Sci. Eng. 2025, 13(2), 296; https://doi.org/10.3390/jmse13020296 - 5 Feb 2025
Abstract
With the widespread application of fuel cell technology in the fields of transportation and energy, Battery Management Systems (BMSs) have become one of the key technologies for ensuring system stability and extending battery lifespan. As an auxiliary power source in fuel cell systems, the prediction of the Remaining Useful Life (RUL) of lithium-ion batteries is crucial for enhancing the reliability and efficiency of fuel cell ships. However, due to the complex degradation mechanisms of lithium batteries and the actual noisy operating conditions, particularly capacity regeneration noise, accurate RUL prediction remains a challenge. To address this issue, this paper proposes a lithium battery RUL prediction method based on an Adaptive Modal Enhancement Network (RIME-VMD-SEInformer). By incorporating an improved Variational Mode Decomposition (VMD) technique, the RIME algorithm is used to optimize decomposition parameters for the adaptive extraction of key modes from the signal. The Squeeze-and-Excitation Networks (SEAttention) module is employed to enhance the accuracy of feature extraction, and the sparse attention mechanism of Informer is utilized to efficiently model long-term dependencies in time series. This results in a comprehensive prediction framework that spans signal decomposition, feature enhancement, and time-series modeling. The method is validated on several public datasets, and the results demonstrate that each component of the RIME-VMD-SEInformer framework is both necessary and justifiable, leading to improved performance. The model outperforms the state-of-the-art models, with a MAPE of only 0.00837 on the B0005 dataset, representing a 59.96% reduction compared to other algorithms, showcasing outstanding prediction performance. Full article
(This article belongs to the Special Issue Marine Fuel Cell Technology: Latest Advances and Prospects)
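Whatever the decomposition and network details, the capacity series ultimately has to be framed as supervised sequence samples, with RUL read off where predicted capacity crosses the end-of-life threshold. The sketch below shows only that framing; the window length, threshold, and synthetic fade curve are assumptions, not the paper's settings.

```python
import numpy as np

def make_windows(capacity, window=16, horizon=1):
    """Turn a 1-D capacity-fade series into (input window, future value) pairs for a
    sequence model such as an Informer-style predictor. Sizes are illustrative."""
    X, y = [], []
    for i in range(len(capacity) - window - horizon + 1):
        X.append(capacity[i:i + window])
        y.append(capacity[i + window + horizon - 1])
    return np.array(X), np.array(y)

def rul_from_prediction(pred_capacity, eol_threshold):
    """Remaining useful life = number of predicted cycles until capacity first drops
    below the end-of-life threshold (returns the full horizon if it never does)."""
    below = np.where(pred_capacity < eol_threshold)[0]
    return int(below[0]) if below.size else len(pred_capacity)

cap = 2.0 * np.exp(-0.002 * np.arange(600)) + 0.01 * np.random.randn(600)   # synthetic fade curve
X, y = make_windows(cap, window=16)
print(X.shape, y.shape, rul_from_prediction(cap[300:], eol_threshold=1.4))
```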
Show Figures

Figure 1: Topology diagram of the power system for fuel cell ships.
Figure 2: RIME process diagram.
Figure 3: RIME-VMD process diagram.
Figure 4: Schematic diagram of SEAttention module structure.
Figure 5: Structure and self-attention mechanism in Transformer.
Figure 6: Structure of Informer.
Figure 7: Structure of the Informer Encoder.
Figure 8: The RUL prediction framework based on RIME-VMD-SEInformer.
Figure 9: Taking the B0005 battery as an example, the IMFs (a) and corresponding frequencies (b) after RIME-VMD decomposition.
Figure 10: Errors of different training sets on B0005 (a), B0006 (b), CS2_35 (c), and CS2_36 (d).
Figure 11: The comparison of evaluation indexes and the prediction effect of each model on B0005 (a,b), B0006 (c,d), CS2_35 (e,f), and CS2_36 (g,h).
Figure 12: Comparison of RIME-VMD-SEInformer and other models on B0005 (a), B0006 (b), CS2_35 (c), and CS2_36 (d).
16 pages, 14617 KiB  
Article
Room for Sea-Level Rise: Conceptual Perspectives to Keep The Netherlands Safe and Livable in the Long Term as Sea Level Rises
by Jos van Alphen, Stephan van der Biezen, Matthijs Bouw, Alex Hekman, Bas Kolen, Rob Steijn and Harm Albert Zanting
Water 2025, 17(3), 437; https://doi.org/10.3390/w17030437 - 5 Feb 2025
Abstract
An accelerated sea-level rise (SLR) may threaten the future livability of the Netherlands. Three perspectives to anticipate this SLR are elaborated here regarding technical, physical, and spatial aspects: Protect, Advance, and Accommodate. The overall objective was to explore the tools and measures that are available for adaptation, assess their spatial impacts, and identify dos and don’ts in current spatial issues like housing, climate adaptation, infrastructure, and the energy transition. Each elaboration was performed by a consortium consisting of representatives from private parties (engineering consultancy, project contractors, (landscape) architects, economists), knowledge institutes (including universities), and government, using an iterative process of model computations and design workshops. The elaborations made clear that a realistic and livable future perspective for the Dutch Delta continues to exist, even with a maximum analyzed SLR of 5 m, and will consist of a combination of elements from all three perspectives. This will require large investments and space for new and upgraded water infrastructure and will have large impacts on land use, water availability, agriculture, nature, residential buildings, shipping, and regional water systems. There is still a significant degree of uncertainty regarding future SLR; therefore, it is not advisable to make major investment decisions at this time. Nevertheless, some no-regret measures are already clear: continuation of the protection of the Randstad agglomeration (Amsterdam, The Hague, Rotterdam, and Utrecht) and its economic earning potential for future generations, adaptation of agriculture to more brackish and saline conditions, designation of space for additional future flood protection, extra storage capacity (for river discharge and increased precipitation), river discharge, and sand extraction (for future coastal maintenance). The research identified concrete actions for today’s decision-making processes, even though the time horizon of the analysis captures centuries. Including the perspectives in long term, policy planning is already necessary because the transition processes will take decades, if not more than a century, to be implemented. Full article
(This article belongs to the Special Issue Climate Risk Management, Sea Level Rise and Coastal Impacts)
Show Figures

Figure 1: Location of the Netherlands and topography of its main water systems and urban areas. Colored areas: blue—water or land below MSL; green—land around MSL; yellow/orange—above MSL. See also Figure 4 for more details in the southwestern area.
Figure 2: Cross-sections from sea to landward direction for different perspectives: the top five cross-sections show the sea, the estuarine areas, and the upper reaches of the Rhine and Meuse about 100 km inland, with five meters of SLR and extreme river discharge. The differences in water levels and salinization (reddish hue landward of sluice or dam) are caused by different combinations of measures such as closure, storage, pumping, and sluicing in the conceptual perspectives studied. Higher levels lead to dike upgrades in these cross-sections. The bottom cross-section is through land, and it shows different possible measures for the Accommodate perspective. Note that the scale of the x-axis is not equal to the scale of the y-axis.
Figure 3: The four strategies studied for the Protect perspective: top left: an open (closable) sea front with storm surge barriers, water level rising in line with SLR, and maintenance of the present discharge distribution in the Rhine's branches; top right: a concentrated peak discharge on the Waal branch combined with a new polder around Rotterdam and Dordrecht (protected by flood gates); bottom left: a closed sea front, with either pumping (to maintain present flood levels) or (bottom right) sluicing with high water levels that rise with SLR, enabling the discharge of excess river water to the sea. The Protect Open strategies favor navigation and natural dynamics but result in large dike upgrade programmes and increasing salinization. The Protect Closed strategies require large pumping volumes, which can be reduced by peak storage or sluicing excess water. The latter requires dike upgrades to combat rising river levels. The Protect Closed strategy blocks navigation, natural dynamics, and salinization. The latter is beneficial for fresh water supply and agriculture.
Figure 4: The Advance perspective, as it is elaborated for 5 m SLR, including the peak storage in the present water systems/former tidal basins in the southwestern part of the Netherlands (available storage surface 1000 km²), and the new coastline, creating an additional storage surface of 900 km² in front of the present coastline. As a result, the pumping capacity required in the Protect Closed strategy of 12,200 m³/s can be initially reduced to 3800 m³/s, to be gradually increased with rising SLR. In an optimized pumping strategy, anticipating peak river discharges, the required pumping capacity will be 8700 m³/s to maintain the present flood design levels. Access channels to the harbors of Antwerp and Rotterdam Maasvlakte remain open. The additional storage area creates a brackish environment, reducing inland salt intrusion, albeit with devastating effects on the coastal ecosystems.
Figure 5: Landscape impressions (images on map) and measures (in cross-section) in the Accommodate perspective: the map makes a broad distinction between the flood-prone areas in the Netherlands (dike ring 14/44, low-lying areas, rivers, and delta) and the high parts of the country. In the flood-prone areas, the Accommodate perspective requires changes to built-up areas and land use to reduce the vulnerability of the area. The area of dike rings 14 and 44 is shown separately, because further analyses led to the conclusion that this area will have to remain protected with dikes and dunes for as long as possible in order to maintain the earning capacity of the Netherlands. Blue arrows represent groundwater flow, white arrows represent saline groundwater intrusion, red arrows indicate vertical evacuation and flood-proofed buildings.
23 pages, 6523 KiB  
Essay
Data-Driven Analysis of Regional Ship Carbon Emission Reduction: The Bohai Bay Area Case Study
by Yangning Ning, Tao Li, Libo Yang and Bing Chen
Sustainability 2025, 17(3), 1159; https://doi.org/10.3390/su17031159 - 31 Jan 2025
Abstract
With the tightening of marine carbon emission reduction policies, the sustainable development of the shipping industry has attracted much attention, and it is of great significance to use Automatic Identification System (AIS) big data to study the carbon emissions of marine ships. Taking ships around Bohai Bay as the research object, this paper constructs a ship carbon emission calculation method driven by ship AIS trajectories. AIS information is extracted and the sailing status of each ship is determined; a carbon emission calculation model is then built on the AIS data, carbon emissions in 2023 are empirically measured, and their characteristics are analyzed. In addition, a speed simulation model is built to evaluate the impact of speed reduction on carbon emissions and to propose emission reduction measures. The results show that the carbon emissions of ships around Bohai Bay in 2023 were 8.8072 million tons, with cargo ships contributing the most and emissions in the cruising state being significant. A 10% reduction in speed would cut annual carbon emissions by about 6%. This study provides a reference for understanding the impact of speed on carbon emissions and for formulating emission reduction measures, and it can serve as a baseline for comparing historical and future data to support emission reduction by ports and shipping enterprises. Full article
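As an illustration of how an AIS-trajectory-driven emission estimate of this kind typically works (the paper's exact model, engine parameters, and emission factors are not reproduced here), the sketch below applies the propeller law (main-engine load proportional to the cube of speed) and illustrative fuel and CO2 factors to per-segment AIS records; all names and constants are assumptions. Note that this simplified cubic-law estimate overstates the savings from slow steaming compared with the fleet-level figure of about 6% reported above, because it ignores auxiliary engines, non-cruising states, and the effect of longer voyage times on other consumers.

```python
from dataclasses import dataclass

# Illustrative constants (assumptions, not values from the paper):
# CO2 emitted per gram of heavy fuel oil, and a specific fuel
# consumption in grams of fuel per kWh of main-engine output.
CO2_PER_FUEL = 3.114
SFC_G_PER_KWH = 195.0

@dataclass
class AisSegment:
    """One segment between two consecutive AIS reports."""
    speed_knots: float   # speed over ground reported by AIS
    hours: float         # time between the two reports

def segment_co2_tonnes(seg: AisSegment, rated_power_kw: float,
                       design_speed_knots: float, load_floor: float = 0.02) -> float:
    """Bottom-up estimate for one segment: engine load follows the
    propeller (cubic) law, emissions = power * time * SFC * CO2 factor."""
    load = min(1.0, max(load_floor, (seg.speed_knots / design_speed_knots) ** 3))
    energy_kwh = rated_power_kw * load * seg.hours
    fuel_g = energy_kwh * SFC_G_PER_KWH
    return fuel_g * CO2_PER_FUEL / 1e6  # grams -> tonnes

def voyage_co2_tonnes(segments, rated_power_kw, design_speed_knots):
    return sum(segment_co2_tonnes(s, rated_power_kw, design_speed_knots)
               for s in segments)

if __name__ == "__main__":
    # A hypothetical cargo ship: 10 MW main engine, 14 kn design speed.
    track = [AisSegment(speed_knots=12.0, hours=1.0) for _ in range(24)]
    base = voyage_co2_tonnes(track, 10_000, 14.0)
    # Same distance at 10% lower speed: sailing time scales by 1/0.9.
    slow = voyage_co2_tonnes(
        [AisSegment(s.speed_knots * 0.9, s.hours / 0.9) for s in track],
        10_000, 14.0)
    print(f"baseline: {base:.1f} t CO2, 10% slower: {slow:.1f} t CO2 "
          f"({(1 - slow / base) * 100:.0f}% lower)")
```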
Show Figures

Figure 1. Research framework diagram.
Figure 2. AIS data preprocessing logic diagram.
Figure 3. Comparison of AIS abnormal data processing.
Figure 4. Comparison of trajectories before and after interpolation.
Figure 5. Calculation process of carbon emissions.
Figure 6. Map of the study area.
Figure 7. Carbon emissions classification by different dimensions.
Figure 8. Change in monthly carbon emissions by ship.
Figure 9. Results of changes in spatial distribution of carbon emissions.
Figure 10. Simulated ship information.
Figure 11. Simulated data preprocessing logic diagram.
Figure 12. Carbon emission distribution at different speeds.
24 pages, 6606 KiB  
Article
Ship Anomalous Behavior Detection Based on BPEF Mining and Text Similarity
by Yongfeng Suo, Yan Wang and Lei Cui
J. Mar. Sci. Eng. 2025, 13(2), 251; https://doi.org/10.3390/jmse13020251 - 29 Jan 2025
Abstract
Maritime behavior detection is vital for maritime surveillance and management, ensuring safe ship navigation, normal port operations, marine environmental protection, and the prevention of illegal activities on water. Current methods for detecting anomalous vessel behaviors primarily rely on single time-series data or feature point analysis, which struggle to capture the relationships between vessel behaviors, limiting anomaly identification accuracy. To address this challenge, we propose a novel vessel anomaly detection framework, called BPEF-TSD. It integrates a ship behavior pattern recognition algorithm, the Smith–Waterman algorithm, and text similarity measurement methods. Specifically, we first introduce the BPEF mining framework to extract vessel behavior events from AIS data and then generate complete vessel behavior sequence chains through temporal combinations. Simultaneously, we employ the Smith–Waterman algorithm to achieve local alignment between the test vessel and known anomalous vessel behavior sequences. Finally, we evaluate the overall similarity between behavior chains based on the text similarity measure strategy, with vessels exceeding a predefined threshold being flagged as anomalous. The results demonstrate that the BPEF-TSD framework achieves over 90% accuracy in detecting abnormal trajectories in the waters of Xiamen Port, outperforming alternative methods such as LSTM, iForest, and HDBSCAN. This study contributes valuable insights for enhancing maritime safety and advancing intelligent supervision while introducing a novel research perspective on detecting anomalous vessel behavior through maritime big data mining. Full article
(This article belongs to the Section Ocean Engineering)
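To make the alignment step concrete, here is a minimal sketch of Smith–Waterman local alignment applied to two symbolic behavior chains, assuming each chain is a list of event labels. The match/mismatch/gap scores, the event names, and the normalisation to a 0–1 similarity are illustrative assumptions rather than the scoring actually used in BPEF-TSD; in the framework described above, the resulting similarity would then feed the text-similarity measure and the threshold test that flags a vessel as anomalous.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score between two behavior-event sequences.

    `a` and `b` are sequences of hashable event labels (e.g. strings
    such as "ANCHOR", "TURN", "SPEED_DROP"). Returns the best local
    alignment score."""
    n, m = len(a), len(b)
    # (n+1) x (m+1) score matrix initialised to zero.
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

def chain_similarity(test_chain, anomalous_chain, match=2, **kw):
    """Score of the test chain against a known anomalous chain, scaled to [0, 1]."""
    denom = match * min(len(test_chain), len(anomalous_chain))
    return smith_waterman(test_chain, anomalous_chain, match=match, **kw) / denom if denom else 0.0

if __name__ == "__main__":
    known_anomalous = ["ENTER_ZONE", "AIS_OFF", "LOITER", "AIS_ON", "EXIT_ZONE"]
    test = ["DEPART", "ENTER_ZONE", "AIS_OFF", "LOITER", "EXIT_ZONE"]
    sim = chain_similarity(test, known_anomalous)
    print(f"similarity = {sim:.2f}")  # flag the vessel if sim exceeds a threshold
```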
Show Figures

Figure 1. BPEF-TSD framework, which is divided into four parts: BPEF mining ship behavior pattern recognition, ship behavior sequence chain construction, behavior sequence chain similarity calculation, and ship abnormal behavior judgment.
Figure 2. Behavior pattern association and combination diagram. In this case, the interval timestamps of behaviors C1 and C2 partially overlap. Between timestamps t1 and t2, C2 precedes C1; between t2 and t3, C1 and C2 occur concurrently; and between t3 and t4, C2 follows C1.
Figure 3. Ship behavior sequence chain construction process.
Figure 4. Smith–Waterman diagram.
Figure 5. Smith–Waterman algorithm. The red arrows in (a) indicate the possible movement directions during the score calculation process, including right, bottom-right, and down. The blue arrows in (b) represent the backtracking direction used to determine the optimal local alignment path, which starts from the highest scoring cell (highlighted in blue) and traces back until a cell with a score of zero is reached. (c) shows the optimal local alignment result obtained from the backtracking process.
Figure 6. Xiamen Port basic information chart.
Figure 7. Xiamen Port monthly traffic AIS flow chart.
Figure 8. Similarity distribution between known illegal ships. Trajectory similarity is mainly distributed in the lower region and between 0.4 and 1, forming two more distinct regions of concentration. The lower region of similarity indicates that there are large differences between these trajectories, while the high similarity region between 0.4 and 1 reflects certain common behavioral patterns or characteristics.
Figure 9. Variation of weight parameters A, B, and C in six combinations.
Figure 10. Kernel density distribution of thresholds in six combinations. (a–f) correspond to specific combinations of weight parameters (A, B, C) and initial thresholds, as outlined in the combinations presented in Figure 9. The x-axis represents the threshold values, while the y-axis denotes the density. These distributions illustrate the variations in threshold selection under different parameter configurations.
Figure 11. Performance of six combinations during iterations, including precision, recall, F1-score, and accuracy. (a–f) represent specific combinations of weight parameters (A, B, C) and initial thresholds, as detailed in the combinations shown in Figure 9. The x-axis depicts the number of iterations, while the y-axis represents the values of precision, recall, F1-score, and accuracy. These performance metrics provide insights into the stability and convergence trends across different parameter configurations.
Figure 12. Performance comparison of different parameter combinations, including precision, recall, F1-score, and accuracy. (a) shows a radar chart comparing the top five optimal parameter combinations, while (b) presents the model's performance under the final configuration.
22 pages, 11926 KiB  
Article
PJ-YOLO: Prior-Knowledge and Joint-Feature-Extraction Based YOLO for Infrared Ship Detection
by Yongjie Liu, Chaofeng Li and Guanghua Fu
J. Mar. Sci. Eng. 2025, 13(2), 226; https://doi.org/10.3390/jmse13020226 - 25 Jan 2025
Abstract
Infrared ship images have low resolution and limited recognizable features, especially for small targets, leading to low accuracy and poor generalization of traditional detection methods. To address this, we design a prior-knowledge auxiliary loss that leverages the unique brightness distribution of infrared ship images, construct a joint feature extraction module that captures context awareness, channel differentiation, and global information, and propose a prior-knowledge- and joint-feature-extraction-based YOLO (PJ-YOLO) for detecting infrared ships. Additionally, a residual deformable attention module is designed to integrate multi-scale information, enhancing detail capture. Experimental results on the SFISD and InfiRray Ships datasets demonstrate that the proposed PJ-YOLO achieves state-of-the-art detection performance for infrared ship targets. In particular, PJ-YOLO achieves improvements of 1.6%, 5.0%, and 2.8% in mAP50, mAP75, and mAP50:95 on the SFISD dataset, respectively. Full article
(This article belongs to the Section Ocean Engineering)
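The exact form of the prior-knowledge auxiliary (PKA) loss is not given in this summary; Figures 5 and 6 below suggest that it compares brightness statistics inside predicted and ground-truth boxes. The following NumPy sketch illustrates that idea under the assumption that the loss penalises the difference in mean pixel brightness between matched boxes; the box format, normalisation, and function names are illustrative, and a real implementation inside PJ-YOLO would operate on tensors with a differentiable surrogate so that gradients can reach the box regressor.

```python
import numpy as np

def mean_brightness(image, box):
    """Mean pixel intensity inside an axis-aligned box (x1, y1, x2, y2).

    `image` is a 2-D array of infrared intensities in [0, 255]."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, image.shape[1]), min(y2, image.shape[0])
    if x2 <= x1 or y2 <= y1:
        return 0.0
    return float(image[y1:y2, x1:x2].mean())

def brightness_prior_loss(image, pred_boxes, gt_boxes):
    """Auxiliary loss: mean absolute difference between the average
    brightness of each predicted box and its matched ground-truth box,
    normalised by the 8-bit intensity range. Matching is assumed to be
    one-to-one here; a detector would use its assigner's matches."""
    diffs = [abs(mean_brightness(image, p) - mean_brightness(image, g))
             for p, g in zip(pred_boxes, gt_boxes)]
    return float(np.mean(diffs)) / 255.0 if diffs else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 60, size=(256, 256)).astype(np.float32)
    img[100:140, 80:160] += 150.0          # a bright ship-like blob
    gt = [(80, 100, 160, 140)]
    good = [(78, 98, 162, 142)]            # tight prediction around the blob
    bad = [(40, 60, 200, 220)]             # loose prediction, dimmer on average
    print("tight box loss:", brightness_prior_loss(img, good, gt))
    print("loose box loss:", brightness_prior_loss(img, bad, gt))
```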
Show Figures

Figure 1. Overall architecture of PJ-YOLO.
Figure 2. Structure of the JFE module.
Figure 3. Structure of the R-DA module.
Figure 4. Brightness distribution of the infrared ship image.
Figure 5. Mean brightness of the bounding box. In the right image, (a–e) are extracted from the left image, each containing the ground truth (GT) bounding box and the predicted bounding box. Green numbers represent the mean brightness values of pixels within the ground truth (GT) bounding boxes, while red numbers represent the mean brightness values of pixels within the predicted bounding boxes. Blue numbers indicate the difference in mean brightness between the GT and predicted bounding boxes. A larger difference value indicates a greater discrepancy in brightness distribution.
Figure 6. Bounding box adjustment process in PKA Loss: green box (GT) and red box (predicted).
Figure 7. The size distribution of targets in the datasets: (a) the SFISD dataset and (b) the InfiRray Ships dataset. The darker the dot color, the larger the quantity of targets with normalized width and height.
Figure 8. Visual comparison of results for the SFISD dataset. False negative samples are highlighted with blue circles, and false positive samples are indicated with purple circles.
Figure 9. Visual comparison of results for the InfiRray Ships dataset. False negative samples are highlighted with blue circles, and false positive samples are indicated with purple circles.
Figure 10. Grad-CAM visualization results for different stages of the baseline network and PJ-YOLO across diverse scenarios in the InfiRray Ships dataset: (a) single-target scenario and (b) multi-target scenario.
Figure 11. Visualization results of the ablation study from multiple scenarios.
Figure 12. Comparison of latency, FLOPs, and mAP50:95 (on the SFISD dataset) between PJ-YOLO and other competing methods. The number of FLOPs is represented by the radius of the circle.
Figure 13. Visualization comparison of prediction results on real-world images.
19 pages, 5807 KiB  
Article
BurgsVO: Burgs-Associated Vertex Offset Encoding Scheme for Detecting Rotated Ships in SAR Images
by Mingjin Zhang, Yaofei Li, Jie Guo, Yunsong Li and Xinbo Gao
Remote Sens. 2025, 17(3), 388; https://doi.org/10.3390/rs17030388 - 23 Jan 2025
Abstract
Synthetic Aperture Radar (SAR) is a crucial remote sensing technology with significant advantages, and ship detection in SAR imagery has garnered considerable attention. However, existing ship detection methods often overlook feature extraction, and the unique imaging mechanisms of SAR hinder the direct application of conventional natural-image feature extraction techniques. Moreover, oriented bounding box-based detection methods often prioritize accuracy excessively, increasing parameter counts and computational cost and thereby raising model complexity. To address these issues, we propose a novel two-stage detector, the Burgs-associated vertex offset encoding scheme (BurgsVO), for detecting rotated ships in SAR images. BurgsVO consists of two key modules: the Burgs equation heuristic module, which facilitates feature extraction, and the average diagonal vertex offset (ADVO) encoding scheme, which significantly reduces computational costs. Specifically, the Burgs equation module integrates temporal information with spatial data for effective feature aggregation, establishing a strong foundation for subsequent object detection. The ADVO encoding scheme reduces parameters through anchor transformation, leveraging geometric similarities between quadrilaterals and triangles to further reduce computational costs. Experimental results on the RSSDD and RSDD benchmarks demonstrate that the proposed BurgsVO outperforms state-of-the-art detectors in both accuracy and efficiency. Full article
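The abstract does not spell out the ADVO parameterisation, so the sketch below only illustrates the general family it belongs to: describing an oriented box through the offsets of its corner vertices from a horizontal anchor box. The corner pairing, the anchor definition, and the function names are illustrative assumptions; the actual averaged-diagonal encoding in BurgsVO may use a different, more compact set of regression targets.

```python
import math

def obb_corners(cx, cy, w, h, angle_rad):
    """Four corners of an oriented box given centre, size, and rotation."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]

def vertex_offsets(corners):
    """Encode an oriented box as its horizontal enclosing box plus the
    offsets of each corner from that box's matching corner.

    This is only an illustration of offset-style OBB encoding; the
    paper's ADVO scheme, built on averaged diagonal vertices, may use a
    different parameterisation."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    x1, y1, x2, y2 = min(xs), min(ys), max(xs), max(ys)
    anchor_corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    offsets = [(px - ax, py - ay)
               for (px, py), (ax, ay) in zip(corners, anchor_corners)]
    return (x1, y1, x2, y2), offsets

if __name__ == "__main__":
    corners = obb_corners(cx=100, cy=60, w=80, h=20, angle_rad=math.radians(30))
    hbb, offs = vertex_offsets(corners)
    print("horizontal anchor box:", [round(v, 1) for v in hbb])
    print("corner offsets:", [(round(dx, 1), round(dy, 1)) for dx, dy in offs])
```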
Show Figures

Figure 1. The overall framework of our method.
Figure 2. The structure of the Burgs equation heuristic module.
Figure 3. Illustration of an OBB represented by an ADVO.
Figure 4. Decoding regression diagram of ADVO.
Figure 5. Visualization of the detection results of different methods on RSSDD. Red rectangles indicate the actual ship targets. Green and purple rectangles represent the detection results of five comparative methods and our method, respectively.
Figure 6. Visualization of the detection results of different methods on RSDD. Red rectangles indicate the actual ship targets. Green and blue rectangles represent the detection results of five comparative methods and our method, respectively.
Figure 7. Algorithm performance under ship size variations. Red rectangles indicate the actual ship targets, and purple rectangles represent the detection results of our method.
Figure 8. Speed vs. accuracy on the RSSDD test set.