
CN111027511B - Remote sensing image ship detection method based on region of interest block extraction - Google Patents


Info

Publication number
CN111027511B
CN111027511B (application CN201911338663.3A)
Authority
CN
China
Prior art keywords
image
detection
area
water
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911338663.3A
Other languages
Chinese (zh)
Other versions
CN111027511A (en)
Inventor
侯彪
刘佳丽
焦李成
马文萍
马晶晶
杨淑媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201911338663.3A
Publication of CN111027511A
Application granted
Publication of CN111027511B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an optical remote sensing image ship detection method based on region-of-interest block extraction, which mainly addresses the low detection accuracy and frequent false alarms of the prior art. The scheme is: construct an optical remote sensing image ship detection dataset; downsample the wide-format remote sensing image and enhance it by dehazing, then perform water/land segmentation using context information and global image features; train an SCRDet-based target detection model on the constructed dataset; according to the segmentation result, scan the original wide-format image with partially overlapping sliding windows to extract blocks of interest as the areas to be detected, and feed these areas into the detection model to obtain per-region detection results; map the regional results back to the original wide-image scale and apply improved non-maximum suppression to refine the preliminary detections; finally refine the detections again according to the structural characteristics of ships. The invention achieves high detection accuracy with a low false alarm rate and can be used to locate ship targets of interest in large-format remote sensing images.

Description

Remote sensing image ship detection method based on region-of-interest block extraction

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to an optical remote sensing image ship detection method that can be used for target recognition in large-format remote sensing images.

Background Art

Target detection in optical remote sensing images is one of the important problems in remote sensing image research. Because of its special and critical nature, ship target detection has great application value in fishery management, military reconnaissance and strategic deployment. Ship target detection determines, in a complex scene, whether ships are present in the water or along the shore, and locates them.

Traditional ship detection methods mainly rely on sea-land segmentation or on prior geographic information. These methods require a large number of hand-designed features to locate ships, and they adapt poorly when inshore (docked) ships and offshore ships must be detected at the same time.

In recent years deep learning has developed rapidly. Deep convolutional neural networks automatically extract both shallow and deep image features, avoiding the manual feature engineering of traditional methods, and have achieved state-of-the-art results in image processing. Deep-learning-based methods are currently the mainstream approach to object detection.

Deep-learning ship detection methods can be divided into two categories according to the form of the detection box: those based on horizontal boxes and those based on oriented (rotated) boxes. Specifically:

Horizontal-box methods include the classic two-stage detection framework Faster R-CNN and the single-stage frameworks YOLO, RetinaNet and SSD.

Oriented-box methods include R2CNN, ROI-Transformer, SCRDet and others.

Because ships point in arbitrary directions and docked ships are densely packed, oriented boxes localize ships more precisely. Among oriented-box methods, the SCRDet model (Towards More Robust Detection for Small, Cluttered and Rotated Objects) fully fuses low-level and high-level features and achieves better results on dense targets. However, deep-learning methods do not perform water/land segmentation on the image; they feed wide-image slices directly into the trained model. Feeding complex land regions that contain no water into the detector both lowers detection efficiency and can produce obvious false alarms on land; in addition, hulls truncated by the image slicing cause obvious missed detections, so the overall detection accuracy is low.

Summary of the Invention

In view of the above shortcomings of the prior art, the purpose of the present invention is to propose an optical remote sensing image ship detection method based on region-of-interest block extraction, so as to improve the localization of ship targets in remote sensing images and raise both detection efficiency and accuracy.

To achieve the above purpose, the technical scheme of the present invention comprises the following steps:

(1) Construct an optical remote sensing image ship detection dataset G:

1a) Download Gaofen-2 optical remote sensing data, manually select regions that contain ship targets, and crop and save these regions with partial overlap;

1b) Randomly flip (vertically or horizontally) or rotate all images obtained in 1a) to produce augmented images, and save them;

1c) Annotate the augmented images with oriented rectangular boxes, save the annotations as XML files, and let all augmented images together with their corresponding annotations form the optical remote sensing image ship detection dataset G;

(2) Downsample the wide-format optical remote sensing image and reduce cloud and haze occlusion with a dehazing algorithm based on the dark channel prior, obtaining the enhanced image I;

(3) Perform water/land segmentation on the enhanced image I using context information and global image features, extracting the water blocks of interest to obtain a binary map:

(3a) Apply a preliminary threshold segmentation to the enhanced image I using the contextual feature information of each pixel, obtaining the preliminary water/land segmentation binary map I3;

(3b) In the preliminary binary map I3, mark the set of water connected components W={w1,...,wi,...,wn}, where wi is the i-th water component, and the set of land connected components L={l1,...,lj,...,lm}, where lj is the j-th land component; extract regional features of every water and land component and relabel their categories according to discrimination rules, obtaining the optimized water/land segmentation binary map I4 of the enhanced image I;

(3c) Apply a morphological dilation to the optimized binary map I4, then upsample it back to the original wide-image size, obtaining the final water/land segmentation binary map I5;

(4) Train the convolutional-neural-network-based SCRDet target detection model M0 with a random multi-scale strategy, obtaining the trained detection model M;

(5) Using the final binary map I5, extract blocks of interest on the original wide image with partially overlapping sliding windows as the areas to be detected, forming the image set F={f1,...,fi,...,fn} and the position set S={s1,...,si,...,sn}, where fi is the i-th detection area and si is the coordinate of the upper-left corner of fi; feed the image set F into the detection model M to obtain the detection result of every area fi;

(6) Map the region detection results onto the original wide-image scale according to the position set S, obtaining the preliminary detection result set A = {(P1, X1, C1), ..., (PN, XN, CN)}, where Pi, Xi and Ci denote the category, position coordinates and confidence score of the i-th preliminary detection box (the exact notation appears as formula images in the original); apply improved non-maximum suppression to A, obtaining the once-optimized detection result set B = {(Q1, X1', C1'), ..., (QM, XM', CM')}, where Qi, Xi' and Ci' denote the category, position coordinates and confidence score of the i-th once-optimized detection box;

(7) Perform a second optimization on the once-optimized detection result set B according to the structural characteristics of ships, obtaining the final detection result set C = {(R1, Y1, D1), ..., (RK, YK, DK)}, where Ri, Yi and Di denote the category, position coordinates and confidence score of the i-th finally generated detection box (symbol names reconstructed from the surrounding text; the originals appear as formula images).

Compared with the prior art, the present invention has the following advantages:

1. Considering the complex scenes and large size of wide-format remote sensing images, the invention performs water/land segmentation using the context information and global features of a downsampled and dehazed enhanced image, followed by morphological operations; this optimizes the segmentation result more effectively and preserves the ship targets inside the blocks of interest.

2. The invention scans the whole image with partially overlapping sliding windows and, guided by the segmentation result, extracts blocks of interest as the areas to be detected, which reduces false alarms on land while effectively improving detection efficiency; in addition, applying improved non-maximum suppression to the detection results at the wide-image scale effectively reduces missed detections caused by image slicing and improves detection accuracy.

Brief Description of the Drawings

FIG. 1 is a schematic flow chart of an implementation of the present invention;

FIG. 2 shows simulation results of water/land segmentation of a wide-format remote sensing image with the present invention;

FIG. 3 shows simulation results of ship detection on a wide-format remote sensing image with an existing method;

FIG. 4 shows simulation results of ship detection on a wide-format remote sensing image with the present invention.

Detailed Description of Embodiments

Embodiments and effects of the present invention are further described below with reference to the accompanying drawings.

Referring to FIG. 1, the implementation steps of this embodiment are as follows:

Step 1: construct the optical remote sensing image ship detection dataset G.

1.1) Download Gaofen-2 optical remote sensing data, manually select regions containing ship targets, and crop these regions with partially overlapping sliding windows of size 832×832 and stride 416, saving the crops;

1.2) Randomly flip (vertically or horizontally) or rotate all images obtained in 1.1) to produce augmented images, and save them;

1.3) Annotate all augmented images obtained in 1.2) with oriented rectangular boxes, save the annotations as XML files, and let all augmented images and their corresponding annotations form the optical remote sensing image ship detection dataset G.
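The cropping and augmentation of steps 1.1-1.2 can be sketched in NumPy as follows. `overlapping_crops` and `random_augment` are hypothetical helper names, not part of the patent; annotation handling is omitted.

```python
import numpy as np

def overlapping_crops(image, size=832, stride=416):
    """Yield (y, x, crop) for partially overlapping windows (step 1.1)."""
    h, w = image.shape[:2]
    for y in range(0, max(h - size, 0) + 1, stride):
        for x in range(0, max(w - size, 0) + 1, stride):
            yield y, x, image[y:y + size, x:x + size]

def random_augment(crop, rng):
    """Randomly flip vertically/horizontally or rotate 90 degrees (step 1.2)."""
    choice = rng.integers(0, 4)
    if choice == 0:
        return np.flipud(crop)
    if choice == 1:
        return np.fliplr(crop)
    if choice == 2:
        return np.rot90(crop)
    return crop
```

With a 1664×1664 source region, the 832/416 window settings above yield a 3×3 grid of crops, each sharing half its area with its neighbours.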

Step 2: downsample the wide-format optical remote sensing image and enhance it by dehazing.

Downsample the original wide-format optical remote sensing image by a factor of 8, and reduce its cloud and haze occlusion with the dark-channel-prior dehazing algorithm proposed by Kaiming He in "Single Image Haze Removal Using Dark Channel Prior", obtaining the enhanced image I.
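A minimal sketch of the dark-channel-prior idea referenced above, assuming a float RGB image in [0, 1]. It omits the guided-filter refinement of He's paper, and the function names and parameter values (patch size, omega, t0) are illustrative, not the patent's:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over RGB, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.full(mins.shape, np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + mins.shape[0],
                                         dx:dx + mins.shape[1]])
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=7):
    """Simplified dark-channel-prior dehazing (no guided filter)."""
    dark = dark_channel(img, patch)
    # atmospheric light: mean colour of the brightest 0.1% dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # transmission estimate, clamped below by t0 to avoid over-amplification
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.maximum(t, t0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

On a haze-free uniform image the recovery is a no-op, which is a useful sanity check.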

Step 3: perform water/land segmentation using context information and global image features.

3.1) Apply a preliminary threshold segmentation to the enhanced image I using the contextual feature information of each pixel:

3.1.a) Mirror-extend the edges of the enhanced image I by a width of 3 to obtain the extended image I'. Taking each pixel of I as the center of a 7×7 sliding window, extract the corresponding window region on I' and compute the regional variance v as the context feature value of the center pixel, obtaining the feature map V of I, whose value at coordinate (i,j) is V(i,j);

3.1.b) Build the distribution histogram of the feature map V with bin width 1, find the peak variance vmax at the histogram peak, and take the variance gt at the first valley of the histogram in the range [vmax, +∞) as the valley variance;

3.1.c) Compare the feature value V(i,j) of the enhanced image I at coordinate (i,j) with the valley variance gt:

if V(i,j) ≤ gt, update V(i,j) = 1;

otherwise, update V(i,j) = 0;

3.1.d) Apply a morphological dilation to the updated feature map V to obtain the feature binary map I1;

3.1.e) Apply a Laplacian grayscale transform to the enhanced image I using a modified Laplacian operator (the operator's kernel is given as a formula image in the original), then segment the transformed image with the OTSU adaptive threshold method to obtain the grayscale binary map I2;

3.1.f) Combine the feature binary map I1 and the grayscale binary map I2 with a pixel-wise logical AND to obtain the preliminary water/land segmentation binary map I3, in which pixel value 0 represents land and pixel value 1 represents water;
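Steps 3.1.a-3.1.c (local-variance context feature plus valley thresholding) can be sketched as follows. The helper names are hypothetical, and the valley search is a simplified reading of step 3.1.b; water regions are nearly homogeneous, so low local variance maps to water:

```python
import numpy as np

def local_variance(img, win=7):
    """Step 3.1.a: variance over a win x win window, computed on a
    mirror-extended image so edge pixels keep a full window."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    return windows.var(axis=(2, 3))

def valley_threshold(v):
    """Step 3.1.b: first histogram valley at or after the histogram peak
    of the variance map, using unit-width bins."""
    hist, edges = np.histogram(v, bins=int(v.max()) + 1)
    peak = hist.argmax()
    for i in range(peak + 1, len(hist) - 1):
        if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]:
            return edges[i]
    return edges[-1]

def water_feature_mask(img, win=7):
    """Step 3.1.c: V(i,j) <= g_t -> 1 (water), else 0 (land)."""
    v = local_variance(img, win)
    return (v <= valley_threshold(v)).astype(np.uint8)
```

On a synthetic image with a flat half and a noisy half, the flat half (zero variance) falls below the valley threshold and is marked as water.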

3.2) Relabel the connected components using features of the water and land connected components:

3.2.a) Mark the set of water connected components W={w1,...,wi,...,wn} in the preliminary binary map I3, where wi is the i-th water component; compute the areas of the n water components, take the three largest as the wide-image water feature templates, and compute the variance σ² of the water feature templates;

3.2.b) Initialize the area threshold t1 = 35×35;

3.2.c) Traverse the water component set W and compute the area Swi and variance σi² of each wi. If Swi and σi² satisfy the discrimination conditions given in the original formulas (an area large enough relative to the threshold t1 and a variance consistent with the water-template variance σ²), relabel wi as water; otherwise, relabel wi as land;

3.2.d) Mark the set of land connected components L={l1,...,lj,...,lm} in the preliminary binary map I3, where lj is the j-th land component;

3.2.e) Traverse the land component set L and compute the area Slj of each lj. If Slj satisfies the area condition given in the original formula (a sufficiently small area), relabel lj as water; otherwise, leave lj as land;

3.2.f) Assign 0 to all pixels of the relabeled land components and 1 to all pixels of the relabeled water components, obtaining the optimized water/land segmentation binary map I4;
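The connected-component relabeling of steps 3.2.d-3.2.f can be sketched with a plain BFS labeller (no SciPy needed). The rule shown here, land components smaller than t1 become water, is an assumed reading of the thresholded condition, since the exact inequality appears only as a formula image in the original:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labelling of a boolean mask."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        count += 1
        labels[sy, sx] = count
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def relabel_small_land(binary, t1=35 * 35):
    """Assumed rule for steps 3.2.d-f: land components (value 0) with area
    below t1 are relabelled as water, so small islands/blobs on water
    do not punch holes in the water mask."""
    out = binary.copy()
    labels, n = label_components(binary == 0)
    for k in range(1, n + 1):
        region = labels == k
        if region.sum() < t1:
            out[region] = 1
    return out
```

A ship-sized land blob disappears into the water mask, while a true landmass survives.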

3.3) Obtain the final water/land segmentation binary map I5 from the optimized binary map I4:

3.3.a) Copy the optimized binary map I4, obtaining the copied result map I4';

3.3.b) Scan the optimized binary map I4 pixel by pixel with a 15×15 elliptical structuring element, and apply a logical AND between the structuring element and each pixel of the corresponding region of I4:

3.3.c) Check the result at each pixel and update the center pixel of the corresponding region in I4':

if the results at all pixels are 0, update the center pixel of the region to 0;

otherwise, update the center pixel of the region to 1;

3.3.d) Upsample the updated I4' by a factor of 8, restoring it to the original wide-image size, and obtain the final water/land segmentation binary map I5.
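Steps 3.3.b-3.3.d amount to a binary dilation with an elliptical structuring element (the center becomes 0 only if every covered pixel is 0) followed by 8× nearest-neighbour upsampling. A NumPy-only sketch, with illustrative helper names:

```python
import numpy as np

def dilate(mask, kernel):
    """Binary dilation: centre becomes 1 if any kernel-covered pixel is 1."""
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    p = np.pad(mask, ((pad_y, pad_y), (pad_x, pad_x)))
    out = np.zeros_like(mask)
    for dy in range(kh):
        for dx in range(kw):
            if kernel[dy, dx]:
                out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def ellipse_kernel(size=15):
    """Approximate the 15x15 elliptical structuring element of step 3.3.b."""
    r = size / 2.0
    y, x = np.ogrid[:size, :size]
    return (((y - r + 0.5) ** 2 + (x - r + 0.5) ** 2) <= r * r).astype(np.uint8)

def upsample_nearest(mask, factor=8):
    """Step 3.3.d: restore the downsampled mask to the original image size."""
    return mask.repeat(factor, axis=0).repeat(factor, axis=1)
```

The dilation widens the water mask so that ships lying right on the water/land boundary are not cut out of the blocks of interest.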

Step 4: train the convolutional-neural-network-based SCRDet model M0 with random multi-scale inputs.

4.1) Use 90% of the optical remote sensing image ship detection dataset G constructed in step 1 as training samples and the remaining 10% as test samples;

4.2) Use a ResNet-50 network as the backbone of the SCRDet model M0, pre-train ResNet-50 on the ImageNet dataset, and initialize the backbone parameters of M0 with the pre-trained model parameters;

4.3) Randomly select one training image and its corresponding annotation from the training samples, and randomly flip both vertically or horizontally to obtain the transformed image X and ground-truth annotation Y;

4.4) Randomly select one scale from [600, 700, 832] as the short-side length of the image, scale the transformed image X and ground truth Y proportionally to the selected scale, and feed them into the SCRDet model M0 to obtain the prediction Y';

4.5) Compute the error between the prediction Y' and the ground truth Y, and minimize it with the Adam optimizer to update the weight parameters of the SCRDet model M0;

4.6) Repeat 4.3)-4.5); when the number of training iterations reaches 200,000, the trained target detection model M is obtained.
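The random scale selection of step 4.4 can be sketched as below; the function name is illustrative, and the assumption is that box annotations are scaled by the same ratio as the image:

```python
import random

def random_short_side(h, w, scales=(600, 700, 832), rng=random):
    """Step 4.4 sketch: pick a random training scale for the image's short
    side and rescale (h, w) proportionally; annotation boxes would be
    multiplied by the same ratio."""
    s = rng.choice(scales)
    ratio = s / min(h, w)
    return round(h * ratio), round(w * ratio), ratio
```

Randomizing the input scale per iteration exposes the detector to ships at several apparent sizes, which is the point of the multi-scale strategy.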

Step 5: extract the areas to be detected on the original wide image and feed them into the trained target detection model M.

5.1) Initialize the window parameter p = 832, and initialize the image set F and position set S of the areas to be detected as empty sets;

5.2) Scan the original wide-format remote sensing image with partially overlapping sliding windows of size p×p and stride p/2, and compute the water area Sw inside the region of the final water/land segmentation binary map I5 corresponding to each window region f;

5.3) According to the water area Sw, decide whether the window region f is a block of interest:

if Sw > (p/8)², the window region f is a block of interest: add f to the image set F and the coordinate s of its upper-left corner to the position set S, eventually obtaining the image set F={f1,...,fi,...,fn} and the position set S={s1,...,si,...,sn};

otherwise, the window region f is not a block of interest, and nothing is done;

5.5) Feed the image set F={f1,...,fi,...,fn} into the detection model M, obtaining the category, position coordinates and confidence score of every detection box in each area fi.
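Steps 5.1-5.3 can be sketched as follows; `roi_blocks` is a hypothetical name, and the (p/8)² water-area test comes directly from step 5.3:

```python
import numpy as np

def roi_blocks(water_mask, p=832):
    """Slide a p x p window with stride p/2 over the full-size water/land
    mask (1 = water); keep a window as a block of interest when its water
    area exceeds (p/8)^2 pixels."""
    h, w = water_mask.shape
    step = p // 2
    regions, positions = [], []
    for y in range(0, max(h - p, 0) + 1, step):
        for x in range(0, max(w - p, 0) + 1, step):
            if water_mask[y:y + p, x:x + p].sum() > (p // 8) ** 2:
                regions.append((y, x, p))    # crop descriptor for f_i
                positions.append((x, y))     # upper-left corner s_i
    return regions, positions
```

Windows that are pure land never reach the detector, which is where the efficiency and false-alarm gains claimed in the Summary come from.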

Step 6: apply improved non-maximum suppression to the preliminary detection results at the original wide-image scale.

6.1) According to the coordinates si of each to-be-detected area fi in the position set S={s1,...,si,...,sn}, map the detection results of every area fi onto the original wide-image scale, obtaining the preliminary detection result set A = {(P1, X1, C1), ..., (PN, XN, CN)}, where Pi, Xi and Ci denote the category, position coordinates and confidence score of the i-th preliminary detection box;

6.2) Initialize the once-optimized detection result set B as empty;

6.3) Treat all detection boxes in the preliminary result set A as a single class. Let r = argmax_i Ci be the index of the highest-confidence detection in A; take the detection (Pr, Xr, Cr) out of A, add it to set B, and delete it from A;

6.4) For every detection box remaining in A, compute its intersection-over-union with the highest-confidence box, IoUi = area(Xi ∩ Xr) / area(Xi ∪ Xr), i.e. the ratio of the overlap area of the two boxes to the area of their union;

6.5) Compare IoU_i with the IoU threshold t2: if IoU_i > t2, update the confidence score ci; otherwise, leave ci unchanged.

6.6) Repeat steps 6.3)-6.5) until the preliminary result set A is empty, giving the once-optimized detection result set

B = {(Qi, ti, ci)}, i = 1, ..., nB,

where Qi, ti and ci denote the category, position coordinates and confidence score of the i-th once-optimized detection box.
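The loop of steps 6.2)-6.6) follows the Soft-NMS pattern: overlapping boxes are not discarded outright, their confidence scores are down-weighted instead. The sketch below rests on two assumptions not fixed by the text above — axis-aligned (x1, y1, x2, y2) boxes, and a linear score decay ci ← ci·(1 − IoU), one common choice for the update rule of step 6.5):

```python
def improved_nms(detections, t2=0.5):
    """detections: list of (category, box, score) with box = (x1, y1, x2, y2).
    Returns the once-optimized set B, in descending order of selection."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    a = list(detections)   # preliminary result set A (treated as one class)
    b = []                 # once-optimized result set B
    while a:
        # 6.3) move the highest-confidence detection from A to B
        r = max(range(len(a)), key=lambda i: a[i][2])
        best = a.pop(r)
        b.append(best)
        # 6.4)-6.5) down-weight scores of boxes overlapping the best box
        decayed = []
        for (c, box, s) in a:
            o = iou(box, best[1])
            decayed.append((c, box, s * (1.0 - o)) if o > t2 else (c, box, s))
        a = decayed
    return b
```

Unlike hard NMS, a heavily overlapping box survives with a reduced score, so densely moored ships are less likely to suppress one another.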

Step 7: Using the structural characteristics of ships in optical remote sensing imagery, perform a second optimization on every detection result (Qi, ti, ci) in the once-optimized set B.

7.1) Compare the confidence score ci with the confidence threshold: if ci is below the threshold, re-label the category Qi of the i-th detection box as background; otherwise, keep it as a ship target.

7.2) From the position coordinates ti, obtain the longest side x of the detection box and test its size: if x > 450, re-label the category Qi of the i-th detection box as background; otherwise, keep it as a ship target.

7.3) From the position coordinates ti, compute the ratio y of the longest side to the shortest side of the detection box and test its size: if y > 11, re-label the category Qi of the i-th detection box as background; otherwise, keep it as a ship target.

The effect of the invention can be further illustrated by the following simulation experiments.

1. Simulation conditions:

The simulation experiments use panchromatic images of major Chinese cities captured by the Gaofen-2 optical remote sensing satellite, with a ground resolution of 1 m or 4 m.

The simulations ran on an Intel(R) Core(TM) i7-8750H CPU at 2.20 GHz with an 8 GB GeForce GTX 1080 GPU, under the Ubuntu 16.04 operating system, using the TensorFlow deep learning framework and Python 3.6.

2. Simulation content and results:

Simulation 1: the invention performs water-land segmentation on a 10000×10000 wide remote sensing image and masks the land according to the segmentation result. The result is shown in Fig. 2, where black represents land and the remainder is the water area of interest. As Fig. 2 shows, docked ships connected to the land are well preserved.

Simulation 2: an existing convolutional-neural-network-based method detects the 10000×10000 wide remote sensing image directly, slice by slice. The result is shown in Fig. 3, where green boxes mark detected ships and blue numbers are confidence scores. As Fig. 3 shows, direct slice-wise detection of the wide image causes obvious missed detections where hulls are truncated at slice boundaries, because the existing method does not account for the distribution characteristics of ships.

Simulation 3: the invention performs ship target detection on the 10000×10000 wide remote sensing image. The result is shown in Fig. 4, where green boxes mark detected ships and blue numbers are confidence scores.

Comparing Simulations 2 and 3: the invention performs effective water-land segmentation while preserving docked-ship information, extracts regions of interest from the segmentation result, applies the partially overlapping detection strategy with improved non-maximum suppression, and finally refines the results using ship structure information. Compared with the existing method, it improves detection efficiency, raises ship target detection accuracy, and greatly reduces land false alarms.

Claims (6)

1. A remote sensing image ship detection method based on region-of-interest block extraction, characterized by comprising:

(1) constructing an optical remote sensing image ship detection dataset G:

1a) downloading Gaofen-2 optical remote sensing data, manually selecting regions that contain ship targets, and cropping and saving these regions in a partially overlapping manner;

1b) randomly flipping (vertically or horizontally) or rotating all images obtained in 1a) to obtain augmented images, and saving them;

1c) annotating the augmented images with rotated rectangular boxes, saving the annotation information as XML files, and forming the optical remote sensing image ship detection dataset G from all augmented images and their corresponding annotations;

(2) down-sampling the wide optical remote sensing image and reducing cloud occlusion with a dehazing algorithm based on the dark channel prior, to obtain an enhanced image I;

(3) performing water-land segmentation on the enhanced image I using context information and global image features, and extracting the water blocks of interest to obtain a binary map:

(3a) performing preliminary threshold segmentation on the enhanced image I using the contextual feature information of its pixels, to obtain a preliminary water-land segmentation binary map I3;

(3b) labelling the set of water connected components W = {w1, ..., wi, ..., wn} in I3, where wi denotes the i-th water connected component, and the set of land connected components L = {l1, ..., lj, ..., lm}, where lj denotes the j-th land connected component; extracting the regional features of every water and land connected component and re-labelling their categories according to discrimination rules, to obtain the optimized water-land segmentation binary map I4 of the enhanced image I;

(3c) applying a morphological dilation to I4 and up-sampling it back to the original wide-image size, to obtain the final water-land segmentation binary map I5;

(4) training the convolutional-neural-network-based SCRDet detection model M0 with a random multi-scale strategy, to obtain the trained detection model M, implemented as follows:

4a) using 90% of the constructed dataset G as training samples and the remaining 10% as test samples;

4b) randomly selecting a training image and its annotation from the training samples, and randomly flipping both vertically or horizontally to obtain the transformed image X and ground-truth annotation Y;

4c) randomly selecting a scale from [600, 700, 832] as the short-side length of the image, scaling X and Y proportionally to the selected scale, and inputting them into the SCRDet model M0 to obtain the prediction Y';

4d) computing the error between the prediction Y' and the ground truth Y, and minimizing it with the Adam optimizer to update the weight parameters of M0;

4e) repeating 4b)-4d) until the number of training iterations reaches 200000, giving the trained detection model M;
(5) using the final water-land segmentation binary map I5, extracting blocks of interest on the original wide image with partially overlapping sliding windows as the regions to be detected, forming the region image set F = {f1, ..., fi, ..., fu} and the position set S = {s1, ..., si, ..., su}, where fi denotes the i-th detection region and si denotes the upper-left corner coordinate of fi; inputting F into the detection model M to obtain the region detection result of every region fi;

(6) mapping the region detection results onto the original wide-image scale according to the position set S, to obtain the preliminary detection result set A = {(Pi, ti, ci)}, i = 1, ..., nA, where Pi, ti and ci denote the category, position coordinates and confidence score of the i-th preliminary detection box; applying improved non-maximum suppression to A, to obtain the once-optimized detection result set B = {(Qi, ti, ci)}, i = 1, ..., nB, where Qi, ti and ci denote the category, position coordinates and confidence score of the i-th once-optimized detection box;

(7) performing a second optimization on the once-optimized detection result set B according to the structural characteristics of ships, to obtain the final detection result set, in which Ri, ti and ci respectively denote the category, position coordinates and confidence score of the i-th finally generated detection box.
2. The method according to claim 1, wherein the preliminary threshold segmentation of the enhanced image I using pixel contextual feature information in (3a) is implemented as follows:

3a1) mirror-padding the edges of the enhanced image I with a width of 3 to obtain the extended image I'; taking each pixel of I as the centre of a 7×7 sliding window, extracting the corresponding window region on I' and computing the regional variance v as the contextual feature value of the centre pixel, to obtain the feature map V of the enhanced image I, whose value at coordinate (i, j) is V(i, j);

3a2) computing the distribution histogram of V with a bin width of 1; determining the peak variance vmax corresponding to the histogram peak, and the valley variance gt corresponding to the first valley of the histogram within [vmax, +∞); if V(i, j) ≤ gt, updating V(i, j) = 1, otherwise updating V(i, j) = 0; applying a morphological dilation to the updated feature map V to obtain the feature binary map I1;

3a3) applying a grayscale transform to the enhanced image I with an improved Laplacian operator, and segmenting the enhanced image I with the OTSU adaptive threshold method to obtain the grayscale binary map I2;

3a4) combining I1 and I2 with a pixel-wise logical AND to obtain the preliminary water-land segmentation binary map I3, in which pixel value 0 represents land and pixel value 1 represents water.

3. The method according to claim 1, wherein the final water-land segmentation binary map I5 is obtained from the optimized binary map I4 in (3c) as follows:

3c1) copying the optimized water-land segmentation binary map I4 to obtain the copy I4';

3c2) scanning I4 pixel by pixel with a 15×15 elliptical structuring element, and performing a logical AND between the structuring element and each pixel in the corresponding region of I4;

3c3) judging the result for each pixel and updating the centre pixel of the corresponding region in I4': if the results of all pixels are 0, updating the centre pixel of the corresponding region to 0; otherwise, updating it to 1;

3c4) up-sampling the updated I4' by a factor of 8 to restore it to the original wide-image size, giving the final water-land segmentation binary map I5.

4. The method according to claim 1, wherein in (5) the blocks of interest are extracted from the original wide image as regions to be detected using the final water-land segmentation binary map I5, and the region image set F is input into the detection model M to obtain the region detection results, implemented as follows:

5a) initializing the region image set F and the position set S as empty sets;

5b) scanning the original wide remote sensing image with partially overlapping windows of size p×p and stride p/2, and for each window region f computing the water area Sw of the corresponding region of the final water-land segmentation binary map I5;

5c) judging from Sw whether the window region f is a block of interest: if Sw > (p/8)², the window region f is a block of interest, and f is added to the region image set F while the upper-left corner coordinate s of f is added to the position set S; otherwise, f is not a block of interest and no operation is performed;

5d) inputting the region image set F = {f1, ..., fi, ..., fu} into the detection model M to obtain the category, position coordinates and confidence score of all detection boxes of every region fi.

5. The method according to claim 1, wherein in (6) the region detection results are mapped onto the original wide-image scale and improved non-maximum suppression is applied to the preliminary detection result set A, implemented as follows:

6a) using the coordinates si of each region fi in the position set S = {s1, ..., si, ..., su}, mapping the detection results of every region fi onto the original wide-image scale to obtain the preliminary detection result set A = {(Pi, ti, ci)}, i = 1, ..., nA, where Pi, ti and ci denote the category, position coordinates and confidence score of the i-th preliminary detection box;

6b) initializing the once-optimized detection result set B as empty;

6c) treating all detection boxes in A as belonging to a single class; letting r = argmax_i ci be the index of the highest-confidence detection in A, removing the detection (Pr, tr, cr) from A and adding it to B;

6d) computing the intersection-over-union IoU_i of each detection box in A with the highest-confidence detection box;

6e) comparing IoU_i with the IoU threshold t2: if IoU_i > t2, updating the confidence score ci; otherwise, leaving ci unchanged;

6f) repeating steps 6c)-6e) until A is empty, giving the once-optimized detection result set B = {(Qi, ti, ci)}, i = 1, ..., nB, where Qi, ti and ci denote the category, position coordinates and confidence score of the i-th once-optimized detection box.

6. The method according to claim 1, wherein in (7) all detection results (Qi, ti, ci) in the once-optimized set B are subjected to a second optimization according to the structural characteristics of ships, implemented as follows:

7a) comparing the confidence score ci with the confidence threshold: if ci is below the threshold, re-labelling the category Qi of the i-th detection box as background; otherwise, keeping it as a ship target;

7b) obtaining the longest side x of the detection box from the position coordinates ti and testing its size: if x > 450, re-labelling the category Qi of the i-th detection box as background; otherwise, keeping it as a ship target;

7c) computing the ratio y of the longest side to the shortest side of the detection box from the position coordinates ti and testing its size: if y > 11, re-labelling the category Qi of the i-th detection box as background; otherwise, keeping it as a ship target.
CN201911338663.3A 2019-12-23 2019-12-23 Remote sensing image ship detection method based on region of interest block extraction Active CN111027511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911338663.3A CN111027511B (en) 2019-12-23 2019-12-23 Remote sensing image ship detection method based on region of interest block extraction

Publications (2)

Publication Number Publication Date
CN111027511A CN111027511A (en) 2020-04-17
CN111027511B true CN111027511B (en) 2022-04-29

Family

ID=70212653

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784655A (en) * 2016-12-28 2018-03-09 中国测绘科学研究院 A kind of visual attention model SAR naval vessels detection algorithm of adaptive threshold
CN108133468A (en) * 2017-12-25 2018-06-08 南京理工大学 Auto-adaptive parameter enhances and the constant false alarm rate Ship Detection of tail auxiliary detection
CN108171193A (en) * 2018-01-08 2018-06-15 西安电子科技大学 Polarization SAR Ship Target Detection method based on super-pixel local message measurement
CN108921066A (en) * 2018-06-22 2018-11-30 西安电子科技大学 Remote sensing image Ship Detection based on Fusion Features convolutional network
CN109409285A (en) * 2018-10-24 2019-03-01 西安电子科技大学 Remote sensing video object detection method based on overlapping slice

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9196044B2 (en) * 2014-02-26 2015-11-24 Raytheon Company False alarm rejection for boat detection candidates

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Ship Target Detection in Gaofen-3 SAR Images; Zhang Lin; China Master's Theses Full-text Database, Information Science and Technology; 2019-02-15; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant