Nothing Special   »   [go: up one dir, main page]

Next Article in Journal
A Scalable Reduced-Complexity Compression of Hyperspectral Remote Sensing Images Using Deep Learning
Next Article in Special Issue
Method for Determining Coastline Course Based on Low-Altitude Images Taken by a UAV
Previous Article in Journal
Reconstructing Geometrical Models of Indoor Environments Based on Point Clouds
Previous Article in Special Issue
The Application of CNN-Based Image Segmentation for Tracking Coastal Erosion and Post-Storm Recovery
You seem to have javascript disabled. Please note that many of the page functionalities won't work as expected without javascript enabled.
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Remote Sensing Classification of Offshore Seaweed Aquaculture Farms on Sample Dataset Amplification and Semantic Segmentation Model

1
College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
2
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2023, 15(18), 4423; https://doi.org/10.3390/rs15184423
Submission received: 24 July 2023 / Revised: 4 September 2023 / Accepted: 6 September 2023 / Published: 8 September 2023
Figure 1
<p>Location of the study area: (<b>a</b>) Dalian; (<b>b</b>) study area A, Lvshunkou offshore aquaculture area; (<b>c</b>) study area B, Jinzhou offshore aquaculture area.</p> ">
Figure 2
<p>Workflow of the remote sensing classification of offshore seaweed aquaculture farms on sample dataset amplification and semantic segmentation model.</p> ">
Figure 3
<p>Structure diagram of the improved DCGAN generator network. The generator network has a total of five layers, including one fully linked layer (first cuboid) and four convolutional layers (last four cuboids). The input is a 100-dimensional random noise vector that obeys a normal distribution, and the output is an image of 32 × 32 × 10.</p> ">
Figure 4
<p>Structure diagram of the improved DCGAN discriminator network. The discriminator network has a total of five layers, including four convolutional layers (first four cuboids) and one fully linked layer (fifth cuboid). The input is an image of 32 × 32 × 10, and the output is a single predictive value (probability that the image of the input discriminator is identically distributed with the dataset image).</p> ">
Figure 5
<p>Structure of the UNet aquaculture classification model, consisting of three parts: encoder (left column), converter (bottom line and “skip connection”), and decoder (right column).</p> ">
Figure 6
<p>Overview of DeepLabv3 architecture consisting of three stages: basic network, atrous spatial pyramid pooling (ASPP) module, and post-processing stage.</p> ">
Figure 7
<p>Structure of the SegNet aquaculture classification model. The model comprises two steps: an encoding process that compresses the images, followed by a decoding process that restores the images.</p> ">
Figure 8
<p>Comparison of the NDAWI values of the training samples and NDAWI values of the generated samples: (<b>a</b>) kelp; (<b>b</b>) wakame; (<b>c</b>) seawater in the Lvshunkou offshore aquaculture area.</p> ">
Figure 9
<p>(<b>a</b>) Sample images of part of the Lvshunkou offshore aquaculture area and (<b>b</b>) their corresponding labels.</p> ">
Figure 10
<p>The classification results of the (<b>a</b>) UNet, (<b>b</b>) DeepLabv3, and (<b>c</b>) SegNet models based on the amplified NDAWI dataset in the Lvshunkou offshore aquaculture area (large) and a detailed comparison (small).</p> ">
Figure 10 Cont.
<p>The classification results of the (<b>a</b>) UNet, (<b>b</b>) DeepLabv3, and (<b>c</b>) SegNet models based on the amplified NDAWI dataset in the Lvshunkou offshore aquaculture area (large) and a detailed comparison (small).</p> ">
Figure 11
<p>Classification result maps of aquaculture use before and after sample amplification in the Lvshunkou offshore aquaculture area (<b>a</b>,<b>e</b>) and detailed comparison maps (<b>b</b>,<b>f</b>;<b>c</b>,<b>g</b>;<b>d</b>,<b>h</b>).</p> ">
Figure 12
<p>The classification results of the (<b>a</b>) UNet, (<b>b</b>) DeepLabv3, and (<b>c</b>) SegNet models based on the amplified NDAWI dataset in the Jinzhou offshore aquaculture area (large) and a detailed comparison(s) (small).</p> ">
Figure 13
<p>Classification result maps of aquaculture use before and after sample amplification in the Jinzhou offshore aquaculture area (<b>a</b>,<b>e</b>) and detailed comparison maps (<b>b</b>,<b>f</b>;<b>c</b>,<b>g</b>;<b>d</b>,<b>h</b>).</p> ">
Versions Notes

Abstract

:
Satellite remote sensing provides an effective technical means for the precise extraction of information on aquacultural areas, which is of great significance in realizing the scientific supervision of the aquaculture industry. Existing optical remote sensing methods for the extraction of aquacultural area information mostly focus on the use of image spatial features and research on classification methods of single aquaculture patterns. Accordingly, the comprehensive utilization of a combination of spectral information and deep learning automatic recognition technology in the feature expression and discriminant extraction of aquaculture areas needs to be further explored. In this study, using Sentinel-2 remote sensing images, a method for the accurate extraction of different algae aquaculture zones combined with spectral information and deep learning technology was proposed for the characteristics of small samples, multidimensions, and complex water components in marine aquacultural areas. First, the feature expression ability of the aquaculture area target was enhanced through the calculation of the normalized difference aquaculture water index (NDAWI). Second, on this basis, the improved deep convolution generative adversarial network (DCGAN) algorithm was used to amplify the samples and create the NDAWI dataset. Finally, three semantic segmentation methods (UNet, DeepLabv3, and SegNet) were used to design models for classifying the algal aquaculture zones based on the sample amplified time series dataset and comprehensively compare the accuracy of the model classifications for achieving accurate extraction of different algal aquaculture information within the seawater aquaculture zones. The results show that the improved DCGAN amplification exhibited a better effect than the generative adversarial networks (GANs) and DCGAN under the indexes of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). The UNet classification model constructed on the basis of the improved DCGAN-amplified NDAWI dataset achieved better classification results (Lvshunkou: OA = 94.56%, kappa = 0.905; Jinzhou: OA = 94.68%, kappa = 0.913). The algorithmic model in this study provides a new method for the fine classification of marine aquaculture area information under small sample conditions.

1. Introduction

Aquaculture is one of the fastest-growing animal food production sectors in the world, accounting for more than half of the total amount of aquatic food consumed by humans and providing great potential for global food security [1]. The extended expanse of China’s sea area has led to remarkable advancements in aquaculture [2]. These accomplishments have significantly contributed to ensuring the supply of high-quality proteins, lessening the intensity of the utilization of aquatic biological resources in natural waters and promoting the growth of the fishery industry and the livelihood of fishermen. In recent years, the resulting pollution around aquaculture waters, irrational aquacultural layouts, and excessive density of offshore aquaculture cages have become increasingly serious [3,4] with the expanding scale of marine aquaculture and the unregulated management of aquacultural activities. Therefore, strengthening the environmental regulation of aquaculture is crucial in promoting the green development of the aquacultural industry.
Traditional surveys of aquacultural areas are typically carried out by means of on-the-spot investigations at sea. However, this approach is easily restricted by meteorological conditions and other factors, time consuming, and labor intensive. At present, satellite remote sensing technology has a number of advantages, such as low cost, wide monitoring range, high efficiency, and repeated observations. This technology has demonstrated its unique technical advantages in the fields of marine resources and environmental investigation [5,6,7]. Currently, researchers have made some progress in the inventory and management of aquacultural areas using satellite remote sensing images [8,9,10,11,12,13,14]. These extractions of aquacultural area information have pioneered the application of remote sensing technology in aquacultural areas. The high precision of the results obtained confirms the feasibility of using remote sensing images to extract information from aquacultural areas.
Deep learning is an important field of machine learning research. In comparison to traditional machine learning, deep learning is a method with multilayer representation learning ability, where the data features are abstracted and extracted from lower to higher levels through multiple sets of nonlinear modules [15]. This method has an excellent learning ability, a wide application range, high accuracy of its results, and strong feature construction ability, providing a new idea for remote sensing data processing [16]. Liu et al. [17] utilized high-resolution remote sensing satellite Gaofen-2 (GF-2) images and introduced the deep learning Richer convolutional feature network model to extract raft aquacultural areas in Sanduao, Ningde city, Fujian Province, China. Zheng et al. [18] used GF-2 data to propose an improved two-branch network model for remote sensing information extraction of seawater cage culture areas. Sui et al. [19] proposed an automatic extraction method for offshore cage and floating raft aquacultural zones based on semantic segmentation to address the problem whereby a traditional target recognition algorithm on high-resolution images exhibits a weak generalization ability and low recognition rate in a weak signal environment. These studies show that deep learning can achieve satisfactory results in dealing with different research scenarios and bring new development opportunities for the processing, analysis, feature extraction, recognition, and classification of such “big data” as massive high-resolution image data.
A sufficient number of training samples are a requirement for the training of models based on neural networks. At the same time, the spatial distribution and diversity of sample data guarantee the robustness and generalization ability of the network model [20,21,22]. However, the cost of sample data acquisition is high in the context of actual production applications and big data, or the duplication of the acquired data is considerable, or the probability of the occurrence of the required samples is small, resulting in a limited number of useful sample data; that is, the problem of “big data, small samples” exists [23,24,25]. Consequently, the study of the sample amplification technique in the case of small samples is necessary to effectively expand the training dataset through a reasonable method and has practical application value. Sample amplification can roughly be divided into traditional image-processing-based and deep-learning-based sample amplifications [26]. Traditional sample amplification primarily refers to image processing methods, such as flipping, rotating, scaling, cropping, and shifting, for the existing data samples. Deep learning models, represented by the current rapidly developing generative adversarial networks (GANs) [27], have a wide range of potential applications in sample enhancement. This model trains the network by discriminatively learning normal samples and generator-generated adversarial samples during training in this adversarial training approach. Researchers have accomplished various outcomes in sample amplification. BigGAN has achieved a significant breakthrough in the GAN model for image generation. This network was proposed by researchers at Google’s DeepMind department to generate images with realistic backgrounds and textures [28]. Gao and Jiang [29] designed a new GAN that can be applied to data enhancement tasks. The discriminator of the structure would discriminate the sample and generated datasets while classifying the generated samples into categories. This result was achieved by the classifiers learning the classification boundaries of the feature information. Shaham et al. [30] introduced a model that can be used for generative training tasks using a single natural image (i.e., learning a generative model from a single natural image, SinGAN). The SinGAN model learned to capture feature information inside the training samples and output high-quality images with realistic visual effects through a generator model. The generative results were applied to a variety of specific experimental tasks, and excellent outcomes were obtained.
Dalian is an important seaweed aquacultural base in northern China. Abundant marine resources, suitable climatic conditions, and advanced farming technology make the seaweed aquacultural industry the characteristic and pillar industry of Dalian [31]. The products of kelp and wakame not only better meet the domestic market demand in China but are also exported to Japan, Korea, and other countries, contributing to the development of the local marine economy. Currently, the governmental administration of Dalian has taken measures to increase the scale of aquaculture, improve the quality of products, and accelerate industrial upgrading in order to better promote the development of the seaweed farming industry. Therefore, it is of great practical significance to carry out research on the seaweed culture in Dalian.
A large number of studies focus on extracting and analyzing information from marine areas dedicated to fish or shellfish aquaculture, but there are few studies focusing on seaweed aquacultural areas [32,33,34,35]. Meanwhile, automatic computerized identification methods for different types of aquacultural areas still need to be further developed. The main objectives of this study included the following three points: (1) finely realize the extraction of the spatial distribution of the different algal culture types in aquacultural areas; (2) study the amplification method of remote sensing sample data in view of the complex conditions of small samples in aquacultural zones; and (3) use deep learning techniques for the automated classification of seaweed farms using semantic segmentation models and evaluate the performance of different models. The results of this study are helpful in promoting the high-quality development of aquaculture, the sustainable utilization of marine resources, and the improvement and protection of the marine ecological environment.

2. Materials

2.1. Study Area

Dalian (120°58′–123°31′E, 38°43′–40°10′N) is located at the southern tip of the Liaodong Peninsula in China, bordering the Yellow Sea to the east and the Bohai Sea to the west (Figure 1). Dalian is located in a warm temperate monsoon climate zone, with an average annual temperature of 10.5 °C, an annual rainfall of 550–950 mm, and total sunshine hours of 2500–2800 h. Dalian has a sea area of 30,100 km2 and a coastline of 2211 km, making it the city with the longest coastline in China. The water quality is clean, with little pollution and less floating mud. The salinity of Dalian coastal waters is relatively stable, and the vertical difference and the difference in the distance from the coast are relatively small, generally in the range of 30–32. The waters are rich in nutrient salts, stable in physical and chemical environmental elements, and abundant in marine resources. This natural advantage provides a good environment for mariculture organisms to reproduce and survive. Study area A was located in the Lvshunkou District of Dalian city, and study area B was located in the Jinzhou District of Dalian city (as a validation area for the method’s extension). These two study areas adopt the culture method of fixed floating valves, and the culture types are mainly wakame and kelp of algal culture.

2.2. Data

In order to classify the marine area used for offshore seaweed aquaculture, Sentinel-2 optical remote sensing images covering the study area during the growth cycle of kelp and wakame were selected as the data source in this study (European Space Agency (ESA), https://scihub.copernicus.eu/dhus/ (accessed on 8 June 2021)), mainly 10 Sentinel 2A/B Level-1C images without cloud impact between 2017 and 2018 (Table 1).
The data collection was carried out from November 2016 to June 2018 using an ASD Field Spec spectrometer in the coastal marine area in the south of Lvshunkou District, Dalian. The spectral and coordinate measurements of 25 sampling points in the aquaculture area were collected, including 9 sampling points for kelp and 16 sampling points for wakame.

3. Methods

3.1. Research Route

This study was carried out according to the technical route shown in Figure 2. The main workflow was as follows: to begin with, using the measured spectral data of the seaweed aquaculture area and the Sentinel-2 remote sensing data as the data source, we constructed the normalized difference aquaculture water index (NDAWI) by adopting the spectral analysis method to analyze the spectral characteristics of the two different species grown in the area and the surrounding seawater. Next, the NDAWI dataset was created based on this spectral feature index, and the amplification of the NDAWI dataset was realized by combining the improved deep convolution generative adversarial network (DCGAN) algorithm. Finally, the precise classification of the different types of algal aquaculture in the study area was realized on the basis of the amplified sample dataset by utilizing the semantic segmentation model.

3.2. Calculation of Spectral Characteristic Index

The spectral characteristic index used in this study was the normalized difference aquaculture water index (NDAWI). The development of this spectral characteristic index combines the reflectance spectral curves of different bands of remote sensing data and the reflectance spectral curves of measured spectral data, taking into account the spectral characteristic changes during the growth period of algae. It can better meet the requirements of the fine classification of different types of aquacultural areas. For details, please refer to the research results of Zhang et al. [36]. The NDAWI employs the blue, green, red, and short-wave infrared bands. The blue and green bands have high reflectivity, and the red and short-wave infrared bands have low reflectivity. The formula for the NDAWI is as follows:
N D A W I = B 2 + B 3 B 4 + B 5 B 2 + B 3 + B 4 + B 5
where B2 represents the blue band, B3 represents the green band, B4 represents the red band, and B5 represents the short-wave infrared band.

3.3. Sample Amplification Based on Improved DCGAN

3.3.1. Improvement of DCGAN

  • Improvement of the Loss Function
Although the DCGAN network improves the network performance, to a certain extent, by introducing a convolutional neural network, the composition of the loss function is still based on Jensen–Shannon divergence, which causes the DCGAN to still have the problem of gradient disappearance during training [37]. Therefore, the Wasserstein distance was introduced to replace the original loss function of the DCGAN so as to better play the role of the discriminator to distinguish between true and false and improve the ability of the generator to generate images. The definition of the Wasserstein distance [38] is shown in Formula (2):
W P r , P g = inf γ ~ Π ( P r ,   P g ) E x , y ~ γ | | x y | |
where  P r  and  P g  represent the distribution of the real and the generated sample data, respectively;  Π ( P r , P g )  represents all possible joint distributions of  P r  and  P g x  and  y  are the samples from each possible joint distribution  γ , and  | | x y | |  is the distance of the samples.
According to the Wasserstein distance, the loss function of the discriminant network of the improved DCGAN is shown in Formula (3):
L D = E x ~ P r D ( x ) + E x ~ ~ P g D x ~ + λ E x ^ ~ P x ^ x ^ D ( x ^ ) 2 1 2
where  D ( x )  is the output of the real data through the discriminator;  λ  is the penalty coefficient;  x ^  is the interpolation between the real data and the generated data; and  x ^ D ( x ^ )  is the gradient for the interpolation by the discriminator output.
  • Improvement of the Network Structure
Since the NDAWI dataset of the study area to be amplified consisted of 10 bands and was affected by the size of the aquacultural area and the spatial resolution of Sentinel-2, the design of the network parameters was implemented on the basis of the DCGAN structure combined with the characteristics of the NDAWI algal aquaculture dataset in order to extract deeper image features. Figure 3 shows the improved network structure of the DCGAN generator. To start with, the input random variable obeying the uniform distribution was transformed into a 4 × 4 × 512 image through the fully connected layer. Then, it passed through 4 convolutional layers sequentially. For the first 3 convolutional layers, the number of channels was halved, and the length and width were doubled for each convolutional layer passed. For the 4th convolutional layer, only the number of channels was changed without changing the image size. A final image of the size 32 × 32 × 10 was generated. In addition, the transposed convolution kernel had a stride of 2, as well as a size of 3 × 3. A batch normalization (BN) layer was added after each convolutional layer to prevent the gradient from disappearing. Except for the last layer using the Tanh function, the rest were activated using the ReLU function.
Figure 4 shows the network structure of the improved DCGAN discriminator. At first, an image of 32 × 32 × 10 generated by the generator was input. Then, a feature map of 4 × 4 × 512 was obtained by extracting features through 4 convolutional layers. In the end, a probability value of an image true and false judgment was output through the fully connected layer, and the parameters of the generator network and the discriminator network were updated according to the judgment probability value. In addition, each kernel had a size of 5 × 5 and a stride of 2. The BN layer was added after each convolutional layer in the discriminator network. Except for the last layer, which did not use the activation function, the other layers used the LeakyReLU function.

3.3.2. Amplification of NDAWI Dataset Based on Improved DCGAN

Because of the large extent of the study area, in situ field surveys cannot cover the whole area. Thus, the NDAWI dataset was produced based on the limited observation data, and 308 samples of 32 × 32 × 10 were created. Among them, since the number of original training samples was not sufficient, data enhancement was needed, including the up and down flip, left and right flip, clockwise 90° rotation, clockwise 180° rotation, and clockwise 270° rotation of the samples. After the enhancement, there were 1848 training samples in the Lvshunkou offshore aquaculture area.
The quantitative evaluation of the sample amplification results was performed by calculating the relevant statistics of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) indexes of the amplified sample images. The PSNR formula is as follows:
M S E = i = 0 M 1 j = 0 N 1 ( I 0 i , j I ( i , j ) ) 2 M × N
P S N R = 10 l o g M A X f 2 M S E
The MSE is the mean square error of the corresponding pixels of the training sample and the generated sample image, where  I 0 i , j  and  I ( i , j )  are the pixel values at  i , j  in the training sample and the generated sample image, respectively;  M × N  represents the number of pixels in the image; and  M A X f  represents the maximum value of the image pixels.
The formula for the SSIM is as follows:
S S I M ( a , b ) = ( 2 u a u b + C 1 ) ( 2 σ a b + C 2 ) ( u a 2 + u b 2 + C 1 ) ( σ a 2 + σ b 2 + C 2 )
For all pixels of image  a  and image  b , the average value is represented by  u a  and  u b , and the variance is represented by  σ a 2  and  σ b 2 , respectively.  σ a b  is the covariance of the image  a  and image  b  pixels. The value range of the SSIM is  0 ,   1 . If two images are the same, the SSIM value is 1; if they are completely different, the SSIM value is 0.

3.4. Construction of Classification Model for Offshore Seaweed Aquaculture Farms

3.4.1. Classification Model of Marine Seaweed Aquaculture Based on UNet

The structure of the UNet model [39] is shown in Figure 5. The left half of the “U-shaped” structure was the encoder of the UNet network. It used a 5-layer repeated convolution operation to extract features from the input image. Each layer first performed a 3 × 3 convolution on the feature map and used the ReLU function for processing, and then the feature map was subjected to a 2 × 2 maximum pooling operation to narrow the feature map. The right half of the “U-shaped” structure was divided into the decoder part, and UNet performed five upsamplings to recover the size of the feature map. Each upsampling first used a deconvolution to restore the resolution of the feature map, and the feature map obtained in this part was spliced with the feature map extracted from the corresponding depth in the encoding stage. Then, the spliced feature map was subjected to 2 times 3 × 3 convolution and activated by the ReLU function. This skip connection method allowed the UNet network to take into account the deep feature information extracted in the encoding stage and the low-level detail features in the decoding stage.

3.4.2. Classification Model of Marine Seaweed Aquaculture Based on DeepLabv3

The main structure of the DeepLabv3 network consisted of three parts [40], as shown in Figure 6. The first part was the basic network, which used ResNet to extract high-level semantic features of images. These features had different scales and levels of semantic information. The last block in the original ResNet contained a hole convolution with rate = 2. The second part was the atrous spatial pyramid pooling (ASPP) structure, which used dilated convolutions with different rates (6, 12, and 18) to convolve the output results of the previous layer to obtain multiscale information. In addition, broader contextual information was captured by adding a global average pooling layer. In the last part, the features of each branch of the ASPP were combined into a single vector, and then a 1 × 1 convolution was used to convolve the output, and the feature map was restored to the original image size by upsampling.

3.4.3. Classification Model of Marine Seaweed Aquaculture Based on SegNet

The SegNet model structure is shown in Figure 7, which includes the left encoding part and the right decoding part [41]. The coding part had 13 convolutional layers and 5 pooling layers. The image was input into the network, and the corresponding feature map information was obtained through 3 × 3 convolution, normalization, ReLU, and other operations in the convolution layer. After convolution, the feature map was compressed by the maximum pooling layer. With each pooling, the feature map become 1/2 of the original, and a total of 5 poolings were performed. The resolution of the feature map output by the encoding part was reduced to 1/32 of the input image. In the process of feature extraction, the number of calculations was reduced, and the receptive field was expanded. The decoding part had 13 convolutional layers, 5 upsampling layers, and the last Softmax layer. The upsampling used the pooling layer index to return the eigenvalues to the original position, the rest was filled with 0, and the dense feature map was obtained after convolution. Upsampling enlarged the reduced feature map in a ratio of two times, and the feature map after 5 deconvolution operations restored the original input size. The last layer of the fifth layer was a 1 × 1 convolution layer to classify the image to obtain the output result.

3.5. Accuracy Assessment

Precision, kappa index, F1 score, recall, and overall accuracy (OA) were chosen to quantitatively evaluate the recognition accuracy of the deep learning semantic segmentation model for seaweed aquaculture area. The calculation formulas for each accuracy evaluation index are as follows:
O A = T P + T N T N + T P + F N + F P × 100 %
K a p p a = T P + T N T N + F P × T N + F N + F N + T P × ( F P + T P )
R e c a l l = T P T P + F N × 100 %
P r e c i s i o n = T P T P + F P × 100 %
F 1 = 2 × R e c a l l × P r e c i s i o n R e c a l l + P r e c i s i o n
where TP represents the number of pixels in the classification results of the composite aquaculture area, TN represents the number of pixels in the true counterexample, FP represents the number of pixels in the false counterexample, and FN represents the number of pixels in the false counterexample.

4. Results

4.1. Amplification Comparison of the NDAWI Dataset Based on the Improved DCGAN

The training environment configuration’s information for this study was an AMD Ryzen 7 4800H for the CPU and an NVIDIA RTX2060 for the GPU. The software platform was Pytorch 1.12.1 + Python 3.8 + PyCharm 2021.3.2. The network training had 600 iterations and 64 training batches. The network could complete the training of 64 samples each time, and the learning rate was 0.0002. The NDAWI dataset of the Lvshunkou aquaculture area was divided into three categories, namely, kelp aquaculture patches + aquaculture-free sea area, wakame aquaculture patches + aquaculture-free sea area, and aquaculture-free sea area. The three datasets were input into the improved DCGAN.
As shown in Figure 8a–c, the NDWAI datasets of the training and generation samples show that the NDWAI time series curves of the different categories had approximately the same shape. The NDWAI time series curves of the kelp training and generated samples followed a similar trend of a letter “W” throughout the growth cycle, with a peak from the beginning of February to the end of March. The time series curves of the NDWAI for the training and generated samples of wakame resembled the letter “V” throughout the growth cycle, with a trough in February. The time series curves of the NDWAI for the training and generated samples in the culture-free waters showed a trend throughout the growth cycle influenced by the culture in the study area, with high NDWAI values in the early part of the cycle and flattening out in the later part of the cycle.
Table 2 shows the SSIM and PSNR of the different dataset generation algorithms. The results show that the improved DCGAN produced different datasets with the highest SSIM with respect to the GAN and DCGAN and the lowest PSNR compared to the GAN and DCGAN.

4.2. Selection and Analysis of the Before-and-After DCGAN Aquaculture Sea Classification Models

The NDAWI dataset from before and after the improved DCGAN amplification was used for the DCGAN, both before and after the aquaculture sea area classification model selection experiments. Based on the UNet, DeepLabv3, and SegNet semantic segmentation models used to classify the offshore aquaculture area of Lvshunkou, we investigated the sample amplification effect of the improved DCGAN and the selection of the semantic segmentation models. The size of the Lvshunkou offshore aquaculture area was 2400 × 900, and the size of each image element was 10 m × 10 m.
The NDAWI dataset of the Lvshunkou offshore aquaculture area included 1848 samples before augmentation and 3648 samples after amplification, and both datasets were constructed according to a ratio of 3:2 for the test samples, as well as the training samples (Figure 9). The training samples were input into the UNet, DeepLabv3, and SegNet seaweed aquaculture classification models, and their accuracies were evaluated.
In the Lvshunkou aquaculture area, the highest classification accuracy for the UNet model before and after the NDAWI dataset amplification was achieved (Table 3). The OA, kappa, and precision of the UNet model before and after the NDAWI dataset amplification were improved by 3.84%, 0.022, and 4.17%, respectively. The results show that the improved DCGAN was effective in amplifying the NDAWI classification dataset in the aquaculture area of Lvshunkou, which was reflected in the classification results of the different aquaculture sea area models. In addition, based on the amplified NDAWI dataset, the best classification results were obtained for the aquaculture sea area by using the UNet model.

4.3. Classification of Aquaculture Farms Based on UNet Model

A deep learning model extracts the features of different marine aquaculture areas in an image by learning the samples and labels of different types of aquaculture sea areas. Accordingly, the quantity and quality of the samples and labels directly affect the training effect of the model and the recognition table selection. Based on the manually tagged classification labels, the samples were amplified using the improved DCGAN. The amplified NDAWI dataset was then used to classify the Lvshunkou aquaculture areas. Finally, a comparative experiment was conducted with the NDAWI dataset that had not been amplified using the improved DCGAN to analyze the classification effect of the seaweed aquaculture area dataset before and after the amplification of the UNet model-based dataset.
Figure 10a–c show the results of the classification of the seaweed farms of each model in the offshore aquaculture area of Lvshunkou. In terms of local details, the classification results of the UNet model were better than those of the other models. In the classification results of the DeepLabv3 model, there was a misclassification of some kelp aquaculture areas into wakame culture areas. In the classification result map of the SegNet model, there was the misclassification of part of the kelp aquaculture area into free marine area.
The results show that the NDAWI dataset with the improved DCGAN amplification (Figure 11e) was better classified than the NDAWI dataset without amplification (Figure 11a), which was primarily demonstrated by fewer cases of incorrectly identifying the cultured seawaters as culture-free seawaters, substantially improved misclassification, and a significant reduction in broken spots. From the detailed comparison figures, the UNet model constructed based on the unamplified NDAWI dataset misidentified some kelp-farmed sea areas as culture-free sea areas (Figure 11b,f). The UNet model misidentified some kelp-farmed sea areas as wakame-farmed sea areas (Figure 11c,g). The UNet model misidentified the kelp-farmed and wakame-farmed sea areas as culture-free sea areas (Figure 11d,h).

5. Discussion

5.1. Demonstration of the Model’s Application

In order to verify the generalization of the model, the Jinzhou offshore aquaculture area (study area B, Figure 1), which is farther away from the Lvshunkou offshore aquaculture area, was selected for this study. Since the water color conditions in the two zones are different, it was necessary to train the Lvshunkou aquaculture area separately from the Jinzhou aquaculture area. In the Jinzhou aquaculture area, 568 images of the 32 × 32 × 10 NDAWI sample set were created. Next, the amplification and analysis of the NDAWI dataset were completed on the basis of the improved DCGAN. The processing was consistent with the Lvshunkou aquaculture area.
Table 4 illustrates the SSIM and PSNR of the different dataset generation algorithms. The results show that the improved DCGAN produced the highest SSIM for the different datasets with respect to the GAN and DCGAN and the lowest PSNR compared to the GAN and DCGAN.
The NDAWI dataset of the Jinzhou aquaculture area included 568 samples before amplification and a total of 6408 samples after amplification. Both the before and after datasets were constructed in a 3:2 ratio for the test samples, as well as the training samples. The training samples were input into the UNet, DeepLabv3, and SegNet seaweed aquaculture classification models, and the accuracy was evaluated.
From Table 5, it can be seen that for the Jinzhou aquaculture area, the UNet model had the highest classification accuracy before and after the amplification of the NDAWI dataset. The OA, Kappa, and Precision of the UNet model before and after the amplification of the NDAWI dataset were improved by 4.43%, 0.032, and 4, respectively. The results showed that the improved DCGAN had the best classification effect for the Jinzhou aquaculture area based on the amplified NDAWI dataset. Moreover, based on the amplified NDAWI dataset, the UNet model was the most effective in classifying the aquaculture sea area using the UNet model.
The amplified NDAWI dataset was used to classify the Jinzhou offshore aquaculture area, and a comparative experiment was conducted with the unamplified NDAWI dataset to analyze the classification effect before and after the amplification of the seaweed aquaculture area dataset based on the UNet model.
Figure 12a–c present the results of the classification of each modeled aquaculture sea area in the Jinzhou aquaculture zone. From the observation of local details, the UNet model had better classification results. In the classification results of the DeepLabv3 model, there existed the misclassification of part of the aquaculture-free area into the wakame aquaculture area. In the classification result map of the SegNet model, there was the misclassification of part of the kelp aquaculture area into free marine area.
The classification results (Figure 13a,e) of the NDAWI dataset amplified by the improved DCGAN indicated the more accurate identification of the boundaries between the cultured and the uncultured marine areas, with fewer overall fragmentation patches. From the detailed comparison diagram, the UNet model constructed based on the unamplified NDAWI dataset incorrectly identified the aquaculture-free sea area as kelp aquaculture sea area (Figure 13b,f). The UNet model was unable to accurately distinguish the boundary between the kelp aquaculture sea area and the aquaculture-free sea area and mistakenly identified the kelp aquaculture sea area as the aquaculture-free sea area (Figure 13c,g). The UNet model misidentified the aquaculture-free sea area as the wakame aquaculture sea area (Figure 13d,h).
In summary, based on the improved DCGAN amplified seaweed aquaculture area dataset, the utilization of the UNet model displayed a high-level of improvement in the recognition effect of every type of aquaculture sea area, as well as the boundary identification of aquaculture sea area and aquaculture-free sea area. The method in this study has good applicability.

5.2. Limitations and Prospects

The remote sensing images used in this study belonged to the Sentinel-2 mission. The next step is to use multiresolution images provided by multisource satellites (GF-2, ZY-3) to classify or detect aquaculture areas. We can try to use radar images to detect macroalgae growing beneath the water’s surface, and we can also use drones to be maneuverable and flexible in acquiring information to fill in the missing time periods of satellite imagery. With the increase in sample types, the problem of an imbalance among aquaculture area, aquaculture type, and seawater samples has also been alleviated.
At present, this study used UNet, DeepLabv3, and SegNet semantic segmentation models to compare the effects after classification and screened the UNet model as the optimal model. Future work should use a variety of deep learning methods for remote sensing classification of the sea for aquaculture and make innovations based on existing methods. Moreover, the training and learning framework, the accuracy of the target identification and detection, and the generalization ability of the model in areas with richer types of mariculture and larger research scope should be improved. Furthermore, mariculture information with high precision and stability needs to be obtained.
The work of this study focused on the accurate extraction of information from different algae mariculture areas. In the future, it is necessary to further study the spatial and temporal dynamic distribution and change information of mariculture with a larger research scope and a longer time interval, as well as to analyze the driving factors of the changes. Moreover, the spatial and temporal differentiation of the seaweed culture patterns should be scientifically and quantitatively summarized, which will be conducive to promoting the green and healthy development of the mariculture industry. Meanwhile, the discussion of the distribution pattern of algae culture will also help in accurately grasping the current situation of the use of sea resources and in protecting the marine ecological environment. This work can provide support for government departments to use the sea accurately, use the sea scientifically, and supervise efficiently.

6. Conclusions

The insufficient number of images from satellite remote sensing collections that coincide with on-site observations of marine aquaculture areas, as well as the characteristics of manual markup, such as high cost and low efficiency, result in too few data samples, which limits the generalization ability of trained classification or detection models. In this study, combined with deep learning technology, we used the improved DCGAN to expand the samples of seaweed aquaculture images in order to improve the generalization ability of classification models and solve the current problem of too few samples of remote sensing images. The DCGAN loss function and generator network structure were improved according to the Wasserstein distance, thereby enhancing the image generation quality of DCGAN, stabilizing the training of the generative adversarial network and minimizing pattern collapse.
Image classification and recognition is basic research content in the field of remote sensing. With the development of machine learning, deep learning has been increasingly widely applied in the field of image classification with the characteristics of intelligence and efficiency. Aiming at accurately extracting the spatial distribution of aquaculture areas, this study designed a classification model applicable to offshore seaweed farms based on the NDAWI amplified dataset of the improved DCGAN using the UNet, DeepLabv3, and SegNet semantic segmentation models. On the basis of the classification effects of the three models, the optimal classification model for aquaculture sea areas was selected. The UNet classification model based on the improved DCGAN amplified NDAWI dataset achieved excellent performance. This study combines deep learning technology and actual production requirements to realize the precise extraction of different types of aquaculture areas in marine aquaculture areas, which is a useful exploration and application for the scientific management of marine aquaculture.

Author Contributions

Conceptualization, H.Z., H.L. and C.Z.; methodology, H.Z., Z.L. and C.Z.; software, Z.L. and C.Z.; validation, C.Z.; formal analysis, Z.L.; investigation, Z.L.; data curation, C.Z. and Y.Y.; writing—original draft preparation, Z.L., G.Z. and C.Z.; writing—review and editing, Z.L., C.Z. and H.Z.; visualization, Z.L. and Y.Z.; supervision, H.Z.; funding acquisition, H.Z. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the National Natural Science Foundation of China (41971339) and the SDUST Research Fund (2019TDJH103).

Data Availability Statement

Not applicable.

Acknowledgments

We thank the following institutions for their kind assistance with this research: the European Space Agency (ESA) for providing the Sentinel-2 data and the National Marine Environmental Monitoring Center of China for providing the measured data. We would like to thank the editors and anonymous reviewers for their valuable comments and suggestions on this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ottinger, M.; Clauss, K.; Kuenzer, C. Aquaculture: Relevance, Distribution, Impacts and Spatial Assessments—A Review. Ocean Coast. Manag. 2016, 119, 244–266. [Google Scholar] [CrossRef]
  2. Liu, W.; Zhai, W.; Fan, S.; Cao, Y.; Guo, L. Current situation assessment and policy suggestion of the sea use for aquaculture all over the country. Ocean Dev. Manag. 2022, 39, 89–93. [Google Scholar] [CrossRef]
  3. Gao, Q.; Zhang, G.; Dong, S. Reviews on cage-culture ecology. Period. Ocean Univ. China 2019, 49, 7–17. [Google Scholar] [CrossRef]
  4. Ministry of Agriculture and Rural Affairs of the People’s Republic of China. Several opinions of the Ministry of Agriculture and Rural Affairs on accelerating the green development of the aquaculture industry. Announc. Minist. Agric. Rural Aff. People’s Repub. China 2019, 24–28. [Google Scholar]
  5. Kramer, S.J.; Siegel, D.A.; Maritorena, S.; Catlett, D. Modeling Surface Ocean Phytoplankton Pigments from Hyperspectral Remote Sensing Reflectance on Global Scales. Remote Sens. Environ. 2022, 270, 112879. [Google Scholar] [CrossRef]
  6. Mohseni, F.; Saba, F.; Mirmazloumi, S.M.; Amani, M.; Mokhtarzade, M.; Jamali, S.; Mahdavi, S. Ocean Water Quality Monitoring Using Remote Sensing Techniques: A Review. Mar. Environ. Res. 2022, 180, 105701. [Google Scholar] [CrossRef]
  7. Salgado-Hernanz, P.M.; Bauzà, J.; Alomar, C.; Compa, M.; Romero, L.; Deudero, S. Assessment of Marine Litter through Remote Sensing: Recent Approaches and Future Goals. Mar. Pollut. Bull. 2021, 168, 112347. [Google Scholar] [CrossRef]
  8. Alexandridis, T.K.; Topaloglou, C.A.; Lazaridou, E.; Zalidis, G.C. The Performance of Satellite Images in Mapping Aquacultures. Ocean Coast. Manag. 2008, 51, 638–644. [Google Scholar] [CrossRef]
  9. Sun, X.; Su, F.; Zhou, C.; Xue, Z. Analyses on spatial-temporal changes in aquaculture land in coastal areas of the Pearl River Estuarine. Resour. Sci. 2010, 32, 71–77. [Google Scholar]
  10. Virdis, S.G.P. An Object-Based Image Analysis Approach for Aquaculture Ponds Precise Mapping and Monitoring: A Case Study of Tam Giang-Cau Hai Lagoon, Vietnam. Environ. Monit. Assess. 2014, 186, 117–133. [Google Scholar] [CrossRef]
  11. Ottinger, M.; Clauss, K.; Kuenzer, C. Large-Scale Assessment of Coastal Aquaculture Ponds with Sentinel-1 Time Series Data. Remote Sens. 2017, 9, 440. [Google Scholar] [CrossRef]
  12. Wang, M.; Cui, Q.; Wang, J.; Ming, D.; Lv, G. Raft Cultivation Area Extraction from High Resolution Remote Sensing Imagery by Fusing Multi-Scale Region-Line Primitive Association Features. ISPRS J. Photogramm. Remote Sens. 2017, 123, 104–113. [Google Scholar] [CrossRef]
  13. Duan, Y.; Li, X.; Zhang, L.; Liu, W.; Liu, S.; Chen, D.; Ji, H. Detecting Spatiotemporal Changes of Large-Scale Aquaculture Ponds Regions over 1988–2018 in Jiangsu Province, China Using Google Earth Engine. Ocean Coast. Manag. 2020, 188, 105144. [Google Scholar] [CrossRef]
  14. Kurekin, A.A.; Miller, P.I.; Avillanosa, A.L.; Sumeldan, J.D.C. Monitoring of Coastal Aquaculture Sites in the Philippines through Automated Time Series Analysis of Sentinel-1 SAR Images. Remote Sens. 2022, 14, 2862. [Google Scholar] [CrossRef]
  15. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  16. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep Learning in Remote Sensing Applications: A Meta-Analysis and Review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  17. Liu, Y.; Yang, X.; Wang, Z.; Lu, C. Extracting ralt aquaculture areas in Sanduao from high-resolution remote sensing images using RCF. Haiyang Xuebao 2019, 41, 119–130. [Google Scholar]
  18. Zheng, Z.; Fan, H.; Wang, J.; Wu, Y.; Wang, B.; Huang, T. An improved double-branch network method for intelligently extracting marine cage culture area. Remote Sens. Land Resour. 2020, 32, 120–129. [Google Scholar]
  19. Sui, B.; Jiang, T.; Zhang, Z.; Pan, X.; Liu, C. A Modeling Method for Automatic Extraction of Offshore Aquaculture Zones Based on Semantic Segmentation. ISPRS Int. J. Geo-Inf. 2020, 9, 145. [Google Scholar] [CrossRef]
  20. Aggarwal, C.C. Neural Networks and Deep Learning: A Textbook; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; ISBN 978-3-319-94462-3. [Google Scholar]
  21. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N. Prabhat Deep Learning and Process Understanding for Data-Driven Earth System Science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef]
  22. Roh, Y.; Heo, G.; Whang, S.E. A Survey on Data Collection for Machine Learning: A Big Data—AI Integration Perspective. IEEE Trans. Knowl. Data Eng. 2021, 33, 1328–1347. [Google Scholar] [CrossRef]
  23. Hao, X.; Liu, L.; Yang, R.; Yin, L.; Zhang, L.; Li, X. A Review of Data Augmentation Methods of Remote Sensing Image Target Recognition. Remote Sens. 2023, 15, 827. [Google Scholar] [CrossRef]
  24. Lalitha, V.; Latha, B. A Review on Remote Sensing Imagery Augmentation Using Deep Learning. Mater. Today Proc. 2022, 62, 4772–4778. [Google Scholar] [CrossRef]
  25. Feng, Q.; Chen, B.; Li, G.; Yao, X.; Gao, B.; Zhang, L. A review for sample datasets of remote sensing imagery. Natl. Remote Sens. Bull. 2022, 26, 589–605. [Google Scholar] [CrossRef]
  26. Zhu, Q.; Zhong, Y.; Zhao, B.; Xia, G.-S.; Zhang, L. Bag-of-Visual-Words Scene Classifier With Local and Global Features for High Spatial Resolution Remote Sensing Imagery. IEEE Geosci. Remote Sens. Lett. 2016, 13, 747–751. [Google Scholar] [CrossRef]
  27. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2014; Volume 27. [Google Scholar]
  28. Brock, A.; Donahue, J.; Simonyan, K. Large Scale GAN Training for High Fidelity Natural Image Synthesis 2019. arXiv 2018, arXiv:1809.11096. [Google Scholar]
  29. Gao, Q.; Jiang, Z. Amplification of small sample library based on GAN equivalent model. Electr. Meas. Instrum. 2019, 56, 76–81. [Google Scholar] [CrossRef]
  30. Shaham, T.R.; Dekel, T.; Michaeli, T. SinGAN: Learning a Generative Model From a Single Natural Image. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4570–4580. [Google Scholar]
  31. Office of Local Chronicles Compilation of Dalian City. Dalian City Chronicle: Aquatic Chronicle; Dalian Publishing House: Dalian, China, 2004; ISBN 7-80684-218-7. [Google Scholar]
  32. Xing, Q.; An, D.; Zheng, X.; Wei, Z.; Wang, X.; Li, L.; Tian, L.; Chen, J. Monitoring Seaweed Aquaculture in the Yellow Sea with Multiple Sensors for Managing the Disaster of Macroalgal Blooms. Remote Sens. Environ. 2019, 231, 111279. [Google Scholar] [CrossRef]
  33. Bell, T.W.; Nidzieko, N.J.; Siegel, D.A.; Miller, R.J.; Cavanaugh, K.C.; Nelson, N.B.; Reed, D.C.; Fedorov, D.; Moran, C.; Snyder, J.N.; et al. The Utility of Satellites and Autonomous Remote Sensing Platforms for Monitoring Offshore Aquaculture Farms: A Case Study for Canopy Forming Kelps. Front. Mar. Sci. 2020, 7, 520223. [Google Scholar] [CrossRef]
  34. Langford, A.; Waldron, S.; Sulfahri; Saleh, H. Monitoring the COVID-19-Affected Indonesian Seaweed Industry Using Remote Sensing Data. Mar. Policy 2021, 127, 104431. [Google Scholar] [CrossRef]
  35. Cheng, J.; Jia, N.; Chen, R.; Guo, X.; Ge, J.; Zhou, F. High-Resolution Mapping of Seaweed Aquaculture along the Jiangsu Coast of China Using Google Earth Engine (2016–2022). Remote Sens. 2022, 14, 6202. [Google Scholar] [CrossRef]
  36. Zhang, C.; Gao, L.; Lu, Z.; Liu, H.; Zhu, H.; Tang, K. Classification of Aquaculture Waters through Remote Sensing on the Basis of a Time-Series Water Index. J. Coast. Res. 2022, 38, 1148–1162. [Google Scholar] [CrossRef]
  37. Ren, K.; Meng, L.; Fan, C.; Wang, P. Least Squares DCGAN Based Semantic Image Inpainting. In Proceedings of the 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), Nanjing, China, 23–25 November 2018; pp. 890–894. [Google Scholar]
  38. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved Training of Wasserstein GANs. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  39. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  40. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation 2017. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  41. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
Figure 1. Location of the study area: (a) Dalian; (b) study area A, Lvshunkou offshore aquaculture area; (c) study area B, Jinzhou offshore aquaculture area.
Figure 1. Location of the study area: (a) Dalian; (b) study area A, Lvshunkou offshore aquaculture area; (c) study area B, Jinzhou offshore aquaculture area.
Remotesensing 15 04423 g001
Figure 2. Workflow of the remote sensing classification of offshore seaweed aquaculture farms on sample dataset amplification and semantic segmentation model.
Figure 2. Workflow of the remote sensing classification of offshore seaweed aquaculture farms on sample dataset amplification and semantic segmentation model.
Remotesensing 15 04423 g002
Figure 3. Structure diagram of the improved DCGAN generator network. The generator network has a total of five layers, including one fully linked layer (first cuboid) and four convolutional layers (last four cuboids). The input is a 100-dimensional random noise vector that obeys a normal distribution, and the output is an image of 32 × 32 × 10.
Figure 3. Structure diagram of the improved DCGAN generator network. The generator network has a total of five layers, including one fully linked layer (first cuboid) and four convolutional layers (last four cuboids). The input is a 100-dimensional random noise vector that obeys a normal distribution, and the output is an image of 32 × 32 × 10.
Remotesensing 15 04423 g003
Figure 4. Structure diagram of the improved DCGAN discriminator network. The discriminator network has a total of five layers, including four convolutional layers (first four cuboids) and one fully linked layer (fifth cuboid). The input is an image of 32 × 32 × 10, and the output is a single predictive value (probability that the image of the input discriminator is identically distributed with the dataset image).
Figure 4. Structure diagram of the improved DCGAN discriminator network. The discriminator network has a total of five layers, including four convolutional layers (first four cuboids) and one fully linked layer (fifth cuboid). The input is an image of 32 × 32 × 10, and the output is a single predictive value (probability that the image of the input discriminator is identically distributed with the dataset image).
Remotesensing 15 04423 g004
Figure 5. Structure of the UNet aquaculture classification model, consisting of three parts: encoder (left column), converter (bottom line and “skip connection”), and decoder (right column).
Figure 5. Structure of the UNet aquaculture classification model, consisting of three parts: encoder (left column), converter (bottom line and “skip connection”), and decoder (right column).
Remotesensing 15 04423 g005
Figure 6. Overview of DeepLabv3 architecture consisting of three stages: basic network, atrous spatial pyramid pooling (ASPP) module, and post-processing stage.
Figure 6. Overview of DeepLabv3 architecture consisting of three stages: basic network, atrous spatial pyramid pooling (ASPP) module, and post-processing stage.
Remotesensing 15 04423 g006
Figure 7. Structure of the SegNet aquaculture classification model. The model comprises two steps: an encoding process that compresses the images, followed by a decoding process that restores the images.
Figure 7. Structure of the SegNet aquaculture classification model. The model comprises two steps: an encoding process that compresses the images, followed by a decoding process that restores the images.
Remotesensing 15 04423 g007
Figure 8. Comparison of the NDAWI values of the training samples and NDAWI values of the generated samples: (a) kelp; (b) wakame; (c) seawater in the Lvshunkou offshore aquaculture area.
Figure 8. Comparison of the NDAWI values of the training samples and NDAWI values of the generated samples: (a) kelp; (b) wakame; (c) seawater in the Lvshunkou offshore aquaculture area.
Remotesensing 15 04423 g008
Figure 9. (a) Sample images of part of the Lvshunkou offshore aquaculture area and (b) their corresponding labels.
Figure 9. (a) Sample images of part of the Lvshunkou offshore aquaculture area and (b) their corresponding labels.
Remotesensing 15 04423 g009
Figure 10. The classification results of the (a) UNet, (b) DeepLabv3, and (c) SegNet models based on the amplified NDAWI dataset in the Lvshunkou offshore aquaculture area (large) and a detailed comparison (small).
Figure 10. The classification results of the (a) UNet, (b) DeepLabv3, and (c) SegNet models based on the amplified NDAWI dataset in the Lvshunkou offshore aquaculture area (large) and a detailed comparison (small).
Remotesensing 15 04423 g010aRemotesensing 15 04423 g010b
Figure 11. Classification result maps of aquaculture use before and after sample amplification in the Lvshunkou offshore aquaculture area (a,e) and detailed comparison maps (b,f;c,g;d,h).
Figure 11. Classification result maps of aquaculture use before and after sample amplification in the Lvshunkou offshore aquaculture area (a,e) and detailed comparison maps (b,f;c,g;d,h).
Remotesensing 15 04423 g011
Figure 12. The classification results of the (a) UNet, (b) DeepLabv3, and (c) SegNet models based on the amplified NDAWI dataset in the Jinzhou offshore aquaculture area (large) and a detailed comparison(s) (small).
Figure 12. The classification results of the (a) UNet, (b) DeepLabv3, and (c) SegNet models based on the amplified NDAWI dataset in the Jinzhou offshore aquaculture area (large) and a detailed comparison(s) (small).
Remotesensing 15 04423 g012
Figure 13. Classification result maps of aquaculture use before and after sample amplification in the Jinzhou offshore aquaculture area (a,e) and detailed comparison maps (b,f;c,g;d,h).
Figure 13. Classification result maps of aquaculture use before and after sample amplification in the Jinzhou offshore aquaculture area (a,e) and detailed comparison maps (b,f;c,g;d,h).
Remotesensing 15 04423 g013
Table 1. Sentinel-2A/B images data list.
Table 1. Sentinel-2A/B images data list.
NumberLvshunkou Offshore Aquaculture AreaJinzhou Offshore Aquaculture Area
Satellite SensorDate ObtainedSatellite SensorDate Obtained
1Sentinel-2B13 December 2017Sentinel-2B30 November 2017
2Sentinel-2B2 January 2018Sentinel-2B19 January 2018
3Sentinel-2B1 February 2018Sentinel-2B8 February 2018
4Sentinel-2A16 February 2018Sentinel-2A5 March 2018
5Sentinel-2A5 March 2018Sentinel-2B30 March 2018
6Sentinel-2A25 March 2018Sentinel-2B9 April 2018
7Sentinel-2A7 April 2018Sentinel-2B19 April 2018
8Sentinel-2A27 April 2018Sentinel-2B29 April 2018
9Sentinel-2B9 May 2018Sentinel-2A4 May 2018
10Sentinel-2A24 May 2018Sentinel-2A24 May 2018
Table 2. Evaluation indicators for the three categories of data generated in the Lvshunkou offshore aquaculture area.
Table 2. Evaluation indicators for the three categories of data generated in the Lvshunkou offshore aquaculture area.
DatasetIndexGANDCGANImproved DCGAN
The kelp + aquaculture-free sea area datasetSSIM53.82%64.89%78.96%
PSNR51.78%50.36%49.75%
The wakame + aquaculture-free sea area datasetSSIM55.54%66.56%79.28%
PSNR52.96%51.59%50.51%
The aquaculture-free sea area datasetSSIM69.66%77.84%82.11%
PSNR63.12%57.69%48.96%
Table 3. Classification accuracy of each model before and after amplification of the NDAWI classification dataset (Lvshunkou offshore aquaculture area).
Table 3. Classification accuracy of each model before and after amplification of the NDAWI classification dataset (Lvshunkou offshore aquaculture area).
ModelBefore AmplificationAfter Amplification
OA (%)KappaRecall (%)Precision (%)F1OA (%)KappaRecall (%)Precision (%)F1
UNet90.720.88389.9189.580.897494.560.90593.6993.750.9372
DeepLabv389.230.87589.5489.810.896792.120.89491.1390.960.9104
SegNet89.560.87989.6289.250.894393.480.89993.4893.550.9351
Table 4. Evaluation indicators for the three categories of data generated in the Jinzhou offshore aquaculture area.
Table 4. Evaluation indicators for the three categories of data generated in the Jinzhou offshore aquaculture area.
DatasetIndexGANDCGANImproved DCGAN
The kelp + aquaculture-free sea area datasetSSIM55.57%65.58%79.14%
PSNR52.56%49.26%50.69%
The wakame + aquaculture-free sea area datasetSSIM56.23%67.31%80.02%
PSNR54.25%50.65%49.23%
The aquaculture-free sea area datasetSSIM70.23%78.63%83.5%
PSNR63.55%58.85%53.95%
Table 5. Classification accuracy of each model before and after amplification of the NDAWI classification dataset (Jinzhou offshore aquaculture area).
Table 5. Classification accuracy of each model before and after amplification of the NDAWI classification dataset (Jinzhou offshore aquaculture area).
ModelBefore AmplificationAfter Amplification
OA (%)KappaRecall (%)Precision (%)F1OA (%)KappaRecall (%)Precision (%)F1
UNet90.250.88190.3490.890.906594.680.91394.6894.890.9478
DeepLabv389.120.86390.2189.510.898592.310.90192.3191.250.9178
SegNet89.950.87790.1389.650.898993.560.90993.5692.450.9302
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhu, H.; Lu, Z.; Zhang, C.; Yang, Y.; Zhu, G.; Zhang, Y.; Liu, H. Remote Sensing Classification of Offshore Seaweed Aquaculture Farms on Sample Dataset Amplification and Semantic Segmentation Model. Remote Sens. 2023, 15, 4423. https://doi.org/10.3390/rs15184423

AMA Style

Zhu H, Lu Z, Zhang C, Yang Y, Zhu G, Zhang Y, Liu H. Remote Sensing Classification of Offshore Seaweed Aquaculture Farms on Sample Dataset Amplification and Semantic Segmentation Model. Remote Sensing. 2023; 15(18):4423. https://doi.org/10.3390/rs15184423

Chicago/Turabian Style

Zhu, Hongchun, Zhiwei Lu, Chao Zhang, Yanrui Yang, Guocan Zhu, Yining Zhang, and Haiying Liu. 2023. "Remote Sensing Classification of Offshore Seaweed Aquaculture Farms on Sample Dataset Amplification and Semantic Segmentation Model" Remote Sensing 15, no. 18: 4423. https://doi.org/10.3390/rs15184423

APA Style

Zhu, H., Lu, Z., Zhang, C., Yang, Y., Zhu, G., Zhang, Y., & Liu, H. (2023). Remote Sensing Classification of Offshore Seaweed Aquaculture Farms on Sample Dataset Amplification and Semantic Segmentation Model. Remote Sensing, 15(18), 4423. https://doi.org/10.3390/rs15184423

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop