Abstract
The detection of spatial and temporal changes (change detection) in remote sensing images is essential in any decision support system dealing with natural phenomena such as extreme weather, climate change, and floods. In this paper, a new method is proposed to determine the inference-process parameters, namely the boundary point, rule coefficient, defuzzification coefficient, and dependency coefficient, and a new FWADAM+ method is presented to train this set of parameters simultaneously. The initial data are clustered simultaneously within each data group, and this result forms the basis for determining a suitable set of parameters with the FWADAM+ concurrent training algorithm. These results are then inherited by the subsequent data groups to build further complex fuzzy rule systems in a shorter time while preserving the model's efficiency. The weather imagery database of the United States Navy (US Navy) is used for evaluation and comparison with several related methods using the root-mean-squared error (RMSE), the R-squared (R2) measure, and the analysis of variance (ANOVA) model. The experimental results show that the proposed method is up to 30% better than the SeriesNet method, with 10% less processing time.
Data availability
The data that support the findings of this study are available from the US Navy, but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are, however, available from the authors upon reasonable request and with permission of the US Navy.
Notes
The loss function (20) is presented in Appendix A.
References
Liu W, Jie Y, Zhao J, Le YA (2017) Novel method of unsupervised change detection using multi-temporal PolSAR images. Remote Sens 9:1135
Ma W, Wu Y, Gong M, Xiong Y, Yang H, Hu T (2018) Change detection in SAR images based on matrix factorisation and a Bayes classifier. Int J Remote Sens 40:1–26
Singh A (1989) Review article: digital change detection techniques using remotely-sensed data. Int J Remote Sens 10(6):989–1003
Lu D, Mausel P, Brondizio E, Moran E (2004) Change detection techniques. Int J Remote Sens 25(12):2365–2401
Hussain M, Chen D, Cheng A, Wei H, Stanley D (2013) Change detection from remotely sensed images: from pixel-based to object-based approaches. ISPRS J Photogramm Remote Sens 80:91–106
You Y, Cao J, Zhou W (2020) A survey of change detection methods based on remote sensing images for multi-source and multi-objective scenarios. Remote Sens 12(15):2460
Canty MJ (2019) Image analysis, classification, and change detection in remote sensing. Taylor & Francis Group, Abingdon-on-Thames. https://doi.org/10.1201/9780429464348
Shi W, Zhang M, Zhang R, Chen S, Zhan Z (2020) Change detection based on artificial intelligence: state-of-the-art and challenges. Remote Sens 12(10):1688
Zhang M, Zhou Y, Quan W, Zhu J, Zheng R, Wu Q (2020) Online learning for IoT optimization: a Frank-Wolfe Adam-based algorithm. IEEE Internet Things J 7(9):8228–8237
Shen Z, Zhang Y, Lu J, Xu J, Xiao G (2018) SeriesNet: a generative time series forecasting model. In: 2018 international joint conference on neural networks (IJCNN), pp 1–8. https://doi.org/10.1109/IJCNN.2018.8489522
Du B, Ru L, Wu C, Zhang L (2019) Unsupervised deep slow feature analysis for change detection in multi-temporal remote sensing images. IEEE Trans Geosci Remote Sens 57(12):9976–9992
Chu S, Li P, Xia M (2022) MFGAN: multi feature guided aggregation network for remote sensing image. Neural Comput Appl 34(12):10157–10173
Nguyen CH, Nguyen TC, Tang TN, Phan NL (2021) Improving object detection by label assignment distillation. arXiv preprint, arXiv:2108.10520
Daudt RC, Le Saux B, Boulch A, Gousseau Y (2018) Urban change detection for multispectral earth observation using convolutional neural networks. In: IGARSS 2018—2018 IEEE international geoscience and remote sensing symposium. IEEE, pp 2115–2118
Odaudu SN, Umoh IJ, Adedokun EA, Jonathan C (2021) LearnFuse: An efficient distributed big data fusion architecture using ensemble learning technique. In: Misra S, Muhammad-Bello B (eds) Information and communication technology and applications. ICTA 2020. Communications in computer and information science, vol 1350. Springer, Cham, pp 80–92. https://doi.org/10.1007/978-3-030-69143-1_7
Qin D, Zhou X, Zhou W, Huang G, Ren Y, Horan B, He H, Kito N (2018) MSIM: a change detection framework for damage assessment in natural disasters. Expert Syst Appl 97:372–383
Saha S, Bovolo F, Bruzzone L (2019) Unsupervised deep change vector analysis for multiple-change detection in VHR images. IEEE Trans Geosci Remote Sens 57(6):3677–3693
Zhan Y, Fu K, Yan M, Sun X, Wang H, Qiu X (2017) Change detection based on deep siamese convolutional network for optical aerial images. IEEE Geosci Remote Sens Lett 14(10):1845–1849
Cao Z et al (2020) Detection of small changed regions in remote sensing imagery using convolutional neural network. In: IOP conference series earth and environmental science, vol 502, p 012017
Liu R, Wang R, Huang J, Li J, Jiao L (2021) Change detection in SAR images using multiobjective optimization and ensemble strategy. IEEE Geosci Remote Sens Lett 18(9):1585–1589
Celik T (2009) Unsupervised change detection in satellite images using principal component analysis and k-means clustering. IEEE Geosci Remote Sens Lett 6(4):772–776
Saha PK, Logofatu D (2021) Efficient approaches for density-based spatial clustering of applications with noise. In: Maglogiannis I, Macintyre J, Iliadis L (eds) Artificial intelligence applications and innovations. AIAI 2021. IFIP advances in information and communication technology, vol 627. Springer, Cham, pp 184–195. https://doi.org/10.1007/978-3-030-79150-6_15
Wu C, Peng Q, Lee J, Leibnitz K, Xia Y (2021) Effective hierarchical clustering based on structural similarities in nearest neighbor graphs. Knowl-Based Syst 228:107295
Ghosh S, Dubey SK (2013) Comparative analysis of k-means and fuzzy c-means algorithms. Int J Adv Comput Sci Appl 4(4):35–39
Zhang D, Yao L, Chen K, Wang S, Chang X, Liu Y (2019) Making sense of spatio-temporal preserving representations for EEG-based human intention recognition. IEEE Trans Cybern 50(7):3033–3044
Chen K, Yao L, Zhang D, Wang X, Chang X, Nie F (2019) A semisupervised recurrent convolutional attention model for human activity recognition. IEEE Trans Neural Netw Learn Syst 31(5):1747–1756
López-Fandiño J, Garea AS, Heras DB, Argüello F (2018) Stacked autoencoders for multiclass change detection in hyperspectral images. In: Proceedings of the 2018 IEEE international geoscience and remote sensing symposium (IGARSS), pp 1906–1909
Samadi F, Akbarizadeh G, Kaabi H (2019) Change detection in SAR images using deep belief network: a new training approach based on morphological images. IET Image Proc 13(12):2255–2264
Peng D, Zhang Y, Guan H (2019) End-to-end change detection for high resolution satellite images using improved UNet++. Remote Sens 11(11):1382
Luo M, Chang X, Nie L, Yang Y, Hauptmann AG, Zheng Q (2017) An adaptive semisupervised feature analysis for video semantic recognition. IEEE Trans Cybern 48(2):648–660
Mou L, Zhu XX (2018) A recurrent convolutional neural network for land cover change detection in multispectral images. In: IGARSS 2018—2018 IEEE international geoscience and remote sensing symposium, 2018, pp 4363–4366. https://doi.org/10.1109/IGARSS.2018.8517375
Zheng Z, Ma A, Zhang L, Zhong Y (2021) Change is everywhere: single-temporal supervised object change detection in remote sensing imagery. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 15193–15202
Xu G, Li H, Zang Y, Xie L, Bai C (2020) Change detection based on IR-MAD model for GF-5 remote sensing imagery. In: IOP conference series: materials science and engineering, vol 768, no 7. IOP Publishing, p 072073
Healey SP, Cohen WB, Yang Z, Kenneth Brewer C, Brooks EB, Gorelick N, Hernandez AJ, Huang C, Joseph Hughes M, Kennedy RE, Loveland TR, Moisen GG, Schroeder TA, Stehman SV, Vogelmann JE, Woodcock CE, Yang L, Zhu Z (2018) Mapping forest change using stacked generalization: an ensemble approach. Remote Sens Environ 204:717–728
Jiang W, He G, Long T, Ni Y, Liu H, Peng Y, Lv K, Wang G (2018) Multilayer perceptron neural network for surface water extraction in Landsat 8 OLI satellite images. Remote Sens 10(5):755
Sharma C, Amandeep B, Sobti R, Lohani TK, Shabaz M (2021) A secured frame selection based video watermarking technique to address quality loss of data: Combining graph based transform, singular valued decomposition, and hyperchaotic encryption. Secur Commun Netw 2021:5536170
Jarrahi MA, Samet H, Ghanbari T (2020) Novel change detection and fault classification scheme for AC microgrids. IEEE Syst J 14(3):3987–3998
Im J, Jensen JR (2005) A change detection model based on neighborhood correlation image analysis and decision tree classification. Remote Sens Environ 99(3):326–340
Shao P, Shi W, He P, Hao M, Zhang X (2016) Novel approach to unsupervised change detection based on a robust semi-supervised FCM clustering algorithm. Remote Sens 8(3):264
Zhang H, Wang Q, Shi W, Hao M (2017) A novel adaptive fuzzy local information C-means clustering algorithm for remotely sensed imagery classification. IEEE Trans Geosci Remote Sens 55(9):5057–5068
Shao R, Du C, Chen H, Li J (2021) SUNet: Change detection for heterogeneous remote sensing images from satellite and UAV using a dual-channel fully convolution network. Remote Sens 13(18):3750
Hou B, Liu Q, Wang H, Wang Y (2019) From W-Net to CDGAN: bitemporal change detection via deep learning techniques. IEEE Trans Geosci Remote Sens 58(3):1790–1802
Kou R, Fang B, Chen G, Wang L (2020) Progressive domain adaptation for change detection using season-varying remote sensing images. Remote Sens 12(22):3815
Ramot D, Milo R, Friedman M, Kandel A (2002) Complex fuzzy sets. IEEE Trans Fuzzy Syst 10(2):171–186
Ramot D, Friedman M, Langholz G, Kandel A (2003) Complex fuzzy logic. IEEE Trans Fuzzy Syst 11(4):450–461
Selvachandran G, Quek SG, Lan LTH, Son LH, Giang NL, Ding W, Abdel-Basset M, De Albuquerque VHC (2021) A new design of Mamdani complex fuzzy inference system for multi-attribute decision making problems. IEEE Trans Fuzzy Syst 29(4):716–730
Bezdek JC, Ehrlich R, Full W (1984) FCM: the fuzzy c-means clustering algorithm. Comput Geosci 10(2):191–203
Kingma DP, Ba J (2015) Adam: a method for stochastic optimization. In: Proceedings of the 3rd international conference on learning representations (ICLR)
Hoerl AE, Kennard RW (2000) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 42(1):80–86
Son LH, Thong PH (2017) Some novel hybrid forecast methods based on picture fuzzy clustering for weather nowcasting from satellite image sequences. Appl Intell 46(1):1–15
National Oceanic and Atmospheric Administration (2015) MTSAT west color infrared loop. Retrieved from https://www.star.nesdis.noaa.gov/GOES/index.php
Ji M, Liu L, Du R, Buchroithner MF (2019) A comparative study of texture and convolutional neural network features for detecting collapsed buildings after earthquakes using pre- and post-event satellite imagery. Remote Sens 11(10):1202
Acknowledgements
This research has been funded by the Research Project: VAST01.07/22-23, Vietnam Academy of Science and Technology.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest. All authors have checked and agreed to the submission.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: Details of loss function
Appendix B: Numerical example
Step 1: Input data preprocessing.
Step 1.1. Convert the satellite image from a color image to a grayscale image.
Let us say we have 10 RGB images, each of size 9 × 9. (Because the same formula applies to all ten images, we show the example with only two images to avoid repeating the information.)
To perform the conversion to grayscale, we will follow the formula:
where Y is the corresponding gray pixel value and R, G, and B are, respectively, the pixel values of the red, green, and blue channels of the color image.
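A minimal sketch of this conversion is shown below; since the exact formula is not reproduced above, the widely used ITU-R BT.601 luminance weights are assumed here for illustration only.

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale.

    Assumption: ITU-R BT.601 luminance weights; the paper's exact
    conversion formula is not reproduced in this sketch.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Example: one 9 x 9 RGB image, as in the numerical example.
rgb = np.random.randint(0, 256, size=(9, 9, 3)).astype(float)
gray = rgb_to_gray(rgb)
print(gray.shape)  # (9, 9)
```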
Then, suppose our grayscale data is obtained as follows:
X1:
X2:
Step 1.2: Reduce image size by representative pixels.
From the input image, we group the pixels into blocks. (In this case, we choose c = 3, i.e., we group 3 × 3 sub-images, corresponding to the colored regions presented above.)
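A minimal sketch of this block reduction is shown below; it assumes the representative value \({\text{Im}}^{tb}\) in formula (3) is the per-block mean (which is consistent with the 3 × 3 tables that follow), and it does not reproduce the \(\kappa\) weight matrix.

```python
import numpy as np

def block_means(gray: np.ndarray, c: int = 3) -> np.ndarray:
    """Reduce a gray image to representative pixels by averaging c x c blocks.

    Assumes the image size is divisible by c (9 x 9 with c = 3 here) and that
    Im^tb is the per-block mean; the kappa weights of formula (3) are not
    reproduced in this sketch.
    """
    h, w = gray.shape
    blocks = gray.reshape(h // c, c, w // c, c)
    return blocks.mean(axis=(1, 3))

gray = np.random.randint(0, 256, size=(9, 9)).astype(float)
im_tb = block_means(gray)   # 3 x 3 matrix of representative pixels
print(np.round(im_tb, 2))
```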
From the above data, according to formula number (3), we can determine \({\text{Im}}^{tb}\) and \(\kappa\) as follows:
\({\text{Im}}_{{^{1} }}^{tb}\):
155.33 | 112.44 | 129.33 |
79.56 | 85 | 129.44 |
134.56 | 141.56 | 122.33 |
\({\text{Im}}_{{^{2} }}^{tb}\):
135.78 | 144.67 | 157.78 |
107.67 | 111.22 | 109.33 |
118.11 | 101.22 | 132 |
\(\kappa_{1}\):
0.035 | 0.011 | 0.3216 | 0.024 | 0.004 | 0.035 | 0.046 | 0.055 | 0.087 |
0.0159 | 0.0088 | 0.0424 | 0.904 | 0.004 | 0.006 | 0.054 | 0.025 | 0.367 |
0.124 | 0.4038 | 0.0375 | 0.007 | 0.005 | 0.014 | 0.17 | 0.026 | 0.17 |
0.3873 | 0.0812 | 0.0374 | 0.043 | 0.041 | 0.037 | 0.249 | 0.056 | 0.128 |
0.0602 | 0.1608 | 0.0896 | 0.154 | 0.038 | 0.051 | 0.035 | 0.387 | 0.03 |
0.077 | 0.0428 | 0.0637 | 0.292 | 0.292 | 0.051 | 0.03 | 0.033 | 0.051 |
0.0357 | 0.0598 | 0.1848 | 0.054 | 0.115 | 0.028 | 0.121 | 0.06 | 0.098 |
0.1191 | 0.0472 | 0.3772 | 0.05 | 0.044 | 0.052 | 0.117 | 0.256 | 0.059 |
0.0706 | 0.0409 | 0.0647 | 0.463 | 0.032 | 0.162 | 0.099 | 0.125 | 0.066 |
\(\kappa_{2}\):
0.246 | 0.027 | 0.0131 | 0.13 | 0.14 | 0.017 | 0.079 | 0.211 | 0.172 |
0.0583 | 0.032 | 0.4004 | 0.153 | 0.039 | 0.161 | 0.002 | 0.134 | 0.247 |
0.1556 | 0.0077 | 0.0598 | 0.164 | 0.066 | 0.13 | 0.054 | 0.052 | 0.05 |
0.0403 | 0.0424 | 0.1182 | 0.126 | 0.074 | 0.019 | 0.039 | 0.018 | 0.209 |
0.2261 | 0.1031 | 0.0465 | 0.039 | 0.179 | 0.279 | 0.011 | 0.171 | 0.045 |
0.1708 | 0.1079 | 0.1448 | 0.115 | 0.122 | 0.047 | 0.065 | 0.102 | 0.339 |
0.0495 | 0.1018 | 0.0249 | 0.087 | 0.032 | 0.126 | 0.018 | 0.136 | 0.039 |
0.0519 | 0.2524 | 0.4177 | 0.283 | 0.007 | 0.18 | 0.164 | 0.149 | 0.13 |
0.0434 | 0.028 | 0.0304 | 0.031 | 0.123 | 0.133 | 0.136 | 0.042 | 0.185 |
From here, we can determine \(\overline{{\text{Im}}}\) according to formula number (2).
Similarly, we get the following input:
Step 1.3: Determine the difference matrix (imaginary part).
The imaginary part (the difference matrix) is determined by directly subtracting the corresponding regions of consecutive remote sensing images according to formula number (3):
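As a small illustration, the sketch below reuses the \({\text{Im}}^{tb}\) matrices from Step 1.2 and assumes the difference is taken as the later representative image minus the earlier one (this sign convention is an assumption, as formula (3) is not reproduced here).

```python
import numpy as np

# Representative-pixel matrices Im^tb of the two example images (Step 1.2).
im1_tb = np.array([[155.33, 112.44, 129.33],
                   [ 79.56,  85.00, 129.44],
                   [134.56, 141.56, 122.33]])
im2_tb = np.array([[135.78, 144.67, 157.78],
                   [107.67, 111.22, 109.33],
                   [118.11, 101.22, 132.00]])

# Imaginary part: element-wise difference between corresponding regions.
hod = im2_tb - im1_tb
print(np.round(hod, 2))
```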
Step 1.4: Data sampling.
With N = 10 input images (Images 1 to 10 as above) and a chosen sample size \(\kappa = 4\), applying formula number (4) gives \(M = \frac{10 - 4}{{4\left( {1 - 0.5} \right)}} + 1 = 4\). The four samples are listed below (a code sketch of this sampling scheme follows the list).
Sample 1:
Training: From Image 1 to 3; Validation: Image 3; Testing: Image 4.
Sample 2:
Training: From Image 3 to 5; Validation: Image 5; Testing: Image 6.
Sample 3:
Training: From Image 5 to 7; Validation: Image 7; Testing: Image 8.
Sample 4:
Training: From Image 7 to 9; Validation: Image 9; Testing: Image 10.
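The sketch below reproduces this sampling scheme; it assumes each window of \(\kappa\) consecutive images is split into training (the first \(\kappa - 1\) images), validation (the last training image), and testing (the last image), which matches Samples 1–4 above.

```python
def make_samples(n_images: int = 10, kappa: int = 4, overlap: float = 0.5):
    """Sliding-window sampling following formula (4) of the numerical example.

    The train/validation/test split inside each window is an assumption that
    reproduces Samples 1-4 listed above.
    """
    stride = int(kappa * (1 - overlap))           # = 2
    n_samples = (n_images - kappa) // stride + 1  # = 4
    samples = []
    for m in range(n_samples):
        start = 1 + m * stride                    # 1-based image index
        window = list(range(start, start + kappa))
        samples.append({"train": window[:-1],     # e.g. [1, 2, 3]
                        "val": window[-2],        # e.g. 3
                        "test": window[-1]})      # e.g. 4
    return samples

for i, s in enumerate(make_samples(), 1):
    print(f"Sample {i}:", s)
# Sample 1: train [1, 2, 3], val 3, test 4 ... Sample 4: train [7, 8, 9], val 9, test 10
```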
Step 2: Using fuzzy C-means to cluster the input data with both the real and the imaginary parts of each sample.
With sample 1:
Using the FCM clustering algorithm, we cluster \(\overline{{\text{Im}}}^{(t)}\) and HOD simultaneously and obtain the corresponding pairs of values.

Input:

- Number of clusters: 2
- Fuzzifier m = 2
- EPS (threshold of difference between two consecutive iterations) = 0.001
- Maximum number of iterations t: 3

Output:

- Membership matrix: U
- Center vector: V
Step 2.1: Transform the \(X^{(t)}\) values and HOD into the range [0, 1].
Step 2.2: Initialize the cluster-center vectors with random values satisfying:

- \(V_{j}\) is the center vector of cluster j
- \(V_{j1} \in \left( {\min X_{i} ,\ldots,\max X_{i} } \right);\quad V_{j2} \in \left( {\min {\text{HOD}}_{i} ,\ldots,\max {\text{HOD}}_{i} } \right)\)

$$ V_{1}^{(0)} = \left[ {\begin{array}{*{20}r} \hfill {0.1416} \\ \hfill {0.0024} \\ \end{array} } \right];\quad V_{2}^{(0)} = \left[ {\begin{array}{*{20}r} \hfill {0.1744} \\ \hfill {0.0113} \\ \end{array} } \right] $$
Step 2.3: Calculate U from the center vector V.
Step 2.4: Update the center vector V.
Calculate the difference between \(V^{(1)}\) and \(V^{(0)}\) using the Euclidean distance.
Step 2.5: Repeat steps 2.3 and 2.4 until one of the following two stopping conditions is met:

- Condition 1: The number of iterations reaches the maximum number of iterations (3).
- Condition 2: \(\left\| {V^{(t)} \, - \,V^{(t - 1)} \,} \right\|\,\, \le \,\,EPS\left( {0.001} \right)\)
Current iteration count = 1 and \(\left\| {V^{(1)} \, - \,V^{(0)} \,} \right\|\, = 0.5665 > EPS\).
Continue to the second iteration:
Calculate the difference between \(V^{(2)}\) and \(V^{(1)}\) using the Euclidean distance.
Current iterations count = 2 and \(\left\| {V^{(2)} \, - \,V^{(1)} \,} \right\|\, = 0.5097\, > \,EPS\).
Continue to the third iteration:
Calculate the difference between \(V^{(3)}\) and \(V^{(2)}\) using the Euclidean distance.
Current iteration count = 3 and \(\left\| {V^{(3)} - V^{(2)} } \right\| = 0.0472 > EPS\).
\(\to\) Stop: the maximum number of iterations has been reached.
The result:
The same procedure is applied to the remaining samples (Samples 2, 3, and 4).
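The sketch below summarizes Steps 2.1–2.5 using the standard FCM updates of [56]; min-max normalization of the inputs (Step 2.1) is assumed to have been applied beforehand, and the data rows are the nine (normalized representative value, HOD) pairs.

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, eps=1e-3, max_iter=3, seed=0):
    """Minimal fuzzy C-means sketch for Steps 2.1-2.5 with the standard
    FCM updates [56]; rows of `data` are (normalized value, HOD) pairs."""
    rng = np.random.default_rng(seed)
    # Step 2.2: random cluster centers inside the data range.
    v = rng.uniform(data.min(axis=0), data.max(axis=0),
                    size=(n_clusters, data.shape[1]))
    for _ in range(max_iter):
        # Step 2.3: membership matrix U computed from the centers V.
        d = np.linalg.norm(data[:, None, :] - v[None, :, :], axis=2) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # Step 2.4: update the centers V from the memberships.
        v_new = (u.T ** m) @ data / np.sum(u.T ** m, axis=1, keepdims=True)
        # Step 2.5: stop if the centers barely move; otherwise keep iterating.
        converged = np.linalg.norm(v_new - v) <= eps
        v = v_new
        if converged:
            break
    return u, v

data = np.random.rand(9, 2)   # nine representative pixels, two features each
u, v = fcm(data)
print(np.round(v, 4))         # cluster centers, cf. V^(3) in the example
```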
Step 3: Generate and aggregate Spatial CFIS+ rules from the clustering results.
(a) Rule generation with sample 1 and Image \(X_{1}\):
Initialize all values \(\alpha_{j} = 1;\;j \in 1,2,\ldots, R;\;\alpha_{j} \in \left( {\alpha_{j}^{a} ,\alpha_{j}^{b} ,\alpha_{j}^{c} ,\alpha_{j}^{{a^{\prime}}} ,\alpha_{j}^{{b^{\prime}}} ,\alpha_{j}^{{c^{\prime}}} } \right)\).
Applying formulas (5–10) we get:
Proceeding similarly for the imaginary part of the remaining data, we obtain the following rules for the first input data.
Rule 1
It includes six parameters a, b, c and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\), where a, b, c are the coordinates of the first triangle of the real part and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\) are the coordinates of the first triangle of the imaginary part (see Fig. 12).
The value area lies in the bottom surface \(({\mathbf{AA}}^{\prime } {\mathbf{C}}^{\prime } {\mathbf{BC}})\), where:
Rule 2
It includes six parameters a, b, c and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\), where a, b, c are the coordinates of the second triangle of the real part and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\) are the coordinates of the second triangle of the imaginary part (see Fig. 13).
The value area lies in the bottom surface \(({\mathbf{AA}}^{\prime } {\mathbf{C}}^{\prime } {\mathbf{BC}})\), where:
The same applies to the rest of the images:
- Sample 1: Images X1, X2, X3
- Sample 2: Images X3, X4, X5
- Sample 3: Images X5, X6, X7
- Sample 4: Images X7, X8, X9
(b) Rule aggregation
For each image of a sample set, rules are generated and added to that sample's rule set. Each sample has its own set of rules, but the parameters are inherited from the previous sample.
Example: For sample 1, values are initialized for all parameters; sample 2 does not re-initialize them but instead uses the parameter values obtained from sample 1.
Step 4: Calculate the interpolated value and synthesize the next predicted image.
Step 4.1. Membership function inference.
Based on the complex fuzzy rule system in triangular space (Spatial CFIS), determine the value of the membership function of Image \(X_{1}^{\prime }\):
- With the first pixel (0.607, 0.0111) and rule 1:
  + Call the first pixel in the rule space D, with value D (0.607, 0.0111). (Point D lies on the bottom surface AA′C′BC.)
  + Since point D is outside the bounds, we move point D into the 1st rule space with the coefficient \(\mu = \,1.7\).
  + Draw line BD intersecting line AA′ at point E.
  + Let F be the point in the plane AA′B′ such that DF is perpendicular to the base surface.

Then, the height DF is the value of the degree of belonging U of the first pixel D (0.607, 0.0111) to the first rule space (see Fig. 14).
- With the first image point (0.6181, 0.0141) and the second rule:
  + Call the image point in the rule space D, with value D (0.607, 0.0111). (Point D lies on the bottom surface AA′C′BC.)
  + Since point D is outside the bounds, we move point D into the 2nd rule space with \(\mu = \,1.7\).
  + Draw line BD intersecting line AA′ at point E.
  + Let F be the point in the plane AA′B′ such that DF is perpendicular to the base surface.

Then, the height DF is the value of the degree of belonging U of the first pixel D (0.607, 0.0111) to the second rule space (see Fig. 15).
The same applies to the rest of the points.
- With the second input value pair (0.443, 0.1246):
  + The degree of belonging of the second pixel to the first rule: DF = 0.3693.
  + The degree of belonging of the second pixel to the second rule: DF = 0.5375.
- With the third input value pair (0.5162, 0.0579):
  + The degree of belonging of the third pixel to the first rule: DF = 0.179.
  + The degree of belonging of the third pixel to the second rule: DF = 0.2116.
- With the fourth input value pair (0.2946, 0.2018):
  + The degree of belonging of the fourth pixel to the first rule: DF = 0.2457.
  + The degree of belonging of the fourth pixel to the second rule: DF = 0.3952.
- With the fifth input value pair (0.322, 0.1682):
  + The degree of belonging of the fifth pixel to the first rule: DF = 0.2684.
  + The degree of belonging of the fifth pixel to the second rule: DF = 0.4319.
- With the sixth input value pair (0.4962, 0.0136):
  + The degree of belonging of the sixth pixel to the first rule: DF = 0.1831.
  + The degree of belonging of the sixth pixel to the second rule: DF = 0.127.
- With the seventh input value pair (0.5121, 0.0765):
  + The degree of belonging of the seventh pixel to the first rule: DF = 0.2758.
  + The degree of belonging of the seventh pixel to the second rule: DF = 0.3064.
- With the eighth input value pair (0.5414, 0.0526):
  + The degree of belonging of the eighth pixel to the first rule: DF = 0.1564.
  + The degree of belonging of the eighth pixel to the second rule: DF = 0.1951.
- With the ninth input value pair (0.3883, 0.1166):
  + The degree of belonging of the ninth pixel to the first rule: DF = 0.3238.
  + The degree of belonging of the ninth pixel to the second rule: DF = 0.4673.
Step 4.2. Determine rule coefficient \(\beta_{i}\).
Initializing \(\beta_{ij} = 1,\,\forall i \in 1,2,\ldots, L;\;j \in 1,2,\ldots, R + 1\) and applying formula (12), where the attribute values \(W_{ij}\) correspond to the interpolation values of each pixel for that rule, we have:
The first pixel:
The second pixel:
The third pixel:
The fourth pixel:
The fifth pixel:
The sixth pixel:
The seventh pixel:
The eighth pixel:
The ninth pixel:
Step 4.3. Determine defuzzification coefficient.
Initializing \(\left( {h_{1j} , \, h_{2j} , \, h_{3j} , \, h^{\prime}_{1j} , \, h^{\prime}_{2j} , \, h^{\prime}_{3j} } \right),\,\forall j \in 1,2,\ldots ,\,R\), to \(\left( {1,2,1,1,2,1} \right)\) and applying formulas (13) and (14), we have:
DEF of rule 1:
DEF of rule 2:
Step 4.4. Determine the dependency coefficient \(\gamma\).
(*) The next-image prediction result of the real part is determined by formula (16), as follows:
Similarly, we have:
The inferred value of the predicted image \(O_{{i.{\text{Img}}}}^{*}\) is based on the difference value calculated by formula (18), as follows:
Similarly, we have:
(**) The phase-part prediction result \(O_{{i.{\text{Img}}}}^{{*^{\prime}}}\) is calculated based on formula (17), as follows:
Similarly, we have:
Initialize \(\gamma = 0.5\). The final forecast of the representative pixel is the next-image prediction \(O_{i}^{*}\), calculated by combining the real-part and phase-part predicted pixels according to formula (15), as follows:
Similarly, we have:
Return to normal space
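A rough sketch of this combination and the return to normal space follows; it assumes formula (15) acts as a \(\gamma\)-weighted convex combination of the real-part and phase-part predictions and that the normalization of Step 2.1 is undone by an inverse min-max mapping. Both are assumptions made for illustration only; the exact formulas are given in the paper.

```python
import numpy as np

def combine_and_denormalize(o_real, o_phase, gamma=0.5, g_min=0.0, g_max=255.0):
    """Rough sketch of the final combination and the return to normal space.

    Assumptions: formula (15) is treated as a gamma-weighted convex
    combination, and normalization is undone with the original gray range
    [g_min, g_max]; neither is taken from the paper's exact formulas.
    """
    o_star = gamma * np.asarray(o_real) + (1.0 - gamma) * np.asarray(o_phase)
    return g_min + o_star * (g_max - g_min)   # back to the original gray scale

o_real = np.array([0.61, 0.44, 0.52])    # hypothetical normalized predictions
o_phase = np.array([0.58, 0.47, 0.50])
print(np.round(combine_and_denormalize(o_real, o_phase), 2))
```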
Step 4.5. Final prediction result.
After obtaining the final prediction result for the representative pixel \(O_{i}^{*}\), we calculate the neighborhood points of each central representative pixel according to formula (19) and get:
With the representative pixel \(O_{1}^{*}\):
From \(O_{1}^{*}\), restore the nine neighboring image points as follows:
Proceeding likewise with the remaining representative pixels \(O_{2}^{*} ,O_{3}^{*} ,O_{4}^{*} ,O_{5}^{*} ,O_{6}^{*} ,O_{7}^{*} ,O_{8}^{*} \,{\text{and}}\,O_{9}^{*}\), we obtain the next predicted image as follows:
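As a simplified stand-in for this restoration step, the sketch below replicates each representative pixel over its 3 × 3 block; the paper's formula (19) instead recovers each neighbor using the \(\kappa\) coefficients stored in Step 1.2, which this sketch does not reproduce.

```python
import numpy as np

def restore_neighbors(o_star: np.ndarray, c: int = 3) -> np.ndarray:
    """Restore a full-size image from the 3 x 3 representative pixels.

    Simplified stand-in for formula (19): each representative pixel is
    replicated over its c x c block (the kappa-based reweighting of the
    paper is not reproduced here).
    """
    return np.kron(o_star, np.ones((c, c)))

o_star = np.random.rand(3, 3)       # predicted representative pixels
full = restore_neighbors(o_star)    # 9 x 9 predicted image
print(full.shape)
```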
Step 5: Simultaneous training of the parameters in the model (Co-learning)
After the training process, we have a suitable set of parameters X for the next iteration.
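For orientation, the sketch below shows one plain Adam update [57] applied to the shared parameter vector (boundary points, rule, defuzzification, and dependency coefficients); the paper's FWADAM+ additionally combines Adam with a Frank-Wolfe-style step [18], which is not reproduced here.

```python
import numpy as np

def adam_step(x, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update [57] for the shared parameter vector x.

    Plain Adam only; the Frank-Wolfe component of FWADAM+ [18] is omitted.
    """
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2            # second-moment estimate
    m_hat = m / (1 - b1 ** t)                    # bias correction
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)  # parameter update
    return x, m, v

# Toy usage: minimize ||x - target||^2 over a few iterations.
x = np.zeros(4); m = np.zeros(4); v = np.zeros(4)
target = np.array([1.0, 2.0, 1.0, 0.5])
for t in range(1, 201):
    grad = 2 * (x - target)
    x, m, v = adam_step(x, grad, m, v, t, lr=0.05)
print(np.round(x, 3))   # converges toward the target vector
```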
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Giang, L.T., Son, L.H., Giang, N.L. et al. A new co-learning method in spatial complex fuzzy inference systems for change detection from satellite images. Neural Comput & Applic 35, 4519–4548 (2023). https://doi.org/10.1007/s00521-022-07928-5