Search Results (50)

Search Parameters:
Keywords = Coherent Pixel Technique

12 pages, 20046 KiB  
Communication
Time-Series Change Detection Using KOMPSAT-5 Data with Statistical Homogeneous Pixel Selection Algorithm
by Mirza Muhammad Waqar, Heein Yang, Rahmi Sukmawati, Sung-Ho Chae and Kwan-Young Oh
Sensors 2025, 25(2), 583; https://doi.org/10.3390/s25020583 - 20 Jan 2025
Viewed by 392
Abstract
For change detection in synthetic aperture radar (SAR) imagery, amplitude change detection (ACD) and coherent change detection (CCD) are widely employed. However, time-series SAR data often contain noise and variability introduced by system and environmental factors, requiring mitigation. Additionally, the stability of SAR signals is preserved when calibration accounts for temporal and environmental variations. Although ACD and CCD techniques can detect changes, spatial variability outside the primary target area introduces complexity into the analysis. This study presents a robust change detection methodology designed to identify urban changes using KOMPSAT-5 time-series data. A comprehensive preprocessing framework—including coregistration, radiometric terrain correction, normalization, and speckle filtering—was implemented to ensure data consistency and accuracy. Statistical homogeneous pixels (SHPs) were extracted to identify stable targets, and coherence-based analysis was employed to quantify temporal decorrelation and detect changes. Adaptive thresholding and morphological operations refined the detected changes, while small-segment removal mitigated noise effects. Experimental results demonstrated high reliability, with an overall accuracy of 92%, validated using confusion matrix analysis. The methodology effectively identified urban changes, highlighting the potential of KOMPSAT-5 data for post-disaster monitoring and urban change detection. Future improvements are suggested, focusing on the stability of InSAR orbits to further enhance detection precision. The findings underscore the potential for broader applications of the developed SAR time-series change detection technology, promoting increased utilization of KOMPSAT SAR data for both domestic and international research and monitoring initiatives. Full article
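To make the coherence-based refinement steps described above concrete, the following minimal Python sketch (our illustration, not the authors' code; the Otsu thresholding rule, minimum segment size, and synthetic data are assumptions) flags changed pixels from a temporal coherence map and then cleans the result with morphological opening and small-segment removal.

```python
# Illustrative sketch: detect change from a temporal coherence map by adaptive
# thresholding, then refine with morphology and small-segment removal.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, remove_small_objects

def detect_changes(coherence, min_segment_px=25):
    """coherence: 2-D array in [0, 1]; low coherence suggests change."""
    decorrelation = 1.0 - coherence                     # temporal decorrelation
    thr = threshold_otsu(decorrelation)                 # adaptive threshold
    mask = decorrelation > thr                          # candidate change pixels
    mask = binary_opening(mask)                         # morphological refinement
    mask = remove_small_objects(mask, min_segment_px)   # suppress noise segments
    return mask

# Toy example: a synthetic coherence map with one low-coherence (changed) patch.
rng = np.random.default_rng(0)
coh = 0.8 + 0.05 * rng.standard_normal((200, 200))
coh[80:120, 90:140] = 0.2                               # simulated changed area
print(detect_changes(coh).sum(), "pixels flagged as changed")
```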
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1. Location of study site along with KOMPSAT-5 time-series image footprints.
Figure 2. Dataset for change detection analysis: (a) optical footprint of the study site (source: Google Earth; image acquisition date: 4 October 2024), (b) KOMPSAT-5 SAR imagery of the study site, and (c) generated ground truth for accuracy assessment.
Figure 3. Preprocessing of KOMPSAT-5 time-series images: (a) KOMPSAT-5 time-series stack, (b) KOMPSAT-5 radiometric terrain-corrected time-series stack, (c) KOMPSAT-5 radiometric terrain-corrected normalized time-series stack.
Figure 4. Detailed methodological framework adopted for change detection using KOMPSAT-5 images.
Figure 5. Experimental results to obtain appropriate statistical homogeneous pixels (SHPs).
Figure 6. Statistical homogeneous pixels (SHPs) selection: (a) KOMPSAT-5 image, (b) resultant SHPs over urban segments. The San Francisco port area, highlighted within the red box, was selected for time-series change detection using the proposed technique.
Figure 7. Change detection results using KOMPSAT-5 time-series images: (a) pre-image, (b) post-image, (c) de-correlation between pre- and post-image, (d) adaptive thresholding results, (e) detected changed area, (f) ground truth data.
13 pages, 2474 KiB  
Article
Exploiting Temporal Features in Calculating Automated Morphological Properties of Spiky Nanoparticles Using Deep Learning
by Muhammad Aasim Rafique
Sensors 2024, 24(20), 6541; https://doi.org/10.3390/s24206541 - 10 Oct 2024
Viewed by 669
Abstract
Object segmentation in images is typically spatial and focuses on the spatial coherence of pixels. Nanoparticles in electron microscopy images are also segmented frame by frame, with subsequent morphological analysis. However, morphological analysis is inherently sequential, and a temporal regularity is evident in the process. In this study, we extend the spatially focused morphological analysis by incorporating a fusion of hard and soft inductive bias from sequential machine learning techniques to account for temporal relationships. Previously, spiky Au nanoparticles (Au-SNPs) in electron microscopy images were analyzed, and their morphological properties were automatically generated using an hourglass convolutional neural network architecture. In this study, recurrent layers are integrated to capture the natural, sequential growth of the particles. The network is trained with a spike-focused loss function. Continuous segmentation of the images explores the regressive relationships among natural growth features, generating morphological statistics of the nanoparticles. This study comprehensively evaluates the proposed approach by comparing the results of segmentation and morphological properties analysis, demonstrating its superiority over earlier methods. Full article
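The idea of adding recurrent layers to a spatially focused segmentation network can be sketched as below. This is a generic stand-in, not the paper's exact hourglass architecture or spike-focused loss: the layer sizes, the ConvLSTM choice, and the Keras framework are our assumptions.

```python
# Minimal sketch: a per-frame convolutional encoder whose features pass through
# a ConvLSTM2D layer so the segmentation mask can exploit temporal context.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_temporal_segmenter(frames=8, size=64):
    inp = layers.Input(shape=(frames, size, size, 1))        # image sequence
    x = layers.TimeDistributed(
        layers.Conv2D(16, 3, padding="same", activation="relu"))(inp)
    x = layers.ConvLSTM2D(16, 3, padding="same",
                          return_sequences=True)(x)           # temporal context
    out = layers.TimeDistributed(
        layers.Conv2D(1, 1, activation="sigmoid"))(x)         # per-frame mask
    return models.Model(inp, out)

model = build_temporal_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```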
(This article belongs to the Special Issue Nanotechnology Applications in Sensors Development)
Show Figures

Figure 1. (a,b) depict the hard and soft inductive bias: the CNN in (a) illustrates spatial coherence, and the LSTM node in (b) illustrates temporal cohesion. (a) Hard inductive bias of a CNN. (b) Temporal cohesion with an LSTM node.
Figure 2. The proposed deep neural network architecture for segmentation. ⊕ denotes the concatenation of features among various layers; the arrows indicate the direction of the features.
Figure 3. Sample images of Au-SNP growth monitored in TEM. The top row shows frames no. 1123, 1156, and 1180 (left to right), and the bottom row depicts the ground truth.
Figure 4. (a,b) depict the identified shape and the area of an Au-SNP during its growth. (a) Shape. (b) Number of spikes.
Figure 5. (a) The masked particle after segmenting the image; (b) the spike count graph. (a) Masked particle. (b) Spike count.
Figure 6. Qualitative segmentation results from various techniques. Row (a) is the original image of the particle, row (b) shows segmentation results of Mask R-CNN, row (c) shows segmentation results of the conventional techniques used in [9], row (d) shows the results of [18], and row (e) shows results generated by our proposed technique.
21 pages, 3577 KiB  
Article
Exploring Distributed Scatterers Interferometric Synthetic Aperture Radar Attributes for Synthetic Aperture Radar Image Classification
by Mingxuan Wei, Yuzhou Liu, Chuanhua Zhu and Chisheng Wang
Remote Sens. 2024, 16(15), 2802; https://doi.org/10.3390/rs16152802 - 31 Jul 2024
Viewed by 806
Abstract
Land cover classification of Synthetic Aperture Radar (SAR) imagery is a significant research direction in SAR image interpretation. However, due to the unique imaging methodology of SAR, interpreting SAR images presents numerous challenges, and land cover classification using SAR imagery often lacks innovative features. Distributed scatterers interferometric synthetic aperture radar (DS-InSAR), a common technique for deformation extraction, generates several intermediate parameters during its processing, which have a close relationship with land features. Therefore, this paper utilizes the coherence matrix, the number of statistically homogeneous pixels (SHPs), and ensemble coherence, which are involved in DS-InSAR as classification features, combined with the backscatter intensity of multi-temporal SAR imagery, to explore the impact of these features on the discernibility of land objects in SAR images. The results indicate that the adopted features improve the accuracy of land cover classification. SHPs and ensemble coherence demonstrate significant importance in distinguishing land features, proving that these proposed features can serve as new attributes for land cover classification in SAR imagery. Full article
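As a rough illustration of how DS-InSAR attributes could be stacked with multi-temporal backscatter for supervised classification, the sketch below uses synthetic per-pixel features; the random-forest classifier and all array dimensions are our assumptions rather than the paper's exact setup.

```python
# Sketch of the feature-stacking idea: per-pixel multi-temporal backscatter
# (Mli), SHP count (Bro), and ensemble coherence (Pcoh) are concatenated into
# one feature vector and fed to a supervised classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels, n_dates = 5000, 20
mli = rng.gamma(2.0, 1.0, (n_pixels, n_dates))    # time-series backscatter
bro = rng.integers(1, 100, (n_pixels, 1))         # number of SHPs per pixel
pcoh = rng.uniform(0, 1, (n_pixels, 1))           # ensemble coherence
labels = rng.integers(0, 5, n_pixels)             # five land-cover classes (toy)

X = np.hstack([mli, bro, pcoh])                   # Mli + Bro + Pcoh combination
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
print("number of feature importances:", clf.feature_importances_.size)
```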
(This article belongs to the Section Remote Sensing for Geospatial Science)
Show Figures

Figure 1. Research area and schematic diagram of classified land features. (a) Optical image; the red frame indicates the research area. (b) Averaged SAR intensity image. (c–g) Optical images of the classified land features within the research area.
Figure 2. The PA and UA for different land feature categories under the five feature combinations.
Figure 3. The confusion matrices for the five feature combinations. (a) Time-series backscatter intensity feature combination (Mli). (b) Backscatter and coherence matrix combination (Mli + CohM). (c) Backscatter and statistically homogeneous pixel number combination (Mli + Bro). (d) Backscatter and ensemble coherence combination (Mli + Pcoh). (e) Combination of all features: backscatter, coherence matrix, statistically homogeneous pixel number, and ensemble coherence (Mli + CohM + Bro + Pcoh).
Figure 4. The Pearson correlation matrix for the selected features. Mli represents the time-series backscatter, Bro represents the number of statistically homogeneous pixels, Pcoh represents the ensemble coherence, and Coh represents the PCA-processed coherence.
Figure 5. The mapping results of the five feature combinations. (a) Time-series backscatter intensity feature mapping (Mli). (b) Backscatter and coherence matrix combination mapping (Mli + CohM). (c) Backscatter and statistically homogeneous pixel number combination mapping (Mli + Bro). (d) Backscatter and ensemble coherence combination mapping (Mli + Pcoh). (e) Mapping with all features combined: backscatter, coherence matrix, statistically homogeneous pixel number, and ensemble coherence (Mli + CohM + Bro + Pcoh).
Figure 6. The feature importance of the five combinations. (a) Time-series backscatter intensity feature importance (Mli); red represents the top three most important intensity features. (b) Backscatter and coherence matrix combination feature importance (Mli + CohM); green represents the highest-scoring coherence features. (c) Backscatter and statistically homogeneous pixel number combination feature importance (Mli + Bro); brown represents the scores for the number of statistically homogeneous pixels. (d) Backscatter and ensemble coherence combination feature importance (Mli + Pcoh); purple represents the scores for ensemble coherence. (e) Feature importance for the combination of all features: backscatter, coherence matrix, statistically homogeneous pixel number, and ensemble coherence (Mli + CohM + Bro + Pcoh).
Figure 7. The time-series scattering intensity of the five types of land features.
Figure 8. PA and UA of the five kinds of ground objects under homogeneous-pixel windows of different sizes.
14 pages, 11819 KiB  
Article
Error Correction of the RapidEye Sub-Pixel Correlation: A Case Study of the 2019 Ridgecrest Earthquake Sequence
by Wulinhong Luo, Qi An, Guangcai Feng, Zhiqiang Xiong, Lijia He, Yilin Wang, Hongbo Jiang, Xiuhua Wang, Ning Li and Wenxin Wang
Sensors 2024, 24(14), 4726; https://doi.org/10.3390/s24144726 - 21 Jul 2024
Viewed by 1139
Abstract
The optical image sub-pixel correlation (SPC) technique is an important method for monitoring large-scale surface deformation. RapidEye images, distinguished by their short revisit period and high spatial resolution, are crucial data sources for monitoring surface deformation. However, few studies have comprehensively analyzed the error sources and correction methods of the deformation field obtained from RapidEye images. We used RapidEye images without surface deformation to analyze potential errors in the offset fields. We found that the errors in RapidEye offset fields primarily consist of decorrelation noise, orbit error, and attitude jitter distortions. To mitigate decorrelation noise, the careful selection of offset pairs coupled with spatial filtering is essential. Orbit error can be effectively mitigated by the polynomial fitting method. To address attitude jitter distortions, we introduced a linear fitting approach that incorporated the coherence of attitude jitter. To demonstrate the performance of the proposed methods, we utilized RapidEye images to extract the coseismic displacement field of the 2019 Ridgecrest earthquake sequence. The two-dimensional (2D) offset field contained deformation signals extracted from two earthquakes, with a maximum offset of 2.8 m in the E-W direction and 2.4 m in the N-S direction. A comparison with GNSS observations indicates that, after error correction, the mean relative precision of the offset field improved by 92% in the E-W direction and by 89% in the N-S direction. This robust enhancement underscores the effectiveness of the proposed error correction methods for RapidEye data. This study sheds light on large-scale surface deformation monitoring using RapidEye images. Full article
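The orbit-error correction step can be illustrated with a small least-squares sketch: fit a low-order polynomial surface to the offset field and subtract it. This is our own minimal implementation under assumed parameters, not the authors' code.

```python
# Illustrative sketch: remove a low-order polynomial trend (treated here as
# orbit error) from an offset field by least-squares surface fitting.
import numpy as np

def remove_orbit_ramp(offset, degree=1):
    """Fit and subtract a 2-D polynomial surface from an offset field."""
    rows, cols = offset.shape
    y, x = np.mgrid[0:rows, 0:cols]
    terms = [x**i * y**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]            # 1, y, x for degree 1
    A = np.column_stack([t.ravel() for t in terms])
    coeffs, *_ = np.linalg.lstsq(A, offset.ravel(), rcond=None)
    ramp = (A @ coeffs).reshape(offset.shape)
    return offset - ramp, ramp

# Toy example: a synthetic tilted plane plus noise stands in for orbit error.
rng = np.random.default_rng(2)
y, x = np.mgrid[0:100, 0:100]
field = 0.01 * x - 0.02 * y + 0.05 * rng.standard_normal((100, 100))
corrected, ramp = remove_orbit_ramp(field)
print("residual std after correction:", corrected.std())
```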
(This article belongs to the Special Issue Feature Papers in Environmental Sensing and Smart Cities)
Show Figures

Figure 1. Study area and corresponding RapidEye images. (a) Study area and image coverage. The blue rectangles show the coverage of the RapidEye images. The stars are the epicenters of the 2019 Ridgecrest earthquake from the USGS. The magenta rectangles are Sentinel-1 coverage. The red lines indicate the active faults. (b) True-color RapidEye images over the study area.
Figure 2. Error components of RapidEye images in the E-W and N-S offset fields. R1, R2, and R3 represent satellite numbers. The error with a systematic trend in (a) is orbit error. The collective deviation depicted in (e) is attributed to inter-satellite bias. The outliers in (b,f) are decorrelation noise. The trend errors perpendicular to the flight direction in (c,g) are attitude jitter distortions. The errors enclosed within the red rectangle in (d) are terrain shadows. Panel (h) includes decorrelation noise and terrain shadow.
Figure 3. The characteristics of attitude jitter distortions of RapidEye. The blue discrete points are the data along the profile line, the black dashed line is the mean value of these data, and the red solid line is the fitted straight line considering the asymptotic tendency of the attitude jitter stripes. R2 and R3 denote the satellite numbers.
Figure 4. The correction of attitude jitter distortions of RapidEye. (a,e) The offset fields with attitude jitter distortions. (b,f) The offset fields after filtering. (c,g) The attitude jitter distortion models based on the filtered offset fields computed by Equations (2)–(4). (d,h) The offset fields after removing attitude jitter distortions.
Figure 5. Processing flow chart of RapidEye images.
Figure 6. E-W and N-S components of the offset field before and after error correction. (a,c) The offset fields before error correction. (b,d) The offset fields after error correction.
Figure 7. Comparison of the Mw 7.1 deformation values obtained by SPC and GNSS. (Red dots represent GNSS station locations, purple pentagrams represent the foreshock epicenter, the red star represents the main shock epicenter, black arrows represent the GNSS observations, and red arrows represent the observations calculated from RapidEye imagery.)
Figure 8. The 3D offset field of the 2019 Ridgecrest earthquake sequence.
17 pages, 5072 KiB  
Article
Image Feature Extraction Using Symbolic Data of Cumulative Distribution Functions
by Sri Winarni, Sapto Wahyu Indratno, Restu Arisanti and Resa Septiani Pontoh
Mathematics 2024, 12(13), 2089; https://doi.org/10.3390/math12132089 - 3 Jul 2024
Cited by 1 | Viewed by 1001
Abstract
Symbolic data analysis is an emerging field in statistics with great potential to become a standard inferential technique. This research introduces a new approach to image feature extraction using the empirical cumulative distribution function (ECDF) and distribution function of distribution values (DFDV) as symbolic data. The main objective is to reduce the dimension of huge pixel data by organizing them into more coherent pixel-intensity distributions. We propose a partitioning method with different breakpoints to capture pixel intensity variations effectively. This results in an ECDF representing the proportion of pixel intensities and a DFDV representing the probability distribution at specific points. The novelty of this approach lies in using ECDF and DFDV as symbolic features, thus summarizing the data and providing a more informative representation of the pixel value distribution, facilitating image classification analysis based on intensity distribution. The experimental results underscore the potential of this method in distinguishing image characteristics among existing image classes. Image features extracted using this approach promise image classification analysis with more informative image representations. In addition, theoretical insights into the properties of DFDV distribution functions are gained. Full article
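A minimal sketch of the ECDF feature idea is given below; the breakpoint values and the toy images are illustrative assumptions, not the paper's chosen partition points.

```python
# Sketch: summarize each image by its empirical CDF of pixel intensities,
# evaluated at a few intensity breakpoints, and use those values as features.
import numpy as np

def ecdf_features(image, breakpoints=(32, 64, 128, 192)):
    """Return F(x) = P(pixel <= x) evaluated at the given intensity breakpoints."""
    pixels = np.asarray(image).ravel()
    return np.array([(pixels <= b).mean() for b in breakpoints])

# Toy example: two synthetic 28x28 "images" with different intensity profiles.
rng = np.random.default_rng(3)
dark = rng.integers(0, 100, (28, 28))
bright = rng.integers(150, 256, (28, 28))
print("dark  :", ecdf_features(dark))
print("bright:", ecdf_features(bright))
```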
Show Figures

Figure 1. Illustration of vector formation vec(A) from the pixel value matrix A_{m×m} using row-by-row vectorization.
Figure 2. Illustration of ECDF formation for each image. The element S_j can be written as F_j(x), with j = 1, 2, …, N and x = 0, 1, 2, …, 255 the realized pixel values.
Figure 3. Illustration of the MNIST dataset for handwritten digit recognition.
Figure 4. Illustration of the center image cropping. The result is a 20 × 20 image with a more focused object.
Figure 5. ECDF results for image classes k = 0, 1, …, 9; each panel represents a different image class. The results are divided into two subsets, l = 1, 2, with distinguishing points T_1 = 0.03 (black vertical dotted line) and T_2 = 0.6 (red vertical dotted line).
Figure 6. Formation at differentiation points T_1 = 0.03 (black vertical dotted line) and T_2 = 0.6 (red vertical dotted line).
Figure 7. DFDV results on partitions (a) l = 1 and (b) l = 2, with distinguishing points T_1 = 0.03 and T_2 = 0.6.
Figure 8. KDE function curve results at partitions (a) l = 1 and (b) l = 2, with distinguishing points T_1 = 0.03 and T_2 = 0.6.
39 pages, 61918 KiB  
Article
Learning Ground Displacement Signals Directly from InSAR-Wrapped Interferograms
by Lama Moualla, Alessio Rucci, Giampiero Naletto and Nantheera Anantrasirichai
Sensors 2024, 24(8), 2637; https://doi.org/10.3390/s24082637 - 20 Apr 2024
Cited by 1 | Viewed by 1475
Abstract
Monitoring ground displacements identifies potential geohazard risks early before they cause critical damage. Interferometric synthetic aperture radar (InSAR) is one of the techniques that can monitor these displacements with sub-millimeter accuracy. However, using the InSAR technique is challenging due to the need for high expertise, large data volumes, and other complexities. Accordingly, the development of an automated system to indicate ground displacements directly from the wrapped interferograms and coherence maps could be highly advantageous. Here, we compare different machine learning algorithms to evaluate the feasibility of achieving this objective. The inputs for the implemented machine learning models were pixels selected from the filtered-wrapped interferograms of Sentinel-1, using a coherence threshold. The outputs were the same pixels labeled as fast positive, positive, fast negative, negative, and undefined movements. These labels were assigned based on the velocity values of the measurement points located within the pixels. We used the Parallel Small Baseline Subset service of the European Space Agency’s GeoHazards Exploitation Platform to create the necessary interferograms, coherence, and deformation velocity maps. Subsequently, we applied a high-pass filter to the wrapped interferograms to separate the displacement signal from the atmospheric errors. We successfully identified the patterns associated with slow and fast movements by discerning the unique distributions within the matrices representing each movement class. The experiments included three case studies (from Italy, Portugal, and the United States), noted for their high sensitivity to landslides. We found that the Cosine K-nearest neighbor model achieved the best test accuracy. It is important to note that the test sets were not merely hidden parts of the training set within the same region but also included adjacent areas. We further improved the performance with pseudo-labeling, an approach aimed at evaluating the generalizability and robustness of the trained model beyond its immediate training environment. The lowest test accuracy achieved by the implemented algorithm was 80.1%. Furthermore, we used ArcGIS Pro 3.3 to compare the ground truth with the predictions to visualize the results better. The comparison aimed to explore indications of displacements affecting the main roads in the studied area. Full article
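The final classification step can be sketched as follows with synthetic inputs. The coherence threshold of 0.7, the number of neighbours, and the feature layout are assumptions; the cosine-metric K-nearest-neighbour model mirrors the best-performing classifier reported in the abstract.

```python
# Sketch: select pixels by a coherence threshold, then label each pixel's
# feature vector (filtered wrapped-phase values across interferograms) with a
# movement class using a cosine-distance KNN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
n_pixels, n_interferograms = 2000, 30
phase_features = rng.uniform(-np.pi, np.pi, (n_pixels, n_interferograms))
coherence = rng.uniform(0, 1, n_pixels)
labels = rng.integers(0, 5, n_pixels)   # e.g. fast+, +, fast-, -, undefined

keep = coherence > 0.7                   # coherence-threshold pixel selection
X, y = phase_features[keep], labels[keep]

knn = KNeighborsClassifier(n_neighbors=10, metric="cosine").fit(X, y)
print("training accuracy on toy data:", knn.score(X, y))
```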
(This article belongs to the Special Issue Intelligent SAR Target Detection and Recognition)
Show Figures

Figure 1. Landslide distribution in Lombardy (Italy) according to the Geoportal of Lombardy.
Figure 2. European landslide susceptibility map (Italy, Lombardy) according to the European Soil Data Center. The colors from green to red represent the sensitivity degrees from low to high.
Figure 3. European landslide susceptibility map (Portugal, Lisbon) according to the European Soil Data Center. The colors from green to red represent the sensitivity degrees from low to high.
Figure 4. Washington landslide susceptibility map according to the United States Geological Survey.
Figure 5. Washington deep-seated landslides captured by ALOS-2 PALSAR-2 images between 2015 and 2019. The red polygons represent the landslide locations.
Figure 6. The chronological sorting of the temporal baselines of the wrapped interferograms (Lombardy dataset). The top figure shows the sequence of the temporal baselines of the interferograms before sorting; the bottom figure shows the sequence after sorting. Similar results were obtained for the other two datasets (Lisbon and Washington).
Figure 7. Example of a wrapped interferogram in the complex domain from the Lombardy dataset before using the high-pass filter (top) and the magnitude after using the high-pass filter (bottom).
Figure 8. The matrices represent slow and fast motions based on the used datasets. The black color in the matrices represents magnitude values greater than 0.9 radians, while the white color represents filtered phase values smaller than 0.9 radians.
Figure 9. The histograms representing positive and negative motions based on the used datasets.
Figure 10. Deformation velocity map of the Lombardy dataset using the P-SBAS service at the G-TEP.
Figure 11. Intersection between S. Puliero et al.'s landslide dataset and the Sentinel-1 deformation velocity map in Belluno [41]. The violet pins refer to the locations of the landslides.
Figure 12. Deformation velocity map of the Lisbon dataset using the P-SBAS service at the G-TEP.
Figure 13. Deformation velocity map of the Washington dataset using the P-SBAS service at the G-TEP.
Figure 14. Intersections between the landslide dataset [24] and the deformation velocity map in Washington, U.S. The violet polygons represent the landslide dataset.
Figure 15. Deformation velocity map of zone 98,944 according to the time-series analysis of the Washington dataset.
Figure 16. Sensitivity map to landslides in zone 98,944 according to the U.S. Landslide Inventory Web Application.
Figure 17. Confusion matrices for the trained models of the Lombardy dataset: positive/negative movement model, fast positive movement model, and fast negative movement model, respectively.
Figure 18. Confusion matrices for the trained models of the Lisbon dataset: positive/negative movement model, fast positive movement model, and fast negative movement model, respectively.
Figure 19. Confusion matrices for the trained models of the Washington dataset: positive/negative movement model, fast positive movement model, and fast negative movement model, respectively.
Figure 20. Comparison between the ground truth and the predictions of the test sets for different datasets. Top main figure: Lombardy dataset, showing fast positive and undefined movements. Bottom main figure: Lisbon dataset, displaying positive and negative movements. Each main figure consists of four subfigures. In the top main figure, subfigure (A) presents the ground truth of the Lombardy test set, with subfigure (a) providing a detailed close-up of (A); subfigure (B) shows the predictions for the Lombardy test set, with subfigure (b) offering a close-up of these predictions. Similarly, in the bottom main figure for the Lisbon dataset, subfigures (A) and (a) show the ground-truth test set and its close-up, respectively, while subfigures (B) and (b) depict the predictions and their close-up.
Figure 21. Comparison of the ground truth test set and its predictions for the Washington dataset. The top figure illustrates fast negative and undefined movements, while the bottom figure shows fast positive and undefined movements. In each figure, subfigure (A) depicts the ground truth of the test dataset, and subfigure (a) provides a detailed close-up of (A); subfigure (B) presents the predictions for the test dataset, with subfigure (b) offering a close-up view of (B).
Figure 22. Road network of the Lombardy test dataset. The top figure shows the masked roads of the ground truth test set; the bottom figure shows the masked roads of the predicted test sets. The value of −1 expresses fast positive movement, while the value of 1 expresses undefined movement.
Figure 23. Road network of the Lisbon test dataset. The masked roads of the ground truth test set are shown in the top figure, while the masked roads of the predicted test sets are shown in the bottom figure. The value of 1 expresses positive movement, while the value of −1 expresses negative movement.
Figure 24. Road network of the Washington test dataset. The top figure represents the masked roads of the ground truth test set; the bottom figure represents the masked roads of the predicted test sets. The value of −1 expresses fast negative movement, while the value of 1 expresses undefined movement.
19 pages, 12947 KiB  
Review
Computational Optical Scanning Holography
by Naru Yoneda, Jung-Ping Liu, Osamu Matoba, Yusuke Saita and Takanori Nomura
Photonics 2024, 11(4), 347; https://doi.org/10.3390/photonics11040347 - 10 Apr 2024
Cited by 1 | Viewed by 1913
Abstract
Holographic techniques are indispensable tools for modern optical engineering. Over the past two decades, research about incoherent digital holography has continued to attract attention. Optical scanning holography (OSH) can obtain incoherent holograms using single-pixel detection and structured illumination with Fresnel zone patterns (FZPs). Particularly by changing the size of a detector, OSH can also obtain holograms under coherently illuminated conditions. Since 1979, OSH has continuously evolved. With the evolution of semiconductor technology, spatial light modulators (SLMs) have become useful in various imaging fields. By using SLM techniques for OSH, the practicality of OSH is improved. These SLM-based OSH methods are termed computational OSH (COSH). In this review, the configurations, recording and reconstruction methods, and proposed applications of COSH are reviewed. Full article
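As background for the structured illumination used in OSH, the sketch below generates one common static form of a Fresnel zone pattern on a pixel grid; the wavelength, pixel pitch, and propagation distance are arbitrary illustrative values, and the static form shown is a simplification of the time-dependent patterns used in practice.

```python
# Illustrative sketch: a static Fresnel zone pattern,
# 0.5 * (1 + cos(pi * r^2 / (lambda * z))), sampled on a pixel grid.
import numpy as np

def fresnel_zone_pattern(n=512, pixel_pitch=10e-6, wavelength=633e-9, z=0.2):
    """Real-valued FZP for propagation distance z (metres)."""
    coords = (np.arange(n) - n / 2) * pixel_pitch
    x, y = np.meshgrid(coords, coords)
    r2 = x**2 + y**2
    return 0.5 * (1 + np.cos(np.pi * r2 / (wavelength * z)))

fzp = fresnel_zone_pattern()
print(fzp.shape, fzp.min(), fzp.max())
```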
(This article belongs to the Special Issue Holographic Information Processing)
Show Figures

Figure 1. The set-up of conventional OSH. EOM = electro-optic modulator; PBS = polarizing beamsplitter; HWP = half-wave plate; M = mirror; BS = beamsplitter; PDs = photodetectors.
Figure 2. The TDFZP at different depths (a) and different times (b).
Figure 3. Conceptual diagram of MOSH.
Figure 4. Optical set-up of MOSH for proof-of-principle experiments. HWP = half-wave plate; BS = beamsplitter; PD = photodiode.
Figure 5. Set-up of interferenceless optical scanning holography. M = mirror; TIRP = total internal reflection prism.
Figure 6. Real part of experimental PSFs at image distances of 180 mm (a), 190 mm (b), and 200 mm (c). The image plane of the projection lens is at 190 mm.
Figure 7. Experimental demonstration of IOSH. (a) The real part of the complex hologram acquired by IOSH. (b) The reconstructed image. (c) Photo of the object.
Figure 8. Comparison of the detection and reproduction processes of MOSH and SP-MOSH. Reprinted with permission from [54]. © Optica Publishing Group.
Figure 9. Intensity distributions reconstructed by (a) original MOSH and (b) SP-MOSH. Reprinted with permission from [54]. © Optica Publishing Group.
Figure 10. Schematic of the spatially divided two-step phase-shifting method. The black pixel value is interpolated from the pixel values above, below, left, and right of it. Reprinted with permission from [54]. © IOP Publishing.
Figure 11. Experimental results for (a) a spatially divided 4-step phase-shifted hologram, (b–e) interpolated phase-shifted holograms from (a,f) with a spatially divided 2-step phase-shifted hologram, (g,h) interpolated phase-shifted holograms from (f,i) with the reconstructed intensity distribution of S2P, and (j) the reconstructed intensity distribution of S4P and sectional profiles at the broken lines in (i,j). Reprinted with permission from [54]. © IOP Publishing.
Figure 12. Reconstructed intensity distributions under noisy conditions. T4P = temporal 4-step phase shifting; T2P = temporal 2-step phase shifting; S4P = spatially divided 4-step phase shifting; S2P = spatially divided 2-step phase shifting. Reprinted with permission from [54]. © IOP Publishing.
Figure 13. Flow diagram of one-dimensional spatial–temporal demodulation. (a) The raw hologram line L as a function of time t. (b) The Fourier spectrum S of (a). (c) The extracted spectrum. (d) The demodulated phase and amplitude of the hologram line. FT and IFT stand for the Fourier transform and inverse Fourier transform, respectively.
Figure 14. Experimental results of a microlens array: (a) phase distribution, (b) enlarged microlens, (c) theoretical microlens, (d) comparison of sectional profiles, and (e) reconstructed spot intensity distribution. Reprinted with permission from [76]. © Optica Publishing Group.
Figure 15. Reconstructed intensity distributions through static scattering media. The numbers at the bottom of the images represent the diffusion angles of the diffusers. Reprinted with permission from [90]. © AIP Publishing.
Figure 16. Reconstructed intensity distributions of a 3D fluorescent object. The left and right columns indicate the focal planes of elements 6 and 3, respectively. Reprinted with permission from [90]. © AIP Publishing.
Figure 17. The experimental results. The upper and lower regions show the results of the proposed method and the image sensor, respectively. Reprinted with permission from [95]. © The Optical Society of Japan.
Figure 18. The results of polarization imaging through scattering media. Reprinted with permission from [95]. © The Optical Society of Japan.
16 pages, 17045 KiB  
Article
Vector Angular Continuity in the Fusion of Coseismic Deformations at Multiple Optical Correlation Scales
by Rui Guo, Qiming Zeng and Shangzong Lu
Sensors 2023, 23(15), 6677; https://doi.org/10.3390/s23156677 - 26 Jul 2023
Viewed by 1114
Abstract
As one of the common techniques for measuring coseismic deformations, optical image correlation techniques are capable of overcoming the drawbacks of inadequate coherence and phase blurring which can occur in radar interferometry, as well as the problem of low spatial resolution in radar pixel offset tracking. However, the scale of the correlation window in optical image correlation techniques typically influences the results; the conventional SAR POT method faces a fundamental trade-off between matching accuracy and the preservation of detail when choosing the correlation window size. This study regards coseismic deformation as a two-dimensional vector and develops a new post-processing workflow, called VACI-OIC, to reduce the dependence of shift estimation on the size of the correlation window. This paper takes the coseismic deformations in both the east–west and north–south directions into account at the same time, treating them as vectors, while also considering the similarity of displacement between adjacent points on the surface. Herein, the angular continuity index of the coseismic deformation vector was proposed as a more reasonable constraint condition to fuse the deformation field results obtained by optical image correlation across different correlation windows. Taking the 2021 earthquake in Maduo, China, as the study area, the deformation with the highest spatial resolution in the violent surface rupture area was determined (which could not be provided by SAR data). Compared to the results of single-scale optical correlation, the presented results were more uniform (i.e., more consistent with published results). At the same time, the proposed index also detected the strip fracture zone of the earthquake with impressive clarity. Full article
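A simplified angular-continuity measure for a two-dimensional deformation field can be sketched as below; this is our own formulation for illustration and is not necessarily the paper's exact VACI definition.

```python
# Sketch: for each pixel, average the angular difference between its
# displacement vector (E-W, N-S) and those of its 4-connected neighbours;
# large values flag directional discontinuities such as a rupture zone.
import numpy as np

def angular_continuity(de, dn):
    ang = np.arctan2(dn, de)                        # vector direction per pixel
    diffs = []
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        d = np.abs(ang - np.roll(ang, shift, axis=axis))
        diffs.append(np.minimum(d, 2 * np.pi - d))  # wrap difference to [0, pi]
    return np.mean(diffs, axis=0)

# Toy field: uniform eastward motion with a sign flip across a "rupture".
de = np.ones((100, 100)); de[:, 50:] = -1.0
dn = np.zeros((100, 100))
vaci = angular_continuity(de, dn)
print("mean index away from the discontinuity:", vaci[:, :40].mean())
print("mean index at the discontinuity column:", vaci[:, 49:51].mean())
```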
(This article belongs to the Special Issue Remote Sensing and GIS for Natural Hazards Mapping)
Show Figures

Figure 1. (a) Colour shaded-relief map of the study area. Elevation information is based on the ASTER GDEM 30 m. Optical data are represented by a solid red line, while SAR data are represented by a dashed purple line. The yellow-brown line represents the Maduo earthquake fault trace [18]. The beachball focal mechanism and epicenter location are based on USGS data, with the location of Maduo County shown as a black dot. (b) The relative position of Maduo County in Qinghai Province, China. (c) The location of the study area on the Tibetan Plateau; the black box in the figure is (a).
Figure 2. Deformation obtained by the D-InSAR and POT methods using Sentinel-1 ascending and descending data, respectively, where the blue line marks the rupture zone. (a) The line-of-sight displacement obtained after POT processing of the ascending data; (b) the LOS displacement obtained after POT processing of the ascending data; (c) the result obtained after D-InSAR processing of the descending data; (d) the LOS displacement obtained after POT processing of the descending data.
Figure 3. East–west (E–W) deformation for various correlation scales (a–d), where the eastward direction is positive. The number represents the size of the matching window used, and the spatial resolution is 6 m in all images.
Figure 4. North–south (N–S) deformation for various correlation scales (a–d), where the northward direction is positive. The number represents the size of the matching window used, and the spatial resolution is 6 m in all images.
Figure 5. Vertical profile line analysis of the east–west deformation (around the 550th column of the abscissa is the location of the rupture zone); the number in the upper left corner is the size of the correlation window, e.g., 64 × 64 pixels, and the windows are shown in different colors.
Figure 6. The technical flow chart of this article.
Figure 7. VACI index distribution of the Sentinel-1 POT results, in which the black line segment with larger values is consistent with the rupture zone (except for lakes). The VACI values in the image are dimensionless.
Figure 8. (a) East–west and (b) north–south deformations.
Figure 9. (a) The modulus of the proposed method's results, whose positive and negative components are defined by the east–west component, and the east–west displacement component is regular and positive; (b,c) the LOS deformation of the D-InSAR method for Sentinel-1 data; (d,e) distance and LOS deformations of the Sentinel-1 descending-orbit POT method; (f) the two-dimensional deformation, where the eastward direction is positive, calculated by the strain model and variance component estimation (SMVCE) method. Reprinted/adapted with permission from Ref. [28]. 2018, Jihong Liu.
Figure 10. The mean absolute error (MAE) and standard deviation (STD), calculated using SAR POT as a reference. East–west and north–south are depicted in the left and right images, respectively. The plotted points denote the simple average of the different correlation scales from the ordinary OIC method, while the dotted line represents the proposed method.
17 pages, 5136 KiB  
Article
A Novel Hybrid Approach for a Content-Based Image Retrieval Using Feature Fusion
by Shahbaz Sikandar, Rabbia Mahum and AbdulMalik Alsalman
Appl. Sci. 2023, 13(7), 4581; https://doi.org/10.3390/app13074581 - 4 Apr 2023
Cited by 19 | Viewed by 5505
Abstract
The multimedia content generated by devices and image processing techniques requires high computation costs to retrieve images similar to the user’s query from the database. An annotation-based traditional system of image retrieval is not coherent because pixel-wise matching of images brings significant variations in terms of pattern, storage, and angle. The Content-Based Image Retrieval (CBIR) method is more commonly used in these cases. CBIR efficiently quantifies the likeness between the database images and the query image. CBIR collects images identical to the query image from a huge database and extracts more useful features from the image provided as a query image. Then, it relates and matches these features with the database images’ features and retrieves those with similar features. In this study, we introduce a novel hybrid deep learning and machine learning-based CBIR system that uses a transfer learning technique and is implemented using two pre-trained deep learning models, ResNet50 and VGG16, and one machine learning model, KNN. We use the transfer learning technique to obtain the features from the images by using these two deep learning (DL) models. The image similarity is calculated using the machine learning (ML) model KNN and Euclidean distance. We build a web interface to show the result of similar images, and Precision is used as the performance measure of the model, which achieved 100%. Our proposed system outperforms other CBIR systems and can be used in many applications that need CBIR, such as digital libraries, historical research, fingerprint identification, and crime prevention. Full article
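The retrieval pipeline can be sketched with a generic transfer-learning skeleton: a pre-trained backbone produces feature vectors and a Euclidean-distance KNN index returns the closest database images. The image size, the use of untrained weights in the toy run, and the omission of the VGG16 branch and web interface are our simplifications, not the authors' exact configuration.

```python
# Sketch: extract image features with a ResNet50 backbone, then retrieve the
# nearest database images with a Euclidean-distance KNN index.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.neighbors import NearestNeighbors

# weights=None keeps this toy example offline; use weights="imagenet" in practice.
backbone = ResNet50(weights=None, include_top=False, pooling="avg")

def extract_features(images):
    """images: float32 array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Toy database of random images (real use: load and resize the image corpus).
rng = np.random.default_rng(5)
database = rng.uniform(0, 255, (20, 224, 224, 3)).astype("float32")
db_features = extract_features(database)

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(db_features)
distances, indices = index.kneighbors(extract_features(database[:1]))
print("retrieved database indices:", indices[0])
```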
(This article belongs to the Special Issue Deep Learning for Image Recognition and Processing)
Show Figures

Figure 1. The proposed CBIR system.
Figure 2. The process of image retrieval.
Figure 3. The ReLU activation function process.
Figure 4. Dropout process [52].
Figure 5. Samples of images used in the dataset.
Figure 6. Sample results of CBIR.
Figure 7. Performance of CBIR on random images.
Figure 8. Comparison with existing techniques.
20 pages, 29641 KiB  
Article
Slow Deformation Time-Series Monitoring for Urban Areas Based on the AWHPSPO Algorithm and TELM: A Case Study of Changsha, China
by Xuemin Xing, Jihang Zhang, Jun Zhu, Rui Zhang and Bin Liu
Remote Sens. 2023, 15(6), 1492; https://doi.org/10.3390/rs15061492 - 8 Mar 2023
Cited by 2 | Viewed by 1805
Abstract
Health monitoring is important for densely distributed urban infrastructures, particularly in cities undergoing rapid economic progress. Permanent scatterer interferometry (PSI) is an advanced remote sensing observation technique that is commonly used in urban infrastructure monitoring. However, the rapid construction of infrastructures may easily cause a loss of coherence for radar interferometry, inducing a low density of effective permanent scatterer (PS) points, which is the main limitation of PSI. In order to address these problems, a novel time-series synthetic aperture radar interferometry (InSAR) process based on the adaptive window homogeneous pixel selection and phase optimization (AWHPSPO) algorithm and thermal expansion linear model (TELM) is proposed. Firstly, for homogeneous point selection, information on both the time-series intensity and deformation phases is considered, which can compensate for the defects of insufficient homogeneous samples and low phase quality in traditional distributed scatterer interferometric synthetic aperture radar (DS-InSAR) processing. Secondly, the physical, thermal expansion component, which reflects the material properties of the infrastructures, is introduced into the traditional linear model, which can more rationally reflect the temporal evolution of deformation variation, and the thermal expansion coefficients can be estimated simultaneously with the deformation parameters. In order to verify our proposed algorithm, the Orange Island area in Changsha City, China, was selected as the study area in this experiment. Three years of its historical time-series deformation fields and thermal expansion coefficients were regenerated. With the use of high-resolution TerraSAR-X radar satellite images, a maximum accumulated settlement of 12.3 mm and a minor uplift of 8.2 mm were detected. Cross-validation with small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) results using Sentinel 1A data proved the reliability of AWHPSPO. The proposed algorithm can provide a reference for the control of the health and safety of urban infrastructures. Full article
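The thermal expansion linear model idea can be illustrated with a small synthetic least-squares example: model each point's displacement as a linear rate term plus a temperature-dependent thermal expansion term and estimate both coefficients jointly. All numbers below are toy values, not results from the paper.

```python
# Sketch: d(t) = v * t + k * T(t) + c, where v is the linear deformation rate,
# T the acquisition temperature, and k the thermal expansion coefficient;
# v, k, and c are estimated jointly by least squares.
import numpy as np

rng = np.random.default_rng(6)
t_years = np.linspace(0, 3, 36)                         # 3 years, monthly scenes
temperature = 15 + 10 * np.sin(2 * np.pi * t_years)     # seasonal temperature (°C)

true_v, true_k = -4.0, 0.3                              # mm/yr and mm/°C (toy)
d = true_v * t_years + true_k * temperature + 0.5 * rng.standard_normal(36)

A = np.column_stack([t_years, temperature, np.ones_like(t_years)])
(v_hat, k_hat, c_hat), *_ = np.linalg.lstsq(A, d, rcond=None)
print(f"estimated velocity {v_hat:.2f} mm/yr, thermal coefficient {k_hat:.2f} mm/°C")
```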
(This article belongs to the Special Issue Applications of SAR Images for Urban Areas)
Show Figures

Graphical abstract
Figure 1. Flow chart of AWHPSPO.
Figure 2. Study area features at different scales: (a) the location of the study area, (b) an amplified image of the study area, and (c) the scale of the region in China.
Figure 3. Spatiotemporal baselines of the interferometric pairs (the center orange dot represents the index of the master image).
Figure 4. Results of SHP identification. (a) Optical image. (b) Results of the homogeneous pixel number with a [7, 7] window. (c) Results of AWHPSPO (A represents the Yuelu Mountain area; B represents the high-rise group area of Wanda Plaza; C represents the northern area of Orange Island).
Figure 5. Contrast of differential interferograms (11 June 2016–8 February 2016). (a) Original interferogram. (b) Goldstein phase optimization. (c) Gaussian-weighted phase optimization. (d) Adaptive window phase optimization.
Figure 6. PS and DS candidate distribution. (a) PS candidates. (b) DS candidates.
Figure 7. Comparison of the PS-DS and baseline network: (a,b) before the baseline quality evaluation; (c,d) after the baseline quality evaluation.
Figure 8. (a) Annual velocity map. (b) Thermal expansion coefficient map. (c,d) Enlarged maps of areas D and G in Figure 8b (with an optical image as the background; E represents the area near Wanda Plaza; F represents the area near the International Finance Square building; I represents the area near Poly International Plaza).
Figure 9. Time-series deformation of the study area (D represents the Orange Island Bridge; F represents the area near the International Finance Square building; H represents the area near Zhongshan Pavilion; I represents the area near Poly International Plaza).
Figure 10. Enlarged deformation maps of (a) area D, (b) area H, (c) area F, and (d) area I.
Figure 11. Optical image map of part of the construction area. (a,b) Enlarged image of area H. (c,d) Enlarged image of area I.
Figure 12. Locations of PS points and in situ pictures. (a) Locations of the four PS points, (b) location of PS3, (c) location of PS4, and (d) location of PS1 and PS2.
Figure 13. Three years of time-series deformations at the feature points.
Figure 14. Comparison of AWHPSPO and SBAS-InSAR results.
20 pages, 8464 KiB  
Article
A SqueeSAR Spatially Adaptive Filtering Algorithm Based on Hadoop Distributed Cluster Environment
by Yongning Li, Weiwei Song, Baoxuan Jin, Xiaoqing Zuo, Yongfa Li and Kai Chen
Appl. Sci. 2023, 13(3), 1869; https://doi.org/10.3390/app13031869 - 31 Jan 2023
Cited by 1 | Viewed by 2047
Abstract
Multi-temporal interferometric synthetic aperture radar (MT-InSAR) techniques analyze a study area using a set of SAR image data composed of time series, reaching millimeter surface subsidence accuracy. To effectively acquire the subsidence information in low-coherence areas without obvious features in non-urban areas, an [...] Read more.
Multi-temporal interferometric synthetic aperture radar (MT-InSAR) techniques analyze a study area using a time series of SAR images, achieving millimeter-level surface subsidence accuracy. To effectively acquire subsidence information in low-coherence, featureless areas outside urban centers, an MT-InSAR technique called SqueeSAR has been proposed to improve the density of subsidence points by incorporating distributed scatterers (DS). However, SqueeSAR filters the DS points individually during spatially adaptive filtering, which requires substantial computer memory, leads to low processing efficiency, and poses great challenges for large-area InSAR processing. We propose a parallelization strategy for spatially adaptive filtering based on the Spark distributed computing engine in a Hadoop cluster environment, which distributes the DS pixel data across different computing nodes for parallel processing and effectively improves the filtering algorithm's performance. To evaluate the effectiveness and accuracy of the proposed method, we conducted a performance evaluation and accuracy verification in and around the main urban area of Kunming using original Sentinel-1A SLC data provided by ESA. Parallel computation on a YARN cluster comprising three computing nodes improved the performance of the filtering algorithm by a factor of 2.15 without affecting the filtering accuracy. Full article
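To illustrate the parallelization strategy the abstract describes, the sketch below broadcasts a shared image stack to all executors and maps a per-pixel filtering function over DS pixels with PySpark. It is a minimal sketch under assumed data structures: the synthetic stack, the stand-in mean filter over each pixel's homogeneous neighbours, and all names are illustrative, not the paper's implementation.

```python
from pyspark.sql import SparkSession
import numpy as np

# Sketch: split DS pixels across Spark executors and filter each one independently.
spark = SparkSession.builder.appName("squeesar-filter-sketch").getOrCreate()
sc = spark.sparkContext

# Synthetic stack of SLC amplitudes (time, rows, cols), broadcast to all executors.
stack = np.random.rand(20, 100, 100).astype(np.float32)
stack_bc = sc.broadcast(stack)

# Each work item: (row, col, list of statistically homogeneous neighbour coordinates).
ds_pixels = [(r, c, [(r, c), (r, c + 1), (r + 1, c)])
             for r in range(0, 99, 5) for c in range(0, 99, 5)]

def filter_pixel(item):
    r, c, shp = item
    data = stack_bc.value
    # Stand-in for spatially adaptive filtering: average over the SHP family.
    filtered = np.mean([data[:, i, j] for i, j in shp], axis=0)
    return (r, c, filtered.tolist())

results = sc.parallelize(ds_pixels, numSlices=8).map(filter_pixel).collect()
spark.stop()
```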
(This article belongs to the Special Issue Big Data Management and Analysis with Distributed or Cloud Computing)
Show Figures

Figure 1
<p>Comparison before and after filtering: (<b>a</b>) before filtering and (<b>b</b>) after filtering (the figure shown is a cropped differential interferogram from the study area).</p>
Full article ">Figure 2
<p>Pixel identification information within the search window (12 × 13) obtained based on KS test.</p>
Full article ">Figure 3
<p>Hadoop Distributed File System (HDFS) framework and processing schematic.</p>
Full article ">Figure 4
<p>Principle of Spark on YARN operation.</p>
Full article ">Figure 5
<p>The principle of distributing data to executors via Broadcast.</p>
Full article ">Figure 6
<p>Spark-based parallelization algorithm for spatial adaptive filtering (red represents the different operators provided by spark).</p>
Full article ">Figure 7
<p>Study area (including the main urban area of Kunming and its surroundings).</p>
Full article ">Figure 8
<p>SqueeSAR data processing flow.</p>
Full article ">Figure 9
<p>Subsidence rate map of the study area obtained after time series analysis.</p>
Full article ">Figure 10
<p>A differential interferogram of the study area, generated using Doris, before filtering.</p>
Full article ">Figure 11
<p>Comparison of the two processing methods: (<b>a</b>) filtering with conventional data processing, (<b>b</b>) filtering with spark parallel algorithm.</p>
Full article ">Figure 12
<p>Number of executors versus speedup ratio for the case of three nodes: the <span class="html-italic">x</span>-axis represents the number of executors, where the first value indicates that one executor is started in the case of a single core, and the remaining values start 2 cores. The <span class="html-italic">y</span>-axis represents the speedup.</p>
Full article ">Figure 13
<p>Comparison of filtering processing time with increasing number of executors for the entire study area (all PATCHs): the <span class="html-italic">x</span>-axis represents the number of executors, where the first value indicates that one executor is started in the single-core case and the remaining values each start 2 cores. The <span class="html-italic">y</span>-axis represents the filtering processing time.</p>
Full article ">Figure 14
<p>The relationship between the number of executor-allocated single-core processing units and the speedup in the case of three nodes: the <span class="html-italic">x</span>-axis represents the number of executors. The <span class="html-italic">y</span>-axis represents the speedup as the number of executors increases.</p>
Full article ">Figure 15
<p>Comparison of filtering processing time with increasing number of executors (single cores) for the entire study area (all PATCHs): the <span class="html-italic">x</span>-axis represents the number of executors. The <span class="html-italic">y</span>-axis represents the filtering processing time as the number of executors increases.</p>
Full article ">
19 pages, 8641 KiB  
Article
A Modification to Phase Estimation for Distributed Scatterers in InSAR Data Stacks
by Changjun Zhao, Yunyun Dong, Wenhao Wu, Bangsen Tian, Jianmin Zhou, Ping Zhang, Shuo Gao, Yuechi Yu and Lei Huang
Remote Sens. 2023, 15(3), 613; https://doi.org/10.3390/rs15030613 - 20 Jan 2023
Cited by 4 | Viewed by 2128
Abstract
To improve the spatial density and quality of measurement points in multitemporal interferometric synthetic aperture radar, distributed scatterers (DSs) should be processed. An essential procedure in DS interferometry is phase estimation, which reconstructs a consistent phase series from all available interferograms. Influenced by [...] Read more.
To improve the spatial density and quality of measurement points in multitemporal interferometric synthetic aperture radar, distributed scatterers (DSs) should be processed. An essential procedure in DS interferometry is phase estimation, which reconstructs a consistent phase series from all available interferograms. Influenced by the well-known suboptimality of coherence estimation, the performance of the state-of-the-art phase estimation algorithms is severely degraded. Previous research has addressed this problem by introducing the coherence bias correction technique. However, the precision of phase estimation is still insufficient because of the limited correction capabilities. In this paper, a modified phase estimation approach is proposed. Particularly, by incorporating the information on both interferometric coherence and the number of looks, a significant bias correction to each element of the coherence magnitude matrix is achieved. The bias-corrected coherence matrix is combined with advanced statistically homogeneous pixel selection and time series phase optimization algorithms to obtain the optimal phase series. Both the simulated and Sentinel-1 real data sets are used to demonstrate the superiority of this proposed approach over the traditional phase estimation algorithms. Specifically, the coherence bias can be corrected with considerable accuracy by the proposed scheme. The mean bias of coherence magnitude is reduced by more than 29%, and the standard deviation is reduced by more than 18% over the existing bias correction method. The proposed approach achieves higher accuracy than the current methods over the reconstructed phase series, including smoother interferometric phases and fewer outliers. Full article
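The bias that motivates this correction can be illustrated with a short Monte Carlo experiment: the sample coherence magnitude computed from L looks systematically overestimates the true coherence, most strongly at low coherence and small numbers of looks. The sketch below is an illustrative simulation only, not the authors' estimator or their bias correction.

```python
import numpy as np

# Monte Carlo illustration of the upward bias of the L-look sample coherence magnitude.
rng = np.random.default_rng(0)

def sample_coherence(gamma, looks, trials=20000):
    # Draw L-look pairs of circular Gaussian signals with true coherence gamma.
    s1 = (rng.standard_normal((trials, looks)) + 1j * rng.standard_normal((trials, looks))) / np.sqrt(2)
    n  = (rng.standard_normal((trials, looks)) + 1j * rng.standard_normal((trials, looks))) / np.sqrt(2)
    s2 = gamma * s1 + np.sqrt(1.0 - gamma**2) * n
    num = np.abs(np.sum(s1 * np.conj(s2), axis=1))
    den = np.sqrt(np.sum(np.abs(s1)**2, axis=1) * np.sum(np.abs(s2)**2, axis=1))
    return num / den

for looks in (5, 10, 25):
    for gamma in (0.0, 0.3, 0.6):
        est = sample_coherence(gamma, looks)
        print(f"L={looks:3d}  true={gamma:.1f}  mean estimate={est.mean():.3f}  bias={est.mean() - gamma:+.3f}")
```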
Show Figures

Figure 1
<p>(<b>a</b>) The difference between the expectation of the sample coherence magnitude and the true coherence γ for different numbers of looks L. (<b>b</b>) The standard deviation of the sample coherence magnitude D(γ̂) for different numbers of looks L.</p>
Full article ">Figure 2
<p>The difference between the estimated coherence magnitude γ̃ and the true coherence magnitude γ for different numbers of looks L and parameters s, using numerical calculation.</p>
Full article ">Figure 3
<p>Flowchart of the proposed phase estimation algorithm.</p>
Full article ">Figure 4
<p>True data of (<b>a</b>) interferometric phase matrix and (<b>b</b>) coherence magnitude matrix.</p>
Full article ">Figure 5
<p>The value of s with a window of (<b>a</b>) 5 × 5 pixels and (<b>b</b>) 7 × 7 pixels.</p>
Full article ">Figure 6
<p>Coherence matrix estimation with a window of 5 × 5 pixels. (<b>a</b>) Sample coherence magnitude γ̂. (<b>b</b>) Corrected coherence magnitude γ̄. (<b>c</b>) Corrected coherence magnitude γ̃ with the proposed method.</p>
Full article ">Figure 7
<p>(<b>a</b>) Means and (<b>b</b>) standard deviations for different coherence magnitude estimators using different numbers of looks.</p>
Full article ">Figure 8
<p>Standard deviation of the residuals for reconstructed phase series using different coherence magnitude estimators. The colors indicate the number of looks.</p>
Full article ">Figure 9
<p>Reconstructed phase series of a single point using different coherence magnitude estimators. The colors indicate the number of looks.</p>
Full article ">Figure 10
<p>Study area in Volcán Alcedo. (<b>a</b>) Optical image from Google Earth. (<b>b</b>) Averaged intensity map from 40 SLCs of the Sentinel-1 image stacks.</p>
Full article ">Figure 11
<p>The original and reconstructed interferometric phases with the longest temporal baseline of 288 days. An area denoted by the red rectangle is enlarged. (<b>a</b>) Original single look interferogram. Reconstructed phases using: (<b>b</b>) the sample coherence magnitude γ̂, (<b>c</b>) the corrected coherence magnitude γ̄, (<b>d</b>) the corrected coherence magnitude γ̃ proposed in this paper.</p>
Full article ">Figure 12
<p>(<b>a</b>) Horizontal and (<b>b</b>) vertical profiles of original and reconstructed interferometric phases along solid black lines depicted in <a href="#remotesensing-15-00613-f011" class="html-fig">Figure 11</a>a.</p>
Full article ">Figure 13
<p>Histogram of temporal coherence for phase estimation using different coherence magnitude estimators.</p>
Full article ">Figure 14
<p>Standard deviation of the residuals for reconstructed phase series using different phase estimators. The colors indicate the coherence matrix estimator used.</p>
Full article ">Figure 15
<p>Processing time over: (<b>a</b>) the coherence matrix estimation; (<b>b</b>) the EMI phase optimization.</p>
Full article ">
15 pages, 979 KiB  
Review
OCT and OCT Angiography Update: Clinical Application to Age-Related Macular Degeneration, Central Serous Chorioretinopathy, Macular Telangiectasia, and Diabetic Retinopathy
by Lyvia Zhang, Elon H. C. Van Dijk, Enrico Borrelli, Serena Fragiotta and Mark P. Breazzano
Diagnostics 2023, 13(2), 232; https://doi.org/10.3390/diagnostics13020232 - 8 Jan 2023
Cited by 17 | Viewed by 3446
Abstract
Similar to ultrasound adapting soundwaves to depict the inner structures and tissues, optical coherence tomography (OCT) utilizes low coherence light waves to assess characteristics in the eye. Compared to the previous gold standard diagnostic imaging fluorescein angiography, OCT is a noninvasive imaging modality [...] Read more.
Just as ultrasound uses sound waves to depict inner structures and tissues, optical coherence tomography (OCT) uses low-coherence light waves to assess characteristics of the eye. Compared with fluorescein angiography, the previous gold-standard diagnostic imaging technique, OCT is a noninvasive modality that generates images of ocular tissues rapidly. Two commonly used iterations of OCT are spectral-domain (SD) and swept-source (SS), each with different wavelengths and tissue penetration capacities. OCT angiography (OCTA) is a functional extension of OCT that captures a large number of pixels to resolve the tissue and its underlying blood flow, allowing OCTA to assess ischemia and delineate the vasculature in a wide range of conditions. This review focuses on four commonly encountered retinal diseases: age-related macular degeneration (AMD), diabetic retinopathy (DR), central serous chorioretinopathy (CSC), and macular telangiectasia (MacTel). Modern imaging techniques, including SD-OCT, TD-OCT, SS-OCT, and OCTA, assist with understanding disease pathogenesis and the natural history of disease progression, in addition to routine diagnosis and management in the clinical setting. Finally, this review compares each imaging technique’s limitations and potential refinements. Full article
(This article belongs to the Special Issue Advances in Optical Coherence Tomography Angiography)
Show Figures

Figure 1
<p>Multimodal imaging of central serous chorioretinopathy with secondary choroidal neovascularization. Color fundus photography (<b>A</b>) shows a retinal pigment epithelial (RPE) detachment with surrounding subretinal fluid nasal to the fovea. Corresponding <span class="html-italic">en face</span> near-infrared reflectance (<b>B</b>) with B-scan (green line) by swept-source optical coherence tomography (OCT) demonstrates a thickened choroid with overlying pigment adjacent to the subretinal fluid (<b>C</b>). Late-phase fluorescein angiography (<b>D</b>) revealed corresponding hyperfluorescence without overt indication of choroidal neovascularization. <span class="html-italic">En face</span> OCT angiography (OCTA) with segmentation between the outer retina and RPE (<b>E</b>) revealed the focal area of the flow signal (circle), which on B-scan (green: choroidal flow, red: retinal flow) localizes (arrow) within the RPE detachment (<b>F</b>), consistent with secondary choroidal neovascularization. Projection artifacts were removed automatically for OCTA.</p>
Full article ">Figure 2
<p>Clinical outcome from case in <a href="#diagnostics-13-00232-f001" class="html-fig">Figure 1</a> following treatment for CSCR with secondary neovascularization. <span class="html-italic">En face</span> swept-source OCT “heat” thickness map (<b>A</b>) with corresponding line (teal) OCT B-scan indicates subretinal fluid with RPE detachment in nasal macula of left eye at presentation (<b>B</b>) which alone could be mistaken for CSCR without neovascularization. This eye was treated with two sequential, intravitreal anti-vascular endothelial growth factor (VEGF) injections (bevacizumab), one month apart. Three months after initial presentation, <span class="html-italic">en face</span> near-infrared reflectance (<b>C</b>) with corresponding line (teal) OCT B-scan demonstrated sustained resolution of the subretinal fluid and collapse of the RPE detachment (<b>D</b>), consistent with response of exudation and neovascularization to anti-VEGF treatment.</p>
Full article ">
11 pages, 1699 KiB  
Article
UPolySeg: A U-Net-Based Polyp Segmentation Network Using Colonoscopy Images
by Subhashree Mohapatra, Girish Kumar Pati, Manohar Mishra and Tripti Swarnkar
Gastroenterol. Insights 2022, 13(3), 264-274; https://doi.org/10.3390/gastroent13030027 - 10 Aug 2022
Cited by 8 | Viewed by 3067
Abstract
Colonoscopy is a gold standard procedure for tracking the lower gastrointestinal region. A colorectal polyp is one such condition that is detected through colonoscopy. Even though technical advancements have improved the early detection of colorectal polyps, there is still a high percentage of [...] Read more.
Colonoscopy is the gold standard procedure for examining the lower gastrointestinal tract, and colorectal polyps are one of the conditions it detects. Even though technical advancements have improved the early detection of colorectal polyps, the miss rate remains high owing to various factors. Polyp segmentation can play a significant role in early detection and can thus help reduce the severity of the disease. In this work, the authors applied several image pre-processing techniques, such as coherence transport and contrast limited adaptive histogram equalization (CLAHE), to handle different challenges in colonoscopy images. Each processed image was then segmented into polyp and normal pixels using a U-Net-based deep learning segmentation model named UPolySeg. The main framework of UPolySeg is an encoder–decoder network with same-level feature concatenation between the encoder and decoder, together with dilated convolutions. The model was experimentally verified on the publicly available Kvasir-SEG dataset, giving a global accuracy of 96.77%, a Dice coefficient of 96.86%, an IoU of 87.91%, a recall of 95.57%, and a precision of 92.29%. The UPolySeg framework improved polyp segmentation performance by 1.93% compared with prior work. Full article
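As a rough sketch of the pre-processing stage described above, the snippet below removes bright specular artifacts by inpainting over a highlight mask and then applies CLAHE to the luminance channel with OpenCV. The inpainting call is a stand-in for the paper's coherence transport step, and the threshold value, CLAHE parameters, and file paths are illustrative assumptions rather than the authors' settings.

```python
import cv2

# Sketch of colonoscopy image pre-processing: artifact inpainting + CLAHE.
img = cv2.imread("colonoscopy_frame.jpg")           # placeholder path
if img is None:
    raise FileNotFoundError("colonoscopy_frame.jpg not found")

# 1) Remove bright specular artifacts by inpainting over a highlight mask.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 235, 255, cv2.THRESH_BINARY)
inpainted = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# 2) Contrast limited adaptive histogram equalization on the luminance channel.
lab = cv2.cvtColor(inpainted, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))
result = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
cv2.imwrite("preprocessed_frame.png", result)
```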
(This article belongs to the Collection Advances in Gastrointestinal Cancer)
Show Figures

Figure 1
<p>Chart showing the incidences of colorectum cancer worldwide.</p>
Full article ">Figure 2
<p>Sample of polyp images along with their masks showing the difference in number, shape, and size of polyps. The first and third images show the original image, while the second and fourth images show the corresponding ground truths provided in the Kvasir-SEG [<a href="#B7-gastroent-13-00027" class="html-bibr">7</a>] dataset.</p>
Full article ">Figure 3
<p>A complete framework of the proposed model.</p>
Full article ">Figure 4
<p>The first image is the original image from the Kvasir-SEG dataset, the second image is the image after artifact removal, and the third is the image after contrast enhancement.</p>
Full article ">Figure 5
<p>UPolySeg architecture.</p>
Full article ">Figure 6
<p>The first image is the original image, the second image is the ground truth, and the third is an overlay of the ground truth and the segmented image for the best case.</p>
Full article ">Figure 7
<p>The first image is the original image, the second image is the ground truth, and the third is an overlay of the ground truth and the segmented image for the worst case.</p>
Full article ">
21 pages, 14969 KiB  
Article
Cloud Removal for Optical Remote Sensing Imagery Using Distortion Coding Network Combined with Compound Loss Functions
by Jianjun Zhou, Xiaobo Luo, Wentao Rong and Hao Xu
Remote Sens. 2022, 14(14), 3452; https://doi.org/10.3390/rs14143452 - 18 Jul 2022
Cited by 11 | Viewed by 3420
Abstract
Optical remote sensing (RS) satellites perform imaging in the visible and infrared electromagnetic spectrum to collect data and analyze information on the optical characteristics of the objects of interest. However, optical RS is sensitive to illumination and atmospheric conditions, especially clouds, and multiple [...] Read more.
Optical remote sensing (RS) satellites image in the visible and infrared parts of the electromagnetic spectrum to collect data on the optical characteristics of objects of interest. However, optical RS is sensitive to illumination and atmospheric conditions, especially clouds, and multiple acquisitions are typically required to obtain an image of sufficient quality. To accurately reproduce surface information contaminated by clouds, this work proposes a generative adversarial network (GAN)-based cloud removal framework using a distortion coding network combined with compound loss functions (DC-GAN-CL). A novel generator embedded with distortion coding and feature refinement mechanisms focuses on cloudy regions and enhances the transmission of optical information. In addition, to achieve feature and pixel consistency, the loss functions consider both coherent semantics and local adaptive reconstruction factors. Extensive numerical evaluations on the RICE1, RICE2, and Paris datasets validate the good performance of the proposed DC-GAN-CL in both peak signal-to-noise ratio (PSNR) and visual perception; the system restores images to a quality similar to that of cloud-free reference images, with PSNR exceeding 30 dB. The technique's restoration of semantic coherence in the images is competitive with other methods. Full article
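For reference, the PSNR figure quoted above can be computed as in the short sketch below; the arrays are synthetic stand-ins for the restored and cloud-free reference images, and the 255 peak value assumes 8-bit imagery.

```python
import numpy as np

# PSNR between a restored image and its cloud-free reference: 10 * log10(MAX^2 / MSE).
def psnr(restored, reference, max_value=255.0):
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

reference = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
restored = np.clip(reference + np.random.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(restored, reference):.2f} dB")   # > 30 dB indicates a close match to the reference
```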
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The proposed DC-GAN-CL framework.</p>
Full article ">Figure 2
<p>Architecture of the generator.</p>
Full article ">Figure 3
<p>Architecture of the distortion coding module.</p>
Full article ">Figure 4
<p>Examples of cloudy images (x_RGB), and the corresponding distortion coding maps (m_DC). (<b>a</b>) RICE1/x_RGB1; (<b>b</b>) RICE1/m_DC1; (<b>c</b>) RICE2/x_RGB2; (<b>d</b>) RICE2/m_DC2.</p>
Full article ">Figure 5
<p>Architecture of the feature refinement module.</p>
Full article ">Figure 6
<p>Results of each method for RICE1. (<b>a</b>) Cloudy images; (<b>b</b>) DCP; (<b>c</b>) cGAN; (<b>d</b>) SpAGAN; (<b>e</b>) McGAN; (<b>f</b>) AMGAN-CR; (<b>g</b>) DC-GAN-CL; (<b>h</b>) cloud-free images.</p>
Full article ">Figure 7
<p>Results of each method for RICE2. (<b>a</b>) Cloudy images; (<b>b</b>) Cloud-GAN; (<b>c</b>) cGAN; (<b>d</b>) SpAGAN; (<b>e</b>) McGAN; (<b>f</b>) AMGAN-CR; (<b>g</b>) DC-GAN-CL; (<b>h</b>) cloud-free images.</p>
Full article ">Figure 8
<p>Trends in the PSNR and SSIM values of McGAN, AMGAN-CR, and DC-GAN-CL on the RICE1 test set throughout training.</p>
Full article ">Figure 9
<p>Trends in the PSNR and SSIM values of McGAN, AMGAN-CR, and DC-GAN-CL on the RICE2 test set throughout training.</p>
Full article ">Figure 10
<p>Results of an ablation study of different components of the network architecture on the RICE2 test set. (<b>a</b>) Cloudy images; (<b>b</b>) baseline; (<b>c</b>) baseline + DCM; (<b>d</b>) baseline + FRM; (<b>e</b>) DC-GAN-CL; (<b>f</b>) cloud-free images.</p>
Full article ">Figure 11
<p>Results of an ablation study of different components of the network architecture on the RICE2 test set. (<b>a</b>) Cloudy images; (<b>b</b>) baseline; (<b>c</b>) baseline + L_cs; (<b>d</b>) baseline + L_cloudy + L_non-cloudy; (<b>e</b>) DC-GAN-CL; (<b>f</b>) cloud-free images.</p>
Full article ">Figure 12
<p>Analysis of the influence of different hyper parameter values in the proposed method on the performance.</p>
Full article ">Figure 13
<p>Results of each method in analysing RICE2/(Image1,Image2). (<b>a</b>) Cloudy images; (<b>b</b>) cloud masks; (<b>c</b>) Cloud-GAN; (<b>d</b>) cGAN; (<b>e</b>) SpAGAN; (<b>f</b>) McGAN; (<b>g</b>) AMGAN-CR; (<b>h</b>) DC-GAN-CL; (<b>i</b>) cloud-free images.</p>
Full article ">Figure 14
<p>Samples from the Paris dataset. (<b>a</b>) Cloud-free images; (<b>b</b>) real cloud images; (<b>c</b>) cloud masks; (<b>d</b>) simulated cloud images.</p>
Full article ">Figure 15
<p>Results of each method for Paris dataset. (<b>a</b>) Cloudy images; (<b>b</b>) Cloud-GAN; (<b>c</b>) cGAN; (<b>d</b>) SpAGAN; (<b>e</b>)McGAN; (<b>f</b>) AMGAN-CR; (<b>g</b>) DC-GAN-CL; (<b>h</b>) cloud-free images.</p>
Full article ">