Remote Sens., Volume 15, Issue 8 (April-2 2023) – 261 articles

Cover Story: This paper presents and discusses two new methods based on PRISMA hyperspectral imagery for extracting satellite-derived shorelines (SDS) along low-lying coasts. The first method analyses band-averaged spectral signatures along transverse beach transects. The second method, in contrast, uses all the spectral information in the image: it detects spectral signatures with k-means clustering and then applies the fully constrained least squares (FCLS) unmixing and spatial attraction model algorithms. The results are validated on three Mediterranean beaches in Italy and Greece, with a resulting error of the order of 6–7 m. The paper also analyses the ability of the methods to identify different shoreline proxies. The results demonstrate that hyperspectral imagery can accurately map shorelines, which represent essential information for coastal management.
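The clustering step mentioned in the cover story can be sketched in a few lines. The following is a minimal illustration only, not the authors' PRISMA pipeline: plain k-means on synthetic two-class pixel spectra (the reflectance values, cluster count, and deterministic initialization are assumptions for the toy example). The shoreline proxy would then be traced where the water/land label flips along a shore-normal transect.

```python
import numpy as np

def kmeans(X, init_centers, iters=100):
    """Plain k-means on pixel spectra. X: (n_pixels, n_bands)."""
    centers = np.asarray(init_centers, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared Euclidean distance of every pixel to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):   # converged: assignments stable
            break
        centers = new
    return labels, centers

# Toy spectra: "water" pixels dark, "sand" pixels bright (made-up reflectances).
rng = np.random.default_rng(1)
water = rng.normal([0.05, 0.04, 0.02], 0.01, size=(200, 3))
sand = rng.normal([0.30, 0.35, 0.40], 0.02, size=(200, 3))
X = np.vstack([water, sand])
# initialize one center on a water pixel, one on a sand pixel
labels, centers = kmeans(X, init_centers=[X[0], X[-1]])
```

In the paper's actual method the cluster signatures then serve as endmembers for FCLS unmixing; here the labels alone already separate the two toy classes.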
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
41 pages, 2368 KiB  
Article
Fast, Efficient, and Viable Compressed Sensing, Low-Rank, and Robust Principle Component Analysis Algorithms for Radar Signal Processing
by Reinhard Panhuber
Remote Sens. 2023, 15(8), 2216; https://doi.org/10.3390/rs15082216 - 21 Apr 2023
Cited by 5 | Viewed by 1871
Abstract
Modern radar signal processing techniques make strong use of compressed sensing, affine rank minimization, and robust principal component analysis. The corresponding reconstruction algorithms should fulfill the following desired properties: complex valued; viable in the sense of not requiring parameters that are unknown in practice; fast convergence; low computational complexity; and high reconstruction performance. Although a plethora of reconstruction algorithms are available in the literature, these generally do not meet all of the aforementioned desired properties together. In this paper, a set of algorithms fulfilling these conditions is presented. The desired requirements are met by a combination of turbo-message-passing algorithms and smoothed ℓ0-refinements. Their performance is evaluated by means of extensive numerical simulations and compared with popular conventional algorithms.
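As background for the sparse-recovery setting the abstract describes, here is a minimal sketch of normalized iterative hard thresholding (NIHT), one of the conventional baselines the paper compares against. It is not the paper's turbo-message-passing or smoothed-ℓ0 machinery, and the step-size acceptance check of the full NIHT algorithm is omitted for brevity; the dimensions and sensing matrix below are assumptions for a toy noiseless problem.

```python
import numpy as np

def niht(A, y, K, iters=300):
    """Normalized iterative hard thresholding (complex-valued sketch).

    Recovers a K-sparse x from y = A x; step-size acceptance check omitted."""
    n = A.shape[1]
    x = np.zeros(n, dtype=complex)
    supp = np.argsort(np.abs(A.conj().T @ y))[-K:]   # proxy for initial support
    for _ in range(iters):
        g = A.conj().T @ (y - A @ x)                 # gradient of 0.5*||y - A x||^2
        gs = np.zeros_like(x)
        gs[supp] = g[supp]                           # gradient restricted to support
        denom = np.linalg.norm(A @ gs) ** 2
        mu = np.linalg.norm(gs) ** 2 / denom if denom > 0 else 0.0
        x = x + mu * g
        keep = np.argsort(np.abs(x))[-K:]            # hard threshold: keep K largest
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
        supp = keep
    return x

rng = np.random.default_rng(0)
m, n, K = 80, 200, 5
# complex Gaussian sensing matrix with roughly unit-norm columns
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
x0 = np.zeros(n, dtype=complex)
supp0 = rng.choice(n, K, replace=False)
x0[supp0] = rng.normal(size=K) + 1j * rng.normal(size=K)
y = A @ x0
x_hat = niht(A, y, K)
```

Note that `K` must be supplied, which is exactly the kind of in-practice-unknown parameter ("viability") the paper's algorithms are designed to avoid.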
Graphical abstract
Figure 1: Phase transition of CS algorithms in SRE in dB.
Figure 2: Phase transition plot for NIHT in the case of the wrongly chosen parameter K = 2κ.
Figure 3: Comparison of convergence speed for κ/m = 0.25 and m/n. The solid line shows NIHT for K = κ and the dashed line K = 2κ. (a) Random sensing operators A ∼ CN(0, 1). (b) DFT sensing operators with random rows.
Figure 4: Comparison of computation time for m/n. The solid line shows NIHT for K = κ and the dashed line K = 2κ. (a) Random sensing operators A ∼ CN(0, 1). (b) DFT sensing operators with random rows.
Figure 5: Comparison of SRE vs. SNR for m/n. The solid line for NIHT shows K = κ and the dashed line K = 2κ. (a) Random sensing operators A ∼ CN(0, 1) for SNR = 0 dB. (b) DFT sensing operator for SNR = 0 dB. (c) Random sensing operators A ∼ CN(0, 1) for SNR = 10 dB. (d) DFT sensing operator for SNR = 10 dB. (e) Random sensing operators A ∼ CN(0, 1) for SNR = 20 dB. (f) DFT sensing operator for SNR = 20 dB. (g) Random sensing operators A ∼ CN(0, 1) for SNR = 30 dB. (h) DFT sensing operator for SNR = 30 dB.
Figure 6: Common approximations to the ℓ0-quasi-norm.
Figure 7: Phase transition of refined CS algorithms in SRE in dB.
Figure 8: Comparison of reconstruction performances for the CSCSA update. Reconstruction success is defined as SRE ≤ −SNR.
Figure 9: Phase transition of ARM algorithms in SRE in dB.
Figure 10: Phase transition plot for TARM in the case of the wrongly chosen parameter R = 2ρ.
Figure 11: Comparison of convergence speed for d_ρ/m = 0.25 and m/n = 0.5. The solid lines show SVP and TARM for R = ρ and the dashed lines for R = 2ρ. (a) Random sensing operators A ∼ CN(0, 1). (b) DFT sensing operators with random rows.
Figure 12: Comparison of computation time for m/n = 0.8 in an interval where all algorithms perform equally well. The solid lines show SVP and TARM for R = ρ and the dashed lines for R = 2ρ. (a) Random sensing operators A ∼ CN(0, 1). (b) DFT sensing operators with random rows.
Figure 13: Comparison of SRE vs. SNR for m/n = 0.8. The solid lines for SVP and TARM show R = ρ and the dashed lines R = 2ρ. (a) Random sensing operators A ∼ CN(0, 1) for SNR = 0 dB. (b) DFT sensing operator for SNR = 0 dB. (c) Random sensing operators A ∼ CN(0, 1) for SNR = 10 dB. (d) DFT sensing operator for SNR = 10 dB. (e) Random sensing operators A ∼ CN(0, 1) for SNR = 20 dB. (f) DFT sensing operator for SNR = 20 dB. (g) Random sensing operators A ∼ CN(0, 1) for SNR = 30 dB. (h) DFT sensing operator for SNR = 30 dB.
Figure 14: Phase transition of refined ARM algorithms in SRE in dB.
Figure 15: Comparison of reconstruction performances of TSVT + CSRA to TARM. The solid lines show the 50% success rate for a strict reconstruction success, defined as SRE ≤ −SNR, and the dashed lines the success rate for a less strict success definition, i.e., SRE ≤ −(SNR − 5 dB).
Figure 16: Comparison of reconstruction performance. Dashed lines illustrate results of Part 1 of the TCRPCA algorithm (Algorithm 5) and solid lines the refined results achieved by Part 2 (Algorithm 6). (a) TCRPCA for the noiselet operator. (b) TCRPCA for the DFT operator. (c) Turbo message passing for CRPCA (TMP-CRPCA) for the noiselet operator with K = κ and R = ρ. (d) TMP-CRPCA for the DFT operator with K = κ and R = ρ. (e) NFL for the noiselet operator with K = κ and R = ρ. (f) NFL for the DFT operator with K = κ and R = ρ. (g) SpaRCS for the noiselet operator with K = κ and R = ρ. (h) SpaRCS for the DFT operator with K = κ and R = ρ.
Figure 17: Detailed reconstruction performance comparison for a DFT sensing operator and ρ = 9. The red line indicates the success boundary, i.e., the phase transitions shown in Figure 16.
Figure 18: Comparison of convergence speed for ρ = 3, κ/m = 0.2, and m/n = 0.8. The solid lines show NFL, SpaRCS, and TMP-CRPCA for K = κ and R = ρ and the dashed lines for K = 2κ and R = 2ρ.
Figure 19: Comparison of computation time for ρ = 3 and m/n = 0.8, i.e., in an interval where all algorithms perform equally well. For NFL, SpaRCS, and TMP-CRPCA, the results are shown for K = κ and R = ρ only. For κ/m > 0.3, TMP-CRPCA did not converge and is thus not shown. (a) Noiselet sensing operators. (b) DFT sensing operators with random rows.
Figure 20: Comparison of the low-rank SRE vs. SNR for m/n = 0.8. The solid line for TCRPCA shows the performance without and the dashed line with the spikiness operator P_φ(L) defined in (64).
27 pages, 20567 KiB  
Article
Fast Factorized Backprojection Algorithm in Orthogonal Elliptical Coordinate System for Ocean Scenes Imaging Using Geosynchronous Spaceborne–Airborne VHF UWB Bistatic SAR
by Xiao Hu, Hongtu Xie, Lin Zhang, Jun Hu, Jinfeng He, Shiliang Yi, Hejun Jiang and Kai Xie
Remote Sens. 2023, 15(8), 2215; https://doi.org/10.3390/rs15082215 - 21 Apr 2023
Cited by 10 | Viewed by 1966
Abstract
Geosynchronous (GEO) spaceborne–airborne very high-frequency ultra-wideband bistatic synthetic aperture radar (VHF UWB BiSAR) can conduct high-resolution and wide-swath imaging of ocean scenes. However, GEO spaceborne–airborne VHF UWB BiSAR imaging faces challenges such as the geometric configuration, the huge amount of echo data, serious range–azimuth coupling, large spatial variance, and complex motion error, which increase the difficulty of high-efficiency and high-precision imaging. In this paper, we present an improved bistatic fast factorized backprojection (FFBP) algorithm for ocean scene imaging using the GEO satellite–unmanned aerial vehicle (GEO-UAV) VHF UWB BiSAR, which can solve the above issues with high efficiency and high precision. This method reconstructs the subimages in the orthogonal elliptical polar (OEP) coordinate system based on the GEO satellite and UAV trajectories as well as the location of the imaged scene, which further reduces the computational burden. First, the imaging geometry and signal model of the GEO-UAV VHF UWB BiSAR are established, and the construction of the OEP coordinate system and the subaperture imaging method are proposed. Moreover, the Nyquist sampling requirements for the subimages in the OEP coordinate system are derived from the range-error perspective, which offers a near-optimum tradeoff between precision and efficiency. In addition, the superiority of the OEP coordinate system is analyzed, demonstrating that the angular-dimension sampling rate of the subimages is significantly reduced. Finally, the implementation processes and computational burden of the proposed algorithm are provided, and the speed-up factor of the proposed FFBP algorithm over the BP algorithm is derived and discussed. Experimental results for ideal point targets and natural ocean scenes demonstrate the correctness and effectiveness of the proposed algorithm, which achieves near-optimal imaging performance with a low computational burden.
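For context, the proposed FFBP accelerates direct time-domain backprojection, which for a bistatic geometry coherently accumulates, at every image pixel, the range-compressed echo sample at the transmitter-to-pixel plus pixel-to-receiver delay. The following is a minimal sketch of that baseline only (simplified geometry, made-up system parameters, no motion error, and no OEP subimage grid), not the paper's factorized algorithm:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def bistatic_bp(data, t_fast, tx_pos, rx_pos, grid, fc):
    """Direct bistatic time-domain backprojection (the baseline FFBP accelerates).

    data:   (n_pulses, n_samples) range-compressed echoes
    t_fast: (n_samples,) fast-time axis, seconds
    tx_pos, rx_pos: (n_pulses, 3) transmitter/receiver positions per pulse
    grid:   (n_pix, 3) imaging grid points
    """
    img = np.zeros(len(grid), dtype=complex)
    for p in range(data.shape[0]):
        # bistatic range: transmitter-to-pixel plus pixel-to-receiver
        r = (np.linalg.norm(grid - tx_pos[p], axis=1)
             + np.linalg.norm(grid - rx_pos[p], axis=1))
        tau = r / C
        echo = (np.interp(tau, t_fast, data[p].real)
                + 1j * np.interp(tau, t_fast, data[p].imag))
        img += echo * np.exp(2j * np.pi * fc * tau)  # matched-phase accumulation
    return img

# Toy check: one point target at the origin, receiver moving along-track.
fc, B = 300e6, 30e6                                   # assumed VHF carrier and bandwidth
n_pulses = 32
tx = np.tile([0.0, -5000.0, 5000.0], (n_pulses, 1))   # quasi-stationary transmitter
rx = np.stack([np.zeros(n_pulses),
               np.linspace(-200.0, 200.0, n_pulses),
               np.full(n_pulses, 1000.0)], axis=1)    # moving receiver
target = np.array([0.0, 0.0, 0.0])
t_fast = np.linspace(2.5e-5, 3.0e-5, 2001)
tau_t = (np.linalg.norm(tx - target, axis=1) + np.linalg.norm(rx - target, axis=1)) / C
# range-compressed echo: sinc envelope at the bistatic delay, carrier phase attached
data = (np.sinc(B * (t_fast[None, :] - tau_t[:, None]))
        * np.exp(-2j * np.pi * fc * tau_t)[:, None])
grid = np.array([[0.0, -20.0, 0.0], [0.0, 0.0, 0.0], [0.0, 20.0, 0.0]])
img = bistatic_bp(data, t_fast, tx, rx, grid, fc)
```

The double loop over pulses and pixels is what makes direct BP expensive; the paper's contribution is to fuse subapertures recursively on coarse OEP subimage grids so that far fewer samples per subimage satisfy the Nyquist requirement.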
(This article belongs to the Special Issue Radar Signal Processing and Imaging for Ocean Remote Sensing)
Show Figures

Figure 1

Figure 1
<p>Imaging geometry of the GEO-UAV BiSAR system.</p>
Full article ">Figure 2
<p>Subaperture imaging geometry of the GEO-UAV BiSAR in the orthogonal elliptical coordinate system. (<b>a</b>) The <span class="html-italic">k</span>th subaperture and subimage grid; (<b>b</b>) The <span class="html-italic">k</span>th OEP coordinate system.</p>
Full article ">Figure 2 Cont.
<p>Subaperture imaging geometry of the GEO-UAV BiSAR in the orthogonal elliptical coordinate system. (<b>a</b>) The <span class="html-italic">k</span>th subaperture and subimage grid; (<b>b</b>) The <span class="html-italic">k</span>th OEP coordinate system.</p>
Figure 3
<p>Plane geometry of the ellipse <math display="inline"><semantics> <mrow> <mi>E</mi> </mrow> </semantics></math>.</p>
Figure 4
<p>Angular dimension sampling requirement of the subimages in the OEP system.</p>
Figure 5
<p>Analysis of the angular dimension sampling requirement of the subimages in both EP and OEP systems for the same imaging scene.</p>
Figure 6
<p>Two-dimensional (2D) spectrum of the imaging result of the single ideal point target with different subaperture lengths <span class="html-italic">n</span>. (<b>a</b>) <span class="html-italic">n</span> = 256 in the EP system; (<b>b</b>) <span class="html-italic">n</span> = 512 in the EP system; (<b>c</b>) <span class="html-italic">n</span> = 1024 in the EP system; (<b>d</b>) <span class="html-italic">n</span> = 256 in the OEP system; (<b>e</b>) <span class="html-italic">n</span> = 512 in the OEP system; (<b>f</b>) <span class="html-italic">n</span> = 1024 in the OEP system.</p>
Figure 7
<p>Procedures of the proposed bistatic FFBP algorithm. (<b>a</b>) Diagram; (<b>b</b>) Flow chart.</p>
Figure 8
<p>Variation of the speed-up factor. (<b>a</b>) With respect to the fusion time <span class="html-italic">M</span>; (<b>b</b>) With respect to the fusion aperture <span class="html-italic">n</span>.</p>
Figure 9
<p>The experimental scene for the GEO-UAV UWB BiSAR imaging. (<b>a</b>) The imaging geometry; (<b>b</b>) The distribution of the point targets.</p>
Figure 10
<p>Imaging results of the point targets. (<b>a</b>) Bistatic BP algorithm; (<b>b</b>) Bistatic EP FFBP algorithm; (<b>c</b>) Proposed bistatic FFBP algorithm.</p>
Figure 11
<p>The contour plots of the impulse response of the selected point targets. (<b>a</b>) The target A focused by the bistatic BP algorithm; (<b>b</b>) The target A focused by the bistatic EP FFBP algorithm; (<b>c</b>) The target A focused by the proposed bistatic FFBP algorithm; (<b>d</b>) The target B focused by the bistatic BP algorithm; (<b>e</b>) The target B focused by the bistatic EP FFBP algorithm; (<b>f</b>) The target B focused by the proposed bistatic FFBP algorithm; (<b>g</b>) The target C focused by the bistatic BP algorithm; (<b>h</b>) The target C focused by the bistatic EP FFBP algorithm; (<b>i</b>) The target C focused by the proposed bistatic FFBP algorithm.</p>
Figure 12
<p>The profiles of the impulse response of the selected point targets. (<b>a</b>) Azimuthal profile of the target A; (<b>b</b>) Range profile of the target A; (<b>c</b>) Azimuthal profile of the target B; (<b>d</b>) Range profile of the target B; (<b>e</b>) Azimuthal profile of the target C; (<b>f</b>) Range profile of the target C.</p>
Figure 13
<p>The natural ocean scene and its echo signal generated by the TBT algorithm for the GEO-UAV VHF UWB BiSAR imaging. (<b>a</b>) The natural ocean scene; (<b>b</b>) Amplitude of the echo signal; (<b>c</b>) Phase of the echo signal.</p>
Figure 14
<p>Reconstructed SAR image obtained by the different algorithms. (<b>a</b>) The bistatic BP algorithm; (<b>b</b>) The bistatic EP FFBP algorithm; (<b>c</b>) The proposed bistatic FFBP algorithm.</p>
Figure 15
<p>The selected ship targets and their profiles of the imaging results obtained by the different algorithms. (<b>a</b>) Ship target A; (<b>b</b>) Ship target B; (<b>c</b>) Ship target C; (<b>d</b>) Azimuthal profile of the ship target A; (<b>e</b>) Azimuthal profile of the ship target B; (<b>f</b>) Azimuthal profile of the ship target C; (<b>g</b>) Range profile of the ship target A; (<b>h</b>) Range profile of the ship target B; (<b>i</b>) Range profile of the ship target C.</p>
28 pages, 36592 KiB  
Article
An Interferogram Re-Flattening Method for InSAR Based on Local Residual Fringe Removal and Adaptively Adjusted Windows
by Di Zhuang, Lamei Zhang and Bin Zou
Remote Sens. 2023, 15(8), 2214; https://doi.org/10.3390/rs15082214 - 21 Apr 2023
Cited by 1 | Viewed by 1546
Abstract
InSAR technology uses the geometry between antennas and targets to obtain DEM and deformation; therefore, accurate orbit information, which can provide reliable geometry, is the prerequisite for InSAR processing. However, the orbit information provided by some satellites may be inaccurate. Further, this inaccuracy will be reflected in the interferogram and will be difficult to remove, finally resulting in incorrect results. More importantly, it was found that the residual fringes caused by inaccurate orbit information vary unevenly throughout the whole image and cannot be completely removed by the existing refinement and re-flattening methods. Therefore, an interferogram re-flattening method based on local residual fringe removal and adaptively adjusted windows was proposed in this paper, with the aim being to remove the unevenly varying residual fringes. There are two innovative advantages of the proposed method. One advantage is that the method aims at the global inhomogeneity of residual fringes; the idea of combining local processing and residual fringe removal was proposed to ensure the residual fringes in the whole image can be removed. The other is that an adaptively adjusted local flattening window was designed to ensure that the residual fringes within the local window can be removed cleanly. Three sets of GaoFen-3 data and one pair of Sentinel-1A data were used for experiments. The re-flattening process shows that the local flattening and the adjustment of the local window are absolutely essential to the clean removal of time-varying and uneven residual fringes. The generated DEM and the estimated building heights are used to indirectly reflect the performance of re-flattening methods. The final results show that, compared with mature refinement and re-flattening methods, the DEMs based on the proposed method are more accurate, which reflects that the proposed method has a better performance in the removal of time-varying and uneven residual fringes.
Full article
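The core operation of local re-flattening — removing a near-linear residual fringe inside a window — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it demodulates the dominant fringe found at the 2-D FFT peak of a complex interferogram window. The function name and the FFT-peak approach are assumptions; the paper's adaptively adjusted windows and inter-window phase alignment are not reproduced here.

```python
import numpy as np

def remove_residual_fringe(window):
    """Remove the dominant linear fringe from a complex interferogram window
    by locating the 2-D FFT peak and demodulating it. Returns the flattened
    window and the estimated fringe frequencies (cycles/pixel)."""
    spec = np.fft.fft2(window)
    ky, kx = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
    fy = np.fft.fftfreq(window.shape[0])[ky]
    fx = np.fft.fftfreq(window.shape[1])[kx]
    y, x = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    ramp = np.exp(-2j * np.pi * (fy * y + fx * x))  # conjugate fringe
    return window * ramp, (fy, fx)
```

Applied to a synthetic window containing a pure linear fringe, the residual phase after demodulation is essentially zero, which is the behavior a local flattening step relies on.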
Show Figures

Graphical abstract
Figure 1
<p>Examples of the flattened interferograms using the orbit-based flattening method. (<b>a</b>) Flattened interferograms in Tokyo Bay area based on GaoFen-3 FSII data; (<b>b</b>) Flattened interferograms in Ningbo City area based on GaoFen-3 FSI data.</p>
Figure 2
<p>Flowchart of the proposed method.</p>
Figure 3
<p>Mechanism of adaptive adjustment for re-flattening windows.</p>
Figure 4
<p>Situations in which the residual fringes are not removed completely, and the corresponding solutions. (<b>a</b>) Re-flattened interferogram with residual fringes due to a small window; (<b>b</b>) Re-flattened interferogram without residual fringes after adaptive window size adjustment; (<b>c</b>) Re-flattened interferogram with residual fringes due to an improper window position; (<b>d</b>) Re-flattened interferogram without residual fringes after adaptive position adjustment.</p>
Figure 5
<p>Schematic diagram of the re-flattened interferogram. (<b>a</b>) re-flattened interferogram within one column of windows; (<b>b</b>) re-flattened interferogram within several columns of windows. The blue arrow indicates the order of the re-flattening process.</p>
Figure 6
<p>SAR intensity images and corresponding optical images of the four pairs of InSAR data. (<b>a</b>) SAR intensity image of the GF-3 InSAR data located in Ningbo City; (<b>b</b>) Optical image of the GF-3 InSAR data located in Ningbo City; (<b>c</b>) SAR intensity image of the GF-3 InSAR data located in Yutian County; (<b>d</b>) Optical image of the GF-3 InSAR data located in Yutian County; (<b>e</b>) SAR intensity image of the GF-3 InSAR data located in Xi’an City; (<b>f</b>) Optical image of the GF-3 InSAR data located in Xi’an City; (<b>g</b>) SAR intensity image of the Sentinel-1A InSAR data located in Yancheng City; (<b>h</b>) Optical image of the Sentinel-1A InSAR data located in Yancheng City.</p>
Figure 7
<p>Interferogram and coherence of the InSAR data located in Ningbo. (<b>a</b>) Interferogram; (<b>b</b>) The enlarged image of the red rectangular window in (<b>a</b>); (<b>c</b>) Coherence; (<b>d</b>) The histogram of the coherence.</p>
Figure 8
<p>Flattened interferogram.</p>
Figure 9
<p>The re-flattened interferograms of local windows and phase alignment between adjacent windows. (<b>a</b>) The re-flattened and aligned interferogram; (<b>b</b>) The interferogram after re-flattening and phase alignment of the next window, based on (<b>a</b>); (<b>c</b>) The enlarged image of the red rectangular window in (<b>b</b>).</p>
Figure 10
<p>Re-flattened interferograms based on the proposed method, GPR method, SBDR method and LFE-MW method. (<b>a</b>) The re-flattened interferogram based on the proposed re-flattening method in this paper; (<b>b</b>) The enlarged image of the red rectangular window in (<b>a</b>); (<b>c</b>) The re-flattened interferogram based on the GPR method; (<b>d</b>) The enlarged image of the red rectangular window in (<b>c</b>); (<b>e</b>) The re-flattened interferogram based on the SBDR method; (<b>f</b>) The enlarged image of the red rectangular window in (<b>e</b>); (<b>g</b>) The re-flattened interferogram based on the LFE-MW method; (<b>h</b>) The enlarged image of the red rectangular window in (<b>g</b>).</p>
Figure 11
<p>Re-flattened interferograms of the InSAR data located in Yutian County. (<b>a</b>) Re-flattened interferogram based on the proposed re-flattening method; (<b>b</b>) Re-flattened interferogram based on the GPR method; (<b>c</b>) Re-flattened interferogram based on the SBDR method; (<b>d</b>) Re-flattened interferogram based on the LFE-MW method.</p>
Figure 12
<p>Re-flattened interferograms of the InSAR data located in Xi’an City. (<b>a</b>) Re-flattened interferogram based on the proposed re-flattening method; (<b>b</b>) Re-flattened interferogram based on the GPR method; (<b>c</b>) Re-flattened interferogram based on the SBDR method; (<b>d</b>) Re-flattened interferogram based on the LFE-MW method.</p>
Figure 13
<p>Re-flattened interferograms of the Sentinel-1A InSAR data located in Yancheng City. (<b>a</b>) Re-flattened interferogram based on the proposed re-flattening method; (<b>b</b>) Re-flattened interferogram based on the GPR method; (<b>c</b>) Re-flattened interferogram based on the SBDR method; (<b>d</b>) Re-flattened interferogram based on the LFE-MW method.</p>
Figure 14
<p>DEMs (in meters) in Ningbo City based on different re-flattened interferograms. (<b>a</b>) DEM based on the proposed re-flattening method; (<b>b</b>) DEM based on the GPR method; (<b>c</b>) DEM based on the SBDR method; (<b>d</b>) DEM based on the LFE-MW method.</p>
Figure 15
<p>DEMs (in meters) in Yutian County based on different re-flattened interferograms. (<b>a</b>) DEM based on the proposed re-flattening method; (<b>b</b>) DEM based on the GPR method; (<b>c</b>) DEM based on the SBDR method; (<b>d</b>) DEM based on the LFE-MW method.</p>
Figure 16
<p>DEMs (in meters) in Xi’an City based on different re-flattened interferograms. (<b>a</b>) DEM based on the proposed re-flattening method; (<b>b</b>) DEM based on the GPR method; (<b>c</b>) DEM based on the SBDR method; (<b>d</b>) DEM based on the LFE-MW method.</p>
Figure 17
<p>DEMs (in meters) in Yancheng City based on different re-flattened interferograms. (<b>a</b>) DEM based on the proposed re-flattening method; (<b>b</b>) DEM based on the GPR method; (<b>c</b>) DEM based on the SBDR method; (<b>d</b>) DEM based on the LFE-MW method.</p>
Figure 18
<p>The lines of relative evaluation indicators <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>M</mi> <mi>A</mi> <mi>E</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>R</mi> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math>. (<b>a</b>) The line of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>M</mi> <mi>A</mi> <mi>E</mi> </mrow> </semantics></math>; (<b>b</b>) the line of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>R</mi> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math>.</p>
Figure 19
<p>Building height estimation based on the re-flattened interferogram which uses the proposed method. (<b>a</b>) Optical image, (<b>b</b>) Re-flattened interferogram and (<b>c</b>) Height estimation of the selected buildings in area 1; (<b>d</b>) Optical image, (<b>e</b>) Re-flattened interferogram and (<b>f</b>) Height estimation of the selected buildings in area 2; (<b>g</b>) Optical image, (<b>h</b>) Re-flattened interferogram and (<b>i</b>) Height estimation of the selected buildings in area 3; (<b>j</b>) Optical image, (<b>k</b>) Re-flattened interferogram and (<b>l</b>) Height estimation of the selected buildings in area 4.</p>
Figure 20
<p>Enlarged coherence image and absolute DEM error image in the local area of Ningbo City. (<b>a</b>) The coherence image of the local area in Ningbo City; (<b>b</b>) the corresponding absolute DEM error to (<b>a</b>).</p>
Figure 21
<p>The relationship between the coherence and the absolute DEM error.</p>
18 pages, 3883 KiB  
Article
Methods of Analyzing the Error and Rectifying the Calibration of a Solar Tracking System for High-Precision Solar Tracking in Orbit
by Yingqiu Shao, Zhanfeng Li, Xiaohu Yang, Yu Huang, Bo Li, Guanyu Lin and Jifeng Li
Remote Sens. 2023, 15(8), 2213; https://doi.org/10.3390/rs15082213 - 21 Apr 2023
Cited by 2 | Viewed by 1351
Abstract
Reliability is the most critical characteristic of space missions, for example in capturing and tracking moving targets. To this end, two methods are designed to track sunlight using solar remote-sensing instruments (SRSIs). The primary method is to use the offset angles of the guide mirror for closed-loop tracking, while the alternative method is to use the sunlight angles, calculated from the satellite attitude, solar vector, and mechanical installation correction parameters, for open-loop tracking. By comprehensively analyzing the error and rectifying the calibration of the solar tracking system, we demonstrate that the absolute value of the azimuth tracking precision is less than 0.0121° and the pitch is less than 0.0037° with the primary method; with the alternative method, the corresponding values are 0.0992° and 0.0960°. These precisions meet the requirements of SRSIs. In addition, mechanical vibration during the satellite’s launch may alter the calibration and invalidate the above methods, leading to the failure of SRSIs. Hence, we propose a dedicated injection parameter strategy to rectify the sunlight angles so that the sunlight can be captured and tracked successfully. The stable and effective results in the ultraviolet to near-infrared spectrum validate the SRSI’s high-precision sunlight tracking performance. Furthermore, the above methods can also be applied to all orbital inclinations and may provide a solution for capturing and tracking moving targets. Full article
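The open-loop alternative method amounts to a chain of frame rotations followed by a Cartesian-to-angular conversion. A minimal sketch of that step is shown below; the function names, frame conventions, and azimuth/pitch definitions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis (angle in degrees)."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def sunlight_angles(sun_vec, R_orbit_to_sat, R_sat_to_instr):
    """Open-loop step: rotate the solar unit vector from the orbital frame
    into the instrument frame, then convert it to azimuth/pitch commands.
    Azimuth is measured in the x-y plane, pitch from that plane."""
    v = R_sat_to_instr @ R_orbit_to_sat @ np.asarray(sun_vec, dtype=float)
    v = v / np.linalg.norm(v)
    azimuth = np.degrees(np.arctan2(v[1], v[0]))
    pitch = np.degrees(np.arcsin(np.clip(v[2], -1.0, 1.0)))
    return azimuth, pitch
```

With identity rotations and the sun along the instrument x-axis, both angles are zero; rotating the frame rotates the commanded azimuth accordingly.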
Show Figures

Graphical abstract
Figure 1
<p>(<b>a</b>) The components of the spectrometer and the coordinate systems. (<b>b</b>) Orbit diagram. (<b>c</b>) The position of front detection on satellite.</p>
Figure 2
<p>(<b>a</b>) Working-principle diagram of the instrument. (<b>b</b>) Turntable control flow.</p>
Figure 3
<p>Angle calculation flow diagram.</p>
Figure 4
<p>(<b>a</b>) Relationship between coordinate systems of the solar tracking system. (<b>b</b>) Coordinate transformation data flow diagram (see <a href="#remotesensing-15-02213-t001" class="html-table">Table 1</a>). (<b>c</b>) Diagram of the sun in the orbital coordinate system and the satellite coordinate system.</p>
Figure 5
<p>Sunlight azimuth and pitch angle in the guide-mirror coordinate system.</p>
Figure 6
<p>The work mode and angle in the first orbit using the alternative method.</p>
Figure 7
<p>The work mode and angle in the second orbit using the alternative method.</p>
Figure 8
<p>The work mode and angle using the primary method.</p>
Figure 9
<p>The solar spectral curve.</p>
27 pages, 1518 KiB  
Article
TPENAS: A Two-Phase Evolutionary Neural Architecture Search for Remote Sensing Image Classification
by Lei Ao, Kaiyuan Feng, Kai Sheng, Hongyu Zhao, Xin He and Zigang Chen
Remote Sens. 2023, 15(8), 2212; https://doi.org/10.3390/rs15082212 - 21 Apr 2023
Cited by 7 | Viewed by 1960
Abstract
The application of deep learning to remote sensing image classification has attracted increasing attention from industry and academia. However, manually designed remote sensing image classification models based on convolutional neural networks usually require sophisticated expert knowledge. Moreover, it is notoriously difficult to design a model with both high classification accuracy and few parameters. Recently, neural architecture search (NAS) has emerged as an effective method that can greatly reduce the heavy burden of manually designing models. However, it remains a challenge to search for a classification model with high classification accuracy and few parameters in the huge search space. To tackle this challenge, we propose TPENAS, a two-phase evolutionary neural architecture search framework, which optimizes the model using computational intelligence techniques in two search phases. In the first search phase, TPENAS searches for the optimal depth of the model. In the second search phase, TPENAS searches for the structure of the model from the perspective of the whole model. Experiments on three open benchmark datasets demonstrate that our proposed TPENAS outperforms state-of-the-art baselines in both classification accuracy and parameter reduction. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
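The Pareto fronts reported in the second search phase follow the standard non-domination rule for a two-objective minimization (test error, parameter count). A minimal sketch of that selection rule, with illustrative function names (the evolutionary operators themselves are not reproduced):

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    """Return the non-dominated subset of (error, parameter-count) pairs,
    i.e., the Pareto front from which a final architecture is chosen."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, a candidate with the same error as another but more parameters is dominated and drops off the front.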
Show Figures

Figure 1
<p>The encoding diagram of a feature extraction block.</p>
Figure 2
<p>The diagram of population evolution in TPENAS. Stem represents a convolution operation; Block represents a feature extraction block; Pooling represents a pooling operation; GPA represents a global average pooling operation; Linear represents a fully connected layer.</p>
Figure 3
<p>Some samples from the UCM21 dataset.</p>
Figure 4
<p>Some samples from the PatternNet dataset.</p>
Figure 5
<p>Some samples from the NWPU45 dataset.</p>
Figure 6
<p>Search results of the first search phase for the UCM21 dataset. The red dots represent the optimal solution in a particular block, while the blue circles represent nonoptimal solutions. The pentagram represents the chosen solution. The red dotted line represents the front of the best solution.</p>
Figure 7
<p>Pareto front of the second search phase for the UCM21 dataset. The red dots represent the optimal solution, while the blue circles indicate the nonoptimal solutions. The Pareto front is represented by the red dotted line. The pentagram indicates the chosen solution.</p>
Figure 8
<p>The classification confusion matrix on UCM21 dataset.</p>
Figure 9
<p>Search results of the first search phase for the PatternNet dataset. The red dots indicate the optimal solution in a particular block, while the blue circles indicate nonoptimal solutions. The pentagram represents the chosen solution. The red dotted line represents the front of the best solution.</p>
Figure 10
<p>Pareto front of the second search phase for the PatternNet dataset. The red dots represent the optimal solution, while the blue circles represent the nonoptimal solution. The Pareto front is represented by the red dotted line. The pentagram indicates the chosen solution.</p>
Figure 11
<p>The classification confusion matrix on PatternNet dataset.</p>
Figure 12
<p>Search results of the first search phase for the NWPU45 dataset. The red dots indicate the optimal solution in a particular block, while the blue circles indicate nonoptimal solutions. The pentagram represents the chosen solution. The red dotted line represents the front of the best solution.</p>
Figure 13
<p>Pareto front of the second search phase for the NWPU45 dataset. The red dots represent the optimal solution, while the blue circles represent the nonoptimal solutions. The Pareto front is represented by the red dotted line. The pentagram indicates the chosen solution.</p>
Figure 14
<p>The classification confusion matrix on the NWPU45 dataset. We removed categories with classification accuracy higher than 95% in the confusion matrix and did not display numbers less than 0.01.</p>
Figure 15
<p>Some samples in the categories <span class="html-italic">church</span> and <span class="html-italic">palace</span>. The four images in the first row are in the category <span class="html-italic">church</span>, and the four images in the second row are in the category <span class="html-italic">palace</span>.</p>
Figure 16
<p>The Pareto fronts obtained by experiments with five different block numbers during the second search phase.</p>
Figure 17
<p>The horizontal axis represents the GFLOPs of the individual, and the vertical axis represents the test error of the individual on the UCM21 dataset. The blue curve represents the Pareto front of the second search phase. The red curve represents the result obtained by fully training the solutions on the Pareto front from scratch.</p>
16 pages, 14094 KiB  
Article
Remote Sensing Image Compression Based on the Multiple Prior Information
by Chuan Fu and Bo Du
Remote Sens. 2023, 15(8), 2211; https://doi.org/10.3390/rs15082211 - 21 Apr 2023
Cited by 4 | Viewed by 2086
Abstract
Learned image compression has achieved a series of breakthroughs for natural images, but there is little literature focusing on high-resolution remote sensing image (HRRSI) datasets. This paper focuses on designing a learned lossy image compression framework for compressing HRRSIs. Considering the local and non-local redundancy contained in HRRSI, a mixed hyperprior network is designed to explore both the local and non-local redundancy in order to improve the accuracy of entropy estimation. In detail, a transformer-based hyperprior and a CNN-based hyperprior are fused for entropy estimation. Furthermore, to reduce the mismatch between training and testing, a three-stage training strategy is introduced to refine the network. In this training strategy, the entire network is first trained, and then some sub-networks are fixed while the others are trained. To evaluate the effectiveness of the proposed compression algorithm, experiments are conducted on an HRRSI dataset. The results show that the proposed algorithm achieves comparable or better compression performance than some traditional and learned image compression algorithms, such as Joint Photographic Experts Group (JPEG) and JPEG2000. At a similar or lower bitrate, the proposed algorithm is about 2 dB higher than the PSNR value of JPEG2000. Full article
(This article belongs to the Special Issue AI-Based Obstacle Detection and Avoidance in Remote Sensing Images)
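The rate-distortion numbers quoted here (bpp and PSNR) follow the standard definitions, sketched below for reference; the helper names are illustrative.

```python
import math

def bpp(total_bits, height, width):
    """Rate of a compressed image in bits per pixel (bpp)."""
    return total_bits / (height * width)

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error,
    assuming 8-bit imagery (peak = 255)."""
    return 10.0 * math.log10(peak * peak / mse)
```

For instance, a 512 × 512 image compressed to 524,288 bits has a rate of exactly 2.0 bpp, and an MSE 100× smaller than the squared peak corresponds to 20 dB PSNR.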
Show Figures

Figure 1
<p>The redundancy contained in remote sensing images. In each image, the same color marks similar patches.</p>
Figure 2
<p>(<b>a</b>) Hyperprior [<a href="#B56-remotesensing-15-02211" class="html-bibr">56</a>]. (<b>b</b>) Proposed.</p>
Figure 3
<p>The framework of the proposed compression algorithm.</p>
Figure 4
<p>The entropy model used in the proposed algorithm.</p>
Figure 5
<p>The main transform block of the proposed algorithm. The poolformer replaces the token-mixing module with pooling layers. A CBAM block is adopted after the poolformer blocks to enhance channel and spatial attention.</p>
Figure 6
<p>The parameter estimator network, used for estimating the parameters of the GMM.</p>
Figure 7
<p>The rate-distortion compression performance of the proposed algorithm and comparison algorithms. The rate is measured by bit per pixel (bpp) and distortion is measured by PSNR.</p>
Figure 8
<p>The rate-distortion compression performance of the proposed algorithm and comparison algorithms. The rate is measured by bit per pixel (bpp) and distortion is measured by MSSSIM.</p>
Figure 9
<p>The visual performance. JPEG: 0.7712 bpp, PSNR 31.07 dB, MSSSIM 0.9788. JPEG2000: 0.7373 bpp, PSNR 33.72 dB, MSSSIM 0.9814. Ours: 0.6818 bpp, PSNR 35.18 dB, MSSSIM 0.9897. (<b>a</b>) Original; (<b>b</b>) JPEG; (<b>c</b>) JPEG2000; (<b>d</b>) ours.</p>
Figure 10
<p>The visual performance. JPEG: 0.8254 bpp, PSNR 30.68 dB, MSSSIM 0.9788. JPEG2000: 0.7375 bpp, PSNR 32.74 dB, MSSSIM 0.9830. Ours: 0.7165 bpp, PSNR 34.68 dB, MSSSIM 0.9912. (<b>a</b>) Original; (<b>b</b>) JPEG; (<b>c</b>) JPEG2000; (<b>d</b>) ours.</p>
16 pages, 6089 KiB  
Article
The Ocean Surface Current in the East China Sea Computed by the Geostationary Ocean Color Imager Satellite
by Youzhi Ma, Wenbin Yin, Zheng Guo and Jiliang Xuan
Remote Sens. 2023, 15(8), 2210; https://doi.org/10.3390/rs15082210 - 21 Apr 2023
Cited by 3 | Viewed by 2556
Abstract
High-frequency observations of surface current field data over large areas and long time series are imperative for comprehending sea-air interaction and ocean dynamics. Nonetheless, neither in situ observations nor polar-orbiting satellites can fulfill the requirements necessary for such observations. In recent years, geostationary satellite data with ultra-high temporal resolution have been increasingly utilized for the computation of surface flow fields. In this paper, the surface flow field in the East China Sea is estimated using maximum cross-correlation, which is the most widely used flow field computation algorithm, based on the total suspended solids (TSS) data acquired from the Geostationary Ocean Color Imager satellite. The inversion results were compared with the modeled tidal current data and the measured tidal elevation data for verification. The verification demonstrated that the mean deviation of the long semiaxis of the tidal ellipse of the inverted M2 tide is 0.0335 m/s, the mean deviation of the short semiaxis is 0.0276 m/s, and the mean deviation of the tilt angle is 6.89°. Moreover, the spatially averaged flow velocity corresponds with the observed pattern of tidal elevation changes, demonstrating the reliability of the inverted flow field. Afterward, we calculated the sea surface current fields in the East China Sea for the years 2013 to 2019 and created distribution maps for both climatology and seasonality. The resulting current charts provide an intuitive display of the spatial structure and seasonal variations in the East China Sea circulation. Lastly, we performed a diagnostic analysis on the surface TSS variation mechanism in the frontal zone along the Zhejiang coast, utilizing inverted flow data collected on 3 August 2013, which had a high spatial coverage and complete time series. Our analysis revealed that the intraday variation in TSS in the local surface layer was primarily influenced by tide-induced vertical mixing.
The research findings of this article not only provide valuable data support for the study of local ocean dynamics but also verify the reliability of short-period surface flow inversion of high-turbidity waters near the coast using geostationary satellites. Full article
(This article belongs to the Special Issue Recent Advancements in Remote Sensing for Ocean Current)
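The maximum cross-correlation (MCC) step described in the abstract and in Figure 2 can be sketched in a few lines: a source window from the first TSS image is compared against shifted target windows in the second image, and the displacement maximizing the normalized cross-correlation is taken as the motion estimate. This is a minimal sketch under stated assumptions — function name, window radius, and search range are illustrative, and the conversion to velocity (displacement × pixel size ÷ time between images) is omitted.

```python
import numpy as np

def mcc_displacement(img0, img1, y0, x0, r, search):
    """Find the (dy, dx) displacement of a (2r+1)x(2r+1) source window
    centered at (y0, x0) between two consecutive images by maximizing the
    normalized cross-correlation over a +/-search pixel range."""
    src = img0[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
    src = (src - src.mean()) / src.std()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            tgt = img1[y0 + dy - r:y0 + dy + r + 1,
                       x0 + dx - r:x0 + dx + r + 1]
            tgt = (tgt - tgt.mean()) / tgt.std()
            cc = (src * tgt).mean()  # Pearson correlation of the windows
            if cc > best:
                best, best_dy, best_dx = cc, dy, dx
    return best_dy, best_dx, best
```

On a synthetic pair where the second image is the first shifted by a known offset, the routine recovers that offset with a correlation near 1.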
Figure 1
<p>(<b>a</b>) Spatial distribution of GOCI climatological TSS in the East China Sea; (<b>b</b>) the schematic diagram of ocean circulation (the background color in (<b>b</b>) is the water depth). <b>KC</b>: Kuroshio Current; <b>TWC</b>: Taiwan Warm Current; <b>CDW</b>: Changjiang Diluted Water; <b>ZJCC</b>: Zhejiang Coastal Current.</p>
Figure 2
<p>Two consecutive remote-sensing images <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">T</mi> <mn mathvariant="bold">0</mn> </msub> </mrow> </semantics></math> (<b>a</b>) and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">T</mi> <mn mathvariant="bold">1</mn> </msub> </mrow> </semantics></math> (<b>b</b>), used for calculating the cross-correlation coefficient between the source window (red square) and the target window in the search window (blue square), to find the position of the highest cross-correlation value and obtain the velocity vector.</p>
Figure 3
<p>The RMSE of semimajor (solid blue line), semiminor (blue dotted line) axis, and inclination (solid red line) under different source window radius <math display="inline"><semantics> <mi>r</mi> </semantics></math>.</p>
Figure 4
<p>(<b>a</b>) M<sub>2</sub> tidal ellipse of MCC-derived current (blue) and the model flow (red), with a red star indicating the location of the local tide gauge station; (<b>b</b>) spectrum analysis of one month’s worth of tidal hourly elevation data; and (<b>c</b>) tidal elevation variation and spatially averaged flow velocity on 14 February 2017.</p>
Figure 5
<p>(<b>a</b>) GOCI-derived climatic current field; (<b>b</b>) spatial distribution of the number of velocity vectors in the flow field. <b>KC</b>: Kuroshio Current; <b>TWC</b>: Taiwan Warm Current; <b>CDW</b>: Changjiang Diluted Water. The red arrows in (<b>a</b>) show the trajectory of the currents.</p>
Figure 6
<p>Seasonal surface mean flow in the East China Sea (red arrows indicate the main circulation). (<b>a</b>) Spring. (<b>b</b>) Summer. (<b>c</b>) Autumn. (<b>d</b>) Winter. <b>KC</b>: Kuroshio Current; <b>TWC</b>: Taiwan Warm Current; <b>CDW</b>: Changjiang Diluted Water; <b>ZJCC</b>: Zhejiang Coastal Current.</p>
Figure 7
<p>Seasonal distribution of the number of derived current vectors in the East China Sea. (<b>a</b>) Spring. (<b>b</b>) Summer. (<b>c</b>) Autumn. (<b>d</b>) Winter.</p>
Figure 8
<p>(<b>a</b>–<b>g</b>) Surface current field (red line represents the 70 m isobath); (<b>h</b>) tidal elevation observed from local tide gauge station (blue line) and spatially averaged TSS variation within 70 m isobath (red line); and (<b>i</b>) regional mean current velocity within 70 m isobath and tidal elevation from 8:00 to 14:00 on 3 August 2013.</p>
Figure 9
<p>The hourly variation in TSS, advection term, horizontal diffusion term, and vertical term from 08:00 to 13:00 on 3 August 2013.</p>
Figure 10
<p>Spatial distribution of hourly variation in TSS, advection term, horizontal diffusion term, and vertical term at 10:00–11:00 (<b>a</b>–<b>d</b>) and 11:00–12:00 (<b>e</b>–<b>h</b>) on 3 August 2013.</p>
Figure 11
<p>The evolution of the Simpson-Hunter index k versus hourly variations in TSS from 8:00 to 14:00.</p>
22 pages, 8060 KiB  
Article
A Hybrid Chlorophyll a Estimation Method for Oligotrophic and Mesotrophic Reservoirs Based on Optical Water Classification
by Xiaoyan Dang, Jun Du, Chao Wang, Fangfang Zhang, Lin Wu, Jiping Liu, Zheng Wang, Xu Yang and Jingxu Wang
Remote Sens. 2023, 15(8), 2209; https://doi.org/10.3390/rs15082209 - 21 Apr 2023
Cited by 1 | Viewed by 1934
Abstract
Low- and medium-resolution satellites have been a relatively mature platform for inland eutrophic water classification and chlorophyll a concentration (Chl-a) retrieval algorithms. However, for oligotrophic and mesotrophic waters in small- and medium-sized reservoirs, problems of low satellite resolution, insufficient water sampling, and higher uncertainty in retrieval accuracy exist. In this paper, a hybrid Chl-a estimation method based on spectral characteristics (i.e., remote sensing reflectance (Rrs)) classification was developed for oligotrophic and mesotrophic waters using high-resolution satellite Sentinel-2 (A and B) data. First, 99 samples and quasi-synchronous Sentinel-2 satellite data were collected from four small- and medium-sized reservoirs in central China, and the usability of the Sentinel-2 Rrs data in inland oligotrophic and mesotrophic waters was verified by accurate atmospheric correction. Second, a new optical classification method was constructed based on different water characteristics to classify waters into clear water, phytoplankton-dominated water, and water dominated by phytoplankton and suspended matter together using the thresholds of Rrs490/Rrs560 and Rrs665/Rrs560. The proposed method has a higher classification accuracy compared to other classification methods, and the band-ratio algorithm is simpler and more effective for satellite sensors without NIR bands. Third, given the sensitivity of the empirical method to water variability and the ease of development and implementation, a nonlinear least squares fitted one-dimensional nonlinear function was established based on the selection of the best-fitting spectral indices for different optical water types (OWTs) and compared with other Chl-a estimation algorithms. 
The validation results showed that the hybrid two-band method had the highest accuracy, with a squared correlation coefficient, root mean squared difference, mean absolute percentage error, and bias of 0.85, 2.93, 32.42%, and −0.75 mg/m3, respectively, and the residual values further validated the applicability and reliability of the model. Finally, the performance of the classification and estimation algorithms on the four reservoirs was evaluated to obtain images mapping the Chl-a in the reservoirs. In conclusion, this study improves the accuracy of Chl-a estimation for oligotrophic and mesotrophic waters by combining a new classification algorithm with a two-band hybrid model, an important contribution to addressing the low resolution and high uncertainty in the retrieval of Chl-a in oligotrophic and mesotrophic waters in small- and medium-sized reservoirs. The approach has the potential to be applied to other optically similar oligotrophic and mesotrophic lakes and reservoirs using spectrally similar satellite sensors. Full article
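The classify-then-estimate logic can be illustrated with a small sketch; the thresholds and per-type model coefficients below are hypothetical placeholders, not the values fitted in the paper.

```python
# Illustrative sketch of an optical-water-type (OWT) classification followed
# by a per-type two-band Chl-a model. Thresholds and coefficients are
# placeholders, not the fitted values from the study.

def classify_owt(rrs490, rrs560, rrs665, t_clear=0.9, t_sediment=0.5):
    """Assign one of three OWTs from two band ratios:
    1 = clear water, 2 = phytoplankton-dominated,
    3 = phytoplankton and suspended matter together."""
    if rrs490 / rrs560 >= t_clear:      # blue/green ratio is high in clear water
        return 1
    return 3 if rrs665 / rrs560 >= t_sediment else 2

def chla_estimate(rrs665, rrs705, owt):
    """Per-OWT two-band Chl-a model (mg/m^3); one fitted curve per type."""
    coeffs = {1: (10.0, -2.0), 2: (25.0, -8.0), 3: (40.0, -15.0)}
    a, b = coeffs[owt]
    return a * (rrs705 / rrs665) + b
```

Checking the clear-water ratio first mirrors the classification order implied by the abstract: only non-clear pixels are split by the red/green sediment ratio.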
Figure 1
<p>Spatial distribution of study areas ((<b>a</b>) XLD, (<b>b</b>) DJK, (<b>c</b>) LHH, and (<b>d</b>) SYH) and location of sample sites. See <a href="#remotesensing-15-02209-t001" class="html-table">Table 1</a> for details about the dataset.</p>
Figure 2
<p>Remote-sensing reflectance data collected from (<b>a</b>) XLD, (<b>b</b>) LHH, (<b>c</b>) SYH, and (<b>d</b>) DJK.</p>
Figure 3
<p>Framework of the hybrid algorithms for Chl-a estimation based on optical classification.</p>
Figure 4
<p>Scatter plots of the reflectance of the Sentinel-2 satellite in eight bands (443, 490, 560, 665, 705, 740, 783, and 842 nm) against the measured reflectance. The correlation coefficient (R) was used to evaluate the accuracy of the Sentinel-2 Rrs data. The black diagonal line is the 1:1 line.</p>
Figure 5
<p>Sampled data classified according to the algorithm in <a href="#sec3dot1-remotesensing-15-02209" class="html-sec">Section 3.1</a> and the Rrs band ratios (<b>a</b>) Rrs490/Rrs560 and (<b>b</b>) Rrs665/Rrs560.</p>
Figure 6
<p>Mean measured spectral characteristics and Sentinel-2 average Rrs of the three OWTs after classification based on the two-band algorithm (green curve, blue curve, red curve, black triangle, black asterisk, and black circle denote Type 1, Type 2, and Type 3, respectively).</p>
Figure 7
<p>Classification of water into (<b>a</b>) three types according to MCI thresholds (0.0001, 0.0016) and (<b>b</b>) two types according to the R709/R560 algorithm (0.28). Green, blue, and red circles indicate the three OWTs in this study, corresponding to Types 1, 2, and 3, respectively.</p>
Figure 8
<p>(<b>a</b>) Wp distinguished according to the CI672 threshold (0.005) and (<b>b</b>) Wm distinguished by R555 and CI555. Green, blue, and red circles indicate the three OWTs in this study, corresponding to Types 1, 2, and 3, respectively.</p>
Figure 9
<p>Scatter plot of measured Chl-a and Sentinel-2 estimated Chl-a based on the hybrid method (<span class="html-italic">n</span> = 99). The black diagonal line is the 1:1 line.</p>
Figure 10
<p>Residuals distribution in the Chl-a estimation algorithm for the three OWTs (<span class="html-italic">n</span> = 99).</p>
Figure 11
<p>Relationship between the Sentinel-2 satellite band modified methods (<b>a</b>) MCI, (<b>b</b>) TBR, and (<b>c</b>) TBA and measured Chl-a of three OWTs based on the band-ratio water classification. Green, blue, and red circles indicate the three OWTs in this study, corresponding to Types 1, 2, and 3, respectively.</p>
Figure 12
<p>Scatter plots of measured Chl-a and Sentinel-2 estimated Chl-a based on the modified method (<b>a</b>) MCI, (<b>b</b>) TBR, and (<b>c</b>) TBA (<span class="html-italic">n</span> = 99). Green, blue, and red circles represent the three OWTs based on the band-ratio water classification algorithm proposed in this study, corresponding to Types 1, 2, and 3, respectively. The black diagonal line is the 1:1 line. The units of RMSE, MAPE, and bias are mg/m<sup>3</sup>, percent, and mg/m<sup>3</sup>, respectively.</p>
Figure 13
<p>Estimated spatial distribution of the three OWTs based on hybrid two-band methods: (<b>a</b>) Type 1 water in DJK (June 2022), (<b>b</b>) Type 2 water in XLD (October 2020, and June 2021) and LHH (May 2021, and 14 September 2021), and (<b>c</b>) Type 3 waters in SYH and LHH (September 2021, and 29 September 2021).</p>
16 pages, 20205 KiB  
Article
Multi-Scale and Context-Aware Framework for Flood Segmentation in Post-Disaster High Resolution Aerial Images
by Sultan Daud Khan and Saleh Basalamah
Remote Sens. 2023, 15(8), 2208; https://doi.org/10.3390/rs15082208 - 21 Apr 2023
Cited by 6 | Viewed by 2150
Abstract
Floods are the most frequent natural disasters, occurring almost every year around the globe. To mitigate the damage caused by a flood, it is important to assess the magnitude of the damage in a timely manner and to efficiently conduct rescue operations, deploy security personnel and allocate resources to the affected areas. To respond efficiently to a natural disaster, it is crucial to swiftly obtain accurate information, which is hard to obtain during a post-flood crisis. Generally, high resolution satellite images are predominantly used to obtain post-disaster information. Recently, deep learning models have achieved superior performance in extracting high-level semantic information from satellite images. However, due to the loss of multi-scale and global contextual features, existing deep learning models still face challenges in extracting complete and uninterrupted results. In this work, we proposed a novel deep learning semantic segmentation model that reduces the loss of multi-scale features and enhances global context awareness. The proposed framework consists of three modules, encoder, decoder and bridge, combined in a popular U-shaped scheme. The encoder and decoder modules of the framework introduce Res-inception units to obtain reliable multi-scale features and employ a bridge module (between the encoder and decoder) to capture global context. To demonstrate the effectiveness of the proposed framework, we perform an evaluation using a publicly available challenging dataset, FloodNet, and compare the performance of the proposed framework with recent reference models. Quantitative and qualitative results show that the proposed framework outperforms the reference models by an obvious margin. Full article
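Segmentation frameworks such as this one are typically compared with the intersection-over-union (IoU) metric; a generic sketch of that computation for binary flood masks (not the authors' evaluation code) is:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union between a predicted and a reference
    binary flood mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:                      # both masks empty: perfect agreement
        return 1.0
    return np.logical_and(pred, truth).sum() / union
```

For multi-class benchmarks such as FloodNet, the same computation is usually run per class and averaged (mIoU).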
Figure 1
<p>Detailed architecture of proposed framework for flood segmentation in satellite images.</p>
Figure 2
<p>Structural diagram of Res-inception unit.</p>
Figure 3
<p>Structural diagram of bridge network.</p>
Figure 4
<p>Sample images of the dataset. First and third row show the sample images, while second and fourth row represent the ground truth masks.</p>
Figure 5
<p>Distribution of images corresponding to different classes.</p>
Figure 6
<p>Qualitative comparison of different methods for flood segmentation.</p>
12 pages, 1715 KiB  
Communication
Photosynthetically Active Radiation and Foliage Clumping Improve Satellite-Based NIRv Estimates of Gross Primary Production
by Iolanda Filella, Adrià Descals, Manuela Balzarolo, Gaofei Yin, Aleixandre Verger, Hongliang Fang and Josep Peñuelas
Remote Sens. 2023, 15(8), 2207; https://doi.org/10.3390/rs15082207 - 21 Apr 2023
Cited by 2 | Viewed by 2196
Abstract
Monitoring gross primary production (GPP) is necessary for quantifying the terrestrial carbon balance. The near-infrared reflectance of vegetation (NIRv) has been proven to be a good predictor of GPP. Given that radiation powers photosynthesis, we hypothesized that (i) the addition of photosynthetic photon flux density (PPFD) information to NIRv would improve estimates of GPP and that (ii) a further improvement would be obtained by incorporating the estimates of radiation distribution in the canopy provided by the foliar clumping index (CI). Thus, we used GPP data from FLUXNET sites to test these possible improvements by comparing the performance of a model based solely on NIRv with two other models, one combining NIRv and PPFD and the other combining NIRv, PPFD and the CI of each vegetation cover type. We tested the performance of these models for different types of vegetation cover, at various latitudes and over the different seasons. Our results demonstrate that the addition of daily radiation information and the clumping index for each vegetation cover type to the NIRv improves its ability to estimate GPP. The improvement was related to foliage organization, given that the foliar distribution in the canopy (CI) affects radiation distribution and use and that radiation drives productivity. Evergreen needleleaf forests are the vegetation cover type with the greatest improvement in GPP estimation after the addition of CI information, likely as a result of their greater radiation constraints. Vegetation type was a stronger determinant of the sensitivity to PPFD changes than latitude or seasonality. We advocate for the incorporation of PPFD and CI into NIRv algorithms and GPP models to improve GPP estimates. Full article
(This article belongs to the Special Issue Remote Sensing Applications for the Biosphere)
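A minimal sketch of the quantities involved: NIRv as the NDVI-NIR product, and a least-squares fit of GPP against the NIRv × PPFD predictor. The function names and the simple linear form are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def nirv(nir, red):
    """Near-infrared reflectance of vegetation: NDVI times NIR reflectance."""
    return (nir - red) / (nir + red) * nir

def fit_gpp_model(nirv_vals, ppfd, gpp):
    """Least-squares fit of GPP against the NIRv * PPFD product,
    the combined predictor tested in the paper (coefficients are fitted,
    not prescribed)."""
    x = np.asarray(nirv_vals) * np.asarray(ppfd)
    A = np.column_stack([x, np.ones_like(x)])   # slope + intercept design matrix
    slope, intercept = np.linalg.lstsq(A, np.asarray(gpp), rcond=None)[0]
    return slope, intercept
```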
Figure 1
<p>Sites used in this study.</p>
Figure 2
<p>Relationship between the measured GPP and the GPP estimated from (i) NIRv, (ii) NIRv + PPFD and (iii) NIRv + PPFD + CI, for the whole dataset. CI used in the model was the CI per vegetation cover type from He et al. [<a href="#B24-remotesensing-15-02207" class="html-bibr">24</a>]. Data correspond to daily mean data from 26 sites from 2000 to 2014. The red line corresponds to the linear regression and the black line to the 1:1 line. Different colours identify the vegetation cover types.</p>
Figure 3
<p>Relationship between the clumping index and the variation in R<sup>2</sup> (∆R<sup>2</sup>) and RMSE (∆RMSE) between the ground-measured GPP and the satellite-estimated GPP after adding PPFD information to the NIRv model (GPPnirppfd vs. GPPnirv) in black for the different studied sites (<a href="#remotesensing-15-02207-t001" class="html-table">Table 1</a>) (CI calculated as the average annual value per site from the CI product LIS-CI-A1), and in red for the different vegetation cover types (CI for vegetation cover type as in He et al. [<a href="#B24-remotesensing-15-02207" class="html-bibr">24</a>]): evergreen needleleaf forest (ENF), evergreen broadleaf forest (EBF), deciduous broadleaf forest (DBF), mixed forest (MF), open shrubland (OSH), grassland (GRA), woody savanna (WSA) and wetland (WET).</p>
Figure 4
<p>Variation in absolute bias between the ground-measured GPP and the satellite-estimated GPP after adding biome CI information to the NIR and PPFD model (GPPnirppfdci vs. GPPnirppfd) for the different vegetation cover types.</p>
18 pages, 6981 KiB  
Article
Combined GPR and Self-Potential Techniques for Monitoring Steel Rebar Corrosion in Reinforced Concrete Structures: A Laboratory Study
by Giacomo Fornasari, Luigi Capozzoli and Enzo Rizzo
Remote Sens. 2023, 15(8), 2206; https://doi.org/10.3390/rs15082206 - 21 Apr 2023
Cited by 2 | Viewed by 2720
Abstract
Steel rebar corrosion is one of the main causes of the deterioration of engineering reinforced structures. Steel rebar in concrete is normally in a non-corroding, passive condition, but this passive condition is not always maintained in practice, and corrosion of the rebars then takes place. This degradation has physical consequences, such as decreased ultimate strength and serviceability of engineering concrete structures. This work describes a laboratory test where GPR and SP geophysical techniques were used to detect and monitor the corrosion phenomena. The laboratory tests were performed on several reinforced concrete samples. The concrete samples were partially submerged in water with a 5% sodium chloride (NaCl) solution. An accelerated corrosion process was then produced by a direct current (DC) power supply along the rebar. The geophysical measurements were performed with a 2.0 GHz centre frequency GPR antenna along several parallel lines on the samples, with the radar line always perpendicular to the rebar axis. The GPR A-scan amplitude signals were processed with the Hilbert transform approach, observing the envelope variations due to the progress of the steel rebar corrosion in each concrete sample. Moreover, self-potential acquisitions were carried out on the surface of the concrete sample at the beginning and end of the experiments. Each technique provided specific information, and integrating the two datasets further improves the overall quality of the diagnosis. The collected data were used in an integrated detection approach to observe the corrosion evolution along the reinforcement bar. These first laboratory results highlight how GPR can contribute quantitatively to assessing the deterioration of reinforced concrete structures. Full article
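The envelope extraction applied to each GPR A-scan can be reproduced with an FFT-based analytic signal; this numpy-only sketch mirrors what scipy.signal.hilbert computes:

```python
import numpy as np

def envelope(trace):
    """Envelope (instantaneous amplitude) of a GPR A-scan via the
    Hilbert transform, computed here as the magnitude of the
    FFT-based analytic signal."""
    trace = np.asarray(trace, dtype=float)
    n = trace.size
    h = np.zeros(n)                 # spectral weights of the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0           # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(np.fft.fft(trace) * h)
    return np.abs(analytic)
```

Tracking the envelope peak at the rebar reflection (here, the 0.6 ns sample) through repeated surveys is how amplitude changes due to corrosion can be quantified.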
Figure 1
<p>(<b>a</b>) GPR acquisition on corroded rebar in concrete; (<b>b</b>) radargram acquired along the concrete sample, the hyperbola corresponds to corroded rebar.</p>
Figure 2
<p>Reinforced concrete laboratory sample; (<b>a</b>) design and dimensions of the concrete sample; (<b>b</b>) wooden formwork for the construction of the sample; (<b>c</b>) sample construction; (<b>d</b>) completed reinforced concrete sample.</p>
Figure 3
<p>Accelerated corrosion tests with the impressed current technique. (<b>a</b>) The reinforced concrete sample immersed in a solution of 5% NaCl and 95% water. (<b>b</b>) The Hewlett DC power supply used.</p>
Figure 4
<p>GPR acquisition system and the plexiglass plane where several lines were depicted in order to make all the radargrams in the same position. (<b>a</b>) Image from the top; image from behind the antenna where the first (<b>b</b>) and the last (<b>c</b>) profiles were carried out.</p>
Figure 5
<p>Acquisitions of Self-Potential (SP) on the reinforced concrete sample.</p>
Figure 6
<p>One of the radargrams was acquired at the center of the concrete sample. (<b>a</b>) the radargram highlights the amplitude of the em signal in correspondence to the steel rebar. (<b>b</b>) The envelope filter has been applied to the radargram.</p>
Figure 7
<p>Envelope A-scan acquired on the sample “A.” The trace shown is taken from radargram 29, which is exactly in the middle of the concrete sample. The track considered is above the steel rebar present at 0.6 ns. The trace remains constant during the 5 days of investigation, from T1_1 to T1_5.</p>
Figure 8
<p>Histogram of the envelope signal at the steel rebar for the radargrams acquired on the concrete sample. The values show the envelope at the steel rebar at a time depth of 0.6 ns.</p>
Figure 9
<p>Envelope A-scan acquired on sample “B.” The trace shown is taken from radargram 29, which is exactly in the middle of the concrete sample. The track considered is above the steel rebar present at 0.6 ns. The trace remains constant during the 5 days of investigation, from T2_1 to T2_5.</p>
Figure 10
<p>Envelope A-scan was acquired on sample “B” during the accelerated corrosion test. The trace shown is taken from radargram 29, which is exactly in the middle of the concrete sample. The track considered is above the steel rebar present at 0.6 ns.</p>
Figure 11
<p>Envelope A-scan was acquired on sample “B” during the accelerated corrosion test. The trace shown is taken from radargram n°29, 10 cm after the steel rebar, inside the concrete sample. From the graph, we can see that there were no changes in the envelope A-scan trace during the corrosion test.</p>
Figure 12
<p>Envelope A-scan was acquired on sample “B” during the accelerated corrosion test. The trace shown is taken from radargram n°29, 10 cm before the steel rebar. From the graph, we can see that there were no changes in the envelope A-scan trace during the corrosion test.</p>
Figure 13
<p>Envelope A-scan was acquired on sample “B” after an accelerated corrosion test. The trace shown is taken from radargram 29 on the steel rebar.</p>
Figure 14
<p>Self-potential maps obtained on the concrete sample. The map on the left was acquired the day before starting the accelerated corrosion tests on the sample. The map on the right was obtained after 10 days of accelerated corrosion testing.</p>
Figure 15
<p>Envelope values along the steel rebar for sample “A” (<b>top</b>) and for sample “B” (<b>bottom</b>). Each bar of the histogram represents the average value of the envelope signal applied on all the radargrams acquired before the induced corrosion phase. Only the odd radargrams were plotted from n°5 to n°55.</p>
Figure 16
<p>Envelope values along the steel rebar for sample “B” during the accelerated corrosion tests for ten days of monitoring. The investigation at T3_1 was carried out 24 h after switching on the power supply, with subsequent surveys every 24 h.</p>
Figure 17
<p>The histogram depicts the maximum envelope value reached for each trace at T3_10 normalized with respect to the average of the traces acquired during the first phase (T1).</p>
Figure 18
<p>Picture of the broken sample at the steel rebar, with the histogram of the normalized envelope values and the self-potential profile along the sample.</p>
17 pages, 8483 KiB  
Article
Optimizing Observation Plans for Identifying Faxon Fir (Abies fargesii var. Faxoniana) Using Monthly Unmanned Aerial Vehicle Imagery
by Weibo Shi, Xiaohan Liao, Jia Sun, Zhengjian Zhang, Dongliang Wang, Shaoqiang Wang, Wenqiu Qu, Hongbo He, Huping Ye, Huanyin Yue and Torbern Tagesson
Remote Sens. 2023, 15(8), 2205; https://doi.org/10.3390/rs15082205 - 21 Apr 2023
Cited by 1 | Viewed by 1780
Abstract
Faxon fir (Abies fargesii var. faxoniana), as a dominant tree species in the subalpine coniferous forest of Southwest China, has strict requirements regarding the temperature and humidity of the growing environment. Therefore, the dynamic and continuous monitoring of Faxon fir distribution is very important to protect this highly sensitive ecological environment. Here, we combined unmanned aerial vehicle (UAV) imagery and convolutional neural networks (CNNs) to identify Faxon fir and explored the identification capabilities of multispectral (five bands) and red-green-blue (RGB) imagery in different months. For a case study area in Wanglang Nature Reserve, Southwest China, we acquired monthly RGB and multispectral images on six occasions over the growing season. We found that the accuracy of RGB imagery varied considerably (the highest intersection over union (IoU), 83.72%, was in April and the lowest, 76.81%, was in June), while the accuracy of multispectral imagery was consistently high (IoU > 81%). In April and October, the accuracy of the RGB imagery was slightly higher than that of multispectral imagery, but for the other months, multispectral imagery was more accurate (IoU was nearly 6% higher than that of the RGB imagery for June). Adding vegetation indices (VIs) improved the accuracy of the RGB models during summer, but a gap to the multispectral model remained. Hence, our results indicate that the optimal time of year for identifying Faxon fir with UAV imagery is the peak of the growing season, using multispectral imagery. During the non-growing season, RGB imagery was no worse than, and sometimes slightly better than, multispectral imagery for Faxon fir identification. Our study can provide guidance for optimizing observation plans regarding data collection time and UAV loads and could further help enhance the utility of UAVs in forestry and ecological research. Full article
(This article belongs to the Special Issue Vegetation Biophysical Variables and Remote Sensing Applications)
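As an illustration of a vegetation index derivable from RGB-only imagery, the excess-green index (ExG) is sketched below; the abstract does not name the specific VIs used, so treat ExG as a representative example rather than the paper's choice.

```python
import numpy as np

def excess_green(rgb):
    """Excess-green index ExG = 2g - r - b computed on chromatic
    (sum-normalised) RGB coordinates; one common vegetation index
    computable from RGB-only UAV imagery."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # guard against all-black pixels
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b
```

Such an index is typically stacked with the raw RGB bands as an extra input channel to the CNN.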
Figure 1
<p>Overview of the study area: (<b>a</b>) satellite image of the study area; (<b>b</b>) June data collection in the study area; (<b>c</b>) a subarea of red–green–blue (RGB) imagery in June; and (<b>d</b>) a false color composite (green-red-near infrared) of the subarea of the multispectral imagery in June.</p>
Figure 2
<p>RGB and multispectral imagery collected by drones for different months: (<b>a</b>) April 21, (<b>b</b>) May 23, (<b>c</b>) June 17, (<b>d</b>) August 27, (<b>e</b>) September 27, and (<b>f</b>) October 27.</p>
Figure 3
<p>Architecture of DeepLabv3 for tree species segmentation.</p>
Figure 4
<p>UAV-based RGB orthomosaics for different months and the associated Faxon fir identification map: (<b>a</b>) April 21, (<b>b</b>) May 23, (<b>c</b>) June 17, (<b>d</b>) August 27, (<b>e</b>) September 27, (<b>f</b>) October 27, (<b>g</b>) manually labeled reference data in April; (<b>h</b>) identification map of RGB imagery on April model (highest accuracy), and (<b>i</b>) identification map of RGB imagery on June model (lowest accuracy).</p>
Figure 5
<p>UAV-based multispectral orthomosaics in different months and the associated Faxon fir identification map: (<b>a</b>) April 21, (<b>b</b>) May 23, (<b>c</b>) June 17, (<b>d</b>) August 27, (<b>e</b>) September 27, (<b>f</b>) October 27, (<b>g</b>) manually labeled reference data in September; (<b>h</b>) identification map of MS imagery on September model (highest accuracy), and (<b>i</b>) identification map of MS imagery on October model (lowest accuracy).</p>
Figure 6
<p>Variability in UAV orthomosaics for April, May, June, August, September, and October.</p>
Figure 7
<p>Comparison of the IoU for the RGB and MS models for different months.</p>
Figure 8
<p>Faxon fir identification for different months. (<b>a</b>) UAV-based RGB orthomosaics, (<b>b</b>) manually labeled reference data, (<b>c</b>) identification map based on the RGB model, (<b>d</b>) identification map based on the MS model. The yellow and red circles represent correct and incorrect identification results, respectively.</p>
Figure 9
<p>Faxon fir identification of trained CNN on the adjacent sample sites for different months. (<b>a</b>) UAV-based RGB orthomosaics, (<b>b</b>) UAV-based MS orthomosaics, (<b>c</b>) identification map based on the RGB model, (<b>d</b>) identification map based on the MS model. The yellow and red circles represent correct and incorrect identification results, respectively.</p>
Figure 10
<p>Comparison of the IoU of the RGB model with added VIs and MS imagery without added VIs for different months.</p>
31 pages, 14747 KiB  
Article
Mapping and Influencing the Mechanism of CO2 Emissions from Building Operations Integrated Multi-Source Remote Sensing Data
by You Zhao, Yuan Zhou, Chenchen Jiang and Jinnan Wu
Remote Sens. 2023, 15(8), 2204; https://doi.org/10.3390/rs15082204 - 21 Apr 2023
Viewed by 1997
Abstract
Urbanization has led to rapid growth in energy consumption and CO2 emissions in the building sector. Building operation emissions (BCEs) are a major part of emissions in the building life cycle. Existing studies have attempted to estimate fine-scale BCEs using remote sensing data. However, there is still a lack of research on estimating long-term BCEs by integrating multi-source remote sensing data and applications in different regions. We selected the Beijing–Tianjin–Hebei (BTH) urban agglomeration and the National Capital Region of Japan (NCRJ) as research areas for this study. We also built multiple linear regression (MLR) models between prefecture-level BCEs and multi-source remote sensing data. The prefecture-level BCEs were downscaled to grid scale at a 1 km2 resolution. The estimation results verify the method’s difference and accuracy at different development stages. The multi-scale BCEs showed a continuous growth trend in the BTH urban agglomeration and a significant downward trend in the NCRJ. The decrease in energy intensity and population density were the main factors contributing to the negative growth of BCEs, whereas GDP per capita and urban expansion significantly promoted it. Through our methods and analyses, we contribute to the study of estimating greenhouse gas emissions with remote sensing and exploring the environmental impact of urban growth. Full article
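The downscaling workflow (prefecture-level multiple linear regression, then allocation of the prefecture total to 1 km² grid cells) can be sketched as follows; the predictor choice and the simple proportional allocation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares for a multiple linear regression with an
    intercept, as used to relate prefecture-level BCEs to remote sensing
    predictors (which predictors to use is illustrative here)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef                     # [slopes..., intercept]

def downscale(prefecture_total, grid_predictor):
    """Distribute a prefecture total over grid cells in proportion to a
    gridded predictor (e.g. nighttime lights) -- a simple dasymetric sketch."""
    w = np.clip(np.asarray(grid_predictor, dtype=float), 0.0, None)
    return prefecture_total * w / w.sum()
```

Allocating proportionally to a predictor keeps the gridded field consistent with the prefecture-level total by construction.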
Show Figures

Figure 1: Location and administrative division of the Beijing–Tianjin–Hebei urban agglomeration and the National Capital Region of Japan.
Figure 2: The research framework for this study.
Figure 3: Fitting results between this study’s estimations and EDGAR at a multi-scale.
Figure 4: BCEs of the BTH urban agglomeration and NCRJ. (a) Total BCEs; (b) emissions from public building operations; (c) emissions from residential building operations.
Figure 5: BCEs of prefectures in the BTH urban agglomeration.
Figure 6: BCEs of prefectures in the NCRJ.
Figure 7: Lorentz curve of county-level emissions of the BTH and municipality-level emissions of the NCRJ.
Figure 8: Spatial–temporal evolution of and change in BCEs at the county level in the BTH.
Figure 9: Spatial–temporal evolution of and change in BCEs at the municipality level in the NCRJ.
Figure 10: Change in BCEs at a grid scale and the statistics by distance. (a,c) show changes in BCEs between 2000 and 2019 at grid scale; (b,d) count the total changes in BCEs within a certain distance in BTH and NCRJ, respectively.
Figure 11: Growth rate of BCEs at the prefecture and county (municipality) levels in the BTH and NCRJ. (a,c) show the growth rate of BCEs between 2000 and 2019 in BTH and NCRJ, respectively; (b,d) show the growth rate of all counties (municipalities), urban districts, and other counties (municipalities) in BTH and NCRJ, respectively.
Figure 12: Decomposition of influencing factors for BCE growth. (a) Decomposition results at the prefecture level in BTH; (b) decomposition results at the prefecture level in NCRJ.
Figure 13: Decomposition of influencing factors on the growth of BCEs at the county (municipality) level in the BTH and NCRJ. (a) Urban areas in the BTH; (b) other counties in the BTH; (c) urban areas in the NCRJ; (d) other municipalities in the NCRJ.
Figure A1: Linear regression between prefecture-level EVI and BCEs.
Figure A2: Comparison between estimation results and statistical results in BTH.
Figure A3: Comparison between estimation results and statistical results in the NCRJ.
Figure A4: Grid-scale verification in BTH and NCRJ. (a,c) show the fitting results of grid-scale estimation results and EDGAR with 10 km resolution; (b,d) show the fitting results of grid-scale estimation results and building volume with 1 km resolution in 2019.
Figure A5: Grid-scale BCEs in the BTH.
Figure A6: Grid-scale BCEs in the NCRJ.
Figure A7: Prefecture-level growth in energy consumption, tertiary industry, built-up land, and population in BTH (a) and NCRJ (b).
Figure A8: County-level growth in energy consumption, tertiary industry, built-up land, and population.
Figure A9: Municipality-level growth in energy consumption, the tertiary industry, built-up land, and population.
Figure A10: Comparison between prefecture-level estimation results and statistical results.
Figure A11: The proportion of emissions from central heating to prefecture-level BCEs in the BTH.
20 pages, 8340 KiB  
Article
Analysis of Spatial and Temporal Criteria for Altimeter Collocation of Significant Wave Height and Wind Speed Data in Deep Waters
by Ricardo M. Campos
Remote Sens. 2023, 15(8), 2203; https://doi.org/10.3390/rs15082203 - 21 Apr 2023
Cited by 2 | Viewed by 1789
Abstract
This paper investigates the spatial and temporal variability of significant wave height (Hs) and wind speed (U10) using altimeter data from the Australian Ocean Data Network (AODN) and buoy data from the National Data Buoy Center (NDBC). The main goal is to evaluate spatial and temporal criteria for collocating altimeter data to fixed-point positions and to provide practical guidance on altimeter collocation in deep waters. The results show that a temporal criterion of 30 min and a spatial criterion between 25 km and 50 km produce the best results for altimeter collocation, in close agreement with buoy data. Applying a 25 km criterion leads to slightly better error metrics but at the cost of fewer matchups, whereas using 50 km augments the resulting collocated dataset while keeping the differences to buoy measurements very low. Furthermore, the study demonstrates that using the single closest altimeter record to the buoy position leads to worse results compared to the collocation method based on temporal and spatial averaging. The final validation of altimeter data against buoy observations shows an RMSD of 0.21 m, scatter index of 0.09, and correlation coefficient of 0.98 for Hs, confirming the optimal choice of temporal and spatial criteria employed and the high quality of the calibrated AODN altimeter dataset. Full article
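The collocation procedure evaluated above — averaging all altimeter records that fall within a temporal window (e.g., 30 min) and a search radius (e.g., 25–50 km) of a fixed buoy position, rather than taking the single closest record — can be sketched as follows. The record layout and field names here are assumptions for illustration, not the AODN or NDBC formats:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collocate(records, buoy_lat, buoy_lon, buoy_time, max_km=25.0, max_s=1800.0):
    """Average all altimeter Hs records within the spatial and temporal
    criteria of one buoy observation; returns None if there are no matchups."""
    hits = [r["hs"] for r in records
            if abs(r["time"] - buoy_time) <= max_s
            and haversine_km(r["lat"], r["lon"], buoy_lat, buoy_lon) <= max_km]
    return sum(hits) / len(hits) if hits else None

# Toy altimeter pass: three records near the buoy, one too far, one too late.
records = [
    {"lat": 25.00, "lon": -80.00, "time": 0.0,    "hs": 2.0},
    {"lat": 25.10, "lon": -80.00, "time": 60.0,   "hs": 2.2},
    {"lat": 25.05, "lon": -80.05, "time": 120.0,  "hs": 2.1},
    {"lat": 30.00, "lon": -80.00, "time": 0.0,    "hs": 5.0},  # > 25 km away
    {"lat": 25.00, "lon": -80.00, "time": 4000.0, "hs": 3.0},  # > 30 min late
]
print(collocate(records, 25.0, -80.0, 0.0))  # mean of the three valid records
```

Tightening `max_km` trades fewer matchups for slightly lower noise, which is the 25 km vs 50 km trade-off discussed in the abstract.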
Show Figures

Figure 1: Position of the 11 NDBC metocean buoys selected. Different colors help to separate different clusters with different wave climates. The same color patterns will be used throughout this paper to identify the buoys’ locations.
Figure 2: Autocorrelation as a function of time lags for U10 (A) and Hs (B). The 11 buoys are presented, and the gray shading shows the average of all the buoy results.
Figure 3: Normalized root mean square difference as a function of the time lags for U10 (A) and Hs (B). The 11 buoys are presented, and the gray shading shows the average of all the buoy results.
Figure 4: Variance spectra of U10 (A) and Hs (B) for each buoy.
Figure 5: Examples of interesting events measured by the wave buoys. Four clusters and four different events are illustrated, related to hourly time-series of Hs (meters). (A) Hurricane Sandy measured in the Atlantic Ocean. (B) Hurricane Ida in the Gulf of Mexico. (C) Extra-tropical cyclone in the Pacific Ocean. (D) High-energy swell in the Tropical Pacific Ocean (Hawaii).
Figure 6: Scatter plot of time-displaced time-series compared to the original time-series for different time lags ranging from 1 h to 24 h. Data of U10 (m/s) for buoy 41010. The plots use hot colors to highlight areas of higher point density. Panels (A–F) show the increasing time lag.
Figure 7: Scatter plot of time-displaced time-series compared to the original time-series for different time lags ranging from 1 h to 24 h. Data of Hs (m) for buoy 41010. The plots use hot colors to highlight areas of higher point density. Panels (A–F) show the increasing time lag.
Figure 8: Expected differences as a function of distance (km) for U10 (m/s) and Hs (m). The left panels (A,D) present scatter plots with altimeter measurements (JASON3) compared to the closest altimeter record to the buoys’ positions for each altimeter track section. The plots use hot colors to highlight areas of higher point density. The center panels (B,E) show the median (black) of such differences (altimeter–altimeter) accompanied by the shaded area designed between the first and third quartiles, while the dashed red line represents the median difference between altimeter and buoy. The use of the median for this type of analysis was suggested by Quartly and Kurekin [72]. The right panels (C,F) display the arithmetic mean of differences (altimeter–altimeter), highlighted for the first 50 km.
Figure 9: Number of altimeter points in the satellite transects defined by different spatial criteria.
Figure 10: Scatter plots of altimeter data for different spatial criteria (radius, km) using the JASON3 dataset collocated at the 11 buoy positions (Figure 1). The plots were made using the smallest radius of 10 km as a reference. A total of 189 collocated values were used. (A–D) (top) refer to wind speed (U10) while (E–H) (bottom) refer to significant wave height (Hs).
Figure 11: Scatter index (Equation (4)) versus bias (or mean difference, Equation (2)) for spatial criteria ranging from 10 to 200 km. The estimates calculated with the spatial criterion of 10 km were used as a reference (x in Equations (2)–(5)). The satellite dataset considered is from JASON3. (A) Wind speed (U10). (B) Significant wave height (Hs).
Figure 12: QQ-plots (A,C) and scatter plots (B,D) of collocated altimeter data of Hsc from the AODN-calibrated dataset, against NDBC buoy data in the Pacific and Atlantic Oceans (Figure 1). The temporal collocation criterion r = 1800 s was applied, and two spatial collocation criteria of τ = 25 and 50 km are presented. The plots use hot colors to highlight areas of higher point density.
19 pages, 14731 KiB  
Article
Quantitative Assessment of Apple Mosaic Disease Severity Based on Hyperspectral Images and Chlorophyll Content
by Yanfu Liu, Yu Zhang, Danyao Jiang, Zijuan Zhang and Qingrui Chang
Remote Sens. 2023, 15(8), 2202; https://doi.org/10.3390/rs15082202 - 21 Apr 2023
Cited by 21 | Viewed by 2703
Abstract
The infection of Apple mosaic virus (ApMV) can severely damage the cellular structure of apple leaves, leading to a decrease in leaf chlorophyll content (LCC) and reduced fruit yield. In this study, we propose a novel method that utilizes hyperspectral imaging (HSI) technology to non-destructively monitor ApMV-infected apple leaves and predict LCC as a quantitative indicator of disease severity. LCC data were collected from 360 ApMV-infected leaves, and optimal wavelengths were selected using competitive adaptive reweighted sampling algorithms. A high-precision LCC inversion model was constructed based on Boosting and Stacking strategies, with a validation set Rv2 of 0.9644, outperforming traditional ensemble learning models. The model was used to invert the LCC distribution image and calculate the average and coefficient of variation (CV) of LCC for each leaf. Our findings indicate that the average and CV of LCC were highly correlated with disease severity, and their combination with sensitive wavelengths enabled the accurate identification of disease severity (validation set overall accuracy = 98.89%). Our approach considers the role of plant chemical composition and provides a comprehensive evaluation of disease severity at the leaf scale. Overall, our study presents an effective way to monitor and evaluate the health status of apple leaves, offering a quantifiable index of disease severity that can aid in disease prevention and control. Full article
(This article belongs to the Special Issue Application of Hyperspectral Imagery in Precision Agriculture)
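The two leaf-scale severity indicators used above — the average and the coefficient of variation (CV) of LCC across a leaf — are simple summary statistics over the pixels of the inverted LCC map. A minimal sketch with synthetic pixel values (the hyperspectral inversion model itself is not reproduced here):

```python
import statistics

def leaf_severity_features(lcc_pixels):
    """Average LCC and coefficient of variation (CV = std / mean)
    over the pixels of one leaf's inverted LCC distribution map."""
    mean = statistics.fmean(lcc_pixels)
    cv = statistics.pstdev(lcc_pixels) / mean
    return mean, cv

# A healthy leaf has high, uniform chlorophyll; an ApMV-infected leaf
# has lower average LCC and a patchier (mosaic) distribution, hence a
# higher CV — which is why the pair discriminates disease severity.
healthy = [42.0, 41.5, 43.0, 42.5, 41.8]
diseased = [30.0, 12.0, 35.0, 10.0, 28.0]

print(leaf_severity_features(healthy))
print(leaf_severity_features(diseased))
```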
Show Figures

Figure 1: (a) Study area. (b) Location of sampled trees.
Figure 2: Flow chart for quantitative assessment of apple mosaic disease severity based on hyperspectral images.
Figure 3: (a) Original spectral reflectance, and (b) Savitzky–Golay filtered spectral reflectance.
Figure 4: Flow chart of the Stacked–Boosting ensemble learning model.
Figure 5: (a) Spectral reflectance and (b) SI of leaves with different LCC.
Figure 6: CARS results. (a) Variation in RMSECV; (b) variation in the number of selected features; (c) variation in the trend of regression coefficients; (d) selected wavelengths.
Figure 7: (a) Leaf RGB image and (b) LCC distribution with average LCC.
Figure 8: Correlation of (a) average LCC and (b) CV of LCC with disease spot area.
Figure 9: Confusion matrix of the classification results.
Figure 10: Prediction results of (a) Random Forest; (b) XGBoost; (c) Stacked–Boosting.
Figure 11: Feature importance.
22 pages, 5175 KiB  
Article
Climate-Adaptive Potential Crops Selection in Vulnerable Agricultural Lands Adjacent to the Jamuna River Basin of Bangladesh Using Remote Sensing and a Fuzzy Expert System
by Kazi Faiz Alam and Tofael Ahamed
Remote Sens. 2023, 15(8), 2201; https://doi.org/10.3390/rs15082201 - 21 Apr 2023
Cited by 3 | Viewed by 1982
Abstract
Agricultural crop production has been affected worldwide by weather variability causing floods or droughts. Among climate change impacts, floods are the most devastating in deltaic regions because they inundate crops within a short period of time. Therefore, the aim of this study was to propose climate-adaptive crops that are suitable for flood-inundation-prone areas of Bangladesh. The research area included two districts adjacent to the Jamuna River in Bangladesh, covering an area of 5489 km2, and these districts were classified as highly to moderately vulnerable due to inundation by flood water during the seasonal monsoon time. In this study, first, an inundation vulnerability map was prepared from the multicriteria analysis by applying a fuzzy expert system in the GIS environment using satellite remote sensing datasets. Among the analyzed area, 42.3% was found to be highly to moderately vulnerable, 42.1% was marginally vulnerable and 15.6% was not vulnerable to inundation. Second, the most vulnerable areas for flooding were identified from the previous major flood events and cropping practices based on the crop calendar. Based on the crop adaptation suitability analysis, two cash crops, sugarcane and jute, were recommended for cultivation during major flooding durations. Finally, a land suitability analysis was conducted through multicriteria analysis applying a fuzzy expert system. According to our analysis, 28.6% of the land was highly suitable, 27.9% was moderately suitable, 19.7% was marginally suitable and 23.6% of the land was not suitable for sugarcane and jute cultivation in the vulnerable areas. The inundation vulnerability and suitability analysis proposed two crops, sugarcane and jute, as potential candidates for climate-adaptive selection in risk-prone areas. Full article
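The "fuzzy gamma overlay" named in the figure captions combines the reclassified membership layers into one index. The abstract does not give the exact formulation, so the sketch below uses the common fuzzy-gamma operator (as in GIS overlay tools), which blends the fuzzy product and fuzzy sum; the gamma value and class thresholds are illustrative assumptions, not the paper's:

```python
import math

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma overlay of per-criterion memberships in [0, 1]:
    (fuzzy sum)^gamma * (fuzzy product)^(1 - gamma), where
    fuzzy sum = 1 - prod(1 - mu_i) and fuzzy product = prod(mu_i)."""
    product = math.prod(memberships)
    fsum = 1.0 - math.prod(1.0 - m for m in memberships)
    return fsum ** gamma * product ** (1.0 - gamma)

# One grid cell's memberships for NDWI, rainfall, elevation, and
# distance from the river:
cell = [0.8, 0.6, 0.9, 0.7]
score = fuzzy_gamma(cell, gamma=0.9)

# Classify the combined index into vulnerability classes (thresholds
# here are hypothetical, only to show the reclassification step).
label = ("V1" if score >= 0.75 else
         "V2" if score >= 0.5 else
         "V3" if score >= 0.25 else "N")
print(round(score, 3), label)
```

With gamma near 1 the operator behaves like the increasive fuzzy sum, with gamma near 0 like the decreasive fuzzy product, so gamma tunes how strongly one strong criterion can compensate for weak ones.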
Show Figures

Figure 1: Geographical extent of the study area in Bangladesh. (a) Location of the Bengal Delta on the world map. (b) River distribution flowing from north to south toward the Bay of Bengal.
Figure 2: Stepwise workflow for inundation vulnerability assessment in the riverine area.
Figure 3: Stepwise workflow of land suitability assessment for jute and sugarcane.
Figure 4: Fuzzy membership function for inundation vulnerability assessment: (a) NDWI; (b) rainfall; (c) elevation and (d) distance from the river.
Figure 5: Crop calendar during major flooding time in Bangladesh.
Figure 6: Fuzzy membership function for land suitability assessment to cultivate jute and sugarcane: (a) soil texture; (b) soil pH; (c) slope and (d) LULC.
Figure 7: Land suitability class selection for jute and sugarcane cultivation using fuzzy membership function and land index.
Figure 8: Reclassified vulnerability map of different indices: (a) NDWI; (b) rainfall; (c) elevation; (d) distance from river and (e) fuzzy gamma overlay, where V1, V2, V3 and N refer to high, moderate, marginal and nonvulnerable areas, respectively.
Figure 9: Reclassified suitability map of different indices for sugarcane and jute: (a,b) slope; (c) elevation; (d,e) soil pH; (f,g) soil texture; (h,i) rainfall; (j,k) LULC; (l) flooding vulnerability; (m,n) distance from river and (o) fuzzy gamma overlay, where S1, S2, S3 and N refer to highly, moderately, marginally and not suitable areas, respectively.
Figure 10: Average production of jute (bale/acre) and sugarcane (MT/acre) in ten administrative districts in Bangladesh during 2021.
Figure 11: Validation of the fuzzy-based land suitability score referring to the average sugarcane and jute production from different administrative districts of Bangladesh. (a) Linear regression for sugarcane; (b) polynomial regression for sugarcane; (c) linear regression for jute and (d) polynomial regression for jute.
23 pages, 35922 KiB  
Article
MUREN: MUltistage Recursive Enhanced Network for Coal-Fired Power Plant Detection
by Shuai Yuan, Juepeng Zheng, Lixian Zhang, Runmin Dong, Ray C. C. Cheung and Haohuan Fu
Remote Sens. 2023, 15(8), 2200; https://doi.org/10.3390/rs15082200 - 21 Apr 2023
Cited by 2 | Viewed by 1329
Abstract
The accurate detection of coal-fired power plants (CFPPs) is meaningful for environmental protection, while challenging. The CFPP is a complex combination of multiple components with varying layouts, unlike clearly defined single objects, such as vehicles. CFPPs are typically located in industrial districts with similar backgrounds, further complicating the detection task. To address this issue, we propose a MUltistage Recursive Enhanced Detection Network (MUREN) for accurate and efficient CFPP detection. The effectiveness of MUREN lies in the following: First, we design a symmetrically enhanced module, including a spatial-enhanced subnetwork (SEN) and a channel-enhanced subnetwork (CEN). SEN learns the spatial relationships to obtain spatial context information. CEN provides adaptive channel recalibration, restraining noise disturbance and highlighting CFPP features. Second, we use a recursive construction set on top of feature pyramid networks to receive features more than once, strengthening feature learning for relatively small CFPPs. We conduct comparative and ablation experiments in two datasets and apply MUREN to the Pearl River Delta region in Guangdong province for CFPP detection. The comparative experiment results show that MUREN improves the mAP by 5.98% compared with the baseline method and outperforms by 4.57–21.38% the existing cutting-edge detection methods, which indicates the promising potential of MUREN in large-scale CFPP detection scenarios. Full article
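The abstract describes CEN as providing "adaptive channel recalibration, restraining noise disturbance and highlighting CFPP features". Its exact architecture is in the paper, not the abstract; the sketch below is only a generic squeeze-and-excitation-style channel recalibration (pool each channel to a descriptor, gate it through a small bottleneck, rescale the channels), written in NumPy with random stand-in weights, and should not be read as the authors' CEN:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(feat, w1, w2):
    """Squeeze-and-excitation-style channel recalibration.
    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r)
    form the bottleneck gating function."""
    z = feat.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # excitation: FC-ReLU-FC-sigmoid
    return feat * s[:, None, None]                # rescale each channel by its gate

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))                 # toy feature map, C=8
w1 = rng.normal(size=(2, 8)) * 0.5                # reduction ratio r=4
w2 = rng.normal(size=(8, 2)) * 0.5
out = channel_recalibrate(feat, w1, w2)
print(out.shape)
```

Because each gate lies in (0, 1), channels carrying background noise can be attenuated while informative channels pass nearly unchanged, which is the intuition behind "restraining noise disturbance".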
Show Figures

Figure 1: The differences between CFPPs and other objects, i.e., cars and ships, in optical HRSIs. (a,b) are the optical images of CFPPs, while (c,d) are the optical images of ships and cars. Compared with CFPPs, cars and ships have clearer boundaries and simpler backgrounds. Additionally, their regular shapes make them easier to detect.
Figure 2: The mAPs of the previous studies on CFPP detection on the BUAA-FFPP60 datasets [7,8,10,63].
Figure 3: The architecture of our proposed MUREN: the green parts denote our main contributions, including a channel-enhanced subnetwork, a spatial-enhanced subnetwork, and a recursive connection with an improved ASPP module embedded in the Feature Pyramid Network.
Figure 4: The detailed architectures of CEN and SEN. Both are embedded into the residual block of ResNet50.
Figure 5: The architecture of the improved ASPP module. Four outputs generated by four branches are concatenated along the channel dimension.
Figure 6: Examples of our datasets, including working chimney, nonworking chimney, working condensing tower, and nonworking condensing tower.
Figure 7: (a) is the map of China, (b) is the map of Guangdong province and the location of the Guangzhou–Foshan–Zhaoqing region, and (c) is our study area.
Figure 8: The visualization of large-scale detection in the Guangzhou–Foshan–Zhaoqing region. The blue points denote incomplete CFPP locations, and the red points denote complete CFPP locations. There are nine complete CFPPs and forty-three incomplete CFPPs in this region.
Figure 9: The detection results of eight methods. The box denotes the location, and the text denotes the category and working status. The number denotes the confidence value.
Figure 10: The detection results of eight methods. The box denotes the location, and the text denotes the category and working status. The number denotes the confidence value.
Figure 11: The visualization of feature maps from ResNet50, ResNet50+CEN, ResNet50+SEN, and ResNet50+CEN+SEN. We visualized the feature maps of these four satellite images. Vanilla ResNet50 is seriously affected by surrounding textures, and the features of CFPPs are vague and blurred. By adding CEN into the backbone, most noise is eliminated and the features of CFPPs are emphasized, but some useful features are also removed. SEN finds more spatial relationships between components in CFPPs. After employing SEN, the features of CFPPs become more comprehensive, with clearer boundaries and more accurate locations.
Figure 12: The visualization of false detections and misdetections in the MUREN results. The red rectangles denote false detections, and the green rectangles denote misdetections.
Figure A1: The training loss comparison of the eight methods. Our MUREN achieves the best loss convergence of about 0.15.
Figure A2: The time required by the eight methods in training and testing. Though MUREN has the largest total time spending, the testing time of each method is almost the same, which is the real time consumption in large-scale applications.
Figure A3: The detection results of the eight methods. The box denotes the location, and the text denotes the category and working status. The number denotes the confidence value.
Figure A4: The detection results of the eight methods. The box denotes the location, and the text denotes the category and working status. The number denotes the confidence value.
10 pages, 2827 KiB  
Technical Note
Blind Spots Analysis of Magnetic Tensor Localization Method
by Lei Xu, Xianyuan Huang, Zhonghua Dai, Fuli Yuan, Xu Wang and Jinyu Fan
Remote Sens. 2023, 15(8), 2199; https://doi.org/10.3390/rs15082199 - 21 Apr 2023
Cited by 1 | Viewed by 1247
Abstract
In order to compare and analyze the positioning efficiency of the magnetic tensor location method, this paper studies the blind spots of the magnetic tensor location method. By constructing two magnetic tensor localization models, the localization principles of the single-point magnetic tensor localization method (STLM) and the two-point magnetic tensor linear localization method (TTLM) are analyzed. Furthermore, the eigenvalue analysis method is studied to analyze the blind spots of STLM, and the spherical analysis method is proposed to analyze the blind spots of TTLM. The results show that when the direction of any measuring point is perpendicular to the direction of the target magnetic moment, blind spots of STLM appear. However, TTLM still has good positioning performance in the blind spot. Full article
(This article belongs to the Special Issue Satellite Missions for Magnetic Field Analysis)
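Single-point magnetic tensor localization of the kind referred to above typically exploits the fact that a dipole field is homogeneous of degree −3, so the source-to-sensor vector satisfies r = −3 G⁻¹B, where B is the field and G the magnetic gradient tensor at the sensor (the Nara formula). Blind spots arise where G becomes singular, which happens exactly when the sensor direction is perpendicular to the target's magnetic moment. A numerical sketch with the gradient tensor obtained by finite differences (the geometry and units are illustrative, not the paper's simulation setup):

```python
import numpy as np

MU = 1.0  # mu0 / (4*pi), absorbed into the moment for simplicity

def dipole_field(r, m):
    """Field of a dipole with moment m, source at the origin, at offset r."""
    n = np.linalg.norm(r)
    return MU * (3.0 * np.dot(m, r) * r / n**5 - m / n**3)

def gradient_tensor(r, m, h=1e-5):
    """Magnetic gradient tensor G_ij = dB_i/dx_j by central differences."""
    G = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        G[:, j] = (dipole_field(r + dr, m) - dipole_field(r - dr, m)) / (2 * h)
    return G

m = np.array([0.0, 0.0, 1.0])       # target moment along z
sensor = np.array([2.0, 1.0, 3.0])  # sensor position, source at origin

B = dipole_field(sensor, m)
G = gradient_tensor(sensor, m)
recovered = -3.0 * np.linalg.solve(G, B)   # STLM: r = -3 * G^-1 * B
print(recovered)                            # ~ sensor position w.r.t. source

# Blind spot: with the sensor direction perpendicular to the moment,
# G is singular and the inversion above is undefined.
blind = gradient_tensor(np.array([2.0, 1.0, 0.0]), m)
print(abs(np.linalg.det(blind)))            # ~ 0
```

The vanishing determinant at the perpendicular geometry is the single-point blind spot the abstract refers to; the two-point method (TTLM) adds an independent measurement that keeps the system solvable there.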
Show Figures

Figure 1: Position model using magnetic gradient tensor of two points.
Figure 2: Relationship between positioning blind surface and magnetic moment.
Figure 3: Spherical analysis method.
Figure 4: Simulation model of positioning blind point.
Figure 5: Positioning error distribution using a single magnetic tensor. (a) Error in the x-axis direction, (b) error in the y-axis direction, and (c) error in the z-axis direction.
Figure 6: Blind point of a single position using spherical analysis. (a) Error in the x-axis direction, (b) error in the y-axis direction, and (c) error in the z-axis direction.
Figure 7: Blind point of two-point position using spherical analysis. (a) Error in the x-axis direction, (b) error in the y-axis direction, and (c) error in the z-axis direction.
19 pages, 16851 KiB  
Article
Dynamic Effects of Atmosphere over and around the Tibetan Plateau on the Sustained Drought in Southwest China from 2009 to 2014
by Yiwei Ye, Rongxiang Tian and Zhan Jin
Remote Sens. 2023, 15(8), 2198; https://doi.org/10.3390/rs15082198 - 21 Apr 2023
Cited by 1 | Viewed by 1586
Abstract
The two westerly branches have a significant impact on the climate of the area on the eastern side of the Tibetan Plateau when flowing around it. A continuous drought event in Southwest China from the winter of 2009 to the spring of 2014 caused huge economic losses. This research focuses on the dynamic field anomalies over the Tibetan Plateau during this event using statistical analysis, attempts to decipher their mechanism of influence on drought in Southwest China, and provides a regression model. We established that the anticyclone and downdraft over the Tibetan Plateau were weaker than usual during the drought, which would reduce the southward cold airflow on the northeast of the Tibetan Plateau and strengthen the west wind from dry central Asia on the south of the plateau. As a result, a larger area of the southwest region in China was controlled by the warm and dry air mass, which acted against precipitation. The results will be of reference value to drought forecasting for Southwest China, and also encourage further research on how the Tibetan Plateau influences the climate on its eastern side. Full article
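EOF analysis of an anomaly field, as applied above to the divergence, vorticity, and vertical-velocity fields, amounts to a singular value decomposition of the space–time anomaly matrix: the right singular vectors are the spatial modes (eigenvectors), the scaled left singular vectors are the time coefficients, and the squared singular values give the explained variance. A minimal sketch on synthetic data (the North test used to confirm mode separation is not included):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic anomaly field: n_time maps over n_space grid points, built
# from two known spatial patterns plus small noise.
n_time, n_space = 120, 50
pat1 = np.sin(np.linspace(0, np.pi, n_space))
pat2 = np.cos(np.linspace(0, 2 * np.pi, n_space))
pcs = rng.normal(size=(n_time, 2)) * [3.0, 1.5]
field = pcs @ np.vstack([pat1, pat2]) + 0.1 * rng.normal(size=(n_time, n_space))

# EOF: remove the time mean, then SVD of the (time x space) anomaly matrix.
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                          # rows = spatial patterns (eigenvectors)
tcoef = u * s                      # columns = time coefficients per mode
explained = s**2 / np.sum(s**2)    # explained variance fraction per mode

print(explained[:2])               # the two planted modes dominate
```

In the paper's setting, `field` would be the 500 hPa divergence (or vorticity, or vertical velocity) anomalies over the plateau from November to the following April, and `explained` corresponds to the 18.06%, 12.25%, etc., figures quoted in the captions.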
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Percentage of Precipitation Anomaly in southwest China during spring, autumn and winter periods from 2009 to 2014 (comparing to the average of the same seasons in 1960–2019). The rating in the color bar indicates different drought levels given by the Chinese official document GBT20481-2017 [<a href="#B24-remotesensing-15-02198" class="html-bibr">24</a>]: mild drought from −50% to −25%, moderate drought from −70% to −50%, severe drought from −80% to −70% and extreme drought under −80%.</p>
Full article ">Figure 2
<p>Flowchart including data, methods and results.</p>
Full article ">Figure 3
<p>Terrain of the Tibetan Plateau (the color bar referring to the altitude measured in meters) and Meteorological National Stations (dots) in Southwest China.</p>
Full article ">Figure 4
<p>EOF of the relative divergence anomaly field and mean field of the relative divergence at 500 hPa over the Tibetan Plateau from November to next April 2009–2014 (10<sup>−4</sup> s<sup>−1</sup>). (<b>a</b>) Eigenvectors of the first mode (with explained variance of 18.06%); (<b>b</b>) Time coefficients of the first mode; (<b>c</b>) Eigenvectors of the second mode (with explained variance of 12.25%); (<b>d</b>) Time coefficients of the second mode; (<b>e</b>) Mean field. The first two modes have passed the North Test [<a href="#B32-remotesensing-15-02198" class="html-bibr">32</a>]. The white ovals highlight the opposite part in the EOF modes and the mean field.</p>
Full article ">Figure 5
<p>EOF of the relative vorticity anomaly field and mean field of the relative vorticity at 500 hPa over the Tibetan Plateau from November to next April 2009–2014 (10<sup>−4</sup> s<sup>−1</sup>). (<b>a</b>) Eigenvectors of the first mode (with explained variance of 19.68%); (<b>b</b>) Time coefficients of the first mode; (<b>c</b>) Mean field.</p>
Full article ">Figure 6
<p>EOF of the vertical velocity anomaly field and mean field of the vertical velocity at 500 hPa over the Tibetan Plateau from November to next April 2009–2014 (Pa/s). (<b>a</b>) Eigenvectors of the first mode (with explained variance of 22.68%); (<b>b</b>) Time coefficients of the first mode; (<b>c</b>) Eigenvectors of the second mode (with explained variance of 17.58%), (<b>d</b>) Time coefficients of the second mode; (<b>e</b>) Mean field. The first two modes have passed the North Test [<a href="#B32-remotesensing-15-02198" class="html-bibr">32</a>].</p>
Full article ">Figure 7
<p>Average wind field at 700 hPa over east Asia (m/s). (<b>a</b>) Mean field from November to the following April during 1960–2019; (<b>b</b>) Mean field from November to the following April during 2009–2014. (<b>c</b>,<b>d</b>) enlarge the regions in the red rectangles in (<b>a</b>,<b>b</b>). The numbered arrows near (<b>a</b>,<b>b</b>) indicate the scale of the vectors in the graphs. The red solid lines in (<b>c</b>,<b>d</b>) mark the boundaries between southerly and northerly winds, and the red dotted lines are reference lines at 106°E.</p>
Full article ">Figure 8
<p>Average wind field and wind speed anomaly at 850 hPa over east Asia (m/s). (<b>a</b>) Mean field from November to the following April during 1960–2019; (<b>b</b>) Mean field from November to the following April during 2009–2014; (<b>e</b>) Mean wind field (vectors) and wind speed anomaly (shaded) at 850 hPa over east Asia from 2009 to 2014. (<b>c</b>,<b>d</b>) enlarge the regions in the red rectangles in (<b>a</b>,<b>b</b>). The numbered arrows near (<b>a</b>,<b>b</b>,<b>e</b>) indicate the scale of the vectors in the graphs. The red solid lines mark the boundaries between westerly and easterly winds.</p>
Full article ">Figure 9
<p>Dew point depression anomaly over east Asia from 2009 to 2014 (K). (<b>a</b>) 700 hPa, (<b>b</b>) 850 hPa.</p>
Full article ">Figure 10
<p>The forecast results of the regression equations, with relative divergence and relative vorticity at 500 hPa as predictor variables. The blue line represents the observed percentage of precipitation anomaly, and the yellow line represents the prediction.</p>
Full article ">Figure 11
<p>Temperature anomalies and percentage of precipitation anomalies in southwest China in winter of 2009–2014. (<b>a</b>) Temperature anomalies (°C), (<b>b</b>) percentage of precipitation anomalies.</p>
Full article ">Figure 12
<p>Conceptual model of the dynamic effects influencing drought in southwest China. The blue arrows represent cold airflows, while the red ones represent warm airflows. Note: The South China Sea has not been marked due to layout reasons.</p>
Full article ">
16 pages, 20442 KiB  
Article
Geomatic Data Fusion for 3D Tree Modeling: The Case Study of Monumental Chestnut Trees
by Mattia Balestra, Enrico Tonelli, Alessandro Vitali, Carlo Urbinati, Emanuele Frontoni and Roberto Pierdicca
Remote Sens. 2023, 15(8), 2197; https://doi.org/10.3390/rs15082197 - 21 Apr 2023
Cited by 12 | Viewed by 3003
Abstract
In recent years, advancements in remote and proximal sensing technology have driven innovation in environmental and land surveys. The integration of various geomatics devices, such as single-lens reflex (SLR) cameras, UAVs equipped with RGB cameras, and mobile laser scanners (MLS), allows detailed and precise surveys of monumental trees. With this data fusion method, we reconstructed three monumental 3D tree models, allowing the computation of tree metric variables such as diameter at breast height (DBH), total height (TH), crown basal area (CBA), crown volume (CV) and wood volume (WV), while also providing information on the tree shape and its overall condition. We processed the point clouds in software such as CloudCompare, 3D Forest, R and MATLAB, whereas the photogrammetric processing was conducted with Agisoft Metashape. Three-dimensional tree models enhance accessibility to the data and allow for a wide range of potential applications, including the development of a tree information model (TIM), providing detailed data for monitoring tree health, growth, biomass and carbon sequestration. The encouraging results provide a basis for extending the virtualization of these monumental trees to a larger scale for conservation and monitoring. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
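As a rough illustration of how crown metrics such as CV and CBA are derived from a point cloud: the paper reconstructs the crown with an alpha-shape mesh (α = 0.25), which can follow concavities; the sketch below uses a convex hull on synthetic points as a simpler stand-in that upper-bounds the alpha-shape volume. The point cloud and dimensions are fabricated for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Fake crown point cloud: 5000 points in an 8 m cube (coordinates in meters).
rng = np.random.default_rng(3)
crown_points = rng.uniform(-4.0, 4.0, size=(5000, 3))

# Crown volume (CV) proxy: volume of the 3D convex hull.
crown_volume = ConvexHull(crown_points).volume        # m^3

# Crown basal area (CBA) proxy: area of the 2D hull of the XY projection
# (for a 2D hull, scipy's .volume attribute is the enclosed area).
crown_area = ConvexHull(crown_points[:, :2]).volume   # m^2
```

An alpha-shape library (or the 3D Forest/AdQSM tools named in the captions) would replace the hull step for concave crowns.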
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Study area location and a view of the three monumental chestnut trees in the winter season.</p>
Full article ">Figure 2
<p>Complete workflow of the experiment: data collection with passive (images) and active (LiDAR) sensors, and georeferencing with GNSS or Total Station coordinates. The data processing with dedicated software produced the 3D tree models that allowed us to estimate the main structural parameters (DBH = diameter at breast height, WV = wood volume, H = total tree height, CV = crown volume).</p>
Full article ">Figure 3
<p>(<b>a</b>) The MLS survey of a tree (MT1), its surroundings, and a target in winter. (<b>b</b>) Top view of the Kaarta Stencil-2 point cloud around one tree (black line) with the loop-closed trajectory. The line color changes with the trajectory time, from red at the beginning to white at the end. The blue and green colors represent the points’ intensity obtained during the MLS point cloud acquisition.</p>
Full article ">Figure 4
<p>Detailed view of the MT2 dense clouds obtained through the AMP software. (<b>a</b>) SLR dense cloud representing the monumental trunk with pronounced buttresses and visible defects and (<b>b</b>) the UAV dense cloud representing the canopy top which was not detectable with the MLS in the summer survey.</p>
Full article ">Figure 5
<p>A view of the 3D tree models of the three monumental trees (MT1, MT2 and MT3) obtained by fusing the SLR dense cloud, MLS point cloud and UAV dense cloud.</p>
Full article ">Figure 6
<p>Mesh of the DBH sections from the SLR dense clouds in the three monumental trees (MT1, MT2 and MT3).</p>
Full article ">Figure 7
<p>The 3D Forest output of the three 3D tree models (MT1, MT2 and MT3) with the TH (m) and the CBA (m<sup>2</sup>).</p>
Full article ">Figure 8
<p>Mesh representation of the three MLS tree skeletons in the winter surveys (MT1, MT2 and MT3) by AdQSM (visualization in CloudCompare).</p>
Full article ">Figure 9
<p>The three crown meshes reconstructed with the alpha shape (α: 0.25) algorithm.</p>
Full article ">
24 pages, 8835 KiB  
Article
Monitoring and Forecasting Green Tide in the Yellow Sea Using Satellite Imagery
by Shuwen Xu, Tan Yu, Jinmeng Xu, Xishan Pan, Weizeng Shao, Juncheng Zuo and Yang Yu
Remote Sens. 2023, 15(8), 2196; https://doi.org/10.3390/rs15082196 - 21 Apr 2023
Cited by 2 | Viewed by 2099
Abstract
This paper proposes a semi-automatic green tide extraction method based on the NDVI to extract Yellow Sea green tides from 2008 to 2022 using remote sensing (RS) images from multiple satellites: GF-1, Landsat 5 TM, Landsat 8 OLI_TIRS, HJ-1A/B, HY-1C, and MODIS. The results of the accuracy assessment, based on three indicators (Precision, Recall, and F1-score), showed that our extraction method can be applied to the images of most satellites and to different environments. We traced the source of the Yellow Sea green tide to the Jiangsu Subei shoal and the southeastern Yellow Sea and advanced the earliest tracing time to early April. The Gompertz and Logistic growth curve models were selected to predict and monitor the extent and duration of the Yellow Sea green tide, and the uncertainty of the predicted growth curve was estimated. For 2022, the start and dissipation dates were predicted to be June 1 and August 15, respectively, and the accumulative cover area was expected to be approximately 1190.90–1191.21 km2. Full article
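The three accuracy indicators named in the abstract follow directly from per-pixel true positive (TP), false positive (FP), and false negative (FN) counts. A minimal sketch (the paper's exact pixel-counting protocol against reference maps is not reproduced here):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Compute Precision, Recall, and F1-score from pixel counts.
    Returns 0.0 for a metric whose denominator would be zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, 80 correctly extracted pixels with 20 false alarms and 20 misses yields Precision = Recall = F1 = 0.8.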
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Study area and location of the extraction regions in different environments for accuracy assessment ((<b>a</b>) Partition diagram, where the red line is the −35 m isobath and the area within the green line represents Zone C, the muddy water area. (<b>b</b>) Accuracy assessment areas A1–A3, B1–B3, and C1–C3 corresponding to the three partitions A, B, and C).</p>
Full article ">Figure 2
<p>Terrestrial masking process.</p>
Full article ">Figure 3
<p><span class="html-italic">NDVI</span> frequency histogram and second-order Gaussian fit curve (data from Landsat 8; date: 23 June 2021; Worldwide Reference System: path 119 and row 35).</p>
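The thresholding idea behind this figure, fitting a two-term ("second-order") Gaussian to the NDVI frequency histogram so that the seawater and green-tide modes are separated, can be sketched as follows on synthetic data. The midpoint-between-peaks threshold rule is an assumed simplification of the paper's segmentation step.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic NDVI sample: a large seawater mode and a small green-tide mode.
rng = np.random.default_rng(1)
ndvi = np.concatenate([rng.normal(-0.05, 0.03, 20000),   # seawater peak
                       rng.normal(0.35, 0.05, 2000)])    # green-tide peak

counts, edges = np.histogram(ndvi, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss2(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian terms (second-order Gaussian model)."""
    return (a1 * np.exp(-((x - m1) / s1) ** 2)
            + a2 * np.exp(-((x - m2) / s2) ** 2))

p0 = [counts.max(), -0.05, 0.05, counts.max() / 10, 0.35, 0.05]
popt, _ = curve_fit(gauss2, centers, counts, p0=p0, maxfev=10000)
m1, m2 = sorted([popt[1], popt[4]])
threshold = 0.5 * (m1 + m2)   # one simple choice: midpoint between the peaks
```

Pixels with NDVI above `threshold` would then be binarized as green tide, as in the next figure.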
Full article ">Figure 4
<p>Threshold segmentation separating seawater and green tide in the binarized images: (<b>a</b>) the binarized green tide information and (<b>b</b>) the green tide extraction result, where the red outer boundary line indicates the distribution of the green tide. The data were extracted from Landsat 8 (date: 23 June 2021; Worldwide Reference System: path 119 and row 35).</p>
Full article ">Figure 5
<p>Classification results for TP, FP, FN, and TN: (<b>a</b>) classification results for TP and FP, where the green solid circles represent TP and the red solid circles represent FP; (<b>b</b>) classification results for TP and FN, where the green solid circles represent TP and the orange pentagrams represent FN.</p>
Full article ">Figure 6
<p>Schematic diagram of percentile points.</p>
Full article ">Figure 7
<p>Spectral information of one <span class="html-italic">U. prolifera</span> pixel from GF-1 image (6 April 2021).</p>
Full article ">Figure 8
<p>Schematic diagram of the location of the satellite extraction area. (<b>a</b>) is the selected approximate area; (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) are false-color images of GF-1, Landsat, HJ-1, HY-1C, and MODIS, respectively (band combination: GF-1: band 3-2-1; Landsat: band 4-5-3; HJ-1 and HY-1C: 3-4-2; MODIS:1-2-1); and (<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>,<b>k</b>) are the images’ respective green tide extraction effects (the red point represents FP, the white point represents FN, and the green point represents TP).</p>
Full article ">Figure 9
<p>Accuracy assessment of extraction results with different orders of magnitude of pixel numbers (RS data: Landsat 8, 23 June 2021, Worldwide Reference System: path 119 and row 35). (<b>a</b>) is a schematic diagram of extraction regions of different orders of magnitude pixel numbers and (<b>b</b>) is the corresponding accuracy of different orders of magnitude of pixel numbers.</p>
Full article ">Figure 10
<p>Growth curve fitting effect with 95% confidence interval.</p>
Full article ">Figure 11
<p>First derivative of the growth curve and its Gaussian fitting curve.</p>
Full article ">Figure 12
<p>Curvilinear relationships between the start and dissipation percentiles and the kurtosis coefficients: (<b>a</b>) the relationship between the start percentiles and the kurtosis coefficients and (<b>b</b>) the relationship between the dissipation percentiles and the kurtosis coefficients.</p>
Full article ">Figure 13
<p>Forecasted start and dissipation times of the Yellow Sea green tide for 2022: (<b>a</b>) the Logistic growth curve fit of the ACA of the Yellow Sea green tide (2022) and (<b>b</b>) the forecast results for the start and dissipation times of the Yellow Sea green tide for 2022.</p>
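The Logistic growth curve fit behind this forecast can be sketched on synthetic accumulative cover area (ACA) data. The 5%/95% percentile definition of the start and dissipation dates below is an assumption for illustration, not the paper's calibrated percentiles; a Gompertz curve can be fitted the same way by swapping the model function.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "observed" ACA series (km^2) over 100 days with noise.
t = np.arange(0, 100, dtype=float)
aca = logistic(t, K=1190.0, r=0.15, t0=45.0) \
      + np.random.default_rng(2).normal(0, 5.0, t.size)

(K, r, t0), _ = curve_fit(logistic, t, aca, p0=[1000.0, 0.1, 50.0])

# Start = day the fitted curve reaches 5% of K; dissipation = 95% of K.
# Solving logistic(t) = 0.05*K and 0.95*K gives t0 -/+ ln(19)/r.
start = t0 - np.log(19) / r
end = t0 + np.log(19) / r
```

The fitted `K` plays the role of the predicted accumulative cover area, and `start`/`end` the bloom's start and dissipation dates relative to the reference day.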
Full article ">Figure 14
<p>Results of simulation of the bloom duration and accumulative and maximum cover area of the Yellow Sea green tide from 2008 to 2022.</p>
Full article ">Figure 15
<p>Source site statistics 2008–2022. (<b>a</b>) shows the source sites of green tide blooms over the years, and (<b>b</b>) shows the results of k-means clustering, which divided the green tide source into two clusters: the Subei shoal (red box) and the southeastern Yellow Sea (blue box).</p>
Full article ">Figure 16
<p>Spectral information of extracted green tide pixels for the source found in the southeastern Yellow Sea (blue box in <a href="#remotesensing-15-02196-f006" class="html-fig">Figure 6</a>) for 2008, 2017, 2020, and 2021 (where N is the number of pixels).</p>
Full article ">
17 pages, 3621 KiB  
Article
A Spectral Library Study of Mixtures of Common Lunar Minerals and Glass
by Xiaoyi Hu, Te Jiang, Pei Ma, Hao Zhang, Paul Lucey and Menghua Zhu
Remote Sens. 2023, 15(8), 2195; https://doi.org/10.3390/rs15082195 - 21 Apr 2023
Cited by 3 | Viewed by 2270
Abstract
Reflectance spectroscopy is a powerful tool to remotely identify the mineral and chemical compositions of the lunar regolith. The lunar soils contain silicate minerals with prominent absorption features and glasses with much less distinctive spectral features. The accuracy of mineral abundance retrieval may be affected by the presence of glasses. In this work, we construct a spectral library of mixtures of major lunar-type minerals and synthetic glasses with varying relative abundances and test its performance on mineral abundance retrievals. By matching the library spectra with the spectra of mineral mixtures with known abundances, we found that the accuracy of mineral abundance retrieval can be improved by including glass as an endmember. Although our method cannot identify the abundance of glasses quantitatively, the presence or absence of glasses in the mixtures can be decisively determined. Full article
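Spectral-library matching of the kind described, scoring linear mixtures of endmember spectra against a measured spectrum and keeping the best match, can be sketched with a minimum-RMSE search. The endmember spectra below are synthetic placeholders, and RMSE is only one of the fitting metrics a study like this might use.

```python
import numpy as np

# Two fake endmember reflectance spectra over a VNIR-like wavelength grid.
wavelengths = np.linspace(450, 2450, 200)
em_a = 0.3 + 0.1 * np.sin(wavelengths / 300.0)   # hypothetical endmember A
em_b = 0.5 - 0.1 * np.cos(wavelengths / 500.0)   # hypothetical endmember B

# Library: linear mixtures of A and B in 5 wt.% steps.
fractions = np.linspace(0.0, 1.0, 21)
library = np.array([f * em_a + (1 - f) * em_b for f in fractions])

# "Measured" spectrum of an unknown mixture (35 wt.% A).
measured = 0.35 * em_a + 0.65 * em_b

# Best match by minimum root-mean-square error across the library.
rmse = np.sqrt(np.mean((library - measured) ** 2, axis=1))
best = fractions[np.argmin(rmse)]   # retrieved abundance of endmember A
```

Adding a glass endmember, as the paper does, simply grows the library along an extra mixing dimension before the same search.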
Show Figures

Figure 1
<p>Laboratory reflectance spectra measured at incident zenith angle i = 30°, emission angle e = 0°, and phase angle g = 30° of representative lunar minerals and glasses. Dashed lines at 1000 and 2000 nm indicate the major absorption features of the silicates. The Reflectance Experiment Laboratory IDs are LR-CMP-014 for OLV, LS-CMP-004 for PLG, LS-CMP-009 for CPX, LS-CMP-012 for OPX, LR-CMP-052 for green glass, LR-CMP-051 for orange glass, and LR-CMP-182 for ILM. The agglutinate is the Apollo 15041 agglutinates spectrum from [<a href="#B25-remotesensing-15-02195" class="html-bibr">25</a>].</p>
Full article ">Figure 2
<p>(<b>a</b>) The synthetic lunar green and orange glass from [<a href="#B19-remotesensing-15-02195" class="html-bibr">19</a>] and the Apollo sample agglutinates spectra from [<a href="#B25-remotesensing-15-02195" class="html-bibr">25</a>]; (<b>b</b>) VNIR spectra from our spectral library. We added a series of glass contents to the mineral mixture (OLV: 8 wt.%, PYX: 32 wt.%, PLG: 60 wt.%) to highlight the spectral differences between different contents of synthetic lunar glasses in this study.</p>
Full article ">Figure 3
<p>The modeling procedure of constructing the spectral mixing library. We added one of the synthetic glasses, the orange glass or the green glass, to the library [<a href="#B19-remotesensing-15-02195" class="html-bibr">19</a>,<a href="#B34-remotesensing-15-02195" class="html-bibr">34</a>].</p>
Full article ">Figure 4
<p>Comparisons of the LSCC measured spectra with the corresponding best-matched MSL spectra (with orange glass), obtained using four different fitting metrics, for (<b>a</b>) X12001, (<b>b</b>) X15071, (<b>c</b>) X14141, (<b>d</b>) X70181, (<b>e</b>) X61141, and (<b>f</b>) X62231.</p>
Full article ">Figure 5
<p>The absolute difference in relative abundance between the MSL retrieval of this work and the LSCC measurement (Difference = |LSCC measurement − MSL retrieval|) for OLV (<b>a</b>,<b>b</b>), PYX (<b>c</b>,<b>d</b>), and PLG (<b>e</b>,<b>f</b>). The blue squares indicate the weight% difference between the measurement and the retrieval with no glass considered. The green and orange circles indicate the weight% difference between the retrieved and measured mineral abundances with green and orange glasses, respectively. Vertical black and red lines indicate that the retrieval results with glass considered are closer to and further from the actual measurement results, respectively.</p>
Full article ">Figure 6
<p>Comparisons of RELAB sample spectra with the corresponding best-matched MSL spectra (green or orange glass) were obtained using four different fitting metrics (Equations (15)–(18)). The results shown in (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) and those in (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) are obtained using the MSL with orange and green glasses, respectively. c1xs06, c1xs09, c1xs12, and c1xs15 are the spectra IDs corresponding to RELAB sample IDs XS-JFM-006, XS-JFM-009, XS-JFM-012, and XS-JFM-015, respectively.</p>
Full article ">Figure 7
<p>Comparisons of the CE-3 sample spectra with the corresponding best-matched MSL spectra (green or orange glass) were obtained using four different fitting metrics (Equations (15)–(18)). The results shown in (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) and those in (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) are obtained using the MSL with orange and green glasses, respectively.</p>
Full article ">
19 pages, 21341 KiB  
Article
A Phase Difference Measurement Method for Integrated Optical Interferometric Imagers
by Jialiang Chen, Qinghua Yu, Ben Ge, Chuang Zhang, Yan He and Shengli Sun
Remote Sens. 2023, 15(8), 2194; https://doi.org/10.3390/rs15082194 - 21 Apr 2023
Cited by 1 | Viewed by 2063
Abstract
Interferometric imagers based on integrated optics have the advantages of miniaturization and low cost compared with traditional telescope imaging systems and are expected to be applied in the field of space target detection. Phase measurement of the complex coherence factor is crucial for image reconstruction in interferometric imaging. This study identifies the effect of the phase of the complex coherence factor on the extrema of the interference fringes in the interferometric imager and proposes a method for calculating the phase difference of the complex coherence factor of two interference signals by comparing the extrema of the interference fringes in the region where the envelope shape changes approximately linearly, thereby obtaining the phase information required for imaging. Experiments using two interferometric signals with a phase difference of π were conducted to verify the validity and feasibility of the phase difference measurement method. Compared with existing phase measurement methods, this method does not require calibration of the zero-optical-path-difference position and can be applied to integrated optical interferometric imagers using a single-mode fiber, which also allows the imager to work in a more flexible way. The theoretical phase measurement accuracy of this method is better than 0.05 π, which meets the image reconstruction requirements. Full article
(This article belongs to the Special Issue Laser and Optical Remote Sensing for Planetary Exploration)
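The core observation the method exploits, that a phase shift of π flips every extremum of the interference term, can be reproduced with a minimal simulation. The Gaussian coherence envelope and center wavelength below are assumed values for illustration, not the paper's filter spectrum.

```python
import numpy as np

# Interference term I(tau) = env(tau) * cos(2*pi*tau/lam + phi),
# sampled over an optical path difference range of [-150 um, 150 um].
tau = np.linspace(-150e-6, 150e-6, 20001)   # optical path difference, m
lam = 1.55e-6                               # assumed center wavelength, m
env = np.exp(-(tau / 40e-6) ** 2)           # assumed coherence envelope

I0 = env * np.cos(2 * np.pi * tau / lam)            # phi = 0
Ipi = env * np.cos(2 * np.pi * tau / lam + np.pi)   # phi = pi
```

Since cos(x + π) = −cos(x), every maximum of `I0` becomes a minimum of `Ipi` of the same magnitude; comparing extrema where the envelope varies approximately linearly is what recovers the phase difference.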
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Schematic diagram of the integrated optical interferometric imaging system.</p>
Full article ">Figure 2
<p>Simulated interference term <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </semantics></math> and phase <math display="inline"><semantics> <mrow> <mi>f</mi> <mo stretchy="false">(</mo> <mi>τ</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> associated with the spectral shape for a range of optical path difference [−150 μm, 150 μm].</p>
Full article ">Figure 3
<p>All maximum values <math display="inline"><semantics> <mrow> <msubsup> <mi>a</mi> <mn>1</mn> <mn>0</mn> </msubsup> <mo>,</mo> <msubsup> <mi>a</mi> <mn>2</mn> <mn>0</mn> </msubsup> <mo>…</mo> <msubsup> <mi>a</mi> <mi>n</mi> <mn>0</mn> </msubsup> </mrow> </semantics></math> of the interference term of <math display="inline"><semantics> <mrow> <mi>φ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> in the range of optical path difference [−150 μm,150 μm], and the absolute value of the difference between adjacent maxima <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>a</mi> <mi>n</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Values calculated from all the maxima <math display="inline"><semantics> <mrow> <msubsup> <mi>a</mi> <mn>1</mn> <mn>0</mn> </msubsup> <mo>,</mo> <msubsup> <mi>a</mi> <mn>2</mn> <mn>0</mn> </msubsup> <mo>…</mo> <msubsup> <mi>a</mi> <mi>n</mi> <mn>0</mn> </msubsup> </mrow> </semantics></math> of interference signals with different phases in the range of optical path difference [−150 μm, 150 μm].</p>
Full article ">Figure 5
<p>Layout of the experimental setup for amplitude-division interference.</p>
Full article ">Figure 6
<p>Photographs of the experimental setup: (<b>a</b>) light source, collimator with 2.27 m focal length; (<b>b</b>) filter, fiber patch cable, fiber stretcher, motorized delay line, coupler, detectors.</p>
Full article ">Figure 7
<p>Transmission spectrum of the filter.</p>
Full article ">Figure 8
<p>Measured interference term <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>The simulated interference term <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </semantics></math>, the phase <math display="inline"><semantics> <mrow> <mi>f</mi> <mo stretchy="false">(</mo> <mi>τ</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> associated with the spectral shape in the absence of dispersion, and the interference term <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </semantics></math> in the presence of dispersion.</p>
Full article ">Figure 10
<p>Original signal, and filtered signal with the direct-current light intensity subtracted.</p>
Full article ">Figure 11
<p>Measurement results of amplitude-division interference experiments at high signal-to-noise ratios. (<b>a</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 1; (<b>b</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 2; (<b>c</b>) average of the corresponding extrema of the normalized interference terms <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </semantics></math> of the two outputs; (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>a</mi> <msubsup> <mi>x</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>i</mi> <msubsup> <mi>n</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> calculated from the extrema.</p>
Full article ">Figure 12
<p>Measurement results of amplitude-division interference experiments at low signal-to-noise ratios. (<b>a</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 1; (<b>b</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 2; (<b>c</b>) average of the corresponding extrema of the normalized interference terms <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the two outputs; (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>a</mi> <msubsup> <mi>x</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>i</mi> <msubsup> <mi>n</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> calculated from the extrema.</p>
Full article ">Figure 13
<p>Layout of the experimental setup for wavefront-division interference.</p>
Full article ">Figure 14
<p>A centrally symmetric periodic grating target.</p>
Full article ">Figure 15
<p>Measurement results of wavefront-division interference experiments at high signal-to-noise ratios. (<b>a</b>) Extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 1; (<b>b</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 2; (<b>c</b>) average of the corresponding extrema of the normalized interference terms <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the two outputs; (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>a</mi> <msubsup> <mi>x</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>i</mi> <msubsup> <mi>n</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> calculated from the extrema.</p>
Full article ">Figure 16
<p>Measurement results of wavefront-division interference experiments at low signal-to-noise ratios. (<b>a</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 1; (<b>b</b>) extrema of the normalized interference term <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the coupler output 2; (<b>c</b>) average of the corresponding extrema of the normalized interference terms <math display="inline"><semantics> <mrow> <mfrac> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> <mrow> <msub> <mrow> <mfenced close="|" open="|"> <mrow> <msub> <mi>I</mi> <mo>Δ</mo> </msub> </mrow> </mfenced> </mrow> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </mfrac> </mrow> </semantics></math> of the two outputs; (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>a</mi> <msubsup> <mi>x</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>m</mi> <mi>i</mi> <msubsup> <mi>n</mi> <mi>n</mi> <mi>m</mi> </msubsup> </mrow> </semantics></math> calculated from the extrema.</p>
Full article ">
20 pages, 11363 KiB  
Article
An Earth Observation Task Representation Model Supporting Dynamic Demand for Flood Disaster Monitoring and Management
by Zhongguo Zhao, Chuli Hu, Ke Wang, Yixiao Zhang, Zhangyan Xu and Xuan Ding
Remote Sens. 2023, 15(8), 2193; https://doi.org/10.3390/rs15082193 - 21 Apr 2023
Cited by 1 | Viewed by 2102
Abstract
A comprehensive, accurate, and timely expression of earth observation (EO) tasks is the primary prerequisite for the response to and emergency monitoring of disasters, especially floods. However, the existing information models do not fully satisfy the demand for a fine-grained observation expression of EO tasks, which results in the absence of task process management. The current study proposed an EO task representation model based on the meta-object facility to address this problem. The model not only describes the static information of a task, but also defines the dynamics of an observation task by introducing a functional metamodel. This metamodel describes the full life cycle of a task; it comprises five process methods: birth, separation, combination, updating, and extinction. An earth observation task modeling and management prototype system (EO-TMMS) was developed to conduct a remote sensing satellite sensor observation task representation experiment on flooding. According to the results, the proposed model can describe various EO task demands and the full life cycle process of an EO task. Compared with other typical observation task information models, the proposed model satisfies the dynamic and fine-grained process representation of EO tasks, which can improve the efficiency of EO sensor utilization. Full article
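The five life cycle process methods, birth, separation, combination, updating, and extinction, can be sketched as a small state model. The class and method names below are illustrative only, not taken from the paper's metamodel schema.

```python
class EOTask:
    """Toy sketch of an EO task with full-life-cycle process methods."""

    def __init__(self, name, region, time_window):   # "birth"
        self.name, self.region, self.time_window = name, region, time_window
        self.alive = True

    def update(self, region=None, time_window=None):  # "updating"
        if region is not None:
            self.region = region
        if time_window is not None:
            self.time_window = time_window

    def separate(self, subregions):                   # "separation"
        # Split one task into sub-tasks covering smaller regions.
        return [EOTask(f"{self.name}-{i}", r, self.time_window)
                for i, r in enumerate(subregions)]

    @staticmethod
    def combine(tasks, name):                         # "combination"
        # Merge several tasks into one covering all their regions.
        return EOTask(name, [t.region for t in tasks], tasks[0].time_window)

    def extinguish(self):                             # "extinction"
        self.alive = False
```

A flood-monitoring task could thus be split per sub-basin as demands change, updated with a new time window, and retired when the event ends.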
Show Figures

Figure 1
<p>Metamodel framework of earth observation task representation. EO task instances are represented at multiple levels by corresponding specific metadata collection types. A standard representation model of the EO task is constructed based on the EO task metamodel.</p>
Full article ">Figure 2
<p>Earth observation task representation of the metamodeling architecture.</p>
Full article ">Figure 3
<p>Full life cycle of an earth observation task.</p>
Full article ">Figure 4
<p>UML diagram for earth observation task metadata contents. (1..*: at least one instance, 0..*: zero or more instances).</p>
Full article ">Figure 5
<p>Mapping of the earth observation task contents to existing standard data types.</p>
Full article ">Figure 6
<p>Prototype of system architecture.</p>
Full article ">Figure 7
<p>Main interface of the prototype system.</p>
Figure 8
<p>Interface of earth observation task modeling.</p>
Figure 9
<p>Experimental scenario. (<b>a</b>,<b>b</b>) Study area and (<b>c</b>) rainfall and water level variation at the Hankou Hydrological Station.</p>
Figure 10
<p>Segments of earth observation task representation instances (part). (<b>a</b>) Flood inundation observation task and (<b>b</b>) updated flood inundation observation task (the red circle shows the updated time and space information).</p>
Figure 11
<p>Drag-and-drop modeling visualization of the earth observation task process.</p>
Figure 12
<p>Effect of the observation task representation model on observation coverage. (<b>a1</b>,<b>b1</b>) Coverage of unchanged task demands, (<b>a2</b>) observation solution without EObTask after task demand change, (<b>b2</b>) observation solution with EObTask after task demand change.</p>
Figure 13
<p>Comparison of models with and without a task process. (<b>a</b>) Diagram of an earth observation task without a task process and (<b>b</b>) diagram of an earth observation task with a task process.</p>
27 pages, 8754 KiB  
Article
Multiscale Entropy-Based Surface Complexity Analysis for Land Cover Image Semantic Segmentation
by Lianfa Li, Zhiping Zhu and Chengyi Wang
Remote Sens. 2023, 15(8), 2192; https://doi.org/10.3390/rs15082192 - 21 Apr 2023
Cited by 1 | Viewed by 2054
Abstract
Recognizing and classifying natural or artificial geo-objects under complex geo-scenes using remotely sensed data remains a significant challenge due to the heterogeneity in their spatial distribution and sampling bias. In this study, we propose a deep learning method of surface complexity analysis based on multiscale entropy. This method can be used to reduce sampling bias and preserve entropy-based invariance in learning for the semantic segmentation of land use and land cover (LULC) images. Our quantitative models effectively identified and extracted local surface complexity scores, demonstrating their broad applicability. We tested our method using the Gaofen-2 image dataset in mainland China and accurately estimated multiscale complexity. A downstream evaluation revealed that our approach achieved similar or better performance compared to several representative state-of-the-art deep learning methods. This highlights the innovative and significant contribution of our entropy-based complexity analysis and its applicability in improving LULC semantic segmentations through optimal stratified sampling and constrained optimization, which can also potentially be used to enhance semantic segmentation under complex geo-scenes using other machine learning methods. Full article
(This article belongs to the Section Remote Sensing Image Processing)
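The local entropy-based complexity score described in the abstract can be illustrated with a minimal sliding-window sketch; the window size, bin count, and border handling below are arbitrary illustrative choices, not the paper's settings:

```python
import math
from collections import Counter


def local_entropy(image, window=3, bins=8):
    """Shannon entropy (bits) of the gray-level histogram around each pixel.

    A generic sketch of an entropy-based local surface complexity score;
    `window` and `bins` are illustrative, not taken from the paper.
    `image` is a 2-D list of intensity values.
    """
    h, w = len(image), len(image[0])
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat image
    # Quantize intensities into `bins` gray levels.
    q = [[min(int((v - lo) / span * bins), bins - 1) for v in row]
         for row in image]
    r = window // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Clip the window at image borders instead of padding.
            patch = [q[y][x]
                     for y in range(max(0, i - r), min(h, i + r + 1))
                     for x in range(max(0, j - r), min(w, j + r + 1))]
            n = len(patch)
            ent = 0.0
            for c in Counter(patch).values():
                p = c / n
                ent -= p * math.log2(p)
            out[i][j] = ent
    return out
```

A homogeneous patch scores zero entropy, while a heterogeneous patch (e.g., a checkerboard of two land-cover classes) scores close to one bit, which is the intuition behind using the score as a stratifying and weight factor for sampling.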
Show Figures

Figure 1
<p>Multiscale convolution operator by different kernel sizes.</p>
Figure 2
<p>Convolution operator (<b>a</b>) of local entropy-based complexity and the UNet (<b>b</b>) learning algorithm to identify it.</p>
Figure 3
<p>Flow chart for local surface complexity quantification (<b>a</b>), learning (<b>b</b>), and optimal sampling and constraint (<b>c</b>) for binary semantic segmentation.</p>
Figure 4
<p>Optimal selection of the samples (patches) using the complexity score as the stratifying and weight factors.</p>
Figure 5
<p>Quantification of multiscale entropy-based surface complexity by varying kernel sizes.</p>
Figure 6
<p>Violin plots of the local surface complexity extracted from the original images (4 × 4 m<sup>2</sup>) with different kernel sizes (KZ: kernel sizes). The red plus sign indicates the average complexity score of the samples.</p>
Figure 7
<p>Quantification of entropy-based surface complexity by varying upscaling spatial scales (KZ: kernel size). The RGB images display true colors, while the binary mask appears in yellow to represent the target feature. In the KZ rows, a color gradient ranging from blue to yellow is used, where yellow represents higher complexity scores (light-red circle dot: hotspots of surface complexity).</p>
Figure 8
<p>Comparison of the original images, binary mask, extracted surface complexity, and estimated local surface complexity for each land cover. The RGB images show true colors, and the binary mask is displayed in yellow to indicate the target feature. In the last two rows, a color gradient ranging from blue to yellow is used to represent the complexity score, with yellow denoting high complexity degree.</p>
Figure 9
<p>Comparison between build-up segmentation results of baseline UNet, UNets with complexity-informed constraint/sampling, and UNet with both. (<b>a</b>) shows the RGB images depicting true colors; (<b>b</b>–<b>f</b>) display yellow masks representing the build-up feature for ground truth (<b>b</b>) or predictions (<b>c</b>–<b>f</b>).</p>
Figure 10
<p>Comparison between farmland segmentation results of baseline UNet, UNets with complexity-informed constraint/sampling, and UNet with both. (<b>a</b>) shows the RGB images depicting true colors; (<b>b</b>–<b>f</b>) display yellow masks representing the farmland feature for ground truth (<b>b</b>) or predictions (<b>c</b>–<b>f</b>).</p>
Figure 11
<p>Comparison between forest segmentation results of baseline UNet, UNets with complexity-informed constraint/sampling, and UNet with both. (<b>a</b>) shows the RGB images depicting true colors; (<b>b</b>–<b>f</b>) display yellow masks representing the forest feature for ground truth (<b>b</b>) or predictions (<b>c</b>–<b>f</b>).</p>
Figure 12
<p>Comparison between meadow segmentation results of baseline UNet, UNets with complexity-informed constraint/sampling, and UNet with both. (<b>a</b>) shows the RGB images depicting true colors; (<b>b–f</b>) display yellow masks representing the meadow feature for ground truth (<b>b</b>) or predictions (<b>c</b>–<b>f</b>).</p>
Figure 13
<p>Comparison between waters segmentation results of baseline UNet, UNets with complexity-informed constraint/ sampling, and UNet with both. (<b>a</b>) shows the RGB images depicting true colors; (<b>b</b>–<b>f</b>) display yellow masks representing the waters feature for ground truth (<b>b</b>) or predictions (<b>c</b>–<b>f</b>).</p>
Figure 14
<p>Comparison of segmentation results for Crossformer, DeepLab V3+, Global CNN, FCN-ResNet, and our complexity analysis.</p>
22 pages, 13555 KiB  
Article
Effects of Directional Wave Spectra on the Modeling of Ocean Radar Backscatter at Various Azimuth Angles by a Modified Two-Scale Method
by Qiushuang Yan, Yuqi Wu, Chenqing Fan, Junmin Meng, Tianran Song and Jie Zhang
Remote Sens. 2023, 15(8), 2191; https://doi.org/10.3390/rs15082191 - 20 Apr 2023
Viewed by 1801
Abstract
Knowledge of the ocean backscatter at various azimuth angles is critical to the radar detection of the ocean environment. In this study, the modified two-scale model (TSM), which introduces a correction term into the conventional TSM, is improved based on the empirical model CMOD5.n. Then, the influences of different directional wave spectra on the prediction of the azimuthal behavior of ocean radar backscatter are investigated by comparing the simulated results with CMOD5.n and Advanced Scatterometer (ASCAT) measurements. The results show that the overall performance of the single spectra of D, A, E, and H18 and of the composite spectra of AH18 and AEH18 in predicting ocean backscatter differs with wind speed and incidence angle. Generally, the AH18 spectrum performs better at low and moderate wind speeds, while the A spectrum works better at high wind speeds. Nevertheless, the wave spectra have little effect on the prediction of the azimuthal fluctuation of scattering, which depends strongly on the directional spreading function. The relative patterns of azimuthal undulation produced by different spreading functions differ considerably across wind speeds but are similar across incidence angles. The Gaussian spreading function generally performs best in predicting the azimuthal fluctuation of scattering. Full article
(This article belongs to the Section Ocean Remote Sensing)
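The three directional spreading functions compared in this article (Cosine, Sech, and Gaussian; see its Figure 2) have widely used generic forms, sketched below with numerical normalization over [−π, π]; the parameter values and exact parameterizations are illustrative assumptions, not the paper's:

```python
import math


def _normalize(f, n=2001):
    """Return f scaled so it integrates to 1 over [-pi, pi] (trapezoid rule)."""
    xs = [-math.pi + 2 * math.pi * k / (n - 1) for k in range(n)]
    ys = [f(x) for x in xs]
    h = 2 * math.pi / (n - 1)
    area = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return lambda x: f(x) / area


def cosine_spreading(s=10):
    """cos^(2s)(phi/2)-type spreading; larger s -> narrower beam."""
    return _normalize(lambda phi: math.cos(phi / 2) ** (2 * s))


def sech_spreading(beta=2.0):
    """Donelan-type sech^2(beta * phi) spreading."""
    return _normalize(lambda phi: 1.0 / math.cosh(beta * phi) ** 2)


def gaussian_spreading(sigma=0.4):
    """Gaussian spreading with angular width sigma (radians)."""
    return _normalize(lambda phi: math.exp(-0.5 * (phi / sigma) ** 2))
```

Each function peaks in the wind direction (phi = 0) and decays toward crosswind, which is what shapes the azimuthal undulation of the simulated backscatter.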
Show Figures

Figure 1
<p>Curvature spectra <span class="html-italic">B</span>(<span class="html-italic">K</span>) in log−log scales: (<b>a</b>) the D spectrum, (<b>b</b>) the A spectrum, (<b>c</b>) the E spectrum, and (<b>d</b>) the H18 spectrum. Wind speed increases upward from 2 m/s to 24 m/s in 2 m/s steps.</p>
Figure 2
<p>The (<b>a</b>) Cosine, (<b>b</b>) Sech, and (<b>c</b>) Gaussian spreading functions at 10 m/s wind speed for wave numbers of 0.1, 1, 10, and 100 rad/m.</p>
Figure 3
<p>Comparison of distributions of Gaussian sea surface and Tayfun sea surface based on the A spectrum and Cosine spreading function at 10 m/s wind speed and 30° incidence angle along upwind direction. (<b>a</b>) Pdfs for surface elevation; (<b>b</b>) Pdfs for slope along range direction. Blue solid curve, Gaussian sea surface; red dashed curve, non−Gaussian sea surface constructed by Tayfun model.</p>
Figure 4
<p>Estimations of the <span class="html-italic">ζ</span> parameter with the use of different directional wave spectra of D + Cosine, D + Sech, D + Gaussian, A + Cosine, A + Sech, A + Gaussian, E + Cosine, E + Sech, E + Gaussian, H18 + Cosine, H18 + Sech, and H18 + Gaussian as a function of incidence angle (<b>a</b>–<b>c</b>) at wind speed of 3 m/s, (<b>d</b>–<b>f</b>) at wind speed of 9 m/s, (<b>g</b>–<b>i</b>) at wind speed of 16 m/s.</p>
Figure 5
<p>Estimations of the <span class="html-italic">ζ</span> parameter with the use of different directional wave spectra of D + Cosine, D + Sech, D + Gaussian, A + Cosine, A + Sech, A + Gaussian, E + Cosine, E + Sech, E + Gaussian, H18 + Cosine, H18 + Sech, and H18 + Gaussian as functions of wind speed (<b>a</b>–<b>c</b>) at incidence angle of 30°, (<b>d</b>–<b>f</b>) at incidence angle of 40°, (<b>g</b>–<b>i</b>) at incidence angle of 50°.</p>
Figure 6
<p>Example of comparison of the NRCSs predicted by the modified TSM (solid curves) and the conventional TSM (dashed curves) using different wave spectra and the Cosine spreading function, and their comparisons with CMOD5.n at wind speed of 16 m/s and incidence angle of 50°. (<b>a</b>) Results obtained with the D and A spectra. (<b>b</b>) Results obtained with the E and H18 spectra.</p>
Figure 7
<p>Simulated NRCSs based on A + Cosine, A + Sech, and A + Gaussian, and their comparison with the ASCAT data and CMOD5.n as functions of incidence angle at wind speeds of 3 m/s, 9 m/s, and 16 m/s in the (<b>a</b>–<b>c</b>) upwind direction, (<b>d</b>–<b>f</b>) downwind direction, and (<b>g</b>–<b>i</b>) crosswind direction.</p>
Figure 8
<p>Simulated NRCSs based on D + Cosine, A + Cosine, E + Cosine, H18 + Cosine, AH18 + Cosine, and AEH18 + Cosine, and their comparison with the ASCAT data and CMOD5.n as functions of incidence angle at wind speeds of 3 m/s, 9 m/s, and 16 m/s along the (<b>a</b>–<b>c</b>) upwind direction, (<b>d</b>–<b>f</b>) downwind direction, and (<b>g</b>–<b>i</b>) crosswind direction.</p>
Figure 9
<p>Predictions of the modified TSM using A + Cosine, A + Sech, A + Gaussian, AH18 + Cosine, AH18 + Sech, and AH18 + Gaussian, and their comparisons with CMOD5.n and the ASCAT measurements. (<b>a</b>–<b>c</b>) At incidence angle of 30° under wind speeds of 3 m/s, 9 m/s, and 16 m/s. (<b>d</b>–<b>f</b>) At incidence angle of 50° under wind speeds of 3 m/s, 9 m/s, and 16 m/s.</p>
Figure 10
<p>Upwind–downwind asymmetry predicted by the modified TSM with different wave directional spectra input, and their comparison with the ASCAT and CMOD5.n. (<b>a</b>–<b>c</b>) As a function of incidence angle at wind speeds of 3 m/s, 9 m/s, and 16 m/s. (<b>d</b>–<b>f</b>) As a function of wind speed at incidence angles of 30°, 40°, and 50°.</p>
Figure 11
<p>Upwind–crosswind anisotropy predicted by the modified TSM with different directional wave spectra input, and their comparisons with the ASCAT data and CMOD5.n. (<b>a</b>–<b>c</b>) Results plotted as functions of incidence angle for wind speeds of 3 m/s, 9 m/s, and 16 m/s. (<b>d</b>–<b>f</b>) Results plotted as functions of wind speed for incidence angles of 30°, 40°, and 50°.</p>
Figure 12
<p>Upwind–downwind asymmetry predicted by the modified TSM with A + Gaussian and AH18 + Gaussian input at high wind speeds, and their comparison with CMOD5.n as functions of wind speed (<b>a</b>) at incidence angle of 30°, (<b>b</b>) at incidence angle of 40°, and (<b>c</b>) at incidence angle of 50°.</p>
17 pages, 4605 KiB  
Article
Epoch-Wise Estimation and Analysis of GNSS Receiver DCB under High and Low Solar Activity Conditions
by Xiao Zhang, Linyuan Xia, Hong Lin and Qianxia Li
Remote Sens. 2023, 15(8), 2190; https://doi.org/10.3390/rs15082190 - 20 Apr 2023
Cited by 2 | Viewed by 1576
Abstract
Differential code bias (DCB) is one of the main errors involved in ionospheric total electron content (TEC) retrieval using a global navigation satellite system (GNSS). It is typically assumed to be constant over time. However, this assumption is not always valid because receiver DCBs have long been known to exhibit apparent intraday variations. In this paper, a combined method is introduced to estimate the epoch-wise receiver DCB, which is divided into two parts: the receiver DCB at the initial epoch and its change with respect to the initial value. This method was shown to be feasible in subsequent experiments and was applied to analyze the possible reasons for the intraday receiver DCB characteristics of 200 International GNSS Service (IGS) stations in 2014 (high solar activity) and 2017 (low solar activity). The results show that the proportion of stations with an intraday receiver DCB stability better than 1 ns increased from 72.5% in 2014 to 87% in 2017, mainly owing to the replacement of receiver hardware at the stations. Meanwhile, the intraday receiver DCB estimates in summer were generally less stable than those in other seasons. Although more than 90% of the stations maintained an intraday receiver DCB stability within 2 ns, substantial variations with a peak-to-peak range of 5.78 ns were detected for certain stations, yielding an impact of almost 13.84 TECU on the TEC estimates. Moreover, the intraday variability of the receiver DCBs is related to the receiver environment temperature; their correlation coefficient (greater than 0.5 in the analyzed case) increases with temperature. By contrast, the receiver firmware version does not exert a great impact on the intraday variation characteristics of the receiver DCB in this case. Full article
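The quoted impact of receiver DCB variations on TEC estimates follows from the standard dual-frequency relation between a code bias and TEC. A minimal sketch, assuming GPS L1/L2 frequencies (the study's signals, and hence the exact 13.84 TECU figure quoted above, may differ):

```python
# Physical constants (assumed GPS L1/L2; illustrative only).
C = 299_792_458.0   # speed of light, m/s
F1 = 1575.42e6      # GPS L1 frequency, Hz
F2 = 1227.60e6      # GPS L2 frequency, Hz
TECU = 1.0e16       # electrons per m^2 in one TEC unit


def dcb_ns_to_tecu(dcb_ns, f1=F1, f2=F2):
    """TEC bias (TECU) implied by a differential code bias given in ns.

    Uses the standard dual-frequency relation
        TEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * dP,
    where dP is the inter-frequency range bias in metres.
    """
    d_range = C * dcb_ns * 1e-9  # convert the bias from ns to metres
    return f1**2 * f2**2 / (40.3 * (f1**2 - f2**2)) * d_range / TECU
```

With GPS L1/L2, 1 ns of DCB maps to roughly 2.85 TECU, which illustrates why nanosecond-level intraday DCB wander matters for TEC retrieval.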
Show Figures

Figure 1
<p>Simulated values (red line) versus estimates (blue line) of receiver DCB variations at site ALG3 on DOY 07, 2017: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>C</mi> <mi>B</mi> <mo stretchy="false">(</mo> <mi>i</mi> <mo stretchy="false">)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>C</mi> <mi>B</mi> <mo stretchy="false">(</mo> <mi>i</mi> <mo stretchy="false">)</mo> <mo>=</mo> <mi>C</mi> <mrow> <mi>os</mi> <mtext> </mtext> <mo>(</mo> </mrow> <mfrac> <mrow> <mn>2</mn> <mi>π</mi> </mrow> <mi>n</mi> </mfrac> <mo>⋅</mo> <mi>i</mi> <mo>)</mo> <mo>−</mo> <mn>1</mn> </mrow> </semantics></math> and (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>C</mi> <mi>B</mi> <mo stretchy="false">(</mo> <mi>i</mi> <mo stretchy="false">)</mo> <mo>=</mo> <mn>2</mn> <mo>⋅</mo> <mi>sin</mi> <mfenced> <mrow> <mfrac> <mrow> <mn>2</mn> <mi>π</mi> </mrow> <mi>n</mi> </mfrac> <mo>⋅</mo> <mi>i</mi> </mrow> </mfenced> </mrow> </semantics></math> for epochs <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>…</mo> <mi>n</mi> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>2880</mn> </mrow> </semantics></math> and the interval is 30 s.</p>
Figure 2
<p>Epoch-by-epoch estimates of BR-DCB using SD-GF method (red line) and MCCL plus datum method (blue line) for three pairs of co-located receivers: DLFT-DLF4, DLFT-DLF5, DLF4-DLF5 on DOY 172, 2010.</p>
Figure 3
<p>Epoch-by-epoch estimates of single-receiver DCBs retrieved using the MCCL plus datum method for three co-located receivers: DLFT, DLF4, DLF5 on DOY 172, 2010.</p>
Figure 4
<p>Epoch-by-epoch estimates of BR-DCB using SD-GF method (red line) and MCCL plus datum method (blue line) for three pairs of co-located receivers: ALG2-ALG3, ALGO-ALG3, ALGO-ALG2 on DOY 07, 2017.</p>
Figure 5
<p>Epoch-by-epoch estimates of single-receiver DCBs retrieved with the MCCL plus datum method for three co-located receivers: ALGO, ALG2, ALG3 on DOY 07, 2017.</p>
Figure 6
<p>Geographic locations of receivers (of different types) that provide the second set of data analyzed in this work.</p>
Figure 7
<p>The upper scatter plot displays the maximum DSTD of the estimated receiver DCBs in 2014 and 2017, where the stations are aligned in accordance with their geomagnetic latitudes along the horizontal axis. The bottom histograms show the distribution of the DSTD values, where the vertical axis represents the number of receivers.</p>
Figure 8
<p>The distribution of the intraday stability DSTD of the estimated receiver DCBs determined using MCCL in different seasons of 2014 (red) and 2017 (blue). The vertical axis stands for the number of selected receivers and the horizontal axis stands for the DSTD of receiver DCB estimates.</p>
Figure 9
<p>Intraday variations in the receiver DCBs estimates obtained using MCCL for the analyzed stations over one day (DOY 06) in 2014 (red lines) and 2017 (blue lines). The lines are shifted to the same datum for ease of comparison.</p>
Figure 10
<p>Intraday variation characteristics of receiver DCBs spanning consecutive days. The blue scattered dots represent the epoch-wise estimates of the receiver DCBs, the yellow lines represent fits to these values, and the red solid lines represent the CODE DCB products.</p>
Figure 11
<p>Receiver DCB variations (blue lines) at station ALIC, as estimated with the MCCL method for various receiver firmware versions and various temperature conditions. The intraday temperature values (red lines) were extracted from IGS meteorological data. PCC denotes the Pearson correlation coefficient between the blue line and the red line.</p>
Figure 12
<p>Receiver DCB variations (vertical axis) plotted as a function of the intraday temperature values (horizontal axis). The red line represents the results obtained by fitting a linear regression equation to the same dataset used in <a href="#remotesensing-15-02190-f011" class="html-fig">Figure 11</a>.</p>
Figure 13
<p>Biases of the ionospheric STEC (upper panels, where different colors correspond to different satellite arcs) stemming from the intraday variations in the receiver DCBs (bottom panels) for stations ADIS and ZECK on DOY 16, 2017.</p>
37 pages, 11631 KiB  
Article
Determination of Accurate Dynamic Topography for the Baltic Sea Using Satellite Altimetry and a Marine Geoid Model
by Majid Mostafavi, Nicole Delpeche-Ellmann, Artu Ellmann and Vahidreza Jahanmard
Remote Sens. 2023, 15(8), 2189; https://doi.org/10.3390/rs15082189 - 20 Apr 2023
Cited by 5 | Viewed by 2040
Abstract
Accurate determination of dynamic topography (DT) is expected to quantify a realistic sea surface with respect to its vertical datum and to help identify sub-mesoscale features of ocean dynamics. This study explores a method that derives DT by using satellite altimetry (SA) in conjunction with a high-resolution marine geoid model. To assess the method, DT was computed using along-track SA from Sentinel-3A (S3A), Sentinel-3B (S3B), and Jason-3 (JA3) and then compared with DT derived from a tide-gauge-corrected hydrodynamic model (HDM) for the period 2017–2019 over the Baltic Sea. Comparison of SA-derived DT and the corrected HDM showed average discrepancies in the range of ±20 cm, with root mean square errors of 9 cm (for S3B) and 6 cm (for S3A and JA3) and standard deviations between 2 and 16 cm. Inter-comparisons between data sources and multi-mission SA over the Baltic Sea also identified certain persistent and semi-persistent problematic areas that are associated with deficiencies in the geoid, tide gauges, HDM, or SA, or a combination of these. In addition, it was observed that SA data have the potential to show a more realistic (detailed) variation of DT than the HDM, which tended to generate only a smooth (low-pass) surface and to underestimate DT. Full article
(This article belongs to the Special Issue Satellite Altimetry: Technology and Application in Geodesy)
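The validation statistics described in the abstract (mean, standard deviation, and RMSE of the SA-minus-HDM discrepancies, with the |ΔDT| > 20 cm screening mentioned in the Figure 12 caption) can be sketched generically as follows; this is an illustration, not the study's exact Equations (10)–(14):

```python
import math

OUTLIER_M = 0.20  # 20 cm screening threshold, as in the issue's Figure 12


def dt_discrepancy_stats(dt_sa, dt_hdm, outlier=OUTLIER_M):
    """Mean/STD/RMSE of DT_SA - DT_HDM, excluding |diff| > outlier (metres).

    Generic sketch of along-track validation statistics; inputs are
    co-located DT values in metres from satellite altimetry and from the
    tide-gauge-corrected hydrodynamic model.
    """
    diffs = [a - b for a, b in zip(dt_sa, dt_hdm) if abs(a - b) <= outlier]
    n = len(diffs)
    if n == 0:
        return None  # nothing survived the screening
    mean = sum(diffs) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / n)
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return {"mean": mean, "std": std, "rmse": rmse, "n_used": n}
```

Points whose discrepancy exceeds the threshold (e.g., sea-ice or land-contaminated altimetry returns) are dropped before the basin-wide RMSE is formed, mirroring the black-dot exclusions in the figure.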
Show Figures

Figure 1
<p>Inter-relations between the participating datasets (hydrodynamic model, tide gauges, and satellite altimetry data), geoid model, and different reference ellipsoids. Virtual Station (VS) is used to correct the HDM near the TG. The top inset illustrates the VS selection principles along the SA tracks.</p>
Figure 2
<p>Determination of the HDM bias and selection principles of virtual stations (VSs). (<b>a</b>) The horizontal profile of a single SA pass and selection of VSs along the SA track near TG locations. (<b>b</b>) Determination of the HDM bias at the location of VS (<math display="inline"><semantics> <mrow> <mi>B</mi> <mi>i</mi> <mi>a</mi> <msubsup> <mi>s</mi> <mrow> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> <mrow> <mi>V</mi> <mi>S</mi> </mrow> </msubsup> </mrow> </semantics></math>) using the <math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>T</mi> <mi>G</mi> </mrow> </msub> </mrow> </semantics></math> by adding the relative DT.</p>
Figure 3
<p>Workflow of different stages of developed methodology and analysis for validation and assessment process (the corresponding section, table, or figure of each stage is denoted by blue font). The used data products (in brackets) refer to the case study.</p>
Figure 4
<p>Characteristics of the study area: (<b>a</b>) Location of the Baltic Sea (the background represents the NKG2015 geoid model) together with the location of the tide gauges (triangle symbols). The names of sub-basins and main islands are denoted in white and red, respectively. (<b>b</b>) TG data availability between December 2016 to April 2019 of each TG ID (<b>b</b>). The TG numbering is clockwise, starting from the eastmost Estonian TG station and finishing with the Russian Kronstadt TG station (No. 74), which is located at the eastmost end of the Gulf of Finland.</p>
Figure 5
<p>Steps for TG observation data reconstructions and corrections.</p>
Figure 6
<p>Statistics of HDM discrepancies with respect to participating TG stations. (<b>a</b>) Means (characterized by the circle size) and standard deviations (characterized by the colors) of discrepancies between HDM and TGs (Equations (16) and (17)) during 2017–2019 over the Baltic Sea. (<b>b</b>) Monthly average of HDM discrepancies over three years (Equation (16)). (<b>c</b>) Monthly average of HDM discrepancies at six selected TG locations (one station in each country with the largest STD or mean). TG IDs are explained in <a href="#remotesensing-15-02189-f004" class="html-fig">Figure 4</a>a as well as in <a href="#remotesensing-15-02189-t0A1" class="html-table">Table A1</a> (<a href="#app1-remotesensing-15-02189" class="html-app">Appendix A</a>).</p>
Figure 7
<p>Coverage of SA passes within the study area. (<b>a</b>) Sentinel 3A (note that S3B has almost the same track patterns) and (<b>b</b>) Jason-3 missions. For each mission ascending pass numbers are shown. The locations of VS and TG stations are denoted by red circles and black triangles, respectively.</p>
Figure 8
<p>SA data preprocessing steps diagram including data filtering, harmonization, and corrections.</p>
Figure 9
<p>(<b>a</b>) Comparison of S3A pass #272 <math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> </mrow> </msub> </mrow> </semantics></math> (blue dots) with the TG-corrected HDM (<math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>H</mi> <mi>D</mi> <mi>M</mi> <mo>−</mo> <mi>c</mi> <mi>o</mi> <mi>r</mi> <mi>r</mi> </mrow> </msub> </mrow> </semantics></math>) (green line) for 3 cycles (representing different seasons) in 2017. The <math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> </mrow> </msub> </mrow> </semantics></math> moving average is denoted by the blue solid line, the grey zones showing masked near coast and land areas, and the triangles denote locations of virtual stations; (<b>b</b>) the location of the S3A pass #272 in the Baltic Sea.</p>
Figure 10
<p>Along-track <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> <mo> </mo> <mfenced> <mrow> <msub> <mi>φ</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>λ</mi> <mi>s</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> for 4 SA passes ((<b>a</b>): S3A#158, (<b>b</b>): S3A#169, (<b>c</b>): JA3#111, and (<b>d</b>): JA3#16) considering all available cycles during 2017–2019. The blue line represents the moving median of <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> <mo> </mo> <mfenced> <mrow> <msub> <mi>φ</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>λ</mi> <mi>s</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> and the dashed red lines represent the moving standard deviation “envelope”. The average of <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> <mo> </mo> <mfenced> <mrow> <msub> <mi>φ</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>λ</mi> <mi>s</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> is also denoted. The bottom row (<b>e</b>–<b>h</b>) represents each pass location in the Baltic Sea. The grey zone denotes masked land areas. The potential problematic areas are classified into three types. Yellow shaded regions: the suspected geoid model problems; green shade: TG records problem (or the HDM problem); purple shade: SA problem, possibly due to sea ice presence or land contamination.</p>
Figure 11
<p>The averaged <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> <mo> </mo> </mrow> </semantics></math>during 2017–2019 over the Gulf of Bothnia of S3A for two descending (<b>a</b>) and three ascending passes (<b>b</b>), denoted in blue lines. The geoid undulations along the passes are denoted in magenta-color lines (the different line styles represent each pass). The yellow masked area is the location of steep geoid slopes, which may cause deteriorations in <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> <mo>.</mo> </mrow> </semantics></math> The locations of yellow masked areas in the left-hand-side profiles are denoted by red rectangles in the right-hand-side maps (<b>c</b>,<b>d</b>), whereas the NKG2015 geoid model is in the background. Note that for pass#272 the drastic drop at 63° is most likely due to land contamination.</p>
Figure 12
<p>Statistics of the along-track SA examination. (top row) Mean <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math>, discrepancies between SA along-track DT data (<math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> </mrow> </msub> </mrow> </semantics></math>) and <math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>H</mi> <mi>D</mi> <mi>M</mi> <mo>−</mo> <mi>c</mi> <mi>o</mi> <mi>r</mi> <mi>r</mi> </mrow> </msub> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math>) during 2017–2019 for (<b>a</b>) S3B, (<b>b</b>) S3A and (<b>c</b>) JA3 missions (Equation (10)). The <math display="inline"><semantics> <mrow> <mfenced close="|" open="|"> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </mfenced> <mo>&gt;</mo> <mn>20</mn> <mo> </mo> <mi>c</mi> <mi>m</mi> </mrow> </semantics></math> are represented as black dots in the <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math> plots and this is excluded from the calculation of the RMSE value of the whole basin per each mission (Equations (12) and (14)). 
In addition, (bottom row; (<b>d</b>–<b>f</b>)) associated STDs of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math> (Equation (11)) larger than 20 cm are denoted in black dots.</p>
Figure 12 Cont.">
Full article ">Figure 13
<p>Sea ice concentration over the Baltic Sea in March (<b>a</b>) and June (<b>b</b>) 2017 (source: gridded ice chart model available from Copernicus Marine Service Information), and the comparison between <math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>H</mi> <mi>D</mi> <mi>M</mi> <mo>−</mo> <mi>c</mi> <mi>o</mi> <mi>r</mi> <mi>r</mi> </mrow> </msub> </mrow> </semantics></math> (green) and <math display="inline"><semantics> <mrow> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> </mrow> </msub> </mrow> </semantics></math> (blue) over pass #158 for two cycles: cycle 15, March (<b>c</b>), and cycle 19, June (<b>d</b>), 2017.</p>
Full article ">Figure 14
<p>Problematic DT determination areas over the Baltic Sea. Normalized mean of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>D</mi> <msub> <mi>T</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math>) for S3A data (<b>a</b>) and JA3 data (<b>b</b>) over the Baltic Sea in 2017. The problematic areas are enclosed by colored, numbered rectangles classifying the possible causes. Geoid: yellow; SA: purple; and HDM or TG: green. Four selected passes (two for S3A and two for JA3) are marked to illustrate the possible causes (cf. <a href="#remotesensing-15-02189-f012" class="html-fig">Figure 12</a>).</p>
Full article ">Figure 15
<p>Along-track DT and <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>e</mi> <mi>a</mi> <msub> <mi>n</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math> (Equation (10)) of four selected passes (cf. <a href="#remotesensing-15-02189-f014" class="html-fig">Figure 14</a>). The along-track mean DT of SA is denoted by blue dots (the blue solid line is the moving median within a 0.5° latitude window) and that of the HDM by a green line (left axis). The <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>E</mi> <mi>A</mi> <msub> <mi>N</mi> <mrow> <mi>S</mi> <mi>A</mi> <mo>−</mo> <mi>H</mi> <mi>D</mi> <mi>M</mi> </mrow> </msub> </mrow> </semantics></math> is shown in magenta on the right axis for (<b>a</b>,<b>c</b>) two S3A passes at almost the same locations as the two JA3 passes (<b>b</b>,<b>d</b>). Areas with possible geoid modeling problems are highlighted in yellow; the purple areas reflect the poor quality of SA data near land (vertical dashed lines mark the locations of islands). The grey zones mask land areas.</p>
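The moving-median smoothing mentioned in this caption (a median taken within a 0.5° latitude window around each along-track point) can be sketched in a few lines. This is an illustrative numpy sketch under assumed conventions (symmetric window, window width in degrees of latitude, toy values); the paper's exact implementation is not given here:

```python
import numpy as np

def moving_median(lat, values, window=0.5):
    """Moving median over a latitude window: each point is replaced by the
    median of all values lying within +/- window/2 degrees of its latitude."""
    lat, values = np.asarray(lat, float), np.asarray(values, float)
    return np.array([np.median(values[np.abs(lat - c) <= window / 2]) for c in lat])

# The 0.9 m spike at 55.1 degrees is suppressed by its neighbours.
smooth = moving_median([55.0, 55.1, 55.2, 56.0], [0.2, 0.9, 0.3, 0.4])
```

A median (rather than a mean) is the natural choice here because it is robust to the isolated near-coast spikes visible in the along-track SA data.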
Full article ">
19 pages, 3922 KiB  
Article
Improving the Detection Accuracy of Underwater Obstacles Based on a Novel Combined Method of Support Vector Regression and Gravity Gradient
by Tengda Fu, Wei Zheng, Zhaowei Li, Yifan Shen, Huizhong Zhu and Aigong Xu
Remote Sens. 2023, 15(8), 2188; https://doi.org/10.3390/rs15082188 - 20 Apr 2023
Viewed by 1917
Abstract
Underwater gravity gradient detection techniques are conducive to ensuring the safety of submersible sailing. In order to improve the accuracy of underwater obstacle detection based on gravity gradient detection technology, this paper studies the gravity gradient underwater obstacle detection method based on the [...] Read more.
Underwater gravity gradient detection techniques are conducive to ensuring the safety of submersible sailing. In order to improve the accuracy of underwater obstacle detection based on gravity gradient detection technology, this paper studies a gravity gradient underwater obstacle detection method based on a combined support vector regression (SVR) algorithm. First, the gravity gradient difference ratio (GGDR) equation, which depends only on the obstacle&#8217;s position, is derived from the gravity gradient equation using difference and ratio operations. To overcome the shortcomings of solving the GGDR equation with the Newton&#8211;Raphson method (NRM), a novel SVR&#8211;gravity gradient joint method (SGJM) combining it with the SVR algorithm is proposed. Second, a differential ratio dataset is constructed by simulating the gravity gradient data generated by obstacles, and an obstacle location model is trained using SVR. Four measuring lines were selected to verify the SVR-based positioning model. The verification results show that, at distances of less than 500 m, the mean absolute error of the new method in the x, y, and z directions is less than 5.39 m, the root-mean-square error is less than 7.58 m, and the relative error is less than 4%. These evaluation metrics validate the reliability of the novel SGJM-based detection of underwater obstacles. Third, comparative experiments based on the novel SGJM and the traditional NRM were carried out. The experimental results show that the positioning accuracy in the x and z directions is improved by 88% and 85%, respectively, with the SGJM. Full article
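The three evaluation metrics quoted in this abstract (per-axis mean absolute error, root-mean-square error, and relative error of the positioning fix) can be computed as below. This is a hedged numpy sketch: the function name, the definition of relative error as Euclidean error over true range, and the sample coordinates are illustrative assumptions, not the paper's formulas or data:

```python
import numpy as np

def positioning_errors(pred, true):
    """Per-axis MAE and RMSE of predicted obstacle positions (shape n-by-3),
    plus the relative error of each fix (Euclidean error over true range)."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = pred - true
    mae = np.mean(np.abs(err), axis=0)            # one value per x, y, z axis
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    rel = np.linalg.norm(err, axis=1) / np.linalg.norm(true, axis=1)
    return mae, rmse, rel

# One hypothetical fix, 2 m off on each axis.
mae, rmse, rel = positioning_errors([[102.0, 198.0, 48.0]], [[100.0, 200.0, 50.0]])
```

With many fixes along a measuring line, MAE and RMSE summarize the per-axis error distributions, which is how results such as "MAE &lt; 5.39 m, RMSE &lt; 7.58 m" would be aggregated.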
(This article belongs to the Special Issue Remote Sensing in Space Geodesy and Cartography Methods II)
Show Figures
Figure 1
<p>Hyperplane diagram.</p>
Full article ">Figure 2
<p>SGJM flow chart.</p>
Full article ">Figure 3
<p>Simulate the shape of an obstacle.</p>
Full article ">Figure 4
<p>Simulate the full tensor gravity gradient caused by obstacles.</p>
Full article ">Figure 5
<p>The relationship between detection accuracy and detection distance.</p>
Full article ">Figure 6
<p>Differential ratio dataset construction process.</p>
Full article ">Figure 7
<p>Schemes follow the same format.</p>
Full article ">Figure 8
<p>Error distribution of positioning results.</p>
Full article ">Figure 9
<p>Variation in ER and SNR with distance.</p>
Full article ">Figure 10
<p>The SGJM positioning result error: (<b>a</b>) Positioning error in <span class="html-italic">x</span> and <span class="html-italic">y</span> directions; (<b>b</b>) <span class="html-italic">z</span>-directional positioning error.</p>
Full article ">Figure 11
<p>The NRM positioning result error: (<b>a</b>) Positioning error in <span class="html-italic">x</span> and <span class="html-italic">y</span> directions; (<b>b</b>) <span class="html-italic">z</span>-directional positioning error.</p>
Full article ">Figure 12
<p>Comparison of two methods, SNR and RE: (<b>a</b>) SNR; (<b>b</b>) RE.</p>
Full article ">Figure 13
<p>Comparison of RE between NRM and SGJM.</p>
Full article ">
21 pages, 98313 KiB  
Article
Hybrid BBO-DE Optimized SPAARCTree Ensemble for Landslide Susceptibility Mapping
by Duc Anh Hoang, Hung Van Le, Dong Van Pham, Pham Viet Hoa and Dieu Tien Bui
Remote Sens. 2023, 15(8), 2187; https://doi.org/10.3390/rs15082187 - 20 Apr 2023
Cited by 2 | Viewed by 1691
Abstract
This paper presents a new hybrid ensemble modeling method called BBO-DE-STreeEns for landslide susceptibility mapping in Than Uyen district, Vietnam. The method uses subbagging and random subspacing to generate subdatasets for constituent classifiers of the ensemble model, and a split-point and attribute reduced [...] Read more.
This paper presents a new hybrid ensemble modeling method called BBO-DE-STreeEns for landslide susceptibility mapping in Than Uyen district, Vietnam. The method uses subbagging and random subspacing to generate subdatasets for constituent classifiers of the ensemble model, and a split-point and attribute reduced classifier (SPAARC) decision tree algorithm to build each classifier. To optimize hyperparameters of the ensemble model, a hybridization of biogeography-based optimization (BBO) and differential evolution (DE) algorithms is adopted. The landslide database for the study area includes 114 landslide locations, 114 non-landslide locations, and ten influencing factors: elevation, slope, curvature, aspect, relief amplitude, soil type, geology, distance to faults, distance to roads, and distance to rivers. The database was used to build and verify the BBO-DE-STreeEns model, and standard statistical metrics, namely, positive predictive value (PPV), negative predictive value (NPV), sensitivity (Sen), specificity (Spe), accuracy (Acc), Fscore, Cohen’s Kappa, and the area under the ROC curve (AUC), were calculated to evaluate prediction power. Logistic regression, multi-layer perceptron neural network, support vector machine, and SPAARC were used as benchmark models. The results show that the proposed model outperforms the benchmarks with a high prediction power (PPV = 90.3%, NPV = 83.8%, Sen = 82.4%, Spe = 91.2%, Acc = 86.8%, Fscore = 0.862, Kappa = 0.735, and AUC = 0.940). Therefore, the BBO-DE-STreeEns method is a promising tool for landslide susceptibility mapping. Full article
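The statistical metrics listed in this abstract all follow from the four confusion-matrix counts. The sketch below shows the standard definitions; the counts passed in at the end are hypothetical values chosen only to roughly reproduce the reported percentages (the paper's actual train/test split is not given here):

```python
def classification_metrics(tp, tn, fp, fn):
    """PPV, NPV, sensitivity, specificity, accuracy, F-score, and Cohen's
    kappa computed from confusion-matrix counts."""
    n = tp + tn + fp + fn
    ppv, npv = tp / (tp + fp), tn / (tn + fn)     # predictive values
    sen, spe = tp / (tp + fn), tn / (tn + fp)     # class-wise recall
    acc = (tp + tn) / n
    fscore = 2 * ppv * sen / (ppv + sen)          # harmonic mean of PPV and Sen
    # Chance agreement, needed for Cohen's kappa.
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - p_e) / (1 - p_e)
    return ppv, npv, sen, spe, acc, fscore, kappa

# Hypothetical counts for a balanced 114/114 sample set.
ppv, npv, sen, spe, acc, fscore, kappa = classification_metrics(tp=94, tn=104, fp=10, fn=20)
```

Reporting PPV/NPV alongside Sen/Spe is informative here because the landslide and non-landslide classes are deliberately balanced, so accuracy alone would hide asymmetric error rates.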
Show Figures
Figure 1
<p>Location of Than Uyen district.</p>
Full article ">Figure 2
<p>Two photos of the landslide on the slope wall of National Road 279, near the right bank of Nam Kim stream, Na Pa village, Muong Kim commune, Than Uyen district. Source: Vietnam Institute of Geosciences and Mineral Resources.</p>
Full article ">Figure 3
<p>Landslide influencing factors: (<b>a</b>) elevation; (<b>b</b>) slope; (<b>c</b>) curvature; (<b>d</b>) aspect; (<b>e</b>) relief amplitude; (<b>f</b>) soil type; (<b>g</b>) geology; (<b>h</b>) distance to fault; (<b>i</b>) distance to road; and (<b>j</b>) distance to river.</p>
Figure 3 Cont.">
Full article ">Figure 4
<p>The flowchart of the proposed BBO-DE-STreeEns for landslide susceptibility mapping.</p>
Full article ">Figure 5
<p>The role of the influencing factors.</p>
Full article ">Figure 6
<p>Percentage of landslides vs. percentage of susceptibility map for Than Uyen district.</p>
Full article ">Figure 7
<p>The landslide susceptibility map for Than Uyen district using the BBO-DE-STreeEns.</p>
Full article ">