Search Results (29)

Search Parameters:
Keywords = mixed pixel decomposition

27 pages, 33223 KiB  
Article
Synergistic Coupling of Multi-Source Remote Sensing Data for Sandy Land Detection and Multi-Indicator Integrated Evaluation
by Junjun Wu, Yi Li, Bo Zhong, Yan Zhang, Qinhuo Liu, Xiaoliang Shi, Changyuan Ji, Shanlong Wu, Bin Sun, Changlong Li and Aixia Yang
Remote Sens. 2024, 16(22), 4322; https://doi.org/10.3390/rs16224322 - 19 Nov 2024
Viewed by 320
Abstract
Accurate and timely extraction and evaluation of sandy land are essential for ecological environmental protection, and such research is urgently needed to support the Land Degradation Neutrality target of the Sustainable Development Goals (SDGs). This study used Sentinel-1 Synthetic Aperture Radar (SAR) data and Landsat 8 OLI multispectral data as the main data sources. Combining the rich spectral information of the optical data with the penetrating advantage of the radar data, a feature-level fusion method was employed to look beneath the vegetative cover and accurately identify sandy land. Simultaneously, leveraging the results obtained from training with measured data, a comprehensive desertification assessment model combining multiple indicators was proposed to achieve a thorough evaluation of sandy land. The feature-level fusion method achieved an overall accuracy of 86.31% in sandy land detection in Gansu Province, China. The integrated multi-indicator model C22_C/FVC, the ratio of the correlation texture feature of VH to fractional vegetation cover, classifies sandy land into three categories: pixels with C22_C/FVC below 2.2 are fixed sandy land, pixels with values between 2.2 and 5.2 are semi-fixed sandy land, and pixels with values above 5.2 are shifting sandy land. Shifting and semi-fixed sandy land are the predominant types in Gansu Province, covering 85,100 and 87,100 square kilometers, respectively, while fixed sandy land covers the least area, at 51,800 square kilometers. The method presented in this paper is robust for detecting and evaluating sandy land from satellite imagery and can potentially be applied to high-resolution, large-scale detection and evaluation of sandy land.
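The three-way classification rule above reduces to a simple per-pixel threshold test. Below is a minimal sketch of that rule, assuming two co-registered rasters are already loaded as NumPy arrays: c22_corr (the VH correlation texture feature) and fvc (fractional vegetation cover). The array names and the division guard are illustrative; only the 2.2/5.2 thresholds come from the abstract.

```python
import numpy as np

def classify_sandy_land(c22_corr: np.ndarray, fvc: np.ndarray) -> np.ndarray:
    """Return 1 = fixed, 2 = semi-fixed, 3 = shifting sandy land."""
    ratio = c22_corr / np.maximum(fvc, 1e-6)         # guard against division by zero on bare pixels
    labels = np.full(ratio.shape, 2, dtype=np.uint8) # semi-fixed: 2.2 <= ratio <= 5.2
    labels[ratio < 2.2] = 1                          # fixed sandy land
    labels[ratio > 5.2] = 3                          # shifting sandy land
    return labels
```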
(This article belongs to the Section Ecological Remote Sensing)
Figures:
Figure 1: Administrative map of Gansu Province.
Figure 2: Distribution of sampling plots.
Figure 3: Technical flowchart.
Figure 4: Distribution of vegetation cover in Southern Gansu Province.
Figure 5: Detection results of sandy land: (a) Landsat 8 OLI image of the test area in Gansu Province; (b) spectral reflectance curves of different objects from the Landsat 8 OLI image; (c–h) sandy land detection based on Landsat 8 OLI, Sentinel-1, GS fusion, PCA fusion, HSV fusion, and feature-level fusion, respectively.
Figure 6: Detection of sandy land in Gansu Province.
Figure 7: The 25 indicators generated from both spectral and radar data.
Figure 8: Distribution of samples of different sandy land types across optical indicators: (a) NDVI; (b) MSAVI; (c) FVC; (d) EVI; (e) Albedo; (f) BSI; (g) LST_Median; (h) LST_Mean; (i) LST_Max.
Figure 9: Distribution of samples of different sandy land types across texture features of C11: (a) C11; (b) C11_Contrast; (c) C11_Correlation; (d) C11_Dissimilarity; (e) C11_Energy; (f) C11_Entropy; (g) C11_Homogeneity; (h) C11_Mean.
Figure 10: Distribution of samples of different sandy land types across texture features of C22: (a) C22; (b) C22_Contrast; (c) C22_Correlation; (d) C22_Dissimilarity; (e) C22_Energy; (f) C22_Entropy; (g) C22_Homogeneity; (h) C22_Mean.
Figure 11: Distribution of samples in C22_C/FVC.
Figure 12: Evaluation of sandy land in Gansu.
24 pages, 15178 KiB  
Article
Sentinel-2A Image Reflectance Simulation Method for Estimating the Chlorophyll Content of Larch Needles with Pest Damage
by Le Yang, Xiaojun Huang, Debao Zhou, Junsheng Zhang, Gang Bao, Siqin Tong, Yuhai Bao, Dashzebeg Ganbat, Dorjsuren Altanchimeg, Davaadorj Enkhnasan and Mungunkhuyag Ariunaa
Forests 2024, 15(11), 1901; https://doi.org/10.3390/f15111901 - 28 Oct 2024
Viewed by 545
Abstract
With the development of remote sensing technology, estimation of the chlorophyll content (CHLC) of vegetation from satellite data has become an important means of monitoring vegetation health, and high-precision estimation has been a focus of research in this field. In this study, we took larch affected by Yarl's larch looper (Erannis jacobsoni Djak) in the border region of Mongolia as the research object, simulated multispectral reflectance, downscaled the Sentinel-2A satellite data, performed mixed-pixel decomposition, and analyzed the potential of Sentinel-2A data for estimating chlorophyll content by calculating spectral indices (SIs) and spectral derivatives (SDFs) of the images. Spectral features sensitive to the chlorophyll content were then extracted to establish the training set, and, finally, a chlorophyll content estimation model for larch was constructed on the basis of the partial least squares regression (PLSR) algorithm. The results revealed that SIs and SDFs based on simulated remote sensing data were highly sensitive to the chlorophyll content under pest influence, with the SAVI and EVI2 spectral indices as well as the D_B2 and D_B5 spectral derivatives being the most sensitive. The estimation models based on simulated data performed significantly better than models without simulated data, especially those based on SDF-PLSR. The simulated spectral reflectance captured the spectral characteristics of the larch canopy well and was sensitive to damaged larch, especially in the green, red-edge, and near-infrared bands. The proposed approach improves the accuracy of chlorophyll content estimation from Sentinel-2A data and enhances the ability to monitor changes in the chlorophyll content under complex forest conditions through simulation, providing new technical means and a theoretical basis for forestry pest monitoring and vegetation health management.
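As a companion to the modeling step described above, here is a hedged sketch of a PLSR fit of chlorophyll content on spectral features using scikit-learn's PLSRegression. The feature matrix X (e.g., columns for SAVI, EVI2, D_B2, D_B5) and the measured chlorophyll vector y are random placeholders, and n_components is illustrative; this is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.random.rand(120, 6)   # placeholder SI/SDF features per sample
y = np.random.rand(120)      # placeholder measured canopy chlorophyll content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)  # latent components chosen illustratively
print("R2 on held-out samples:", r2_score(y_te, pls.predict(X_te).ravel()))
```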
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Figures:
Figure 1: Experimental area: (a) topography of Mongolia; (b) Sentinel-2A image of the test area; (c) UAV RGB map of the test area; (d) schematic diagram of sample trees used for sample plot selection; (e) healthy sample tree; (f) damaged sample tree.
Figure 2: Sentinel-2A image processing and chlorophyll content estimation methodological framework.
Figure 3: Linear fits to downscaled reflectance in the B5, B6, B7, and B8A bands.
Figure 4: RF model classification results.
Figure 5: Healthy and damaged larch: (a) healthy; (b) damaged.
Figure 6: Fitting effectiveness of the multispectral simulation models: (a–h) fitting effects of the corresponding equations.
Figure 7: Simulated and nonsimulated reflectance curves for each spectral band.
Figure 8: Correlation between spectral features and CHLC.
Figure 9: 1:1 linear fits of the CHLC estimation models: (a,b) model results for simulated remote sensing data; (c,d) model results for nonsimulated data.
Figure 10: Estimation of CHLC in insect-infested stands based on spectral features from simulated (a) and nonsimulated (b) remote sensing data.
Figure 11: Comparison of images before and after Sentinel-2A mixed pixel decomposition.
18 pages, 9929 KiB  
Article
Inversion of Cotton Soil and Plant Analytical Development Based on Unmanned Aerial Vehicle Multispectral Imagery and Mixed Pixel Decomposition
by Bingquan Tian, Hailin Yu, Shuailing Zhang, Xiaoli Wang, Lei Yang, Jingqian Li, Wenhao Cui, Zesheng Wang, Liqun Lu, Yubin Lan and Jing Zhao
Agriculture 2024, 14(9), 1452; https://doi.org/10.3390/agriculture14091452 - 25 Aug 2024
Viewed by 923
Abstract
In order to improve the accuracy of multispectral image inversion of soil and plant analytical development (SPAD) of the cotton canopy, image segmentation methods were utilized to remove background interference, such as soil and shadow, from UAV multispectral images. UAV multispectral images of cotton bud-stage canopies were acquired at three different heights (30 m, 50 m, and 80 m). Four methods, namely vegetation index thresholding (VIT), supervised classification by support vector machine (SVM), spectral mixture analysis (SMA), and multiple endmember spectral mixture analysis (MESMA), were used to segment cotton, soil, and shadow in the multispectral images. The segmented images were used to extract the spectral information of the cotton canopy, and eight vegetation indices were calculated to construct the dataset. Partial least squares regression (PLSR), random forest (RF), and support vector regression (SVR) algorithms were used to construct the inversion model of cotton SPAD. This study analyzed the effects of the different image segmentation methods on the accuracy of spectral information extraction and SPAD modeling of the cotton canopy. The results showed that (1) removing background interference such as soil and shadow with the four image segmentation methods improved the accuracy of spectral information extraction, and the correlation between the vegetation indices calculated from MESMA-segmented images and cotton canopy SPAD improved the most; (2) at the three flight altitudes, with the vegetation indices calculated by the MESMA segmentation method as input variables, the SVR model achieved the best accuracy in the inversion of cotton SPAD, with R2 of 0.810, 0.778, and 0.697, respectively; (3) at a flight altitude of 80 m, the R2 of the SVR models constructed using vegetation indices calculated from images segmented by the VIT, SVM, SMA, and MESMA methods improved by 2.2%, 5.8%, 13.7%, and 17.9%, respectively, compared to the original images. Therefore, the MESMA mixed pixel decomposition method can effectively remove soil and shadow from multispectral images and, in particular, provides a reference for improving the inversion accuracy of crop physiological parameters in low-resolution images with more mixed pixels.
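A minimal sketch of the SVR inversion step named above, assuming the vegetation indices extracted from MESMA-segmented images are stacked in a samples-by-indices matrix and paired with ground-truth SPAD readings; the data, kernel, and hyperparameters here are placeholders, not the study's calibrated values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

vi = np.random.rand(80, 8)           # placeholder: 8 vegetation indices per sample
spad = 40 + 10 * np.random.rand(80)  # placeholder ground-truth SPAD values

# standardize indices, then fit an RBF-kernel SVR (hyperparameters illustrative)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(vi, spad)
print("predicted SPAD for first 3 samples:", model.predict(vi[:3]))
```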
(This article belongs to the Special Issue Application of UAVs in Precision Agriculture—2nd Edition)
Figures:
Figure 1: Overview of the study area.
Figure 2: Experimental instruments: (a) DJI M300 with a Zenmuse P1 camera (green box; DJI, Shenzhen, China); (b) DJI M210 with an MS600Pro multispectral camera (red box; Yusense, Inc., Qingdao, China).
Figure 3: MESMA under different fertilization gradients: (a1–a3) RGB images; (b1–b3) MNF eigenvalues; (c1–c3) enumerating pixels in an n-dimensional visualizer; (d1–d3) output endmember spectra.
Figure 4: Distribution of pure pixels at different flight altitudes: (a) 30 m; (b) 50 m; (c) 80 m.
Figure 5: Segmentation results at different flight altitudes: (a1–a3) RGB images; (b1–b3) NDCSI vegetation index threshold segmentation; (c1–c3) SVM segmentation; (d1–d3) SMA segmentation; (e1–e3) MESMA segmentation.
Figure 6: MESMA abundance inversion result map (flight altitude 80 m): (a) cotton; (b) shadow; (c) soil.
Figure 7: Correlation between cotton SPAD and vegetation indices at 30 m.
Figure 8: Correlation between cotton SPAD and vegetation indices at 50 m.
Figure 9: Correlation between cotton SPAD and vegetation indices at 80 m.
Figure 10: Inversion results of the optimal cotton SPAD model at different flight altitudes: (a) 30 m; (b) 50 m; (c) 80 m.
Figure 11: SPAD distribution map of cotton.
23 pages, 40006 KiB  
Article
Impervious Surface Area Patterns and Their Response to Land Surface Temperature Mechanism in Urban–Rural Regions of Qingdao, China
by Tao Pan, Baofu Li and Letian Ning
Remote Sens. 2023, 15(17), 4265; https://doi.org/10.3390/rs15174265 - 30 Aug 2023
Cited by 3 | Viewed by 1614
Abstract
The expansion of impervious surface area (ISA) in the megacities of China often leads to land surface temperature (LST) aggregation effects, which affect living environments by reducing thermal comfort and have thus become an issue of public concern. However, studies of LST responses to ISA change from an urban–rural synchronous comparison perspective are still lacking for the central coastal megalopolises of China. To address this, a collaborative methodology combining artificial digitization, fully constrained least squares mixed pixel decomposition, the split-window algorithm, and the PCACA model was established for Qingdao using a land use dataset and remote sensing images. The conclusions are as follows. Long-time-series land use monitoring indicated that urban and rural areas expanded by 131.29% and 43.42%, respectively, over the past 50 years (1970 to 2020). Within urban and rural areas, a synchronous ISA increase was observed, at +9.14% (140.55 km2) and +7.94% (28.04 km2), respectively. Higher ratios and area changes were found in the urban regions, and a similar ISA change pattern in both urban and rural regions was captured through horizontal epitaxial expansion and vertical density enhancement of the ISA. The horizontal gradient effect showed mean LSTs of 28.75 °C, 29.77 °C, and 31.91 °C in urban areas and 28.73 °C, 29.66 °C, and 31.65 °C in rural areas for low-, medium-, and high-density ISA, respectively. The vertical density effect showed LST changes of 1.02 °C and 2.14 °C in urban areas but 0.93 °C and 1.99 °C in rural areas for the ISA-density transitions from low to medium and from medium to high density, respectively. Potential surface thermal indicators were assessed, and the urban regions displayed higher sensible heat flux (280.13 W/m2) than the rural regions (274.76 W/m2). The mechanism by which ISA changes affect LST in urban and rural regions was thus revealed. These findings provide a new comparative perspective on urban–rural synchronous change in the central coastal megalopolis of China and a practical reference for related studies.
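The fully constrained least squares (FCLS) unmixing named in the methodology decomposes each pixel spectrum into endmember abundances that are non-negative and sum to one. One common way to implement it, sketched below under the assumption that an endmember matrix E is given, folds the sum-to-one constraint into a non-negative least squares solve via row augmentation, with delta weighting the constraint.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel: np.ndarray, E: np.ndarray, delta: float = 1e3) -> np.ndarray:
    """pixel: (bands,), E: (bands, endmembers) -> non-negative abundances summing ~1."""
    bands, n_end = E.shape
    A = np.vstack([E, delta * np.ones((1, n_end))])  # augmented system: extra row enforces
    b = np.concatenate([pixel, [delta]])             # sum(abundances) ~= 1 with weight delta
    abundances, _ = nnls(A, b)                       # non-negativity from the NNLS solver
    return abundances
```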
Figures:
Graphical abstract.
Figure 1: Geographic location of the study area (the red region): (a) global location; (b) location along the central coast of China; (c) DEM and distribution of administrative divisions; (d) false-color composites of Landsat Operational Land Imager images.
Figure 2: Flowchart of this study.
Figure 3: Spatial land use change maps covering 50 years, from 1970 to 2020.
Figure 4: Changing patterns of urban ISA from 2000 to 2020. Note: (a1,a2) show the same urban location in 2000 and 2020; (b1,b2) show the same rural location in 2000 and 2020.
Figure 5: Spatial characteristics of the LST in the study area.
Figure 6: Spatial distribution characteristics of LST: (a) mean LST in low-, medium-, and high-density ISA; (b) mean LST changes from low- to medium- and from medium- to high-density ISA.
Figure 7: Resultant maps of the land surface temperature mechanism indicators.
17 pages, 5460 KiB  
Article
Analysis on the Rationality of Urban Green Space Distribution in Hangzhou City Based on GF-1 Data
by Danying Zhang, Haijian Liu and Zhifeng Yu
Sustainability 2023, 15(15), 12027; https://doi.org/10.3390/su151512027 - 5 Aug 2023
Cited by 2 | Viewed by 1412
Abstract
With its ecological, economic, and social benefits, urban green space (UGS) plays an important role in urban planning. Accordingly, it is also an important indicator in the evaluation of urban liveability. However, the extraction and statistical analysis of UGS are difficult because urban land use involves complex types, UGS exhibits fragmented distribution, and common vegetation extraction models such as the NDVI model and the pixel bipartite model have limited accuracy under such conditions. In addition, few studies have analyzed UGS in Hangzhou with a pixel decomposition model. Therefore, applying a mixed pixel decomposition model to GF-1 data, this study set the following three objectives: (1) analyzing the temporal changes of UGS in Shangcheng District, Hangzhou from 2018 to 2020; (2) analyzing the spatial distribution characteristics of UGS in the six main urban districts of Hangzhou in 2020; (3) analyzing the rationality and influencing factors of UGS distribution in Hangzhou. In Shangcheng District, the overall UGS area increased from 2018 to 2020 due to an increase in forest area rather than grassland area. In the main built-up area of Hangzhou, medium and high UGS coverage predominated, with an overall high level of greening and relatively uniform vegetation cover; only a few areas showed very low UGS coverage. Some differences were observed among administrative regions under the influence of topography, but overall coverage is high. At the same time, the distribution of UGS in Hangzhou is closely related to policy guidance, the needs of urban residents, and the requirements of economic development. This research not only provides a new way to analyze UGS features in Hangzhou but also offers scientific guidance for governments in urban planning.
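For contrast with the mixed pixel decomposition model used in the study, the "pixel bipartite" (pixel dichotomy) baseline mentioned above can be written as a one-line NDVI rescaling between bare-soil and full-vegetation reference values. The sketch below uses illustrative reference values and a random NDVI raster.

```python
import numpy as np

ndvi = np.random.uniform(-0.1, 0.9, (100, 100))  # placeholder NDVI raster
ndvi_soil, ndvi_veg = 0.05, 0.85                 # assumed pure-soil / pure-vegetation NDVI

# per-pixel green-cover fraction, clipped to the physically meaningful [0, 1] range
ugs_fraction = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)
print("mean green-cover fraction:", ugs_fraction.mean())
```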
Figures:
Figure 1: Study area.
Figure 2: Masking of water bodies based on NDWI (Shangcheng District): (a) before masking; (b) after masking.
Figure 3: Masking of water bodies based on NDWI (main built-up area in Hangzhou): (a) before masking; (b) after masking.
Figure 4: Green space distribution of Shangcheng District in 2018 and 2020: (a1) forest, 2018; (a2) grassland, 2018; (a3) UGS, 2018; (b1) forest, 2020; (b2) grassland, 2020; (b3) UGS, 2020.
Figure 5: Results of mixed pixel decomposition in Hangzhou: (a) forest; (b) grassland; (c) bare land; (d) urban land.
Figure 6: Distribution of UGS in Hangzhou (including forest and grassland).
Figure 7: Results of mixed pixel decomposition (forest and grassland) in each district: Shangcheng, Xiacheng, Jianggan, Gongshu, Xihu, and Binjiang.
Figure 8: Distribution of UGS in each district: (a) Shangcheng; (b) Xiacheng; (c) Jianggan; (d) Gongshu; (e) Xihu; (f) Binjiang.
Figure 9: Result of UGS extraction using the NDVI model.
Figure 10: Result of UGS extraction using the pixel bipartite model.
22 pages, 3855 KiB  
Article
Hyperspectral Anomaly Detection Based on Regularized Background Abundance Tensor Decomposition
by Wenting Shang, Mohamad Jouni, Zebin Wu, Yang Xu, Mauro Dalla Mura and Zhihui Wei
Remote Sens. 2023, 15(6), 1679; https://doi.org/10.3390/rs15061679 - 20 Mar 2023
Cited by 8 | Viewed by 1779
Abstract
The low spatial resolution of hyperspectral images means that mixed pixels are widespread and detection must rely heavily on spectral information, making it difficult to differentiate between the target of interest and the background. Endmember extraction is powerful for enhancing spatial structure information in hyperspectral anomaly detection, but most approaches are based on matrix representations, which inevitably destroy spatial or spectral information. In this paper, we treated the hyperspectral image as a third-order tensor and proposed a novel anomaly detection method based on a low-rank linear mixing model of the scene background. The obtained abundance maps possess more distinctive features than the raw data, which is beneficial for identifying anomalies against the background. Specifically, the low-rank tensor background was approximated as the mode-3 product of an abundance tensor and an endmember matrix. Owing to the distinctive features of the background's abundance maps, we characterized them by tensor regularization, imposing low-rankness through CP decomposition together with smoothness and sparsity. In addition, we utilized the ℓ1,1,2-norm to characterize the tube-wise sparsity of the anomaly, since it accounts for only a small portion of the scene. Experimental results on five real datasets demonstrated the outstanding performance of the proposed method.
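The background model stated above, approximating the low-rank tensor background as the mode-3 product of an abundance tensor and an endmember matrix, is a single tensor contraction. A minimal sketch with illustrative shapes:

```python
import numpy as np

H, W, P, bands = 64, 64, 3, 128
A = np.random.rand(H, W, P)         # abundance tensor: one map per endmember
E = np.random.rand(bands, P)        # endmember matrix: spectra as columns

# mode-3 product B = A x_3 E: contract the endmember axis of A with the columns of E
B = np.einsum('hwp,bp->hwb', A, E)  # reconstructed background tensor, H x W x bands
print(B.shape)                      # (64, 64, 128)
```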
(This article belongs to the Section Remote Sensing Image Processing)
Figures:
Figure 1: AVIRIS-1 dataset: (a) false-color image of the whole scene; (b) false-color image of the detection area (red box in (a)); (c) ground-truth map of the anomalies; (d) spectra of background endmembers #1–#3 (estimated number of endmembers: 3).
Figure 2: AVIRIS-2 dataset: (a) false-color image of the detection area (blue box in the scene); (b) ground-truth map of the anomalies; (c) spectra of background endmembers #1–#2 (estimated number of endmembers: 2).
Figure 3: HYDICE dataset: (a) false-color image of the whole scene; (b) false-color image of the detection area; (c) ground-truth map of the anomalies; (d) spectra of background endmembers #1–#4 (estimated number of endmembers: 4).
Figure 4: Urban-1 of the ABU dataset: (a) false-color image of the detection area; (b) ground-truth map of the anomalies; (c) spectra of background endmembers #1–#2 (estimated number of endmembers: 2).
Figure 5: Urban-2 of the ABU dataset: (a) false-color image of the detection area; (b) ground-truth map of the anomalies; (c) spectra of background endmembers #1–#3 (estimated number of endmembers: 3).
Figures 6–10: 2D plots of the detection results obtained by RX, RPCA, LRASR, GTVLRR, GVAE, TRPCA, PTA, LRTDTV, Dm-1 through Dm-4, and ATLSS on the AVIRIS-1, AVIRIS-2, HYDICE, ABU Urban-1, and ABU Urban-2 datasets, respectively.
Figure 11: ROC curves of the compared methods for the five datasets (AVIRIS-1, AVIRIS-2, HYDICE, ABU Urban-1, ABU Urban-2): 3D ROC curve, 2D ROC curve of (Pd, Pf), and 2D ROC curve of (Pf, τ).
Figure 12: Box-and-whisker plots of the compared methods for the five real datasets.
Figure 13: Detection accuracy of ATLSS on the AVIRIS-2 dataset with different parameter settings: (a) λ1 varied; (b) λ2 varied; (c) λ3 varied; (d) β varied.
15 pages, 3293 KiB  
Article
Retrieval of Fractional Vegetation Cover from Remote Sensing Image of Unmanned Aerial Vehicle Based on Mixed Pixel Decomposition Method
by Mengmeng Du, Minzan Li, Noboru Noguchi, Jiangtao Ji and Mengchao (George) Ye
Drones 2023, 7(1), 43; https://doi.org/10.3390/drones7010043 - 7 Jan 2023
Cited by 7 | Viewed by 2831
Abstract
FVC (fractional vegetation cover) is highly correlated with wheat plant density in the reviving period, which is an important indicator for conducting variable-rate nitrogenous topdressing. In this study, with the objective of improving the inversion accuracy of wheat plant density, an innovative approach for retrieving FVC values from UAV (unmanned aerial vehicle) remote sensing images was proposed based on the mixed pixel decomposition method. Firstly, remote sensing images of an experimental wheat field were acquired using a DJI Mini UAV, and endmembers in the image were identified. Subsequently, a linear unmixing model was used to subdivide mixed pixels into vegetation and soil components, yielding an abundance map of vegetation from which FVC was calculated. Consequently, a linear regression model between the ground-truth wheat plant density and FVC was established. The coefficient of determination (R2), RMSE (root mean square error), and RRMSE (relative RMSE) of the inversion model were 0.97, 1.86 plants/m2, and 0.677%, respectively, indicating a strong correlation between the FVC derived from the mixed pixel decomposition method and wheat plant density. We therefore conclude that mixed pixel decomposition of UAV remote sensing images significantly improves the inversion accuracy of wheat plant density from FVC values, providing method support and basic data for variable-rate nitrogenous fertilization in the wheat reviving period in the manner of precision agriculture.
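A hedged sketch of the two-endmember linear unmixing at the core of this approach: each pixel spectrum is modeled as a combination of a vegetation and a soil endmember, and the vegetation abundance serves as the per-pixel FVC. The endmember spectra and image cube below are placeholders, and the unconstrained least squares solve (with clipping) stands in for whatever constrained solver the authors used.

```python
import numpy as np

bands = 3
veg = np.array([0.05, 0.30, 0.08])    # placeholder vegetation endmember spectrum
soil = np.array([0.25, 0.22, 0.18])   # placeholder soil endmember spectrum
cube = np.random.rand(100, 100, bands)  # placeholder image cube

E = np.stack([veg, soil], axis=1)               # endmember matrix, bands x 2
pixels = cube.reshape(-1, bands).T              # all pixel spectra, bands x N
ab, *_ = np.linalg.lstsq(E, pixels, rcond=None) # abundances, 2 x N (unconstrained)
fvc = np.clip(ab[0], 0.0, 1.0).reshape(100, 100)  # vegetation abundance used as FVC
print("plot-level FVC:", fvc.mean())
```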
Figures:
Figure 1: Plots of the experimental wheat field.
Figure 2: Identification of vegetation endmembers (green pixels) and soil endmembers (red pixels).
Figure 3: Spectral characteristics of vegetation endmembers (green) and soil endmembers (red).
Figure 4: Abundance map of vegetation.
Figure 5: Bimodal characteristics of the green–red difference index map.
Figure 6: Image segmentation results: (a) image thresholding method; (b) support vector machine method.
Figure 7: Wheat plant density inversion models based on FVC values calculated by different methods. Note: FVC_MPD, FVC_SVM, and FVC_IT denote FVC (fractional vegetation cover) calculated using the mixed pixel decomposition, support vector machine, and image thresholding methods, respectively; y1, y2, and y3 denote the wheat plant densities predicted from FVC_MPD, FVC_SVM, and FVC_IT.
Figure 8: Residual plots of wheat plant densities predicted by the different inversion models. Note: MPD, SVM, and IT denote mixed pixel decomposition, support vector machine, and image thresholding.
Figure 9: Map of estimated wheat plant density.
18 pages, 5521 KiB  
Article
Dimensionality Reduction and Classification of Hyperspectral Remote Sensing Image Feature Extraction
by Hongda Li, Jian Cui, Xinle Zhang, Yongqi Han and Liying Cao
Remote Sens. 2022, 14(18), 4579; https://doi.org/10.3390/rs14184579 - 13 Sep 2022
Cited by 21 | Viewed by 4617
Abstract
Terrain classification is an important research direction in the field of remote sensing. Hyperspectral remote sensing images contain a large amount of rich ground object information. However, such data are characterized by high feature dimensionality, strong correlation, high redundancy, and long processing times, all of which make classification difficult. A dimensionality reduction algorithm can transform the data into low-dimensional data with strong features, after which the reduced data can be classified. However, most classification methods cannot effectively extract features from dimensionality-reduced data. In this paper, different dimensionality reduction and supervised machine learning classification algorithms are explored to determine a suitable combination for hyperspectral images. Soft and hard classification methods are adopted to classify pixels according to their diversity. The results show that the data after dimensionality reduction retain the features with high overall correlation while the data dimension is drastically reduced. The combination of uniform manifold approximation and projection (UMAP) for dimensionality reduction and a support vector machine for classification achieves the best terrain classification, with 99.57% accuracy. High-precision neural network fitting for soft classification of hyperspectral images, with a model-fitting coefficient of determination (R2) of up to 0.979, addresses the problem of mixed pixel decomposition.
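A hedged sketch of the best-performing combination reported above, UMAP dimensionality reduction followed by an SVM classifier, assuming the umap-learn package is installed; the pixel matrix, labels, and n_components are placeholders.

```python
import numpy as np
import umap  # pip install umap-learn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(500, 100)        # placeholder hyperspectral pixels (pixels x bands)
y = np.random.randint(0, 5, 500)    # placeholder terrain labels

# reduce the spectral dimension with UMAP, then train an RBF-kernel SVM on the embedding
X_low = umap.UMAP(n_components=10, random_state=0).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```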
Figures:
Graphical abstract.
Figure 1: Flowchart of hard classification.
Figure 2: Flowchart of soft classification.
Figure 3: Neural network structure diagram.
Figure 4: (a) Hyperspectral remote sensing image; (b) label image showing different terrains.
Figure 5: PCA cumulative explained variance ratio.
Figure 6: (a) Classified image from Gaussian maximum likelihood; (b) comparison with the label image.
Figure 7: Overall classification accuracy of the different dimensionality reduction methods.
Figure 8: Terrain classification accuracy of the different dimensionality reduction methods.
Figure 9: Terrain classification accuracy with UMAP dimensionality reduction.
42 pages, 36690 KiB  
Article
The Impacts of Air Quality on Vegetation Health in Dense Urban Environments: A Ground-Based Hyperspectral Imaging Approach
by Farid Qamar, Mohit S. Sharma and Gregory Dobler
Remote Sens. 2022, 14(16), 3854; https://doi.org/10.3390/rs14163854 - 9 Aug 2022
Cited by 6 | Viewed by 2819
Abstract
We examine the impact of changes in ozone (O3), particulate matter (PM2.5), temperature, and humidity on the health of vegetation in dense urban environments, using a very high-resolution, ground-based Visible and Near-Infrared (VNIR, 0.4–1.0 μm with a spectral resolution of 0.75 nm) hyperspectral camera deployed by the Urban Observatory (UO) in New York City. Images were captured at 15 min intervals from 08h00 to 18h00 for 30 days between 3 May and 6 June 2016, with each image containing a mix of dense built structures, sky, and vegetation. Vegetation pixels were identified using unsupervised k-means clustering of the pixel spectra, and the time dependence of the reflection spectrum of a patch of vegetation roughly 1 km from the sensor was measured across the study period. To avoid illumination and atmospheric variability, we introduce a method that measures the ratio of vegetation pixel spectra to the spectrum of a nearby building surface at each time step relative to that ratio at a fixed time. This "Compound Ratio" exploits the (assumed) static nature of the building reflectance to isolate the variability of vegetation reflectance. Two approaches are used to quantify the health of vegetation at each time step: (a) a solar-induced fluorescence indicator (SIFi) calculated as the simple ratio of the amplitude of the Compound Ratio at 0.75 μm and 0.9 μm, and (b) Principal Component Analysis (PCA) decomposition designed to capture more global spectral features. The time dependence of these vegetation health indicators is compared to that of O3, PM2.5, temperature, and humidity values from a distributed, publicly available in situ air quality sensor network. Assuming a linear relationship between vegetation health indicators and air quality indicators, we find that changes in both SIF indicator values and PC amplitudes show a strong correlation (r2 values of 40% and 47%, respectively) with changes in air quality, especially in comparison with nearby buildings used as controls (r2 values of 1% and 4%, respectively, with all molecular correlations consistent with zero to within 3σ uncertainty). Using the SIF indicator, O3 and temperature exhibit a positive correlation with changes in photosynthetic rate in vegetation, while PM2.5 and humidity exhibit a negative correlation. We estimate full covariant uncertainties on the coefficients using a Markov Chain Monte Carlo (MCMC) approach and demonstrate that these correlations remain statistically significant even when controlling for the effects of diurnal sun-sensor geometry and temperature variability. This work highlights the importance of quantifying the effects of various air quality parameters on vegetation health in urban environments in order to uncover the complexity, covariance, and interdependence of the numerous factors involved.
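The two quantities at the heart of the method, the Compound Ratio and the SIF indicator, can be sketched in a few lines. The spectra below are random placeholders; only the ratio construction and the 0.75 μm / 0.90 μm wavelengths come from the abstract.

```python
import numpy as np

wavelengths = np.linspace(0.4, 1.0, 800)     # VNIR grid, ~0.75 nm spacing
veg_t, veg_0 = np.random.rand(2, 800) + 0.5  # vegetation spectra at time t and reference time t0
bld_t, bld_0 = np.random.rand(2, 800) + 0.5  # nearby building spectra at t and t0

# Compound Ratio: vegetation/building at time t relative to the same ratio at t0,
# exploiting the (assumed) static building reflectance to cancel illumination changes
compound_ratio = (veg_t / bld_t) / (veg_0 / bld_0)

def amplitude_at(lam_um: float) -> float:
    """Compound Ratio amplitude at the grid point nearest lam_um (micrometers)."""
    return compound_ratio[np.argmin(np.abs(wavelengths - lam_um))]

sif_i = amplitude_at(0.75) / amplitude_at(0.90)  # solar-induced fluorescence indicator
print("SIFi:", sif_i)
```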
(This article belongs to the Section Urban Remote Sensing)
Figures:
Graphical abstract.
Figure 1: Full-resolution RGB (0.61 μm, 0.54 μm, and 0.48 μm) representation of the Downtown and North Brooklyn scene imaged by the Urban Observatory's hyperspectral imaging system. The blue box shows the control buildings, randomly split into two equal sets (red and yellow pixels); the green box shows the vegetation pixels identified by k-means clustering whose spectra are used in this study.
Figure 2: Cluster labels for a k = 10 k-means clustering of the pixel spectra in Figure 1. Clusters 2 and 5 have spectra with enhanced green and near-infrared wavelengths, characteristic of chlorophyll reflectance, so their pixels are interpreted as vegetation.
Figure 3: Compound Ratios of all scans for vegetation and building pixels as functions of wavelength and scan number. Vegetation reflectances vary at ~50% relative to nearby building pixels, while building pixels vary at only ~1% relative to each other.
Figure 4: Map of NYC showing the locations of the Urban Observatory's hyperspectral imaging system, the vegetation and buildings used in this study, the New York State Department of Environmental Conservation's air quality monitoring sites, and the Weather Underground network of air quality sensors.
Figure 5: Air quality scatter matrix of O3 (ppm) and PM2.5 (μg/m3) concentrations, temperature (°C), and absolute humidity (g/cm3), together with their normalized time series by scan number.
Figure 6: SCOPE model simulations with varying chlorophyll AB content (Cab), carotenoid content (Cca), dry matter content (Cdm), leaf water equivalent layer (Cw), and senescent material fraction (Cs): (a) apparent reflectance spectrum; (b) Compound Ratio; (c) fluorescence emitted per wavelength in the observation direction; (d) photosynthesis as a function of the area under the fluorescence curve.
Figure 7: SCOPE simulations of the correlations between SIF indicator values and relative changes in (a) integrated solar-induced fluorescence spectra, (b) photosynthesis rates, (c) Cab, (d) Cca, (e) Cdm, (f) Cw, and (g) Cs.
Figure 8: Mean Compound Ratio spectrum of vegetation pixels and the amplitudes of the four principal components as functions of wavelength, with their explained variance (EV, %).
Figure 9: Mean Compound Ratio spectrum of building pixels and the amplitudes of the four principal components as functions of wavelength, with their explained variance (EV, %).
Figure 10: MCMC corner plot of the posterior distribution of air quality parameters for vegetation, and the measured SIF indicator (SIFi) values by scan number with 10% of randomly selected probable models.
Figure 11: As Figure 10, for buildings.
Figure 12: Measured vs. predicted SIFi values, and χ2 per degree of freedom, for the vegetation and building models.
Figure 13: SIFi for vegetation and buildings, standardized horizontally for each time-of-day series and plotted for each day vertically; each series is independent of diurnal correlations between solar angles, temperature, ozone, and vegetation health indicators.
Figure 14: r2 values showing how well linear models of O3, PM2.5, temperature, and humidity explain the variability in each standardized time-of-day SIFi series for vegetation and buildings.
Figure 15: Coefficients of models containing all air quality parameters (O3, PM2.5, temperature, and absolute humidity) for each standardized time-of-day SIFi series for vegetation and buildings.
Figure 16: Coefficients of models containing only O3 and PM2.5 concentrations for each standardized time-of-day SIFi series for vegetation and buildings.
Figure 17: Scatter matrix of O3 concentration (g/cm3), temperature (°C), and SIFi and their correlations.
Figure 18: Trends in standardized O3 concentration (black), vegetation SIFi (green), and building SIFi (blue) for scans of equal temperature, with Pearson's correlation coefficients summarized; a clear ozone–vegetation-health correlation remains when temperature is held constant.
Figures A1–A4: MCMC corner plots of the posterior distributions for NDVI and PRI of the vegetation and the buildings.
Figure A5: Full-resolution RGB representation of the scene with the control buildings B and comparison pixel sets r, p, w, y, and b at various distances and directions from the control.
Figure A6: SIFi of vegetation and all objects identified in Figure A5, with 10% of randomly selected probable MCMC models and the r2 of the line of best fit for each.
40 pages, 540 KiB  
Review
Review of Remote Sensing Applications in Grassland Monitoring
by Zhaobin Wang, Yikun Ma, Yaonan Zhang and Jiali Shang
Remote Sens. 2022, 14(12), 2903; https://doi.org/10.3390/rs14122903 - 17 Jun 2022
Cited by 55 | Viewed by 8007
Abstract
The application of remote sensing technology in grassland monitoring and management has been ongoing for decades. Compared with traditional ground measurements, remote sensing has the overall advantages of convenience, efficiency, and cost effectiveness, especially over large areas. This paper provides a comprehensive review of the latest remote sensing estimation methods for several critical grassland parameters, including above-ground biomass, primary productivity, fractional vegetation cover, and leaf area index. Applications of remote sensing monitoring are then reviewed from the perspective of their use of these parameters and other remote sensing data: grassland degradation and grassland use monitoring are evaluated, and disaster monitoring and carbon cycle monitoring are also included. Overall, most studies have used empirical models and statistical regression models, while the number of machine learning approaches is increasing. In addition, some specialized methods, such as light use efficiency approaches for primary productivity and mixed pixel decomposition methods for vegetation coverage, have been widely used and improved. However, all of the above methods have certain limitations. For future work, it is recommended that most applications adopt advanced estimation methods rather than simple statistical regression models. In particular, the potential of deep learning in processing high-dimensional data and fitting non-linear relationships should be further explored. It is also important to explore the potential of new vegetation indices based on the spectral characteristics of the specific grassland under study. Finally, the fusion of multi-source images should be considered to address the deficiencies in information and resolution of remote sensing images acquired by a single sensor or satellite.
(This article belongs to the Special Issue Advances in Optical Remote Sensing Image Processing and Applications)
Figures:
Figure 1: Number of occurrences of the six most frequently used satellites across the 196 reviewed publications.
19 pages, 3961 KiB  
Article
A Hybrid Polarimetric Target Decomposition Algorithm with Adaptive Volume Scattering Model
by Xiujuan Li, Yongxin Liu, Pingping Huang, Xiaolong Liu, Weixian Tan, Wenxue Fu and Chunming Li
Remote Sens. 2022, 14(10), 2441; https://doi.org/10.3390/rs14102441 - 19 May 2022
Cited by 4 | Viewed by 1893
Abstract
Previous studies have shown that scattering mechanism ambiguity and negative power issues still exist in model-based polarimetric target decomposition algorithms, even when deorientation processing is implemented. One possible reason is that the dynamic range of the model itself is limited and cannot fully accommodate mixed scenarios. To address these problems, we propose a hybrid polarimetric target decomposition algorithm (GRH) with a generalized volume scattering model (GVSM) and a random particle cloud volume scattering model (RPCM). The adaptive volume scattering model used in GRH incorporates GVSM and RPCM to model the volume scattering of the regions dominated by double-bounce scattering and by surface scattering, respectively, to expand the dynamic range of the model. In addition, GRH selects the volume scattering component between GVSM and RPCM adaptively according to the dominant scattering mechanism of the fully polarimetric synthetic aperture radar (PolSAR) data. The effectiveness of the proposed method was demonstrated using an AirSAR dataset over San Francisco. Comparison studies were carried out to test the performance of GRH against several target decomposition algorithms. Experimental results show that GRH outperforms the algorithms tested in this study in decomposition accuracy and reduces the number of pixels with negative powers, demonstrating that GRH can largely avoid mechanism ambiguity and negative power issues. Full article
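For context, the sketch below implements the classic Freeman-Durden three-component decomposition, the family of model-based methods that GRH builds on (the FDD model appears in the paper's figures). This is a minimal per-pixel sketch of the standard algorithm, not the authors' GRH; the epsilon guards and the crude negative-power clipping are my own assumptions.

```python
import numpy as np

def freeman_durden(shh2, svv2, shhsvv, shv2):
    """Classic Freeman-Durden 3-component decomposition of one pixel.

    Inputs are the multilooked second-order statistics <|Shh|^2>,
    <|Svv|^2>, <Shh Svv*>, <|Shv|^2>. Returns (Ps, Pd, Pv).
    """
    fv = 3.0 * shv2                     # all cross-pol power -> volume
    A = max(shh2 - fv, 0.0)             # crude negative-power guard
    B = max(svv2 - fv, 0.0)
    C = shhsvv - fv / 3.0
    if C.real >= 0:                     # surface scattering dominates
        alpha = -1.0                    # fix the double-bounce mechanism
        fd = (A * B - abs(C) ** 2) / (A + B + 2 * C.real + 1e-12)
        fs = B - fd
        beta = (C + fd) / (fs + 1e-12)
    else:                               # double-bounce dominates
        beta = 1.0                      # fix the surface mechanism
        fs = (A * B - abs(C) ** 2) / (A + B - 2 * C.real + 1e-12)
        fd = B - fs
        alpha = (C - fs) / (fd + 1e-12)
    Ps = fs * (1 + abs(beta) ** 2)
    Pd = fd * (1 + abs(alpha) ** 2)
    Pv = 8.0 * fv / 3.0
    return Ps, Pd, Pv

# Example pixel dominated by surface scattering; the three powers sum to the span
print(freeman_durden(shh2=1.0, svv2=0.8, shhsvv=0.6 + 0.05j, shv2=0.05))
```

GRH's contribution, per the abstract, is to replace the single fixed volume model used here with GVSM or RPCM, chosen adaptively per pixel.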
(This article belongs to the Special Issue Emerging Techniques and Applications of Polarimetric SAR (PolSAR))
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Polarization characteristics of GVSM. (<b>a</b>) H of GVSM with different $p$; (<b>b</b>) $\bar{\alpha}$ of GVSM with different $p$.</p>
Full article ">Figure 2
<p>Ellipsoidal particle model in the Cartesian coordinate system.</p>
Full article ">Figure 3
<p><span class="html-italic">H</span> and $\bar{\alpha}$ of the volume scattering models. (<b>a</b>) H of GVSM; (<b>b</b>) H of RPCM; (<b>c</b>) H of the FRE2 volume model; (<b>d</b>) $\bar{\alpha}$ of GVSM; (<b>e</b>) $\bar{\alpha}$ of RPCM; (<b>f</b>) $\bar{\alpha}$ of the FRE2 volume model.</p>
Full article ">Figure 4
<p>Two-dimensional <span class="html-italic">H</span>/$\bar{\alpha}$ plane of volume scattering models: FRE2 volume scattering model: gray line; GVSM: red line; RPCM: blue line; FDD volume scattering model: red pentagram; YRO volume scattering model: blue diamond; An’s volume scattering model: black star.</p>
Full article ">Figure 5
<p>Pauli pseudo-color image of the AirSAR data.</p>
Full article ">Figure 6
<p>Pseudo-color images of the polarimetric target decomposition results. Red: double-bounce scattering (<span class="html-italic">Pd</span>); Green: volume scattering (<span class="html-italic">Pv</span>); Blue: surface scattering (<span class="html-italic">Ps</span>). (<b>a</b>) FRE2; (<b>b</b>) YRO; (<b>c</b>) Y4R; (<b>d</b>) MF4CF; (<b>e</b>) HTCD; (<b>f</b>) GRH.</p>
Full article ">Figure 7
<p>Bar charts of polarimetric target decomposition results. Red: double-bounce scattering (<span class="html-italic">Pd</span>). Green: volume scattering (<span class="html-italic">Pv</span>). Blue: surface scattering (<span class="html-italic">Ps</span>). (<b>a</b>) Results of decomposition of urban area; (<b>b</b>) Results of decomposition of vegetation area; (<b>c</b>) Results of decomposition of ocean area.</p>
Full article ">
20 pages, 6464 KiB  
Article
Methods of Sandy Land Detection in a Sparse-Vegetation Scene Based on the Fusion of HJ-2A Hyperspectral and GF-3 SAR Data
by Yi Li, Junjun Wu, Bo Zhong, Xiaoliang Shi, Kunpeng Xu, Kai Ao, Bin Sun, Xiangyuan Ding, Xinshuang Wang, Qinhuo Liu, Aixia Yang, Fei Chen and Mengqi Shi
Remote Sens. 2022, 14(5), 1203; https://doi.org/10.3390/rs14051203 - 1 Mar 2022
Cited by 3 | Viewed by 2760
Abstract
Accurate identification of sandy land plays an important role in sandy land prevention and control. It is difficult to identify the nature of sandy land when vegetation covers the soil in a sandy area. Therefore, HJ-2A hyperspectral data and GF-3 Synthetic Aperture Radar (SAR) data were used as the main data sources in this article. The spectral characteristics of the hyperspectral imagery and the penetration characteristics of the SAR data were exploited jointly to carry out mixed-pixel decomposition in the “horizontal” direction and polarimetric decomposition in the “vertical” direction. The results showed that, in the study area of the Otingdag Sandy Land in China, the accuracy of sandy land detection based on feature-level fusion and on single GF-3 data was verified to be 92% in both cases against field data; the accuracy of sandy land detection based on feature-level fusion was verified to be 88.74% against data collected from Google high-resolution imagery, higher than that based on single HJ-2A (74.17%) or single GF-3 data (88.08%). To further verify the universality of the feature-level fusion method for sandy land detection, the Alxa sandy land was also used as a verification area, and the accuracy of sandy land detection there was verified to be as high as 88.74%. The method proposed in this paper makes full use of the horizontal and vertical structural information of remote sensing data. Both the problem of mixed pixels in sparse-vegetation scenes in the horizontal direction and the problem of vegetation covering sandy soil in the vertical direction were well addressed. Accurate identification of sandy land can thus be realized effectively, which can provide technical support for sandy land prevention and control. Full article
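A minimal sketch of the “horizontal” mixed-pixel decomposition step is given below, using fully constrained linear unmixing solved with nonnegative least squares. This is a generic sketch, not the paper's implementation; the endmember spectra and the sum-to-one weight `delta` are toy assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(pixel, endmembers, delta=1e3):
    """Fully constrained linear unmixing of one hyperspectral pixel.

    pixel: (bands,) spectrum; endmembers: (bands, k) matrix whose columns
    are endmember spectra (e.g., sand, vegetation). Nonnegativity comes
    from NNLS; the sum-to-one constraint is enforced softly by appending
    a heavily weighted row of ones (the common FCLS trick).
    """
    k = endmembers.shape[1]
    E = np.vstack([endmembers, delta * np.ones((1, k))])
    y = np.append(pixel, delta)
    abundances, _ = nnls(E, y)
    return abundances

# Toy example: a 4-band pixel that is 60% sand and 40% vegetation
sand = np.array([0.30, 0.35, 0.40, 0.45])
veg = np.array([0.05, 0.08, 0.40, 0.10])
E = np.column_stack([sand, veg])
mixed = 0.6 * sand + 0.4 * veg
print(unmix_fcls(mixed, E))  # ~[0.6, 0.4]
```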
Show Figures

Figure 1
<p>Locations of the study area and the verification area.</p>
Full article ">Figure 2
<p>The distribution of field data.</p>
Full article ">Figure 3
<p>Technical flow chart.</p>
Full article ">Figure 4
<p>Distribution of vegetation coverage.</p>
Full article ">Figure 5
<p>Sandy land detection based on an HJ-2A hyperspectral image. (<b>a</b>) Distribution of sandy land; (<b>b</b>) the image of HJ-2A hyperspectral.</p>
Full article ">Figure 6
<p>Sandy land detection based on GF-3 data. (<b>a</b>) Distribution of sandy land; (<b>b</b>) GF-3 data.</p>
Full article ">Figure 7
<p>Spectral curves of different features.</p>
Full article ">Figure 8
<p>Sandy land detection based on pixel-level fusion. (<b>a</b>) Distribution of sandy land based on a GS fusion image; (<b>b</b>) distribution of sandy land based on a PCA fusion image; (<b>c</b>) distribution of sandy land based on an HSV fusion image.</p>
Full article ">Figure 9
<p>Sandy land detection based on feature-level fusion.</p>
Full article ">Figure 10
<p>Distribution of sample points in the grid of a Google Earth image.</p>
Full article ">Figure 11
<p>Distribution of vegetation coverage and sample points.</p>
Full article ">Figure 12
<p>Sandy land detection in the verification area. (<b>a</b>) Sandy land detection based on single-sensor GF-3; (<b>b</b>) sandy land detection based on feature-level fusion; (<b>c</b>) HJ-2A hyperspectral image in the verification area.</p>
Full article ">
26 pages, 48240 KiB  
Article
Hyperspectral Image Restoration via Spatial-Spectral Residual Total Variation Regularized Low-Rank Tensor Decomposition
by Xiangyang Kong, Yongqiang Zhao, Jonathan Cheung-Wai Chan and Jize Xue
Remote Sens. 2022, 14(3), 511; https://doi.org/10.3390/rs14030511 - 21 Jan 2022
Cited by 6 | Viewed by 3622
Abstract
To eliminate the mixed noise in hyperspectral images (HSIs), three-dimensional total variation (3DTV) regularization has proven to be an efficient tool. However, 3DTV regularization is prone to losing image details during restoration. To resolve this issue, we propose a novel TV, named spatial–spectral residual total variation (SSRTV). Considering that there is much residual texture information in the spectral variation image, SSRTV first calculates the difference between the pixel values of adjacent bands and then calculates a 2DTV for the residual image. Experimental results demonstrated that the SSRTV regularization term is effective at restructuring the noise in an original HSI, thus allowing low-rank techniques to remove mixed noise more efficiently without treating it as a low-rank feature. The global low-rankness and spatial–spectral correlation of the HSI are exploited by low-rank Tucker decomposition (LRTD). Moreover, it was demonstrated that the L2,1 norm is more effective in dealing with sparse noise, especially sample-specific noise such as stripes or deadlines. The augmented Lagrange multiplier (ALM) algorithm was adopted to solve the proposed model. Finally, experimental results with simulated and real data illustrated the validity of the proposed method, which outperformed state-of-the-art TV-regularized low-rank matrix/tensor decomposition methods in terms of quantitative metrics and visual inspection. Full article
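The SSRTV recipe described above (adjacent-band differences followed by a 2DTV on the residual image) is easy to state in code. The sketch below evaluates an SSRTV-style regularizer for a hyperspectral cube; including the direct spatial differences follows the paper's Figure 1 description, but the weights `w` and the exact weighting scheme are my assumptions.

```python
import numpy as np

def ssrtv(X, w=(1.0, 1.0, 1.0)):
    """SSRTV-style regularizer value for an HSI cube X of shape (rows, cols, bands).

    Per the abstract: take differences between adjacent bands (D_p X),
    then measure an anisotropic 2DTV of that residual image, together
    with the direct spatial differences of X.
    """
    Dh = np.diff(X, axis=1)        # direct horizontal differences
    Dv = np.diff(X, axis=0)        # direct vertical differences
    Dp = np.diff(X, axis=2)        # spectral residual between adjacent bands
    DhDp = np.diff(Dp, axis=1)     # horizontal differences of the residual
    DvDp = np.diff(Dp, axis=0)     # vertical differences of the residual
    spatial = w[0] * np.abs(Dh).sum() + w[1] * np.abs(Dv).sum()
    residual = w[2] * (np.abs(DhDp).sum() + np.abs(DvDp).sum())
    return spatial + residual

X = np.random.rand(32, 32, 20)     # toy HSI cube
print(ssrtv(X))
```

In the paper this term replaces plain 3DTV inside a low-rank Tucker decomposition model solved by ALM; the sketch only shows the regularizer itself.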
(This article belongs to the Special Issue Remote Sensing Image Denoising, Restoration and Reconstruction)
Show Figures

Figure 1
<p>Examples of differences between 3DTV and SSRTV with Band 115 of the real Urban dataset. The 3DTV calculates the L<sub>1</sub> norm of spatial-spectral differences (blue line). SSRTV evaluates the L<sub>1</sub> norm of both direct spatial and spatial-spectral residual differences (red line). The $D_h\mathcal{X}$, $D_v\mathcal{X}$, and $D_p\mathcal{X}$ in (<b>a</b>–<b>c</b>) respectively represent differences of $\mathcal{X}$ along the horizontal, vertical, and spectral directions. The $D_h(D_p\mathcal{X})$ and $D_v(D_p\mathcal{X})$ in (<b>d</b>) and (<b>e</b>) represent differences of $D_p\mathcal{X}$ along the horizontal and vertical directions.</p>
Full article ">Figure 2
<p>The histograms of (<b>a</b>) $D_p\mathcal{X}$, (<b>b</b>) $D_h(D_p\mathcal{X})$, and (<b>c</b>) $D_v(D_p\mathcal{X})$.</p>
Full article ">Figure 3
<p>Illustration of deadlines and stripes in Urban data. (<b>a</b>) Band 204. (<b>b</b>) Band 206.</p>
Full article ">Figure 4
<p>The flowchart of the LRTDSSRTV model.</p>
Full article ">Figure 5
<p>Denoised results of all the methods: (<b>a</b>) Original band 46. (<b>b</b>) Noisy band under case 1. (<b>c</b>) LRMR. (<b>d</b>) LRTV. (<b>e</b>) SSTV. (<b>f</b>) LRTDTV. (<b>g</b>) LRTDGS. (<b>h</b>) LRTDSSRTV.</p>
Full article ">Figure 6
<p>Denoised results of all the methods: (<b>a</b>) Original band 168. (<b>b</b>) Noisy band in case 5. (<b>c</b>) LRMR. (<b>d</b>) LRTV. (<b>e</b>) SSTV. (<b>f</b>) LRTDTV. (<b>g</b>) LRTDGS. (<b>h</b>) LRTDSSRTV.</p>
Full article ">Figure 7
<p>Denoised results of all the methods: (<b>a</b>) Original band 162. (<b>b</b>) Noisy band in case 6. (<b>c</b>) LRMR. (<b>d</b>) LRTV. (<b>e</b>) SSTV. (<b>f</b>) LRTDTV. (<b>g</b>) LRTDGS. (<b>h</b>) LRTDSSRTV.</p>
Full article ">Figure 8
<p>Detailed PSNR/SSIM evaluation of all the methods for each band: (<b>a</b>,<b>b</b>) Case 1, (<b>c</b>,<b>d</b>) Case 2, (<b>e</b>,<b>f</b>) Case 3, (<b>g</b>,<b>h</b>) Case 4, (<b>i</b>,<b>j</b>) Case 5, (<b>k</b>,<b>l</b>) Case 6.</p>
Full article ">Figure 9
<p>From top to bottom are the differences between the original spectrum and the restoration results of locations (86, 75), (55, 90), and (115, 102) in the spatial domain on the Indian Pines’ dataset in cases 1, 5, and 6, respectively. (<b>a</b>) Noisy. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) LRTDSSRTV.</p>
Full article ">Figure 10
<p>Restoration results of all comparison methods for band 104 of the real Indian Pines’ dataset. (<b>a</b>) Original. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV. (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) LRTDSSRTV.</p>
Full article ">Figure 11
<p>Restoration results of all comparison methods for band 150 of the real Indian Pines’ dataset. (<b>a</b>) Original. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV. (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) LRTDSSRTV.</p>
Full article ">Figure 12
<p>Restoration results of all comparison methods for band 108 of the real Urban dataset. (<b>a</b>) Original. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV. (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) LRTDSSRTV.</p>
Full article ">Figure 13
<p>Spectral signatures’ curve of band 108 for the real Urban dataset estimated by different methods: (<b>a</b>) Original. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV. (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) Proposed.</p>
Full article ">Figure 14
<p>Restoration results of all comparison methods for band 208 of the real Urban dataset. (<b>a</b>) Original. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV. (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) LRTDSSRTV.</p>
Full article ">Figure 15
<p>Spectral signatures’ curve of band 208 for the real Urban dataset estimated by different methods: (<b>a</b>) Original. (<b>b</b>) LRMR. (<b>c</b>) LRTV. (<b>d</b>) SSTV. (<b>e</b>) LRTDTV. (<b>f</b>) LRTDGS. (<b>g</b>) LRTDSSRTV.</p>
Full article ">Figure 16
<p>Classification map on Indian pines’ dataset, (<b>a</b>) true values, (<b>b</b>) before denoising, (<b>c</b>) LRMR, (<b>d</b>) LRTV, (<b>e</b>) SSTV, (<b>f</b>) LRTDTV, (<b>g</b>) LRTDGS, (<b>h</b>) SSRTV.</p>
Full article ">Figure 17
<p>Sensitivity analysis between parameters <span class="html-italic">ρ</span> and <span class="html-italic">τ</span> using the simulated Indian Pines’ dataset. (<b>a</b>) Case 1. (<b>b</b>) Case 5. (<b>c</b>) Cases 6.</p>
Full article ">Figure 18
<p>Performance with weight parameter <span class="html-italic">w</span><sub>3</sub>. (<b>a</b>) MPSNR value vs. <span class="html-italic">w</span><sub>3</sub>; (<b>b</b>) MSSIM value vs. <span class="html-italic">w</span><sub>3</sub>.</p>
Full article ">Figure 19
<p>MPSNR and relative change values versus the iteration number of LRTDSSRTV: (<b>a</b>,<b>b</b>) for case 1; (<b>c</b>,<b>d</b>) for case 5; (<b>e</b>,<b>f</b>) for case 6.</p>
Full article ">
18 pages, 4201 KiB  
Article
Weighted Group Sparsity-Constrained Tensor Factorization for Hyperspectral Unmixing
by Xinxi Feng, Le Han and Le Dong
Remote Sens. 2022, 14(2), 383; https://doi.org/10.3390/rs14020383 - 14 Jan 2022
Cited by 6 | Viewed by 1924
Abstract
Recently, unmixing methods based on nonnegative tensor factorization have played an important role in the decomposition of hyperspectral mixed pixels. Many regularizations based on spatial prior knowledge have been designed to improve the performance of unmixing algorithms, such as total variation (TV) regularization. However, these methods mostly ignore the similar characteristics among different spectral bands. To solve this problem, this paper proposes a group sparse regularization that uses a weighted L2,1-norm constraint, which can not only exploit the similar characteristics of the hyperspectral image in the spectral dimension but also preserve the smoothness of the data in the spatial dimension. In summary, a non-negative tensor factorization framework based on a weighted group sparsity constraint is proposed for hyperspectral images. In addition, an efficient alternating direction method of multipliers (ADMM) algorithm is used to solve the resulting optimization problem. Compared with existing popular methods, experiments conducted on three real datasets fully demonstrate the effectiveness and advantages of the proposed method. Full article
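To make the weighted L2,1 group sparsity concrete, the sketch below computes a weighted L2,1 norm over a matricized abundance array together with its proximal operator (group soft-thresholding), the building block an ADMM solver would call at each iteration. The row-wise grouping and the reweighting rule shown here are illustrative assumptions; the paper's exact grouping along the spectral dimension may differ.

```python
import numpy as np

def weighted_l21(A, w):
    """Weighted L2,1 norm of A (groups x features): sum_i w_i * ||A[i, :]||_2.

    Each row forms one group, so whole rows are encouraged to vanish
    together (group sparsity) rather than individual entries.
    """
    return np.sum(w * np.linalg.norm(A, axis=1))

def prox_weighted_l21(A, w, t):
    """Proximal operator (group soft-thresholding) used inside ADMM:
    argmin_Z 0.5 * ||Z - A||_F^2 + t * sum_i w_i * ||Z[i, :]||_2."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t * w[:, None] / np.maximum(norms, 1e-12), 0.0)
    return scale * A

A = np.random.randn(100, 5)
w = 1.0 / (np.linalg.norm(A, axis=1) + 1e-3)  # reweighting: small groups penalized more
print(weighted_l21(A, w), prox_weighted_l21(A, w, 0.1).shape)
```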
Show Figures

Figure 1
<p>The false-color images of the data. (<b>a</b>) DS1 (R = band 25, G = band 40, B = band 175). (<b>b</b>) DS2 (R = band 15, G = band 20, B = band 24). (<b>c</b>) Jasper Ridge (R = band 20, G = band 57, B = band 100). (<b>d</b>) Samon (R = band 65, G = band 130, B = band 125). (<b>e</b>) Cuprite (R = band 55, G = band 125, B = band 120).</p>
Full article ">Figure 2
<p>The abundance maps estimated by the five comparison algorithms and the proposed WSCTF method in DS1 with 20 dB.</p>
Full article ">Figure 3
<p>The abundance maps of endmember 1, endmember 2, endmember 3, endmember 4 and endmember 5, estimated by the five comparison algorithms and the proposed WSCTF method in DS2 with 20 dB.</p>
Full article ">Figure 4
<p>The endmembers of Jasper Ridge Data extracted by the comparison methods and the proposed WSCTF. (<b>a</b>) Tree. (<b>b</b>) Soil. (<b>c</b>) Water. (<b>d</b>) Road.</p>
Full article ">Figure 5
<p>The abundances of six algorithms and the Ground Truth on Jasper Ridge Data.</p>
Full article ">Figure 6
<p>The endmembers of Samon Data extracted by the five comparison methods and the proposed method. (<b>a</b>) Soil. (<b>b</b>) Tree. (<b>c</b>) Water.</p>
Full article ">Figure 7
<p>The abundance maps of six methods and the Ground Truth on Samon Data.</p>
Full article ">Figure 8
<p>The endmember spectra of the Cuprite dataset for all tested methods; the black line is the ground truth. (<b>a</b>) Montmorillonite. (<b>b</b>) Sphene. (<b>c</b>) Alunite. (<b>d</b>) Buddingtonite. (<b>e</b>) Dumortierite. (<b>f</b>) Andradite. (<b>g</b>) Muscovite. (<b>h</b>) Kaolinite #1. (<b>i</b>) Chalcedony. (<b>j</b>) Pyrope. (<b>k</b>) Nontronite. (<b>l</b>) Kaolinite #2.</p>
Full article ">Figure 9
<p>Abundance maps of different endmembers using WSCTF on the AVIRIS Cuprite data. (<b>a</b>) Montmorillonite. (<b>b</b>) Sphene. (<b>c</b>) Alunite. (<b>d</b>) Buddingtonite. (<b>e</b>) Dumortierite. (<b>f</b>) Andradite. (<b>g</b>) Muscovite. (<b>h</b>) Kaolinite #1. (<b>i</b>) Chalcedony. (<b>j</b>) Pyrope. (<b>k</b>) Nontronite. (<b>l</b>) Kaolinite #2.</p>
Full article ">Figure 10
<p>Parameter analysis on Jasper Ridge data. (<b>a</b>) Parameter $\lambda_s$; (<b>b</b>) parameter $\lambda$.</p>
Full article ">
22 pages, 9768 KiB  
Article
Information Extraction from Satellite-Based Polarimetric SAR Data Using Simulated Annealing and SIRT Methods and GPU Processing
by Stanisława Porzycka-Strzelczyk, Jacek Strzelczyk, Kamil Szostek, Maciej Dwornik, Andrzej Leśniak, Justyna Bała and Anna Franczyk
Energies 2022, 15(1), 72; https://doi.org/10.3390/en15010072 - 22 Dec 2021
Cited by 2 | Viewed by 2555
Abstract
The main goal of this research was to propose a new method of polarimetric SAR data decomposition that extracts additional polarimetric information from Synthetic Aperture Radar (SAR) images compared with existing decomposition methods. Most current decomposition methods are based on the scattering, covariance, or coherence matrices describing the radar wave-scattering phenomenon represented in a single pixel of an SAR image. Many different decomposition methods have been proposed to date, but the problem remains open since it has no unique solution. In this research, a new polarimetric decomposition method is proposed that is based on polarimetric signature matrices. Such matrices may reveal hidden information about the image target. Since polarimetric signatures (size 18 × 9) are much larger than scattering (size 2 × 2), covariance (size 3 × 3 or 4 × 4), or coherence (size 3 × 3 or 4 × 4) matrices, it was essential to use appropriate computational tools to calculate the results of the proposed decomposition method within an acceptable time frame. To estimate the effectiveness of the presented method, the obtained results were compared with the outcomes of another decomposition method (Arii decomposition). The conducted research showed that the proposed solution, compared with Arii decomposition, does not overestimate the volume-scattering component in built-up areas and clearly separates objects within mixed areas, where buildings, vegetation, and bare surfaces co-occur. Full article
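To illustrate what a polarimetric signature matrix is, the sketch below computes the co-polarized signature of a 2 × 2 scattering matrix on an 18 × 9 (orientation × ellipticity) grid, matching the signature size quoted above. The sampling steps and the normalization are assumptions; this shows only the signature construction, not the authors' simulated annealing and SIRT decomposition built on top of it.

```python
import numpy as np

def copol_signature(S, n_psi=18, n_chi=9):
    """Co-polarized signature of a 2x2 complex scattering matrix S.

    For each antenna polarization (orientation psi, ellipticity chi) the
    received co-pol power is |e^T S e|^2, where e is the unit Jones
    vector of the transmit polarization (backscatter alignment
    convention). The 18 x 9 grid matches the signature size used in the
    paper; the exact sampling steps are an assumption.
    """
    psis = np.linspace(-np.pi / 2, np.pi / 2, n_psi)
    chis = np.linspace(-np.pi / 4, np.pi / 4, n_chi)
    sig = np.empty((n_psi, n_chi))
    for i, psi in enumerate(psis):
        R = np.array([[np.cos(psi), -np.sin(psi)],
                      [np.sin(psi),  np.cos(psi)]])
        for j, chi in enumerate(chis):
            e = R @ np.array([np.cos(chi), 1j * np.sin(chi)])
            sig[i, j] = np.abs(e @ S @ e) ** 2
    return sig / sig.max()  # normalize as in the plotted signatures

S_trihedral = np.eye(2)             # single-bounce (odd) scatterer
S_dihedral = np.diag([1.0, -1.0])   # double-bounce scatterer; try it too
print(copol_signature(S_trihedral).shape)  # (18, 9)
```

For the trihedral, the resulting signature peaks at zero ellipticity for every orientation and vanishes for circular polarizations, which is the classic single-bounce pedestal-free shape shown in the paper's Figure 2.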
(This article belongs to the Section K: State-of-the-Art Energy Related Technologies)
Show Figures

Figure 1
<p>Basic scattering types: (<b>a</b>) single-bounce, (<b>b</b>) double-bounce, (<b>c</b>) volume scattering, (<b>d</b>) helix scattering.</p>
Full article ">Figure 2
<p>(<b>a</b>) Trihedral, (<b>b</b>) co-pol and (<b>c</b>) cross-pol polarimetric signatures characterizing scattering.</p>
Full article ">Figure 3
<p>(<b>a</b>) Dihedral, (<b>b</b>) co-pol and (<b>c</b>) cross-pol polarimetric signatures characterizing scattering.</p>
Full article ">Figure 4
<p>(<b>a</b>) Left helix, (<b>b</b>) co-pol and (<b>c</b>) cross-pol polarimetric signatures characterizing scattering.</p>
Full article ">Figure 5
<p>(<b>a</b>) Cloud of randomly oriented dipoles, (<b>b</b>) co-pol and (<b>c</b>) cross-pol polarimetric signatures characterizing scattering.</p>
Full article ">Figure 6
<p>(<b>a</b>) Pauli-coded color composition of polarimetric channels (red: |hh-vv|, green: |hv|, blue: |hh+vv|). (<b>b</b>) Optical image of the studied region (source: Google Earth).</p>
Full article ">Figure 7
<p>Average execution times of the SA procedure using CPU and GPU with different numbers of streams.</p>
Full article ">Figure 8
<p>Results of the proposed SA-based and Arii decomposition: (<b>a</b>) decomposed single-bounce scattering powers, (<b>b</b>) decomposed double-bounce scattering powers, (<b>c</b>) decomposed volume-scattering powers.</p>
Full article ">