Bandpass Alignment from Sentinel-2 to Gaofen-1 ARD Products with UNet-Induced Tile-Adaptive Lookup Tables
Figure 1. The overall processing procedure of our dataset, comprising three components: GF1-ARD production, data filtering, and data processing.
Figure 2. The spatial distribution and division of our dataset over the MODIS-derived global green vegetation map. There are three study sites, A, B, and C, located in China, Brazil, and France, respectively. All tiles are randomly divided into training–validation and test sets; the detailed division is shown in the corresponding zoomed-in views. A yellow dot grid represents a tile containing both training and validation samples, while a green dot grid indicates a tile consisting purely of test samples. The samples in the yellow-dot tiles are then split into training and validation samples at a nearly 1:1 ratio.
Figure 3. Distribution of the training, validation, and test datasets over twelve months.
Figure 4. The architecture of the proposed model, composed of two components: 1D LUT generation based on a U-shaped network (UNet) and a simple CNN module.
Figure 5. The architecture of the 1D UNet: an encoder–decoder network with four layers in each of the encoder and decoder. The input has four feature channels; after each encoder layer, the number of feature channels is doubled while the feature size is halved, and after each decoder layer, the number of feature channels is halved while the feature size is doubled. Concatenation operations merge features from different scales, and a final convolutional layer produces outputs with the same number of channels as the inputs.
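To make the figure concrete, the following is a minimal PyTorch sketch of such a 1D UNet. The four encoder/decoder levels, channel doubling/halving, skip concatenations, and the final channel-preserving convolution follow the caption; the base width of 32, two convolutions per block, max-pooling, and transposed-convolution upsampling are assumptions rather than details taken from the paper.

```python
# A minimal sketch of the 1D UNet in Figure 5 (assumptions noted above).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two Conv1d -> BatchNorm -> ReLU stages (a common UNet choice, assumed here).
    return nn.Sequential(
        nn.Conv1d(cin, cout, 3, padding=1), nn.BatchNorm1d(cout), nn.ReLU(inplace=True),
        nn.Conv1d(cout, cout, 3, padding=1), nn.BatchNorm1d(cout), nn.ReLU(inplace=True),
    )

class UNet1D(nn.Module):
    def __init__(self, channels=4, base=32):
        super().__init__()
        w = [base * 2 ** i for i in range(5)]                    # 32, 64, 128, 256, 512
        self.enc = nn.ModuleList(
            [conv_block(channels, w[0])] +
            [conv_block(w[i], w[i + 1]) for i in range(4)])      # last one is the bottleneck
        self.pool = nn.MaxPool1d(2)                              # halves the feature size
        self.up = nn.ModuleList(
            [nn.ConvTranspose1d(w[i + 1], w[i], 2, stride=2)     # doubles the feature size
             for i in reversed(range(4))])
        self.dec = nn.ModuleList(
            [conv_block(2 * w[i], w[i]) for i in reversed(range(4))])
        self.head = nn.Conv1d(w[0], channels, 1)                 # same channels as input

    def forward(self, x):
        skips = []
        for enc in self.enc[:-1]:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.enc[-1](x)                                      # bottleneck
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))             # merge scales
        return self.head(x)

# Per-band tile histograms in, per-band 1D LUTs out (shapes are illustrative).
luts = UNet1D()(torch.randn(2, 4, 256))                          # (batch, bands, entries)
```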
Figure 6. (a) Comparison of the surface reflectance (SR) from Sentinel-2 and GF1-ARD. (b) Comparison of the SR predicted by ordinary least squares (OLS) and the GF1-ARD. (c) Comparison of the SR predicted by our model and the GF1-ARD. All sub-graphs are drawn using data from the test set. The colour bar on the right renders the scatter density: the redder the points, the denser they are. The solid red line is fitted by OLS. The root-mean-squared error (RMSE) and the coefficient of determination (R²) of the linear model are shown in the upper-left corner.
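For reference, the accuracy indices in this figure can be computed as in the sketch below, using only the standard definitions of OLS, RMSE, and R²; the array names are illustrative.

```python
# Accuracy indices used in Figure 6 (standard definitions assumed).
import numpy as np

def ols_metrics(pred: np.ndarray, ref: np.ndarray):
    slope, intercept = np.polyfit(pred, ref, deg=1)           # OLS line ref ~ pred
    rmse = float(np.sqrt(np.mean((pred - ref) ** 2)))
    ss_res = np.sum((ref - (slope * pred + intercept)) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)                         # of the fitted linear model
    return slope, intercept, rmse, r2
```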
Figure 7. (a) Comparison of the normalized difference vegetation index (NDVI) from the Sentinel-2 data adjusted by the linear model and GF1-ARD. (b) Comparison of the NDVI from the Sentinel-2 data transformed by our model and GF1-ARD.
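NDVI itself is the standard band ratio, computable as below; the epsilon guarding against division by zero is an added assumption.

```python
# NDVI = (NIR - red) / (NIR + red), as compared in Figure 7.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    return (nir - red) / (nir + red + eps)
```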
Figure 8. Visualization results of the different methods. Three tiles of the test dataset, distributed across the study sites over several months, are provided for comparison; each is identified at the left and labelled using the “tile-date” nomenclature. The first column shows the GF1-ARD images, the second the Sentinel-2 imagery, the third the predictions of the linear model, and the last the transformed images of our model. All images are shown with the fixed-parameter stretching in Table 6.
Figure 9. Tile thumbnails of our model’s predictions. Thirty tiles are selected from the three study sites, China (A), Brazil (B), and France (C), and the outputs of our model are shown as thumbnails in three rows. All images are shown with the fixed-parameter stretching in Table 6.
Figure 10. The calculated LUTs of three tiles. The first three bands are drawn in their corresponding colours, and the near-infrared band is drawn in purple. The curves are truncated at different values according to the scatter of the data.
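As an illustration of how such a 1D LUT transforms a band, the sketch below assumes the LUT stores output reflectance at N uniformly spaced input-reflectance nodes in [0, 1], with linear interpolation in between; both are assumptions, not details confirmed by the paper.

```python
# Applying a learned 1D LUT to one band (uniform node grid assumed).
import numpy as np

def apply_lut(band: np.ndarray, lut: np.ndarray) -> np.ndarray:
    nodes = np.linspace(0.0, 1.0, lut.size)   # input reflectance grid
    return np.interp(band, nodes, lut)        # piecewise-linear lookup; clamps outside [0, 1]
```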
Figure 11. Comparison of spatial details. The first row shows the true-colour composite of the selected local region in “21KYB-201909”. The second row displays zoomed-in views around the red cross in the first row, where all crosses share the same geographic position.
Figure 12. The amount of S2 L2A and GF1-ARD imagery available at study site A in 2020, broken down by month and valid-data level. The green, blue, and yellow bars represent data with a valid-data rate greater than 90%, 80%, 50%, and 0%, respectively. The extended red bar refers to the corresponding S2 data, and the number at the top is the ratio of S2 to GF1-ARD.
Figure 13. An illustration of combining S2 and GF1-ARD. Yellow dots represent the GF1-ARD and orange dots the S2 data. The images are arranged along the timeline below, with the number of days between images labelled above.
Figure 14. Visualization of the variations between adjacent tiles. For each study site, four neighbouring tiles observed on the same day are mosaicked without any additional post-processing. Each mosaic is identified at the top and labelled using the “site-date” nomenclature. Below each mosaic are four carefully chosen zoomed-in views of the overlapped regions, with red boxes and corresponding numbers indicating where they lie in the corresponding tiles. All images are shown with the fixed-parameter stretching in Table 6.
Figure 15. Visualization results of the tiles in Figure 8 with 2% clip stretching.
Abstract
1. Introduction
2. Materials and Methods
2.1. GF1-ARD Production
2.2. Dataset
2.3. Model Architecture
2.3.1. LUTs
2.3.2. Learning LUTs
2.3.3. Supplementing Spatial Details
2.4. Criterion
1. MSE loss. Suppose there are $K$ training samples $\{(x_k, y_k)\}_{k=1}^{K}$, where $x_k$ denotes the $k$-th S2 patch and $y_k$ indicates the $k$-th GF1-ARD patch. Writing $f(\cdot)$ for the learned transformation, the MSE loss is expressed as follows:
$$\mathcal{L}_{\mathrm{mse}} = \frac{1}{K} \sum_{k=1}^{K} \left\| f(x_k) - y_k \right\|_2^2 .$$
2. Smooth regularization term. To transform the S2 bands into GF1-like bands stably, the values in the LUTs should be locally smooth; in other words, the learnt LUTs are expected to vary only slightly over a small range of input radiance. Writing $L_c[i]$ for the $i$-th of the $N$ entries of the LUT for band $c$, the smoothness of the LUTs is defined as follows:
$$\mathcal{R}_{s} = \sum_{c} \sum_{i=0}^{N-2} \left( L_c[i+1] - L_c[i] \right)^2 .$$
3. Monotonicity regularization term. In addition to being smooth, the LUTs should be monotonic, for two reasons. First, monotonicity preserves the relative order of reflectance values, which helps ensure consistency before and after the transformation. Second, in practice the training samples may not cover the entire domain of the LUTs, especially the two ends: only a tiny portion of pixels in cloudless and snowless regions have reflectance values close to 1. Enforcing monotonicity therefore improves the generalization capability of the LUTs. The monotonicity regularization on the LUTs is formulated as follows (a combined training sketch follows this list):
$$\mathcal{R}_{m} = \sum_{c} \sum_{i=0}^{N-2} \max\left( 0,\ L_c[i] - L_c[i+1] \right) .$$
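Combining the three terms above, the training criterion can be sketched as follows; the regularization weights lam_s and lam_m are hypothetical hyperparameters, not values reported here.

```python
# A sketch combining L_mse, R_s, and R_m into one training criterion.
import torch

def criterion(pred, target, luts, lam_s=1e-4, lam_m=10.0):
    """pred, target: (K, bands, H, W); luts: (K, bands, N) LUT entries."""
    mse = torch.mean((pred - target) ** 2)        # L_mse
    diff = luts[..., 1:] - luts[..., :-1]         # neighbouring LUT entries
    smooth = torch.sum(diff ** 2)                 # R_s: keep the LUTs locally smooth
    mono = torch.sum(torch.relu(-diff))           # R_m: penalize decreasing entries
    return mse + lam_s * smooth + lam_m * mono
```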
3. Results
3.1. Performance Comparison
3.2. Ablation Study
3.3. Visualization Results
3.3.1. True Colour Combination
3.3.2. Batch Visualization
3.3.3. Visualization of LUTs
3.3.4. Spatial Misregistration
3.4. Temporal Frequency Assessment
4. Discussion
4.1. Variations between Adjacent Tiles
4.2. Spatial Coverage of Histogram
4.3. Numerical Closeness versus Distributional Closeness
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Table 1. Characteristics of the GF1-WFV, S-2A MSI, and S-2B MSI sensors.

| | GF1-WFV | S-2A MSI | S-2B MSI |
|---|---|---|---|
| Launch date | 26 April 2013 | 23 June 2015 | 7 March 2017 |
| Nominal equatorial crossing time | 10:30 am | 10:30 am | 10:30 am |
| Spatial resolution | 16 m | 10 m | 10 m |
| Swath width | 800 km | 290 km | 290 km |
| Orbital altitude | 645 km | 786 km | 786 km |
| Revisit period | ~4 days | 10 days (5 days together) | 10 days (5 days together) |
| Blue band | 450–520 nm | 458–523 nm | 458–523 nm |
| Green band | 520–590 nm | 543–578 nm | 543–578 nm |
| Red band | 630–690 nm | 650–680 nm | 650–680 nm |
| NIR band | 770–890 nm | 785–900 nm | 785–900 nm |
Table 2. Division of the dataset across the three study sites.

| Region | Location | Year | Tiles | Training | Validation | Test |
|---|---|---|---|---|---|---|
| A | China | 2020 | 69 | 1511 | 1360 | 2875 |
| B | Brazil | 2019 | 32 | 450 | 439 | 841 |
| C | France | 2020 | 92 | 2381 | 2273 | 4785 |
| Sum | – | – | 193 | 4341 | 4072 | 8501 |
Table 3. Band-wise accuracy of the linear model and our model on the test set.

| Model | Band | Slope | Intercept | R² | RMSE |
|---|---|---|---|---|---|
| Linear model | blue | 0.487 | 0.097 | 0.473 | 0.022 |
| Linear model | green | 0.546 | 0.071 | 0.586 | 0.019 |
| Linear model | red | 0.640 | 0.048 | 0.795 | 0.022 |
| Linear model | NIR | 0.718 | 0.056 | 0.623 | 0.032 |
| Ours | blue | 1.117 | −0.013 | 0.784 | 0.012 |
| Ours | green | 1.026 | −0.002 | 0.828 | 0.012 |
| Ours | red | 1.061 | −0.005 | 0.904 | 0.014 |
| Ours | NIR | 1.053 | −0.014 | 0.860 | 0.032 |
Table 4. NDVI accuracy of the linear model and our model on the test set.

| Model | Index | Slope | Intercept | R² | RMSE |
|---|---|---|---|---|---|
| Linear model | NDVI | 1.213 | −0.089 | 0.903 | 0.073 |
| Ours | NDVI | 1.052 | −0.032 | 0.930 | 0.059 |
Table 5. Ablation study of the histogram input and the CNN module.

| Model | Histogram | CNN | Band | R² | RMSE |
|---|---|---|---|---|---|
| GlobalLUTs | ✗ | ✗ | blue | 0.467 | 0.018 |
| | | | green | 0.701 | 0.015 |
| | | | red | 0.836 | 0.019 |
| | | | NIR | 0.788 | 0.040 |
| GlobalLUTs-Conv | ✗ | ✓ | blue | 0.563 | 0.017 |
| | | | green | 0.769 | 0.013 |
| | | | red | 0.893 | 0.015 |
| | | | NIR | 0.853 | 0.033 |
| TileLUTs | ✓ | ✗ | blue | 0.772 | 0.012 |
| | | | green | 0.785 | 0.013 |
| | | | red | 0.865 | 0.017 |
| | | | NIR | 0.811 | 0.038 |
| TileLUTs-Conv (ours) | ✓ | ✓ | blue | 0.784 | 0.012 |
| | | | green | 0.828 | 0.012 |
| | | | red | 0.904 | 0.014 |
| | | | NIR | 0.860 | 0.032 |
Table 6. Fixed-parameter stretching for visualization.

| Data Type | Band | Background | Linear Stretch | Saturate |
|---|---|---|---|---|
| UInt16 | red | 0–400 | 401–2000 | >2000 |
| UInt16 | green | 0–600 | 601–2000 | >2000 |
| UInt16 | blue | 0–800 | 801–2000 | >2000 |
| Byte | all | 0 | 1–254 | 255 |
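A sketch of applying this fixed-parameter stretch to a UInt16 band follows; the exact rounding behaviour at the range boundaries is an assumption.

```python
# The fixed-parameter stretch of Table 6: UInt16 SR mapped to Byte, with
# per-band background thresholds.
import numpy as np

BACKGROUND = {"red": 400, "green": 600, "blue": 800}

def stretch(band: np.ndarray, name: str) -> np.ndarray:
    lo, hi = BACKGROUND[name], 2000
    x = band.astype(np.float64)
    out = 1.0 + (x - lo) / (hi - lo) * 253.0      # linear stretch -> 1..254
    out = np.where(x <= lo, 0.0, out)             # background -> 0
    out = np.where(x > hi, 255.0, out)            # saturate -> 255
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# e.g. np.dstack([stretch(r, "red"), stretch(g, "green"), stretch(b, "blue")])
```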
Table 7. Accuracy of our model trained with different patch sizes.

| Band | R² (128) | RMSE (128) | R² (256) | RMSE (256) | R² (512, ours) | RMSE (512, ours) |
|---|---|---|---|---|---|---|
| Blue | 0.763 | 0.012 | 0.766 | 0.012 | 0.784 | 0.012 |
| Green | 0.815 | 0.012 | 0.824 | 0.012 | 0.828 | 0.012 |
| Red | 0.893 | 0.015 | 0.898 | 0.015 | 0.904 | 0.014 |
| NIR | 0.847 | 0.034 | 0.857 | 0.033 | 0.860 | 0.032 |