
A general method for downscaling earth resource information

Brendan P. Malone*, Alex B. McBratney, Budiman Minasny, Ichsani Wheeler
Faculty of Agriculture, Food & Natural Resources, The University of Sydney, John Woolley Building, NSW 2006, Australia

Computers & Geosciences 41 (2012) 119–125. doi:10.1016/j.cageo.2011.08.021
Article history: Received 28 March 2011; received in revised form 23 August 2011; accepted 24 August 2011; available online 14 September 2011.
* Corresponding author. Room S207, Faculty of Agriculture, Food & Natural Resources, The University of Sydney, John Woolley Building, NSW 2006, Australia. Tel.: +61 290 365 278. E-mail address: brendan.malone@sydney.edu.au (B.P. Malone).

Abstract

A programme called dissever, scripted for use in the R programming environment, is presented. This programme was designed to facilitate a generalised method for downscaling coarsely resolved earth resource information using available finely gridded covariate data. Under the assumption that the relationship between the target variable being downscaled and the available covariates can be nonlinear, dissever uses weighted generalised additive models (GAMs) to drive the empirical function. An iterative algorithm of GAM fitting and adjustment attempts to optimise the downscaling to ensure that the target variable value given for each coarse grid cell equals the average of all target variable values at the fine scale in each coarse grid cell. A number of outputs needed for mapping results and diagnostic purposes are automatically generated by dissever. We demonstrate the programme's functionality by downscaling a soil organic carbon (SOC) map with a 1-km by 1-km grid resolution down to a 90-m by 90-m grid resolution using available covariate information derived from a digital elevation model, Landsat ETM+ data, and airborne gamma radiometric data. dissever produced high-quality results, as indicated by a low weighted root mean square error (0.82 kg m⁻³) between the 1-km grid cell values and the averages of the 90-m SOC predictions within each corresponding 1-km grid cell. Additionally, there was strong agreement (concordance of 0.94) between the downscaled map and another map created using digital soil mapping methods. Future versioning of dissever will investigate quantifying the uncertainty of the downscaled outputs.
© 2011 Elsevier Ltd. All rights reserved.

Keywords: Pycnophylactic; Disaggregation; Digital soil mapping; Mass balance

1. Introduction

The spatial scale at which earth resource information is required is often mismatched to the scale at which it is available. One way of harmonising "what is required" with "what is available" is the application of either upscaling or downscaling methods. The focus of this study is on the application of a general method for downscaling. Scale in the context of cartography is difficult to define. In terms of digital information products, however, scale is better replaced by terms such as grid cell resolution and spacing (McBratney et al., 2003). Thus downscaling can be defined as a process involving the transfer of information from a coarser to a finer scale or resolution by either mechanistic or empirical functions (Bierkens et al., 2000). Downscaling has particular traction in climatology research (IPCC, 2001), where outputs of climate simulations from general circulation models (GCMs) cannot be directly used for hydrological impact studies of climate change because of a scale mismatch (Wilby et al., 1998; Blöschl, 2005). The grid resolution
of GCMs is generally in the order of tens of thousands of square kilometres. In contrast, the resolution at which inputs to hydrological impact models are needed is on the order of tens or hundreds of square kilometres. Studies by Schomburg et al. (2010) and Wilby and Wigley (1997) detail a number of approaches for downscaling GCM output for use in driving finer scaled soil–vegetation–atmosphere transfer or hydrological models. In other related environmental research fields, Liu and Pu (2008) aimed to enhance land surface temperature (LST) products using coarsely resolved satellite thermal infrared (TIR) imagery. The statistical method for downscaling used by Liu and Pu (2008) was originally developed by Harvey (2002) for disaggregating zonal census counts. Both Merlin et al. (2009) and Yu et al. (2008) set about downscaling soil moisture data retrieved from remote passive-microwave radiometer systems to finer resolutions in order to generate more compatible input for land surface and climate modelling. McBratney (1998) also discusses a number of potential applications for downscaling with particular reference to soil information.

Most downscaling methods can be categorised into two classes: empirical or mechanistic. Generally, for either class, the problem of downscaling involves reconstructing the variation of a property at a fine resolution, given that only the value at the coarser resolution is known (Bierkens et al., 2000). Earlier studies from Tobler (1979) and, more recently, Gotway and Young (2002) detail the downscaling approach that maintains the mass balance with the coarse scaled information, known as the equal-area or pycnophylactic property. These could be simplified as approaches that attempt to harmonise the arithmetic average of the property values at the fine scale with the single property value at the coarse scale. Linear functions, splines, and generalised additive models are examples of empirical methods, and can be exemplified by Ponce-Hernandez et al. (1986), who developed a one-dimensional mass-preserving spline method for disaggregating soil horizon data to give a continuous function of the target variable with depth. Mechanistic approaches have had considerable application in climatology research, where deterministic regional climate models are nested into GCMs, meaning that the initial and boundary conditions to drive the regional climate model are taken from the GCMs (Yarnal et al., 2001). A popular subclass of empirical and mechanistic downscaling approaches involves using auxiliary or covariate information (Wilby and Wigley, 1997). An implicit assumption when using this auxiliary information is that it is strongly related to the target variable being derived at the fine scaled resolution (for examples, see Schomburg et al., 2010; Bierkens et al., 2000; Wilby and Wigley, 1997).

The general method presented in this study uses available fine gridded covariate data to drive the downscaling procedure. Essentially, this procedure is empirical and, through iterative model fitting, attempts to maintain the mass balance; but rather than assuming a linear relationship between the target variable and the covariate data, it allows the relationship to be nonlinear.
Therefore, a generalised multiple regression approach is used, in which linear combinations of the predictors or covariates are replaced with combinations of nonparametric smoothing or fitting functions. This can be achieved with the use of generalised additive models (Hastie and Tibshirani, 1990). Secondly, it is assumed that there is an element of uncertainty in the target variable that is being downscaled. Currently, while downscaling as a procedure is well established (Wilby and Wigley, 1997), it is often assumed or implied that there is no associated uncertainty in the values that are being downscaled. Examples of products with associated uncertainties include all model-based outcomes, whereby uncertainties will always accompany the predictions; digital soil maps are one example (McBratney et al., 2003). Alternatively, the target variable could be the product of a measurement or sensing device for which there will be some quantifiable measurement and/or instrument error. To handle these uncertainties in the empirical downscaling process, higher weighting is given to information that is more accurate than to information that is less accurate. We present this downscaling method as a programme and associated algorithm called dissever, and demonstrate its use in the downscaling of a coarse soil organic carbon (SOC) map to a finely gridded resolution.

2. Materials and methods

2.1. Algorithm for downscaling

A two-stage algorithm, initialisation and iteration, is used to downscale existing coarsely resolved target variable data to a finer resolution and support size, which is determined by the resolution of the available fine gridded environmental covariates. The algorithm presented in this study is based on that described in Liu and Pu (2008), but has been modified to accommodate the inclusion of target variable uncertainties in addition to functionality for modelling nonlinear relationships between a target variable and the available covariates. The algorithm is called dissever, meaning disseveration. Disseveration is a downscaling procedure where the support and the grid spacing are equal and both are changed equally and simultaneously. The target variable value at each coarse resolution grid cell is defined as $\hat{T}_k$, $k = 1, \ldots, B$; thus $B$ is the total number of coarsely resolved grid cells across the extent of a particular study area, and $\hat{t}_m$, $m = 1, \ldots, D$, denotes the estimate of the target variable at each grid cell at the fine scale. In the spatial context there would be many $m$ encapsulated by each $k$, the number of which would be determined by the resolution of $m$ and will not be consistently equal, for example, in study areas with nonsymmetric boundaries. The number of $m$ encapsulated by each $k$ is denoted as $E$. For the initialisation stage, where the iteration counter $l$ is set to 0, $\hat{t}_m^{l}$ is set equal to the value of its encapsulating target variable $\hat{T}_k$. A weighted nonlinear regression model between $\hat{t}_m^{l}$ and the suite of available covariates is then fitted to all the grid cells. dissever uses a weighted generalised additive model (Hastie and Tibshirani, 1990)

$$\hat{t}_m = a + f_1(x_1) + f_2(x_2) + \cdots + f_p(x_p), \qquad (1)$$

where $a$ is a constant, $x_1, x_2, \ldots, x_p$ are each of the covariate data sources, and the $f_j$ are nonparametric smoothing splines that relate $\hat{t}_m$ to the covariates. The model assumes that $\hat{t}_m$ is an additive combination of nonlinear functions of the covariates.
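To make the form of Eq. (1) concrete, the sketch below fits a weighted GAM of this kind using the gam package referenced in Section 2.2. The data frame and column names (elevation, ndvi, soc, and the weight column w) are hypothetical stand-ins for the target variable, covariates, and weights described above, not the actual dissever inputs; it is a minimal illustration only.

# Minimal sketch of a weighted GAM fit in the spirit of Eq. (1).
# 'fine' is a hypothetical fine-grid table: two covariates, a noisy nonlinear
# target, and a weight column (all names are illustrative).
library(gam)
set.seed(1)

fine <- data.frame(elevation = runif(200, 200, 400),
                   ndvi      = runif(200, 0.1, 0.8))
fine$soc <- 20 + 0.02 * fine$elevation + 15 * fine$ndvi^2 + rnorm(200)
fine$w   <- 1                      # equal weights if uncertainties are unknown

# Weighted GAM: smoothing-spline terms for each covariate, weights as in Eq. (3)
fit <- gam(soc ~ s(elevation) + s(ndvi), data = fine, weights = w)

t_hat_m <- predict(fit, newdata = fine)   # fine-scale estimates of the target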
Eq. (1) can be rewritten in the form

$$\hat{t}_m = a + \sum_{j=1}^{p} f_j(x_j). \qquad (2)$$

Through an iterative backfitting algorithm all $f_j$ are computed, which are obtained by means of a smoothing of the dependent variable $\hat{t}_m$ against the covariates $x_j$. Justification of the backfitting algorithm is given by the penalised residual sum of squares (PRSS) criterion, which is minimised through subsequent iterations of the backfitting algorithm (Hastie et al., 2001). Essentially, the PRSS can be considered as a smoothing spline approach to estimating the additive model and is defined as

$$\mathrm{PRSS}(a, f_1, f_2, \ldots, f_p) = \sum_{m=1}^{D} w_k \left\{ \hat{t}_m - a - \sum_{j=1}^{p} f_j(x_{mj}) \right\}^2 + \sum_{j=1}^{p} \lambda_j \int \{ f_j''(t_j) \}^2 \, dt_j. \qquad (3)$$

Each of the functions $f_j$ is a cubic spline in the covariate $x_j$, with knots at each of the unique values of $x_{mj}$, $m = 1, \ldots, D$. The first term measures the "goodness of data fitting", or fidelity; the second term, scaled by $\lambda_j$, represents the "penalties" and is defined by the functions' curvatures $\int \{ f_j''(t_j) \}^2 \, dt_j$. The $\lambda_j$ is considered the tuning parameter, which controls the trade-off between the fidelity term and the penalties. Lastly, $w_k$ is the weighting vector assigned to each $\hat{t}_m$. See Hastie et al. (2001) for further elaboration of the PRSS.

The weighting vector of the GAM is a measure of the uncertainty that exists or was estimated in the predictions at the coarse scaled resolution. On the presumption that the uncertainty is known, the weights for each grid cell $k$ ($w_k$) are simply a vector in which the highest weighting is given to the most accurate $\hat{t}_m$ values, and so forth. In this study, the weights are the reciprocals of the variances of the coarse-scale grid-cell means.

dissever then shifts to the iteration stage. At the $l$-th iteration, in order to make the average of the $\hat{t}_m$ estimates of the finer resolution grid cells equal to the value of their encapsulating coarse resolution grid cell (i.e., to equal $\hat{T}_k$), the $\hat{t}_m^{l-1}$ are updated to $\hat{t}_m^{l}$ using the equation

$$\hat{t}_m^{l} = \hat{t}_m^{l-1} \times \frac{\hat{T}_k}{\frac{1}{E}\sum_{m \in k} \hat{t}_m^{l-1}}. \qquad (4)$$

For simplicity, the average of the $\hat{t}_m^{l}$ estimates within coarse grid cell $k$ will be denoted as $\hat{t}_k^{l}$. With the newly adjusted values, a new weighted nonlinear regression model (GAM) between the $\hat{t}_m^{l}$ and the suite of available covariates is fitted to all the grid cells. Iterations proceed until $\frac{1}{D}\sum_{m=1}^{D} |\hat{t}_m^{l} - \hat{t}_m^{l-1}|$ becomes equal to or decreases below a given stopping criterion value, SCV (the weights remain constant throughout). In the present study the SCV was set to 0.001. The algorithm dissever is summarised in Fig. 1.

Fig. 1. The downscaling algorithm written into the dissever programme. Initialisation: 1. l = 0; 2. within each k (m = 1, …, E and k = 1, …, B), set the fine-scale estimates equal to the encapsulating coarse value; 3. using a generalised additive model, regress the estimates on the x1, x2, …, xp covariates with the weights wk. Iteration: 4. l = l + 1; 5. update the model estimates (Eq. (4)); 6. using a generalised additive model, regress the updated estimates on the x1, x2, …, xp covariates with the weights wk; 7. if the stopping measure is greater than the SCV, repeat steps 4–6; otherwise, the iteration is terminated.
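The following sketch shows how the iteration stage (the mass-balance adjustment of Eq. (4) plus the stopping rule) could be expressed in R. It is an illustrative reconstruction, not the published dissever source: the object fine, its column names, and the formula are hypothetical, while the 100-iteration cap and the SCV of 0.001 follow the description above.

# Illustrative sketch of the dissever iteration stage (not the published source).
# Hypothetical set-up: 20 coarse cells, each encapsulating 25 fine cells (E = 25).
library(gam)
set.seed(1)

fine <- data.frame(k         = rep(1:20, each = 25),
                   elevation = runif(500, 200, 400),
                   ndvi      = runif(500, 0.1, 0.8))
truth     <- 20 + 0.02 * fine$elevation + 15 * fine$ndvi^2   # unknown fine-scale truth
fine$T_k  <- ave(truth, fine$k, FUN = mean)                  # coarse values T_hat_k
fine$w    <- 1                                               # weights (equal here)
fine$soc  <- fine$T_k                                        # initialisation: t_hat_m = T_hat_k

form <- soc ~ s(elevation) + s(ndvi)                         # hypothetical covariates
scv  <- 0.001

for (l in 1:100) {                       # dissever runs until the SCV is met or 100 iterations
  previous <- fine$soc

  # Mass-balance adjustment (Eq. (4)): rescale so the within-cell average equals T_hat_k
  fine$soc <- fine$soc * fine$T_k / ave(fine$soc, fine$k, FUN = mean)

  # Re-fit the weighted GAM and take its predictions as the updated estimates
  fit      <- gam(form, data = fine, weights = w)
  fine$soc <- as.numeric(predict(fit, newdata = fine))

  # Stopping rule: mean absolute change over all D fine cells
  if (mean(abs(fine$soc - previous)) <= scv) break
}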
2.2. Downscaling using dissever

For this study, dissever was scripted in the R programming language (Ihaka and Gentleman, 1996). It calls the R package gam (generalised additive models) (Hastie, 2011) for the regression steps of dissever. Operationally, dissever is structured as a function that requires two information inputs or objects: a data table containing the target variable information, associated weights (if known), and covariate data source information; and the GAM formula (an R object of the "formula" class) used for both the initialisation and the iteration steps.

The form of the table is a data frame of U columns by V rows. Each row is a grid cell location within the area of study. Together, all rows correspond to all the regular grid cell positions in the area of interest at the fine gridded scale. Generally, columns 1 and 2 of the data table will correspond to the spatial coordinates. Column 3 is an ordinal data-type column where each number corresponds to k (1, 2, 3, …, B) from the coarsely gridded data. There is an obvious row-number mismatch when arranging the coarse grid k to fit the corresponding number of rows at the fine gridded resolution. To overcome this, the coarse gridded information is fine gridded using a nearest neighbour resampling approach. Conceptually, this is just a matter of assigning the coarsely gridded cell values, here k, to each finely gridded cell it directly encapsulates in the spatial context. This fine gridding process is also repeated for the values of the target variable $\hat{T}_k$ and their weightings. The fine gridded attribute values and weightings are situated in columns 4 and 5, respectively. The remaining columns (6 to U) correspond to each of the covariate data sources that have been compiled for a study area. It is up to the user to determine which combination of covariates to include in the model; the combination can be controlled by selecting the column names that correspond to the covariate data sources required for inclusion, as illustrated in the sketch following this section.

Once the two objects required for dissever are initialised, it is activated and will run until the stopping criterion is met or 100 iterations have run, whichever comes first. Once the function terminates, a number of outputs are created and used for mapping outputs and diagnostic analyses of the downscaling performance. A table containing the $\hat{t}_m$ predictions with appended spatial coordinates is created, as is a table of the estimates of the average of all fine gridded values ($\hat{t}_k^{l}$) within their corresponding coarse grid cell k. In terms of quantifying the mass balance deviation, iterative estimates of the weighted root mean square error (wRMSE) are given between $\hat{T}_k$ and $\hat{t}_k^{l}$; the wRMSE is evaluated as the square root of the estimated weighted mean square error ($\widehat{\mathrm{wMSE}}$):

$$\widehat{\mathrm{wMSE}} = \frac{1}{\sum_{k=1}^{B} w_k} \sum_{k=1}^{B} w_k \left(\hat{T}_k - \hat{t}_k^{l}\right)^2. \qquad (5)$$

Furthermore, there are iterative outputs from dissever that are essentially diagnostic measures of each GAM fit. The measures are given in terms of deviance, which is similar to a residual sum of squares, and the proportion of deviance explained by each iterative GAM (1 − [residual deviance/null deviance]), which is comparable to the coefficient of determination (R²) from ordinary least-squares regression. Akaike's information criterion (AIC) (Akaike, 1973) is also generated from each GAM and is a useful measure for comparing models of differing complexity, which for dissever would be adjusted on the basis of the number and combination of covariates used for downscaling. The AIC is simply a measure of the relative goodness of fit of a model and is used for comparative purposes, whereby the "best" model is the one in which the AIC is minimised. The R script for dissever with associated materials and instructions can be obtained from the first author.
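As an illustration of the table structure described above, the sketch below assembles a data frame in the documented column order (coordinates, coarse-cell index k, fine-gridded target values, weights, covariates). The grid spacing, object names, values, and the simple block-membership assignment are all hypothetical, since the exact dissever interface is not reproduced here; it only mirrors the layout just described.

# Hypothetical assembly of the dissever input table (column order as in Section 2.2).
set.seed(1)

# Fine-grid centres over a toy 2 km x 2 km extent (columns 1-2: coordinates)
fine_xy <- expand.grid(x = seq(50, 1950, by = 100),
                       y = seq(50, 1950, by = 100))

# Column 3: index k of the encapsulating 1-km coarse block; the nearest-neighbour
# assignment reduces here to noting which 1-km block each fine centre falls in
fine_xy$k <- floor(fine_xy$x / 1000) + 2 * floor(fine_xy$y / 1000) + 1

# Columns 4-5: coarse target value T_k and its weight, copied down to every fine
# cell that the coarse cell encapsulates (illustrative values)
coarse <- data.frame(k   = 1:4,
                     T_k = c(22, 27, 25, 31),              # coarse SOC, kg m-3
                     w   = 1 / c(0.8, 1.5, 1.1, 0.6)^2)    # inverse variances
input  <- merge(fine_xy, coarse, by = "k")

# Columns 6 onwards: finely gridded covariates (illustrative random values)
input$elevation <- runif(nrow(input), 200, 400)
input$ndvi      <- runif(nrow(input), 0.1, 0.8)

# Reorder to the documented column layout: x, y, k, target, weight, covariates
input <- input[, c("x", "y", "k", "T_k", "w", "elevation", "ndvi")]
head(input)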
2.3. Case study

We demonstrate the use of dissever for downscaling a soil organic carbon (SOC) map featuring the variation of SOC (kg m⁻³) in the top 30 cm of the soil profile around Edgeroi, a 1500 km² agricultural district in north-western NSW, Australia (30.32°S, 149.78°E). This SOC map has a block support, consisting of 1-km by 1-km blocks centred on a square grid with a spacing of 1 km, hereafter referred to as the 1-km blocked map. This map was downscaled to 90-m by 90-m blocks centred on a square grid with a spacing of 90 m (the 90-m blocked map). The 1-km blocked map was created for the specific purpose of demonstrating the application of dissever: it is the product of a simple block averaging procedure (within 1-km blocks) of an existing block support map with the same support and grid cell spacing as the 90-m blocked map, hereafter referred to as the 90-m base map. The reason for this process was to build in a generalised validation whereby the 90-m blocked map (that resulted from using dissever) could ultimately be compared with the 90-m base map. Obviously, in a true situation where downscaling would be necessary, such a comparison would not be possible. Model-based methods in a digital soil mapping environment using legacy soil information and spatial interpolation procedures (McBratney et al., 2003) were used to create the 90-m base map. Block averaging the predictions of the 90-m base map into 1-km blocks effectively created a product that might be obtained from a remote-sensing device, or might have been interpolated to this resolution because of a lack of predictive covariates at finer resolutions.

For the downscaling, the covariates used by dissever were the same as those used to create the 90-m base map. These included those derived from a digital elevation model (DEM): elevation, slope (degrees), mid-slope position, terrain wetness index (TWI), and incoming solar radiation; those derived from Landsat ETM+ imagery (2009), which included the normalised difference vegetation index (NDVI) in addition to a series of band ratio derivatives: band 5/band 7, band 3/band 7, and band 3/band 2; and those derived from airborne gamma-spectrometry information, which included the channels that correspond to the abundances of both radiometric potassium and thorium. All covariate data sources were resolved to 90-m grid cell resolution. The weights $w_k$ for this study were the inverse variances of each 1-km block-averaged $\hat{T}_k$.
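To illustrate how the validation inputs described above could be constructed, the sketch below block-averages a hypothetical 90-m prediction map into 1-km blocks and derives inverse-variance weights from the standard errors of the block means. It is a simplified stand-in, under assumed names and toy data, for the digital soil mapping workflow actually used; in particular, the standard error of the block mean is taken as a crude proxy for the block-averaging prediction variance.

# Illustrative construction of a 1-km blocked map and its weights from a
# hypothetical 90-m map; 'base90' stands in for the 90-m base map predictions.
set.seed(1)
base90 <- data.frame(x   = runif(5000, 0, 10000),            # toy cell centres (m)
                     y   = runif(5000, 0, 10000),
                     soc = rnorm(5000, mean = 25, sd = 4))    # SOC, kg m-3

# Assign each fine cell to its encapsulating 1-km block
base90$k <- interaction(floor(base90$x / 1000), floor(base90$y / 1000),
                        drop = TRUE)

# Block averages (the 1-km blocked map) and the standard error of each block mean
block_mean <- tapply(base90$soc, base90$k, mean)
block_se   <- tapply(base90$soc, base90$k,
                     function(z) sd(z) / sqrt(length(z)))

# Weights as used in the case study: inverse variance of each block-averaged value
w_k <- 1 / block_se^2
head(data.frame(T_k = block_mean, w = w_k))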
3. Results

The 90-m base SOC map is shown in the top panel of Fig. 2. Upscaling this map using the block averaging procedure resulted in the map in the second panel of Fig. 2 (the 1-km blocked map) and the block average standard errors (last panel of Fig. 2).

Fig. 2. Top panel: SOC map displaying the variation of SOC in the top 30 cm across the Edgeroi study area, produced from the regression kriging procedure using observed soil data and a suite of environmental covariates. Middle panel: Upscaled map of the same target variable with 1-km by 1-km blocks centred onto a 1-km grid, produced by block averaging. Bottom panel: Map of the standard errors of predictions resulting from the block averaging procedure.

The 90-m blocked map, which resulted from running dissever, is displayed in the top panel of Fig. 3. The map in the second panel of Fig. 3 is that of the absolute difference between the values of the 90-m base map and the 90-m blocked map, represented as two classes of difference: <2 kg m⁻³ and ≥2 kg m⁻³. Based on these two classes, about 86% (≈140,000) of the grid cells have an absolute difference of <2 kg m⁻³. Absolute differences ranged effectively from 0 to 8 kg m⁻³. The third panel of Fig. 3 is a plot of the comparison between both fine scaled maps. Based on this comparison, there was a coefficient of determination (R²) of 90% (concordance: 0.94) between the soil map predictions and those resulting from the downscaling.

Fig. 3. Top panel: Downscaled SOC map created from dissever. Middle panel: Map of the absolute differences (given as two classes of difference) between the downscaled map (90-m blocked map) and the 90-m base map. Bottom panel: Concordance plot between the 90-m blocked map and the 90-m base map.

The real goal of downscaling is to reconstruct the variation of the target variable at a fine resolution within each coarsely resolved grid cell. To assess the quality of mass preservation, one of the diagnostic outputs provided by dissever is the weighted root mean square error (wRMSE). For this case study, the average deviation between the averages of all fine gridded values ($\hat{t}_k^{l}$) and their corresponding coarse grid cell values ($\hat{T}_k$) was 0.82 kg m⁻³. In a scenario running dissever without incorporating the weightings on the 1-km blocked map values, it was found that the wRMSE was larger, at 1.10 kg m⁻³. It was also found when running this scenario that there was a slight improvement in the R² (92%) and concordance (0.96) values when comparing the 90-m base map with the 90-m blocked map.
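The comparison reported above rests on a coefficient of determination and a concordance statistic. The paper does not state the formula used for the latter, but a common choice in this setting is Lin's concordance correlation coefficient, sketched below for two hypothetical prediction vectors; the data are invented and the statistic is offered only as one plausible way to reproduce such a comparison.

# Lin's concordance correlation coefficient between two hypothetical maps,
# offered as one plausible way to compute the "concordance" reported in the text.
set.seed(1)
base_map   <- rnorm(1000, mean = 25, sd = 4)       # stand-in 90-m base map values
downscaled <- base_map + rnorm(1000, sd = 1.5)     # stand-in 90-m blocked map values

concordance <- function(x, y) {
  # rho_c = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
  2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
}

concordance(base_map, downscaled)
r_squared <- cor(base_map, downscaled)^2           # coefficient of determination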
4. Discussion

The programme dissever was designed to be a general downscaling algorithm to suit a range of applications where scaling of information is required. This algorithm aims to determine the unknown spatial variation of a target variable at a fine resolution from an existing coarsely resolved map, using a suite of finely resolved covariate or auxiliary data as predictor variables. Rather than assuming a linear function to describe the relationship between the target variable and the available covariates, dissever predicts the target variable from an additive combination of nonlinear functions of the covariates, which is a more general model for estimating the unknown spatial variation. However, the GAM is not exclusive to dissever, and the algorithm can be simply modified to accommodate a user-defined function. For example, it is possible to replace this model (GAM) with other deterministic functions, which could include linear models, neural networks, or regression trees, to name a few possibilities. While the current version of dissever allows the user to input the level of uncertainty associated with the information being downscaled, accommodating these uncertainties using other deterministic functions has not been investigated. In the case of dissever, however, if the uncertainties are not known, downscaling will proceed using equal weights.

The wRMSE provides a quantitative measure to assess the mass balance deviation between the coarse gridded information and the downscaled fine gridded information, and the aim in any project is to minimise it. As discovered in this study, taking into account the uncertainties of the 1-km blocked map resulted in differing estimates of the wRMSE: 0.82 kg m⁻³ as opposed to 1.10 kg m⁻³. In the situation of using equal weightings, the wRMSE is essentially an unweighted RMSE. The logic of including the uncertainties in the downscaling process ensures that greater weighting is given to information that is more accurate and less weighting to less accurate information; the wRMSE measure also takes this into account. It is important to note that the wRMSE does not quantify the quality of downscaling, merely the deviation of mass balance. Thus downscaling may lead to poor results in situations where the fine grid cell variation has not been correctly predicted, even if the wRMSE is small. Nevertheless, in this study, the wRMSE appears to be quite acceptable in consideration of the concordance between the 90-m base map and the 90-m blocked map. This result was to be expected given that the combination of covariates used to create both maps was the same. This meets one of the implicit assumptions of downscaling using covariate data: that the covariates need to be strongly related to the target property being derived at the fine scaled resolution. The general features of both maps are comparable, and where there was discrepancy it was predominantly on the order of <2 kg m⁻³ (absolute difference). Determining the reasons why disseveration of the coarsely resolved soil data was better in some areas than in others warrants further investigation, but it is likely attributable to the fact that in areas where disseveration was poorest, the uncertainty of the 1-km blocked map was greatest. In addition to this factor, expert knowledge of the study area indicated that in areas where disseveration was poorest, there was greater spatial variation of the 90-m gridded covariate data inside each 1-km block. This feature highlights a common limitation of downscaling: irrespective of the approach, all the known variability of a target variable is seldom captured at a given scale (Wilby and Wigley, 1997). It will be useful, however, in further research and subsequent versioning of dissever, to determine a more sophisticated approach for assessing the uncertainties resulting from the downscaling, or in other words, for quantifying the confidence of the downscaled predictions. A currently perceived disadvantage of dissever (and other downscaling procedures) is that it introduces bias attributed to differences between the averages of the fine gridded target variable data and the corresponding coarse gridded data. Therefore, in addition to an incomplete knowledge of the variation of the target variable within each coarse grid cell, there is also this bias to account for when considering the magnitude of the prediction uncertainty resulting from downscaling.

With respect to the case study, the information relating to the covariates was known a priori to the downscaling. Such information will obviously not be available for the general application of downscaling, and thus it is up to expert opinion or empirical analysis to determine suitable covariates to include in dissever. Empirical analysis is exemplified by the wRMSE measure in addition to the deviances and AIC estimates that result directly from the GAM fits. The AIC in particular, and to a lesser extent the residual deviance, provide an objective tool for the user to decide which combination of covariates achieves the optimal downscaling outcome. As explained by Webster and McBratney (1989), the AIC is the statistical analogue of Occam's razor: minimising the AIC results in a fair compromise between goodness of fit and parsimony.
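As an example of the kind of empirical covariate screening described above, the sketch below fits two candidate GAMs of differing complexity to invented data and compares their AIC and proportion of deviance explained. The data and covariate names are illustrative only, and the sketch assumes that the fitted objects expose glm-like components (deviance, null.deviance) and support AIC(), as fits from the gam package, which inherit from glm, generally do; the comparison logic, not the values, is the point.

# Comparing candidate covariate combinations by AIC and deviance explained,
# using the gam package; data and column names are illustrative only.
library(gam)
set.seed(1)

dat <- data.frame(elevation = runif(300, 200, 400),
                  ndvi      = runif(300, 0.1, 0.8),
                  twi       = runif(300, 2, 12))
dat$soc <- 20 + 0.02 * dat$elevation + 15 * dat$ndvi^2 + rnorm(300)

fit_simple  <- gam(soc ~ s(elevation) + s(ndvi), data = dat)
fit_complex <- gam(soc ~ s(elevation) + s(ndvi) + s(twi), data = dat)

# Proportion of deviance explained: 1 - residual deviance / null deviance
dev_explained <- function(fit) 1 - deviance(fit) / fit$null.deviance

data.frame(model         = c("simple", "complex"),
           AIC           = c(AIC(fit_simple), AIC(fit_complex)),
           dev_explained = c(dev_explained(fit_simple), dev_explained(fit_complex)))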
Overall, this programme was tested on a dataset with ≈175,000 grid cell nodes. With a dataset of this size, downscaling terminated after 1–2 h. However, the computational time required depends on the complexity of the GAM used (increasing or decreasing the number of predictive covariates). Generally, its usefulness for downscaling has been demonstrated. Some expertise is required to arrange the spatial data to generate the input table required by this programme. More important, however, is the necessary technical and theoretical expertise to decide which auxiliary data sources dominate at the scale to which the target variable is being downscaled. In soil science, the ability to disseverate coarsely resolved soil moisture data from passive-microwave radiometer systems to 1 km or finer for updated soil water status information is the most intriguing application of this approach.

5. Conclusions

One issue of spatial information is that the scale at which it is available is often inadequate or does not correspond to the scale at which it is required. There are established methods for upscaling and downscaling that are able to address these issues. The programme dissever described in this paper is a new programme that builds on existing empirical methods of downscaling earth resource information. Principally, while attempting to maintain the mass balance with the available coarse scaled information, dissever, through an iterative algorithm, attempts to reconstruct the variation of a property at a prescribed fine resolution through an empirical function using auxiliary information. The features which differentiate it from other methods are:

- It generalises the multiple regression approach, replacing linear combinations of the predictors or covariates with combinations of nonparametric smoothing or fitting functions. This generalised fitting allows the possibility of accommodating nonlinear relationships between the target variable and the covariates.
- The target variable uncertainties at the coarse scale are incorporated into the downscaling algorithm, which subsequently moderate the outcomes of the downscaled products and the associated measures of mass balance deviation.

Acknowledgements

The authors appreciate the two anonymous reviewers whose perceptive comments improved our original version of dissever and the original submission of this paper.

References

Akaike, H., 1973. Information theory and an extension of the maximum likelihood principle. In: Petrov, B.N., Csaki, F. (Eds.), Proceedings of the Second International Symposium on Information Theory. Akademia Kiado, Budapest, pp. 267–281.
Bierkens, M.F.P., Finke, P.A., de Willigen, P., 2000. Upscaling and Downscaling Methods for Environmental Research. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Blöschl, G., 2005. Statistical upscaling and downscaling in hydrology. In: Anderson, M.G., McDonnell, J.J. (Eds.), Encyclopaedia of Hydrological Sciences. John Wiley & Sons, Chichester, West Sussex, England.
Gotway, C.A., Young, L.J., 2002. Combining incompatible spatial data. Journal of the American Statistical Association 97, 632–648.
Harvey, J.T., 2002. Population estimation models based on individual TM pixels. Photogrammetric Engineering and Remote Sensing 68 (11), 1181–1192.
Hastie, T.J., 2011. gam: Generalised additive models. R package version 1.04.1. http://CRAN.R-project.org/package=gam.
Hastie, T.J., Tibshirani, R.J., 1990. Generalized Additive Models. Chapman & Hall, London, England.
Hastie, T.J., Tibshirani, R.J., Friedman, J., 2001. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York, NY.
Ihaka, R., Gentleman, R., 1996. R: a language for data analysis and graphics. Journal of Computational and Graphical Statistics 5, 299–314.
IPCC, 2001. Climate Change 2001: The Scientific Basis. Contribution of Working Group 1 to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge.
Liu, D.S., Pu, R.L., 2008. Downscaling thermal infrared radiance for subpixel land surface temperature retrieval. Sensors 8, 2695–2706.
McBratney, A.B., 1998. Some considerations on methods for spatially aggregating and disaggregating soil information. Nutrient Cycling in Agroecosystems 50, 51–62.
McBratney, A.B., Mendonca-Santos, M.L., Minasny, B., 2003. On digital soil mapping. Geoderma 117, 3–52.
Merlin, O., Al Bitar, A., Walker, J.P., Kerr, Y., 2009. A sequential model for disaggregating near-surface soil moisture observations using multi-resolution thermal sensors. Remote Sensing of Environment 113, 2275–2284.
Ponce-Hernandez, R., Marriott, F.H.C., Beckett, P.H.T., 1986. An improved method for reconstructing a soil profile from analysis of a small number of samples. Journal of Soil Science 37, 455–467.
Schomburg, A., Venema, V., Lindau, R., Ament, F., Simmer, C., 2010. A downscaling scheme for atmospheric variables to drive soil–vegetation–atmosphere transfer models. Tellus Series B – Chemical and Physical Meteorology 62, 242–258.
Tobler, W.R., 1979. Smooth pycnophylactic interpolation for geographical regions. Journal of the American Statistical Association 74, 519–530.
Webster, R., McBratney, A.B., 1989. On the Akaike information criterion for choosing models for variograms of soil properties. Journal of Soil Science 40, 493–496.
Wilby, R.L., Wigley, T.M.L., 1997. Downscaling general circulation model output: a review of methods and limitations. Progress in Physical Geography 21, 530–548.
Wilby, R.L., Wigley, T.M.L., Conway, D., Jones, P.D., Hewitson, B.C., Main, J., Wilks, D.S., 1998. Statistical downscaling of general circulation model output: a comparison of methods. Water Resources Research 34, 2995–3008.
Yarnal, B., Comrie, A.C., Frakes, B., Brown, D.P., 2001. Developments and prospects in synoptic climatology. International Journal of Climatology 21, 1923–1950.
Yu, G., Di, L., Yang, W., 2008. Downscaling of global soil moisture using auxiliary data. In: Proceedings of the 2008 IEEE International Geoscience and Remote Sensing Symposium. Institute of Electrical and Electronics Engineers, Boston, Massachusetts, USA, pp. 230–233.