Ecological Applications, 21(2), 2011, pp. 335–342
© 2011 by the Ecological Society of America
Abstract. Maxent, one of the most commonly used methods for inferring species
distributions and environmental tolerances from occurrence data, allows users to fit models of
arbitrary complexity. Model complexity is typically constrained via a process known as L1
regularization, but at present little guidance is available for setting the appropriate level of
regularization, and the effects of inappropriately complex or simple models are largely
unknown. In this study, we demonstrate the use of information criterion approaches to setting
regularization in Maxent, and we compare models selected using information criteria to
models selected using other criteria that are common in the literature. We evaluate model
performance using occurrence data generated from a known "true" initial Maxent model,
using several different metrics for model quality and transferability. We demonstrate that
models that are inappropriately complex or inappropriately simple show reduced ability to
infer habitat quality, reduced ability to infer the relative importance of variables in
constraining species’ distributions, and reduced transferability to other time periods. We also
demonstrate that information criteria may offer significant advantages over the methods
commonly used in the literature.
Key words: Akaike information criterion; AUC; Bayesian information criterion; environmental niche
modeling; Maxent; maximum entropy; model complexity; model transferability; niche shifts; species
distribution modeling.
Manuscript received 9 June 2010; revised 6 July 2010; accepted 15 July 2010. Corresponding Editor: T. J. Stohlgren. E-mail: dan.l.warren@gmail.com

2005, Menke et al. 2009), in many cases ENM methods are employed because they are quite simply the only tools available.

In this study, we will look at the effects of model complexity on ENMs constructed using one of the most commonly employed tools for this purpose, Maxent (Phillips et al. 2006). The first term represents the log loss, while the second term consists of a set of weights (λj) for the j features used to build the model and a set of weighting penalties, βj. Users of Maxent can specify values of βj that penalize the addition of extra parameters to suit their particular application, data, and biological intuition. Recent releases of the Maxent software use different default β values for each type of variable (linear, quadratic, hinge, and so on; Phillips and Dudík 2008), which are set to values that were obtained using empirical data for 226 species from six different geographic regions. The ability to change the settings for each variable type is available, but users typically adjust regularization via a single β setting that acts as a multiplier for the default values, if they explore regularization at all. For the present study we manipulate only this regularization multiplier and refer to it simply as β.
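The "first term" and "second term" referred to above are the two components of Maxent's L1-regularized objective: a log loss measuring fit to the occurrence data, and a penalty on the magnitudes of the feature weights. As a sketch for orientation (the notation follows Phillips and Dudík 2008; this display is a reconstruction rather than a verbatim equation from that paper):

```latex
% Sketch of the L1-regularized Maxent objective (after Phillips and Dudik 2008).
% m = number of occurrence localities, f_j = features, \lambda_j = feature weights,
% \beta_j = per-feature regularization penalties, Z_\lambda = normalizing constant.
\[
  \min_{\lambda}\;
  \underbrace{-\frac{1}{m}\sum_{i=1}^{m}\ln P_{\lambda}(x_i)}_{\text{log loss}}
  \;+\;
  \underbrace{\sum_{j}\beta_j\,\lvert\lambda_j\rvert}_{\text{regularization}},
  \qquad
  P_{\lambda}(x)=\frac{\exp\!\left(\sum_{j}\lambda_j f_j(x)\right)}{Z_{\lambda}}
\]
```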
While L1 regularization allows users to constrain over-parameterization, they must still decide on a criterion by which to choose an appropriate value of β for their data. Phillips and Dudík (2008) suggest that the default settings in Maxent are likely to be appropriate for many modeling efforts, and the empirical data presented in that study lend credibility to this statement. However, those default settings were obtained using criteria that did not penalize model complexity per se, and models were evaluated only in their performance on independent test data. This type of model evaluation is one of the cornerstones of a school of statistical modeling known as algorithmic modeling (AM; Breiman 2001). While the more traditional "data modeling" (DM) approach starts with a family of models that are specified a priori and attempts to select the model or set of models that best fit the data, AM treats the true model as an unknown (and potentially quite complex) reality that is difficult or impossible to truly estimate. Consequently, while DM investigators usually evaluate models using goodness-of-fit metrics and place a high priority on model simplicity, AM users more typically focus on the ability of the model to predict independent test data and tend to be less directly focused on controlling over-parameterization. As normally applied, the Maxent software falls firmly under the AM approach, and previous studies evaluating its performance draw primarily from a model selection toolbox typical of that school of thought.

This way of thinking about ENMs may be entirely appropriate for users whose primary objective is to predict the distribution of their species within the same set of environments available for data collection. However, some modelers using Maxent may have more cause to be concerned with model complexity per se, and to be less satisfied with using predictive accuracy on test data as a criterion for model selection. For applications 2, 3, and 4 above, the predictions of suitability on training and test data are only a projection of the true item of interest (i.e., the estimate of the species' niche) onto one distribution of environmental variables, and those predictions often cannot be tested in the environmental space of primary interest (e.g., a new geographic region [2], a different time period [3], or the space of all possible environments [4]). In addition to these issues, some authors have expressed concern regarding whether randomly withheld test data can truly be considered independent when training and test data are both subject to similar spatial sampling biases, as is usually the case for the occurrence data used in ENM construction (Veloz 2009; A. Radosavljevic and R. P. Anderson, personal communication).

In this study, we examine the effects of regularization on model performance using a simulation approach in which the underlying model generating species occurrences (the "true" environmental niche) is known, and evaluate a broader range of model performance and selection criteria than has previously been used. The purpose of these model selection criteria is not to replace the regularization currently available in Maxent, but rather to determine what methods users might employ to help them best use Maxent's existing regularization functions. We acknowledge that the use of AICc and BIC here is somewhat at odds with common practice for these criteria (Burnham and Anderson 2002); rather than specifying a set of models a priori, we simply specify a range of levels of complexity and allow the Maxent algorithm to control parameterization of the models. This is therefore still an AM approach to model construction, but with the addition of model selection heuristics from the DM literature. We therefore make no attempt to justify AICc and BIC from first principles. Instead, we simply focus on testing their utility as alternative heuristics to the more commonly used AUCTrain and AUCTest (see Methods: Criteria for model selection for a description of these heuristics). However, we note that the philosophy underlying information criterion approaches and L1 regularization is the same: we should reward models that fit data while penalizing unnecessary parameters. The primary conceptual difference is that AICc and BIC provide explicit criteria for selecting models of appropriate complexity, while L1 regularization leaves that choice to the user.

METHODS

In order to reliably assess the ability of Maxent to estimate species' environmental tolerances or to estimate suitability of habitat, we must start by knowing their true values. However, environmental niche modeling primarily exists because these are difficult, and frequently impossible, quantities to estimate. For that reason, we present a simulation approach which is now available in ENMTools under the name "resample from raster" (Warren et al. 2010). In this approach, we start with a Maxent ENM built using occurrence data from a real biological species. We treat that model as "truth" for the sake of simulation, and sample geographic localities with a probability proportional to the raw estimate of habitat suitability in the grid cell they represent. These occurrence data are then used as input for Maxent, and the ability of the program to infer the underlying model is evaluated. Since the original model was generated from Maxent, we know a priori that the software is capable of inferring that model precisely and that a perfect mapping of the true model to the geographic distribution of habitat suitability scores is possible. As such, this represents a relatively simple test of Maxent's ability to infer biological truth. By varying the β parameter used to construct the "true" ENM from empirical data, we can generate models and distributions of similarity scores for a variety of scenarios in which the "true" niche varies in complexity. Data simulated in this fashion are different from those obtained from field studies in that they are sampled with no spatial bias. Because of this, Maxent must be run with the "addsamplestobackground" option disabled in order to achieve statistically consistent results. A graphical representation of the simulation and testing process is given in Appendix D.
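A minimal sketch of this sampling step, assuming the true model's suitability surface is held as a NumPy array (the function and variable names are illustrative rather than the ENMTools implementation):

```python
import numpy as np

def resample_from_raster(suitability, n_points, rng=None):
    """Draw occurrence cells with probability proportional to raw habitat suitability.

    suitability: 2-D array of raw Maxent suitability scores (NaN for no-data cells).
    Returns an (n_points, 2) array of (row, col) indices for the sampled cells.
    """
    rng = np.random.default_rng(rng)
    scores = np.nan_to_num(np.asarray(suitability, dtype=float), nan=0.0).ravel()
    probs = scores / scores.sum()                              # normalize to a sampling distribution
    cells = rng.choice(scores.size, size=n_points, p=probs)    # sample cell indices
    rows, cols = np.unravel_index(cells, np.asarray(suitability).shape)
    return np.column_stack([rows, cols])

# Example: 100 simulated presences from a toy suitability surface.
toy_surface = np.random.default_rng(42).random((100, 100))
occurrences = resample_from_raster(toy_surface, n_points=100, rng=1)
```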
Data

We present analyses for 51 different species from California. All occurrence data were obtained from the Museum of Vertebrate Zoology at the University of California, Berkeley. Environment layers at a resolution of 30 arc seconds were obtained from Worldclim (Hijmans et al. 2005) and the California Gap Analysis Project (Davis et al. 1998), and trimmed to the state boundaries of California using ArcGIS (ESRI 2006). Layers used included slope, altitude, gap vegetation type, and the 19 layers commonly referred to as the "Bioclim" layers, which represent various aspects of temperature, precipitation, and seasonality. Although many of these variables are spatially correlated, we retained the entire set in order to determine the effects of different modeling approaches on variable selection. In order to estimate the transferability of models, we also projected true and inferred models onto future climate predictions for California under the CSIRO model, a2a climate scenario. These data were also obtained from Worldclim (Hijmans et al. 2005). Because no gap vegetation projections were available under this climate scenario, we treated the vegetation as unchanging over this time period.

Analyses

For each of the 51 species, we generated a "true" model at ten different levels of complexity by setting β at 1, 3, 5, 7, 9, 11, 13, 15, 17, and 19. We sampled 100 occurrence points from each of those "true" models, and constructed models from those occurrence points using the same ten values for β. We then repeated this process using 1000 simulated occurrence points in order to determine the effects of sample size on model performance and model selection. Twenty percent of occurrence records were withheld from each model to be used as independent test data. All other Maxent settings relating to model construction (with the exception of "add samples to background," as mentioned above) were left at their default values.
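The design described above amounts to a grid of candidate regularization multipliers and a random 20% withholding of occurrence records. A small illustrative sketch (the splitting helper is hypothetical; "betamultiplier" mirrors the name of Maxent's multiplier setting):

```python
import numpy as np

BETAS = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]   # candidate regularization multipliers

def split_train_test(points, test_fraction=0.20, rng=None):
    """Withhold a random 20% of occurrence records as independent test data."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(points))
    n_test = int(round(test_fraction * len(points)))
    return points[idx[n_test:]], points[idx[:n_test]]   # (training records, test records)

# Toy example: 100 simulated occurrences as (row, col) cell indices.
occurrences = np.random.default_rng(0).integers(0, 100, size=(100, 2))
train, test = split_train_test(occurrences, rng=1)

# One Maxent run per candidate beta value.
candidate_settings = [{"betamultiplier": b} for b in BETAS]
```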
Evaluating model performance

We evaluate model performance using a variety of metrics (Appendix A). The I metric measures the ability of the model to estimate the true suitability of habitat (Warren et al. 2008), while the RR metric measures its ability to estimate the relative ranking of habitat patches. The M metric measures the ability of Maxent to determine the relative importance of environmental variables. The IProj and RRProj metrics measure the effectiveness of Maxent at estimating true suitability or relative ranking of habitat in a different time period, respectively. All metrics range from 1, where the truth is estimated perfectly, to 0, where the inferred model shows no similarity to the true model. We note that these metrics may be broadly useful for comparing models, but they are generally not applicable to model selection using real data as they can only be calculated when the true niche and suitability of habitat are known.
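For reference, the I statistic of Warren et al. (2008) can be computed directly from two suitability surfaces once each is standardized to sum to 1. A minimal sketch (the ENMTools implementation should be treated as authoritative):

```python
import numpy as np

def i_similarity(suitability_a, suitability_b):
    """Warren et al. (2008) I statistic between two suitability surfaces.

    Each surface is standardized to sum to 1; I = 1 - 0.5 * sum((sqrt(pA) - sqrt(pB))**2),
    ranging from 0 (no similarity) to 1 (identical standardized surfaces).
    """
    p_a = np.nan_to_num(np.asarray(suitability_a, dtype=float), nan=0.0)
    p_b = np.nan_to_num(np.asarray(suitability_b, dtype=float), nan=0.0)
    p_a, p_b = p_a / p_a.sum(), p_b / p_b.sum()
    return 1.0 - 0.5 * np.sum((np.sqrt(p_a) - np.sqrt(p_b)) ** 2)

# Identical surfaces give I = 1.
grid = np.random.default_rng(0).random((50, 50))
assert np.isclose(i_similarity(grid, grid), 1.0)
```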
Criteria for model selection

Finding that under- or over-parameterization are problematic is of great concern, but of limited utility, in a world where the true complexity of the environmental niche is unknown and perhaps unknowable. We therefore test the performance of different methods of selecting from a set of models of varying complexity. They are:

1) Information criteria. Here we implement the Bayesian (BIC) and sample size corrected Akaike (AICc) information criteria for Maxent ENMs (Akaike 1974, Burnham and Anderson 2002:284). These metrics are assessed by standardizing raw scores for each ENM so that all scores within the geographic space sum to 1 and then calculating the likelihood of the data given each ENM by taking the product of the suitability scores for each grid cell containing a presence. Both training and test localities are used in calculating likelihoods. The number of parameters is measured simply by counting all parameters with a nonzero weight in the lambda file, a small text file containing model details that Maxent produces as part of the modeling process. We exclude models with zero parameters (which occur in some cases with small sample sizes and extremely high values for β) and models with more parameters than data points (which violate the assumptions of AICc). We note that Maxent is capable of producing marginal suitability functions that take on a variety of shapes. Some of these functions may involve many more parameters than others (e.g., hinge features as compared to linear features), and as such are likely to be penalized more severely by information criteria. Functions for calculating AICc and BIC are now available in ENMTools (Warren et al. 2010); a sketch of the calculation is given after the list of criteria below.

2) Maximum training AUC. This method selects the model that produces the maximum value for the area under the receiver operating characteristic curve (AUC) calculated using the data used in model construction. This value is often reported in the literature as an estimate of model quality, but this interpretation is questionable as AUCTrain is generally expected to favor models with more parameters.

3) Maximum test AUC. This method selects the model that produces the maximum AUC value on randomly selected test data that was withheld from model construction. This method is generally thought not to suffer from the same overfitting problems as AUCTrain, because overfitting the model to the training data should not necessarily improve the fit to independent test data.

4) Minimum difference between training and test AUC (AUCDiff). This metric is based on the intuitive notion that overfit models should generally perform well on training data but poorly on test data (S. Sarkar, S. E. Strutz, D. M. Frank, C.-L. Rivaldi, B. Sissel, and V. Sanchez-Cordero, unpublished manuscript). By minimizing the difference between training and test data, we minimize the risk that our model is over-parameterized in such a way as to be overly specific to the training data.
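As a sketch of the information-criterion calculation described under criterion 1 above (standard AICc and BIC formulas are assumed; parameter counting from the lambda file is left to the caller, and the ENMTools functions remain the reference implementation):

```python
import numpy as np

def maxent_information_criteria(raw_scores, presence_cells, n_params):
    """AICc and BIC for a single Maxent ENM, following criterion 1 above.

    raw_scores:     2-D array of raw Maxent output over the study region (NaN = no data).
    presence_cells: (row, col) grid cells containing presences (training and test combined).
    n_params:       number of features with nonzero weights in the Maxent lambda file.
    """
    scores = np.nan_to_num(np.asarray(raw_scores, dtype=float), nan=0.0)
    standardized = scores / scores.sum()               # scores sum to 1 over the geographic space
    rows, cols = zip(*presence_cells)
    log_lik = np.log(standardized[rows, cols]).sum()   # log of the product of suitability scores
    n, k = len(presence_cells), n_params
    if k == 0 or k >= n - 1:
        raise ValueError("excluded: zero parameters, or too many parameters for the AICc correction")
    aicc = 2 * k - 2 * log_lik + (2 * k * (k + 1)) / (n - k - 1)
    bic = k * np.log(n) - 2 * log_lik
    return aicc, bic

# Toy usage with an invented surface and five presence cells.
surface = np.random.default_rng(0).random((50, 50))
aicc, bic = maxent_information_criteria(surface, [(10, 12), (3, 40), (25, 25), (7, 7), (44, 9)], n_params=2)
```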
RESULTS

The effects of over- and under-parameterization can be seen in Fig. 1. Models constructed using the same value for β may differ in the number of parameters they

criterion-based methods outperformed all AUC-based methods for all metrics of model quality regardless of sample size, with one exception: AUCDiff outperformed AICc on selecting the appropriate number of parameters when N = 1000 (Fig. 2A), which is also the only comparison in which BIC outperformed AICc, and the only comparison where AUCDiff outperformed AUCTest. AUCTrain was the worst-performing method of model selection in all comparisons except for M and I when N = 100; for these comparisons, AUCDiff exhibited the poorest performance. We conducted a separate set of analyses to study the effects of model over- and under-parameterization on inferences of niche breadth and changes in niche breadth over time. These analyses are presented in Appendix C and discussed below, and should be of interest to investigators using Maxent to predict range contraction or expansion as a function of climate change.

FIG. 1. Extent of over- or under-parameterization and model performance for 100 (left column) and 1000 (right) simulated occurrence points. Cooler colors indicate better performance. Color scales are calibrated individually for each column. In each comparison, the number of parameters in the true model is indicated on the x-axis while the number of parameters in the inferred model is given on the y-axis. The top left panel illustrates the ability of models to infer the relative importance of environmental variables. The second and third left panels demonstrate the effects of complexity on the ability to determine the true suitability and relative ranking of habitat in the present day, while the right panels show the effects of complexity on our ability to infer the distribution of suitable habitat in a different time period.
skepticism. We find that AICc exhibits the best average performance on selecting models that estimate the true model complexity when N = 100 (panel A), although not when N = 1000. In addition, models preferred by AICc more accurately estimate the relative importance of variables (panel B) as well as the suitability of habitat both in the training region (panels C and D) and when models are transferred to a different time period (panels E and F). The average performance of the information criteria (AICc, BIC) is slightly greater than that of AUCTest and AUCDiff in the larger data sets, but the difference is considerably greater when fewer data points are available, indicating that information criterion-based approaches to model selection may be particularly useful when sample sizes are small. In addition, we see that the amount of variation in the performance of the information criterion methods is much smaller than that seen in the AUC-based methods, particularly in the lower tail. This indicates that AICc and BIC are selecting the worst models from each analysis less often than are the AUC-based methods. In this study, model selection criteria were given a total of 1020 trials, selecting from a set of 10 models in each replicate. The worst-performing model selection criterion, AUCTrain, chose the default behavior of Maxent (β = 1) in all but ten replicates. Therefore the poor behavior of AUCTrain is approximately what would be seen if no model selection criterion was applied at all. Although AIC and AICc have been used for ENMs before (Hao et al. 2007, Dormann et al. 2008, Hengl et al. 2009), little information about their actual performance has been available. Here we demonstrate that they may make a valuable contribution to the toolbox of investigators using Maxent to model species distributions. Whether these benefits extend to other modeling methods is unknown.
FIG. 2. Relative performance of model selection criteria for 1000 (open squares) and 100 (solid circles) simulated occurrence
points. (A) The performance of each criterion in regard to over- or under-parameterization; (B–E) the performance of the selected
models using a variety of metrics. Because ‘‘true’’ models were built for many different species using different levels of complexity,
we must scale the performance of each selected model by the models that were available for that combination of species and
complexity. For this reason, values on the y-axis in panels B–E are given as [A – min(A)]/[max(A) – min(A)], where A is the metric in
question and max(A) and min(A) are the maximum and minimum values of that metric over all models constructed using that
‘‘true’’ model. In this formulation each model gets a score of 1 if it is the best performing of that set of models and a score of 0 if it is
the worst. Metrics are: I, the ability of the model to estimate the true suitability of habitat; RR, the ability of the model to estimate
the relative ranking of habitat patches; M, the ability of the Maxent model to determine the relative importance of environmental
variables; IProj and RRProj, the effectiveness of Maxent at estimating true suitability or relative ranking of habitat in a different time
period, respectively. See Methods: Criteria for model selection for descriptions of the criteria listed on the x-axes. Error bars indicate
the 10th and 90th percentiles of the distribution of performance scores.
Transferring models between geographic regions or periods of time represents an additional set of challenges for ENM methods. The presence of combinations of climate variables in the projected region for which there is no analog in the training region necessitates extrapolation of those models into a set of conditions for which no presence or pseudo-absence data were available during model construction. Nevertheless, ENMs are frequently used to study the effects of climate change on habitat suitability or to estimate the ability of species to expand into new habitat. Using our simulation approach, we are able to assess the transferability of models in comparison to the true change in habitat suitability. We find that over-parameterized models tend to underestimate the availability of suitable habitat when transferred into a new time period, while under-parameterized models tend to overestimate it (Appendix C, middle row). However, the same models show identical behavior in the present day (Appendix C, top row), and the two cancel each other to a great extent so that the bias in the inferred level of change is comparatively minor (Appendix C, bottom row) although still statistically highly significant (P < 0.001).
Although it is tempting to conclude that inferences of niche expansion and contraction as a function of climate change are therefore fairly robust to over- or under-parameterization, the magnitude of the effect is small only because the substantial biases caused by inappropriately complex or simple models mostly cancel each other out when subtracting present from future niche breadth. The actual predictions of habitat suitability, relative rank, and niche breadth in the future climate scenario are still of poor quality compared to models of appropriate complexity. It is only by comparison to similarly biased models in the training region that they appear to yield an approximately correct signal of environmental niche contraction or expansion. We note that the breadth metric used here (Levins 1968) relies on standardized suitability scores without the application of a threshold, and as such it may obscure changes in the average suitability of habitat over time. It is therefore possible that significant range expansions and contractions, or artifactual inferences of these phenomena, could be obscured by this method. Spurious inferences of range contraction or expansion may therefore be more widespread and severe than this study indicates.

Many investigators have treated model complexity in Maxent as unimportant, assuming that unnecessary parameters were of small effect and could safely be ignored. Our results demonstrate that model complexity affects model performance for many, if not all, applications. We find that over-parameterization is less problematic than under-parameterization in most cases. It is tempting to conclude that models should be allowed to be arbitrarily complex, but we suggest that the reduced penalty for over-parameterization is less important than the observation that models of appropriate complexity perform best. The one exception is when model complexity approaches or exceeds the number of occurrences available for model construction, in which case overly simplistic models perform better (Appendix B). We suggest that the effects of regularization on model structure and performance should always be evaluated, if for no other reason than that parsimony dictates that our models should be no more complicated than need be to explain our observations.

Ideally, investigators should attempt to construct models of approximately the true level of complexity. However, the very absence of this information is one of the primary reasons that ENMs are used: it is difficult to argue that they are superior to experimental physiological studies in any sense except for convenience and tractability. We therefore need criteria by which to select models of appropriate levels of complexity when the truth is unknown, and in this study we demonstrate that information criterion-based metrics perform well with Maxent ENMs.

Maxent (Phillips et al. 2006) is rapidly becoming one of the most widely used software packages for environmental niche modeling for a variety of good reasons. However, as with any analytical method that is easy to use, it is easy for users to become complacent in their modeling efforts and uncritically accept the models it produces without exploring the effects of the modeling process. To date, this has been the case with respect to model complexity for many users of Maxent. Here we demonstrate conclusively that model complexity affects users' ability to infer the suitability of habitat both with and without thresholds, the relative importance of environmental variables to determining species' distributions, estimates of the breadth of species' environmental niches, and the transferability of models. We also demonstrate that information-theoretic approaches to selecting models offer significant advantages over the methods that are commonly employed by Maxent users. Although it may be premature to suggest that these methods replace the more traditional model selection methods used with Maxent, the current study makes a clear case for their inclusion in the Maxent modeler's toolbox.

ACKNOWLEDGMENTS

The authors thank the following people for comments on early drafts of this manuscript: Steven Phillips, A. Townsend Peterson, William Godsoe, Sam Veloz, Robert Anderson, Sean Maher, Sahotra Sarkar, Stavana Strutz, Ophelia Wang, Blake Sissel, Matthew Moskwik, Rich Glor, Robert Hijmans, and Michael Turelli. We also thank two anonymous reviewers for their helpful and rapid feedback. This work was funded by grants from the UCD Center for Population Biology, the California Department of Fish and Game, NSF postdoctoral fellowship DBI-0905701 to D. Warren, and NSF grant DEB-0815145.
LITERATURE CITED

Akaike, H. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control 19:716–723.

Breiman, L. 2001. Statistical modeling: the two cultures. Statistical Science 16:199–231.

Burnham, K. P., and D. R. Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach. Second edition. Springer-Verlag, Berlin, Germany.

Davis, F. W., D. M. Stoms, A. D. Hollander, K. A. Thomas, P. A. Stine, D. Odion, M. I. Borchert, J. H. Thorne, M. V. Gray, R. E. Walker, K. Warner, and J. Graae. 1998. The California gap analysis project: final report. University of California, Santa Barbara, California, USA.

Dormann, C. F., O. Purschke, J. R. García Márquez, S. Lautenbach, and B. Schröder. 2008. Components of uncertainty in species distribution analysis: a case study of the Great Grey Shrike. Ecology 89:3371–3386.

ESRI. 2006. ArcGIS 9.2. Environmental Systems Research Institute, Redlands, California, USA.

Godsoe, W. 2010. I can't define the niche but I know it when I see it: a formal link between statistical theory and the ecological niche. Oikos 119:53–60.

Hampe, A. 2004. Bioclimatic models: what they detect and what they hide. Global Ecology and Biogeography 11:469–471.

Hao, C., C. LiJun, and T. P. Albright. 2007. Predicting the potential distribution of invasive exotic species using GIS and information-theoretic approaches: a case of ragweed (Ambrosia artemisiifolia L.) distribution in China. Chinese Science Bulletin 52:1223–1230.

Hengl, T., H. Sierdsema, A. Radovic, and A. Dilo. 2009. Spatial prediction of species' distributions from occurrence-only records: combining point pattern analysis, ENFA and regression-kriging. Ecological Modeling 24:3499–3511.

Hijmans, R. J., S. E. Cameron, J. L. Parra, P. G. Jones, and A. Jarvis. 2005. Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology 25:1965–1978.

Levins, R. 1968. Evolution in changing environments. Monographs in Population Biology. Volume 2. Princeton University Press, Princeton, New Jersey, USA.

Menke, S. B., D. A. Holway, R. N. Fisher, and W. Jetz. 2009. Characterizing and predicting species distributions across environments and scales: Argentine ant occurrences in the eye of the beholder. Global Ecology and Biogeography 18:50–63.

Phillips, S. J., R. P. Anderson, and R. E. Schapire. 2006. Maximum entropy modeling of species geographic distributions. Ecological Modeling 190:231–259.

Phillips, S. J., and M. M. Dudík. 2008. Modeling of species distributions with Maxent: new extensions and a comprehensive evaluation. Ecography 31:161–175.

Soberón, J., and A. T. Peterson. 2005. Interpretation of models of fundamental ecological niches and species' distributional areas. Biodiversity Informatics 2:1–10.

Veloz, S. 2009. Spatially autocorrelated sampling falsely inflates measures of accuracy for presence-only niche models. Journal of Biogeography 36:2290–2299.

Warren, D. L., R. E. Glor, and M. Turelli. 2008. Environmental niche equivalency versus conservatism: quantitative approaches to niche evolution. Evolution 62:2868–2883.

Warren, D. L., R. E. Glor, and M. Turelli. 2010. ENMTools: a toolbox for comparative studies of environmental niche models. Ecography. [doi: 10.1111/j.1600-0587.2009.06142.x]
APPENDIX A
APPENDIX B
Parameters exceeding data points (Ecological Archives A021-018-A2).
APPENDIX C
Niche breadth and transferability (Ecological Archives A021-018-A3).
APPENDIX D
Simulation and testing process (Ecological Archives A021-018-A4).