MineSight® for Modelers ‐ Introduction to Geostatistics
Geostatistics
Objective: To make you familiar with the basic concepts of statistics, and the geostatistical tools available to solve problems in geology and mining of an ore deposit.
Date:
Instructor:
Topics
• Basic Statistics
• Data Analysis and Display
• Analysis of Spatial Continuity (variogram)
• Model interpolation with inverse distance and ordinary kriging
• Model statistics / resource reporting by geology, bench, etc.

Classical Statistics
• Sample values are realizations of a random variable
• Samples are considered independent
• Relative positions of the samples are ignored
• Does not make use of the spatial correlation of samples
Statistics
Geostatistics

Universe
The source of all possible data (for example, an ore deposit can be defined as the universe; sometimes a universe may not have well-defined boundaries).

Sampling Unit
Part of the universe on which a measurement is made (can be core samples, channel samples, grab samples, etc.). One must specify the sampling unit when making statements about a universe.

Support
• Characteristics of the sampling unit
• Refers to the size, shape and orientation of the sample (for example, drillhole core samples will not have the same support as blasthole samples)

Population
• Like universe, population refers to the total category under consideration
• It is possible to have different populations within the same universe (for example, population of drillhole grades versus population of blasthole grades; sampling unit and support must be specified)
Random Variable
A variable whose values are randomly generated according to a probabilistic mechanism (for example, the outcome of a coin toss, or the grade of a core sample in a diamond drill hole).

Frequency Distribution
Probability Density Function (pdf)
• Discrete:
1. f(xi) ≥ 0 for xi ∈ R (R is the domain)
2. Σ f(xi) = 1
• Continuous:
1. f(x) ≥ 0
2. ∫ f(x) dx = 1
Frequency Distribution
Cumulative Distribution Function (cdf)
Proportion of the population below a certain value:
F(x) = P(X ≤ x)
1. 0 ≤ F(x) ≤ 1 for all x
2. F(x) is non-decreasing
3. F(−∞) = 0 and F(∞) = 1

Example
Assume the following population of measurements:
1, 7, 1, 3, 2, 3, 11, 1, 7, 5
(PDF and CDF plots of the example population)
Descriptive Measures
Measures of location:
• Mean
• Median
• Mode
• Min, Max
• Quartiles
• Percentiles

Mean
m = (1/n) Σ xi, i = 1,...,n
Arithmetic average of the data values
Mean
What is the arithmetic mean of the example population:
1, 7, 1, 3, 2, 3, 11, 1, 7, 5
m = (1 + 7 + 1 + 3 + 2 + 3 + 11 + 1 + 7 + 5)/10 = 41/10 = 4.1

Weighted Mean
Assume the weight (for example, the length of sample) is 1 for all values except 11, which has a weight of 0.5.
Sum of weights = 9.5
m = {1·(1 + 7 + 1 + 3 + 2 + 3 + 1 + 7 + 5) + 0.5·11}/9.5 = 3.74
Mean
What is the mean if we remove the highest value?
m = (1 + 7 + 1 + 3 + 2 + 3 + 1 + 7 + 5)/9 = 30/9 = 3.33

Median
Midpoint of the data values if they are sorted in increasing order:
M = x(n+1)/2 if n is odd
M = [x(n/2) + x(n/2+1)]/2 if n is even
Median
What is the median of the example population?
Sort data in increasing order:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
Since n = 10 is even, M = (3 + 3)/2 = 3

Other
• Mode
• Minimum
• Maximum
• Quartiles
• Deciles
• Percentiles
• Quantiles
Mode
The value that occurs most frequently.
In our example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
Mode = 1

Quartiles
Split data in quarters:
Q1 = 1st quartile
Q3 = 3rd quartile
In the example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
Q1 = 1
Q3 = 7

Deciles, Percentiles, Quantiles
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
D1 = 1
D3 = 1
D9 = 7
Mode on the PDF / Median on the CDF
(Plots: the mode — here also the minimum — marks the peak of the PDF; the median is read from the CDF at 0.5; min and max bound the tails)
Descriptive Measures
Measures of spread: variance

Variance
s² = 1/(n−1) Σ (xi − m)², i = 1,...,n

Example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
m = 4.1
s² = 1/9 {(1−4.1)² + (1−4.1)² + (1−4.1)² + (2−4.1)² + (3−4.1)² + (3−4.1)² + (5−4.1)² + (7−4.1)² + (7−4.1)² + (11−4.1)²}
= 1/9 (9.61 + 9.61 + 9.61 + 4.41 + 1.21 + 1.21 + 0.81 + 8.41 + 8.41 + 47.61)
= 100.9/9 = 11.21

Remove high value:
1, 1, 1, 2, 3, 3, 5, 7, 7
m = 3.33
s² = 1/8 {(1−3.33)² + (1−3.33)² + (1−3.33)² + (2−3.33)² + (3−3.33)² + (3−3.33)² + (5−3.33)² + (7−3.33)² + (7−3.33)²}
= 1/8 (5.43 + 5.43 + 5.43 + 1.769 + 0.109 + 0.109 + 2.789 + 13.469 + 13.469)
= 48/8 = 6
Standard Deviation
s = √s²
• Has the same units as the variable
• Never negative

Example:
s² = 11.21 → s = 3.348
s² = 6 → s = 2.445
Interquartile Range
IQR = Q3 − Q1
Not used in mining very often.

Descriptive Measures
Measures of shape:
• Skewness
• Peakedness (kurtosis)
• Coefficient of Variation
Skewness
Skewness = [1/n Σ (xi − m)³] / s³
• Third moment about the mean divided by the cube of the std. dev.
• Positive — tail to the right
• Negative — tail to the left
(Figure: positively skewed distribution)
Skewness
Example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
m = 4.1
Sk = [1/10 {(1−4.1)³ + (1−4.1)³ + (1−4.1)³ + (2−4.1)³ + (3−4.1)³ + (3−4.1)³ + (5−4.1)³ + (7−4.1)³ + (7−4.1)³ + (11−4.1)³}] / 3.348³
= {1/10 (−29.79 − 29.79 − 29.79 − 8.82 − 1.33 − 1.33 + 0.73 + 24.39 + 24.39 + 328.51)} / 37.52
= 277.2/375.2 = 0.738

Remove high value:
1, 1, 1, 2, 3, 3, 5, 7, 7
m = 3.3
Sk = [1/9 {(1−3.3)³ + (1−3.3)³ + (1−3.3)³ + (2−3.3)³ + (3−3.3)³ + (3−3.3)³ + (5−3.3)³ + (7−3.3)³ + (7−3.3)³}] / 2.445³
= {1/9 (−12.17 − 12.17 − 12.17 − 2.2 − 0.03 − 0.03 + 4.91 + 50.65 + 50.65)} / 14.61
= 67.44/131.54 = 0.513
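These descriptive measures are easy to check in a few lines of code. A minimal Python sketch (standard library only) that reproduces the numbers above, using the same conventions as the slides — 1/(n−1) for the variance, 1/n over s³ for the skewness:

```python
# Sketch: reproduce the descriptive statistics of the example population.
import math

data = [1, 7, 1, 3, 2, 3, 11, 1, 7, 5]
n = len(data)

mean = sum(data) / n                                        # 4.1
srt = sorted(data)
median = (srt[n // 2 - 1] + srt[n // 2]) / 2 if n % 2 == 0 else srt[n // 2]  # 3
var = sum((x - mean) ** 2 for x in data) / (n - 1)          # 11.21
std = math.sqrt(var)                                        # 3.348
skew = (sum((x - mean) ** 3 for x in data) / n) / std ** 3  # ~0.74

print(mean, median, round(var, 2), round(std, 3), round(skew, 3))
```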
Peakedness (kurtosis)

Coefficient of Variation
CV = s/m
In our example:
CV = 3.348/4.1 = 0.817
Remove high value:
CV = 2.445/3.33 = 0.734

Normal Distribution
f(x) = 1/(s√(2π)) exp[−½ ((x−m)/s)²]
• symmetric, bell-shaped
• 68% of the values are within one std. dev.
• 95% of the values are within two std. dev.
(Normal distribution curve)

Std. Normal Distribution
• m = 0 and s = 1
• standardize any variable using: z = (x − m)/s
Lognormal Distribution
Logarithm of a random variable has a normal distribution:
f(x) = 1/(xβ√(2π)) e^(−u) for x > 0, β > 0
where
u = (ln x − α)² / (2β²)
α = mean of logarithms
β² = variance of logarithms

Conversion Formulas
Conversion formulas between the normal and lognormal distributions:
Lognormal to normal:
• µ = exp(α + β²/2)
• σ² = µ² [exp(β²) − 1]
Normal to lognormal:
• α = ln µ − β²/2
• β² = ln [1 + σ²/µ²]
Lognormal Distribution Curve

Three-Parameter LN Distribution
Logarithm of a random variable plus a constant, ln(x + c), is normally distributed.
Constant c can be estimated by:
c = (M² − q1·q3) / (q1 + q3 − 2M)
where M is the median and q1, q3 are the first and third quartiles.
Bivariate Distribution
• Joint distribution of outcomes from two random variables X and Y:
F(x,y) = Prob{X ≤ x and Y ≤ y}
• In practice, it is estimated by the proportion of pairs of data values jointly below the respective threshold values x, y.

Statistical Analysis
• To organize, understand, and/or describe data
• To check for errors
• To condense information
• To uniformly exchange information
Error Checking
• Avoid zero for defining missing values
• Check for typographical errors
• Sort data; examine extreme values
• Plot sections and plan maps for coordinate errors
• Locate extreme values on map — isolated? trend?

Data Analysis and Display Tools
• Frequency Distributions
• Histograms
• Cumulative Frequency Tables
• Probability plots
• Box Plots
• Scatter Plots
• Q-Q plots
Data Analysis and Display Tools
• Correlation
• Correlation Coefficient
• Linear Regression
• Data Location Maps
• Contour Maps
• Symbol Maps
• Moving Window Statistics
• Proportional Effect

Histograms
• Visual picture of data and how they are distributed
• Bimodal distributions show up easily
• Outlier high grades
• Variability
Histogram Plot

Histograms with Skewed Data
• Data values may not give a single informative histogram
• One histogram may show the entire spread of data, but another one may be required to show details of small values.

Cumulative Frequency Tables
Probability Plots
• Shows if distribution is normal or lognormal
• Presence of multiple populations
• Proportion of outlier high grades
Box Plots
Graphically depict groups of numerical data through their five-number summaries:
- the smallest observation
- lower quartile (Q1)
- median
- upper quartile (Q3)
- largest observation
A box plot also indicates which observations, if any, might be considered outliers.
Scatter Plots
• Simply an x-y graph of the data
• It shows how well two variables are related
• Unusual data pairs show up
• For skewed distributions, two scatter plots may be required to show both details near the origin and the overall relationship.
Linear Regression
• y = ax + b, where a = slope, b = constant of the line
• a = r (σy/σx)
• b = my − a·mx
Different ranges of data may be described adequately by different regressions.

Conditional Expectation Line
Q-Q Plots
• Quantile-Quantile plots
• Straight line indicates the two distributions have the same shape
• 45-degree line indicates that mean and variance are the same.
Covariance
Cov(x,y) = 1/n Σ (xi − mx)(yi − my), i = 1,...,n
where
mx = mean of x values and
my = mean of y values
(Figure: high positive covariance — points concentrate in the quadrants where (x − mx) and (y − my) have the same sign)
(Figures: scatter plots showing covariance near zero and large negative covariance)

Covariance
It is affected by the magnitude of the data values: multiply x and y values by C, and the covariance increases by C².
(Example figures: Cov = 2097.5 versus Cov = 20.975 for the same data at different scales)
Correlation
Three scenarios between two variables:
• Positively correlated
• Negatively correlated
• Uncorrelated

Correlation Coefficient
r = Cov(x,y) / (σx σy)
• r = 1, straight line, positive slope
• r = −1, straight line, negative slope
• r = 0, no correlation
• May be affected by a few outliers
• It removes the dependence on the magnitude of the data values.
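A short sketch of the covariance and correlation computation on a small set of pairs (the data below are made up for illustration), following the 1/n formulas above:

```python
# Sketch: covariance and correlation coefficient for hypothetical (x, y) pairs.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
n = len(x)

mx = sum(x) / n
my = sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
r = cov / (sx * sy)

print(round(cov, 3), round(r, 3))   # r close to +1 for this data
# Rescaling both x and y by a constant C multiplies cov by C^2 but leaves r unchanged.
```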
Correlation Coefficient
It measures linear dependence.
(Example scatter plots: r = 0.99, r = −0.03, r = −0.08, r = −0.97)
Data Location Map
Contour Maps

Symbol Maps
• Each grid location is represented by a symbol that denotes the class to which the value belongs
• Designed for the line printer
• Usually not to scale

Moving Window Statistics
• Divide area into several local areas of same size
• Calculate statistics for each smaller area
• Useful to investigate anomalies in mean and variance
Proportional Effect
• Mean and variability are both constant
• Mean is constant, variability changes
• Mean changes, variability is constant
• Both mean and variability change

Proportional Effect Plot
Spatial Continuity
h-scatter plots
Plot the value at each sample location versus the value at a nearby location.

A series of h-scatter plots for several separation distances can show how the spatial continuity decays with increasing distance.

You can further summarize spatial continuity by calculating some index of the strength of the relationship seen in each h-scatter plot.
Moment of Inertia
For a scatter plot that is roughly symmetric about the line x = y, the moment of inertia about this line can serve as a useful index of the strength of the relationship.
γ = moment of inertia about x = y
= average squared distance from x = y
= 1/n Σ [(xi − yi)/√2]²
= 1/(2n) Σ (xi − yi)²
Variogram
• Measures spatial correlation between samples
• Function of distance and direction (a vector)
γ(h) = 1/(2n) Σ [Z(xi) − Z(xi+h)]²
• Semi-variogram will be referred to as variogram for convenience
Parameters:
• Range
• Sill
• Nugget Effect
Data for Computation

Computation 1
For the first step (h = 15), there are 4 pairs:
1. x1 and x2, or .14 and .28
2. x2 and x3, or .28 and .19
3. x3 and x4, or .19 and .10
4. x4 and x5, or .10 and .09
Therefore, for h = 15, we get
γ(15) = 1/(2·4) [(x1−x2)² + (x2−x3)² + (x3−x4)² + (x4−x5)²]
= 1/8 [(.14−.28)² + (.28−.19)² + (.19−.10)² + (.10−.09)²]
= 0.125 [(−.14)² + (.09)² + (.09)² + (.01)²]
= 0.125 (.0196 + .0081 + .0081 + .0001)
= 0.125 (.0359) = 0.00449
Computation 2
For the second step (h = 30), there are 3 pairs:
1. x1 and x3, or .14 and .19
2. x2 and x4, or .28 and .10
3. x3 and x5, or .19 and .09
Therefore, for h = 30, we get
γ(30) = 1/(2·3) [(x1−x3)² + (x2−x4)² + (x3−x5)²]
= 1/6 [(.14−.19)² + (.28−.10)² + (.19−.09)²]
= 0.16667 [(−.05)² + (.18)² + (.10)²]
= 0.16667 (.0025 + .0324 + .0100)
= 0.16667 (.0449) = 0.00748

Computation 3
For the third step (h = 45), there are 2 pairs:
1. x1 and x4, or .14 and .10
2. x2 and x5, or .28 and .09
Therefore, for h = 45, we get
γ(45) = 1/(2·2) [(x1−x4)² + (x2−x5)²]
= 1/4 [(.14−.10)² + (.28−.09)²]
= 0.25 [(.04)² + (.19)²]
= 0.25 (.0016 + .0361)
= 0.25 (.0377) = 0.00942
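The three computations above can be summarized in a short Python sketch using the same five samples spaced 15 m apart:

```python
# Sketch: sample semivariogram along a line of equally spaced samples.
# gamma(h) = 1/(2n) * sum of squared differences over the n pairs at lag h.
samples = [0.14, 0.28, 0.19, 0.10, 0.09]   # x1..x5
spacing = 15.0                             # metres between samples

def gamma(values, step):
    """Semivariogram value for a lag of `step` sample spacings."""
    pairs = [(values[i], values[i + step]) for i in range(len(values) - step)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

for step in (1, 2, 3):
    print(f"gamma({step * spacing:.0f}) = {gamma(samples, step):.5f}")
# ~0.00449, 0.00748, 0.00942 — matching the hand computations above
```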
Lag, Window and Band Width
Windows and Band Widths – Multiple Directions
Rotation Systems
GSLIB = (ZXY, LRR)
GSLIB-MS (the one that m624v1 uses) = (ZXY, LRL)
SAGE 2001 = (ZYX, RRR)
MEDS:
The first two rotations are the same as GSLIB-MS.
The last rotation does not correspond to a pure rotation about an axis.
However, there is a relationship between MEDS and GSLIB-MS, as follows:

Vertical Direction Option
Theoretical Variogram Models
• Exponential
• Gaussian
• Linear
Sample Variogram Plot

Fitting a Theoretical Model
• Draw the variance as the sill (c + c0).
• Project the first few points to the y-axis. This is an estimate of the nugget (c0).
• Project the same line until it intercepts the sill. This distance is two thirds of the range for the spherical model.
• Using the estimates of range, sill, and nugget and the equation of the mathematical model under consideration, calculate a few points and see if the curve fits the sample variogram (see the sketch below).
• If necessary, modify the parameters and repeat the previous step to obtain a better fit.
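For the "calculate a few points" step, a minimal sketch of the spherical model; the parameter values below are hypothetical:

```python
# Sketch: evaluate a spherical variogram model so the curve can be
# compared against the sample variogram points.
def spherical(h, c0, c, a):
    """gamma(h) = c0 + c*(1.5*h/a - 0.5*(h/a)**3) for h < a, else c0 + c."""
    if h >= a:
        return c0 + c          # beyond the range, the model sits at the sill
    r = h / a
    return c0 + c * (1.5 * r - 0.5 * r ** 3)

# Hypothetical fit: nugget c0 = 0.001, structure c = 0.008, range a = 60 m.
for h in (15, 30, 45, 60, 75):
    print(h, round(spherical(h, 0.001, 0.008, 60.0), 5))
```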
Anisotropy
• Structural character of the mineralization may be different for various directions.
• The mineralization may be more continuous along one direction than the other.
• Can be determined by comparing the variograms calculated in different directions within the deposit.

Types of Anisotropy
• Geometric: same sill and nugget, different ranges
• Zonal: same nugget and range, different sills
Modeling Anisotropy: Geometric
Modeling Anisotropy: Zonal
Modeling Geometrical Anisotropy: a recipe
Modeling Zonal Anisotropy: a recipe
Variogram Contours
Nested Structures
Logarithmic Variogram
Transformation from Logs
Covariance Function Variograms
Correlogram
Indicator Variogram
Cross Variogram
Cross Validation
• Predicts a known data point using an interpolation plan
• Only the surrounding data points are used to estimate this point, while leaving the data point out.
• Other names: point validation, jack-knifing
• To check how well the estimation procedure can be expected to perform.
Cross Validation Report
(Example report for variable CU, listing ACTUAL, KRIGING, and DIFF columns)
Cross Validation
Look for:
• The least amount of average estimation error
• Either the variance of the errors or the weighted square error (WSE) closest to the average kriging variance
Check:
• Histogram of errors
• Scatter plots of actual versus estimate

The weighted square error (WSE) is given by the following equation:
WSE = [Σ (1/σi²)(ei)²] / Σ (1/σi²)
The weighting of the error (e) by the inverse of the kriging variance gives more weight to those points that should be closely estimated.
Cross Validation
• It may suggest improvements
• It compares, does not determine parameters
• Reveals weaknesses/shortcomings

Remember:
• All conclusions are based on observations of errors at locations where we do not need estimates.
• We remove values that, after all, we are going to use.
The Necessity of Modeling
Suppose we have the data set below. It provides virtually no information about the entire profile.
Deterministic Models
Depend on:
• Context of data
• Outside information (not contained in data)

Probabilistic Models
The variables of interest in earth science data are typically the end result of a vast number of processes whose complex interactions cannot be described quantitatively.
Probabilistic random function models recognize this uncertainty and provide tools for estimating values at unknown locations once some assumptions about the statistical characteristics of the phenomenon are made.
Probabilistic Models
Random Variables

Functions of Random Variables
The set of outcomes and their corresponding probabilities is sometimes referred to as the "probability distribution" of a random variable. To denote the values of a probability distribution, symbols such as f(x), P(x), etc. are used.
For example: f(x) = 1/6 for x = 1,2,3,4,5,6 gives the probability distribution for the number of points we roll with a fair die.

Conditions of Probability
If the probabilities are represented by pi for the n possible events, then
1. 0 ≤ pi ≤ 1
2. Σ pi = 1
Parameters of a Random Variable
These "probability distributions" have parameters that can be summarized. Example: min, max, etc.
It's often helpful to visualize probability distributions by means of graphs, such as a histogram.

The complete distribution cannot be determined from the knowledge of only a few parameters. Two random variables may have the same mean and variance, but their distributions may be different.

The parameters cannot be calculated by observing the outcomes of a random variable. From a sequence of observed outcomes, all we can calculate is sample statistics based on that set of data. A different set of data will produce different statistics. As the number of outcomes increases, the sample statistics become more similar to their model parameters. In practice, we assume that the parameters of our random variable are the same as the sample statistics.

The two most commonly used parameters in probabilistic approaches to estimation are the mean or expected value of the random variable and its variance.
Expected Value
Expected value of a random variable is its mean or average outcome:
µ = E(x)
E(x) refers to expectation:
E(x) = ∫ x f(x) dx, integrated from −∞ to ∞
where f(x) is the probability density function of the random variable x.

Variance of a Random Variable
The variance of a random variable is the expected squared difference from the mean of the random variable:
σ² = E[(x − µ)²] = ∫ (x − µ)² f(x) dx, integrated from −∞ to ∞
Std. dev. is σ, the square root of the variance.
Expected Value
Example: Define the random variable L = outcome of throwing two dice and taking the larger of the two values. What is the expected value of L?
E(L) = 1/36·(1) + 3/36·(2) + 5/36·(3) + 7/36·(4) + 9/36·(5) + 11/36·(6) = 4.47

Joint Random Variables
Random variables may also be generated in pairs according to some probabilistic mechanism; the outcome of one of the variables may influence the outcome of the other.
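E(L) can be checked by brute-force enumeration of the 36 equally likely outcomes:

```python
# Sketch: verify E(L) for L = max of two fair dice by enumeration.
outcomes = [max(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
expected = sum(outcomes) / 36
print(expected)   # 4.472..., i.e. the 4.47 on the slide
# The weights on the slide follow from P(L = k) = (2k - 1)/36.
```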
Covariance
The dependence between two random variables is described by covariance:
Cov(x1,x2) = E{[x1 − E(x1)][x2 − E(x2)]}
= E(x1 x2) − [E(x1)][E(x2)]

Independence
Random variables are considered independent if the joint probability density function satisfies:
p(x1, x2, ..., xn) = p(x1) p(x2) ... p(xn)
i.e., the probability of two events happening is the product of each event's probability.
Independence
Example: What is the probability of getting 6,6 when we roll two fair dice?
p(x1) = 1/6 and p(x2) = 1/6
p(x1,x2) = p(x1) · p(x2) = 1/36
Conditional Probability
Probability of A given some sample space B is the conditional probability of A relative to B, and is denoted by P(A|B).
P(A|B) = P(A,B) / P(B)
P(B|A) = P(A,B) / P(A)

Example: In a factory, there are 60% male and 40% female workers. 20% of females commute; the rest stay in the factory housing. What percentage of the total are female residents?
P(M) = 0.6 and P(F) = 0.4
P(R|F) = 1 − 0.2 = 0.8
P(F,R) = P(F) · P(R|F) = 0.4 · 0.8 = 0.32
So, 32% of the total are female residents.

Expectation and Variance
Properties:
• If C is a constant, then E(Cx) = C·E(x)
• If x1, x2, ..., xn have finite expectation, then E(x1 + x2 + ... + xn) = E(x1) + E(x2) + ... + E(xn)
• If C is a constant, then Var(Cx) = C²·Var(x)
• If x1, x2, ..., xn are independent, then Var(x1 + x2 + ... + xn) = Var(x1) + Var(x2) + ... + Var(xn)
• Var(x+y) = Var(x) + Var(y) + 2·Cov(x,y)
Central Limit Theorem
Sample means of a group of independent variables tend toward a normal distribution regardless of the distribution of the samples.
Factors affecting the dispersion of the distribution of sample means:
1. The dispersion of the parent population.
2. The size of the sample.
Standard Error of the Mean = σx̄ = σ/√n

Confidence Limits
Can be expressed by applying error bounds around the estimate.
For example, at 95% confidence level:
Lower limit = m − 2(s/√n)
Upper limit = m + 2(s/√n)
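A quick simulation sketch of both ideas — sample means of a skewed parent tending toward normal, with spread close to σ/√n. The exponential parent (σ = 1) and sample size are arbitrary choices for illustration:

```python
# Sketch: standard error of the mean checked by simulation.
import math
import random
import statistics

random.seed(42)
n, trials = 25, 5000

# Draw many samples of size n from a skewed parent (exponential, sigma = 1)
# and collect the sample means; their histogram is close to normal.
means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

print(statistics.stdev(means))   # spread of the sample means
print(1.0 / math.sqrt(n))        # sigma/sqrt(n) = 0.2 — nearly the same
```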
Random Functions

Reality vs Model
Reality:
• sample values
• summary statistics
Model:
• possible outcomes with corresponding probabilities of occurrence
• parameters
It is important to recognize the distinction between a model and the reality.

Linear Estimators
• All estimation methods involve weighted linear combinations:
estimate = z* = Σ wi z(xi), i = 1,...,n
The questions:
• What are the weights, wi?
• What are the values, z(xi)?
Desirable Properties of an Estimator
• Average error = 0 (unbiased): E(Z − Z*) = 0, where Z* is the estimate and Z is the true value of the random variable
• Error variance (spread of errors) is small: Var(Z − Z*) = E(Z − Z*)² = small
• Robust
Question:
• How to calculate the weights so that they satisfy the required properties?

Random Process Assumptions — Stationarity
The independence of univariate and bivariate probability laws from the location x is referred to as stationarity.
The question of "Are the nearby samples relevant?"
Intrinsic Hypothesis
The intrinsic hypothesis of order zero:
E[Z(x)] = m, m = finite and independent of x
E[Z(x+h) − Z(x)]² = 2γ(h) = finite and independent of x (variogram function)
We assume no drift, and the existence and stationarity of the variogram only.
If the condition of no drift in a deposit cannot be satisfied, the intrinsic hypothesis of order one is invoked.

Notes on Stationarity
• Stationarity assumption does not apply to the entire data set, but only to the search neighborhood.
• Local stationarity is assumed by all estimation methods. It's often a viable assumption even in data sets for which global stationarity is clearly inappropriate.
• If there is sufficient information to support that the stationarity assumption is not valid, then subdivide the data into smaller zones (populations) within which the stationarity assumption is more appropriate.
Ensuring Unbiasedness
Estimation Methods

Polygonal
Assigns all weight to the nearest sample.
Advantages:
• Easy to understand
• Easy to calculate manually
• Fast
• Declustered global histogram
Disadvantages:
• Discontinuous local estimates
• Edge effect
• No anisotropy
• No error estimation
Triangulation
Weight at each triangle is proportional to the area of the opposite sub-triangle.
Advantages:
• Easy to understand and calculate manually
• Fast
Disadvantages:
• Not unique solution
• Only three samples receive weights
• Extrapolation?
• 3D?
• No anisotropy
• No error control
Inverse Distance
Each sample weight is inversely proportional to the distance between the sample and the point being estimated:
z* = [Σ (1/di^p) z(xi)] / Σ (1/di^p), i = 1,...,n
where z* is the estimate of the grade of a block or a point, z(xi) refers to sample grade, p is an arbitrary exponent, and n is the number of samples.

Example (two samples, each 100 m away, p = 2):
Est = [(1/100²)·1.1 + (1/100²)·0.5] / (1/100² + 1/100²) = 0.8
Grades 1.1 and 0.5 each receive a weight of 0.5.

Example (distances 100 m and 50 m):
Est = [(1/100²)·1.1 + (1/50²)·0.5] / (1/100² + 1/50²) = 0.62
Inverse Distance
• If p tends to zero => local sample mean
• If p tends to ∞ => nearest neighbor method (polygonal)
• Traditionally, p = 2
Advantages:
• Easy to understand
• Easy to implement
• Flexible in adapting weights to different estimation problems
• Can be customized
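A minimal inverse distance sketch, reproducing the two-sample examples above with p = 2:

```python
# Sketch: inverse distance weighting estimate.
def idw(grades, distances, p=2.0):
    """z* = sum(z_i / d_i^p) / sum(1 / d_i^p)."""
    weights = [1.0 / d ** p for d in distances]
    return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

print(idw([1.1, 0.5], [100.0, 100.0]))   # 0.8  (equal distances -> plain average)
print(idw([1.1, 0.5], [100.0, 50.0]))    # 0.62 (the closer sample dominates)
```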
Inverse Distance
Disadvantages:
• Susceptible to data clustering
• p?
• Anisotropy possible with adjusted distances
• No error control

Ordinary Kriging
Definition:
Ordinary Kriging (OK) is an estimator designed primarily for the estimation of block grades as a linear combination of available data in or near the block, such that the estimate is unbiased and has minimum variance.
Ordinary Kriging
B.L.U.E. for best linear unbiased estimator:
• Linear because its estimates are weighted linear combinations of available data
• Unbiased since the sum of the weights adds up to 1
• Best because it aims at minimizing the variance of errors

Kriging Estimator
z* = Σ wi z(xi), i = 1,...,n
where z* is the estimate of the grade of a block or a point, z(xi) refers to sample grade, wi is the corresponding weight assigned to z(xi), and n is the number of samples.
Kriging Estimator
Desirable properties:
• r = average error = 0 (unbiased)
• Σ wi = 1
• Minimize σ² = F(w1, w2, w3, ..., wn)

Error Variance
Using the R.F. model, you can express the error variance as a function of R.F. parameters:
σ²R = σ²z + Σi Σj wi wj Ci,j − 2 Σi wi Ci,0
where
σ²z is the sample variance,
Ci,j is the covariance between samples,
Ci,0 is the covariance between samples and the location of estimation.
See Isaaks and Srivastava, pp. 281–284.
Error Variance
• Error variance decreases as data are closer to the location of estimation.

Ordinary Kriging
Result (the OK system):
Σj wj Ci,j + µ = Ci,0 for each i
Σi wi = 1
Kriging System (point)
Previous equation in matrix form: C · w = D

Point Kriging (cont.)
• Matrix C consists of the covariance values Cij between the random variables Vi and Vj at the sample locations.
• Vector D consists of the covariance values Ci0 between the random variables Vi at the sample locations and the random variable V0 at the location where an estimate is needed.
• Vector w consists of the kriging weights and the Lagrange multiplier.
Kriging
| C11 C12 1 |   | w1 |   | C1B |
| C21 C22 1 | x | w2 | = | C2B |
|  1   1  0 |   | µ  |   |  1  |

Variogram: spherical, C0 = 0.2, C1 = 0.8
Ranges: RY = 500 m, RX = 150 m, RZ = 150 m
Rotation: R1/R2/R3 = 90/0/0 (Y axis is rotated 90° to the east)

| 1.0 0.1 1 |   | w1 |   | 0.56 |
| 0.1 1.0 1 | x | w2 | = | 0.12 |
|  1   1  0 |   | µ  |   |  1   |

Grades and resulting weights:
1.1 → 0.744
0.5 → 0.256
Est = 0.95

Relative location is important: grade 1.1 gets most of the weight because of RY = 500 m at the 90° rotation.
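The numeric system above can be solved directly; a NumPy sketch that reproduces the slide's weights and estimate:

```python
# Sketch: solve the point ordinary kriging system shown above.
# The matrix holds sample-to-sample covariances plus the unbiasedness
# row/column; the right-hand side holds sample-to-block covariances
# and the constraint value 1.
import numpy as np

A = np.array([[1.0, 0.1, 1.0],
              [0.1, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.56, 0.12, 1.0])

w1, w2, mu = np.linalg.solve(A, b)
print(w1, w2, mu)               # w1 ~ 0.744, w2 ~ 0.256 (the slide's weights)

grades = [1.1, 0.5]
print(w1 * grades[0] + w2 * grades[1])   # estimate ~ 0.95
```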
Kriging System (block)
Block Kriging

Kriging Variance
σ²ok = CAA − [(Σi wi CiA) + µ]
The kriging variance is data independent.

Advantages of Kriging
• Takes into account spatial continuity characteristics
• Built-in declustering capability
• Exact estimator
• Calculates the kriging variance for each block
• Robust
Disadvantages of Kriging
• Prior variography required
• More time consuming
• Smoothing effect

Assumptions
• No drift is present in the data (stationarity hypothesis)
• Both variance and covariance exist and are finite
• The mean grade of the deposit is unknown
Effect of Scale
Effect of Shape
Nugget Effect
Effect of Range
Effect of Anisotropy
Search Strategy
• Include at least a ring of drill holes with enough samples around the blocks to be estimated
• Don't extend the grades of the peripheral holes too far into the undrilled areas
• Increasing vertical search distance has more impact on the number of samples available for a given block than increasing horizontal search distance (in vertically oriented drillholes)
• Limit the number of samples used from each individual drillhole

Search Volume (2D)
Interpolation Options
Default: apply the weighting item inside the inverse-distance weights:
[grd1 × (wt1/d1)² + grd2 × (wt2/d2)²] / [(wt1/d1)² + (wt2/d2)²]
where grd = grade, d = distance, wt = weighting item.

Apply weighting factor from a weight item after taking inverse distance:
[grd1 × wt1/(d1)² + grd2 × wt2/(d2)²] / [wt1/(d1)² + wt2/(d2)²]
Interpolation Options
(Figures: composites at the toe; plan view with north–south and east–west section views; acceptable versus unacceptable configurations marked YES/NO)
Block Discretization
To be considered:
• Range of influence of the variogram used in kriging.
• Size of the blocks with respect to this range.
• Horizontal and vertical anisotropy ratios.

Example composite data:
Composite #   Value   Distance to the block   Rock type
1             0.50    50                      1
2             1.50    60                      2
3             0.05    70                      2
4             0.55    80                      1
5             2.50    90                      1
6             0.75    100                     2
Interpolation Options
0 = The negative weights are allowed (default).
Declustering
Clustering in high grade area:
Naïve mean = (0+1+3+1+7+6+5+6+2+4+0+1)/12 = 3
Declustered mean = [(0+1+3+1+2+4+0+1) + (7+6+5+6)/4]/9 = 2

Clustering in mean grade area:
Naïve mean = (7+1+3+1+0+6+5+1+2+4+0+6)/12 = 3
Declustered mean = [(7+1+3+1+2+4+0+6) + (0+6+5+1)/4]/9 = 3

Clustering in low grade area:
Naïve mean = (7+1+6+1+0+3+4+1+2+5+0+6)/12 = 3
Declustered mean = [(7+1+6+1+2+5+0+6) + (0+3+4+1)/4]/9 = 3.33

Declustering
• Data with no correlation do not need declustering (pure nugget effect model)
• If the variogram model has a long range and low nugget, you may need to decluster.
Declustering
• Cell declustering
• Polygonal

Cell Declustering
Each datum is weighted by the inverse of the number of data in the cell.
Quantifying Uncertainty
One approach:
• Assume that the distribution of errors is normal
• Assume that the ordinary kriging estimate provides the mean of the normal distribution
• Build 95 percent confidence intervals by taking 2 standard deviations either side of the OK estimate
Quantifying Uncertainty
Kriging Variance
σ²ok = CAA − [(Σi wi CiA) + µ]
Advantages:
• Does not depend on data. It can be calculated before sample data are available (from previous/known variography).
Disadvantages:
• Does not depend on data.
• If a proportional effect exists, the previous assumptions are not true.
(Figure: two different data configurations yielding the same kriging variance)
Quantifying Uncertainty
Other approach: incorporate the grade in the error variance calculation.

Relative Variance = kriging variance / square of kriged grade

Combined Variance
σ = sqrt(local variance × kriging variance)
where the local variance of the weighted average (σ²w) is:
σ²w = Σ wi² (Z0 − zi)², i = 1,...,n (n > 1)
n is the number of data used, wi are the weights corresponding to each datum, Z0 is the block estimate, and zi are the data values.
Quantifying Uncertainty
Relative Variability Index (RVI) = sqrt(combined variance) / kriged grade
Note: This is similar to the Coefficient of Variation, C.V. = σ/m
Regression Slope
Regression slope = (block var − kriging var + |LGM|) / (block var − kriging var + 2·|LGM|)
where LGM = Lagrange multiplier

Change of Support
(Figure: N = 4, M = 8.825)
Change of Support
(Figures: point distribution with N = 16, M = 8.825; block distributions showing the proportion above a cutoff of 10 — N = 2 = 50% with M = 11.15, and N = 5 = 31% with M = 18.6; σ = 33.53 versus σ = 3.20)

• The mean above 0.0 cutoff does not change with a change in support
• The variance of the block distribution decreases with larger support
• The shape of the distribution tends to become symmetrical as the support increases
• Recovered quantities depend on block size
Krige's Relation (cont'd)
Affine Correction
Calculation of Affine Correction
Indirect Lognormal Method
Volume Variance Correction in MineSight®
Indirect Lognormal Method
Disadvantage: if the original distribution departs from lognormality, the new mean may require rescaling:
z_new = (m_old/m_new) · z_old

Hermite Polynomials:
• Declustered composites are transformed into a Gaussian distribution
• Volume-variance correction is done on the Gaussian distribution
• Then this distribution is back-transformed using inverse Hermite polynomials
Change of Support (other)
Conditional Simulation:
• Simulate a realization of the composite (or blasthole) grades on a very closely spaced grid (for example, 1x1)
• Average simulated grades to obtain simulated block grades

Change of Support (applications)
Design a search strategy:
• Decluster composites/variogram
• Define SMU units
• Apply change of support from composites to SMU
• Calculate the SMU G-T (grade tonnage) curves
• Guess at a search scenario
• Krige blocks => create G-T curves
• Compare G-T curves of block estimates to G-T curves of SMUs (see next slide)
• Adjust search scenario, etc.
Change of Support (G-T Curves)

Change of Support (applications)
Reconciliation between BH model and exploration model:
• Calculate G-T curves of the exploration model
• Apply change of support from the BH model to the exploration model
• Calculate the adjusted BH model G-T curves
• Compare G-T curves of block estimates to G-T curves of the adjusted BH model estimates
C. of S. for Ore Grade/Tonnage Estimation

Change of Support (applications)
Other:
• Needed in MIK (Multiple Indicator Kriging)
• Needed in UC (Uniform Conditioning)
Simple Kriging
• Z(xi) — refers to sample grade
• λi — corresponding simple kriging weights assigned to Z(xi)
• n — number of samples
• m = E{Z(x)} — location dependent expected value of Z(x)
Simple Kriging
Cokriging

Cokriging
The cokriging system:

[Cov{didi}] [Cov{dibj}] [1] [0]   [λi]   [Cov{x0di}]
[Cov{dibj}] [Cov{bjbj}] [0] [1] x [δj] = [Cov{x0bj}]
[1]         [0]          0   0    [μd]    [1]
[0]         [1]          0   0    [μb]    [0]

where
[Cov{didi}] = drill hole data (dhs) covariance matrix, i = 1,...,n
[Cov{bjbj}] = blast hole data (bhs) covariance matrix, j = 1,...,m
[Cov{dibj}] = cross-covariance matrix for dhs and bhs
[Cov{x0di}] = drill hole data to block covariances
[Cov{x0bj}] = blast hole data to block covariances
[λi] = weights for drill hole data
[δj] = weights for blast hole data
μd and μb = Lagrange multipliers

Cokriging — steps for drill and blasthole data:
• Regularize blasthole data into a specified block size. Block size could be the same as the size of the model blocks to be valued, or a discrete sub-division of such blocks. A new database of average blasthole block values is thus established.
• Variogram analysis of drillhole data.
• Variogram analysis of blasthole data.
• Cross-variogram analysis between drill and blasthole data. Pair each drillhole value with all blasthole values.
• Selection of search and interpolation parameters.
• Cokriging.
Co-Kriging
Universal Kriging

Outlier Restricted Kriging (ORK)
• Determine the outlier cutoff grade
• Assign indicators to the composites based on the cutoff grade:
  0 if the grade is below the cutoff
  1 otherwise
• Use OK with an indicator variogram, or simply use IDW, or any other method to assign the probability of a block having grade above the outlier cutoff.
• Modify the kriging matrix.

ORK matrix
Nearest Neighbor Kriging
Why Non-Linear?
Why use non-linear methods?
• To provide "better" estimates than those provided by linear methods
• To take advantage of the properties of non-normal distributions of data and thereby provide more optimal estimates
• To provide answers to non-linear problems
• To provide estimates of distributions on a scale different from that of the data (the "change of support" problem)
• To overcome problems encountered with outliers
• To estimate the distribution rather than simply an expected value for the blocks/panels
• When dealing with a strongly skewed distribution, simply estimating the mean by a linear estimator might be too smooth
• Non-linear estimation provides the solution to the "small block" problem. We cannot precisely estimate small (SMU-sized) blocks by direct linear estimation. However, we can estimate the proportion of SMU-sized blocks above a specified cut-off, within a panel. Thus, the concept of change of support is critical in most practical applications of non-linear estimation.
Lognormal Kriging
• The ordinary kriging of logarithms of the grades is back-transformed to give the desired block grades.
• Extremely sensitive to the assumption of lognormality of the grades. Therefore, it is not as robust as ordinary kriging.
• Never use it without checking the results carefully.

Lognormal Short Cut
• In addition to calculating the block grades using OK, the grade and percent of the blocks above a specified cutoff can be calculated.
• The theoretical distribution of the grades within each block can be assumed either normal or lognormal.
Lognormal Short Cut

Indicator Kriging
The basis of indicator kriging is the indicator function. At each point x in the deposit, consider the following indicator function of zc defined as:
i(x;zc) = 1, if z(x) < zc
          0, otherwise
where:
x is location,
zc is a specified cutoff value,
z(x) is the value at location x.
Indicator Kriging
Examples:
• Separate continuous variables into categories:
I(x) = 1 if k(x) ≤ 30, 0 if k(x) > 30
• Characterize categorical variables and differentiate types:
I(x) = 1 for oxide, 0 for sulfide
(or: 1 if rock = 1, 0 if not rock = 1)
Indicator Kriging (coding uncertainty)
Some drill holes have encountered a particular horizon, some were not drilled deep enough, some penetrated the horizon but the core or the log is missing:
Use I(x) = 1 for drill hole assays above the horizon and I(x) = 0 for assays below the horizon.
Use indicator kriging and calculate the probability of the missing assays being 1 or 0.

(Example table: indicator values I1–I6 coded down five drillholes, with gaps where data are missing, and the kriged probabilities P1–P6; e.g. P1 = P2 = P3 = 1 everywhere, P4 = 0, 1, 0.8, 0.5, 0; P5 = 0, 0, 0.2, 0.3, 0; P6 = 0 everywhere)
Indicator Kriging (spatially mixed populations)
Some data may represent a spatial mixture of two or more statistical populations (for example, clay and sand):
• Separate populations: I(x) = 1 for clay, 0 for sand.
• Then calculate the probability of an unsampled location being clay or sand.
• Krige (local estimates) unsampled locations using only data belonging to that population.
• Final estimate can be a weighted (by probabilities) average of the local estimates.

Indicator Kriging (applications)
Extreme values:
Separate the population into 1 and 0 based on an outlier cutoff. Proceed then as though you are dealing with two spatially mixed populations.
Indicator Function at Point x
The φ(A;zc) Function
Median Indicator Variogram
Indicator variogram where the cutoff corresponds to the median of the data:
γm(h;zm) = 1/(2n) Σ [i(xj+h; zm) − i(xj; zm)]², j = 1,...,n

MIK
Indicator Coding
Sample values: 0, 37, 42, 171, 265, 391, 521, 732

Cutoff   Indicators
65       1 1 1 0 0 0 0 0   i(x;65)
225      1 1 1 1 0 0 0 0   i(x;225)
430      1 1 1 1 1 1 0 0   i(x;430)
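The indicator coding table is straightforward to generate; a small sketch using the sample values and cutoffs above:

```python
# Sketch: indicator coding — 1 where the sample value is below the
# cutoff, 0 otherwise (matching the table above).
values  = [0, 37, 42, 171, 265, 391, 521, 732]
cutoffs = [65, 225, 430]

for zc in cutoffs:
    indicators = [1 if z < zc else 0 for z in values]
    print(f"i(x;{zc}) = {indicators}")
# i(x;65)  -> [1, 1, 1, 0, 0, 0, 0, 0]
# i(x;225) -> [1, 1, 1, 1, 0, 0, 0, 0]
# i(x;430) -> [1, 1, 1, 1, 1, 1, 0, 0]
```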
MIK
Cutoff = 65: indicator variogram, spherical; C0 = 0.1, C1 = 0.9; Range = 300, Azim = 50; Range = 150, Azim = 140
Cutoff = 225: indicator variogram, spherical; C0 = 0.0, C1 = 1.0; Range = 400, Azim = 50; Range = 175, Azim = 140
Cutoff = 430: indicator variogram, spherical; C0 = 0.15, C1 = 0.85; Range = 250, Azim = 50; Range = 80, Azim = 140

Probability intervals:
Prob{z(x0) ∈ (65,430)} = 0.46
Probability of exceedance:
Prob{z(x0) > 430} = 0.40
Order Relations (potential problem)
MIK (change of support)
Advantages of MIK
Disadvantages of MIK
Uniform Conditioning (UC)
• UC is a non-linear estimation technique, like MIK, where one determines the grade distribution of each block to calculate the grade and proportion of ore above any given cutoff.
• The mean grade of the block is obtained from the OK to ensure the estimation is locally well constrained. The proportions of ore within the block are conditional to the kriged value.

UC or MIK?
• MIK calculates the proportion of samples above the cut-off within the panels
• UC requires the Gaussian anamorphosis of data and calculation of polynomials at each data point
• Recoverable resources are calculated on the Gaussian model and the dispersion variances accommodating the change of support for the SMU
Change of Support
Conditional Simulation (CS)

Why CS?
• CS models reproduce the actual variability (histogram) and spatial continuity (variogram) of the attributes of interest.
• CS can be used to address the problem of measuring the uncertainty associated with an estimate.

Simulation or Estimation
• Estimation and simulation are complementary tools
• They don't have the same objectives
• Estimation is for determining averages
• Simulation is for determining variability
Estimation
Simulation
Comparison to Reality

Simulation or Estimation
• CS does not produce a best estimate of reality; rather, it yields equi-probable models with characteristics similar to those observed in reality.
• When multiple simulations are made, their average values will approximate the smoothed, best-fit curve.

Applications:
• Sensitivity analysis
• Risk analysis
• Drilling and sampling level necessary
• Quantify the variability of contaminants
• Prediction of recoverable reserves
Grade Zoning
• Grade zoning is usually applied to control the extrapolation of grades into statistically different populations
• Often grade zones or mineralization envelopes correspond to different geologic units

Determine how the grade populations are separated spatially:
• Is there a reasonably sharp discontinuity between the grades of the different populations?
• Or is there a larger transition zone between the grades of the different populations?
Grade Zoning
(Figures: discontinuity between grade populations; transition zone between grade populations)

• Discontinuity between the grade populations is best modeled using a deterministic model, i.e., digitized outlines
• Transition zone between the grade populations is best modeled using a probabilistic model, i.e., indicator kriging

Characterizing the contact between different populations:
Calculate the difference between the average grades within each population as a function of distance from the contact:
Dzi = zi − z(−i)
Grade Zoning
• If the average difference in grade Dzi vs distance from the contact is more or less constant, then there is probably a discontinuity between the different populations.
• If the average difference in grade Dzi vs distance from the contact is small for small distances but increases with increasing distance, then there is likely a transition zone between the different populations.
Contact Analysis in MineSight®

Grade Zone Bias Check
Often mineralization envelopes lead to biased ore reserve models. To check:
• Interpolate using the nearest neighbor (polygonal) method
• Use the search parameters corresponding to the model of spatial continuity
• Disregard all grade zoning
• Compare, at 0.0 cutoff grade, the tons and grade of the polygonal model to those of the mineralization envelope model.
Grade Control
To predict the tons and grade that will be delivered to the mill:
• The G-T curves must be based on SMU support
• The impact of misclassification at the time of mining should be taken into account

Misclassification
What is misclassification?
• Waste mined as ore — dilution of ore grade
• Ore mined as waste — ore sent to the dump
One type of misclassification may be more prevalent than the other.
Impact of Misclassification
Waste is mined as ore:
• ore grade is diluted, not concentrated
• may or may not increase the tonnage, depends on grade control procedure
• actual $ loss, waste material does not pay for processing cost
• net revenue decreases with increased dilution
• you lose the processing capacity (that you could have used to process "true" ore)

Ore is mined as waste:
• increases grade of waste
• may or may not increase the waste tonnage (stripping ratio), depends on grade control procedure
• actual $ loss, potential revenue of ore material is lost
• net revenue decreases with increased grade of waste
Control of Misclassification
Ideally, the impact of misclassification should be quantified:
• This would enable a more accurate prediction of mining reserves
• This would also enable evaluating the efficiency of a grade control procedure and thereby testing options which may reduce misclassification
The impact of misclassification can be quantified or measured through conditional simulation.
Notes
A GSM (gridded seam model) is typically used for layered deposits, such as coal or oil sands. In both models, the horizontal components of a deposit are divided into blocks that are usually related to a production unit. In a 3DBM, the deposit is also divided horizontally into benches, whereas in a GSM the vertical dimensions are a function of the seam and interburden thicknesses.
For each block in the model, a variety of items may be stored.
Typically, a block in a 3DBM will contain grade items, geological
codes, and a topography percent. Many other items may also be
present. For a GSM, the seam top elevation and seam thickness are
required. Other items, such as quality parameters, seam bottom,
partings, etc. can also be stored. A variety of methods can be used
to enter data into the model. Geologic and topographic data can
be digitized and converted into codes for the model, or they can
be entered directly as block codes. Solids can also be created in
the MineSight 3D graphical interface for use in coding the model
directly. Grade data is usually entered through interpolation
techniques, such as Kriging or inverse distance weighting. Once the
model is constructed, it can be updated, summarized statistically,
plotted in plan or section, contoured in plan or section, and viewed
in 3D. The model is a necessary prerequisite in any pit design or pit
evaluation process.
Economic Pit Limits and Pit Optimization
This set of routines works on whole blocks from the 3D block
model, and uses either the floating cone or Lerchs-Grossmann
technique to find economic pit limits for different sets of economic
assumptions. Usually one grade or equivalent grade item is used
as the economic material. The user enters costs, net value of the
product, cutoff grades, and pit wall slope. Original topography is
used as the starting surface for the design, and new surfaces are
generated which reflect the economic designs. The designs can
be plotted in plan or section, viewed in 3D, and reserves can be
calculated for the grade item that was used for the design. Simple
production scheduling can also be run on these reserves.
Pit Design
The Pit Design routines are used to geometrically design
pits that include ramps, pushbacks, and variable wall slopes to
more accurately portray a realistic open pit geometry. Manually
designed pits can also be entered into the system and evaluated. Pit
designs can be displayed in plan or section, can be clipped against
topography if desired, and can be viewed in 3D. Reserves for these
pit designs are evaluated on a partial block basis, and are used in the
calculation of production schedules.
Production Scheduling
This group of programs is used to compute schedules for long-
range planning based upon pushback designs (or phases), and
reserves computed by the mine planning programs. The basic input
parameters for each production period include mill capacity, mine
capacity, and cutoff grades. Functions provided by the scheduling
programs include:
(Figure: flow of tasks for a standard Mine Evaluation Project)
MSDA Manager
The MSDA Basic Concepts help doc can be accessed through
online help. In MSDA applications, click on Help, then select
Concepts.
This tutorial walks you through the principal MSDA tools and
functions. The MineSight Open Pit (MSOP) demonstration project,
included with MSDA, is used throughout for illustrative purposes.
Topics reviewed in this tutorial are:
• Getting Started
• MSDA Manager
• Connecting to a Data Source
• Data Explorer
• Box Plots
• Cumulative Probability Plots (CPP)
• Histograms
• Q-Q Plots
• Scatterplots
• Variograms
• Auto-Fit Variograms
• Variogram 3D Modeling
• Create Var Parameters File for MIK
• Custom Reports
• Multiple Chart Viewer
• Downhole Variograms
• Cutoff Analysis (PIKCUT)
• Declustering
• Topics common to many applications
To access MSDA Tutorial, on MSDA applications, click on Help,
then select Contents. When the Help window opens, select MSDA
Tutorial.
Variogram Calculation
Open MSDA. Connect to the composites (Data Source). From the
MSDA Tools menu, select Build Variogram.
General Tab
Select correlogram as the type to calculate. Pick a lag distance of
50m. Use 10 lags. Pick Cu to analyze and the appropriate coordinate
items. Set the name of the output files (remember that suffixes will
be added automatically based on directions and filter names).
Filter Tab
Set up 2 separate filters. One for Rock Type 1 and one for Rock
Type 2. MSDA will automatically calculate variograms for both
codes.
Direction Tab
Set up the horizontal and vertical directions. Use 12 horizontal
directions at 30 degrees step with a window angle of 15 degrees.
Use 4 vertical directions with a step size of 30 and window angle
of 15.
Use 100 meters band widths for both directions.
Variogram Modeling
You can model the variograms one at a time, auto-fit all of them at once, or use the Variogram 3D Manager for a 3D variogram model.
Modeling Individual Variograms
Open one of the variograms you have calculated from above.
Click on Auto-Fit to fit a model. Now adjust as needed (Move the
points with the mouse or type in new parameters, model type etc.)
Global Auto-fit
Go to the MSDA Utilities menu and click on Auto-Fit Variograms.
Select all the variograms you want to auto-fit. On the next panel
that comes up click on Auto-Fit to model them all at once (you can
hardwire some parameters if you wish).
Variogram 3D Manager
To create a 3D variogram model, go to the MSDA utilities menu
and click on Open Variogram 3D Manager.
Click on Add to add the variograms you want to model.
Pick the type of model you want to use and then click on 3D Model | Auto-Fit.
You can check the results by using the Standard and Rose
Diagram tabs. Adjust showing parameters as needed.
The 3D model parameters are listed under the Model tab to the
right.
You can see a map of the model on the three main planes (non-rotated YX, YZ and XZ planes) by going to 3D Model | Display Structures on planes.
You can save the model for future viewing by using File | Save all.
You can view the model in the shape of an ellipse by going to File
| Export Variogram file. Then in MS3D you can use the option Import
| Variogram ASCII file.
Downhole Variograms
To calculate down the hole variograms, use the MSDA Utilities
menu, Build Downhole Variogram.
General Tab
Use type correlogram as well. Use a lag distance of 15m. Use 20
lags. Pick #REF as item to distinguish drillholes. Pick coordinate
items (if you use assays you can pick the FROM and TO items).
Set the name. MSDA will create a folder based on the name. Only
a combined variogram is needed at this point.
Declustering
Prior to this section you calculated the composites and sorted
statistics. In this section you will use cell declustering technique to
decluster the composite data. This is not required for later work.
Learning Outcomes
In this section you will learn:
• How to decluster composite values
• How to produce a histogram of declustered composite values
Declustering
There are two declustering methods that are generally applicable
to any sample data set. These methods are the polygonal method
and the cell declustering method. In both methods, a weighted linear
combination of all available sample values is used to estimate the
global mean. By assigning different weights to the available samples,
one can effectively decluster the data set.
In this section you will be using the cell declustering method
which divides the entire area into rectangular regions called cells.
Each sample receives a weight inversely proportional to the number
of samples that fall within the same cell. Clustered samples will tend
to receive lower weights with this method because the cells in which
they are located will also contain several other samples.
The estimate one gets from the cell declustering method will depend
on the size of the cells specified. If the cells are very small, then most
samples will fall into cells of their own and will therefore receive equal
weights of 1. If the cells are too large, many samples will fall into the
same cell, thereby causing artificial declustering of samples.
Declustering Composite Data
On MSDA Manager, click on Tools and select Build Decluster File.
Results
Check the output file by clicking on it in MSDA Files.
Exercise 1
Try declustering using different rock types.
Exercise 2
Create a graph of the cell sizes vs mean values. The cell size that
gives the lowest value should be the best choice.
Again, the results for the interpolation of Rock Type 2 can be
checked by creating a Model View in MineSight 3D.
Ordinary Kriging
Ordinary kriging is an estimator designed primarily for the local
estimation of block grades as a linear combination of the available
data in or near the block, such that the estimate is unbiased and has
minimum variance. It is a method that is often associated with the
acronym B.L.U.E. for best linear unbiased estimator. Ordinary kriging
is linear because its estimates are weighted linear combinations of the
available data, unbiased since the sum of the weights is 1, and best
because it aims at minimizing the variance of errors.
The conventional estimation methods, such as inverse distance
weighting method, are also linear and theoretically unbiased.
Therefore, the distinguishing feature of ordinary kriging from the
conventional linear estimation methods is its aim of minimizing the
error variance.
Kriging with MineSight
Before producing an interpolation using kriging, you developed a
variogram. Three types of variograms are allowed:
• Spherical
• Linear
• Exponential
On the MineSight Compass™ Menu tab, <select the Group 3-D
Deposit Modeling, and the Operation Calculation; from the procedure list,
select procedure pintrp.dat - Model Interpolation>. Select option 1 for
Ordinary Kriging. Fill out the following panels.
Panel 2 - M624V1: Kriging Search Parameters
This panel provides input for the model and composite files to
use, the area to interpolate, and optional filename extensions. For
this example, use model file 15, composite file 9, and specify ‘kr1’ as
the filename extension for both the run and report files.
Panel 3 - M624V1: Kriging Search Parameters
<Specify a 3-D search to use all composites within 210m (based on
variograms) horizontally and 50m vertically of a block. Specify that the
closest composite must be within 100 meters, and allow a minimum of 3
composites and a maximum of 16>.
Panel 5 - Interpolation Control Items
This panel allows you to specify the items and method for
interpolation. Interpolate the CU composites using kriging to the
model item CUKRG.
Panel 7 - Optional Search Parameters
Ellipsoidal Search and use of anisotropic distances are optional.
For this run, <enter major, minor, and vertical search distances of 210,
120, and 120, respectively. Enter MEDS for the rotation angle specification
and check the box to Use anisotropic distances?>
Exercise 5
<Rerun procedure using the outlier options from the last panel. Use
an outlier value of 0.70. Use max distance for outliers equal to 75.>
Open report:
It seems like there are no more composites available. <To add one
more (7 total) increase max distance to 240m as well as the ellipsoidal
searches to 240x180x180.>
Exercise 6
This section of the report file shows correlation statistics between
the actual and Kriging values.
Exercise
Change some of the search parameters and rerun the above
procedure. What do you observe?
These values for ZONE will be used to define proven ore (ZONE
= 1 or 2), probable ore (ZONE = 3) and possible ore (ZONE = 4).
Exercise
Make a model view of item ZONE in MineSight 3D to check the
results of the code assignment.
Kriging Variance
Make a model view of the item CUKVR as it was calculated by
running procedure P62401.DAT. Use cutoffs of 0.039, 0.055 and
0.087 (quartiles).
Exercise 1
Change block size to 10x10 and re-run the procedure. What
change do you see in the block variance?
Exercise 2
Change block discretization to 10x10x5 and see the effect on the
block variances of 20x20 blocks.
Exercise 3
If you have another variogram parameter file, try running the
procedure with it. What do you observe?
Change of Support on Composite Values
On the MineSight Compass Menu tab, <select the Group Statistics,
and the Operation Calculation; from the procedure list, select procedure
p40201.dat - Statistics (Composites)>. Fill out the panels as described.
Panel 1 - 3D Composite Data Statistical Analysis
This panel provides input for the composite file type and item to
analyze. Enter composite file 9 and specify CU as the base assay for
analysis and histogram generation.
Exercise 1
Run stats on item CUBLK to look at the new distribution.
Exercise 2
Run the Volume-Variance Correction using the indirect lognormal
method. Use 0.72 for the coefficient of variation. (Store back to
CUBLK, run statistics.)
Volume Variance Correction on Model Data
<From the MineSight Compass menu set Select Group Name =
STATISTICS, Operations Type = Calculation, and Procedure Desc. =
Volume-variance - pmodvc.dat>
Panel 1 Select the File to Use
Leave this panel blank.
Panel 2 Select Items to be Used
Select indirect lognormal correction option. Use item CUKRG.
Store back to item CUKGG.
Panel 3 Optional Data Selection
Select Rock Type 1.
Panel 4 Volume-Variance Correction Parameters
Use a correction factor of 1.08. Assume that your SMU (selective
mining unit) is 10 x 10 x 15. If you run procedure psblkv.dat for this
block size, you should get a block variance of 0.19157; therefore the
correction factor should be 0.19157/0.17699 = 1.08. The average model
grade for Rock Type 1 is 0.6917 and the c.v. is 0.5025.
Results
Exercise 1
Run stats on item CUKGG to look at the new distribution.
Exercise 2
Plot grade tonnage curves of CUKGG and CUKRG items to
compare the original and adjusted grade distribution.
Calculate indicators
In order to calculate the probability of each block to be above
grade 1.2, we are going to assign indicator values to the composites:
0 if the grade is below 1.2,
1 otherwise.
This step requires that you have an additional item in file 9 to
store the indicators with min =0, max =1 (or more) and precision = 1.
<Select Group Composites, Operations Calculations. Run procedure
p50801.dat User Calcs (Composites)
Use item ROCKX. For benches 2540 to 2600 assign value zero to
item ROCKX (use rock types 1 and 2). Then assign value ROCKX=1 for
Cu>1.2>
Calculate probabilities
Assign the probability of occurrence of outlier grades to the
blocks. This step requires that you have an additional item in file 15
to store the probabilities. The item should be initialized with min =0,
max =1, precision = 0.01 or 0.001. Use Inverse Distance Weighting to
assign the probabilities.
<Select Group 3D-Modeling, Operations Calculations. Run procedure
pintrp.dat IDW interpolation
Use the same search parameters used in previous runs. Add one more
item to interpolate (CUIND using ROCKX). Interpolate only benches 24 to
28. Use OR1 and OR2 as extensions for run files and reports.>
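The interpolation step itself reduces to a weighted average. A minimal sketch of inverse distance weighting applied to indicator values follows; the search distance, power and coordinates are illustrative assumptions, not values read from the project files.

```python
import numpy as np

def idw(block_xyz, samp_xyz, samp_val, power=2.0, max_dist=210.0, eps=1e-10):
    """Inverse-distance-weighted estimate of one block from nearby samples."""
    d = np.linalg.norm(samp_xyz - block_xyz, axis=1)
    use = d <= max_dist
    if not use.any():
        return np.nan                      # no samples inside the search
    w = 1.0 / (d[use] + eps) ** power      # eps guards a coincident sample
    return float(np.sum(w * samp_val[use]) / np.sum(w))

# indicators in {0, 1} interpolate to a probability-like value in [0, 1]
samples = np.array([[10., 0., 0.], [0., 60., 0.], [80., 80., 10.]])
indicators = np.array([1.0, 0.0, 1.0])
print(idw(np.zeros(3), samples, indicators))
```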
Perform ORK
<Select Group 3D-Modeling, Operations Calculations. Run
Results:
PIKCUT report can be viewed by clicking on the file in MSDA
Files.
After the variograms are modeled, use the Save option to save the
model.
Pick all the saved model files and select whether you want to
make a variogram file for M624IK or M624MIK (GSLIB). Fill out the
panel and save the file at the end.
Introduction
Geostatistics refers to a collection of numerical techniques that
deal with the characterization of spatial attributes, employing
primarily random models. Geostatistical techniques are applied
to study the spatial distribution of values essential to geologists
and mining engineers, such as mineral content or the thickness of
an ore body, although their application is in no way limited to
problems in geology and mining. As a matter of fact, geostatistics
is applied today in virtually every earth-related science, from
environmental, agricultural and ecological applications to
meteorological forecasting and so on.
Historically, geostatistics can be considered as old as mining
itself. When miners were picking and analyzing samples, and
computing average grades weighted by the corresponding thickness
or area of influence, one may consider that they were applying
geostatistics without knowing it. However, it was Georges
Matheron of France who coined the word “Geostatistics.” After the
publication of Matheron’s Theory of Regionalized Variables in the
early 1960s, the term “Geostatistics” became popular.
The classical statistical methods are based on the assumption that
the sample values are realizations of a random variable. The samples
are considered independent. Their relative positions are ignored,
and it is assumed that all sample values have an equal probability of
being selected. Thus, one does not make use of the spatial correlation
of samples although this information is very relevant and essential
in certain data sets such as the ones obtained from an ore deposit.
In contrast, in geostatistics one considers the sample values
to be realizations of random functions. Under this hypothesis, the
value of a sample is a function of its position in the mineralization of
the deposit, and the relative positions of the samples are taken into
consideration. Therefore, geostatistics is concerned with spatial data.
That is, each data value is associated with a location in space and
there is at least an implied connection between the location and the
data value. Geostatistics offers many tools describing the spatial
continuity that is an essential feature of many natural phenomena
and provides adaptations of classical statistical techniques to take
advantage of this continuity.
The objective of Geostatistical techniques can be defined as:
1. to characterize and interpret the behavior of the existing
sample data;
2. to use that interpretation to predict likely values at locations
that have not yet been sampled.
In summary, geostatistics involves the analysis and prediction of
any spatial or temporal phenomena, such as mineral grades, quality
parameters, impurities, porosity, contaminant concentrations, and
so forth. The prefix geo- is usually associated with geology since
geostatistics has its origins in mining. Nowadays, geostatistics
is basically a name associated with a class of techniques used to
analyze and predict values of a variable distributed in space or
time. Such values are implicitly assumed to be associated with each
other, and the study of such a correlation is usually expressed as
the “spatial analysis of continuity” or “variogram modeling.” After
spatial analysis, predictions at unsampled locations are made using
“kriging” or they can be simulated using “conditional simulations.”
In summary, the main steps in a geostatistical study include:
(a) exploratory data analysis,
(b) spatial analysis of continuity―calculation and modeling of
variograms,
(c) making predictions―kriging or simulations.
The intent of this book is to make the reader familiar with the
main concepts of statistics, and the geostatistical tools available to
solve problems in geology and mining of an ore deposit. Therefore,
the emphasis will be the use of these tools through MineSight and
their practical application to resource and reserve estimation. The
majority of the material in this book is introductory and largely free
of mathematical formalism. It is based on a modified, but real, data set.
The solutions proposed in this book are therefore specific to the
particular data set used, and should not be taken as general recipes. It is hoped
that this book will provide the readers with practical aspects of
various statistical and geostatistical tools, and help prepare them to
tackle their problems at hand.
Organization of Sections
This book is organized to take the reader from the elementary
statistics to more advanced geostatistical topics in a natural
progression. Following this introductory section, Section 2 reviews
the basic statistical concepts and definitions. This section also covers
the theoretical model of distributions, random variables and Central
Limit Theorem.
Organization and presentation of geostatistical results are
vital steps in communicating the essential features of a large data
set. Therefore, Section 3 covers univariate, bivariate and spatial
descriptions for data analysis and display.
Section 4 is the first section that gets into spatial statistics. It
provides various ways of describing the spatial features of a data
set through the analysis of spatial continuity, including variogram
types, variogram modeling and cross validation of the results.
Geostatistics deals with the characterization of spatial attributes
also known as regionalized variables for which deterministic
models are not possible because of the complexity of the natural
processes. Therefore, Section 5 introduces the random processes and
variances. It covers the theory of regionalized variables, random
function models and the necessity of modeling. The question of why
probabilistic models are necessary to describe earth science processes
is discussed in this section.
Basic Statistics
This chapter is designed to provide a review of basic statistics
for those readers who have little or no background in statistics. It
is intended to familiarize the readers with the statistical concepts
providing them the necessary foundation for the subjects in the
remainder of the book.
Definitions
Statistics
Statistics is the body of principles and methods for dealing with
numerical data. It encompasses all operations from collection and
analysis of the data to the interpretation and presentation of the results.
Statistical analysis may involve the use of probability concepts
and statistical inference. This body of knowledge comes into play
when it is necessary or desirable to form conclusions based on
incomplete data. Statistics makes it possible to acquire useful,
accurate and operational knowledge from available information.
Geostatistics
Geostatistics is the application of statistical methods, or the
collection and analysis of statistical data, for use in the earth sciences.
Throughout this book, geostatistics will refer only to the statistical
methods and tools used on spatial data for resource analysis.
Universe
The universe is the source of all possible data. For our purposes,
an ore deposit can be defined as the universe. But problems often
arise when a universe does not have well-defined boundaries. In this
case the universe is not clearly located in space until other concepts
are defined.
Sampling Unit
The sampling unit or simply sample is the set of all those
observations actually recorded. A sample is a subset of a parent
population or a part of the universe on which a measurement is
made. This can be a core sample, channel sample, a grab sample etc.
When one makes statements about characteristics of a universe, he
or she must specify what the sampling unit is.
Support
Support is the characteristics of the sampling unit, which refers
to the size, shape and orientation of the sample. The support can
be as small as a point or as large as the entire field. A change in any
characteristics of the support defines a new regionalized variable.
For example, the channel samples gathered from anywhere in a drift
will not have the same support as the channel samples cut across the
ore vein in the same drift.
Population
The word population is synonymous with universe in that it refers
to the total category under consideration. It is the set of all samples
of a predetermined universe within which statistical inference is to
The median can be calculated easily once the data are ordered so
that $x_1 \le x_2 \le \cdots \le x_n$. The calculation is slightly different
depending on whether the number of data values, n, is odd or even:

$$M = x_{(n+1)/2} \ \text{ if } n \text{ is odd}; \qquad M = \tfrac{1}{2}\big[x_{n/2} + x_{(n/2)+1}\big] \ \text{ if } n \text{ is even} \tag{2.2.3}$$
The median can easily be read from a probability plot as we will
see in the next chapter. Since the y-axis records the cumulative
frequency, the median is the value on the x-axis that corresponds to
50% on the y-axis.
Mode
The mode is the value that occurs most frequently. The mode is
easily located on a graph of a frequency distribution. It is at the peak
of the curve, the point of maximum frequency. On a histogram, the
class with the tallest bar can give a quick idea where the mode is.
However, when the histogram is especially irregular, or when there
are two or more classes with equal frequencies, the location of the
mode becomes more difficult.
One of the drawbacks of the mode is that it may change with
the precision of the data values. For this reason, the mode is not
particularly useful for data sets in which the measurements have
several significant digits. In such cases, one chooses an approximate
value by finding the tallest bar on a histogram.
Minimum
The minimum is the smallest value in the data set. In many
practical situations, the smallest values are recorded simply as being
below some detection limit. In such situations, it does not make
much difference whether the minimum value assigned is 0 or some
arbitrary small value as long as it is done consistently. However, it is
extremely important, especially in drillhole sampling, not to assign
zero values to the missing data. They should be separated from the
actual data by either assigning negative values, or by alphanumeric
indicators. Once it is decided how to handle the missing assays, then
they can be used accordingly in the compositing stage.
Maximum
The maximum is the largest value in the data set. It is especially
important in ore reserve analysis to double check the maximum
value as well as any suspiciously high values, for accuracy.
This should be done to make sure that these values are real, not
typographical errors.
Both the minimum and the maximum values of a set of data are
valuable information about the data that give us the limits within
which the data values are distributed.
Quartiles
The quartiles split the data into quarters in the same way the
median splits the data into halves. Quartiles are usually denoted by
the letter Q. For example, Q1 is the lower or first quartile, Q3 is the
upper or third quartile, etc.
As with the median, quartiles can be read from a probability plot.
The value on the x-axis, which corresponds to 25% on the y-axis,
is the lower quartile and the value that corresponds to 75% is the
upper quartile.
Deciles, Percentiles, and Quantiles
The idea of splitting the data into halves or into quarters can
be extended to any fraction. Deciles split the data into tenths. One
tenth of the data fall below the first or lowest decile. The fifth decile
corresponds to the median. In a similar way percentiles split the data
into hundredths. The 25th percentile is the same as the first quartile,
the 50th percentile is the same as the median, and the 75th percentile
is the same as the third quartile.
Quantiles, on the other hand, can split the data into any
fraction. They are usually denoted by q, such as q.25 and q.75, which
correspond to the lower and upper quartiles, Q1 and Q3, respectively.
Measures of Variation
The other measurable characteristic of a set of data is the variation.
Measures of variation or spread are frequently essential to give
meaning to averages. There are several such measures. The most
important ones are the variance and standard deviation.
Variance
The sample variance, s2, describes the variability of the data
values. It is the average squared difference of the data values from
their mean. Mathematically it is the second moment about the mean.
It is calculated by the following formula:
$$s^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - m)^2 \tag{2.2.4}$$

The following formula can also be used and is more suitable for
programming:

$$s^2 = \frac{1}{n}\left[\sum_{i=1}^{n}x_i^2 - \frac{1}{n}\Big(\sum_{i=1}^{n}x_i\Big)^2\right] \tag{2.2.5}$$
Since the variance involves squared differences, it is sensitive
to outlier values. Therefore, data with skewed distributions
often have higher variances than data that are normally distributed.
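A quick numerical check, assuming numpy, confirms that the definitional form (2.2.4) and the computational form (2.2.5) agree; note that both use the population divisor n. The data values are made up.

```python
import numpy as np

x = np.array([2., 4., 4., 4., 5., 5., 7., 9.])       # illustrative data
n = len(x)
m = x.mean()
s2_def = np.sum((x - m) ** 2) / n                    # Equation 2.2.4
s2_comp = (np.sum(x ** 2) - np.sum(x) ** 2 / n) / n  # Equation 2.2.5
print(s2_def, s2_comp)                               # both print 4.0
```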
Standard Deviation
The standard deviation, s, is simply the square root of the variance
(√s2 ). It is often used instead of the variance since its units are the
same as the units of the variable described.
Interquartile Range
The interquartile range, IQR, is the difference between the upper
and the lower quartiles and is given by:
IQR = Q3 - Q1 (2.2.6)
Unlike the variance and the standard deviation, the interquartile
range does not use the mean as the center of the distribution.
Therefore, it is sometimes preferred if a few erratically high values
strongly influence the mean. However, IQR is not a well known
parameter in earth science applications.
Figure 2.3.1 Normal Distribution Curve: a) pdf, b) cdf
Suppose two random samples of size N are taken from different
distributions, and give sample means m1 and m2. Then to obtain an
approximate 95% confidence interval for the difference between the
two population means of the form (m1 - m2) ± δ requires that the
sample size should be approximately
$$N = \frac{8\,\sigma_1 \sigma_2}{\delta^2} \tag{2.6.2}$$
where σ1 and σ2 are the standard deviations for populations 1 and
2, respectively.
2.7 Bivariate Distribution
In the analysis of earth science data, it is often desirable to
compare two distributions or to know the pattern of dependence of
one variable (X) to another (Y). For instance, one may want to study
the relationship between rainfall and the yield of certain crop, or
the relationship between pollutants in the air and the incidence of
certain disease. Problems like these are referred to as problems of
correlation analysis, where it is assumed that the data points (xi, yi)
for i = 1, 2, ..., n are values of a pair of random variables whose joint
density is given by f(x, y).
This pattern of dependence is particularly critical if one wishes to
estimate the outcome of unknown X from the outcome of known Y.
For example, in a gold deposit, if the drill hole cores were sampled
for gold, but not always for silver, then one may want to estimate
the missing silver grades from gold grades if these two grades are
correlated. A strong relationship between two variables can thus
help us predict one variable if the other is known.
Just like the distribution of a single random variable X is
characterized by a cdf F(x) = Prob {X ≤ x}, the joint distribution of
outcomes from two random variables X and Y is characterized by a
joint cdf:
F(x,y) = Prob {X ≤ x, and Y ≤ y} (2.7.1)
In practice, the joint cdf F(x,y) is estimated by the proportion of
pairs of data values jointly below the respective threshold values x, y.
¹ A marginal distribution is a distribution that deals with only one
variable at a time.
weighting, then the mean will be the weighted average value. The
last column in the table reports the standard deviation of the samples
within the intervals. Such frequency tables can have additional
information such as the percent or the proportion of samples,
coefficient of variation and so on.
The histogram is the graphical representation of the same
information on a frequency table. Summary statistics are customarily
included in the histogram to complete the preliminary information
needed to study the sample data. The histogram of the data can be
generated either on a printer or a plotter, and is essential for visual
presentation. Figure 3.2.1 shows an example of a histogram using the
sample data.
Frequency tables and histograms are very useful in ore reserve
analysis for many reasons:
1. They give a visual picture of the data and how they are
distributed.
2. Bimodal distributions show up easily, which usually indicates
mixing of two separate populations.
3. Outlier high grades can be easily spotted.
Lag Statistics
variogram at 15-unit steps. For the first step (h=15), there are 4 pairs:
x1 and x2, or .14 and .28
x2 and x3, or .28 and .19
x3 and x4, or .19 and .10
x4 and x5, or .10 and .09
For the fourth step (h=60), there is only one pair: x1 and x5. The
values for this pair are .14 and .09, respectively. Therefore, for h=60,
we get γ(60) = (0.14 − 0.09)²/2 = 0.00125.
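The lag pairing above is easy to reproduce in code. The sketch below assumes equally spaced samples and the classical estimator γ*(h) = 1/(2N_h) Σ (vᵢ − vᵢ₊ₖ)²; it is an illustration, not the MineSight routine.

```python
import numpy as np

def experimental_variogram(v, step):
    """gamma(h) = sum of squared differences of pairs h apart / (2 * N_h)."""
    v = np.asarray(v, dtype=float)
    gamma = {}
    for k in range(1, len(v)):
        diffs = v[k:] - v[:-k]             # the N_h = len(v) - k pairs
        gamma[k * step] = float(np.sum(diffs ** 2) / (2 * diffs.size))
    return gamma

vals = [0.14, 0.28, 0.19, 0.10, 0.09]
print(experimental_variogram(vals, 15))
# h=15 uses the 4 pairs listed above; h=60 uses the single pair (.14, .09),
# giving gamma(60) = (0.14 - 0.09)**2 / 2 = 0.00125
```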
Note: The above figure can be used for both horizontal and
vertical directions; plan view applies to the horizontal direction,
section view applies to the vertical direction.
Figure 4.1.4 Variogram Window and Band Width Definitions
– Single Direction
Once the omni-directional variograms are well behaved, one can
proceed to explore the pattern of anisotropy with various directional
variograms. In many practical studies, there is some prior information
about the axes of the anisotropy. For example, in a mineral deposit,
there may be geologic information about the ore mineralization that
suggests directions of maximum and minimum continuity. Without
such prior information, a contour map of sample values may offer
some clues to such directions. One should be careful, however, in
relying solely on a contour map because the appearance of elongated
anomalies on a contour map may sometimes be due to sampling grid
rather than to an underlying anisotropy.
For computing directional variograms, one needs to choose a
directional tolerance (window and/or band width) that is large
enough to allow sufficient pairs for a clear variogram, yet small
enough that the character of variograms for separate directions is
not blurred beyond recognition. For most cases, it is reasonable to
use a window of about ±15°. As a rule of thumb, one can initially
use half of the incremental angle used for computing directional
variograms. For example, if the variograms were to be computed
at 45° increments, then a ±22.5° window would be appropriate.
Both the window and class size selected for any given direction can
be adjusted after the initial trial. The best approach is to try several
tolerances and use the smallest one that still yields good results.
In cases where a three-dimensional anisotropy is present, one can
apply a coordinate transformation to the data before computing the
sample variogram. The axes of the new or transformed coordinate
system are made to align with the suspected directions of the
anisotropy. This enables a straightforward computation of the
variograms along the axes of the anisotropy.
Existence of Drift
The drift can be defined simply as a general decrease or increase in
data values with distance in a specified direction. It is the average
difference between samples separated by a distance h; therefore it
can be positive or negative. High drift values with the same sign in
each lag can be an indication of drift in the direction in which the
variogram is computed. One may also be able to see a drift from the
shape of the variogram: a linear drift usually introduces a parabolic
component into the experimental variogram, which can become a
dominant feature of its shape.
Theoretical Variogram Models
In order to make practical use of the experimental variogram, it
is necessary to describe it by a mathematical function or a model.
There are many models that can be used to describe the experimental
variograms; however, some models are more commonly used than
others. These models are explained below.
Spherical Model
This model is the most commonly used model to describe a
variogram. The definition of this model is given by
right along the x-axis. A nugget effect term could also be added to
the equation.
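The model equations themselves are lost to the page layout above, so the sketch below restates the standard textbook forms of the spherical, exponential and linear models, with nugget c0, sill contribution c and range a. The factor of 3 in the exponential model is the common practical-range convention and is an assumption here, not necessarily the convention used elsewhere in this text.

```python
import numpy as np

def spherical(h, c0, c, a):
    """Rises as 1.5(h/a) - 0.5(h/a)^3, then flat at the sill c0 + c beyond the range a."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0.0, 0.0, g)      # gamma(0) = 0 by definition

def exponential(h, c0, c, a):
    """Approaches the sill c0 + c asymptotically; a is the practical range."""
    h = np.asarray(h, dtype=float)
    return np.where(h == 0.0, 0.0, c0 + c * (1.0 - np.exp(-3.0 * h / a)))

def linear(h, c0, slope):
    """No sill; grows without bound at a constant slope."""
    h = np.asarray(h, dtype=float)
    return np.where(h == 0.0, 0.0, c0 + slope * h)
```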
There are other theoretical models that can be used to describe
a variogram, such as the DeWijsian model and the cubic model.
However, these models are not as commonly used in practice as
some of the models described above. Figure 4.2.1 shows how various
theoretical variogram models look. Figure 4.2.2 shows a plot of sample
variogram and a spherical theoretical model fit to this variogram.
Similarly, σ₊ₕ is the standard deviation of all the data values whose
locations are +h away from some other data location:

$$\sigma_{+h}^{2} = \frac{1}{N}\sum_{j} v_j^{2} - m_{+h}^{2} \tag{4.4.11}$$

Like the means, the standard deviations σ₋ₕ and σ₊ₕ are usually
not equal in practice. The shape of the correlation function is similar
to that of the covariance function; therefore, it needs to be inverted to
give a variogram-type curve, which we call the correlogram. Since the
correlation function equals 1 when h = 0, the value obtained at each
lag is subtracted from 1 to give the correlogram.
Indicator Variogram
Indicator variograms are computed using indicators of 0 or 1
based on a specified cutoff. Therefore, to compute an indicator
variogram, one must transform the raw data into indicator variables.
These variables are obtained through the indicator function which is
defined as:

$$i(x; z_c) = \begin{cases} 1, & \text{if } z(x) \le z_c \\ 0, & \text{otherwise} \end{cases} \tag{4.4.12}$$
where:
x is location,
zc is a specified cutoff value,
z(x) is the value at location x.
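Equation 4.4.12 amounts to a one-line transform in code; a minimal sketch with made-up grades:

```python
import numpy as np

def indicator(z, zc):
    """Indicator transform of Equation 4.4.12: 1 where z(x) <= zc, else 0."""
    return (np.asarray(z) <= zc).astype(int)

grades = np.array([0.14, 0.28, 0.19, 0.10, 0.09])
print(indicator(grades, 0.15))   # -> [1 0 0 1 1]
```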
Cross Variogram
Like the variogram for spatial continuity of a single variable, the
cross variogram is used to describe the cross-continuity between two
variables. Cross variogram between variable u and variable v can be
calculated using the following discrete method:
majority of earth science data sets, one is forced to admit that there is
some uncertainty about how the phenomenon behaves between the
sample locations. Probabilistic random function models recognize
this fundamental uncertainty and provide tools for estimating values
at unknown locations once some assumptions about the statistical
characteristics of the phenomenon are made.
In a probabilistic model, the available sample data are viewed as
the result of some random process. From the outset, it seems like
this model conflicts with reality. The processes that create an ore
deposit are certainly extremely complicated, and our understanding
of them may be so poor that their complexity appears as random
behavior. However, this does not mean that they are random. It
simply means that one does not know enough about that particular
process. Although the word random often connotes unpredictable,
one should view the sample data as the outcome of some random
process. This will help the ore reserve practitioners with the problem
of predicting unknown values.
It is possible in practice to define a random process that might
have conceivably generated any sample data set. The application
of the most commonly used geostatistical estimation procedures,
however, does not require a complete definition of the random
process. It is sufficient to specify only certain parameters of the
random process.
With any estimation procedure, whether deterministic or
probabilistic, one inevitably wants to know how good the estimates
are. Without an exhaustive data set against which one can check the
estimates, the judgment of their goodness is largely qualitative and
depends to a large extent on the appropriateness of the underlying
model. As conceptualizations of the phenomenon that allow one
to predict what is happening at locations where there are no samples,
models are neither right nor wrong. Without additional data, no
proof of their validity is possible. They can, however, be judged
as appropriate or inappropriate. Such a judgment, which must
take into account the goals of the study and whatever qualitative
information is available, will benefit considerably from a clear
statement of the model.
Parameters of Random Functions
We can calculate several parameters that describe interesting
features of a random function. If the random function is stationary,
then the expected value and the variance can be used to summarize
the univariate behavior of the set of random variables. For
bivariate behavior of a stationary random function, the covariance,
correlogram or variogram can be used. These three parameters are
related by a few simple expressions. They provide exactly the same
information in a slightly different form. The correlogram and the
covariance have the same shape, but the correlogram is scaled so that
its maximum value is 1. The variogram also has the same shape as the
covariance function, except that it is inverted: while the covariance
starts from a maximum of σ² at zero distance and decreases to 0, the
variogram starts at 0 and increases to a maximum of σ².
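For a second-order stationary random function with sill $\sigma^2 = C(0)$, these relationships can be written compactly (standard expressions, summarized here for reference):

$$\rho(h) = \frac{C(h)}{C(0)}, \qquad \gamma(h) = C(0) - C(h) = \sigma^2\,[1 - \rho(h)]$$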
Declustering
There are two declustering methods that are generally applicable
to any sample data set. These methods are the polygonal and the
cell declustering methods. In both methods, a weighted linear
combination of all available sample values is used to estimate the
global mean. By assigning different weights to the available samples,
one can effectively decluster the data set.
Polygonal Declustering
In this method, each sample in the data set receives a weight
equal to the area of its polygon of influence. Figure 6.1.1 shows the polygons of
influence of a sample data set. The perpendicular bisectors between
a sample and its neighbors form the boundaries of the polygon of
influence. However, the edges of the global area require special
treatment. A sample located near the edge of the area of interest
may not be completely surrounded by other samples and the
perpendicular bisectors with its neighbors may not form a closed
polygon. One solution is to choose a natural limit, such as a property
boundary or a geologic contact. This can then be used to close the
border polygons. An alternative, in situations where a natural
boundary is not easy to define, is to limit the distance from a sample
to any edge of its polygon of influence. This has the effect of closing
the polygon with the arc of a circle.
By using the areas of these polygons of influence as weights in
the weighted linear combination, one can accomplish the necessary
declustering. Clustered samples receive small weights corresponding
to their small polygons of influence. On the other hand, samples with
large polygons can be thought of as being representative of a larger
area and therefore entitled to a larger weight.
Figure 6.1.1 Sample Map of Polygons of Influence
The matrix C consists of the covariance values Cij between the
random variables Vi and Vj at the sample locations. The vector D
consists of the covariance values Ci0 between the random variables
Vi at the sample locations and the random variable V0 at the location
where an estimate is needed. The vector λ consists of the kriging
weights and the Lagrange multiplier. It should be noted that the
random variables Vi, Vj, and V0 are the models of the phenomenon
under study, and these are parameters of a random function.
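As an illustration of how C, D and λ fit together, the sketch below assembles and solves the ordinary kriging system with numpy. The spherical covariance model, its parameters and the sample coordinates are illustrative assumptions, not the MineSight implementation.

```python
import numpy as np

def cov(h, c0=0.1, c=0.9, a=210.0):
    """Assumed model: covariance = sill - spherical variogram, with C(0) = c0 + c."""
    h = np.asarray(h, dtype=float)
    sill = c0 + c
    g = np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), sill)
    g = np.where(h == 0.0, 0.0, g)
    return sill - g

def ordinary_kriging(samp_xyz, samp_val, target_xyz):
    """Solve [C 1; 1' 0][lambda; mu] = [D; 1]; return estimate, variance, weights."""
    n = len(samp_val)
    hij = np.linalg.norm(samp_xyz[:, None, :] - samp_xyz[None, :, :], axis=2)
    hi0 = np.linalg.norm(samp_xyz - target_xyz, axis=1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(hij)
    A[n, n] = 0.0                          # unbiasedness row/column: weights sum to 1
    b = np.ones(n + 1)
    b[:n] = cov(hi0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    est = float(lam @ samp_val)
    kvar = float(cov(0.0) - lam @ cov(hi0) - mu)   # ordinary kriging variance
    return est, kvar, lam

pts = np.array([[0., 0., 0.], [50., 0., 0.], [0., 80., 0.]])
vals = np.array([0.55, 0.70, 0.48])
print(ordinary_kriging(pts, vals, np.array([20., 20., 0.])))
```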
Block Kriging
The only difference between block kriging and point kriging is that
the estimated point is replaced by a block. Point-to-block correlation
is the average correlation between sampled point i and all points
within the block. In practice, a regular grid of points within the
block is used. Consequently, the matrix equation includes “point-to-
block” correlations:
The block kriging system is similar to the point kriging system
given in Equation 7.2.1 above. In point kriging, the covariance matrix
D consists of point-to-point covariances. In block kriging, it consists
of block-to-point covariances. The block kriging system can therefore
be written as follows:
Kriging variance does not depend directly on the data. It depends
on the data configuration. Since it is data value independent, the
kriging variance only represents the average reliability of the data
configuration throughout the deposit. It does not provide the
confidence interval for the mean unless one makes an assumption
that the estimation errors are normally distributed with mean zero.
However, if the data distribution is highly skewed, the errors are
definitely not normal because one makes larger errors in estimating a
higher-grade block than a low-grade block. Therefore, the reliability
should be data value dependent, rather than data value independent.
For a fixed sampling size, different sampling patterns can produce
significantly different estimation variances. In two dimensions,
regular patterns are usually at the top of the efficiency scale in terms
of achieving a given estimation variance with the minimum number
of data, while clustered sampling is the most inefficient.
Block Discretization
When using the block kriging approach, one has to decide how to
discretize the local area or block being estimated. The grid of
discretizing points should always be regular. However, spacing
between points can be made larger in one direction than the other if
the spatial continuity is anisotropic. Figure 7.2.1 shows an example
of regularly spaced points to discretize a block.
If one chooses to use fewer discretizing points, computation time
for kriging will be shorter. This computational efficiency must be
weighed against the desire for accuracy, which calls for as many
points as possible.
It has been shown in practice that, in general, significant
differences may occur in the estimates using grids containing fewer
than 16 discretizing points. With more than 16 points, however, the
estimates are found to be very similar.
The following points should be considered when deciding on how
many points to use to discretize a block:
1. Range of influence of the variogram used in kriging.
2. Size of the blocks with respect to this range.
3. Horizontal and vertical anisotropy ratios.
Figure 7.2.1 Block discretization to represent the block with points at regular intervals.
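Generating such a regular grid of discretizing points is straightforward. A minimal sketch, assuming an axis-aligned block and user-chosen point counts per axis:

```python
import numpy as np

def discretize_block(center, block_size, ndisc):
    """Regular grid of ndisc = (nx, ny, nz) points centered inside the block."""
    center = np.asarray(center, dtype=float)
    size = np.asarray(block_size, dtype=float)
    axes = [(np.arange(n) + 0.5) / n * s - s / 2.0 for n, s in zip(ndisc, size)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    return center + np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

# e.g. 4 x 4 x 1 = 16 points in a 20 x 20 x 15 block
pts = discretize_block([10., 10., 7.5], [20., 20., 15.], (4, 4, 1))
print(pts.shape)   # (16, 3)
```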
used under another form to assess the quality of estimation. For
example, one can define the ratio $\sigma^2_{z^*}/\sigma^2_z$ as a smoothing
factor in order to show to what degree kriging reproduces reality in
terms of block variability.
Assumptions Made in Kriging
The following assumptions are made in ordinary kriging.
1. No drift is present in the data.
2. Both variance and covariance exist and are finite.
3. The mean grade of the deposit is unknown.
Desirable Properties of Estimators
When comparing different estimators of a random variable,
one should check certain properties of these estimators. An
estimator without bias is the most desirable. This condition, which
is commonly referred to as unbiasedness, can be expressed as follows:
$$E(Z - Z^*) = 0 \tag{7.4.1}$$

where Z* is the estimate and Z is the true value of the random
variable.
A good search strategy should include at least a ring of drill holes
with enough samples around the blocks to be estimated. However,
it should also have provisions for not extending the grades of the
peripheral holes to the areas that have no data.
Since most drilling is vertically oriented, increasing the vertical
search distance has more impact on the number of samples available
for a given block, than increasing the horizontal search distance.
If the vertical search is considerable for a given block, then there
might be a problem of having a large portion of the samples for the
block coming from the nearest hole, thus carrying the most weight.
This may cause excessive smoothing in reserve estimates. If the
circumstances warrant a large vertical search, then one solution
could be to limit the number of samples used from each individual
drill hole.
Octant or Quadrant Search
It is common, especially in precious metal deposits, to have
denser drilling in highly mineralized areas of the deposit. When
such clustering of the holes is present, it might be necessary to
have a balanced representation of the samples in all directions in
space, rather than taking the nearest n samples for the blocks to be
estimated. This can be achieved by either declustering the samples
before the estimation or by a simple octant or quadrant search in
which the number of samples in each octant or quadrant is limited to
a specified number during the interpolation.
The use of an octant or quadrant search usually improves the
results of the inverse distance weighting method more than those
of ordinary kriging, whose well-known "screening" of data performs
an internal declustering. An octant or quadrant search accomplishes
some declustering, and its effect is therefore more noticeable on the
method that does not decluster by itself.
Relevance of Stationary Models
A random function model is said to be first order stationary if
the mean of the probability distribution of each random variable is
the same, as explained in Section 5. Because this assumption is used
to develop the unbiasedness condition, the guarantee of unbiasedness
when the weights sum to one is limited to first order stationary
models.
Therefore, an easily overlooked assumption in every estimate is
the fact that the sample values used in the weighted linear combination
are somehow relevant, and that they belong to the same group or
population as the point being estimated. Deciding which samples
are relevant for the estimation of a particular point or a block may be
more important than the choice of an estimation method.
The decision to view a particular sample data configuration as an
outcome of a stationary random function model is strongly linked
to the decision that these samples can be grouped together. The
cost of using an inappropriate model is that statistical properties
of the actual estimates may be very different from their model
counterparts. The use of weighted linear combinations whose
weights sum to one does not guarantee that the actual bias is
zero. The actual bias will depend on several factors, such as the
appropriateness of viewing the sample data configuration as an
outcome of a stationary random function.
All linear estimation methods implicitly assume a first order
stationary model through their use of the unbiasedness condition.
Therefore, it is not only the ordinary kriging which requires first
order stationarity. If estimation is performed blindly, with no thought
given to the relevance of the samples within the search strategy, the
methods that make use of more samples may produce worse results
than the methods that make use of few nearby samples.
The NNK is a method that combines the strengths of the nearest
neighbor, inverse distance weighting and the ordinary kriging
methods. It is a method where the value of the nearest neighbor
sample is emphasized in determining the value of the blocks. This
emphasis is directly proportional to the variability of the deposit. In
this method, the OK weight assigned to the nearest neighbor sample
is increased by a certain proportion. To compensate for this increase, the
weights of the other samples in the neighborhood are lowered at the
same proportion. Therefore, the sum of the resulting NNK weights is
preserved to be one, to satisfy the unbiasedness condition.
The NNK weights are then obtained by adjusting the OK weights
(wtok) as follows:
Weight of the nearest sample = wtok + (1 - wtok) * f (8.4.1)
Weights of all other samples = wtok * (1 - f)
The f is the smoothing correction factor and has a value between 0
and 1. Thus, the NNK estimate is equal to OK estimate at the lower
end when f=0, and is equal to the polygonal estimate at the extreme
end when f=1.
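Equation 8.4.1 can be checked in a few lines. A minimal sketch, assuming the OK weights have already been computed and the index of the nearest sample is known:

```python
import numpy as np

def nnk_weights(wt_ok, nearest, f):
    """Equation 8.4.1: boost the nearest sample's OK weight by f, shrink the rest."""
    w = np.asarray(wt_ok, dtype=float) * (1.0 - f)
    w[nearest] = wt_ok[nearest] + (1.0 - wt_ok[nearest]) * f
    return w                               # still sums to 1 (unbiasedness preserved)

ok = np.array([0.5, 0.3, 0.2])             # OK weights, nearest sample first
for f in (0.0, 0.5, 1.0):                  # f=0 gives OK, f=1 gives the polygonal estimate
    print(f, nnk_weights(ok, 0, f))
```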
Area Influence Kriging
Ordinary kriging (OK) is not designed for grade estimation
in highly variable deposits, but practitioners keep using the
method even when it may not be suited to their cases. This is mainly
because the OK is practical, much easier to use and understand than
some advanced methods offered as an alternative. It is also robust,
especially when the kriging neighborhood is kept small. Therefore
most people are comfortable with the use of OK and its results.
The polygonal or nearest neighbor method simply assigns the
value of the nearest datum to the point or block being estimated.
The Area Influence Kriging (AIK) is a method
that combines some of the aspects of the polygonal and the
ordinary kriging methods. It is a technique where a sample value
is considered to be the primary starting point for the grade of the
blocks within its area of influence. The weights assigned to the
samples for a given block outside the area of influence of the nearest
sample then control the resulting grade for the block. Basically, in its
simplest form, the AIK has the following weighting scheme:
w1 = 1
∑ wj = 0 j=2,… ,N (8.5.1)
In this scheme, w1, the weight of the nearest sample is equal to
1, and ∑ wj, the sum of the weights of all other samples is equal to
0. N is the number of samples. Therefore, the sum of the resulting
AIK weights is preserved to be one, to satisfy the unbiasedness
condition (Arik, 2002a). Although w1 can have a value between 0
and 1, having w1 less than 0.5 is not recommended, as it defeats the
purpose of the algorithm.
$$i(x; z_c) = \begin{cases} 1, & \text{if } z(x) \le z_c \\ 0, & \text{otherwise} \end{cases} \tag{9.1.1}$$

where x is location, z_c is a specified cutoff value, and z(x) is the
value at location x.
The indicator function at a sampled point, i(x;zc), takes the simple
form shown in Figure 9.1.1. This function follows a Bernoulli
(two-point) probability law and takes only two possible values, 0 and 1.
Given an observed point grade z(x) at location x, there is either a 0
or 100 percent chance that this value will be less than or equal to the
cutoff zc. All sample values are flagged by an indicator function: 1
for values less than or equal to zc and 0 otherwise.
Essentially, the indicator function transforms the grade at each
sampled location into a [0,1] random variable. Indicators are
assigned to each sampled location in the deposit for a series of cutoff
grades. As the cutoff grade increases, the percentage of points below
the cutoff grade zc increases.
Figure 9.1.1 Indicator Function at Point x
The φ(A;zc) Function
The φ(A;zc) function is a cumulative distribution function (cdf)
built using the information from the indicator functions. It is
defined as the exact proportion of grades z(x) below the
cutoff zc within any area A in the deposit:
$$\phi(A; z_c) = \frac{1}{A}\int_A i(x; z_c)\,dx \ \in [0,1] \tag{9.2.1}$$
For each cutoff grade zc, one point of cumulative probability
function φ(A;zc) is obtained as shown in Figure 9.2.1.
In mining practice, local recoverable reserves must be assessed
for large panels within the deposit. Local recoverable reserves of
ore grade and tonnage can be estimated by developing the φ(A;zc)
function for each panel A. The indicator data, i(x;zc), provide the
source of information which can be used to estimate point local
recoverable reserves in the same way that point grade data are
used to estimate block grades. The similarity between these two
estimation procedures is demonstrated in the following relations:
Estimate of block V: $z_V = \frac{1}{V}\int_{x \in V} z(x)\,dx$ (9.2.2)

Indicator cdf for block V: $\phi(V; z_c) = \frac{1}{V}\int_{x \in V} i(x; z_c)\,dx$ (9.2.3)
Thus any estimator used to determine the block grades from point
grade data can also be used to determine the spatial distribution
from point indicator data.
systems for all cutoffs into one giant system and minimizing the
estimation variances. This system would contain constraints that
would force the order relations to hold. The fundamental drawback
of this method is that the system of equations would be too large to
be solved easily.
The second method, which closely approximates the results of
the first method, takes the results given by the indicator kriging
algorithm and fits a distribution function which minimizes the
weighted sum of squared deviations from the optimal solution,
as shown in Figure 9.6.1. Since this method is a bit complex, one
frequently employed solution is to set φ*(A;zk+1) = φ*(A;zk).
Figure 9.8.2 Affine Correction for Ore Grade and Tonnage Estimation
Figure 10.1.1 Histograms of Data Values from 1x1, 2x2, 5x5, and 20x20 Block Sizes.
3‑D mine model blocks have a much larger volume than those
of the data points. Therefore, certain smoothing of block grades is
expected after an interpolation procedure. However, the selection
of the method and the parameters used in the interpolation may
contribute to additional smoothing of the block grades.
Smoothing and Its Impact on Reserves
The search strategy, the parameters and the procedure used
to select data, can play a significant role in the smoothness of the
estimates. As a matter of fact, it is one of the most consequential
steps of any estimation procedure (Arik, 1990; Journel, 1989). The
degree of smoothing depends on several factors, such as the size
and orientation of the local search neighborhood, and the minimum
and maximum number of samples used for a given interpolation.
Of all the methods, the nearest neighbor method does not introduce
any smoothing to the estimates since it assigns all the weight to the
nearest sample value. For the inverse distance weighting method,
increasing the inverse power used decreases the smoothing because,
as the distance power is increased, the estimate approaches that of
the nearest neighbor method. For ordinary kriging, the variogram
parameters used, especially the increase in nugget effect, contribute
to the degree of smoothing.
The immediate effect of smoothing caused by any interpolation
method is that the estimated grade and tonnage of ore above a given
cutoff are biased with respect to reality. As the degree of smoothing
increases, the average grade above cutoff usually decreases. Also
with increased smoothing, the ore tonnage usually increases for
cutoffs below the mean and decreases for cutoffs above the mean.
Figure 10.2.1 illustrates the effect of smoothing on the grade-tonnage
curves by comparing 1x1 size blocks to 5x5 and 20x20 size blocks.
which would result in “correct” grade and tonnage above a given
cutoff when applied to our estimates? For one thing, if we know
the SMU size that will be applied during mining, we can determine
the theoretical or hypothetical distribution of SMU grades for our
deposit. Once we know this distribution or the grade‑tonnage
curves of SMUs, we can vary our search strategy and interpolation
parameters until we get close to these curves. The disadvantage
of this procedure is that one may end up using a small number of
samples per neighborhood of interpolation. This lack of information
may cause local biases. However, when we are trying to determine
the global and mineable resources at the exploration stage, we are
not usually interested in the local neighborhood. Rather, we are after
annual production schedules and mine plans (Parker, 1980).
Refining the search strategy and the kriging plan to control
smoothing of the kriged estimates works reasonably well,
depending on our goal. This can be accomplished by comparing
the grade‑tonnage curves from the estimated block grades to those
from the SMUs. Since the SMU grades are not known at the time
of exploration, we can determine the theoretical or hypothetical
distribution of SMU grades for our deposit based on a specified
SMU size.
Global Correction for the Support Effect
There are some methods available for adjusting an estimated
distribution to account for the support effect. The most popular ones
are the affine correction and the indirect lognormal correction. Both
methods have two features in common:
1. They leave the mean of the distribution unchanged.
2. They change the variance of the distribution by some
“adjustment” factor.
Affine Correction
The affine correction is a very simple correction method. Basically,
it changes the variance of the distribution without changing its
mean by simply squeezing values together or by stretching them
around the mean. The underlying assumption for this method is
that the shape of the distribution does not change with increasing or
decreasing support.
The affine correction transforms a value z of one distribution to a
value z' of another distribution using the following linear formula:

$$z' = \sqrt{f}\,(z - m) + m \tag{10.6.1}$$

where m is the mean of both distributions. If the variance of
the original distribution is σ², the variance of the transformed
distribution will be f·σ².
Indirect Lognormal Correction
The indirect lognormal correction borrows the transformation
that would have been used if the original and the transformed
distributions were both lognormal.
The idea behind this method is that while skewed distributions
may differ in important respects from the lognormal distribution,
change of support may affect them in a manner similar to that
described by two lognormal distributions with the same mean but
different variances.
The indirect lognormal correction transforms the z value of
one distribution to z’ of another distribution using the following
exponential formula:
$$z' = a\,z^b \tag{10.6.2}$$

where a and b are given by the following formulas:

$$a = \frac{m}{\sqrt{f\,cv^2 + 1}}\left[\frac{\sqrt{cv^2 + 1}}{m}\right]^b \tag{10.6.3}$$

$$b = \sqrt{\frac{\ln(f\,cv^2 + 1)}{\ln(cv^2 + 1)}} \tag{10.6.4}$$

In these formulas, cv is the coefficient of variation. As before,
m is the mean and f is the variance adjustment factor.
One of the problems with the indirect lognormal correction
method is that it does not necessarily preserve the mean if it is
applied to values that are not exactly lognormally distributed. In
that case, the transformed values may have to be rescaled, using the
following equation:
$$z'' = \frac{m}{m'}\,z' \tag{10.6.5}$$
where m’ is the mean of the distribution after it has been
transformed by equation 10.6.2.
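Both corrections are short enough to sketch in full. The functions below follow Equations 10.6.1 through 10.6.5, with the rescaling step of Equation 10.6.5 applied unconditionally for simplicity; the lognormal test data are made up.

```python
import numpy as np

def affine_correction(z, f):
    """Equation 10.6.1: squeeze values about the mean; variance becomes f * var(z)."""
    m = z.mean()
    return np.sqrt(f) * (z - m) + m

def indirect_lognormal(z, f):
    """Equations 10.6.2-10.6.4, then rescale with Equation 10.6.5 to restore the mean."""
    m = z.mean()
    cv = z.std() / m
    b = np.sqrt(np.log(f * cv**2 + 1.0) / np.log(cv**2 + 1.0))
    a = (m / np.sqrt(f * cv**2 + 1.0)) * (np.sqrt(cv**2 + 1.0) / m) ** b
    zp = a * z ** b
    return (m / zp.mean()) * zp

z = np.random.lognormal(mean=0.0, sigma=0.8, size=10000)
for correct in (affine_correction, indirect_lognormal):
    zc = correct(z, 0.6)
    print(correct.__name__, round(zc.mean(), 3), round(zc.var() / z.var(), 3))
```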
Figure 11.2.1 Schematic relationship between the actual sample values in a deposit and a
conditional simulation of that deposit.
As far as the dispersion of the simulated variable is concerned,
there is no difference between the simulated deposit and the real
deposit. The simulated deposit has the advantage of being known
at all points x and not only at the experimental data points xα. This
simulated deposit is also called a “numerical model” of the real deposit.
Simulation or Estimation
If one analyzes a regular grid of block kriging estimates for a
spatial attribute such as grade, he or she can observe an uneven
smoothing in the block estimates. This smoothing is inversely
proportional to the data density. Such distortion can be
visualized or confirmed in several ways:
1. The experimental variogram of the block estimates is
different from the original sample variogram. The block
variogram from the model has a smaller sill and a larger
range than the sample variogram, indicating an exaggerated
continuity in the estimated values.
2. The histogram of the samples is different from the histogram
of the estimated block values. Relative to the sample
histogram, the block histogram has fewer values in the tails
and a larger proportion close to the mean.
3. Cross validation of the sampling reveals that there is a
tendency of kriging to underestimate values above the
sample mean and to overestimate those below the mean. This
results in a regression line that has a less steep slope than
the ideal slope of 1.0 for the main diagonal. This distortion is
called conditional bias in the estimation.
Geostatistics addresses this smoothing in kriging by means of
stochastic simulation. The choice between kriging and simulation
must be decided based upon what is more relevant for each specific
application: minimum local estimation errors in a mean square sense
or correct spatial continuity. Therefore, estimation and simulation
are complementary tools. Estimation is appropriate for assessing
mineral reserves, particularly global in situ reserves. Simulation aims
at correctly representing spatial variability, and is more appropriate
than estimation for decisions in which spatial variability is a critical
concern and for risk analysis.
Any estimation technique such as kriging gives only one single
“best estimate”: best in some local measure such as unbiasedness
and minimum error variance, without regard to the global features
of the obtained estimates. For each unknown location in the deposit
being estimated, the technique generates a value that is close on
average to the unknown true value. This process is repeated for
every unknown point in the deposit without any consideration of the
spatial dependence that exists between true grades of the deposit. In
other words, the estimated values cannot produce the covariance or
the variogram computed from the data. Neither can an estimation
technique reproduce the histogram of the data. The only thing it can
reproduce is the data values at known data locations.
The criteria for measuring the quality of estimation are
unbiasedness and minimal quadratic error, or estimation variance.
There is no reason, however, for these estimators to reproduce the
spatial variability of the true grades. In the case of kriging, for
instance, the
Simulation Algorithms
There are many simulation methods available for use in the
conditional simulation of deposits. These include the turning bands
method, Gaussian sequential simulation, indicator sequential
simulation, L-U decomposition, probability field simulation, and
annealing techniques. The turning bands method was the first,
introduced in the early 1970s; since then many new techniques
have been introduced, particularly since the mid 1980s. No single
algorithm is flexible enough to allow the reproduction of the wide
variety of features and statistics encountered in practice.
Gaussian Sequential Simulation
One of the most straightforward algorithms for simulation of
continuous variables is the Gaussian sequential algorithm that is
based on classical multivariate theory. It assumes all conditional
distributions are normal and determined exactly by simple kriging
mean and variance. Since most original data are not multivariate
normal, they need to be transformed into normal scores and then
transformed back to the original distribution after the simulation.
The algorithm is computationally fast and easy to apply, and it
has an established record of successful applications.
Because of the unique properties of a multivariate Gaussian
model, Gaussian sequential simulation is perhaps the most
convenient method. It is accomplished following the basic
idea below:
The multi-variate pdf f(x1, x2,...,xn; z1, z2,...,zn), where xi’s denote
the location in the domain A and zi’s denote particular attribute
values at these locations, can be expressed as a product of univariate
conditional distributions as follows:
f(x1, x2,...,xn; z1, z2,...,zn) = f(x1; z1)
x f(x2; z2 | Z(x1) = z1)
x f(x3; z3 | Z(x1) = z1, Z(x2) = z2)
x ... (11.4.1)
x f(xn; zn | Z(xα) = zα, α = 1,...,n-1)
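A heavily simplified one-dimensional illustration of this sequential decomposition is sketched below: nodes are visited in random order, each conditional distribution is taken as Gaussian with simple kriging mean and variance, and every draw joins the conditioning set. The exponential covariance, the zero-mean assumption and the omission of normal-score transforms and search neighborhoods are all simplifications made for the sketch.

```python
import numpy as np

def sk_mean_var(cov, known_x, known_z, x0):
    """Simple kriging (zero mean): Gaussian conditional mean and variance at x0."""
    C = cov(np.abs(known_x[:, None] - known_x[None, :]))
    c0 = cov(np.abs(known_x - x0))
    lam = np.linalg.solve(C, c0)
    return lam @ known_z, cov(0.0) - lam @ c0

def sgs_1d(grid_x, data_x, data_z, cov, rng):
    """Visit unsampled nodes in random order; draw each value from its
    conditional pdf and add the draw to the conditioning data."""
    kx, kz = list(data_x), list(data_z)
    sim = {}
    for x0 in rng.permutation(grid_x):
        m, v = sk_mean_var(cov, np.array(kx), np.array(kz), x0)
        sim[x0] = rng.normal(m, np.sqrt(max(v, 0.0)))
        kx.append(x0)
        kz.append(sim[x0])
    return np.array([sim[x] for x in grid_x])

cov = lambda h: np.exp(-np.asarray(h, dtype=float) / 10.0)   # assumed covariance
rng = np.random.default_rng(7)
grid = np.arange(1.0, 6.0)                                   # nodes to simulate
print(sgs_1d(grid, np.array([0.0, 20.0]), np.array([1.2, -0.4]), cov, rng))
```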
Figure 11.5.1
References
Arik, A., 2002a, Area Influence Kriging, Mathematical
Geology, Vol. 34, No. 10 (to be published).
Arik, A., 2002b, “Comparison of Resource Classification
Methodologies With a New Approach,” APCOM
Proceedings, Phoenix, Arizona, pp. 57-64.
Arik, A., 2000, “Performance Analysis of Different Estimation
Methods on Conditionally Simulated Deposits,” SME
Annual Meeting, Salt Lake City, Utah. Preprint 00-
088.
Arik, A., 1999a, “An Alternative Approach to Resource
Classification,” APCOM Proceedings, Colorado
School of Mines, pp. 45-53.
Arik, A., 1999b, “Uncertainty, Confidence Intervals and
Resource Categorization: A Combined Variance
Approach,” Proceedings, ISGSM Symposium, Perth,
Australia
Arik, A., 1998, Nearest Neighbor Kriging: A Solution to
Control the Smoothing of Kriged Estimates. SME
Annual Meeting, Orlando, Florida. Preprint 98‑73.
Arik, A., 1996, “Application of Cokriging to Integrate
Drillhole and Blasthole Data in Ore Reserve
Estimation,” APCOM Proceedings, Computer
Applications in the Mineral Industries, Penn State
University, pp. 107-109.
Arik, A., Banfield, A. F., 1995, “Verification of Computer
Reserve Models,” SME Annual Meeting, Denver,
Colorado. Preprint 95-258.
Arik, A., 1992, “Outlier Restricted Kriging: A New Kriging
Algorithm for Handling of Outlier High Grade Data
in Ore Reserve Estimation,” APCOM Proceedings,
Tucson, Arizona, pp. 181‑188.
Arik, A., 1990, “Effects of Search Parameters on Kriged
Reserve Estimates,” International Journal of Mining
and Geological Engineering, Vol 8, No. 12, pp. 305-
318.
Dagdelen, K., Verly, G., and Coskun B., 1997, “Conditional
Simulation for Recoverable Reserve Estimation,” SME
Annual Meeting, Denver, Colorado. Preprint 97-201.
David, M., 1977, Geostatistical Ore Reserve Estimation,
Elsevier, Amsterdam.
Deutsch, C.V., Journel, A.G., 1992, GSLIB: Geostatistical
Software Library and User’s Guide, Oxford
University Press, New York.
Dowd, P.A., 1982, “Lognormal Kriging: The General Case,” Mathematical Geology, Vol. 14, No. 5.
Summary

Producing reliable ore reserve estimates for deposits with highly skewed grade distributions is a difficult task. Although some recent geostatistical techniques are available to handle the problems with these estimations, ordinary kriging and conventional interpolation methods are still widely used to estimate the ore reserves for such deposits. The estimation results can be very sensitive to the search parameters used during the interpolation of grades with these methods.
This paper compares the ore reserve estimates from ordinary kriging using several cases in which
certain search parameters are varied. The comparisons are extended to different mineralizations to
show the changing effects of these parameters.
Keywords: Geostatistics, kriging, ore reserve estimation.
Introduction
In order to achieve reliable reserve estimates, a good model of a deposit must be built to
represent the deposit as close to reality as possible. The method used in the ore reserve
estimation can be very important in the outcome of good modelling work. The selection
of a method to model an ore deposit depends on geologic considerations, variability and
volume of data, the specific purpose of the estimation, and the requirements for accuracy.
This selection may become a difficult task, especially for deposits with highly skewed
grade distributions. Although some recent geostatistical techniques such as indicator or
probability kriging are available to handle the estimation problems in these types of
deposits, ordinary kriging or conventional interpolation methods are still widely used to
calculate the ore reserves for such deposits. The results can be very sensitive to the search
parameters and assumptions used during these interpolations. According to Journel (1989), 'The definition of the search strategy, next to the prior stationarity decision and equal to the variogram model, is possibly the most consequential step of any geostatistical study.'
The objective of this paper is to study the effects of varying search parameters on the
estimation results. The method used in this study is the ordinary kriging method;
however, the results should also be applicable to other types of linear interpolation
methods.
Search parameters
In mining practice, one problem is to find the best possible estimator of the mean grade of a
block, using the assay values of the different samples inside or outside the block to be
estimated. Although the estimator itself is the controlling factor in the resultant grade
estimation, the amount and type of data included for a given block also influence the grade of
the block. A common example is given in Fig. 1. The two circles in this figure represent two different search radii for the block at the centre.

Fig. 1. A sample block with two different search strategies in an irregular drill pattern

There are eight holes around the block
numbered from 1 to 8 based on their distances to the block centre. Four of the holes are
within the inner circle, the other four are between the two circles. The grade of the sample at
each hole location is also shown in this diagram. Based on this grade distribution around the
block, it does not make any significant difference on the estimated grade of the block whether
one chooses to use the small or large search radius. However, there are certain cases where
there will be a significant difference between the selection of the small and the large search
distance. For example, if the sample at hole no. 5 had a grade 10-20 times more than the
0.055 value it currently has, then the estimated block grade using the large search radius
would be much higher than the estimated grade from using the smaller search radius. The
proper handling of such outlier high grade is crucial to ore reserve estimation. Since ordinary
kriging does not consider the grades when assigning weights to the samples, most people with
such problems resort to manipulating the data or the search distances for the blocks around
the very high mineralizations to compensate for the deficiencies of the technique.
Besides the effect of grade distribution around a block, there is also another common case
where search strategy will make significant differences in the reserves. This usually happens
when there is irregular drill spacing with some areas having in-fill holes and some areas not.
For example, referring back to Fig. 1, if we did not have the holes inside the inner circle, using
the larger search radius could mean adding a block of ore to the reserves. The same problems
exist also around the periphery of the drilling campaign, especially if the holes have ore
intersections.
In this paper, the term 'search parameter' will be applied not only to the search distances
but also to any parameter that controls the amount, type and distribution of data included
for a given block. Table 1 gives a list of such parameters most commonly used in reserve
estimations.
Table 1. A list of “search parameters” that control the amount and type of data used to interpolate the blocks (a brief selection sketch follows the table)

Search distances in X, Y and Z directions
2D or 3D search radius
Elliptical or ellipsoidal search distances
Dip and strike angles for the search plane
Search distances down dip, along strike or vertical to the search plane
Octant or quadrant search parameters
Minimum number of samples allowed
Maximum number of samples allowed
Maximum number of samples allowed from an individual hole
Minimum grade value
Maximum grade value (cut down or cut out)
Maximum distance allowed to the nearest sample value
Maximum extension distance allowed if there is only a single sample
Maximum extension distance allowed for grades higher than a specified value
Geologic codes used to restrict or match similar rock types or mineralizations
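To make the role of these controls concrete, the sketch below shows, in Python, how two of the listed parameters, an ellipsoidal search distance and a maximum number of samples, might be applied when gathering data for a single block. The function and argument names are illustrative and do not correspond to any particular software.

    import numpy as np

    def select_samples(dx, dy, dz, radii, max_samples):
        # dx, dy, dz: sample offsets from the block centre
        # radii: ellipsoidal search distances in X, Y and Z
        # Keep samples inside the ellipsoid, then cap their number by distance rank.
        dx, dy, dz = (np.asarray(v, float) for v in (dx, dy, dz))
        r2 = (dx / radii[0])**2 + (dy / radii[1])**2 + (dz / radii[2])**2
        inside = np.flatnonzero(r2 <= 1.0)                   # ellipsoidal search
        return inside[np.argsort(r2[inside])][:max_samples]  # max-sample control

    # Example: anisotropic search of 50 x 50 x 10 units, keeping at most 4 samples
    # keep = select_samples(dxs, dys, dzs, (50.0, 50.0, 10.0), 4)

Each of the other parameters in the table acts as a further filter of the same kind.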
Study approach
The approach to this study is to take one bench with blasthole data from an operating mine.
The blasthole locations are then kriged using the exploration drillhole data. Several runs are
made, each time selectively changing one of the search parameters, including the variogram
parameters. Finally, the estimation results are compared with the blasthole grades to see the
effect of varying interpolation parameters on the ore reserves and grades at a specified cutoff
grade.
The study is carried out using two data sets, each from a different mine. Both data sets have skewed grade distributions; however, one is twice as variable as the other. This allows a comparison of the varying effects of the search parameters on the reserve estimates in different types of mineralizations.
Fig. 2. The location of the drillholes and the boundary for the blastholes - Deposit 1 (Molybdenum) (1 ft = 0.3048 m)
The coefficient of variation of total molybdenum grades in this deposit is around 1. Figs 3
and 4 give the statistics and the frequency distributions of the blasthole and the drillhole
grades, respectively. Fig. 5 shows a scatter graph of blasthole grades vs. drillhole grades. The
drillholes selected from this scatter graph are within 7.6 m (25 ft) of the blastholes on the
selected bench and the benches above and below it. The statistics and the scatter graph results indicate good correlation and no apparent bias between the blasthole and the drillhole grades.
Drillhole variograms were developed for the intrusives and meta-sediments, the two major
rock types that contain mineralization in this deposit. Blasthole variograms were also
developed to be used in the study.
Using the variogram models developed and the drillhole data, the blasthole locations on
the selected bench are interpolated with ordinary kriging in several cases. In each case, a
search parameter is selectively changed from the previous one. The effect of this change on
the ore reserve estimates at 0.02 cutoff grade is then investigated by comparing the estimates
to the blasthole grades. The results of these case studies are given in Table 2. The numbers in
the parentheses in this table represent the percentage difference of the results in each case
from the blasthole grades.
Fig. 5. Blasthole vs. drillhole grades within 25' of distance on three benches - Deposit 1
(Molybdenum)
Fig. 6. The location of the drillholes and the boundary for the blastholes - Deposit 2
(Gold) (1 ft = 0.3048 m)
A variogram using the drillhole data was developed for the case study, without separation by geology or rock type, since such information was not available.
Using the variogram model developed and the drillhole data, the blasthole locations on
the selected bench are interpolated with ordinary kriging in several cases. In each case, a
search parameter is selectively changed from the previous one, following a similar format
tried in the molybdenum deposit. The effect of this change on the ore reserve estimates at 0.02
cutoff grade is again investigated by comparing the estimates to the blasthole grades. The
results of these case studies are given in Table 3.
There are six columns of information in Tables 2 and 3. Some of the column headings are self-explanatory; however, the others may need some explanation. The column 'No. of values @
0.02 cutoff’ gives the number of samples that are equal to or greater than 0.02 grade value for
each case. The columns 'Mean grade' and 'Std. dev.' give the average grade and the standard
deviation of the estimated grades at this cutoff, respectively. The column 'Metal quantity'
gives the product of the number of samples and the average grade. Finally, the column 'Corr.
coef.' gives the correlation coefficient between the drillhole grade estimates and the blasthole
grades based on the least square regression.
The numbers in the parentheses in these tables represent the percentage difference of the
results in each case from the blastholes. For example, referring to Table 3, the number of
blasthole grades at 0.02 cutoff is 1151 with an average grade of 0.065. In Case 1, these
numbers are estimated to be 1294 and 0.059, respectively. Hence, the amount of ore is
overestimated by 12%, and the average grade is underestimated by 10%. Using these
percentages, the performance of each case relative to the blasthole grades as well as each
other can be reviewed.
Fig. 9. Blasthole vs. drillhole grades within 15' of distance on three benches - Deposit 2
(Gold)
Other parameters
There are several other search parameters, such as those listed in Table 1, that will have some
effect on the estimation results of a grade interpolation. The most important thing in deciding
on a proper search strategy is to consider all the parameters and their effects on the
interpolation as a whole. Sometimes, the effect that one search parameter will have on the
estimation results is cancelled by another parameter that is used in the same run.
Conclusions
References
Buxton, B.E. (1984) Estimation variance of global recoverable reserve estimates, in Proceedings of
NATO. Lake Tahoe, Nevada.
David, M. (1977) Geostatistical Ore Reserve Estimation, Elsevier, New York.
David, M. (1988) Dilution and geostatistics, CIM Bulletin 81 (914), June, pp. 29-35.
Journel, A.G. (1989) A democratic research impetus, NACOG Geostatistics Newsletter 3 (3), Summer,
p. 5.
Journel, A.G. and Arik, A. (1988) Dealing with outlier high grade data in precious metals deposits,
Computer Applications in the Mining Industry, Balkema, Rotterdam, pp. 161-71.
Journel, A.G. and Huijbregts, Ch. J. (1978) Mining Geostatistics, Academic Press, London.
Kim, Y.C. (1988) Advanced geostatistics for highly skewed data, (short course notes) Department of
Mining and Geological Engineering, University of Arizona.
Knudsen, H.P., Kim, Y.C. and Mueller, E. (1978) Comparative study of the geostatistical ore reserve estimation method over the conventional methods, Mining Engineering, January 30(1), pp. 54-8.
Nearest Neighbor Kriging: A Solution to Control the Smoothing of Kriged Estimates

ABDULLAH ARIK
MINTEC, INC.
Abstract. This paper presents a new estimation method referred to as “Nearest Neighbor Kriging” (NNK). The method is a simple and practical tool to use when ordinary kriging (OK) is no longer reasonable because of its excessive smoothing of grades, especially for estimating recoverable ore reserves in deposits with highly skewed grade distributions. NNK is robust and globally unbiased like OK. However, it is more advantageous than OK because it can control the smoothing and conditional bias which have been the major pitfalls of OK in many grade estimation cases. The paper details the methodology of NNK and its application to a gold deposit.

Introduction

An ore reserve block model is the basis for various decisions in mine development and production. Accurate prediction of the mining reserves is essential for a successful mining operation. Estimating recoverable ore reserves for deposits with highly skewed grade distributions is especially a challenge to mining engineers and geologists. It requires a considerable amount of work to make sure that representative block grade distributions can be obtained to estimate the reserves with a certain confidence. There are many advanced geostatistical methods available to tackle some of the problems associated with grade estimates in these types of deposits. However, ordinary kriging and traditional inverse distance weighting methods are still widely used to estimate the ore reserves for such deposits. One reason for this is the fact that the alternative methods offered are either too complex or too exhaustive for the practitioners, who often do not have the expertise or the time to apply them.

The objective of this paper is to present a new estimation method which will be referred to as “the Nearest Neighbor Kriging” (NNK). The paper will review some of the interpolation procedures and the potential problems in reserve estimations with linear estimation techniques, such as ordinary kriging, in deposits with skewed grade distributions. This discussion will be followed by the methodology of NNK. Finally, its application to a gold deposit, and a comparison of grade-tonnage curves with other methods, will be presented.

Linear Estimation Techniques

All available techniques involve some weighting of neighboring measurements to estimate the value of the variable at an interpolation point or block. Among the traditional techniques are the polygonal or nearest neighbor method, inverse distance weighting, and triangulation methods. Among the geostatistical techniques, the most popular one is ordinary kriging. To ensure unbiasedness, all these methods require that the weights assigned to the sample data add up to one:

Σ wi = 1,  i = 1, ..., n

Then for all methods, the estimate is a weighted linear combination of the sample data z:

estimate = Σ wi zi,  i = 1, ..., n

Having arrived at some weighting scheme depending on the method, the decision remains as to what criteria to use in order to identify the data points that should contribute to the interpolation. This is the selection of a search strategy that is appropriate for the method used. Here, considerable divergence exists in practice, involving the use of fixed numbers, observations within a specified radius, quadrant and octant searches, elliptical or ellipsoidal searches with anisotropic data, and so on. Since varying these parameters may affect the outcome of the estimation considerably, the definition of the search strategy is one of the most consequential steps of any estimation procedure (Arik, 1990; Journel, 1989). (A brief coding sketch of such a weighted linear estimator follows the next section.)

Smoothing and Its Impact on Reserves

The search strategy can play a significant role in the smoothness of the estimates. The degree of smoothing depends on several factors, such as the size and orientation of the local search neighborhood and the minimum and maximum number of samples used for a given interpolation. Of all the methods, the nearest neighbor method does not introduce any smoothing to the estimates, since it assigns all the weight to the nearest sample value. For the inverse distance weighting method, increasing the inverse power used decreases the smoothing, because as the distance power is increased, the estimate approaches that of the nearest neighbor method. For ordinary kriging, the variogram parameters used, especially an increase in the nugget effect, contribute to the degree of smoothing.

The immediate effect of smoothing caused by any interpolation method is that the estimated grade and tonnage of ore above a given cutoff are biased with respect to reality. As the degree of smoothing increases, the average grade above cutoff usually decreases. Also, with increased smoothing the ore tonnage usually increases for cutoffs below the mean, and decreases for cutoffs above the mean.
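As a brief numerical sketch of such a weighted linear estimator, the Python snippet below implements inverse distance weighting; the normalization enforces the unbiasedness condition above, and raising the power illustrates the reduced smoothing just described. The distances and grades reuse the five-sample example of Table 1 further below; the function name is illustrative.

    import numpy as np

    def idw_estimate(dists, values, power=2.0):
        # Inverse distance weighting: normalized weights sum to one,
        # satisfying the unbiasedness condition; a higher power pushes
        # the estimate toward the nearest sample (less smoothing).
        w = 1.0 / np.asarray(dists, float) ** power
        w /= w.sum()                                   # enforce sum(w) = 1
        return float(w @ np.asarray(values, float))

    # Example (five samples from Table 1 below):
    # idw_estimate([12, 19, 20, 22.4, 23.3], [.461, .095, .133, .119, .114], power=2)
    # idw_estimate([12, 19, 20, 22.4, 23.3], [.461, .095, .133, .119, .114], power=6)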
There are a few ways to achieve this objective. One possible solution to obtain better recoveries is to correct for the smoothness of the estimated grades. This can be done by support correction. There are methods available for this, such as the affine correction or the indirect lognormal correction (Isaaks and Srivastava, 1989).

Similar or better results for recoverable reserves can be obtained by conditional simulation. A fine grid of simulated values at the sample level is blocked according to the required SMU size. This procedure is very simple, but it also assumes perfect selection (Dagdelen et al., 1997).

The use of higher distance powers in the traditional inverse distance weighting method is an attempt to reduce the smoothing of block grades during the interpolation of deposits with skewed grade distributions. On the geostatistical side, there are methods such as lognormal kriging, the lognormal short cut, outlier restricted kriging and several others which have been developed to get around the problems associated with the smoothing of ordinary kriging (David, 1977; Dowd, 1982; Arik, 1996). There are also advanced geostatistical methods such as indicator or probability kriging which take into account the SMU size to calculate the recoverable reserves (Verly and Sullivan, 1985; Journel and Arik, 1988; Deutsch and Journel, 1992). Each of these methods provides the practitioners with a variety of tools from which they can select, and apply where they see it appropriate, since each method has its own advantages as well as disadvantages.

Ordinary kriging (OK) is not designed for grade estimation in highly variable deposits. But why do practitioners keep using this method when it is not suited for their case? This is mainly because OK is robust, practical, and much easier to use and understand than some advanced methods offered as an alternative. Therefore people go to great lengths to justify the usage of OK because they are comfortable with it and its results. Whether they lack the time, resources or the expertise to use another “better” suited method is, of course, debatable. Nonetheless, it does not change the fact.

The nearest neighbor kriging (NNK), as the author calls it, is a new method which combines the strengths of the nearest neighbor, inverse distance weighting and ordinary kriging methods. It is a method where the value of the nearest neighbor sample is emphasized in determining the value of the blocks. This emphasis is directly proportional to the variability of the deposit. The method is essentially no more than a modified OK algorithm in which the OK weight assigned to the nearest neighbor sample is increased by a certain proportion while the weights of the other samples in the neighborhood are lowered in the same proportion. Therefore, the sum of the resulting NNK weights is preserved to be one, to satisfy the unbiasedness condition.

The proportion used in adjusting the OK weights to determine the NNK weights will be referred to as the smoothing correction factor. A reasonable value to use for this factor is the ratio of the SMU block variance to the sample variance, which is none other than the variance correction factor. The SMU block variance can be obtained simply using the well-known Krige's relation:

σ²p = σ²b + σ²p∈b
σ²b = σ²p - σ²p∈b
f = σ²b / σ²p

where σ²p is the sample (point) variance, σ²b the SMU block variance, and σ²p∈b the variance of points within blocks. The factor f has a value between 0 and 1. The NNK weights are then obtained by adjusting the OK weights (wt_ok) as follows:

Weight of the nearest sample = wt_ok + (1 - wt_ok) × f

Weights of all other samples = wt_ok × (1 - f)

Thus, the NNK estimate is equal to the OK estimate at the lower end, when f = 0, and is equal to the polygonal estimate at the extreme end, when f = 1.

An Example

Let us take the hypothetical case shown in Figure 1. For simplicity, there are only five data points for the sample block to be interpolated. Using a spherical variogram model with an isotropic range of 100, a sill of 1.0, and a nugget of 0.2, both OK and NNK runs were made. In the NNK run, a correction factor f equal to 0.5 was used. Table 1 summarizes the results obtained. As one can see from this table, the OK estimate is much lower than the NNK estimate.

In this example, a high grade sample was the nearest neighbor. Since OK weights are sample independent, we get the same weights even if we change the data values at the given points. So, since the NNK weights emphasize the nearest data value more, if the nearest value was a low grade, the resulting NNK block grade would have reflected that value more. This minimizes the over-estimating of low grade areas, and the under-estimating of high grade areas, that result from smooth estimates in highly variable deposits.

One may argue that the weight assigned to the nearest sample can be increased by decreasing the nugget of the variogram, or simply by decreasing the number of samples used. That is correct. However, there would be cases where that would not work as effectively as NNK. Furthermore, the loss of information from using fewer samples can also be questioned. For example, if we used a zero nugget effect for the variogram in our example with five samples, the interpolated grade from OK would increase from .2207 to .2535. Alternatively, if we used only the three closest samples, we would get .2678 as our OK estimate. By going to the extreme, we could have used only two samples and no nugget effect; then we would get .3281. This is now closer to the NNK estimate. However, all that came at the expense of losing important information from the other samples.

Many different scenarios are of course likely for any given block. For example, what happens if all the samples are the same distance from the block center? In that case, the estimate defaults to the OK grade unless the statistical distances for the samples are different. In other words, the nearest sample should then be based on the statistical (or the anisotropic) distance, not on the true distances.
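Because the adjustment is just a reallocation of the OK weights, it is only a few lines of code. The sketch below, in Python, applies the two formulas above to the five-sample configuration summarized in Table 1 that follows; the function name is illustrative.

    import numpy as np

    def nnk_weights(ok_weights, f):
        # ok_weights: OK weights ordered so that index 0 is the nearest sample
        # f: smoothing correction factor between 0 and 1
        w = np.asarray(ok_weights, float)
        nnk = w * (1.0 - f)                  # de-weight all samples...
        nnk[0] = w[0] + (1.0 - w[0]) * f     # ...then boost the nearest one
        return nnk                           # the weights still sum to one

    grades = np.array([.461, .095, .133, .119, .114])
    ok_w = np.array([.307, .221, .200, .146, .126])
    w = nnk_weights(ok_w, f=0.5)
    print(w.round(3))        # [0.653 0.111 0.1   0.073 0.063]
    print(ok_w @ grades)     # about 0.221, the OK estimate
    print(w @ grades)        # about 0.341, the NNK estimate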
Table 1. The weights assigned to five data points from the OK and NNK methods, and the resulting estimated grades for the sample block.

No.   Grade   Distance   OK Wts   NNK Wts
1     .461    12.0       .307     .653
2     .095    19.0       .221     .111
3     .133    20.0       .200     .100
4     .119    22.4       .146     .073
5     .114    23.3       .126     .063
Estimated Grade          .221     .341

Like any method, the NNK results can be improved through the selection of a proper search strategy and kriging plan. But, as an additional advantage, this method utilizes a smoothing correction factor in adjusting the weights in order to give realistic reserve estimates which better reflect the variability of block grades expected at the time of mining.

Application to a Gold Deposit

An NNK exercise was carried out in a structurally controlled gold deposit and the results were compared to those from OK, IDW and the polygonal or nearest neighbor method.

The gold mineralization in this deposit occurs in a specific geologic unit which strikes north and dips about 50 degrees west. The mineralization zone is therefore distinct. This zone is identified and digitized on each section of the model. However, there are waste and low grade bands within this zone which occur right within the high grade areas. These bands are too hard to map to be separated from the high grades.

The area studied is a mined-out portion of the mine, about 20,000 square meters in plan, on 10 benches. The bench height is 5 m. The block size is 5 x 10 m, corresponding to the easting and the northing of the model. There were 233 five-meter composites intersecting these benches in the study area. The average grade of these composites was 4.145 grams with a coefficient of variation of 1.044. The kriging plan that was used for the entire deposit had the following parameters.

Variogram Parameters:
Spherical model, nugget = 4, sill = 9.5
Ranges: 100 m down dip (50 degrees west),
75 m along strike,
8 m up/down the dip/strike plane.

Search Parameters:
Ellipsoidal search as defined by the variogram.
Minimum number of composites: 2
Maximum number of composites: 12

The blast holes in the area have spacings of 4-5 m. The first thing done in the study was to average the values of these blast holes that fall into the block model blocks. By using a minimum of two holes, the blocks containing the blast hole averages were identified and flagged. There were a total of 5500 blast hole blocks. Since the block size was small enough, it was also considered to be the size of the SMU for simplicity.

Block Model and Reconciliation Results

The entire deposit was already interpolated and the grades were assigned to 3-D blocks using OK, inverse distance weighting of power 3 (ID3), and the polygonal method (POLY). In addition to these methods, NNK grade assignment was done using the same kriging plan as for OK. The smoothing correction factor used was 0.5 and was applied to the nearest sample only. Table 2 gives the gold grade statistics of all 5500 mined-out blocks at zero cutoff from the different interpolation methods. The coefficient of variation (standard deviation divided by the mean) of the OK block values is the lowest among the methods used. Compared to the SMUs (or the blast hole averages), the variance of the OK blocks is 28% lower. On the other hand, the variance of the NNK blocks is comparable to that of the SMUs.

Figure 2 shows the grade-tonnage curves from POLY, OK, NNK and SMU (or the blast hole averages). The ID3 curves were not plotted to maintain the clarity of the graph. However, they plot somewhere between the OK and NNK curves. Table 3 summarizes the grade and tonnage recoveries at a 1-gram gold cutoff, and Table 4 at a 3-gram cutoff, from all the different methods applied.
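Grade-tonnage curves such as those in Figure 2 reduce to tons and mean grade above a series of cutoffs. A minimal sketch in Python follows; the names are illustrative, and per-block tonnages would normally come from block volume and density.

    import numpy as np

    def grade_tonnage(grades, tons, cutoffs):
        # Return (cutoff, tons above cutoff, mean grade above cutoff) triples,
        # i.e. the points of a grade-tonnage curve.
        g, t = np.asarray(grades, float), np.asarray(tons, float)
        curve = []
        for c in cutoffs:
            sel = g >= c
            tc = t[sel].sum()
            gc = (g[sel] * t[sel]).sum() / tc if tc > 0 else 0.0
            curve.append((c, tc, gc))
        return curve

    # Example: recoveries at the 1-gram and 3-gram gold cutoffs of Tables 3 and 4
    # grade_tonnage(block_grades, block_tons, [1.0, 3.0])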
Conclusions

NNK is a new interpolation method, modified from the OK algorithm. It is practical, robust and globally unbiased like OK. It is designed to help solve the smoothing problem of OK encountered in the resource estimation of highly variable deposits. The method is another tool for practitioners, who may find it useful in predicting more realistic recoverable reserves from their models developed at different stages of mining.

NNK is not a method to replace the advanced geostatistical methods. Rather, it should be looked at as an alternative method to try in those mines which use more traditional techniques such as the polygonal, IDW or OK methods. Having a smoothing correction factor which can be varied from one area or one geologic unit to another, like the powers of IDW, NNK can offer the flexibility and practicality that practitioners are looking for in achieving their objectives.

Each deposit is unique, and therefore the application and performance of any method may differ from one to another. By adopting the strong points of the polygonal, IDW and OK techniques, NNK has turned out to be a good performer in the deposit studied. Since it requires only a few lines of coding change in an OK algorithm, the author hopes that some practitioners will find easy access to the method, apply it to their problems and compare the results with those from their current methods in order to find out more about its performance.

References

Deutsch, C.V., Journel, A.G., 1992, GSLIB: Geostatistical Software Library and User's Guide, Oxford University Press, New York.

Dowd, P.A., 1982, “Lognormal Kriging: The General Case,” Mathematical Geology, Vol. 14, No. 5.

Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University Press.

Journel, A.G., Arik, A., 1988, “Dealing with Outlier High Grade Data in Precious Metals Deposits,” Computer Applications in the Mining Industry, Balkema, Rotterdam, pp. 161-171.

Parker, H.M., 1980, “The Volume-Variance Relationship: A Useful Tool for Mine Planning,” Geostatistics (Mousset-Jones, P., ed.), McGraw Hill, New York.

Rossi, M.E., Parker, H.M., Roditis, Y.S., 1994, “Evaluation of Existing Geostatistical Models and New Approaches in Estimating Recoverable Reserves,” SME Annual Meeting, Preprint #94-322.

Verly, G.W., Sullivan, J.A., 1985, “Multigaussian and Probability Kriging, Application to Jerritt Canyon Deposit,” Mining Engineering, Vol. 37, pp. 568-574.
XXVI APCOM, September 16-20, 1996
University Park, Pennsylvania. Chapter 17

Application of Cokriging to Integrate Drillhole and Blasthole Data in Ore Reserve Estimation

ABDULLAH ARIK
MINTEC, INC., 2590 N. Alvernon Way, Tucson, AZ 85712
ABSTRACT

An ore reserve block model is the basis for various decisions in mine development and production. The block model is initially built using the information from the drillhole data only. Once the mine is put in operation, the information from the blasthole data becomes available. Since the spacing of the blastholes is much closer than that of the exploration drillholes, the blasthole data gathered from the mine over the years become quite extensive, containing valuable information about the deposit.

In practice, the information from the blastholes is the basis for determining the ore and waste boundaries between and within mining blocks. In an ore reserve model, the valuation of blocks on and close to the pit face can obviously be improved by using blasthole data from nearby drilled and mined-out blocks in addition to using the widely spaced drillhole data. The ideal basis on which to integrate blasthole and drillhole data is to use cokriging.

INTRODUCTION

Success in mining is dependent upon sound decisions, adequate engineering and operational control, and all are dependent upon accurate prediction of the mining reserves. In order to achieve reliable reserve estimates, a good model of the in-situ resources must be built to represent the deposit as close to reality as possible. The method used in the reserve estimation can be very important in the outcome of good modeling work.

A block model is initially built using the information from the exploration drillhole data only. After the start of mine operations, the information from the blasthole data becomes available. Since the spacing of the blastholes is much closer than that of the exploration drillholes, the blasthole data gathered from the mine over the years become quite extensive, containing valuable information about the deposit. Therefore, not using the blasthole data for routine ore reserve valuations cannot be justified.

The objective of this paper is twofold: The first objective is to briefly explain the cokriging algorithm, then to show how the blasthole data can be integrated for use with the drillhole data via cokriging. The second objective is to document the performance of cokriging over ordinary kriging based on the reserve estimates and their comparisons to “true” grades for a mined-out area in a porphyry copper deposit.

COKRIGING

Cokriging is an extension of ordinary kriging. It is basically the kriging of several variables simultaneously. It can be used when there are one or more secondary variables that are spatially cross-correlated with the primary variable. The commonly used ordinary kriging utilizes only the spatial correlation between samples of a single variable to obtain the best linear unbiased estimate of this variable. In addition to this feature, cokriging also utilizes the cross-correlations between several variables to further improve the estimate.

Therefore, cokriging can be defined as a method for estimation that minimizes the variance of the estimation error by exploiting the cross-correlation between two or more variables. The estimates are derived using secondary variables as well as the primary variable (Isaaks, 1989; Kim, 1988).

Reasons For Cokriging

Cokriging is particularly suitable when the primary variable has not been sampled sufficiently. The precision of the estimation may then be improved by considering the spatial correlations between the primary variable and the other better-sampled variables (Journel, 1978). Therefore, having extensive data from blastholes as the secondary variable with the widely spaced exploration data as the primary variable is an ideal case for cokriging.

Cokriging Equation

The cokriging system of equations for two variables, the exploration drillhole data being the primary variable and the blasthole data being the secondary variable, is given in Table 1.

Table 1. Cokriging System of Equations for Two Variables. Primary Variable is Drillhole Data. Secondary Variable is Blasthole Data.

Steps Required for Cokriging

Since cokriging uses multiple variables, the amount of work involved prior to the cokriging itself is a function of the number of variables used. For cokriging of drill and blasthole data of the same item, the following steps are required (a sketch of the first step follows this list).

• The regularization of blasthole data into a specified block size. This block size could be the same as the size of the model blocks to be valued, or a discrete sub-division of such blocks. One thus establishes a new data base of average blasthole block values.
• Variogram analysis of drillhole data.
• Variogram analysis of blasthole data.
• Cross-variogram analysis between drill and blasthole data. This is done by pairing each drillhole value with all blasthole values.
• Selection of search and interpolation parameters.
• Cokriging.
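As an illustration of the first step, regularization can be as simple as averaging the blasthole values falling in each block and retaining the blocks with enough holes. The sketch below is a minimal Python version; the function name is illustrative, and the four-hole minimum mirrors the case study described later.

    import numpy as np

    def regularize_blastholes(x, y, values, block_size, min_holes=4):
        # Average blasthole values into square blocks of side block_size,
        # keeping only blocks that contain at least min_holes holes.
        ix = np.floor(np.asarray(x, float) / block_size).astype(int)
        iy = np.floor(np.asarray(y, float) / block_size).astype(int)
        blocks = {}
        for i, j, v in zip(ix, iy, values):
            blocks.setdefault((i, j), []).append(v)
        return {ij: float(np.mean(v))
                for ij, v in blocks.items() if len(v) >= min_holes}

    # Example: 20m x 20m blocks, as used in the case study below
    # bh_blocks = regularize_blastholes(bh_x, bh_y, bh_cu, 20.0)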
A cokriging exercise was carried out in a typical porphyry copper deposit and the results were compared to those from ordinary kriging. The area selected was a mined-out section of the deposit, in order to see the performance of both methods by comparing the corresponding estimated grades from each method to the “true” block values.

The deposit had 19 different rock types, which were outlined on each bench. Some of these rock types were combined together when the copper mineralization was similar. Therefore, there were a total of eight rock groups to study. A cokriging block model was built for this study using a 20m x 20m block size, with a bench height of 15m. The model consisted of only eight benches, enough to cover the mined-out area. On plan, the actual area covered was about 1000m by 1500m.

The blasthole data used for the variogram analysis and in the interpolation of the block grades came from the benches above this model. However, no blastholes were taken from the bench immediately above the model. In other words, a one-bench gap was left between the blastholes used and the top of the model.

The blasthole data have been averaged into 20m x 20m blocks for every bench. The average values of blastholes in these blocks with a minimum of four holes were retained for the study. The drillhole data base came from exploration holes which are spaced on average some 50m or more apart. The total copper variograms were developed for each rock group for both drill and blasthole data. Similarly, the cross variograms between the two were developed for cokriging.

Ordinary kriging was performed using only the drillhole data. Ordinary cokriging was performed using the drillhole data as the primary variable, and the blasthole data as the secondary variable. Each rock group was kriged and cokriged independently, using only the data that belong to this group, and using the corresponding variogram and interpolation parameters.

Correlations

Since cokriging is done for the same item, say gold, the two data sets, drillholes and blastholes, have to be compared for the same section of the mine to test for any global bias. If any global bias is evident, the reason for it must be investigated and the necessary adjustments made to one or both data sets.

The block variances for the relevant model block size, as calculated from the drillhole and blasthole variograms, have to be compared and should theoretically agree. If they do not, the two corresponding covariograms and the cross-variogram should be compared. Except for the short lags, these should also be very similar and, if possible, a single covariogram model should be fitted. The corresponding variogram and cross-variogram models should be defined with any necessary adjustments for obvious differences at short lags. These should then give matching estimates of the block variance (Krige, 1995).

Cokriging program search routines should be very flexible, to include separate search parameters for drillhole and blasthole data. Trial sets of kriging runs, using blasthole and drillhole data separately as well as together, should first be carried out for a limited number of blocks to determine the practical limit for extrapolation of blasthole data and the best drill and blasthole data patterns to use. This is to ensure, as much as possible, conditional unbiasedness, and error variances significantly lower than the block variance.

Table 2. Comparison of OK and CK by Rock Type, Based on the Linear Regression Between the Estimates and the “True” Grade of Blocks.

... was that the lower benches in the model did not get much influence from the blasthole data, making the CK estimates almost the same as the OK estimates.

In rock type 1, which is one of the major rock types in this deposit, the overall improvement in the correlation coefficient of the CK over the OK estimates was 5%. Table 3 summarizes the results for this rock type broken down by benches. Since the blastholes used in cokriging came from benches above the modeled area, one would expect better correlations for the upper benches of the model than for those benches at the bottom. The results from this table confirm this. The top two benches show a 6.4% improvement, whereas the fifth and the sixth benches show only a 1.6% improvement in correlations when CK is used instead of OK. The last two benches have very few blocks interpolated and the results are almost identical, showing the ever decreasing effect of the blastholes on the estimates as the blocks get farther and farther away from the blastholes.

Another way to check the performance of an estimation method is to look at the distribution of estimation errors. When the “true” grades of the blocks are known, the estimation error is the difference between the “true” and the estimated grade of the block. Figure 1 shows histograms of the estimation errors from the OK and CK methods overlaid for rock type 1. It also gives the means and standard deviations of the errors from these methods.

For an unbiased estimator, the expected value of the mean of the estimation errors is zero. One can see from Figure 1 that the means of the estimation errors are 0.022 and 0.007 for OK and CK, respectively. Although the CK mean is smaller than the OK mean, they both are pretty close to zero. Therefore, both are unbiased.

In the same figure, the standard deviations of the errors are 0.703 and 0.651 for OK and CK, respectively. The smaller the standard deviation, the more desirable the estimator, since more errors will be closer to zero. Therefore, one can see the improvement in the block estimates from using CK over OK in rock type 1. This is displayed in the figure, where the CK errors, shown in solid bars, are higher in frequency than the OK errors around zero, but lower in frequency than the OK errors away from zero.

Figure 1. Histograms of Estimation Errors from OK and CK in Rock Type 1.

CONCLUSIONS

In an ore reserve model, the valuation of blocks on and close to the pit face can obviously be improved by using blasthole data from nearby drilled and mined-out blocks in addition to using the widely spaced drillhole data. One of the methods to integrate the blasthole data with the exploration drillhole data is to use cokriging, which was demonstrated in this paper. From the case study done on the performance of CK with respect to OK, the following results were conclusive.

• Improvements in correlation coefficients from as low as 1% to as high as 16% have been achieved in different rock types when the estimates are compared to “true” block grades.
• Cokriging was especially effective when the blocks were closer to the blasthole data used.

The level of correlations between the estimates and the “true” block grades, the observed error variances, and the slopes of the regression lines have confirmed the advantages of cokriging over ordinary kriging estimates in this particular application.
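The Figure 1 comparison reduces to the mean and spread of the block-by-block errors. A minimal sketch, assuming arrays of “true” and estimated block grades (the names are illustrative):

    import numpy as np

    def error_stats(true_grades, estimates):
        # Estimation errors are true minus estimated block grades; an unbiased
        # estimator has a mean error near zero, and a smaller standard
        # deviation means more errors concentrated around zero.
        e = np.asarray(true_grades, float) - np.asarray(estimates, float)
        return e.mean(), e.std()

    # ok_mean, ok_std = error_stats(true_g, ok_g)   # reported above: 0.022, 0.703
    # ck_mean, ck_std = error_stats(true_g, ck_g)   # reported above: 0.007, 0.651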
ACKNOWLEDGEMENTS
The author would like to acknowledge the valuable notes and assis-
tance he has received from Prof. Danie Krige of South Africa.
REFERENCES
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics,
New York, Oxford University Press.
Journel, A.G., Huijbregts, Ch.J., 1978, Mining Geostatistics,
London, Academic Press.
Kim, Y.C., 1988, Advanced Geostatistics for Highly Skewed Data,
(Short Course Notes), Department of Mining and Geological
Engineering, The University of Arizona.
Krige, D.G., 1995, Private Communication.
Uncertainty, Confidence Intervals and Resource Categorization:
A Combined Variance Approach
Abdullah Arik
MINTEC, INC.
Tucson, Arizona, USA
ABSTRACT
The classification of ore resources into different categories has always been a topic of research
and debate. However, there is currently no standard practice of resource classification. It seems that each company or practitioner has its own methodology of classifying resources. This is partly because there is almost no established and easy way of measuring the confidence in the estimates. Therefore, their categorization becomes a subject of debate.
In this paper, a new approach is proposed to measure the uncertainty of the estimated block
grades using a “Combined Variance.” This variance has two components: One is the traditional
kriging variance, the other is the local variance of the weighted average. With this approach of
using “Combined Variance,” one essentially takes into account the local variation, as well as the
spatial data configuration around the block being estimated. An example is given to compute this
measure with two different data sets of the same configuration for a single block. A case study
for a gold deposit is presented to demonstrate how this new measure of variability can be helpful
for estimating local and global confidence intervals for grade and tonnage. Finally, the use of
“Relative Confidence Bound” index is demonstrated to categorize the resources or reserves at
different cutoffs.
INTRODUCTION
There is yet no easy or practical way of measuring the confidence in the estimates that can in
turn be used as a guide in classifying the resources. Therefore, currently it is hard to find a
standard practice of ore resource classification. Rather each company or practitioner has adopted
a methodology to categorize the resources within some guidelines, applying that method from
deposit to deposit with some refinement.
The problem of assessing uncertainty in grade estimates has been deliberated since kriging was
first introduced. Use of kriging variance was originally touted as all that was needed in order to
quantify the confidence in grade estimates. Since then, the use of kriging variance for such a
purpose has been questioned, and techniques such as the jackknife and conditional simulation
have been proposed to develop a more meaningful measurement of uncertainty in the estimates.
The objectives of this paper are threefold. The first objective is to propose an alternative measure
to assess the uncertainty of the estimated block grades. We will review the traditional kriging
variance and its limits as an indicator for uncertainty. This discussion will be followed by the
methodology of computing the proposed “Combined Variance.” The second objective of the
paper is to use the “Combined Variance” in determining confidence intervals for the grade
estimates. The last objective of the paper is to demonstrate the application of this new variance to
a gold deposit, and its use in categorizing the ore reserves or resources.
KRIGING VARIANCE
As an estimation technique, ordinary kriging distinguishes itself by its attempt to produce a set of
estimates for which the variance of the errors is minimized. Using the probabilistic random
function (RF) model, one can express the error variance as a function of RF model parameters
σ², Cij, and the weights wi (Isaaks and Srivastava, 1989):

σ²k = σ² + Σi Σj wi wj Cij - 2 Σi wi Ci0,   i, j = 1, ..., n   [Eq. 1]

where n is the number of samples, σ² is the overall sample variance, Cij are the sample-to-sample covariances, Ci0 are the sample-to-block covariances, and wi are the weights assigned to the samples. The minimized error variance through the use of the Lagrange multipliers technique is usually referred to as the ordinary kriging variance:

σ²ok = σ² - Σi wi Ci0 + µ,   i = 1, ..., n   [Eq. 2]
where µ is the Lagrange multiplier.
The kriging variance computed for a given point or block being estimated is essentially independent of the data values used in the estimation. It is purely a function of the spatial distribution and configuration of the data points, and of the version of kriging used. The only link between the kriging variance and the data values is through the variogram, which is global rather than local in its definition. For this reason, the kriging variance does not always give an accurate
reflection of the local variation. For instance, using the same variogram and the same data points
around the block being estimated will give the same kriging variance, regardless of the values of
the data points.
In an ordinary kriging program, the first component of σ²cv, the kriging variance (σ²k), is already computed. We will compute the second component of σ²cv, the local variance of the weighted average (σ²w), as follows:

σ²w = Σi wi² (Z0 - zi)²,   i = 1, ..., n   (n > 1)   [Eq. 3]

where n is the number of data used, wi are the weights corresponding to each datum, Z0 is the block estimate, and zi are the data values. If there is only one datum, σ²w is set to σ²k.

The Combined Variance (σ²cv) is then calculated as follows:

σ²cv = √(σ²k · σ²w)   [Eq. 4]
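In code, Eqs. 3 and 4 amount to a few lines. The Python sketch below is a minimal illustration; the function name and inputs are assumptions, and the kriging variance is taken as already computed by the kriging run.

    import numpy as np

    def combined_variance(weights, data, block_estimate, kriging_var):
        # Eq. 3: local variance of the weighted average
        w = np.asarray(weights, float)
        z = np.asarray(data, float)
        s2w = np.sum(w**2 * (block_estimate - z)**2)
        # Eq. 4: geometric mean of kriging variance and local variance
        return np.sqrt(kriging_var * s2w)

Since σ²w depends on the data values while σ²k depends only on their configuration, the product responds both to erratic local grades and to sparse data around the block.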
Table 1. First and second sample data set results
(Note: only the grade of Sample #1 was changed)
Figure 2. Contour plot of kriging variance of the blocks at the lower and upper quartiles on bench 225, shown with the drillhole intercepts on this bench
Similarly, Figure 5 shows the contour plot of Combined Variance at the upper quartile value on bench 225 of the model. If we zoom in on a portion of this plot, shown in Figure 6, we can see that a high-grade area surrounded by lower-grade composites has higher Combined Variances. Thus, any time there is greater variability in the surrounding data, the blocks will have higher Combined Variances, reflecting the uncertainty in the estimates.
CONFIDENCE INTERVALS
Confidence intervals are perhaps the most familiar way of reporting uncertainty. A confidence
interval accounts for our inability to determine the unknown block grade exactly. However, the
confidence intervals for individual blocks are not very easy to derive. There are many factors
contributing to the width and degree of symmetry of these intervals, such as data errors,
estimation errors and modeling errors. Although it is important to be able to establish a range
within which the unknown true value is likely to fall, the confidence intervals may not be very helpful if they are used on a per-block basis. Rather, they will be more useful for ranking the uncertainty associated with the estimates relative to each other.
Many variables encountered in resource analysis have positively skewed distributions. Even though not all positively skewed distributions are lognormal, most of these distributions can be described adequately by a lognormal model, depending on one's objective. For example, Figure 7 shows the log-scale cumulative probability plot of composite gold grades in the deposit studied. This is a typical case, where the majority of the grades plot on a straight line, with the exception of low grades and a few outlier values. Since we are interested in grades above an ore cutoff, it seems appropriate that a lognormal model can approximate the distribution of grades within an ore block.
Figure 3. Contour plot of Combined Variance of kriged blocks at the lower quartile value on bench 225, shown with the drillhole intercepts on this bench

Figure 4. Contour plot of Combined Variance of kriged blocks at the lower quartile value, zoomed in to show that similar composite values generate lower Combined Variance

Figure 5. Contour plot of Combined Variance of kriged blocks at the upper quartile value on bench 225, shown with the drillhole intercepts on this bench

Figure 6. Contour plot of Combined Variance of kriged blocks at the upper quartile value, zoomed in to show that variable composite grades generate higher Combined Variance
The mean of this model is taken as the kriged estimate (mk) for the block, and the variance is represented by the Combined Variance (σ²cv). Then we can calculate the corresponding log mean (α) and log variance (β²) of the lognormal distribution as follows:

α = ln mk - β²/2   [Eq. 5]

β² = ln [1 + (σ²cv / mk²)]   [Eq. 6]

We will first compute the confidence intervals in log space. At the 95% confidence level, we get:

Lower CI = αL = α - 2 √(β²/n)   [Eq. 7]

Upper CI = αU = α + 2 √(β²/n)   [Eq. 8]

The n in these equations is the number of composites used to estimate the block. Now we compute the confidence intervals in terms of the original variable:

Lower CI = mL = exp(αL + β²/2)   [Eq. 9]

Upper CI = mU = exp(αU + β²/2)   [Eq. 10]
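Eqs. 5 through 10 chain together directly; the Python sketch below is a minimal illustration (the function name and argument layout are assumptions):

    import numpy as np

    def lognormal_ci_95(mk, s2cv, n):
        # mk: kriged block estimate; s2cv: Combined Variance;
        # n: number of composites used to estimate the block
        b2 = np.log(1.0 + s2cv / mk**2)       # Eq. 6: log variance
        a = np.log(mk) - b2 / 2.0             # Eq. 5: log mean
        half = 2.0 * np.sqrt(b2 / n)          # Eqs. 7 and 8: +/- 2 standard errors
        m_lo = np.exp(a - half + b2 / 2.0)    # Eq. 9: lower bound, original units
        m_hi = np.exp(a + half + b2 / 2.0)    # Eq. 10: upper bound, original units
        return m_lo, m_hi

Note that the resulting interval is asymmetric about mk in the original units, as expected for a lognormal model.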
Figure 7. Log-scale cumulative probability plot of composite gold grades
Figure 8 shows the resulting histogram and statistics of the RCB index for gold, using the kriged blocks whose gold grades are ≥ 0.6 g/t. These index values display a skewed distribution, like the gold grades.

Figure 8. Distribution and statistics of Relative Confidence Bound (RCB) Index for gold ≥ 0.6 g/t
RESOURCE CATEGORIZATION
In order to classify the resources into proven, probable and possible categories, we need to
determine two values: One value to separate proven/probable, and another value to separate
probable/possible categories. As a rule of thumb, we will use the median RCB index of the
kriged blocks above the specified cutoff for the proven/probable limit. Similarly, we will use
twice the median index for the probable/possible limit. The median and twice the median RCB
index values for 0.6 g/t gold cutoff are 0.16 and 0.32, respectively. The corresponding RCB
index values for the 1.5 g/t cutoff are 0.12 and 0.24, respectively. The logic behind this is that, in the majority of cases, the RCB index distribution will be skewed. By using the median and twice the median, we will be able to assign the blocks in the tail of the distribution to the possible category.
In summary, we will categorize our reserves or resources into proven, probable and possible categories using the Relative Confidence Bound (RCB) index as follows (a small coding sketch follows this list):

Proven Resources: blocks with RCB index values ≤ the median RCB index
Probable Resources: blocks with RCB index values between the median and twice the median RCB index
Possible Resources: blocks with RCB index values > twice the median RCB index
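As a minimal illustration, the rule can be written as a short Python function; the names are assumptions, and the median is taken over the kriged blocks above the reporting cutoff, as described above.

    import numpy as np

    def categorize(rcb_values):
        # Proven: RCB index <= median; Probable: <= twice the median;
        # Possible: everything beyond twice the median.
        rcb = np.asarray(rcb_values, float)
        med = np.median(rcb)
        return np.where(rcb <= med, "proven",
               np.where(rcb <= 2.0 * med, "probable", "possible"))

    # At the 0.6 g/t cutoff the quoted thresholds are 0.16 and 0.32, so a
    # block with an RCB index of 0.20 would fall in the probable category.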
REVIEW OF THE RESULTS
Tables 2 and 3 report the resource categories at 0.6 and 1.5 g/t gold cutoffs, respectively. The
results are based on both Relative Confidence Bound Index (RCBI) and the “Distance to the
nearest composite (DTNC).” The reason for the inclusion of the latter method is to see how this
new categorization will compare to one of the conventional classification methods.
In order to prepare the “Distance to the nearest composite” results, we used a similar
methodology to determine the range of values to use for the classification of the resources.
Therefore, we applied the median distance (55m) to separate proven/probable categories, twice
the median “distance” (110m) to separate probable/possible categories for 0.6 g/t cutoff. The
corresponding “distance” values for 1.5 g/t cutoff were 51 and 102m, respectively.
Besides the tons and grade, also reported in Tables 2 and 3 are the average distance to the closest
composite and the average number of composites used in the interpolation. First we look at the
results from the RCB index categorization. We can see from these results that the average
distance gets larger from the proven to probable, and from the probable to possible categories.
Conversely, the average number of composites used per block gets less as we go from the proven
to probable, then from the probable to possible categories. This is what we should expect. This is
an inherent feature of this method through the use of the kriging variance in the calculation of the
Relative Confidence Bound Index.
Comparing the results from the two methods in these tables, we find that the tonnage figures are
more similar in each category than the grades. In general, the RCB index classification resulted
in much greater spread in grade between different ore categories. The proven category had much
higher grade than probable category, and the probable category had much higher grade than the
possible category. This was to be expected because of the definition of the RCB index. The question is whether all this makes sense. We can see that as the grade of ore approaches the cutoff we use, it becomes less likely to be classified as proven. For this reason, for example, relatively marginal-grade ore will end up in the possible category, indicating its higher uncertainty with respect to the cutoff used.
Of course the major assumption here is that the ore resource calculations were done correctly to
represent the in-situ grade distribution in the deposit properly. Then, knowing that one is actually accounting for the local variability of the grades in determining the resource categories, the RCB index technique should provide more presentable resource estimates and categories.
CONCLUSIONS
The concept of Combined Variance can be very useful in assessing uncertainty in the blocks.
Furthermore, the Relative Confidence Bound (RCB) index can be an effective tool for
classifying ore reserves or resources into proven, probable and possible categories. Both the
Combined Variance and RCB index are easy to compute and implement.
Table 2. Resource categories at a 0.6 g/t gold cutoff using the Relative Confidence Bound Index (RCBI) vs. “Distance to the nearest composite” (DTNC) methods

Ore             Tons x 1000        Grade g/t        Average Distance (m)   Ave. # of comps
Category        RCBI     DTNC      RCBI    DTNC     RCBI     DTNC          RCBI    DTNC
Proven          15,911   14,996    1.754   1.581    47.4     35.8          9.1     8.3
Probable        11,148   13,409    1.292   1.476    62.1     72.2          6.6     7.5
Possible         2,503    1,157    1.054   1.240    81.6    124.0          4.1     4.0
Proven + Prob.  27,059   28,405    1.564   1.532    53.4     53.0          8.1     7.9
Total           29,562   29,562    1.492   1.492    55.8     55.8          7.8     7.8

Table 3. Resource categories at a 1.5 g/t gold cutoff using the Relative Confidence Bound Index (RCBI) vs. “Distance to the nearest composite” (DTNC) methods

Ore             Tons x 1000        Grade g/t        Average Distance (m)   Ave. # of comps
Category        RCBI     DTNC      RCBI    DTNC     RCBI     DTNC          RCBI    DTNC
Proven           6,900    6,807    2.102   2.055    44.3     33.7          9.9     9.0
Probable         5,889    6,386    1.920   1.965    57.0     67.1          7.6     8.5
Possible           825      397    1.820   1.836    81.7    117.7          4.2     4.3
Proven + Prob.  12,789   13,217    2.018   2.011    50.1     50.0          8.8     8.8
Total           13,614   13,614    2.006   2.006    52.0     52.0          8.6     8.6
One of the drawbacks of using the Combined Variance and RCB index approach is that it requires the calculation of the kriging variance. Therefore, the approach is easiest to apply when ordinary kriging is used to assign the block grades. It is possible, however, to compute a Combined Variance even for a conventional method such as Inverse Distance Weighting (IDW), as long as the kriging variance is first computed using the same interpolation parameters. The IDW program must then be modified to read in the kriging variance for each block to calculate the resulting Combined Variance. Alternatively, the IDW program can be modified to write out the local variance component of the Combined Variance [Eq. 3]. This value is then multiplied by the kriging variance; the square root of this product gives the Combined Variance. It should be noted that the kriging variance should be based on the actual variogram parameters. If normalized parameters are used, the kriging variance must be multiplied by the variance of the samples to give the correct Combined Variance.
This technique of resource categorization using the Relative Confidence Bound (RCB) index has
been applied to other types of deposits, including copper, molybdenum, zinc, and silver. The
results have been more sensible than those from conventional methods such as the “Distance to the
nearest composite.” This is because the RCB index technique not only accounts for the local
variability, but also incorporates all the advantageous criteria for classifying resources: the
distance, the number and configuration of the composites around the block, and the
spatial correlation of the composites.
Another useful property of the RCB index is that, like the coefficient of variation that can be used to
compare different distributions, it can help compare the estimations in different deposits.
Generally, a higher mean or median RCB index value in a deposit is an indication of insufficient
drilling or the presence of high local variation in the values of the samples used for block
estimation.
REFERENCES
Adisoma, G.S., Hester, M.G., 1996, “Grade Estimation and Its Precision in Mineral Resources:
The Jacknife Approach,” Mining Engineering, Vol. 48, No. 2, pp 84-88.
Arik, A., 1990, “Effects of Search Parameters on Kriged Reserve Estimates,” International
Journal of Mining and Geological Engineering, Vol. 8, No. 12, pp. 305-318.
Crozel, D., David, M., 1985, “Global Estimation Variance: Formulas and Calculation,”
Mathematical Geology, Vol. 17, No. 8, pp 785-796.
Dagdelen, K., Verly, G., Coskun, B., 1997, “Conditional Simulation for Recoverable Reserve
Estimation,” SME Annual Meeting, Preprint #97-201.
Davis, B.M., 1992, Confidence Interval Estimation for Mineable Reserves, SME Annual
Meeting, Preprint #92-39.
Froidevaux, R., 1984, “Conditional Estimation Variances: An Empirical Approach,”
Mathematical Geology, Vol. 16, No. 4, pp 327-350.
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University Press.
Journel, A.G., Arik, A., 1988, “Dealing with Outlier High Grade Data in Precious Metals
Deposits,” Proceedings, Computer Applications in the Mineral Industry, Balkema,
Rotterdam, pp. 161-171.
Mercks, J.W., 1997, “Applied Statistics in Mineral Exploration,” Mining Engineering, Vol. 49,
No. 2, pp 78-82.
Parker, H.M., 1980, “The Volume-Variance Relationship: A Useful Tool for Mine Planning,”
Geostatistics (Mousset-Jones, P.,ed.), McGraw Hill, New York.
Technical Support Tips: Ellipsoidal Searches in MEDSYSTEM®
Preprint 00-88
A. Arik, Mintec, Inc., Tucson, AZ, USA
Method           Cutoff 1                Cutoff 2                Cutoff 3
                 %Tons   Grade   Metal   %Tons   Grade   Metal   %Tons   Grade   Metal
OK - adjusted    83.3    .547    45.57   52.8    .598    31.57   19.4    .693    13.44
  % Difference   +2.1    -1.4    +0.6    +9.5    -5.2    +3.8    -22.7   -2.4    -24.6
ID2              77.8    .559    43.49   55.6    .598    33.25   19.4    .713    13.83
  % Difference   -4.7    +0.7    -4.0    +15.4   -5.2    +9.3    -22.7   +0.4    -22.4
NNK              77.8    .564    43.88   52.8    .619    32.68   25.0    .703    17.58
  % Difference   -4.7    +1.6    -3.1    +9.5    -1.9    +7.5    -0.4    -1.0    -1.4
MIK              75.3    .581    43.75   50.0    .638    31.90   25.6    .739    18.92
  % Difference   -7.7    +4.7    -3.4    +3.7    +1.1    +4.9    +2.0    +4.1    +6.1
ORK              83.3    .544    45.32   50.0    .608    30.40   19.4    .710    13.77
  % Difference   +2.1    -2.0    +0.1    +3.7    -3.6    0.0     -22.7   0.0     -22.8
CSIM             77.8    .570    44.35   46.0    .657    30.22   24.4    .759    18.52
  % Difference   -4.7    +2.7    -2.1    -4.6    +4.1    -0.6    -2.8    +6.9    +3.9
Polygon          72.2    .610    44.04   36.1    .769    27.76   30.6    .816    25.0
  % Difference   -11.5   +9.9    -2.8    -25.1   +21.9   -8.7    +21.9   +14.9   +40.2

Method           Cutoff 1                Cutoff 2                Cutoff 3
                 %Tons   Grade   Metal   %Tons   Grade   Metal   %Tons   Grade   Metal
OK - adjusted    83.3    .571    47.56   41.7    .727    30.32   16.7    .960    16.03
  % Difference   +2.0    -1.4    +0.5    -1.7    -2.9    -4.6    -14.4   +1.5    -13.1
ID2              86.1    .570    49.08   47.2    .714    33.70   13.9    1.048   14.57
  % Difference   +5.4    -1.5    +3.8    +11.2   -4.8    +6.0    -28.7   +10.8   -21.0
NNK              80.6    .602    48.52   38.9    .819    31.86   13.9    1.270   17.65
  % Difference   -1.3    +4.0    +2.6    -8.3    +9.3    +0.3    -28.7   +34.2   -4.3
MIK              76.4    .627    47.90   36.9    .863    31.84   19.2    1.087   20.87
  % Difference   -6.5    +9.8    +1.3    -13.0   +15.2   +0.2    -1.5    +14.9   +13.1
ORK              86.1    .565    48.65   41.7    .732    30.52   16.7    .994    16.60
  % Difference   +5.4    -2.4    +2.8    -1.7    -2.3    -4.0    -14.4   +5.1    -10.0
CSIM             77.8    .597    46.45   40.5    .783    31.71   20.6    .996    20.52
  % Difference   -4.8    +3.1    -1.8    -4.5    +4.5    -0.2    +5.6    +5.3    +11.2
Polygon          69.4    .714    49.55   30.6    1.136   34.76   22.2    1.315   29.19
  % Difference   -15.1   +23.3   +4.8    -27.9   +51.7   +9.4    +13.8   +39.0   +58.2
ABSTRACT
Resource classification methodologies have always been a topic of research and debate. Both
traditional and geostatistical methods are used in practice. Because of the uniqueness of each
ore deposit, it seems almost impossible to have a single industry-accepted norm for resource
classification. Therefore, some practitioners have adapted the methodologies to suit the deposits
they are working on, but many simply use the methods they are comfortable with. Another reason
for the existence of varying classification methodologies is the lack of an easy and established way
to measure the confidence in the estimates. Therefore, their classification becomes a subject of
debate.
In this paper, a new approach is offered to classify the resources using a “Resource
Classification Index.” The approach using this index takes into account the local variation, the
spatial data configuration around the block being estimated, as well as the traditional distance
measures. A case study for a gold deposit is presented to demonstrate how this new index can be
used to classify the resources. The results from this approach are compared to those from two
popular classification methodologies.
INTRODUCTION
Over the past decades, a number of resource/reserve classification codes have been developed to
standardize the classification principles and reporting guidelines in the mineral industry. These
codes or principles can be found in the Society for Mining, Metallurgy and Exploration (SME)
guides, the U.S. Bureau of Mines and the U.S. Geological Survey Circulars, the U.S. Securities
and Exchange Commission (SEC) guidelines, and the Canadian Institute of Mining, Metallurgy
and Petroleum (CIM) bulletins. There are also the SAMREC code from South Africa and the
JORC Code from Australia, developed for the same purpose. The reference material for most of
these codes and guides can be found relatively easily on the appropriate organization's web site.
The basic concepts of the resource and reserve reporting guidelines revolve mainly around
geological assurance, technical factors, and economic viability. The following categories are
essentially the norm for the resources:
• Measured Mineral Resources
• Indicated Mineral Resources
• Inferred Mineral Resources
For the reserves, which are the mineable portion of the resources, the following categories and
terminologies are commonly used:
• Proven Mineral Reserves
• Probable Mineral Reserves
• Possible Mineral Reserves
The classification of resources and reserves must conform to the standard codes of reporting.
However, what classification scheme or practice is utilized is up to the practitioner, as
long as he or she uses acceptable criteria that satisfy both the regulatory authority and internal
company planning needs.
The objective of this paper is to compare two of the most popular classification criteria to a
new resource classification approach. This new approach, through the use of a “Resource
Classification Index,” takes into account the local variation and the spatial data configuration around
the block being estimated, as well as the traditional distance measures and other pertinent criteria.
The application of this method will be demonstrated on a gold deposit, and the results will be
compared to those from the selected traditional and geostatistical methods.
Distance Classification
In its simplest form, this classification is based on the distance from the centroid of the block being
interpolated to the nearest sample used in the interpolation. Obviously, the closer the sample is to a
block, the more confidence is assumed in the interpolated value of the block.
The decision of what distances to use to classify the resources is usually based on the geology,
drilling density, and/or the variogram ranges.
Variations of this method with additional requirements, such as a minimum number of samples or
drilling density measures, are possible. In some cases, the “distance to the nearest sample” is
replaced by the “average distance of all the samples” used in the interpolation of a block. The
advantage of the distance classification is its simplicity and its availability with any estimation
method.
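The rule is simple enough to sketch in a few lines of Python; the thresholds below are hypothetical placeholders, not values from the paper:

```python
def classify_by_distance(dist_nearest, proven_max, probable_max):
    """Classify a block from the distance to its nearest composite.
    Thresholds would normally come from geology, drilling density,
    and/or variogram ranges."""
    if dist_nearest <= proven_max:
        return "proven"
    elif dist_nearest <= probable_max:
        return "probable"
    return "possible"

# Example thresholds (hypothetical): 30 m for proven, 60 m for probable.
for d in (18.0, 45.0, 95.0):
    print(d, classify_by_distance(d, proven_max=30.0, probable_max=60.0))
```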
Combined Variance
In an ordinary kriging program, the first component of σ²cv, the kriging variance (σ²k), is already
computed. One can refer to any geostatistical textbook (e.g., Journel and Huijbregts, 1978) for the
calculation of this variance. We compute the second component of σ²cv, the local variance of the
weighted average (σ²w), as follows:

σ²w = Σ wi (zi − Z0)², i = 1, ..., n

where n is the number of data used, wi are the weights corresponding to each datum, Z0 is the
block estimate, and zi are the data values. If there is only one datum, σ²w is set to σ²k.
The Combined Variance (σ²cv) is then calculated as follows (Arik, 1999b):

σ²cv = √(σ²k · σ²w)

It should be noted that the kriging variance must be based on the original variogram
parameters so that it is on the same scale as the local variance. If the variogram parameters
are normalized, or relative parameters are used, the necessary adjustments must be made.
The Resource Classification Index (RCI) is then computed as

RCI = C · σcv / mk

where σcv is the square root of the Combined Variance, mk is the block grade computed from
kriging, and C is a calibration factor. Note that C is outside the square root sign.
The Calibration Factor (C) depends on which criteria we would like to include in the
computation of the Resource Classification Index. Each component of this factor is optional, and
other components can be added as appropriate. The following Calibration Factor (C) is
only a suggestion:
The advantage of the calibration factor is that it can be modified and customized for each
deposit, depending on the weight that must be given to certain measures.
In this equation, d = Dist/50, n = Nused/16, and q = Nquad/4 were used. The choice of a
“dismax” of 50, instead of 90, was based merely on our preference to increase the weight of the
distance variable. Figure 1 shows the cumulative probability plot of RCI values based on model
blocks whose AUK grades are greater than or equal to 0.01.
To classify the resources based on the Resource Classification Index, the RCI values
corresponding to the 50th and 95th percentiles were determined using the RCI cdf in Figure 1. These
values were 1.35 and 4.35, respectively. Therefore, 1.35 was used to separate the Measured/Indicated
categories, and 4.35 was used to separate the Indicated/Inferred categories. Table 3 reports the corresponding
resources based on the RCI variable at a 0.01 gold cutoff.
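A minimal sketch of this percentile-based classification, using synthetic RCI values in place of the real model (all names are hypothetical):

```python
import numpy as np

def rci_thresholds(rci_values, p_measured=50, p_indicated=95):
    """Pick classification thresholds from the RCI cdf, as in the case
    study: the 50th percentile separates Measured/Indicated and the
    95th percentile separates Indicated/Inferred."""
    return np.percentile(rci_values, [p_measured, p_indicated])

def classify_rci(rci, t1, t2):
    if rci <= t1:
        return "measured"
    elif rci <= t2:
        return "indicated"
    return "inferred"

# Example with synthetic RCI values for blocks above the 0.01 cutoff.
rng = np.random.default_rng(7)
rci_vals = rng.lognormal(mean=0.4, sigma=0.7, size=10_000)
t1, t2 = rci_thresholds(rci_vals)
print(t1, t2, classify_rci(2.0, t1, t2))
```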
Figure 3 Resource categories at a 0.01 cutoff using KSTD variable on Bench 4280
Figure 4 Resource categories at a 0.01 cutoff using RCI variable on Bench 4280
REFERENCES
Arik, A., 1999a, “An Alternative Approach to Resource Classification,” APCOM Proceedings,
Computer Applications in the Mineral Industries, Colorado School of Mines, pp. 45-53.
Arik, A., 1999b, “Uncertainty, Confidence Intervals and Resource Categorization: A Combined
Variance Approach,” Proceedings, ISGSM Symposium, Perth, Australia.
CIM, 1996, “Mineral Resource/Reserve Classification: Categories, Definitions and Guidelines,”
Canadian Institute of Mining, Metallurgy and Petroleum (CIM) Bulletin, Vol. 89, No. 1003,
pp. 39-44.
JORC, 1999, Australasian Code for Reporting of Identified Mineral Resources and Ore Reserves
(The JORC Code). The Joint Committee of the AusIMM, AIG and Minerals Council of
Australia.
Journel, A. G., Huijbregts, Ch. J., 1978, Mining Geostatistics, Academic Press, New York.
SAMREC, 2000, South African Code for Reporting of Mineral Resources and Ore Reserves (The
SAMREC Code). The South African Mineral Resource Committee under the Auspices of the
South African Institute of Mining and Metallurgy.
SME, 1999, A Guide for Reporting Exploration Information, Mineral Resources, and Mineral
Reserves. The Society for Mining, Metallurgy and Exploration (SME).
USBM and USGS, 1980, Principles of Resource/Reserve Classification for Minerals by the U.S.
Bureau of Mines and the U.S. Geological Survey, U.S. Geological Survey Circular 831.
[Mathematical Geology, Vol. 34, No. 7, 2002]
MINTEC, INC.
3544 E. Ft. Lowell Rd,
Tucson, AZ 85716
aa@mintec.com
ABSTRACT
This paper presents a modified ordinary kriging technique referred to as “Area Influence Kriging”
(AIK). The method is a simple and practical tool for more accurate prediction of global recoverable ore
reserves in any type of deposit. AIK performs well even in deposits with skewed grade distributions, where the
ordinary kriging (OK) results are unreasonably smooth. It is robust and globally unbiased like OK.
The AIK method is not intended to replace OK, which is a better estimator of the average grade of the
blocks. Rather, it aims to complement OK with its excellent performance in predicting recoverable resources,
which has been a major pitfall of OK in many resource estimation cases. The paper details the methodology of AIK
with a couple of examples. It also reports the results from its application to a gold deposit.
KEY WORDS: kriging, recoverable ore reserves, global recoverable resources, area of influence,
reconciliation
INTRODUCTION
An ore reserve block model is the basis for various decisions in mine development and production. Accurate
prediction of the mining reserves is essential for a successful mining operation. Estimating recoverable ore
reserves for deposits with highly skewed grade distributions is a particular challenge to mining engineers and
geologists. These deposits require a considerable amount of work to make sure that representative block grade
distributions can be obtained to estimate the reserves with a certain confidence. There are many advanced
geostatistical methods available to tackle some of the problems associated with grade estimates in these types of
deposits. However, ordinary kriging or even traditional inverse distance weighting methods are still widely used
to estimate the ore reserves for such deposits. One reason for this is the fact that the alternative methods offered
are either too complex or too exhaustive for practitioners, who often do not have the expertise or the time to
apply them to their problem.
The objective of this paper is to present a new estimation technique which will be referred to as “Area
Influence Kriging” (AIK). The paper will review some of the potential problems in reserve estimation with linear
estimation techniques, such as ordinary kriging, in deposits with skewed grade distributions. This discussion
will be followed by the methodology of AIK. Finally, its application to a gold deposit, and a comparison of the
prediction performance of AIK to OK, will be presented based on the blast hole data in a mined-out area of the
deposit.
LINEAR ESTIMATION TECHNIQUES
All available techniques involve some weighting of neighboring measurements to estimate the value of
the variable at an interpolation point or block. Among the traditional techniques are the polygonal or nearest
neighbor method, inverse distance weighting, and triangulation methods. Among the geostatistical techniques,
the most popular one is ordinary kriging. To ensure unbiasedness, all these methods require that the weights
assigned to the sample data add up to one:

∑ wi = 1, i = 1, ..., N (1)

Then, for all methods, the estimate is a weighted linear combination of the sample data z:

Estimate = ∑ wi zi, i = 1, ..., N (2)
Having arrived at some weighting scheme depending on the method, the decision remains as to what
criteria to use in order to identify the N data points that should contribute to the interpolation. This is the selection
of a search strategy that is appropriate for the method used. Here, considerable divergence exists in practice,
involving the use of fixed numbers of data, observations within a specified radius, quadrant and octant searches,
elliptical or ellipsoidal searches with anisotropic data, and so on. Since varying these parameters may affect the
outcome of the estimation considerably, the definition of the search strategy is one of the most consequential
steps of any estimation procedure (Journel, 1989; Arik, 1990).
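For concreteness, here is a minimal Python sketch (not from the paper) of Eqs. 1 and 2 using inverse distance weights and a simple radius search; the function name, data, and parameters are illustrative only:

```python
import numpy as np

def idw_estimate(block_xy, data_xy, data_z, search_radius, power=2.0):
    """Weighted linear estimate (Eqs. 1 and 2): inverse distance weights,
    normalized so they sum to one, using only data inside a search radius."""
    d = np.linalg.norm(np.asarray(data_xy) - np.asarray(block_xy), axis=1)
    mask = d <= search_radius
    if not mask.any():
        return np.nan                      # nothing found by the search
    w = 1.0 / np.maximum(d[mask], 1e-9) ** power
    w /= w.sum()                           # sum of weights = 1 (unbiasedness)
    return float(np.dot(w, np.asarray(data_z)[mask]))

xy = [(0, 0), (30, 10), (50, 60)]
z = [0.42, 0.58, 1.10]
print(idw_estimate((20, 20), xy, z, search_radius=100.0))
```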
AIK MATRICES
An OK program can easily be modified for AIK. The OK matrices will not be repeated here, since they can
be found in any major geostatistical textbook. Rather, the AIK matrices expanded from OK will be provided. Let us
first recall the OK system expressed in matrix form:

[K] · [λ] = [M] (4)

where [K] is the matrix of sample covariances, [λ] is the matrix of unknown weights, and [M] is the matrix of
sample-to-block covariances. We will expand matrix [K] by one row and one column to include indicators I(xj) that
tell us whether a sample is the nearest to the block or not. The following matrix will be designated as [K1]:
          [ C(v1,v1)  …  C(v1,vn)   1   I(x1) ]
          [    .      .     .       .     .   ]
   [K1] = [    .      .     .       .     .   ]
          [ C(vn,v1)  …  C(vn,vn)   1   I(xn) ]
          [    1      …     1       0     0   ]
          [  I(x1)    …   I(xn)     0     0   ]

   [λ] = [ λ1, …, λn, −µ1, −µ2 ]ᵀ        [M1] = [ C(v1,V), …, C(vn,V), 1, Ф(A;z(x1)) ]ᵀ

The matrix [K1] has (n+2) rows and (n+2) columns: the n×n sample covariances, the row and column of
ones from the unbiasedness condition, and the added indicator row and column. The vector [λ] holds the n
unknown weights plus the two Lagrange multipliers µ1 and µ2, and [M1] holds the sample-to-block covariances
followed by the right-hand sides of the two constraints.
The function Ф(A;z(x1)) is the proportion of the area of influence of the nearest datum. For example,
Ф(A;z(x1)) equal to 1 indicates that the nearest datum has 100% influence over its area.
The above AIK system for unbiased kriging of order 1 can then be expressed in matrix notation as

[K1] · [λ] = [M1] (6)

This system always yields a kriging weight for the nearest datum equal to Ф(A;z(x1)). The block grade
is calculated by adding the products of the data values and the corresponding kriging weights (Arik, 1992).
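A small numpy sketch of this expanded system may help. The covariance model below reuses the paper's example variogram (range 100, sill 1.0, nugget 0.15) as a toy stand-in, and the function names are hypothetical, but the matrix layout follows [K1] and [M1] above:

```python
import numpy as np

def spherical_cov(h, sill=1.0, rng=100.0, nugget=0.15):
    """Toy isotropic spherical covariance: sill + nugget at h = 0,
    declining to 0 at the range."""
    h = np.asarray(h, dtype=float)
    c = np.where(h < rng, sill * (1 - 1.5 * h / rng + 0.5 * (h / rng) ** 3), 0.0)
    return np.where(h == 0.0, sill + nugget, c)

def aik_weights(data_xy, block_xy, phi):
    """Solve the AIK system [K1].[lambda] = [M1]. The indicator row/column
    forces the weight of the nearest datum to equal phi = Ф(A;z(x1))."""
    data_xy = np.asarray(data_xy, float)
    n = len(data_xy)
    h = np.linalg.norm(data_xy[:, None, :] - data_xy[None, :, :], axis=2)
    hb = np.linalg.norm(data_xy - np.asarray(block_xy, float), axis=1)
    ind = np.zeros(n)
    ind[np.argmin(hb)] = 1.0                      # nearest-datum indicator
    K1 = np.zeros((n + 2, n + 2))
    K1[:n, :n] = spherical_cov(h)
    K1[:n, n] = K1[n, :n] = 1.0                   # unbiasedness row/column
    K1[:n, n + 1] = K1[n + 1, :n] = ind           # indicator row/column
    M1 = np.concatenate([spherical_cov(hb), [1.0, phi]])
    lam = np.linalg.solve(K1, M1)
    return lam[:n]                                # drop the two multipliers

xy = [(5, 5), (40, 20), (80, 90)]
w = aik_weights(xy, block_xy=(10, 10), phi=1.0)
print(w, w.sum())   # nearest weight equals phi; weights still sum to 1
```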
The choice of Ф(A;z(x1)) strongly affects the performance of AIK. Therefore, instead of using a rather
subjective and arbitrary value, it may be better to do some sensitivity analysis based on different Ф(A;z(x1)) values and
compare the grade-tonnage curves of the estimates obtained in the different cases to the theoretical or expected results
from SMU grades or the historical blast hole data. The application of this procedure to select Ф(A;z(x1)) can
minimize the bias on the estimated proportion and grade of ore above cutoff.
SOME EXAMPLES
Let us take a hypothetical case with 10 data points for the sample blocks to be interpolated. Figure 1 shows the
configuration of these data with respect to the two sample blocks selected. The arithmetic average of the point
data is 0.217, with a coefficient of variation (standard deviation divided by the mean) of 0.64. The blocks used are
10x10 m in size. The vertical dimension is ignored for simplicity. Of the blocks to be interpolated,
Block #1 has a high-grade datum closest to it. In contrast, Block #2 has a low-grade datum closest to it. The intent
is to show what happens to the estimated grades under different sample configurations around the blocks.
Using an arbitrary spherical variogram model with an isotropic range of 100, a sill of 1.0, and a nugget of
0.15, both OK and AIK interpolations were performed. The value of Ф(A;z(x1)) was assumed to be 1 in the AIK
case. Tables 1 and 2 summarize the results obtained. As one can see from these tables, the OK estimates are much
smoother than the AIK estimates, being closer to the average.
Now let us look at what happens to the values within the area of influence of the samples. Here we
select the two data points with values 0.570 and 0.133. Figure 2 displays the block estimates from AIK. There are
eight blocks within the area of influence of the datum 0.570 (with values 0.590, 0.578, 0.589, 0.574, 0.568,
0.569, 0.556 and 0.557), and six blocks within the area of influence of the datum 0.133 (with values 0.173,
0.150, 0.134, 0.125, 0.115 and 0.116).
One can observe from this example that, within the area of influence of a given datum, some blocks have
values higher than the value of this datum while others have lower values, depending on the configuration of the
surrounding data and their values. The mean of the blocks within the area of influence of the datum is virtually
the same as the value of this datum. This is the advantage of AIK over the polygonal method. Instead of having
block values that are exactly the same as the datum throughout its area of influence, we now have a
distribution of blocks whose mean is none other than the value of this datum.
Table 1. The weights assigned to 10 data points from OK and AIK methods, and the resulting estimated
grades for the sample Block #1
Table 2. The weights assigned to 10 data points from OK and AIK methods, and the resulting estimated
grades for the sample Block #2
Table 3. Comparison of OK and AIK grade and tonnage recoveries at 0.015 opt gold cutoff to Blast Hole
Averages
Table 4. Comparison of OK and AIK grade and tonnage recoveries at 0.030 opt gold cutoff to Blast Hole
Averages
CONCLUSIONS
Area Influence Kriging (AIK) is a new interpolation technique modified from the Ordinary Kriging (OK)
algorithm. It is practical, robust, and globally unbiased like OK. It is intended to be a new tool to help practitioners
predict more realistic global recoverable reserves from models developed at different stages
of mining. AIK eliminates the smoothing problem of OK encountered in the resource estimation of highly
variable deposits such as the one studied in this paper.
AIK is not a method to replace the advanced geostatistical methods that can estimate the local recoverable
reserves, i.e., the distribution of grades within the blocks or panels. Nor is it intended to replace OK. Rather, it
should be looked at as an additional tool that can be used along with OK in the ore reserve estimation process.
It may help practitioners fine-tune their OK estimates, or apply corrections to them, by providing alternative
results to compare against and contemplate.
Because of the use of the Ф(A;z(x1)) value as a pre-condition or assumption, the AIK technique may sacrifice the
local accuracy and precision of the estimates in favor of predicting more accurate recoverable estimates for the
global resources. However, since the choice of Ф(A;z(x1)) affects the performance of AIK and its results, this
can be used to its advantage. Accordingly, instead of using a rather subjective and arbitrary value, one might
consider doing a sensitivity analysis based on varying Ф(A;z(x1)) values, then comparing the grade-tonnage curves
of the estimates obtained in the different cases to the theoretical or expected ones from SMU grades or the historical
blast hole data. The application of this procedure can minimize the bias on the estimated proportion and grade of
ore above cutoff. Thus, taking advantage of the favorable points of the polygonal and OK methods, AIK may offer
the flexibility and practicality that mining engineers and geologists are looking for in achieving their objectives.
AIK has turned out to be a good performer in predicting the global recoverable reserves in the deposit
studied. However, it should be noted that each deposit is unique, and therefore the application and performance
of any method may differ from one to another. Since only a few lines of code need to change in an OK algorithm to
get AIK estimates, the author hopes that some practitioners will find easy access to the method, apply it to their
problems, and compare the results with those from their current methods or geostatistical simulation techniques in
order to find out more about its performance.
REFERENCES
Arik, A., 1990, “Effects of Search Parameters on Kriged Reserve Estimates,” International Journal of Mining
and Geological Engineering, Vol. 8, No. 12, pp. 305-318.
Arik, A., 1992, “Outlier Restricted Kriging: A New Kriging Algorithm for Handling of Outlier High Grade Data
in Ore Reserve Estimation,” APCOM Proceedings, Port City Press, Baltimore, Maryland, pp. 181-187.
Arik, A., 1998, “Nearest Neighbor Kriging: A Solution to Control the Smoothing of Kriged Estimates,” SME
Annual Meeting, Preprint #98-73.
Dagdelen, K., Verly, G., Coskun, B., 1997, “Conditional Simulation for Recoverable Reserve Estimation,” SME
Annual Meeting, Preprint #97-201.
David, M., 1977, Geostatistical Ore Reserve Estimation, Elsevier, Amsterdam.
Deutsch, C.V., Journel, A.G., 1992, GSLIB: Geostatistical Software Library and User’s Guide, Oxford
University Press, New York.
Dowd, P.A., 1982, “Lognormal Kriging: The General Case,” Mathematical Geology, Vol. 14, No. 5.
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University Press.
Journel, A.G., 1989, “A Democratic Research Impetus,” NACOG Geostatistics Newsletter 3, Summer, p. 5.
Journel, A.G., Arik, A., 1988, “Dealing with Outlier High Grade Data in Precious Metals Deposits,”
Proceedings, Computer Applications in the Mineral Industry, Balkema, Rotterdam, pp. 161-171.
Parker, H.M., 1980, “The Volume-Variance Relationship: A Useful Tool for Mine Planning,” Geostatistics
(Mousset-Jones, P.,ed.), McGraw Hill, New York.
Rossi, M.E., Parker, H.M., Roditis, Y.S., 1994, “Evaluation of Existing Geostatistical Models and New
Approaches in Estimating Recoverable Reserves,” SME Annual Meeting, Preprint #94-322.
Verly, G.W., Sullivan, J.A., 1985, “Multigaussian and Probability Kriging, Application to Jerritt Canyon
Deposit,” Mining Engineering, Vol. 37, pp 568-574.
Figure 1. The 10 sample data with the Blocks #1 and #2 to be interpolated
Figure 2. Block values from AIK within the area of influence of the sample data 0.570 and 0.133
Figure 3. Histogram and statistics of exploration hole data within the mined-out area
Figure 6. Histogram and statistics of Ordinary Kriging (OK) block values within the mined-out area
Figure 7. Histogram and statistics of Area Influence Kriging (AIK) block values within the mined-out
area
Figure 8. Grade-tonnage curves of Blast hole Averages (BHAVG), Area Influence Kriging (AIK), and
Ordinary Kriging (OK) resource estimates
MineSight® in the Foreground
Interpolation programs in MineSight® can use “relative coordinates” instead of actual Easting, Northing, and Elevation when performing the final ellipsoidal search and computing distances for inverse distance weights or kriging. Among relative coordinates, the one most often used is relative elevation. This article discusses using relative elevation in interpolation. The use of relative elevation is helpful for interpolating model items where mineralization follows a specific surface.
We will look at a rather common example: the relative elevation is the distance to a given surface. When this option is selected, the program uses composite and block elevations relative to the surface instead of the actual elevations. This methodology is similar to “unrolling a surface.” A new procedure relev.dat and a program gnrelev.exe are currently in testing and will soon be available. They are designed to calculate and store distances to a triangulated surface into both composite and model files.
When using relative elevations, you must be very careful in setting search parameters PAR1, PAR2, PAR3, PAR4, and PAR20. Ellipsoidal search parameter PAR4 is applied to the relative coordinates.
If you are using relative elevation, you can specify PAR20 as a preliminary limit for relative Z when selecting the composites that are to be used in the interpolation of each block. By default, PAR20 = PAR3. Before the search ellipsoid is applied, the set of composites is limited to the ones with X-coordinate within PAR1, Y-coordinate within PAR2, and relative Z value within PAR20 of the corresponding block values. When selecting PAR1, PAR2, and PAR20, make sure that the box defined by these parameters is big enough to contain the search ellipsoid.
However, before processing each given row, the interpolation program (e.g., m620v1, m624v1, etc.) loads an “initial pool of composites.” Only those composites with actual Eastings, Northings, and Elevations within PAR1/PAR2/PAR3 distances from the row are selected (Picture 1).
This makes the selection of PAR3 very important. It should be large enough to ensure that all composites with relative elevation within the PAR20 range of the blocks' relative elevations on a row are included.
Picture 1
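A minimal sketch of this pre-filtering logic, with hypothetical array layouts standing in for the composite and model files (the simplification here filters relative Z directly by PAR20 rather than actual elevation by PAR3):

```python
import numpy as np

def relative_elevation(z, surface_z):
    """Relative elevation: distance above (+) or below (-) a reference
    surface, evaluated at the point's X,Y location."""
    return z - surface_z

def initial_pool(comps, block, par1, par2, par20):
    """Pre-filter composites: X within PAR1, Y within PAR2, and relative Z
    within PAR20 of the block, before the ellipsoidal search is applied.
    `comps` columns: x, y, relz (relative elevation)."""
    dx = np.abs(comps[:, 0] - block[0]) <= par1
    dy = np.abs(comps[:, 1] - block[1]) <= par2
    dz = np.abs(comps[:, 2] - block[2]) <= par20
    return comps[dx & dy & dz]

comps = np.array([[100., 200., 5.], [180., 240., -60.], [120., 210., 12.]])
block = (110., 205., 0.)   # block X, Y and relative elevation
print(initial_pool(comps, block, par1=100., par2=75., par20=25.))
```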
For the surface shown in Picture 3, the elevation range of the whole bench is 250. If we take PAR1 and
PAR2 into consideration, we may reduce the elevation range estimate that is needed for the PAR3 computation.
For instance, if we use PAR1 = 100 and PAR2 = 75, we can limit the areas of interest to rectangles of size 100 x 75.
Manual inspection of the surface shows that the surface range on any such rectangle does not exceed 80.
Thus, we can reduce the value of PAR3 from (PAR20 + 250) to (PAR20 + 80).
Picture 3
MineSight® in the Foreground
Figure 3
The Filter tab (Figure 4) is where, for example, the different rock types that will control the variograms are specified. This window becomes active when the Use Application Filter box is checked. A complete set of variograms will be built using composites for each specific set of rock types, i.e., for each line in the filters box. If you are building 30 different variograms, i.e., 30 different directions, and you specify five different rock types, MSDA will build 150 variograms (30 for each rock type). Each filter (line) contains the rock type or types, a title suffix, and a file suffix. When MSDA creates the variograms for a given filter, it selects composites of the specified rock type or types, appends the title suffix to the main chart title, and appends the file suffix to the variogram files.
Figure 4
Details on azimuths, dips, and bandwidths are specified in the Directions tab. In the example, the initial variogram has an initial azimuth of zero and a window of 22.5 degrees. Variograms will be built at 45-degree azimuth increments in a total of four directions. Likewise, the initial dip is horizontal (0 degrees) and varies at 30-degree increments for a total of four steps. The search window is +/- 15 degrees wide. The bandwidths are activated by putting a check mark in the corresponding box; then the appropriate bandwidth can be added. The horizontal bandwidth in this example is 60 feet; no vertical bandwidth was added in this example.
Figure 5
There are options for coordinate rotation if it becomes necessary to apply one to the variograms. The Rotation type option includes several methods, some of the most common options being GSLIB and MEDS.
The Title and labels tab, as its name implies, is where the description of the variograms is entered (Figure 6). Besides the title, labels for the X and Y axes plus a description of the variogram can be added here.
Figure 6
The last tab, Options (Figure 7), has three additional parameters that can be activated by placing a check mark in the corresponding boxes and entering the project-related values. Implementation of these options depends upon the needs of the project. Normal variograms can be transformed to log using the equation shown in the first option. The second option is used to bracket the values of the item for which the variogram is being generated. The types of variograms that can be normalized are Normal and Covariance only, and they can be specified in the third option in the last panel. Finally, the boxes for Constant for relative estimator, Minimum distance to accept pairs, and Consider vertical if within must be filled in appropriately.
Figure 7
The Build Now button executes the run immediately, while the Queue option saves the run for eventual execution.
Upon execution of the run, the variograms are built and their file names appear in the lower-left window of the main panel (Figure 9). Before proceeding, it is recommended to review the file names and make sure that all of the requested variograms, plus two at the bottom of the list, are present. These last two variograms are a global variogram in all directions (e.g., cu bhs_global.var) and a global variogram in the horizontal direction (cu bhs_hrz_global.var).
Figure 8
Figure 10
Figure 11
The parameters for the variogram model can be modified manually by dragging the markers on the curve, or by keying parameters into the dialog at the upper right corner. When you modify the model by dragging the markers, MSDA instantly updates the parameters in the dialog, and vice versa. MSDA supports models with up to three structures.
The Lag Stats tab shows a summary of the variogram (Figure 12). This summary includes the range, number of pairs, distance, drift, etc., of the variogram. The corresponding mean of the pairs involved, as well as its standard deviation, are also shown.
Figure 12
Detailed variogram modeling options can be started from the main panel with the command Tools | Variogram 3D Modeling. The available variograms will be displayed as in Figure 13. If there are no variograms, they have to be highlighted in the project directory (use Microsoft® Windows Explorer), then dragged and dropped into the File area.
Figure 13
Users can drag one or more variograms from their MSDA project into the 3D Modeler's list at the lower right corner. To automatically fit a 3-D variogram model to this collection of variograms, select Auto-Fit from the 3D Model menu. As with the Auto-Fit tool for the individual variograms discussed earlier, several options are available, such as tolerance, weight by number of points, and so on. However, the default values typically work well.
Figure 14
The variogram contours can now be viewed on any plane by selecting the Rose tab in the variogram panel. This shows the trends of the mineralization represented by the variograms on that plane. The properties of the points, contour lines, etc., can be modified and tailored to the user's specifications.
Figure 15
The variogram contours can also be individualized for report or presentation purposes. Figure 16 shows an example. A variety of other modifications, as well as printing or exporting to a bitmap, can be implemented using the icons in the Toolbar.
Figure 16
The orientation of the various structures of the variogram model can also be displayed in plan and cross-sections (Figure 17). To access this option, use the command 3D Model | Display Structures on Planes. In this example, there is only one variogram structure; therefore, there are no displays for the second and third structures. Also, because we generated only the horizontal variograms and set the vertical range to a small distance, the EW and NS section views display flat ellipses with no dip.
Figure 17
There are many more helpful options about variography in MSDA that were not covered in this article. Additional descriptions of the tools related to variogram modeling, as well as of other MSDA options, will be published in future newsletters.
MSDA Custom Reports and 3-D Variogram Modeling
(22nd Annual Seminar)
Introduction
MineSight® Data Analyst (MSDA) is a new data analysis package for MineSight®,
replacing the existing m300 and m400 series programs and adding significant new
functionality. In this paper we will be presenting two new MSDA modules:
• Custom Reports
• 3-D Variogram Modeling
Custom Reports
Custom Reports is a program which allows users to design and build their own reports
using any of the data sources supported by MSDA, i.e., MineSight® assay, composite,
blasthole, and block model files, ODBC-compliant databases, and text files. Users have
complete control over the choice of fields, statistics, filters, layout, and style of the report.
Each report is an HTML file which may be viewed in a web browser, Microsoft® Word, or
Microsoft® Excel. A sample is shown in Figure R-1.
Figure R-1
Functional Overview
Custom Reports works in two stages, as shown in Figure R-2. The first stage is Report
Builder, which creates a report matrix file of values according to the user's criteria. For
example, one may build a report matrix based on the mean, median, and standard deviation
of cu, pb, zn, and mo data, by copper cutoff grade and rock type. The actual selection of
statistics, fields, and filters (criteria) is entirely up to the user. Report Builder computes
all combinations of these values. In other words, it computes all statistics of all fields
according to all filters (criteria) and stores the results in a report matrix file (.mrb)
in XML format. If the data source is large, e.g., a large block model, and the criteria
are complicated, this step may be quite time consuming due to the large number of
computations involved.
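The combinations logic can be sketched in a few lines of Python with pandas (hypothetical field names and synthetic data; the real report matrix is stored as XML in an .mrb file):

```python
import numpy as np
import pandas as pd

# Hypothetical data source: two grade fields plus a rock-type code.
df = pd.DataFrame({
    "cu": np.random.default_rng(1).lognormal(-1.2, 0.6, 500),
    "mo": np.random.default_rng(2).lognormal(-3.0, 0.8, 500),
    "rock": np.random.default_rng(3).integers(1, 4, 500),
})

fields = ["cu", "mo"]
stats = {"mean": np.mean, "median": np.median, "std": np.std}
cutoffs = [0.1, 0.2, 0.3]   # copper cutoff grades

# All combinations of statistic x field x filter, like Report Builder.
rows = []
for cut in cutoffs:
    for rock, grp in df[df["cu"] >= cut].groupby("rock"):
        for f in fields:
            for sname, fn in stats.items():
                rows.append((cut, rock, f, sname, fn(grp[f])))
matrix = pd.DataFrame(rows, columns=["cutoff", "rock", "field", "stat", "value"])
print(matrix.head())
```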
The second stage is Report Viewer, in which the user extracts data from the report file
and creates a readable HTML file. For example, using the report file mentioned in the
Figure R-2
A second option for Report Viewer is to display a chart rather than an HTML report.
For example, based on the same report matrix file, we could display a graph of copper
grade by cutoff with one series (curve) per rock type. This may be considered just
another view of the report file. Like the HTML reports, charts are created almost
instantaneously.
Report Builder
In the Report Builder stage the user specifies the report content and builds a report
matrix file with an .mrb extension. This step is straightforward, but can be time
consuming to run if the data source is large and the filters are complicated.
There are two fundamental types of report which one may build: univariate and
bivariate. For univariate reports, one must specify:
• General info such as titles and chart style file (Figure R-3).
• Weighting info, used when you wish to weight statistics, e.g., by length or by tonnage.
When this option is used, the total weight (e.g., total tonnage) is available on the
report (Figure R-4).
• A list of fields and field expressions (Figure R-5).
• A list of statistics, such as mean and standard deviation (Figure R-6).
Figure R-3
Figure R-4
Figure R-5
Figure R-6
Figure R-7
Figure R-8
Figure R-9
• Cutoff Type I: Filters of this type are designed to filter entire records. For example, if we specify a filter with a cutoff grade
of 0.35% cu, then a record would be wholly accepted if it had cu >= 0.35%, and wholly
rejected otherwise. This is probably the case with which most users are familiar. For
example, if your ore cutoff is 0.35% cu, and you want ore stats for several metals, this
is the approach you would use. This is also the approach used throughout MSDA and
MineSight® and most other mining programs.
• Cutoff Type II: In certain circumstances, a user may want a report with statistics for
multiple fields wherein each field has its own cutoff. This is the type we use in the
individual field specification as shown in Figure R-5. For example, if we say that the cu
cutoff = 0.3% and the pb cutoff = 0.4%, then mean cu would be the mean of cu above 0.3% cu,
and mean pb would be the mean of pb above 0.4% pb.
Note: A useful by-product of Cutoff Type II is that we can filter out values from a
database or text file which are missing or too low to be useful. For example, suppose
we are reading a text file wherein all fields have many zero values, representing
uninterpolated blocks that we want to exclude from our statistics. In such a case, we
can just enter cutoff values such as 0.001 to exclude the zeros from the statistics. Note
that this is very different from setting cutoff values of 0.001 for fields in the filter
definition, which would reject the entire record whenever one field was below cutoff.
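The distinction between the two cutoff types can be sketched with pandas (synthetic values and hypothetical cutoffs, not MSDA code):

```python
import pandas as pd

df = pd.DataFrame({"cu": [0.10, 0.40, 0.55, 0.02],
                   "pb": [0.50, 0.05, 0.45, 0.60]})

# Cutoff Type I: one filter accepts or rejects the entire record.
type1 = df[df["cu"] >= 0.35]
print(type1.mean())            # stats for both metals on cu-ore records only

# Cutoff Type II: each field keeps its own cutoff independently.
mean_cu = df.loc[df["cu"] >= 0.3, "cu"].mean()   # mean cu above 0.3% cu
mean_pb = df.loc[df["pb"] >= 0.4, "pb"].mean()   # mean pb above 0.4% pb
print(mean_cu, mean_pb)
```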
When defining a list of fields, please note that all of these fields will be included in the
report matrix file (.mrb) created by Report Builder. However, during the Report Viewer
stage, one can select any subset of the fields for inclusion in the final HTML file. Therefore,
it is usually prudent to include more fields rather than fewer. If you aren't sure whether or
not you will need a field, it's usually best to include it.
Defining Statistics
Selecting report statistics is a simple matter of checking the desired statistics from a list
of about twenty that are supported by Custom Reports (see Figure R-6). Certain statistics
are only relevant for bivariate reports, e.g., correlation coefficient (r), and these are
suppressed from the list when the user chooses the univariate report type. Some statistics
such as median and quartiles require that extended metadata be set in MSDA (*). These
statistics are suppressed from the list if they are not available.
(*) Note: Extended metadata includes results such as min. and max. of all fields. It is an
optional calculation performed elsewhere in MSDA.
Defining Filters
Filters refer to selection criteria such as rock type and cutoff grade. There are two ways
to define a filter:
• Basic: The user defines a list of categories, cutoffs, or bins on a specified field (see
Figures R-7 and R-8).
• Custom: The user enters an SQL statement to define a filter (see Figure R-9).
The basic filter is the preferred option wherever possible, since it is easier to work with
and the processing time is much faster. In this case, you choose a field on which to filter,
such as rock type, then list the values, cutoffs, or bins. For example, to filter on rock types
3, 6, 10, 11, 12, 13, 14, and 21, you would choose the List option and enter 3,6,10:14,21. If you
wished to use cu cutoff grades of 0.1, 0.2, 0.3, and 0.4, you would choose Cutoffs and enter
0.1,0.2,0.3,0.4. If you chose Bins rather than Cutoffs, the previous example would
report bins 0.1 to 0.2, 0.2 to 0.3, and 0.3 to 0.4.
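A small Python sketch of how such List and Cutoffs/Bins specifications might be expanded (illustrative only; this is not MSDA's actual parser):

```python
def expand_list(spec):
    """Expand a basic List filter such as '3,6,10:14,21' into values."""
    out = []
    for tok in spec.split(","):
        if ":" in tok:
            lo, hi = map(int, tok.split(":"))
            out.extend(range(lo, hi + 1))
        else:
            out.append(int(tok))
    return out

def bins_from_cutoffs(cutoffs):
    """Cutoffs 0.1,0.2,0.3,0.4 as Bins become 0.1-0.2, 0.2-0.3, 0.3-0.4."""
    return list(zip(cutoffs[:-1], cutoffs[1:]))

print(expand_list("3,6,10:14,21"))          # [3, 6, 10, 11, 12, 13, 14, 21]
print(bins_from_cutoffs([0.1, 0.2, 0.3, 0.4]))
```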
Figure R-10
Figure R-11
Note regarding Total and All: Custom Reports automatically adds two items to each
filter dimension, Total and All. The former is the total (union) of all specified filter items,
whereas the latter selects all records regardless of the value of the filter. For instance, if we
ask for rock types 1, 2, and 3, then Total would capture all records from rock types 1 + 2
+ 3, whereas All would capture all records regardless of rock type (including rock type 4,
5, etc., and even records with missing rock types). In other words, All basically collapses
the dimension. If you have a table of cutoff by rock type, then the row for rock type = All
would reflect data filtered on cutoff only.
The second tab in the HTML Report dialog is the Options tab. This tab allows one to
place headers, footers, etc., on the report. Headers may contain special fields such as [date]
and [page], which are replaced by the appropriate values when the report is created. This
tab is basically very straightforward.
The final tab is the Style tab, as shown in Figure R-12. To set the style of a particular
report component, such as Title, Cell Value, Row Label, Column Label, Footnote, etc.,
simply select the component in the combo box at the top of the page, then use the font,
color, and justification buttons to edit it. Standard Microsoft® Windows color and font
dialogs are used. When you have made the style the way you like it (a sample is shown),
click on the Set button beside the report component at the top of the dialog.
Figure R-12
Figure R-13
Figure R-14
Chart Mode
If you open the report file in Chart Mode, you are presented with a dialog similar to
that shown in Figure R-15. One can rapidly display charts using different combinations
of dimensions in the upper left corner. Each dimension in the report can be used in one of
four ways:
• None: The dimension is ignored. Valid for filter fields only.
• Points: The values of the dimension are laid out across the x-axis. In Figure R-15, we
lay out Elevation by points. Only one dimension can be assigned to Points.
• Series: Each value of the dimension forms a series on the chart, i.e., a curve or set of bars
or slices of pies, etc., depending on the chart type. In Figure R-15, we assigned Rock
Type by series. Note that the Legend shows each of the Rock Types (1, 2, 3, 4, 5). Only
one dimension can be assigned to Series.
• Chart: A specified value of the dimension is used for the chart. In Figure R-15, we
assigned Field = Copper, Statistic = Mean, and Ore Type = Ore to Chart. This means
that all of the chart statistics refer exclusively to the mean grade of copper ore.
As with all MSDA charts, significant options are available to dress up the style, and style
templates may be prepared, saved, and restored. For report charts, it is usually interesting
to explore different chart gallery types such as scatter, curve, bar, pie, and so on.
Figure R-15
Figure V-1
Figure V-2 shows Variogram 3D Manager in Rose View mode. The following key
features can be seen in this view:
Figure V-2
Figure V-3
When working with the Standard View, it is important to remember that it has all of the
properties and benefits of any other MSDA chart, e.g., there is extensive right-click support
to modify the appearance of the chart (colors, fonts, titles, etc.), add a legend box, copy to
bitmap, print, etc. In addition, if you hover over any lag point with your cursor, a box will
pop up showing the lag properties, e.g., lag distance, number of pairs, value, etc.
Figures V-4 through V-6 show the Standard View as it appears with a variety of settings
illustrating some of the features discussed above.
Figure V-4
Figure V-5
Figure V-6
Figure V-7
The current plane control for the Rose View is defined in the dialog shown in Figure
V-8. It is possible to define the current plane via three rotation angles in any of the
supported conventions (GSLIB, GSLIB-MS, MEDS, Sage), or by selecting one of the
principal directions (EN, EZ, NZ). One may assign a window by checking Use window
of and assigning an angle, e.g., in Figure V-8 we are using a window of 5 degrees. At any
time, one may also rotate the plane by setting a rotation angle and an axis about which to
rotate, then clicking Rotate, e.g., one could rotate the current plane by 10 degrees about
the Easting axis (E). The icon shows the orientation of the current plane, e.g., in Figure
V-8 the current plane is horizontal (Z is shown coming out of the view towards the user).
Figure V-8
When working with the Rose View, it is important to remember that it has all of the
properties and benefits of any other MSDA charts, e.g., there is extensive right-click
support to modify the appearance of the chart (colors, fonts, titles, etc.), add a legend box,
copy to bitmap, print, etc.
Figures V-9 through V-12 show the Rose View as it appears with a variety of settings,
illustrating some of the features discussed above.
Figure V-9
Figure V-10
Figure V-11
Figure V-12
When the auto-fit parameters have been entered, the next step is to initialize the
variogram model: the auto-fit needs a valid model with which to start. The easiest way to
do this is as follows:
• On the Model tab, Structures sub-tabs #1, #2, #3, check the structures you want, and
the model type for each structure (see Figure V-16). E.g., you may say that you want
two structures; the first is spherical and the second is exponential.
• You now need to enter all of the ranges and sills to define a valid starting model.
You could enter each of these by hand, but in practice one generally uses one of the
following methods:
  o Select Initialize Model from the 3D Model menu to set all of the ranges, sills, and
    rotations to 0.0.
  o After running auto-fit once, the parameters will be contained within the dialogs
    shown in Figures V-15 and V-16. It is very common to keep these parameters
    and re-run auto-fit again, perhaps with a smaller tolerance, etc., in order to fine-tune
    the best fit model. In other words, your initial model for run #2 is the best fit
    result from run #1.
• Var3D lets you pin (fix) any of the parameters by checking the adjacent checkbox
(see Figures V-15 and V-16). E.g., one could set the nugget to 0.1, check the adjacent
checkbox, then run auto-fit. The result would be the best fit 3-D variogram model,
subject to nugget = 0.1.
The final step is to simply run the auto-fit program. This is done by selecting Auto-Fit
from the 3D Model menu. As the auto-fit progresses, the progress, current RMS Error, and
number of iterations are shown at the bottom of the screen. A typical auto-fit takes about
10 seconds, though this depends on many variables and could take longer.
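For readers who want a feel for what such an auto-fit does, here is a minimal sketch using scipy's curve_fit to fit a single spherical structure to hypothetical experimental lags, with the nugget pinned as described above (an illustration, not MSDA's actual fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, rng):
    """Single-structure spherical variogram model."""
    h = np.asarray(h, float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, nugget + sill)

# Hypothetical experimental lags, with the nugget pinned to 0.1,
# mimicking fixing a parameter before running auto-fit.
lags = np.array([10, 20, 40, 60, 80, 120, 160], float)
gamma = np.array([0.28, 0.45, 0.70, 0.85, 0.95, 1.02, 1.05])
pinned_nugget = 0.1
fit = lambda h, sill, rng: spherical(h, pinned_nugget, sill, rng)
(p_sill, p_range), _ = curve_fit(fit, lags, gamma, p0=[1.0, 100.0])
rms = np.sqrt(np.mean((fit(lags, p_sill, p_range) - gamma) ** 2))
print(p_sill, p_range, rms)   # best fit subject to nugget = 0.1
```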
Examining and Editing the 3-D Variogram Model Parameters
When the auto-fit program finishes, the best fit model parameters are stored in the
dialog tabs shown in Figures V-15 and V-16. This model is also shown on the Standard
View (i.e., the intersection of the model with the current direction) and the Rose View (i.e., the
intersection of the model with the current plane). Users are free to change any of the parameters
by simply keying in a new value. The model will be updated in the Standard and Rose
Views as soon as you tab out of the field.
Figure V-17
Figure V-18
MineSight® Data Analyst (MSDA) Practical Examples
(23rd Annual Seminar)
MineSight® Data Analyst (MSDA) is a very dynamic tool created to help MineSight®
users evaluate resources. This tool, while easy to use, allows for a thorough analysis of
the available data in a very quick and efficient manner. This workshop will show some
examples of how this new statistics package can be used to help with some of the most
common requirements in a geostatistical study.
MetaData
The first step when approaching any geostatistical analysis is familiarization with
the dataset. At this point, we usually need to get general information about the dataset,
as a whole, as well as more detailed information with relation to each of the available
individual items. This kind of information is often referred to as MetaData. MSDA offers a
convenient tool, which gives a general description of our dataset.
After connecting to the Data Source, select the option Data | Metadata | Add
Extended. This option allows the program to read the data source and obtain basic
information about it. This information is then viewable using the option Data |
Metadata | Show.
If you are working with MineSight® data, you will find general information about
the project and the model or drillhole file under review. In addition, you will also find
a general description of the items available in your dataset, which includes such useful
information as the number of valid records for each item, the actual minimum and
maximum for each case, and the number of distinct values for the case of integer values.
Using the results for our sample dataset and looking at the MetaData information, we
can easily determine that the maximum value for CU is 7.86%, and that the geology items ROCK,
ALT, and MIN are defined as integers with four or five distinct values each.
The MetaData information for each item can also be accessed via the Info button located
on the MSDA panels where an item must be specified.
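A minimal pandas sketch of this kind of extended-metadata summary, using synthetic data in place of a MineSight® file:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "CU": np.random.default_rng(0).lognormal(-1.0, 0.8, 1000).clip(max=7.86),
    "ROCK": np.random.default_rng(1).integers(1, 6, 1000),
})

# Extended-metadata-style summary: valid records, min/max, and distinct
# values (useful for integer code items such as ROCK).
meta = pd.DataFrame({
    "valid": df.notna().sum(),
    "min": df.min(),
    "max": df.max(),
    "distinct": df.nunique(),
})
print(meta)
```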
In addition, we can display actual data by using the option Data | Explorer.
Univariate Analysis
Following the first assessment of the dataset, we would likely continue with a univariate
description of our data. For this analysis we can use tools such as Histograms, Cumulative
Probability Plots, and Custom Reports.
We could, for instance, use Histograms to determine the distribution of the Geology
items as well as the grade items.
We can easily display the cumulative curve and/or the grade tonnage curves for each of
the Histograms created. This is done from the View menu of the Histogram window.
It is also possible to apply a weighting item or factor to the histogram primary item
when appropriate. In the case of drillhole or composite data, we could weight our grade
item by the length item, so the histogram is built based on the interval lengths instead of
just the number of samples. In the same way, for the case of a model file, we could weight
our grade item by a density item and a volume factor. Our results will then be based on
tonnage values rather than on just the number of blocks.
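A short numpy sketch of length-weighted versus count-based histogram bins (synthetic composites; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
cu = rng.lognormal(-1.0, 0.7, 200)      # composite grades
length = rng.uniform(1.0, 15.0, 200)    # composite lengths as weights

# Length-weighted histogram: each interval contributes its length,
# not just a count of one sample.
weighted, edges = np.histogram(cu, bins=10, weights=length)
unweighted, _ = np.histogram(cu, bins=edges)
print(np.round(weighted, 1))
print(unweighted)
```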
Furthermore, we can also try analyzing the distribution of CU using a filter item. We
can define a filter item and its corresponding values from the Filter tab included in the
Histogram Parameters window. In this case, several histograms, one for each filter
value, will be created, each with a distinct name.
It is also possible to show several histograms at the same time for easy comparison of
different populations. This last feature is available from the External Histograms menu
inside the Histogram window.
On the other hand, in addition to analyzing our data through histograms, we could
also visualize information in the form of reports. The ability to create custom reports is a
very powerful tool in MSDA, allowing us to display and arrange the necessary information
in any way needed. These reports are created with the Custom Report option from the
Tools menu. With this tool, we can calculate a variety of different statistics in relation to
our items and also apply weighting factors and various filters to our data.
Using some of the options available for filters in Custom Reports, we could classify
our data by geology codes, cutoffs, or any other classification based on any of the items
available in our data source file. In addition, when using the Field Expression option
associated with fields, we have the option to create a calculated field 'on the fly' to be
included in our reports. The following are examples of typical custom reports and their
special setting options.
Report showing values of Mean Cu for each domain defined by ROCK and ALT
Report showing main statistics for different grade items and different ore types in the block model.
Correspondingly, the same information contained in the reports can also be displayed
as graphs using the Chart option associated with the reports. This kind of display can
help us better understand and visualize our data.
Finally, we may also want to check the Cumulative Probability Plots for the main grade
values to see how close our populations are to a Normal or LogNormal distribution.
From the Settings menu in the Cumulative Probability Plot window, we can
choose between the standard Normal and the Lognormal display. In addition, from
this menu we can also control the extent of the chart tails by choosing any of the three
percentage ranges available. Changing the range display will help us better identify and
describe outlier values.
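As a rough stand-in for this check, one can compare normal probability fits on raw and log-transformed values, e.g., with scipy (synthetic data; not MSDA code):

```python
import numpy as np
from scipy import stats

cu = np.random.default_rng(9).lognormal(-1.0, 0.6, 500)

# Probability-plot correlation on raw values vs. log-transformed values;
# the higher r indicates the better straight-line fit (here, lognormal).
_, (_, _, r_norm) = stats.probplot(cu)
_, (_, _, r_log) = stats.probplot(np.log(cu))
print(f"normal r = {r_norm:.3f}, lognormal r = {r_log:.3f}")
```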
Bivariate Analysis
Should we need to perform a bivariate analysis, we can use such tools as Scatter Plots
and Custom Reports and Graphs.
When working with Scatter Plots, there are some additional features available to help
with the interpretation of results. From the View menu in the ScatterPlot window,
there are three options available. The Best Fit Line option will show the straight line
that best fits our data set; slope and intercept values are shown in the summary statistics
window. The X=Y option helps when comparing two fields which are assumed equal or
nearly so,such as a grade item in a block model interpolated by two different methods.
Finally, the Conditional Expectation Line option shows a line representing a moving
window average of Y with respect to a predefined range of X. To set up the width of this
moving window on the X-axis, select one of the options from the Setting | Conditional
Expectation Window menu. The values under this menu are measured as a percent of the
length of the X-axis.
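A moving-window average of this kind is straightforward to sketch. The function below is our own illustration of the concept, not MSDA code; the window width is expressed as a percent of the X-axis length, and the default of 10 percent is purely illustrative:

    import numpy as np

    def conditional_expectation_line(x, y, window_pct=10.0, n_points=50):
        # Moving-window average of y along x; the window width is given
        # as a percent of the x-axis length (illustrative implementation).
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        half_width = (x.max() - x.min()) * window_pct / 100.0 / 2.0
        centers = np.linspace(x.min(), x.max(), n_points)
        means = np.full(n_points, np.nan)
        for i, c in enumerate(centers):
            in_window = np.abs(x - c) <= half_width
            if in_window.any():
                means[i] = y[in_window].mean()
        return centers, means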
If display of this information in a report is preferred, create a Bivariate Report. This
type of report includes additional statistical measures, such as the Coefficient of
Variation or the Best Fit Line parameters.
To create this kind of report, we need to specify a slightly different setup to account for
the different calculations. The main difference between Univariate and Bivariate
Reports is that Bivariate Reports need two tabs, or dimensions, for Field definitions.
With two dimensions for Fields, MSDA can compare one group of fields against the
other and calculate all the special bivariate parameters.
Report showing bivariate statistics for CU, MO, AS, and PB.
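The core quantities in such a report (correlation, best-fit line, coefficient of variation) can be computed directly; the paired grade values below are illustrative only:

    import numpy as np

    # Hypothetical paired grades (e.g., CU vs. MO)
    x = np.array([0.40, 0.55, 0.80, 1.10, 0.95, 1.30, 0.60, 0.75])
    y = np.array([0.012, 0.015, 0.026, 0.035, 0.030, 0.041, 0.018, 0.024])

    corr = np.corrcoef(x, y)[0, 1]            # correlation coefficient
    slope, intercept = np.polyfit(x, y, 1)    # best-fit line parameters
    cv_x = np.std(x, ddof=1) / np.mean(x)     # coefficient of variation of X
    print(f"r={corr:.3f}  slope={slope:.4f}  intercept={intercept:.4f}  CV(x)={cv_x:.3f}")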
Information from the previous report can be displayed as a graph by selecting the Chart
option when opening the report. An example is shown below.
Variograms
Finally, for our spatial description we will use directional variograms and the MSDA
tools for modeling in 3-D space.
We create a series of variograms for different directions by entering the appropriate
parameters in the Variogram panels.
Each variogram can then be modeled with up to three nested structures, either manually
by setting up the parameters or by using the Auto-fit tool.
After spending time with the individual variograms and determining the range, sill, and
nugget for each direction, we can try modeling all the directions in 3-D space. For this step,
use the Variogram 3-D Modeling option available under the Tools menu.
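For reference, a nugget plus up to three nested spherical structures takes the following form; the sketch below is a generic textbook formulation, and the parameter values are illustrative rather than fitted to any data set:

    import numpy as np

    def spherical(h, c, a):
        # Spherical structure with sill contribution c and range a
        h = np.asarray(h, dtype=float)
        return np.where(h < a, c * (1.5 * h / a - 0.5 * (h / a) ** 3), c)

    def nested_variogram(h, nugget, structures):
        # Nugget plus nested structures, each a (sill contribution, range)
        # pair; up to three pairs mirrors the modeling described above.
        g = np.full_like(np.asarray(h, dtype=float), nugget)
        for c, a in structures:
            g = g + spherical(h, c, a)
        return g

    # Example: relative nugget of 0.2 with two spherical structures
    lags = np.arange(0.0, 200.0, 10.0)
    gamma = nested_variogram(lags, nugget=0.2, structures=[(0.5, 60.0), (0.3, 150.0)])
    print(gamma)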
From the Standard tab of the Variogram 3-D Manager window, several variograms
and the associated models can be displayed at the same time. We can choose between
displaying all the variograms available in the list and displaying a selection of
variograms based on the filter options available at the bottom of that window.
A Rose diagram can also be displayed showing the different ranges intersecting a
particular plane. This display is obtained from the Rose tab of the Variogram 3-D
Manager window.
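In two dimensions, the quantity a rose diagram displays, the model range along each azimuth in the chosen plane, reduces to the radius of the anisotropy ellipse. A minimal sketch (all angles and ranges hypothetical):

    import numpy as np

    def directional_range(azimuth_deg, major_range, minor_range, major_azimuth_deg):
        # Radius of the anisotropy ellipse along a given azimuth (2-D case)
        theta = np.radians(azimuth_deg - major_azimuth_deg)
        return (major_range * minor_range) / np.sqrt(
            (minor_range * np.cos(theta)) ** 2 + (major_range * np.sin(theta)) ** 2
        )

    # Ranges every 15 degrees for a 150 m x 60 m ellipse oriented at azimuth 30
    azimuths = np.arange(0, 360, 15)
    ranges = [directional_range(a, 150.0, 60.0, 30.0) for a in azimuths]
    print(list(zip(azimuths.tolist(), np.round(ranges, 1).tolist())))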
Finally, we can use the Auto-fit tool to determine a 3-D model that fits our data. This
option is available under the 3-D Model menu. When using the Auto-fit function, we can
choose to set some of the parameters of the final model by toggling on the corresponding
windows in the upper right corner of the Manager window. The parameters we can control
when running the Auto-fit process are the number of nested structures, model type,
nugget, sill, rotation angles, and ranges for the main axes of the model.
The contours of the model, as well as the error with respect to the experimental data, can
also be displayed in the Rose tab. This display option is controlled from the Setting | Rose
Chart menu.
To use the parameters obtained from this modeling process in MineSight® 3-D, we
have the option of exporting the 3-D variogram model to an ASCII file using File |
Export Variogram model. This ASCII file can then be imported into MineSight® 3-D for
visualization purposes or it can be used with the interpolation procedures.
GRADE VARIABILITY INDEX FOR THE ESTIMATES
Abdullah Arik
Mintec, Inc., USA
ABSTRACT
In order to achieve reliable resource estimates, a good model of a mineral deposit must
be built to represent the deposit as close to reality as possible. Many factors affect the
outcome of the modeling work. The methodology used and the subsequent selection of
data to estimate the blocks are some of these factors. When estimating the grades of
the blocks in the model, the definition of the search strategy becomes one of the most
consequential steps that affect the outcome of the interpolation results. It is therefore
useful to quantify in some manner the reliability of the resulting block grade estimates
from the interpolation. To achieve this goal, this paper introduces the concept of a
"Grade Variability Index" as a measure of how well the nearby composites match the
calculated block value. This index may thus help quantify the effects of varying search
parameters in the linear estimation methods. It can be a useful tool for assessing the
inherent variability in the estimates and for identifying the areas with higher variability
in the data used to estimate the blocks in the deposit.
INTRODUCTION
The need for accurate ore resource estimates has always been important. With the
increasingly large investments required to open and operate mines today, this need
becomes even more vital. Therefore a good and reliable model of a mineral deposit must
be built to represent the deposit as close to reality as possible. Obviously the knowledge
of the geology of the deposit and its mineralization are helpful for building a
representative geologic model. With the knowledge from geostatistical analysis of the
drill hole data along with this geologic model, one can then construct a grade model of
the deposit that is fairly reasonable.
One area that can have a significant effect on the outcome of the resource estimates is
the methodology selected for the grade assignment for the block model. This includes
not only the method used but also its parameters that define what data should be
included for each block. In fact, the definition of the search strategy is one of the most
consequential steps that affect the outcome of the grade interpolation results. Therefore
anything that can help resource practitioners select the appropriate search strategy will
help them calculate the resources more accurately.
The objectives of this paper are twofold. The first objective is to propose a parameter
called "Grade Variability Index" for the estimated grades, to assess which interpolation
strategy introduces more or less grade mixture into the estimates. The second objective
is to demonstrate the application of this parameter to a copper deposit, and its use to
validate the refinement of the kriging interpolation parameters used to estimate the
resources.
GRADE VARIABILITY INDEX FOR THE ESTIMATE
In estimating the grade of a block for a mineral deposit, we use a set of samples in and
around the block to be estimated. In the case of linear estimators, such as ordinary
kriging, we have the following form:

Z* = Σ wi zi,   i = 1, ..., n    (1)

where Z* is the estimate of the grade of a block, zi refers to the sample data values, wi
is the corresponding weight assigned to zi, and n is the number of samples. Depending
on the method of estimation and the samples used, our estimate for the blocks may
vary. It will then be useful to quantify in some manner the degree of variability in our
estimate also.
One suggestion here is to compute the inherent variability of the samples used for the
"ore" blocks, the blocks that are equal to or greater than a specific cutoff. If a block is
considered "ore," then the proportion of the samples used for the block below the cutoff
should give us some idea how much of the estimate is from "waste" samples, thus
"diluted." So, when the block grade is equal to or greater than a specific cutoff, we will
determine the sum of weighted differences, or the "variability," between the block
estimate and the sample values below the cutoff (zc). We will call this the Weighted
Differences of Samples from the estimate (WDS):

WDS = Σ wj (Z* − zj),   j = 1, ..., nb   (Z* ≥ zc and zj < zc)    (2)

where nb is the number of data values below zc, zj are the data values below zc, and wj
are the weights corresponding to these data.
We will compute the Grade Variability Index (GVI) for the estimate by dividing the
WDS by the estimate itself (Z*). The GVI is expressed as a percent of the estimate and
is calculated as follows:

GVI = (WDS / √n) / Z* × 100,   n > 1    (3)

If all the samples used for the block are "ore" (zi ≥ zc), then there is no "dilution" and
GVI = 0.
The Grade Variability Index (GVI) can be computed at zero cutoff with all the samples
used for the estimate. In areas with similar grade estimates, relatively higher GVI
values indicate the variability of the samples (more grade mixtures) used to calculate
the block values.
AN EXAMPLE OF GRADE VARIABILITY INDEX
A single block of size 5x5m, shown in Figure 1, was interpolated using ordinary kriging
in three different cases. In the first case, a spherical search with a 15m search radius
was used. In the second case, an ellipsoidal search was used. The search distance was
15m in the major axis direction and 12m in the minor axis direction. The major axis of
the ellipsoid had an azimuth of 150° as shown in the figure. The spherical variogram
models used in these kriging runs had the same ranges as the search distances used in
each run. Therefore, the first model was isotropic and the second model was
anisotropic. A 20% nugget effect was used for both variograms. The number of data
used was limited to the nearest 6 samples for simplicity.
Table 1 displays the data values used and the corresponding weights obtained from the
kriging of the first case (spherical search and isotropic variogram model). The resulting
block grade estimate from this kriging run is 1.052. The Grade Variability Index (GVI)
computed at a 1.0 cutoff value is 0.803. There are two samples below the cutoff used,
samples no. 3 and no. 6; these "waste" samples are highlighted in the table.
Similarly, Table 2 displays the data values used and the corresponding weights obtained
from the kriging of the second case (ellipsoidal search and anisotropic variogram
model). The resulting block grade from this kriging run is 1.058. The Grade Variability
Index (GVI) computed at a 1.0 cutoff value is 0.766. Again, the two samples below the
cutoff are samples no. 3 and no. 6, and these "waste" samples are highlighted.
The six samples selected in these cases were identical. This was done on purpose to
make the comparison simpler and clearer. Thus, the only variable in the calculation of
the Grade Variability Index (GVI) is the kriging weights assigned to the samples. The
spherical search case resulted in a GVI value of 0.803, whereas the ellipsoidal search
case produced 0.766. This is an improvement of about 5% in GVI, obtained by simply
trying out a different search and variogram model in kriging. There was also a slight
increase in the estimate, from 1.052 to 1.058.
To emphasize the effect of the search plan on the estimates, another case was tried. In
this third case, all parameters were kept the same as in the second case, except the
azimuth used for the mineralization direction. The major axis of the ellipsoid was
oriented at 60°, along with the variogram rotation angle. In other words, this ellipsoid
was perpendicular to the ellipsoid outline shown in Figure 1. The outline of this
ellipsoid is not shown in the figure, but the sample data values used and the
corresponding weights obtained from the kriging are reported in Table 3.
Table 3: Kriging results – Third case (ellipsoidal search, 60°).
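Equations (1) through (3) translate directly into code. The sketch below is our reading of the formulas above; the weights and sample grades are illustrative and do not reproduce the paper's Table 1:

    import numpy as np

    def grade_variability_index(weights, samples, cutoff):
        # Grade Variability Index (GVI) per equations (1)-(3): a weighted
        # measure of how much below-cutoff ('waste') sample grade enters
        # an 'ore' block estimate.
        w = np.asarray(weights, dtype=float)
        z = np.asarray(samples, dtype=float)
        n = len(z)
        z_est = float(np.sum(w * z))            # eq. (1): linear estimate Z*
        if z_est < cutoff or n <= 1:            # defined for ore blocks, n > 1
            return z_est, 0.0
        below = z < cutoff                      # the 'waste' samples (zj < zc)
        wds = float(np.sum(w[below] * (z_est - z[below])))  # eq. (2)
        gvi = (wds / np.sqrt(n)) / z_est * 100.0            # eq. (3), percent
        return z_est, gvi

    # Illustrative values only (not the paper's Table 1 data)
    weights = [0.25, 0.20, 0.15, 0.18, 0.12, 0.10]
    samples = [1.40, 1.25, 0.60, 1.30, 1.10, 0.75]
    print(grade_variability_index(weights, samples, cutoff=1.0))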
Conditional Simulation Overview and Applications
simulation curve is preferred for studying the dispersion characteristics of these reserves,
remembering that the real curve is known only at the experimental data points.
Figure 1. Real, simulated, and estimated profiles (thick solid line = reality, normal solid
line = conditional simulation, dashed line = kriging, o = conditioning data)
Planned Updates
The capabilities that are planned to be added to the simulation package are listed below:
• Despiking
• Relative elevations for use in both SGS and SIS
• Direct model access in SIS
• Local means or probabilities for the SIS simple kriging
• SGS with a co-located variable
We encourage your feedback on this subject, which allows Mintec, Inc. to assess any
problems clients may be facing and suggest useful solutions.