
AI Inversion Course 4


ACOUSTIC IMPEDANCE

INVERSION
Introduction to Inversion

Forward Modeling vs. Inverse Modeling (Inversion)

            Forward Modeling        Inverse Modeling (Inversion)
Input:      Earth Model             Seismic Response
Process:    Modeling Algorithm      Inversion Algorithm
Output:     Seismic Response        Earth Model

2
•Inversion is the reverse of the forward modeling process.

•Inversion can only be as good as the forward modeling algorithm.

•Forward modeling is simpler. The current technology for this process is
not controversial. For any given earth model, there is only one seismic
response.

•Inversion is more complex: it is possible that some forward modeling
process will have no inverse. It is also possible that a given seismic
response will have many possible earth models.

•To understand inversion, we must first understand the forward model.

3
Seismic Inversion methods can be divided into the categories shown below:

4
5
•All inversion algorithms suffer from the “non-uniqueness” problem.

•This means that there is more than one possible geological model
consistent with the seismic data.

•The only way to decide between the possibilities is to use other
information, not present in the seismic data.

•This other information is usually provided in two ways:


•the initial guess model
•constraints on how far the final result may deviate from the
initial guess

•This means that the final result always depends on the “other
information” as well as the seismic data.

6
The Convolutional Model
The seismic trace is modeled as follows:

seismic = wavelet * reflectivity + noise

where * means convolution.

The assumptions are:


• Post-stack data – the traces are zero-offset
• There are no multiples
• There are no AVO effects
• The noise is random, white noise, uncorrelated with the
seismic. There is no coherent noise.
• The wavelet is constant – not varying with time.
• The seismic data are already migrated – each seismic trace
depends only on the reflectivity sequence directly below the
seismic trace location.
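For illustration, here is a minimal sketch of this convolutional model in Python/NumPy. The Ricker wavelet, the three-layer impedance values and the noise level are hypothetical choices made only for the example:

```python
import numpy as np

def ricker(f0, dt, length=0.128):
    """Zero-phase Ricker wavelet with dominant frequency f0 (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1.0 - 2.0 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

dt = 0.002                                     # 2 ms sample interval
# Hypothetical blocky acoustic impedance (density * velocity)
impedance = np.concatenate([np.full(100, 6000.0),
                            np.full(100, 8000.0),
                            np.full(100, 7000.0)])

# Reflection coefficients from the impedance contrasts
r = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# seismic = wavelet * reflectivity + noise
wavelet = ricker(30.0, dt)
noise = 0.01 * np.random.randn(r.size)
seismic = np.convolve(r, wavelet, mode="same") + noise
```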
7
[Figure: Wavelet, Reflectivity, and the resulting Seismic Trace]

The seismic trace is modeled by the following equation:


seismic = wavelet * reflectivity + noise
8
In the frequency domain,
convolution is the product of the
reflectivity spectrum and the
wavelet spectrum.

Generally, this means that the
seismic trace has lost both the
high frequency and the low
frequency portions of the
spectrum.

Inversion attempts to recover
these lost regions. The method
of filling in the missing data
depends on the inversion
algorithm.

9
The reflection coefficient for the ith interface is defined as:

r_i = (ρ_(i+1) V_(i+1) - ρ_i V_i) / (ρ_(i+1) V_(i+1) + ρ_i V_i)

where:
ρ_i = density of layer i
V_i = velocity of layer i
This equation is true only for vertical incidence rays. This means that
AVO effects are assumed to be negligible.

If the seismic data does contain an AVO anomaly, the inversion
process will (erroneously) attribute the entire effect to changes in
density and velocity, rather than to changes in Poisson’s Ratio.

Since the reflection coefficient depends only on the product of
density and velocity, and not on either individually, it follows that
post-stack inversion is incapable of solving separately for density or
velocity. Only impedance changes can be measured by inversion.
10
Creating the Density Log

The forward model requires both velocity and density. Often, density is
not available. STRATA contains the option to calculate the density using
Gardner’s equation:

ρ = 0.23 * V^0.25

When loading logs into STRATA, the coefficients of this equation may
be modified using the menu:

ρ = a * V^b

The final inversion result from STRATA is actually a series of
impedance traces. To get a velocity result, STRATA uses the
generalized Gardner’s equation:

log(ρ) = log(a) + b * log(V)


The coefficients, a and b, are derived by a least-squares fit using all the
wells in the project. Note that the equation is linearized by taking the
logarithm of both sides.
11
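For illustration, a sketch of how the coefficients might be estimated with a least-squares fit of the linearized equation in NumPy. The well samples below are synthetic and the function is a hypothetical stand-in, not the STRATA implementation:

```python
import numpy as np

def fit_gardner(velocity, density):
    """Least-squares fit of log(rho) = log(a) + b * log(V) over the well samples."""
    b, log_a = np.polyfit(np.log(velocity), np.log(density), 1)
    return np.exp(log_a), b

# Hypothetical well samples (velocity in m/s, density in g/cc)
velocity = np.array([2500.0, 3000.0, 3500.0, 4000.0])
density = 0.23 * velocity ** 0.25          # synthetic data obeying Gardner's equation

a, b = fit_gardner(velocity, density)      # recovers a ~ 0.23, b ~ 0.25
```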
If Gardner’s equation was used in the forward model, this procedure
will automatically use the same coefficients in the inverse model.

If a real density was used in the forward model, this procedure will use
the inverse formula shown above with a and b derived from the real
density and velocity.

Note that post-stack inversion is incapable of deciding whether a
particular impedance change is a change in velocity, or a change in
density, or both. This procedure assumes that the change is
distributed between the two.

12
INVERSION SCHEME

[Flowchart: amplitude-preserved PSTM seismic data (data quality, velocity analysis, partial angle stacks) and well logs (reliability, editing, calibration, modeling, fluid substitution analysis) feed the inversion methods, together with an accurate wavelet, well-calibrated results, and an understanding of the limitations and the expectations.]

13
Processing considerations:
Data should be carefully processed
such that:
• Amplitude is preserved
• Vertical resolution is improved
• Horizontal resolution is enhanced
• Noise is significantly attenuated

14
Amplitude Preservation :
“True Amplitude Recovery” (~ Relative
amplitude is preserved) using
Deterministic amplitude corrections:
• Spherical divergence
• Absorption
• Transmission loss

Compensating for poor surface conditions using:


• Surface consistent gain correction
• Surface consistent static correction

15
Improvement of vertical resolution:
s(t) = w(t) * r(t)   (time-domain convolutional model)
S(f) = W(f) . R(f)   (frequency-domain multiplication)

Deconvolution: removing the wavelet shape to
reveal the reflectivity series.
r(t) = s(t) * op(t), where op(t) is the inverse operator of w(t)
R(f) = S(f) . 1/W(f)
Wavelet Estimation
Wavelet Shaping
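For illustration, a minimal sketch of this spectral division in Python. A small stabilization (pre-whitening) term is added because dividing by near-zero values of W(f) amplifies noise; the parameter value is an arbitrary assumption:

```python
import numpy as np

def spectral_division(seismic, wavelet, prewhitening=0.01):
    """Estimate reflectivity as R(f) = S(f) * conj(W(f)) / (|W(f)|^2 + eps)."""
    n = len(seismic)
    S = np.fft.rfft(seismic, n)
    W = np.fft.rfft(wavelet, n)
    eps = prewhitening * np.max(np.abs(W)) ** 2   # stabilizes division near spectral notches
    R = S * np.conj(W) / (np.abs(W) ** 2 + eps)
    return np.fft.irfft(R, n)
```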

16
Improvement of horizontal
resolution :

Migration:
• relocating reflectors back to their “true” positions.
• focusing reflection amplitude back to where it belongs.

17
Noise Attenuation :
• Random noise is reduced by the stacking
process.
• Coherent noise, such as multiples, should be
eliminated using F-K filtering, Radon filtering, or inverse
velocity stacking.

18
Simplified
Pre-inversion
processing flow

19
Incorporating the low frequency
component:

The low frequency component can be
found from:
• a filtered sonic log
• seismic velocity analysis
• a geological model

20
21
Recursive Inversion

Recursive Inversion, also called Bandlimited Inversion, is the simplest
form of inversion.

Starting from the definition of the reflection coefficient:

r_i = (Z_(i+1) - Z_i) / (Z_(i+1) + Z_i)

the impedance of the (i+1)th layer can be determined from the ith layer:

Z_(i+1) = Z_i * (1 + r_i) / (1 - r_i)

Starting at the first layer, the impedance of each successive layer is
determined by recursively applying this formula:

Z_n = Z_1 * ∏_(i=1)^(n-1) (1 + r_i) / (1 - r_i)
22
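For illustration, a sketch of the recursion in Python, assuming the input samples really are reflection coefficients and the impedance of the first layer is known:

```python
import numpy as np

def recursive_inversion(reflectivity, z_first):
    """Recover impedance layer by layer: Z_(i+1) = Z_i * (1 + r_i) / (1 - r_i)."""
    impedance = np.empty(len(reflectivity) + 1)
    impedance[0] = z_first
    for i, r in enumerate(reflectivity):
        impedance[i + 1] = impedance[i] * (1.0 + r) / (1.0 - r)
    return impedance
```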
Applying recursive inversion under ideal conditions:

23
Issues in Recursive Inversion

Recursive inversion assumes that the samples in the seismic trace are actually
reflection coefficients. This means that it ignores the seismic wavelet.

It also assumes that the samples are scaled properly. Reflection coefficients
must be numbers between -1 and +1, while seismic samples may have any
amplitude. The recursive inversion formula assumes that the absolute scaling
is correct.

Since the equation is applied recursively from top to bottom of the trace, the effect
of errors is cumulative. This means that the error at the bottom may be much
greater than the error at the top.

The greatest effect of this cumulative error is in the trend, or low-frequency,
component of the answer. Normally, this trend is so poorly defined that it is
customary to remove the trend from the answer and replace it with the trend from
the initial guess model. This is the strategy adopted by STRATA.

A single parameter on the recursive inversion menu defines what counts as
low frequency.

The process is called bandlimited because the final impedance traces are defined
within the same frequency band as the input seismic data.
24
Model Based (Blocky) Inversion
Model based inversion follows from the convolutional model:

seismic = wavelet * reflectivity + noise

Assuming that:

• the seismic trace is known


• the wavelet is known
• the noise is uncorrelated and random

The reflectivity is defined as that sequence which “fits” the data best.
That is, if we can find a reflectivity which convolves with the wavelet to
give a good approximation to the seismic trace, we assume that this is
the right answer.

In practice, Model Based Inversion starts with an initial guess and


improves on it by a series of steps.
25
The seismic trace:

The wavelet:

The initial guess impedance:

26
Step 1: Block the initial guess impedance with a uniform block size.

Step 2: Form a synthetic trace by convolving the blocky impedance with the
known wavelet:

Step 3: Compare the synthetic trace with the real trace.

Step 4: Modify both the amplitudes and thickness of the blocks to improve the fit:

Repeat this process through a series of iterations.


27
Potential problems with Model Based Inversion

(1) Sensitive dependence on the wavelet:

Inversion using the correct wavelet:

Inversion using the wrong wavelet:

28
(2) Non-uniqueness.

For a given wavelet, all these results fit the trace about equally well:

29
Sparse Spike Inversion
Sparse Spike Inversion assumes that the actual reflectivity can be
thought of as a series of large spikes embedded in a background of
small spikes:

Sparse Spike Inversion assumes that only the large spikes are
meaningful. It finds the location of the large spikes by examining the
seismic trace.
30
Sparse Spike Inversion builds up the reflectivity
sequence one spike at a time. Spikes are added until
the trace is modeled accurately enough:

The amplitudes of the impedance blocks are determined using the
model-based inversion algorithm.
31
Sparse Spike Inversion

• Puts events only where the seismic demands.


• Attempts to produce the simplest possible model consistent with the
data.
• Often produces fewer events than are known to be geologically true.
• Less dependent on initial-guess model.

Model-Based Inversion

• Puts events where the initial guess model (user) demands.


• Produces the closest model to the initial guess, which is also
consistent with the seismic data.
• Can produce higher-resolution results than supported by seismic
alone.
• Subject to non-uniqueness. Dependent on initial-guess model.

32
The Initial Guess Model
The initial guess model for each trace consists of an impedance log, usually
derived by multiplying a real sonic log by a real density log. The impedance log
model must be measured in 2-way travel time. The original logs are measured
in depth. A critical step is depth-to-time conversion:

33
The depth-to-time conversion is made using a depth-time
table which maps each depth to the two-way travel time
from the datum (surface) to that depth and back:

34
The depth-time table is usually calculated from the sonic
log velocities using this equation:

t_i = 2 * Σ_(j=1)^(i) (d_j / V_j)

where: t_i = two-way time down to layer i
       d_j = thickness of layer j
       V_j = velocity of layer j

Note: The time to an event depends on all the velocities above that layer,
including the first velocity to the surface, V1. That velocity is unknown
and is usually approximated by extrapolating the first measured velocity
back to the surface:
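For illustration, a minimal sketch of the depth-time calculation in Python. The layer thicknesses and velocities are hypothetical, and the unlogged interval above the first sample is simply assigned the first measured velocity, as described above:

```python
import numpy as np

def depth_time_table(thickness, velocity):
    """Two-way time to the base of each layer: t_i = 2 * sum_j (d_j / V_j)."""
    return 2.0 * np.cumsum(thickness / velocity)

# Hypothetical layers; the first entry is the unlogged interval above the
# shallowest sonic sample, assigned the first measured velocity.
thickness = np.array([500.0, 10.0, 10.0, 10.0])        # metres
velocity = np.array([2000.0, 2000.0, 2500.0, 3000.0])  # m/s
times = depth_time_table(thickness, velocity)           # two-way time in seconds
```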

35
If the well is deviated, it must be corrected to vertical
and the correction made from KB to datum:

DM = Measured depth from KB


DV = Vertical depth from KB
DS = Vertical depth from datum
T = Two-way time from datum
36
The depth-time table calculated from the sonic log is
rarely sufficient to produce a model impedance which
ties the seismic data properly because:
• The seismic datum and log datum may be different.
• The average first layer velocity is not known.
• Errors in the sonic log velocities produce cumulative errors in the
calculated travel-times.
• The events on the seismic data may be mispositioned due to
migration errors.
• The seismic data may be subject to time stretch caused by
frequency-dependent absorption and short-period multiples.

To improve the depth-time table, two procedures are used:

• Apply check shot corrections.
• Apply manual log correlation to the seismic data.

Check Shot Corrections


A check shot table is a series of
measurements of actual 2-way time
for a set of depths:
37
The depth-time table calculated from the sonic log must
be modified to reflect the desired check shot times:

38
Check Shot Corrections
The interpolation of points on the drift curve uses one of three options:

Linear: Honors the points exactly with straight line segments between

Spline: Honors the points exactly with smooth curves between

Polynomial: Fits a smooth curve using least-squares optimization
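For illustration, a sketch of the three interpolation styles applied to a hypothetical set of drift-curve points using NumPy/SciPy; the exact curve-fitting choices in STRATA may differ:

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# Hypothetical drift-curve points: depth (m) vs. time correction (ms)
depth = np.array([500.0, 1000.0, 1500.0, 2000.0])
drift = np.array([2.0, 5.0, 4.0, 7.0])
z = np.linspace(500.0, 2000.0, 151)

linear = interp1d(depth, drift)(z)                  # honors points, straight segments
spline = CubicSpline(depth, drift)(z)               # honors points, smooth curve
poly = np.polyval(np.polyfit(depth, drift, 2), z)   # smooth least-squares fit
```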


39
Changing the depth-time table implies a
possible change in the original sonic log
velocities. There are three options in
STRATA:

(1) Change all the velocities in the log in


such a way that the new log will integrate
to exactly the desired times.

Note: This involves a ramped velocity


above the first measured depth to handle
the bulk time shift and to minimize the
effect of spurious reflections on the
synthetic.

This is called “Apply All Changes” in


STRATA.

40
(2) Change the velocities for layers
between the first and last check shot
depth only.

This means that no ramp is added


above the first measured depth.

The resulting log will integrate to the


desired times except for a bulk time
shift.

This is called “Apply Relative Changes”


in STRATA.

41
(3) Do not change the velocities in the
sonic log.

The resulting log will not integrate to


the desired times, but GEOVIEW and
STRATA will use the new depth-time
table.

This option has the effect of


maintaining the original reflection
coefficients for synthetic calculations.

This is called “Change Depth-Time


Table Only” in STRATA.

42
Depending on the interpolation option used, the sonic log
changes may be drastic:

Note: The time stretches in this example are unrealistically large.

43
Log Correlation

Log correlation is the process of applying a manual correction to the


depth-time curve to optimize the correlation between initial model and
seismic data.

Log correlation should be applied after the check shot correction, and is
ideally a small change.

Log correlation changes the depth-time curve in exactly the same way as
a check shot correction.

Log correlation consists of selecting events on the synthetic trace and


the corresponding events on the real trace.

Since the synthetic is used, the choice of wavelet may be crucial.

44
The log correlation
window looks like this:

First, extract a new wavelet. Since


the log has not yet been correlated,
use the Statistical wavelet
extraction to extract a zero-phase
wavelet with the same amplitude
spectrum as the seismic.

45
The extracted wavelet will
look like this:

46
Now correlate the points on the log as shown here, and
then click on the Stretch button at the bottom of the log
correlation window.

47
The default parameters use
Spline interpolation between
points on the drift curve.

48
Change the Type of Interpolation to Linear and click on
Apply. Note the change in the shape of the drift curve.

49
Now change the Type of Interpolation to Polynomial with
the parameters shown below and click on Apply.

50
Change the menu as shown below and click on Apply. Note
that the option to Apply all changes adds a ramp to the top
of the sonic log.

51
Finally, change the menu as
shown below and click on
Apply. Then click on Ok on
the Check Shot window to
accept these parameters.

52
The log correlation
window now looks
like this. Note that
we have achieved
an 85% correlation.
To see the
parameters for this
calculation, click
on the Parameters
button.

53
The Initial Guess Model
Interpolating the Log

Adding a single log to the model creates a uniform horizontal model:

54
Picking a single event guides the interpolation of the log:

Note: A single picked event simply produces a bulk time shift on the log
for each trace. This is equivalent to applying a check shot correction
with a single point.

55
Picking a different event causes a different shift to be applied:

56
Picking two or more events is equivalent to applying a
variable check-shot at each trace. The impedances between the two
picked events are stretched / squeezed.

57
A pinch-out is handled by forcing the two picked events to converge:

58
Handling Faults

STRATA currently does not handle faults. However, the


effect may be simulated by picking the same event on both sides of the
fault, and picking the fault plane as well:

59
Interpolating the Log

When more than one well is entered into the model,


the results are interpolated using inverse-distance weighting:

60
Assume that there are two input logs, L1 and L2. We
wish to calculate the output log, Lout.

This will be a linear combination of the two input logs:

Lout = w1*L1 + w2*L2

The weights vary inversely as the square of the distance from the target
point to each of the input logs:

w_1 = (1/d_1^2) / (1/d_1^2 + 1/d_2^2)

In general:

L_out = Σ_i w_i * L_i

where:

w_i = d_i^(-2) / Σ_j d_j^(-2)
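For illustration, a minimal sketch of this weighting for an arbitrary number of wells. The logs are assumed to be already converted to time and resampled to a common basis, and the example distances are hypothetical:

```python
import numpy as np

def interpolate_logs(logs, distances):
    """Weighted sum L_out = sum_i w_i * L_i with w_i = d_i^-2 / sum_j d_j^-2."""
    logs = np.asarray(logs, dtype=float)       # shape (n_wells, n_samples)
    d = np.asarray(distances, dtype=float)     # distance from the target trace to each well
    w = d ** -2 / np.sum(d ** -2)
    return w @ logs

# Example: the target trace is twice as far from well 2 as from well 1
logs = [np.full(100, 6000.0), np.full(100, 8000.0)]
lout = interpolate_logs(logs, distances=[1.0, 2.0])   # weights 0.8 and 0.2
```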

61
Using picked events with multiple logs forces the inverse
distance interpolation to be guided by the picked events:

62
Note the difference between interpolation with and
without picked events:

General rules for adding picked events:

(1) Picked events must be present across the entire survey.


Missing picks will be interpolated by the program.
(2) Only pick events which you are sure of.
(3) Pick the large scale structure, not the fine details.

63
The next slide shows the same comparison for the
wedge model.

This is the original model:

This model was created


by deleting the horizon at
the base of the wedge:

64
[Figure panels: Original Model and Inversion | Inversion with horizon removed]

65
Wavelet Extraction
The Convolutional Model is used as
the basis for all inversion:

trace = wavelet * reflectivity + noise

In the frequency domain, convolution


becomes multiplication:

Inversion can be thought of as division by the wavelet:

Reflectivity = Trace / Wavelet

The narrow band wavelet restricts the available range of information


in the frequency domain.
66
The Wavelet is defined completely by its amplitude
spectrum and its phase spectrum:

Over a limited frequency range, the phase spectrum may often be


approximated by a straight line.

The intercept of the line is the constant phase rotation which best
characterizes this wavelet.

The slope of the line measures the time-shift of the wavelet.
67


Polarity Convention

A special wavelet phase issue is the Polarity Convention.

The default convention is that an increase in acoustic impedance is


represented as a peak on zero-phase seismic data:

The alternate convention is that an increase in acoustic impedance is


represented as a trough on zero-phase seismic data:

The polarity convention is set using the
Synthetic Polarity Convention menu:
68
Wavelets in the earth vary both laterally (spatially) and
temporally for a variety of reasons:
• Near surface effects (space variant)
• Frequency-dependent absorption (space and time variant)
• Inter-bed multiples (space and time variant)
• NMO stretch
• Processing artifacts

STRATA assumes that the wavelet is constant with time and space:
• Time invariant: This means that the inversion is optimized for
a limited time window.
• Space invariant: This assumes that the data has been
processed optimally to remove spatial variations in the
wavelet.

69
In general, a variety of methods can be used for
wavelet extraction.

Some are available in STRATA.

(1) Estimate amplitude spectrum using the seismic data alone. The
phase is assumed known from some other source.

• autocorrelation
• maximum entropy spectral analysis
• cross spectral analysis
• in STRATA: statistical wavelet extraction uses the
autocorrelation

(2) Estimate both amplitude and phase spectra from the seismic data
alone.

• minimum entropy wavelet estimation
• higher order moments
• STRATA does not have any capability for this, since it is not
considered reliable.
70
(3) Estimate both amplitude and phase spectra using deterministic
measurements.

• marine signatures
• VSP analysis
• in STRATA: read in the external wavelet as an ASCII file

(4) Estimate both amplitude and phase spectra using both seismic
and well log measurements.

In STRATA: extract the full wavelet using well logs.

(5) Estimate amplitude spectrum and a constant phase spectrum


using both seismic and well log measurements.

In STRATA: extract constant phase wavelet using well logs.


71
Using the well logs to calculate
a single constant phase value

This procedure calculates the amplitude spectrum of the wavelet using


the autocorrelation of the seismic traces, exactly as in the statistical
procedure. The phase spectrum is approximated as a single constant
value, using the well logs. This procedure is more robust than the full
phase spectrum calculation, especially when the tie between logs and
seismic is poor.

Steps for calculating the phase:

(1) Calculate the wavelet using the statistical wavelet extraction


procedure (don’t use the wells).
(2) Apply a series of constant phase rotations to the extracted
wavelet.
(3) For each phase rotation, calculate the synthetic trace and
correlate it with the seismic trace.
(4) Select the phase rotation which produces the maximum
correlation.
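For illustration, a sketch of this scan in Python. The phase rotation is applied with the Hilbert transform, the 10-degree step is arbitrary, and the wavelet, reflectivity and trace arrays (same sample rate, reflectivity and trace the same length) are assumed to be given; this is not the STRATA code itself:

```python
import numpy as np
from scipy.signal import hilbert

def best_constant_phase(wavelet, reflectivity, trace, step=10):
    """Return the constant phase rotation (degrees) giving the best trace correlation."""
    analytic = hilbert(wavelet)                 # wavelet + i * (its Hilbert transform)
    best_phase, best_corr = 0, -np.inf
    for phase in range(-180, 180, step):
        rotated = np.real(analytic * np.exp(1j * np.radians(phase)))
        synthetic = np.convolve(reflectivity, rotated, mode="same")
        corr = np.corrcoef(synthetic, trace)[0, 1]
        if corr > best_corr:
            best_phase, best_corr = phase, corr
    return best_phase, best_corr
```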
72
A General Problem with wavelet extraction:
• To extract a wavelet using logs, an optimum correlation must
be done first.
• To perform correlation properly, the wavelet must already be
known.

Practical wavelet extraction procedure:

(1) Use statistical wavelet extraction to determine a preliminary


wavelet. This assumes that the approximate phase of the
wavelet is known.
(2) Stretch/squeeze the logs to tie the seismic data.
(3) Extract a new wavelet using the well logs.
(4) Possibly repeat steps (2) and (3).

73
Model Based Inversion Parameters
The menu for model-based inversion:

The significant parameters


are:
•Inversion option
•Maximum impedance
change
•Average block size
•Number of iterations

74
Inversion option

This parameter controls how the constraints will be used.

Model-based inversion minimizes an objective function of this form:

J = weight1 x (T - W*r) + weight2 x (M - H*r)

where:
T = the seismic trace
W= the wavelet
r = the final reflectivity
M = the initial guess model impedance
H = the integration operator which convolves with the
final reflectivity to produce the final impedance
* = convolution
75
The objective function has two parts:

Minimizing the first part, (T - W*r), forces a solution which models the
seismic trace. Minimizing the second part, (M - H*r), forces a solution
which models the initial guess impedance using the specified block size.

These two conditions are (usually) incompatible. The weights, weight1
and weight2, determine how the two parts are balanced. In stochastic
inversion, the objective function is exactly as shown above. The weights
are determined by this parameter:

The Model Constraint is the value of weight2 in the objective function.


Setting this value to 0 causes the seismic trace to dominate. Setting this
value to 1 causes the initial guess model to dominate. This is called a
soft constraint because the final model may deviate any distance from
the initial guess, but it pays an increasingly large penalty for doing so.
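Conceptually, the two weighted terms can be stacked into a single damped least-squares system and solved for the reflectivity. The sketch below is purely illustrative: the wavelet matrix assumes a causal wavelet starting at time zero, and the integration operator H is approximated by a simple running sum, which is not necessarily how STRATA implements it:

```python
import numpy as np
from scipy.linalg import toeplitz

def solve_stochastic(trace, wavelet, model, weight2):
    """Minimize weight1*||T - W r||^2 + weight2*||M - H r||^2 for the reflectivity r."""
    trace, model = np.asarray(trace, float), np.asarray(model, float)
    n = len(trace)
    weight1 = 1.0 - weight2
    # Convolution matrix for the wavelet (assumes len(wavelet) <= n, wavelet at t = 0)
    W = toeplitz(np.r_[wavelet, np.zeros(n - len(wavelet))], np.zeros(n))
    # Integration operator approximated as a running sum of the reflectivity
    H = np.tril(np.ones((n, n)))
    A = np.vstack([np.sqrt(weight1) * W, np.sqrt(weight2) * H])
    b = np.concatenate([np.sqrt(weight1) * trace, np.sqrt(weight2) * model])
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r
```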
76
In constrained inversion, the second term is missing
entirely from the objective function. However, the
algorithm is constrained to keep the final impedance
values constrained within the limits specified by:

This is called a hard constraint,


because values are not allowed to
change beyond a fixed boundary.

The Maximum Impedance Change is a percentage of the average


impedance for the log. Note the effective range for this model:

In practice, constrained inversion is usually preferred to stochastic
inversion because the Maximum Impedance Change parameter is more
meaningful than the Model Constraint parameter.
77
Average Block Size

This parameter controls the resolution of the final result. The initial
guess model is blocked to a series of uniform blocks with this size:

The final inversion result may change the size of the blocks, but the
number of blocks is still the same. This means that some blocks get
bigger and some get smaller, while the average is kept constant.

Using a small block size (2 ms) will increase the resolution, but the
increased detail may be coming from the initial guess.

Using a small block size will always improve the fit between the input
trace and the final synthetic trace.
78
Number of Iterations

Since STRATA converges through a series of iterations, this parameter


determines the degree of convergence. In practice most of the work
has been done after about 3 iterations.

There is never any harm in having more iterations - it only affects the
run-time.

The number of iterations required for convergence may depend on the


block size used in the inversion. A finer block size may require more
iterations.

The way to confirm whether enough iterations have been done is to


examine the error plot.

79
Maximum-Likelihood Sparse Spike
Inversion Parameters
The menu for sparse spike inversion:

Sparse Spike Inversion uses the same parameters as constrained
model-based inversion.

These additional parameters determine how many spikes will be


detected on each trace:

Maximum Number of Spikes


Spike Detection Threshold

80
Maximum Number of Spikes

This parameter sets the maximum number of allowable spikes per


trace. This is defaulted to be the same as the total number of
samples in the window. Effectively this means that this parameter
does not operate under normal conditions.

Spike Detection Threshold

As each spike is added, its amplitude is compared with the average


amplitude of all spikes detected so far. When the new amplitude is
less than a specified fraction of the average, the algorithm stops
adding spikes.
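For illustration, a sketch of this stopping rule as a greedy, matching-pursuit style loop; this is a simplification, not the maximum-likelihood algorithm itself, and a zero-phase wavelet is assumed so that correlation and convolution align:

```python
import numpy as np

def detect_spikes(trace, wavelet, threshold=0.5, max_spikes=None):
    """Add spikes greedily; stop when a new spike is small relative to the average."""
    n = len(trace)
    max_spikes = max_spikes or n               # default: one spike allowed per sample
    residual = np.asarray(trace, float).copy()
    spikes = np.zeros(n)
    energy = np.sum(np.asarray(wavelet) ** 2)
    amplitudes = []
    for _ in range(max_spikes):
        corr = np.correlate(residual, wavelet, mode="same")
        k = int(np.argmax(np.abs(corr)))       # best location for the next spike
        amp = corr[k] / energy
        # Spike Detection Threshold: stop when the new spike is too small
        if amplitudes and abs(amp) < threshold * np.mean(np.abs(amplitudes)):
            break
        amplitudes.append(amp)
        spikes[k] += amp
        delta = np.zeros(n)
        delta[k] = amp
        residual -= np.convolve(delta, wavelet, mode="same")
    return spikes
```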

81
Bandlimited Inversion Parameters
The menu for bandlimited inversion:

The only parameter for Bandlimited Inversion is:

Constraint High-Cut Frequency: This parameter controls the filter


which is applied to the initial guess model to provide the low-
frequency component to the result. All frequencies above this value
are removed from the initial guess. All frequencies below this value
are removed from the recursively inverted trace. The two are then
added together.
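For illustration, a sketch of this split-and-merge using a Butterworth filter pair at the Constraint High-Cut Frequency; the filter order and the zero-phase filtering are assumptions made for the example, not the STRATA implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def merge_low_frequency(model_impedance, inverted_impedance, high_cut, dt):
    """Keep the initial guess below the high-cut frequency and the inversion above it."""
    nyquist = 0.5 / dt
    b_lo, a_lo = butter(4, high_cut / nyquist, btype="low")
    b_hi, a_hi = butter(4, high_cut / nyquist, btype="high")
    low = filtfilt(b_lo, a_lo, model_impedance)      # trend from the initial guess model
    high = filtfilt(b_hi, a_hi, inverted_impedance)  # band from the recursively inverted trace
    return low + high
```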

82
The resulting initial guess model for various settings
of the Constraint High-Cut Frequency:

83
Scaling Parameters
After the main inversion menu is set, the following pages appear,
controlling the scaling of the data:

84
Why is Scaling an Issue?

The Convolutional Model is used as the basis for all inversion:

Trace = Wavelet * Reflectivity + Noise

In the frequency domain, this can be approximated by:

Reflectivity = Trace / Wavelet

To solve for the reflectivity, the wavelet must be known.

Normally, when a wavelet is extracted, only its shape is known; not its
absolute amplitude. Inversion requires that the absolute amplitude be
known as well.

From the equation above, if the wavelet is multiplied by 2, the resulting


reflectivity will be divided by 2.

STRATA determines the scaling of the wavelet automatically by forcing


the root-mean-square amplitude of the initial guess synthetic to be equal
to the root-mean-square amplitude of the real trace.
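For illustration, a minimal sketch of that RMS matching; the manual adjustment described later simply multiplies this automatic result:

```python
import numpy as np

def wavelet_scaler(real_trace, synthetic_trace, manual_factor=1.0):
    """Scale factor forcing the synthetic's RMS amplitude to match the real trace's RMS."""
    rms_real = np.sqrt(np.mean(np.asarray(real_trace, float) ** 2))
    rms_synth = np.sqrt(np.mean(np.asarray(synthetic_trace, float) ** 2))
    return manual_factor * rms_real / rms_synth
```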
85
Scaling Options
Calculate a single scaler for the entire data set
Calculate a separate scaler for each trace

The first option, single scaler, is theoretically more desirable. This is


because it assumes that there is a single wavelet scaling which is suitable
for all traces of the data set. This will preserve amplitude variations from
trace to trace.

The second option, separate scaler, is actually more robust. It effectively


assumes that traces may need to be rescaled to remove trace-to-trace
variation which is not based on lithology.

For either option, the window used to determine the scaler may be
different from that used in the actual inversion. For some data sets,
especially sparse models, the automatic scaling may not be ideal. In that
case, you may override with a manual adjustment, which multiplies the
automatic scaling result:

The only way to determine this factor is by visually inspecting how


well the inversion traces match the initial guess logs.
86
Good Scaling:

Scaling too
low:

Scaling too
high:

87
Error Plot
The Error Plot shows the difference between the actual traces and the
synthetic traces calculated using the inversion impedance result:

Ideally, the Error Plot should show no coherent energy, and should
have a low overall amplitude.
88
Low frequency component in the error – probably
caused by using the wrong wavelet:

89
Error localized to one side of line – probably caused by
not picking enough events:

90
Coherent error throughout data set – probably caused by:
• block size too large
• not enough iterations
• constraint too tight

91
This is a summary of the three inversion tests. The
biggest improvement occurs when the block size is
reduced from 6 ms to 4 ms. Increasing the number of iterations has a
lesser effect.

[Figure panels: 6 ms - 5 iterations | 4 ms - 5 iterations | 4 ms - 10 iterations]

92
