Stationary ARMA Processes

Kostas Mouratidis
Sheffield: 30/09/2024
Outline
• Introduction to Stationary Time Series

• Concepts and Definitions


✓ Stochastic Process
✓ Weak and Strict Stationarity
✓ Invertibility

• Examples of Stationary Processes.

• Invertibility: Non-Fundamental Shocks

• Introduction to Forecasting
Introduction to Time Series
• A time series is a sequence of data points collected on a variable or several
variables sequentially over time. We denote a univariate time series by
{yt : t = . . . , −2, −1, 0, 1, 2, . . .}.

Note that capital Yt refers to the random variable, while lower-case yt
denotes a realization of Yt.

Examples:
• business - production, sales, prices of goods, inventory
• economics - gross domestic product (GDP), unemployment, inflation,
exchange rate, interest rate
• finance - stock prices
• As an example, GDP data for the United States recorded from 1960Q1 to 2017Q4 look
as follows:

The values of GDP are expressed in 2009 dollars ($2009), which means that the price level
is held fixed at its 2009 value.

• The plot used for illustrating time series data is a time-plot. A time-plot
(sometimes called a time series graph) displays the values of a time series variable
against time.
Example of GDP in the US
The time-plot displays annual real GDP in the United States over a 58-year
period, from 1960 to 2017. Here, real GDP means that the GDP values are
obtained after adjusting for inflation.

GDP increased dramatically (exponentially) over the 58-year period, from
approximately $3 trillion in 1960 to over $17 trillion in 2017.

This is a more than five-fold increase (17/3 ≈ 5.67), which corresponds to an
increase of about 1.7 log points (ln(17/3) ≈ 1.73).

The rate of growth was not constant. GDP declined during the recessions of
1960-1961, 1970, 1974-1975, 1980, 1981-1982, 1990-1991, 2001, and
2007-2009.

Over the long run, GDP tends to grow by a certain percentage per year on
average. Therefore, the log-transformed GDP (log(GDPt)) grows linearly
over time.
• Another reason is that the standard deviation of GDP changes with time, whereas the
standard deviation of the log-transformed series is approximately constant.

• Therefore, modelling the log-transformed series is more convenient than modelling the
original series, as the short sketch below illustrates.
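A minimal Python sketch of the point, using a hypothetical series that grows at a constant
3% per year (illustrative numbers, not the actual GDP data):

import numpy as np

years = np.arange(1960, 2018)           # 1960..2017, as in the GDP example
gdp = 3.0 * 1.03 ** (years - 1960)      # hypothetical: $3tn growing at 3% per year
log_gdp = np.log(gdp)                   # exactly linear in t: log(3) + t*log(1.03)

print(log_gdp[-1] - log_gdp[0])         # total increase in log points: 57*log(1.03) ~ 1.68

Constant percentage growth in the level becomes a straight line after the log transformation,
which is why the log series is the natural object to model.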
Components of Time Series
1. Trend: The trend is a persistent, long term upward or downward pattern of movement. The
duration of a trend is usually several years. The source of such a trend might be gradual and
ongoing changes in technology, population, wealth, etc.

• Real GDP has a non-linear upward trend over the period.

• The logarithm of GDP has a clear linear upward trend over the period.
2. Seasonal:

• Seasonal behaviour occurs when the data exhibit rises and falls at a fixed frequency.

• A seasonal pattern occurs when a time series is affected by seasonal factors such as the
time of the year or the day of the week. The monthly sales of women's clothing show
seasonality, which is induced partly by the change in the cost of clothing at the end of
the calendar year.

3. Cycle:

A cycle occurs when the data exhibit rises and falls that are not of a fixed frequency.

These fluctuations are usually due to economic conditions, and are often related to the business
cycle. The duration of these fluctuations is usually at least 2 years.
4. Irregular:
• This component represents whatever is left over after identifying the other three
systematic components.

• It represents the random, unpredictable fluctuations in the data. There is no pattern to
the irregular component.
• The uses of time series analysis can be viewed as follows:

1. Prediction of future values using current and past information [will be
covered in Topic 4]

2. Examination of the joint relationship of several related time series [will
be covered in Topic 3]

3. Assessment of the impact of an intervention on the behaviour of a time
series [will be covered in Topic 3]

4. Determination of the dynamic causal effect on a variable (say, Y) of a
change in another variable (say, X) [will also be covered in Topic 3]
Expectations of a Stochastic Process
Examples

[Figure: simulated realization of a white noise process, Yt = μ + εt]

[Figure: simulated realization of a linear trend process, Yt = βt + εt]
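A minimal Python sketch of how such realizations can be generated (the values of μ, β, and
the sample size are illustrative assumptions, not taken from the slides):

import numpy as np

rng = np.random.default_rng(0)
T = 200
eps = rng.normal(0.0, 1.0, T)           # Gaussian white noise shocks

mu, beta = 2.0, 0.1                     # hypothetical parameter values
y_white_noise = mu + eps                # Y_t = mu + eps_t : fluctuates around mu
y_trend = beta * np.arange(T) + eps     # Y_t = beta*t + eps_t : fluctuates around a trend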
Stationarity
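A sketch of the weak (covariance) stationarity definition used throughout (the strict
version is in the lecture notes): a process {Yt} is weakly stationary if, for all t and any j,

E(Yt) = μ
E[(Yt − μ)(Yt−j − μ)] = γj

that is, the mean is constant and the autocovariances depend only on the lag j, not on
the date t.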
White Noise
A process {εt}, t = −∞, . . . , +∞, whose elements have mean zero and variance σ²,

E(εt) = 0 (1.1.14)
E(εt²) = σ² (1.1.15)

and for which the ε's are uncorrelated across time:

E(εt ετ) = 0 for t ≠ τ (1.1.16)

A process satisfying (1.1.14) through (1.1.16) is called a white noise process. Finally, if
(1.1.14) through (1.1.16) hold along with

εt ~ N(0, σ²) (1.1.17)

then we have the Gaussian white noise process.
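A minimal simulation sketch of conditions (1.1.14)-(1.1.16), assuming Gaussian draws and
an illustrative σ = 2:

import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
eps = rng.normal(0.0, sigma, 100_000)   # Gaussian white noise draws

print(eps.mean())                       # ~ 0,        condition (1.1.14)
print(np.mean(eps ** 2))                # ~ sigma**2, condition (1.1.15)
print(np.mean(eps[1:] * eps[:-1]))      # ~ 0,        condition (1.1.16) at lag 1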


Moving Average Processes: MA(1) and MA(q)
We start with an MA(1) model, for which we compute the mean, the
variance, and the autocovariances.
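A sketch of these calculations, assuming the standard MA(1) specification
Yt = μ + εt + θεt−1 with {εt} white noise with variance σ²:

E(Yt) = μ
γ0 = E[(Yt − μ)²] = (1 + θ²)σ²
γ1 = E[(Yt − μ)(Yt−1 − μ)] = θσ²
γj = 0 for j > 1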
The qth-Order Moving Average Process
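Assuming the standard form Yt = μ + εt + θ1εt−1 + … + θqεt−q, the same steps as in the
MA(1) case give (a sketch):

γ0 = (1 + θ1² + θ2² + … + θq²)σ²
γj = (θj + θj+1θ1 + θj+2θ2 + … + θqθq−j)σ² for j = 1, 2, …, q
γj = 0 for j > q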
Example: MA(2) process
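A minimal simulation sketch comparing sample and theoretical autocovariances, assuming a
zero-mean MA(2) process with illustrative coefficients θ1 = 0.6 and θ2 = 0.3:

import numpy as np

rng = np.random.default_rng(1)
theta1, theta2, sigma = 0.6, 0.3, 1.0   # hypothetical MA(2) parameters
T = 50_000
eps = rng.normal(0.0, sigma, T + 2)

# Y_t = eps_t + theta1*eps_{t-1} + theta2*eps_{t-2}
y = eps[2:] + theta1 * eps[1:-1] + theta2 * eps[:-2]

def sample_autocov(x, j):
    xd = x - x.mean()
    return np.mean(xd[j:] * xd[: len(xd) - j])

# Theoretical autocovariances of an MA(2); gamma_j = 0 for j > 2
gammas = [(1 + theta1**2 + theta2**2) * sigma**2,   # gamma_0
          (theta1 + theta1 * theta2) * sigma**2,    # gamma_1
          theta2 * sigma**2]                        # gamma_2
for j, g in enumerate(gammas):
    print(j, round(sample_autocov(y, j), 3), g)     # sample estimate vs theory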
Autoregressive Processes
Realizations of an AR(1) process
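A minimal sketch for generating such realizations, assuming Yt = φYt−1 + εt with c = 0 and
a few illustrative values of φ:

import numpy as np

rng = np.random.default_rng(2)
T = 200
eps = rng.normal(0.0, 1.0, T)

for phi in (0.0, 0.5, 0.9):             # larger phi -> smoother, more persistent paths
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]  # Y_t = phi*Y_{t-1} + eps_t
    print(phi, round(y.var(), 2))       # ~ 1/(1 - phi**2) for large T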
The second-order Autoregressive process: AR(2)
ARMA(p,q) Process
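As a sketch, the standard ARMA(p, q) specification combines the autoregressive and
moving average components:

Yt = c + φ1Yt−1 + … + φpYt−p + εt + θ1εt−1 + … + θqεt−q

and is stationary provided the roots of 1 − φ1z − … − φpz^p = 0 lie outside the unit circle.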
Model Selection
One natural question to ask of any estimated model is: how well does it fit
the data? Adding additional lags for p and/or q will necessarily reduce the
sum of squares of the estimated residuals. However, adding such lags entails
the estimation of additional coefficients and an associated loss of degrees of
freedom. There exist various model selection criteria that trade off a reduction
in the sum of squares of the residuals against a more parsimonious model.
The two most commonly used model selection criteria are the Akaike
Information Criterion (AIC) and the Schwarz Bayesian Criterion (SBC).

AIC = T ln(sum of squared residuals) + 2n

SBC = T ln(sum of squared residuals) + n ln(T)

where n = number of parameters estimated (p + q + possible constant term)
and T = number of usable observations.
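A minimal Python sketch of the two criteria as defined above, assuming residuals from some
fitted ARMA(p, q) model are available as a numeric array:

import numpy as np

def aic_sbc(residuals, n_params):
    # AIC = T*ln(SSR) + 2n ; SBC = T*ln(SSR) + n*ln(T)
    resid = np.asarray(residuals, dtype=float)
    T = len(resid)
    ssr = np.sum(resid ** 2)
    return T * np.log(ssr) + 2 * n_params, T * np.log(ssr) + n_params * np.log(T)

# Comparing, say, an ARMA(1,0) fit (n = 2 with a constant) against an ARMA(2,1)
# fit (n = 4): the specification with the lower criterion value is preferred.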
BOX–JENKINS MODEL SELECTION
• Parsimony
Box and Jenkins argue that parsimonious models produce better forecasts than
overparameterized models. A parsimonious model fits the data well without
incorporating any needless coefficients.

• Stationarity and Invertibility

The distribution theory underlying the use of the sample ACF and PACF as
approximations to those of the true data-generating process assumes that the {yt}
sequence is stationary. Moreover, t-statistics and Q-statistics also presume that the data
are stationary.

Under the null hypothesis that all values of ρj = 0, Q is asymptotically 𝜒2 distributed
with s degrees of freedom. If the sample value of Q exceeds the critical value of 𝜒2
with s degrees of freedom, then at least one value of ρj is statistically
different from zero at the specified significance level.
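A sketch using the Ljung-Box form of the Q-statistic, Q = T(T + 2) Σ rj²/(T − j) summed
over j = 1, …, s, where rj is the jth sample autocorrelation of the residuals (this form is
one common choice; the lecture notes may use the Box-Pierce version instead):

import numpy as np

def ljung_box_q(residuals, s):
    # Compare the returned Q with the chi-squared critical value, s degrees of freedom
    x = np.asarray(residuals, dtype=float)
    x = x - x.mean()
    T = len(x)
    q = 0.0
    for j in range(1, s + 1):
        r_j = np.sum(x[j:] * x[:-j]) / np.sum(x * x)   # sample autocorrelation at lag j
        q += r_j ** 2 / (T - j)
    return T * (T + 2) * q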

Goodness of Fit
A good model will fit the data well. Obviously, R² and the average of the
residual sum of squares are common goodness-of-fit measures in ordinary
least squares.
The Autocovariance Generating Function (AGF)
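A sketch of the definition, using the standard notation: for a covariance-stationary
process with autocovariances γj, the AGF is

gY(z) = Σ γj z^j, with the sum running over j = …, −2, −1, 0, 1, 2, …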

Examples:
As an example of calculating an autocovariance-generating function,
consider the MA(1) process
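Yt = μ + (1 + θL)εt

(a sketch assuming the standard MA(1) form, with L the lag operator). Its
autocovariance-generating function is

gY(z) = σ²(1 + θz)(1 + θz⁻¹) = θσ²z⁻¹ + (1 + θ²)σ² + θσ²z

so that reading off the coefficients on z⁰ and z¹ recovers γ0 = (1 + θ²)σ² and γ1 = θσ².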

The autocovariance-generating function of an


MA(q) process is given by
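gY(z) = σ²(1 + θ1z + θ2z² + … + θq z^q)(1 + θ1z⁻¹ + θ2z⁻² + … + θq z^(−q))

(a sketch of the standard result, under the same white noise assumptions as above).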
Given that an AR(1) process can be written as
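Yt − μ = (1 − φL)⁻¹εt

(a sketch assuming |φ| < 1), its autocovariance-generating function follows from this
MA(∞) representation as

gY(z) = σ² / [(1 − φz)(1 − φz⁻¹)]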
Invertibility
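A sketch of the MA(1) case: the process Yt = μ + εt + θεt−1 is invertible when |θ| < 1.
Expanding (1 + θL)⁻¹ then lets the shock be recovered from current and past observations:

εt = Σ (−θ)^j (Yt−j − μ), summing over j = 0, 1, 2, …

When |θ| ≥ 1, no such representation in past observables exists, and the shocks are
non-fundamental.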
Forecasting
Example: AR(1)
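A minimal forecasting sketch, assuming the AR(1) model Yt = c + φYt−1 + εt with |φ| < 1;
the values of c, φ, and the last observation below are hypothetical:

def ar1_forecast(y_t, c, phi, h):
    # h-step-ahead point forecast: E_t[Y_{t+h}] = mu + phi**h * (y_t - mu),
    # where mu = c / (1 - phi) is the unconditional mean (requires |phi| < 1)
    mu = c / (1.0 - phi)
    return mu + phi ** h * (y_t - mu)

# Hypothetical values: c = 1.0, phi = 0.8, last observation y_t = 10.0
for h in (1, 2, 5, 20):
    print(h, ar1_forecast(10.0, 1.0, 0.8, h))   # forecasts decay toward mu = 5

As the horizon h grows, the forecast reverts geometrically to the unconditional mean,
which is the hallmark of a stationary AR(1) process.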
Summary

• Introduction to Stationary Time Series

• Weak Stationarity (and Strict Stationarity in Lecture notes)

• Examples of Stationary Processes.

• Autocovariance Generating Function

• Invertible process
