Stationary Processes
Statistical Stationarity
A stationary time series is one whose statistical properties, such as mean, variance, and autocorrelation, are all constant over time. Most statistical forecasting methods are based on the assumption that the time series can be rendered approximately stationary (i.e., "stationarized") through the use of mathematical transformations.
A stationarized series is relatively easy to predict: you simply predict that its statistical properties will be the same in the future as they have been in the past! The predictions for the stationarized series can then be "untransformed," by reversing whatever mathematical transformations were previously used, to obtain predictions for the original series.
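As a minimal sketch of this transform-predict-untransform round trip (the data and the naive mean forecast here are illustrative assumptions, not a recommended model):

```python
import numpy as np

# A hypothetical trending series (made-up data, for illustration only).
z = np.array([10.0, 12.0, 15.0, 19.0, 24.0, 30.0])

# Stationarize by first differencing: y[i] = z[i] - z[i-1].
y = np.diff(z)

# Predict that the stationarized series behaves in the future as in the
# past: here, a naive forecast that future differences equal their mean.
h = 3                          # forecast horizon
y_hat = np.full(h, y.mean())

# "Untransform": reverse the differencing by cumulatively summing the
# forecast differences onto the last observed value of the original series.
z_hat = z[-1] + np.cumsum(y_hat)
print(z_hat)                   # forecasts on the original scale
```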
Thus, finding the sequence of transformations needed to stationarize a time series often provides important clues in the search for an appropriate forecasting model. Stationarizing a time series through differencing (where needed) is an important part of the process of fitting an ARIMA model.
Another reason for trying to stationarize a time series is to be able to obtain meaningful sample statistics such as means, variances, and correlations with other variables. Such statistics are useful as descriptors of future behavior only if the series is stationary.
For example, if the series is consistently increasing over time, the sample mean and variance will grow with the size of the sample, and they will always underestimate the mean and variance in future periods. And if the mean and variance of a series are not well-defined, then neither are its correlations with other variables. For this reason you should be cautious about trying to extrapolate regression models fitted to nonstationary data.
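A quick numerical illustration of this point, using simulated data (the trend and noise parameters are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 501)
series = 0.5 * t + rng.normal(0, 5, size=t.size)   # upward trend + noise

# The sample mean and variance keep growing as the sample grows, so
# neither is a stable descriptor of the series' future behavior.
for n in (100, 200, 500):
    print(n, series[:n].mean().round(1), series[:n].var().round(1))
```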
Most business and economic time series are far from stationary when expressed in their original units of measurement, and even after deflation or seasonal adjustment they will typically still exhibit trends, cycles, random walks, and other non-stationary behavior. If the series has a stable long-run trend and tends to revert to the trend line following a disturbance, it may be possible to stationarize it by de-trending (e.g., by fitting a trend line and subtracting it out prior to fitting a model, or else by including the time index as an independent variable in a regression or ARIMA model), perhaps in conjunction with logging or deflating.
Such a series is said to be trend-stationary. However, sometimes even de-trending is not sufficient to make the series stationary, in which case it may be necessary to transform it into a series of period-to-period and/or season-to-season differences.
If the mean, variance, and autocorrelations of the original series are not constant in time, even after detrending, perhaps the statistics of the changes in the series between periods or between seasons will be constant. Such a series is said to be difference-stationary. [13]
Impact of Trend and Seasonality on Stationarity
Time series with trends, or with seasonality, are not stationary, because the trend and the seasonal pattern affect the value of the time series at different times.
Cyclic vs Seasonal vs Trend
A time series with cyclic behavior (but with no trend or seasonality) is stationary, because the cycles are not of a fixed, predictable length. In general, a stationary time series will have no predictable patterns in the long term.
Conditions for Stationarity
A time series $y_t$ is stationary if it satisfies the following conditions:
- Constant mean: $E[y_t] = \mu$ for all $t$.
- Constant variance: $\mathrm{Var}(y_t) = \sigma^2$ for all $t$.
- The autocovariance between $y_t$ and $y_s$ depends only on the interval $|t - s|$, not on $t$ and $s$ themselves.
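These conditions cannot be verified exactly from a single realization, but a common informal check is to compare rolling statistics across the sample. A sketch, assuming the data are in a pandas Series:

```python
import pandas as pd

def rolling_summary(y: pd.Series, window: int = 30) -> pd.DataFrame:
    """Rolling mean and variance: roughly flat curves are consistent
    with a constant mean and a constant variance over time."""
    return pd.DataFrame({
        "rolling_mean": y.rolling(window).mean(),
        "rolling_var": y.rolling(window).var(),
    })

# y.autocorr(lag=k) gives the lag-k sample autocorrelation; for a
# stationary series it should not depend on where the window starts.
```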
Transformations to Achieve Stationarity
If the time series is not stationary, we can often transform it to stationarity with one of the following techniques.

We can difference the data. That is, given the series $Z_t$, we create the new series

$$Y_i = Z_i - Z_{i-1}, \qquad i = 2, \dots, n.$$

The differenced data will contain one less point than the original data. Although you can difference the data more than once, one difference is usually sufficient.
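A sketch of differencing in pandas (the series here is made up; note the differenced result has one fewer usable point):

```python
import pandas as pd

z = pd.Series([10.0, 12.0, 15.0, 19.0, 24.0, 30.0])

d1 = z.diff().dropna()         # first difference: Y_i = Z_i - Z_{i-1}
d2 = z.diff().diff().dropna()  # second difference (rarely needed)
# For monthly data with yearly seasonality, z.diff(12) would give the
# season-to-season differences mentioned earlier.
print(len(z), len(d1))         # differencing drops one point: 6 -> 5
```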
If the data contain a trend, we can fit some type of curve to the data and then model the residuals from that fit. Since the purpose of the fit is simply to remove the long-term trend, a simple fit, such as a straight line, is typically used.
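For instance, a straight-line detrend could be done with numpy's least-squares polynomial fit (a sketch with simulated data; any curve-fitting routine would do):

```python
import numpy as np

t = np.arange(100, dtype=float)
z = 2.0 + 0.3 * t + np.random.default_rng(1).normal(0, 1, 100)

# Fit a straight line to capture the long-term trend...
slope, intercept = np.polyfit(t, z, deg=1)
trend = intercept + slope * t

# ...then model the residuals, which should now be (closer to) stationary.
residuals = z - trend
```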
For non-constant variance, taking the logarithm or square root of the series may stabilize the variance. For negative data, you can add a suitable constant to make all the data positive before applying the transformation. After the model is fitted and the transformation is inverted, this constant can be subtracted back out to obtain the predicted (i.e., fitted) values and forecasts for future points.
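A sketch of the log transform with a constant offset for a series containing negative values (the offset rule used here is one common convention, not a fixed requirement):

```python
import numpy as np

z = np.array([-2.0, 1.0, 4.0, 10.0, 25.0])

# Shift so all values are positive before taking logs.
c = 1.0 - z.min()            # any constant making min(z + c) > 0 works
y = np.log(z + c)            # model and forecast y instead of z

# After forecasting on the log scale, invert the transform and
# subtract the constant back out to return to the original units.
y_hat = y[-1]                # placeholder "forecast" for illustration
z_hat = np.exp(y_hat) - c
```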
Tests for Stationarity
Augmented Dickey-Fuller (ADF)
Augmented Dickey-Fuller test: the null hypothesis is that the data are non-stationary and non-seasonal (i.e., the series has a unit root). A small p-value therefore provides evidence that the series is stationary.
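With statsmodels, this test is available as adfuller; a minimal sketch on simulated random-walk data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
z = np.cumsum(rng.normal(size=200))   # a random walk: non-stationary

stat, pvalue, *_ = adfuller(z)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# Large p-value: fail to reject the null of non-stationarity.
# Differencing the series and re-testing should give a small p-value.
```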