Time Series Analysis and Control Examples 
Least Squares and Householder Transformation
Consider the univariate AR(p) process

   y_{t} = α_{0} + Σ_{i=1}^{p} α_{i}y_{t-i} + e_{t}

Define the design matrix X:

   X = [ 1  y_{p}    ⋯  y_{1}
         ⋮  ⋮        ⋱  ⋮
         1  y_{n-1}  ⋯  y_{n-p} ]

Let y = (y_{p+1}, ... ,y_{n})'.
The least squares estimate, â = (X'X)^{-1}X'y, is the approximation to the maximum likelihood
estimate of a = (α_{0},α_{1}, ... ,α_{p})' if the e_{t} are assumed to be Gaussian
error disturbances. Combining X and y as

   Z = [X y]
the Z matrix can be decomposed as

   Z = QU = Q [ R  w_{1}
                0  w_{2} ]

where Q is an orthogonal matrix and R is an upper
triangular matrix, w_{1} = (w_{1}, ... ,w_{p+1})', and
w_{2} = (w_{p+2},0, ... ,0)'.
The least squares estimate using the Householder transformation
is computed by solving the linear system

   Râ = w_{1}
The unbiased residual variance estimate is

   σ̂^{2} = w_{p+2}^{2} / (n-p)

and

   AIC = (n-p)log(σ̂^{2}) + 2(p+1)
In practice, least squares estimation does not require the
orthogonal matrix Q. The TIMSAC subroutines compute
the upper triangular matrix without computing the matrix Q.
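As an illustration, the same reduction can be reproduced with a QR factorization, which NumPy computes via Householder reflections. This is a minimal sketch, not the TIMSAC implementation; the simulated AR(2) series and its coefficients are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) series: y_t = 0.5 + 0.6*y_{t-1} - 0.3*y_{t-2} + e_t
n, p = 500, 2
true_a = np.array([0.5, 0.6, -0.3])
y = np.zeros(n)
for t in range(p, n):
    y[t] = 0.5 + 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.5)

# Design matrix X with rows (1, y_{t-1}, ..., y_{t-p}); response y = (y_{p+1}, ..., y_n)'
X = np.column_stack([np.ones(n - p)] + [y[p - i:n - i] for i in range(1, p + 1)])
yv = y[p:]

# Z = [X y]; its triangular factor holds R, w_1, and the residual entry w_{p+2}
Z = np.column_stack([X, yv])
U = np.linalg.qr(Z, mode="r")                # Householder-based; Q is never formed
R, w1 = U[:p + 1, :p + 1], U[:p + 1, p + 1]
a_hat = np.linalg.solve(R, w1)               # solve R a = w_1
sigma2_hat = U[p + 1, p + 1] ** 2 / (n - p)  # w_{p+2}^2 / (n - p)
aic = (n - p) * np.log(sigma2_hat) + 2 * (p + 1)
```

Note that `mode="r"` returns only the triangular factor, mirroring the point above that Q need not be computed.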
Bayesian Constrained Least Squares
Consider the additive time series model

   y_{t} = T_{t} + S_{t} + e_{t},   e_{t} ∼ N(0,σ^{2})

Practically, it is not possible to estimate the parameters
a = (T_{1}, ... ,T_{T},S_{1}, ... ,S_{T})', since the number of
parameters exceeds the number of available observations.
Let ∇_{L}^{m} denote the seasonal difference operator with L
seasons and degree m; that is, ∇_{L}^{m}S_{t} = (1-B^{L})^{m}S_{t}.
Suppose that T = L·n. Some constraints on the trend and seasonal
components need to be imposed such that the sums of squares of
∇^{k}T_{t}, ∇_{L}^{m}S_{t}, and (S_{t} + S_{t-1} + ⋯ + S_{t-L+1}) are
small. The constrained least squares estimates are obtained by
minimizing

   Σ_{t=1}^{T} { (y_{t}-T_{t}-S_{t})^{2} + d^{2}[ s^{2}(∇^{k}T_{t})^{2} + (∇_{L}^{m}S_{t})^{2} + z^{2}(S_{t}+⋯+S_{t-L+1})^{2} ] }
Using matrix notation, the function to be minimized is

   (y - Ma)'(y - Ma) + (a - a_{0})'D'D(a - a_{0})

where M = [I_{T}  I_{T}], y = (y_{1}, ... ,y_{T})',
and a_{0} is the initial guess of a. The matrix D is a
3T×2T control matrix whose structure varies
according to the order of differencing in trend and season:
   D = [ dsG_{k}  0
         0        dE_{m}
         0        dzF    ]

where E_{m} = C_{m} ⊗ I_{L} (⊗ denotes the Kronecker product),
G_{k} is the T×T matrix that applies the difference operator ∇^{k}
(for k=2, for example, its interior rows have the pattern ⋯ 1 -2 1 ⋯),
and F is the T×T matrix that forms the moving sums
S_{t} + ⋯ + S_{t-L+1}.
The n×n matrix C_{m} has the same structure
as the matrix G_{m}, and I_{L} is the
L×L identity matrix.
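The banded operators above are straightforward to realize numerically. In this sketch the sizes and the helper `diff_matrix` are illustrative (not from TIMSAC); it checks that the Kronecker block C_{m} ⊗ I_{L} applies the seasonal difference:

```python
import numpy as np

def diff_matrix(T, k):
    """T x T matrix applying the difference operator (1 - B)^k;
    the first k rows involve pre-sample values handled through a_0."""
    D = np.eye(T)
    for _ in range(k):
        D = D - np.vstack([np.zeros((1, T)), D[:-1]])  # left-multiply by (1 - B)
    return D

L, n = 4, 3
T = L * n
G2 = diff_matrix(T, 2)                       # interior rows: (... 1 -2 1 ...)
E1 = np.kron(diff_matrix(n, 1), np.eye(L))   # E_1 = C_1 (x) I_L

s = np.arange(T, dtype=float) ** 2
seasonal_diff = E1 @ s                       # equals s_t - s_{t-L} for t > L
```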
The solution of the constrained least squares method is equivalent
to that of maximizing the following function

   L(a) = exp{ -(1/2σ^{2})(y-Ma)'(y-Ma) } exp{ -(1/2σ^{2})(a-a_{0})'D'D(a-a_{0}) }

Therefore, the PDF of the data y is

   f(y|σ^{2},a) = (2π)^{-T/2}(σ^{2})^{-T/2} exp{ -(1/2σ^{2})(y-Ma)'(y-Ma) }

The prior PDF of the parameter vector a is

   π(a|D,σ^{2},a_{0}) = (2π)^{-T}(σ^{2})^{-T}|D'D|^{1/2} exp{ -(1/2σ^{2})(a-a_{0})'D'D(a-a_{0}) }
When the constant d is known, the estimate of
a is the mean of the posterior distribution, where
the posterior PDF of the parameter a is proportional to
the function L(a).
It is obvious that â is the minimizer of ‖g(a|d)‖^{2}, where

   g(a|d) = [ y - Ma
              D(a - a_{0}) ]
The value of d is determined by the minimum ABIC
procedure. The ABIC is defined as

   ABIC = T log( (1/T)‖g(â|d)‖^{2} ) + log det(D'D + M'M) - log det(D'D)
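Because ‖g(a|d)‖² is an ordinary sum of squares, â can be computed by stacking M over D and running one least squares solve. This is a toy sketch with a_{0} = 0; the component sizes, data, and hyperparameter values d, s, z are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

L, n = 4, 12
T = L * n                                    # T = L * n observations

def diff_matrix(T, k):
    """T x T matrix applying the difference operator (1 - B)^k."""
    D = np.eye(T)
    for _ in range(k):
        D = D - np.vstack([np.zeros((1, T)), D[:-1]])
    return D

# Toy data: smooth trend plus a zero-sum seasonal pattern plus noise
trend = 0.05 * np.arange(T) ** 1.5
season = np.tile([1.0, -0.5, -1.0, 0.5], n)
yobs = trend + season + rng.normal(scale=0.3, size=T)

# Penalty blocks for k = 2, m = 1; d, s, z are illustrative values
Gk = diff_matrix(T, 2)
Em = np.kron(diff_matrix(n, 1), np.eye(L))       # C_m (x) I_L
Fsum = sum(np.eye(T, k=-i) for i in range(L))    # moving sum S_t + ... + S_{t-L+1}
d, s, z = 1.0, 1.0, 1.0

M = np.hstack([np.eye(T), np.eye(T)])            # y ~ trend + seasonal
O = np.zeros((T, T))
D = d * np.block([[s * Gk, O], [O, Em], [O, z * Fsum]])

# a_hat minimizes ||y - Ma||^2 + ||D(a - a_0)||^2 with a_0 = 0
A = np.vstack([M, D])
b = np.concatenate([yobs, np.zeros(3 * T)])
a_hat = np.linalg.lstsq(A, b, rcond=None)[0]
T_hat, S_hat = a_hat[:T], a_hat[T:]
```

Repeating the solve over a grid of d values and keeping the one with the smallest ABIC reproduces the minimum ABIC procedure.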
State Space and Kalman Filter Method
In this section, the mathematical formulas for state space
modeling are introduced. The Kalman filter algorithms are
derived from the state space model. As an example, the state
space model of the TSDECOMP subroutine is formulated.
Define the following state space model:

   x_{t} = Fx_{t-1} + Gw_{t}
   y_{t} = H_{t}x_{t} + ε_{t}

where w_{t} ∼ N(0,Q) and ε_{t} ∼ N(0,σ^{2}).
If the observations, (y_{1}, ... ,y_{T}), and the initial
conditions, x_{0|0} and P_{0|0}, are
available, the one-step predictor x_{t|t-1} of the state vector x_{t} and its mean square error (MSE)
matrix P_{t|t-1} are written as

   x_{t|t-1} = Fx_{t-1|t-1}
   P_{t|t-1} = FP_{t-1|t-1}F' + GQG'
Using the current observation, the filtered value of
x_{t} and its variance are updated:

   x_{t|t} = x_{t|t-1} + K_{t}e_{t}
   P_{t|t} = (I - K_{t}H_{t})P_{t|t-1}

where e_{t} = y_{t} - H_{t}x_{t|t-1} and
K_{t} = P_{t|t-1}H_{t}'(H_{t}P_{t|t-1}H_{t}' + σ^{2})^{-1} is the Kalman gain.
The log-likelihood function is computed as

   log L = -(1/2) Σ_{t=1}^{T} log(2πv_{t|t-1}) - Σ_{t=1}^{T} e_{t}^{2}/(2v_{t|t-1})

where v_{t|t-1} is the conditional variance of
the one-step prediction error e_{t}.
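The recursions above reduce to scalar arithmetic for a local-level model (F = G = H_{t} = 1). This is a minimal sketch; the variances, sample size, and initial conditions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + eps_t
Tn, Q, sigma2 = 200, 0.05, 0.5
x_true = np.cumsum(rng.normal(scale=np.sqrt(Q), size=Tn))
yobs = x_true + rng.normal(scale=np.sqrt(sigma2), size=Tn)

F, G, H = 1.0, 1.0, 1.0
x_f, P_f = 0.0, 1e4                    # initial conditions x_{0|0}, P_{0|0}
loglik = 0.0
x_filt = np.empty(Tn)
for t in range(Tn):
    x_p = F * x_f                      # one-step predictor x_{t|t-1}
    P_p = F * P_f * F + G * Q * G      # MSE matrix P_{t|t-1}
    e = yobs[t] - H * x_p              # prediction error e_t
    v = H * P_p * H + sigma2           # its conditional variance v_{t|t-1}
    loglik += -0.5 * np.log(2 * np.pi * v) - e * e / (2 * v)
    K = P_p * H / v                    # Kalman gain K_t
    x_f = x_p + K * e                  # filtered value x_{t|t}
    P_f = (1.0 - K * H) * P_p          # filtered variance P_{t|t}
    x_filt[t] = x_f
```

Maximizing `loglik` over the unknown variances is how the hyperparameters of such a model are typically estimated.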
Consider the additive time series decomposition

   y_{t} = T_{t} + S_{t} + TD_{t} + u_{t} + x_{t}'β_{t} + ε_{t}

where x_{t} is a (K×1) regressor vector and
β_{t} is a (K×1) time-varying coefficient vector.
Each component has the following constraints:

   ∇^{k}T_{t} = w_{1t},       w_{1t} ∼ N(0,τ_{1}^{2})
   ∇_{L}^{m}S_{t} = w_{2t},   w_{2t} ∼ N(0,τ_{2}^{2})
   u_{t} = Σ_{i=1}^{p} α_{i}u_{t-i} + w_{3t},   w_{3t} ∼ N(0,τ_{3}^{2})
   β_{jt} = β_{j,t-1} + w_{(3+j)t},   w_{(3+j)t} ∼ N(0,τ_{3+j}^{2}),   j = 1, ... ,K

where ∇^{k} = (1-B)^{k} and
∇_{L}^{m} = (1-B^{L})^{m}.
The AR component u_{t} is assumed to be stationary. The
trading day component TD_{t}(i) represents the number of the
ith day of the week in time t.
If k=3, p=3, m=1, and L=12 (monthly data),

   T_{t} = 3T_{t-1} - 3T_{t-2} + T_{t-3} + w_{1t}
   S_{t} = -(S_{t-1} + ⋯ + S_{t-11}) + w_{2t}
   u_{t} = Σ_{i=1}^{3} α_{i}u_{t-i} + w_{3t}

The state vector is defined as

   x_{t} = (T_{t},T_{t-1},T_{t-2},S_{t}, ... ,S_{t-10},u_{t},u_{t-1},u_{t-2},γ_{1t}, ... ,γ_{6t})'
The matrix F is block diagonal:

   F = diag(F_{1},F_{2},F_{3},F_{4})

where

   F_{1} = [ 3 -3 1
             1  0 0
             0  1 0 ]

   F_{2} = [ -1'    -1
             I_{10}  0 ]

   F_{3} = [ α_{1} α_{2} α_{3}
             1     0     0
             0     1     0     ]

   F_{4} = I_{6}

   1' = (1,1, ... ,1)   (a ten-dimensional row vector)
The matrix G can be denoted as

   G = [ g_{1} 0     0
         0     g_{2} 0
         0     0     g_{3}
         0     0     0     ]

where g_{1} = g_{3} = (1,0,0)' and g_{2} = (1,0, ... ,0)' is an 11×1 vector.
Finally, the matrix H_{t} is time-varying:

   H_{t} = (1,0,0,1,0, ... ,0,1,0,0,h_{t}')

where

   h_{t} = (TD_{t}(1)-TD_{t}(7), ... ,TD_{t}(6)-TD_{t}(7))'
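Assembling the example system in code makes the dimensions concrete (state dimension 3 + 11 + 3 + 6 = 23). The AR coefficients and the trading day counts in `h_t` below are illustrative placeholders, and `blockdiag` is a hypothetical helper:

```python
import numpy as np

def blockdiag(*blocks):
    """Place the given blocks along the diagonal of a zero matrix."""
    out = np.zeros((sum(b.shape[0] for b in blocks),
                    sum(b.shape[1] for b in blocks)))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

alpha = np.array([0.3, -0.2, 0.1])     # illustrative AR(3) coefficients

F1 = np.array([[3.0, -3.0, 1.0],       # trend: (1 - B)^3 T_t = w_1t
               [1.0,  0.0, 0.0],
               [0.0,  1.0, 0.0]])
F2 = np.vstack([-np.ones((1, 11)),     # seasonal: S_t = -(S_{t-1}+...+S_{t-11}) + w_2t
                np.hstack([np.eye(10), np.zeros((10, 1))])])
F3 = np.vstack([alpha,                 # AR(3) companion block
                np.eye(3)[:2]])
F4 = np.eye(6)                         # trading day coefficients are constant

F = blockdiag(F1, F2, F3, F4)          # 23 x 23 transition matrix

# G routes the disturbances w_1t, w_2t, w_3t into T_t, S_t, u_t
G = np.zeros((23, 3))
G[0, 0] = G[3, 1] = G[14, 2] = 1.0

# H_t reads off T_t, S_t, u_t and appends h_t' (illustrative TD_t(i) - TD_t(7) values)
h_t = np.array([1.0, 0.0, -1.0, 0.0, 1.0, 0.0])
H_t = np.concatenate([[1.0, 0.0, 0.0],
                      [1.0] + [0.0] * 10,
                      [1.0, 0.0, 0.0],
                      h_t])
```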
Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.