
The SYSLIN Procedure

A brief description of the methods used by the SYSLIN procedure follows. For more information on these methods, see the references at the end of this chapter.

There are two fundamental methods of estimation for simultaneous equations: least squares and maximum likelihood. Within each of these categories there are two approaches: single-equation methods and system methods. 2SLS, 3SLS, and IT3SLS use the least squares method; LIML and FIML use the maximum likelihood method. 2SLS and LIML are single-equation methods, which means that overidentifying restrictions in other equations are not taken into account in estimating parameters in a particular equation. (See "Over Identification Restrictions" in the section "Computational Details" later in this chapter for more information.) As a result, 2SLS and LIML estimates are not asymptotically efficient. The system methods are 3SLS, IT3SLS, and FIML. These methods use information concerning the endogenous variables in the system and take into account error covariances across equations; hence they are asymptotically efficient in the absence of specification error.

K-class estimation is a class of estimation methods that includes the 2SLS, OLS, LIML, and MELO methods as special cases. A *K*-value less than 1 is recommended but not required.

MELO is a Bayesian K-class estimator. It yields estimates that can be expressed as a matrix weighted average of the OLS and 2SLS estimates.

The SUR and ITSUR methods use information about contemporaneous correlation among error terms across equations in an attempt to improve the efficiency of parameter estimates.
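The SUR idea can be sketched in a few lines of NumPy. This is an illustrative sketch on invented data, not PROC SYSLIN's implementation: fit each equation by OLS, estimate the cross-equation error covariance from the OLS residuals, and then apply feasible GLS to the stacked system.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Two invented equations with different regressors and errors that are
# contemporaneously correlated across equations (correlation 0.7).
x1 = rng.normal(size=(n, 1))
x2 = rng.normal(size=(n, 1))
errs = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=n)
y1 = 2.0 * x1[:, 0] + errs[:, 0]     # true coefficients: 2.0 and -1.0
y2 = -1.0 * x2[:, 0] + errs[:, 1]

# Stack the system as y = X b + e with block-diagonal X.
X = np.block([[x1, np.zeros((n, 1))],
              [np.zeros((n, 1)), x2]])
y = np.concatenate([y1, y2])

# Step 1: equation-by-equation OLS, then estimate the cross-equation
# error covariance matrix S from the OLS residuals.
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
resid = (y - X @ b_ols).reshape(2, n)
S = resid @ resid.T / n

# Step 2: feasible GLS with weight matrix inv(S) kron I_n.
W = np.kron(np.linalg.inv(S), np.eye(n))
b_sur = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

When the cross-equation error correlation is zero, or when both equations contain the same regressors, this GLS step reduces to equation-by-equation OLS.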

The 2SLS method substitutes Ŷ for *Y*, which results in consistent estimates. In 2SLS, the instrumental variables are used as regressors to obtain the projected value Ŷ, which is then substituted for *Y*. Normally, the predetermined variables of the system are used as the instruments. It is possible to use variables other than predetermined variables from your system of equations as instruments; however, the estimation may not be as efficient. For consistent estimates, the instruments must be uncorrelated with the residual and correlated with the endogenous variable.
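The two stages can be sketched with NumPy on simulated data (the system, the instrument, and all coefficient values below are invented for illustration, not taken from the SYSLIN documentation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented system: z is predetermined, y is endogenous because its
# reduced-form error u also enters the structural equation for w.
z = rng.normal(size=n)
u = rng.normal(size=n)
e = 0.8 * u + rng.normal(size=n)   # structural error, correlated with u
y = 2.0 * z + u                    # first-stage (reduced-form) equation
w = 1.5 * y + e                    # structural equation; true beta = 1.5

Y = y.reshape(-1, 1)
Z = z.reshape(-1, 1)

# Stage 1: regress the endogenous regressor on the instrument(s) to
# obtain the projected value Y_hat.
Y_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ Y)

# Stage 2: substitute Y_hat for Y and apply OLS.
beta_2sls = np.linalg.solve(Y_hat.T @ Y_hat, Y_hat.T @ w)[0]
beta_ols = np.linalg.solve(Y.T @ Y, Y.T @ w)[0]   # inconsistent here
```

Because z is uncorrelated with e but correlated with y, beta_2sls converges to the true coefficient while beta_ols does not.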

K-class estimators are instrumental variable estimators where the first-stage predicted values take a special form: Ŷ_k = (1 − *k*)Y + *k*Ŷ for a specified value *k*. The probability limit of *k* must equal 1 for consistent parameter estimates.
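A small sketch (with an invented one-equation setup) shows how this mixing form nests OLS and 2SLS: k = 0 leaves Y unchanged, and k = 1 replaces it entirely by the first-stage projection Ŷ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented setup: one endogenous regressor y, one instrument z.
z = rng.normal(size=n)
u = rng.normal(size=n)
y = z + u
w = 1.0 * y + 0.5 * u + rng.normal(size=n)   # true coefficient = 1.0

Y = y.reshape(-1, 1)
Z = z.reshape(-1, 1)
Y_proj = Z @ np.linalg.solve(Z.T @ Z, Z.T @ Y)   # first-stage projection

def k_class(k):
    # First-stage predicted values of the K-class form
    # (1 - k) * Y + k * Y_hat, used as the instrument for Y.
    Yk = (1 - k) * Y + k * Y_proj
    return float(np.linalg.solve(Yk.T @ Y, Yk.T @ w)[0])

beta_ols = k_class(0.0)    # k = 0 reproduces OLS
beta_2sls = k_class(1.0)   # k = 1 reproduces 2SLS
```

In this simulation beta_2sls is close to the true coefficient, while beta_ols carries the endogeneity bias; intermediate values of k correct the bias only partially.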

The LIML method results in consistent estimates that are exactly equal to 2SLS estimates when an equation is exactly identified. LIML estimates can be viewed as least variance ratio estimators or as maximum likelihood estimators. LIML involves minimizing the ratio λ = (*rvar*_*eq*)/(*rvar*_*sys*), where *rvar*_*eq* is the residual variance associated with regressing the weighted endogenous variables on all predetermined variables appearing in that equation, and *rvar*_*sys* is the residual variance associated with regressing weighted endogenous variables on all predetermined variables in the system. The K-class interpretation of LIML is that *k* = λ. Unlike OLS and 2SLS, where *k* is 0 and 1, respectively, *k* is stochastic in the LIML method.

The MELO method computes the minimum expected loss estimator. The MELO method computes estimates that "minimize the posterior expectation of generalized quadratic loss functions for structural coefficients of linear structural models" (Judge et al. 1985, 635). Other frequently used K-class estimators may not have finite moments under some commonly encountered circumstances and hence there can be infinite risk relative to quadratic and other loss functions. MELO estimators have finite second moments and hence finite risk.

One way of comparing K-class estimators is to note that when *k* = 1, the correlation between the regressor and the residual is completely corrected for; in all other cases, it is only partially corrected for.

Theoretically, SUR parameter estimates will always be at least as efficient as OLS in large samples, provided that your equations are correctly specified. However, in small samples the need to estimate the covariance matrix from the OLS residuals increases the sampling variability of the SUR estimates, and this effect can cause SUR to be less efficient than OLS. If the sample size is small and the across-equation correlations are small, then OLS should be preferred to SUR. The consequences of specification error are also more serious with SUR than with OLS.

The 3SLS method combines the ideas of the 2SLS and SUR methods. Like 2SLS, the 3SLS method uses Ŷ instead of *Y* for endogenous regressors, which results in consistent estimates. Like SUR, the 3SLS method takes the cross-equation error correlations into account to improve large sample efficiency. For 3SLS, the 2SLS residuals are used to estimate the cross-equation error covariance matrix.
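These three stages can be sketched on a simulated two-equation system. This is a textbook sketch with invented names and parameter values, not PROC SYSLIN's actual computation (which also handles weights and restrictions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Invented simultaneous system, simulated through its reduced form so
# that both structural errors feed both endogenous variables:
#   y1 = g*y2 + b1*z1 + e1,   y2 = d*y1 + b2*z2 + e2
g, d, b1, b2 = 0.5, 0.3, 1.0, 1.0
z1, z2 = rng.normal(size=(2, n))
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
det = 1.0 - g * d
y1 = (b1 * z1 + g * b2 * z2 + e[0] + g * e[1]) / det
y2 = (d * b1 * z1 + b2 * z2 + d * e[0] + e[1]) / det

# Stacked regressors: equation 1 uses (y2, z1), equation 2 uses (y1, z2).
X1 = np.column_stack([y2, z1])
X2 = np.column_stack([y1, z2])
X = np.block([[X1, np.zeros((n, 2))],
              [np.zeros((n, 2)), X2]])
y = np.concatenate([y1, y2])

# Projection onto the predetermined variables (the instruments).
Z = np.column_stack([z1, z2])
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
Xh = np.block([[P @ X1, np.zeros((n, 2))],
               [np.zeros((n, 2)), P @ X2]])

# Stages 1-2: per-equation 2SLS, then the cross-equation covariance
# matrix of the 2SLS residuals.
b_2sls = np.linalg.solve(Xh.T @ X, Xh.T @ y)
resid = (y - X @ b_2sls).reshape(2, n)
S = resid @ resid.T / n

# Stage 3: GLS with inv(S) kron I_n applied to the projected regressors.
W = np.kron(np.linalg.inv(S), np.eye(n))
b_3sls = np.linalg.solve(Xh.T @ W @ X, Xh.T @ W @ y)
```

The estimate vector is ordered (g, b1, d, b2); with both equations exactly identified, b_3sls and b_2sls coincide asymptotically, and the GLS weighting pays off when there are overidentifying restrictions.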

The SUR and 3SLS methods can be iterated by recomputing the estimate of the cross-equation covariance matrix from the SUR or 3SLS residuals and then computing new SUR or 3SLS estimates based on this updated covariance matrix estimate. Continuing this iteration until convergence produces ITSUR or IT3SLS estimates.
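For the SUR case, the iteration can be sketched as a loop that alternates between re-estimating the covariance matrix from the current residuals and re-solving the GLS problem (invented data and values, shown for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Invented SUR-style setup: two equations with correlated errors.
x1 = rng.normal(size=(n, 1))
x2 = rng.normal(size=(n, 1))
errs = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
y = np.concatenate([1.0 * x1[:, 0] + errs[:, 0],
                    3.0 * x2[:, 0] + errs[:, 1]])
X = np.block([[x1, np.zeros((n, 1))],
              [np.zeros((n, 1)), x2]])

b = np.linalg.solve(X.T @ X, X.T @ y)        # start from OLS
for _ in range(100):
    resid = (y - X @ b).reshape(2, n)
    S = resid @ resid.T / n                  # updated covariance estimate
    W = np.kron(np.linalg.inv(S), np.eye(n))
    b_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    if np.max(np.abs(b_new - b)) < 1e-8:     # converged: ITSUR estimate
        b = b_new
        break
    b = b_new
```

Replacing the OLS starting residuals with 2SLS residuals and the raw regressors with their instrument projections turns the same loop into the IT3SLS iteration.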

Note: the RESTRICT, SRESTRICT, TEST, and STEST statements are not supported when the FIML method is used.

In practice, models are never perfectly specified. It is a matter of judgment whether the misspecification is serious enough to warrant avoidance of system methods.

Another factor to consider is sample size. With small samples, 2SLS may be preferred to 3SLS. In general, it is difficult to say much about the small sample properties of K-class estimators because this depends on the regressors used.

LIML and FIML are invariant to the normalization rule imposed but are computationally more expensive than 2SLS or 3SLS.

If the reason for contemporaneous correlation among errors across equations is a common omitted variable, it is not necessarily best to apply SUR. SUR parameter estimates are more sensitive to specification error than OLS. OLS may produce better parameter estimates under these circumstances. SUR estimates are also affected by the sampling variation of the error covariance matrix. There is some evidence from Monte Carlo studies that SUR is less efficient than OLS in small samples.


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.