Introduction to Analysis-of-Variance Procedures

Linear Hypotheses

When models are expressed in the framework of linear models, hypothesis tests are expressed in terms of linear functions of the parameters. For example, you may want to test that \beta_2 - \beta_3 = 0. In general, the coefficients for a linear hypothesis are some set of L values:

H_0\colon L_0 \beta_0 + L_1 \beta_1 + \cdots + L_k \beta_k = 0
Several of these linear functions can be combined to make one joint test. These tests can be expressed in one matrix equation:
H_0\colon L\beta = 0
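As a concrete illustration, here is a minimal NumPy sketch (not SAS code) of what the coefficient matrix L looks like; the four-element parameter vector (\beta_0, \beta_1, \beta_2, \beta_3) is a hypothetical example:

    import numpy as np

    # Parameter vector assumed to be (beta_0, beta_1, beta_2, beta_3).

    # Single hypothesis beta_2 - beta_3 = 0: one row of coefficients.
    L_single = np.array([[0.0, 0.0, 1.0, -1.0]])

    # Joint hypothesis beta_1 = beta_2 = beta_3: stack two rows, so that
    # H_0: L beta = 0 tests both contrasts at once.
    L_joint = np.array([
        [0.0, 1.0, -1.0, 0.0],   # beta_1 - beta_2 = 0
        [0.0, 0.0, 1.0, -1.0],   # beta_2 - beta_3 = 0
    ])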
For each linear hypothesis, a sum of squares (SS) due to that hypothesis can be constructed. These sums of squares can be calculated either as a quadratic form of the estimates
SS(L\beta = 0) = (Lb)' (L(X'X)^{-}L')^{-1} (Lb)
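This quadratic form can be computed directly. The sketch below assumes a hypothetical overparameterized one-way design with three groups, uses the Moore-Penrose pseudoinverse as the generalized inverse (X'X)^{-}, and tests \beta_2 - \beta_3 = 0; here b is a solution of the normal equations:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical one-way layout: 3 groups of 5 observations.
    # Columns: intercept plus one indicator per group, so X'X is
    # singular and a generalized inverse is required.
    group = np.repeat([1, 2, 3], 5)
    X = np.column_stack(
        [np.ones(15), group == 1, group == 2, group == 3]
    ).astype(float)
    y = np.array([0.0, 1.0, 3.0])[group - 1] + rng.normal(size=15)

    XtX_ginv = np.linalg.pinv(X.T @ X)  # (X'X)^-, Moore-Penrose inverse
    b = XtX_ginv @ X.T @ y              # a solution of the normal equations

    # L beta = beta_2 - beta_3, an estimable contrast even though the
    # individual betas are not estimable in this parameterization.
    L = np.array([[0.0, 0.0, 1.0, -1.0]])

    Lb = L @ b
    SS = float(Lb @ np.linalg.inv(L @ XtX_ginv @ L.T) @ Lb)
    print(SS)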
or, equivalently, as the increase in sums of squares for error (SSE) for the model constrained by the null hypothesis
SS(L\beta = 0) = SSE(\text{constrained}) - SSE(\text{full})
This SS is then divided by its degrees of freedom and used as the numerator of an F statistic.
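To see that the two routes agree and how the F statistic is formed, the sketch below (same hypothetical design as above) imposes \beta_2 = \beta_3 by pooling groups 2 and 3, takes the increase in SSE as the hypothesis SS, and divides by q = 1 hypothesis degree of freedom over the error mean square:

    import numpy as np

    rng = np.random.default_rng(0)

    # Same hypothetical one-way layout: 3 groups of 5 observations.
    group = np.repeat([1, 2, 3], 5)
    X_full = np.column_stack(
        [np.ones(15), group == 1, group == 2, group == 3]
    ).astype(float)
    y = np.array([0.0, 1.0, 3.0])[group - 1] + rng.normal(size=15)

    def sse(X, y):
        """Residual sum of squares of the least-squares fit."""
        b = np.linalg.pinv(X.T @ X) @ X.T @ y
        r = y - X @ b
        return float(r @ r)

    # Constrained model: beta_2 = beta_3 imposed by pooling the
    # group-2 and group-3 indicators into one column.
    X_con = np.column_stack(
        [np.ones(15), group == 1, (group == 2) | (group == 3)]
    ).astype(float)

    ss_hyp = sse(X_con, y) - sse(X_full, y)    # SS(L beta = 0)

    q = 1                                      # rows of L
    df_error = len(y) - np.linalg.matrix_rank(X_full)
    F = (ss_hyp / q) / (sse(X_full, y) / df_error)
    print(ss_hyp, F)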


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.