The MULTTEST Procedure

The MULTTEST procedure addresses the multiple testing problem. This problem arises when you perform many hypothesis tests on the same data set. Carrying out multiple tests is often reasonable because of the cost of obtaining data, the discovery of new aspects of the data, and the many alternative statistical methods. However, a negative feature of multiple testing is the greatly increased probability of declaring false significances.

For example, suppose you carry out 10 hypothesis tests at the 5%
level, and you assume that the distributions of the *p*-values from
these tests are uniform and independent. Then, the probability of
declaring a particular test significant under its null hypothesis is
0.05, but the probability of declaring at least 1 of the 10 tests
significant is 0.401. If you perform 20 hypothesis tests, the
latter probability increases to 0.642. These high chances
illustrate the danger in multiple testing.
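The figures above follow from independence: with k independent tests, each carried out at level α under its null hypothesis, the chance of at least one false significance is 1 - (1 - α)^k. A quick check of the arithmetic (plain Python for illustration; this computation is not part of PROC MULTTEST):

```python
# Probability of declaring at least one of k independent tests
# significant when every null hypothesis is true and each test
# is carried out at level alpha.
def familywise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

print(round(familywise_error(10), 3))  # 0.401
print(round(familywise_error(20), 3))  # 0.642
```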

PROC MULTTEST approaches the multiple testing problem by adjusting
the *p*-values from a family of hypothesis tests. An
adjusted *p*-value is defined as the smallest significance level for
which the given hypothesis would be rejected, when the entire family
of tests is considered. The decision rule is to reject the null
hypothesis when the adjusted *p*-value is less than α; in
most cases, this procedure controls the *familywise error
rate* at or below the α level.
PROC MULTTEST offers the following *p*-value adjustments:

- Bonferroni
- Sidak
- stepdown
- Hochberg's method
- Hochberg and Benjamini's method
- bootstrap
- permutation

The Bonferroni and Sidak adjustments are simple functions of the raw
*p*-values. They are computationally quick, but they can be too
conservative. Stepdown methods remove some of this conservatism, as do
the step-up methods of Hochberg (1988). The bootstrap and
permutation adjustments resample the data with and without
replacement, respectively, to approximate the distribution of the
minimum *p*-value of all tests. This distribution is then used to
adjust the individual raw *p*-values. The bootstrap and permutation
methods are computationally intensive but appealing in that, unlike
the other methods, correlations and distributional characteristics
are incorporated into the adjustments (Westfall and Young 1989,
1993).
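As a concrete sketch of the simpler adjustments, the following plain-Python functions (illustrative only, not PROC MULTTEST syntax) compute the Bonferroni, Sidak, and stepdown Bonferroni adjustments; the stepdown variant shown here is the Bonferroni (Holm) form, and a stepdown Sidak analogue works the same way with the Sidak formula in place of the multiplication:

```python
def bonferroni(pvals):
    # Multiply each raw p-value by the number of tests, capped at 1.
    m = len(pvals)
    return [min(1.0, m * p) for p in pvals]

def sidak(pvals):
    # Sidak adjustment: 1 - (1 - p)^m for each raw p-value.
    m = len(pvals)
    return [1.0 - (1.0 - p) ** m for p in pvals]

def stepdown_bonferroni(pvals):
    # Holm's method: sort the raw p-values ascending, multiply the
    # i-th smallest (i = 0, 1, ...) by (m - i), then enforce
    # monotonicity of the adjusted values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

raw = [0.005, 0.011, 0.02, 0.04]
print([round(p, 3) for p in bonferroni(raw)])           # [0.02, 0.044, 0.08, 0.16]
print([round(p, 3) for p in stepdown_bonferroni(raw)])  # [0.02, 0.033, 0.04, 0.04]
```

The example shows why stepdown adjustments are less conservative: only the smallest raw p-value pays the full factor of m, and each subsequent one pays a smaller factor.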

PROC MULTTEST handles data arising from a multivariate one-way ANOVA
model, possibly stratified, with continuous and discrete response
variables; it can also accept raw *p*-values as input data. You can
perform a *t*-test for the mean for continuous data and the
following statistical tests for discrete data:

- Cochran-Armitage (CA) linear trend test
- Freeman-Tukey (FT) double arcsine test
- Peto (PETO) mortality-prevalence (log-rank) test
- Fisher (FISHER) exact test

The CA and PETO tests have exact versions that use permutation distributions and asymptotic versions that use an optional continuity correction. Also, with the exception of the FISHER test, you can use a stratification variable to construct Mantel-Haenszel type tests. All of the previously mentioned tests can be one- or two-sided.
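To make the trend test concrete, here is a plain-Python sketch of the asymptotic Cochran-Armitage test without the continuity correction (illustrative only; the equally spaced scores and the dose-response counts below are hypothetical, and PROC MULTTEST's own syntax is not shown):

```python
import math

def cochran_armitage(events, totals, scores=None):
    """Asymptotic Cochran-Armitage linear trend test (two-sided,
    no continuity correction) for a series of binomial proportions.

    events[i] successes out of totals[i] trials in group i;
    scores default to 0, 1, 2, ... (equally spaced dose levels).
    """
    k = len(events)
    scores = scores if scores is not None else list(range(k))
    n = sum(totals)
    pbar = sum(events) / n
    # Numerator: score-weighted deviations from the pooled proportion.
    t = sum(s * (r - m * pbar) for s, r, m in zip(scores, events, totals))
    # Variance under the null hypothesis of a common proportion.
    var = pbar * (1 - pbar) * (
        sum(m * s * s for s, m in zip(scores, totals))
        - sum(m * s for s, m in zip(scores, totals)) ** 2 / n
    )
    z = t / math.sqrt(var)
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))
    return z, p_two_sided

# Hypothetical tumor counts rising with dose: 2/50, 5/50, 10/50, 14/50
z, p = cochran_armitage([2, 5, 10, 14], [50, 50, 50, 50])
# z is about 3.58, so p falls well below 0.001: strong evidence of trend
```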

As in the GLM procedure, you can specify linear contrasts that
compare means or proportions of the treated groups. The output
contains summary statistics and regular and multiplicity-adjusted
*p*-values. You can create output data sets containing raw
and adjusted *p*-values, test statistics and other intermediate
calculations, permutation distributions, and
resampling information.
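The permutation adjustment described earlier can be sketched as a minP resampling scheme: permute the group labels, recompute every raw p-value, record the minimum p-value for that permutation, and adjust each raw p-value by the fraction of permutations whose minimum falls at or below it. The following plain-Python sketch makes simplifying assumptions: two groups only, a crude normal-approximation two-sample test standing in for the procedure's actual tests, and hypothetical simulated data; `two_sample_p` and `minp_adjust` are illustrative names, not PROC MULTTEST features.

```python
import math
import random
import statistics

def two_sample_p(x, y):
    # Crude two-sided p-value for a difference in means via a normal
    # approximation (a stand-in for the procedure's actual tests).
    se = math.sqrt(statistics.pvariance(x) / len(x)
                   + statistics.pvariance(y) / len(y))
    if se == 0.0:
        return 1.0
    z = (statistics.fmean(x) - statistics.fmean(y)) / se
    return math.erfc(abs(z) / math.sqrt(2))

def minp_adjust(group1, group2, n_perm=1000, seed=1):
    # group1/group2: one response vector per subject, one entry per test.
    m = len(group1[0])
    column = lambda rows, j: [row[j] for row in rows]
    raw = [two_sample_p(column(group1, j), column(group2, j))
           for j in range(m)]
    pooled = list(group1) + list(group2)
    n1 = len(group1)
    rng = random.Random(seed)
    exceed = [0] * m
    for _ in range(n_perm):
        rng.shuffle(pooled)            # permute the group labels
        a, b = pooled[:n1], pooled[n1:]
        min_p = min(two_sample_p(column(a, j), column(b, j))
                    for j in range(m))
        for j in range(m):
            if min_p <= raw[j]:
                exceed[j] += 1
    adjusted = [c / n_perm for c in exceed]
    # Enforce monotonicity in the order of the raw p-values.
    running = 0.0
    for j in sorted(range(m), key=lambda j: raw[j]):
        running = max(running, adjusted[j])
        adjusted[j] = running
    return raw, adjusted

# Hypothetical data: 3 response variables, 8 subjects per group;
# only the first variable carries a real treatment effect.
rng = random.Random(0)
g1 = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(8)]
g2 = [[rng.gauss(2.0 if j == 0 else 0.0, 1.0) for j in range(3)]
      for _ in range(8)]
raw, adjusted = minp_adjust(g1, g2, n_perm=500)
```

Because the same permuted data set supplies all m p-values in each pass, the correlation among the tests is carried into the adjustment automatically, which is the advantage of the resampling methods noted above.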


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.