Refer to Silverman (1986) or Scott (1992) for an introduction to
nonparametric density estimation.
PROC MODECLUS uses (hyper)spherical
uniform kernels of fixed or variable
radius. The density estimate at a point is computed by dividing the
number of observations within a sphere
centered at the point by the product of the sample size and the volume
of the sphere. The size of the sphere is determined by the smoothing
parameters that you are required to specify.
For fixed-radius kernels, specify the radius as a Euclidean distance
with either the DR= or R= option. For variable-radius kernels, specify
the number of neighbors desired within the sphere with either the DK=
or K= option; the radius is then the smallest radius that contains at
least the specified number of observations including the observation
at which the density is being estimated. If you specify both the DR= or
R= option and the DK= or K= option,
the radius used is the maximum of the two indicated
radii; this is useful for dealing with outliers.
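As an illustration of how these radius rules interact, the following Python function (a hypothetical sketch, not PROC MODECLUS internals) computes the neighborhood radius at one observation from an R=-style fixed radius, a K=-style neighbor count, or both:

```python
import numpy as np

def neighborhood_radius(data, i, r=None, k=None):
    """Radius of the uniform-kernel neighborhood at observation i.

    r: fixed radius (as with the R= option); k: number of neighbors
    (as with the K= option), counting observation i itself.  If both
    are given, the larger of the two radii is used, as the text
    describes for handling outliers.
    """
    # Euclidean distances from observation i to every observation
    dists = np.sqrt(((data - data[i]) ** 2).sum(axis=1))
    radius = 0.0
    if k is not None:
        # smallest radius containing at least k observations (self included)
        radius = np.sort(dists)[k - 1]
    if r is not None:
        radius = max(radius, r)
    return radius
```

For example, with a distant outlier, a K=2 radius stretches to reach the nearest neighbor, while combining R= and K= takes whichever radius is larger.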
It is convenient to refer to the sphere of support of the kernel at
observation x_i as the neighborhood of x_i, and to the observations
within the neighborhood of x_i as the neighbors of x_i. In some
contexts an observation is considered a neighbor of itself, but in
other contexts it is not. The following notation is used in this
chapter:
- x_i - the ith observation
- d(x,y) - the distance between points x and y
- n - the total number of observations in the sample
- n_i - the number of observations within the neighborhood of x_i, including x_i itself
- n_i^- - the number of observations within the neighborhood of x_i, not including x_i itself
- N_i - the set of indices of neighbors of x_i, including i
- N_i^- - the set of indices of neighbors of x_i, not including i
- v_i - the volume of the neighborhood of x_i
- f_i - the estimated density at x_i
- f_{-i} - the cross-validated density estimate at x_i
- C_k - the set of indices of observations assigned to cluster k
- v - the number of variables, or the dimensionality
- s_l - the standard deviation of the lth variable
The estimated density at x_i is

   f_i = n_i / (n v_i)

that is, the number of neighbors of x_i divided by the product of the
sample size and the volume of the neighborhood.
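As a sketch of this computation for a fixed radius, the following hypothetical Python function counts neighbors within the radius and divides by the sample size times the volume of a v-dimensional sphere:

```python
import math
import numpy as np

def uniform_kernel_density(data, radius):
    """Uniform-kernel density estimate f_i = n_i / (n * v_i) at each
    observation, for a fixed radius.

    n_i counts the observations within the radius of x_i, including
    x_i itself; v_i is the volume of a v-dimensional sphere of that
    radius (the same for every observation when the radius is fixed).
    """
    n, v = data.shape
    # volume of a v-dimensional hypersphere of the given radius
    vol = math.pi ** (v / 2) / math.gamma(v / 2 + 1) * radius ** v
    # pairwise Euclidean distances
    dists = np.sqrt(((data[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1))
    n_i = (dists <= radius).sum(axis=1)  # neighbor counts, self included
    return n_i / (n * vol)
```

In one dimension the "sphere" of radius r is an interval of length 2r, so isolated points receive proportionally smaller density estimates than points with close neighbors.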
The density estimates provided by uniform kernels are not quite as
good as those provided by some other types of kernels, but they are quite
satisfactory for clustering. The significance tests for the
number of clusters require the use of fixed-size uniform kernels.
There is no simple answer to the question of which smoothing
parameter to use (Silverman 1986, pp. 43-61, 84-88).
It is usually necessary to try several different smoothing
parameters. A reasonable first guess for the K= option is in
the range of 0.1 to 1 times n^(4/(v+4)), smaller values
being suitable for higher dimensionalities.
A reasonable first guess for the R= option in many
coordinate data sets is given by

   [ 2^(v+2) (v+2) Γ(v/2 + 1) / (n v^2) ]^(1/(v+4)) sqrt( Σ_{l=1}^{v} s_l^2 )

which can be computed in a DATA step using the GAMMA function
for Γ(·). This formula is derived under the assumption that the data are
sampled from a multivariate normal distribution; the resulting value
therefore tends to be too large (to oversmooth) if the true distribution
is multimodal.
Robust estimates of the standard deviations may be preferable
if there are outliers.
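The text notes that this first guess can be computed in a DATA step with the GAMMA function; as a language-neutral illustration, here is the same normal-reference computation sketched in Python using math.gamma. The exact constants follow the formula as reconstructed here, so treat them as an assumption rather than a definitive statement of the procedure's rule:

```python
import math

def first_guess_r(n, v, variances):
    """Normal-reference first guess for the R= smoothing parameter.

    n: sample size; v: number of variables; variances: the per-variable
    variances s_l**2.  The constant shrinks as n grows, so larger
    samples get smaller (less smoothed) neighborhoods.
    """
    const = (2 ** (v + 2) * (v + 2) * math.gamma(v / 2 + 1)
             / (n * v ** 2)) ** (1 / (v + 4))
    return const * math.sqrt(sum(variances))
```

Because the guess depends on the variances only through their sum, standardizing the variables (as with the STD option) makes the result a function of n and v alone.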
If the data are distances, the sqrt( Σ_{l=1}^{v} s_l^2 ) factor can be
replaced by an average (mean, trimmed mean, median, root mean square,
and so on) distance divided by sqrt(2). To prevent outliers from
appearing as separate clusters, you can
also specify K=2 or CK=2 or, more generally,
K=m or CK=m for m >= 2, which in most cases forces clusters to have at least m members.
If the variables all have unit variance
(for example, if you specify the STD option),
you can use Table 42.2 to obtain an initial
guess for the R= option.
Table 42.2: Reasonable First Guess for R= for Standardized Data
[Table body not preserved; entries give first-guess R= values indexed by the number of variables.]
One data-based method for choosing the smoothing parameter
is likelihood cross validation (Silverman 1986, pp. 52 -55).
The cross-validated density estimate at an observation is obtained
by omitting the observation from the computations.
The (log) likelihood cross-validation criterion is then computed as

   Σ_{i=1}^{n} log f_{-i}

where f_{-i} is the cross-validated density estimate at x_i.
The suggested smoothing parameter is the one that maximizes
this criterion. With fixed-radius kernels, likelihood cross validation
oversmooths long-tailed distributions; for purposes of
clustering, it tends to undersmooth short-tailed distributions.
With k-nearest-neighbor density estimation, likelihood
cross validation is useless because it almost always indicates K=2.
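As an illustration for fixed-radius kernels, the following hypothetical Python sketch computes the leave-one-out criterion. The (n - 1) divisor in the cross-validated estimate is an assumption on my part; the text says only that the observation is omitted from the computations:

```python
import math
import numpy as np

def cv_log_likelihood(data, radius):
    """Log-likelihood cross-validation criterion for a fixed radius.

    The cross-validated density at x_i omits x_i itself from the
    neighbor count (and, here, from the sample size); the criterion is
    the sum of the log cross-validated densities.
    """
    n, v = data.shape
    vol = math.pi ** (v / 2) / math.gamma(v / 2 + 1) * radius ** v
    dists = np.sqrt(((data[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1))
    n_minus = (dists <= radius).sum(axis=1) - 1   # exclude the observation itself
    f_cv = n_minus / ((n - 1) * vol)              # leave-one-out estimate
    with np.errstate(divide="ignore"):
        # -inf whenever some observation has no neighbors at this radius
        return np.log(f_cv).sum()
```

Evaluating this criterion over a grid of candidate radii and taking the maximizer gives the suggested smoothing parameter; note that any radius leaving some observation with no neighbors scores -inf, which is one way the long-tail oversmoothing bias arises.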
Cascaded density estimates are obtained by computing initial kernel
density estimates and then, at each observation, taking the arithmetic
mean, harmonic mean, or sum of the initial density estimates of the
observations within the neighborhood. The cascaded density estimates
can, in turn, be cascaded, and so on.
Let f_i^(k) be the density estimate at x_i cascaded k times.
For all types of cascading, f_i^(0) = f_i. If the cascading is done by
arithmetic means, then, for k >= 0,

   f_i^(k+1) = (1/n_i) Σ_{j in N_i} f_j^(k)

For harmonic means,

   f_i^(k+1) = [ (1/n_i) Σ_{j in N_i} (f_j^(k))^(-1) ]^(-1)

and for sums,

   f_i^(k+1) = Σ_{j in N_i} f_j^(k)
To avoid cluttering formulas, the symbol f_i is used from now on
to denote the density estimate at x_i, whether cascaded or not,
since the clustering methods and significance tests do not depend
on the degree of cascading.
Cascading increases the smoothness of the estimates with less
computation than would be required by increasing the smoothing
parameters to yield a comparable degree of smoothness.
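The three cascading rules can be sketched as follows in Python; the function and its inputs are illustrative (neighbors[i] is assumed to list the indices of the neighbors of observation i, including i itself), not PROC MODECLUS internals:

```python
import numpy as np

def cascade(estimates, neighbors, times=1, how="arithmetic"):
    """Cascade density estimates over neighborhoods.

    estimates: initial density estimate at each observation;
    neighbors: list of index arrays, neighbors[i] giving the neighbors
    of observation i (including i itself); how: 'arithmetic',
    'harmonic', or 'sum', matching the three variants in the text.
    """
    f = np.asarray(estimates, dtype=float)
    for _ in range(times):
        if how == "arithmetic":
            # arithmetic mean of neighbors' current estimates
            f = np.array([f[idx].mean() for idx in neighbors])
        elif how == "harmonic":
            # harmonic mean: reciprocal of the mean reciprocal
            f = np.array([1.0 / (1.0 / f[idx]).mean() for idx in neighbors])
        else:  # "sum"
            f = np.array([f[idx].sum() for idx in neighbors])
    return f
```

Each pass reuses the neighbor lists already built for the kernel estimates, which is why cascading is cheaper than enlarging the smoothing parameters to achieve comparable smoothness.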
For population densities with bounded support and discontinuities
at the boundaries, cascading improves estimates near the boundaries.
Cascaded estimates, especially those using sums, may be more
sensitive to the local covariance structure of the distribution
than are the uncascaded kernel estimates.
Cascading seems to be useful for detecting very nonspherical clusters.
Cascading was suggested by Tukey and Tukey (1981, p. 237).
Additional research into the properties of cascaded density estimates
is needed.
Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.