 The SIM2D Procedure

## Computational and Theoretical Details of Spatial Simulation

### Introduction

There are a number of approaches to simulating spatial random fields or, more generally, simulating sets of dependent random variables. These include sequential indicator methods, turning bands, and the Karhunen-Loève expansion. Refer to Christakos (1992, Chapter 8) and Deutsch and Journel (1992, Chapter V) for details.

A particularly simple method available for Gaussian spatial random fields is the LU decomposition method. This method is computationally efficient: for a given covariance matrix C, the decomposition C = LL^T is computed once, and the simulation proceeds by repeatedly generating a vector of independent N(0,1) random variables and multiplying by the matrix L.

One problem with this technique is its memory requirement: the full data and grid covariance matrix must be held in core. While this is especially limiting in the three-dimensional case, you can use PROC SIM2D, which handles only two-dimensional data, for moderately sized simulation problems.
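The mechanics of the LU (Cholesky) method can be sketched outside SAS. The following Python fragment is illustrative only, with a made-up 3×3 covariance matrix standing in for the matrix that PROC SIM2D builds from a covariance model; the key point is that the factorization happens once while each realization costs only one matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up covariance matrix C for three locations (symmetric,
# positive definite); in practice C comes from a covariance model.
C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])

# Factor once: C = L L^T, with L lower triangular.
L = np.linalg.cholesky(C)

# Each realization reuses L: draw independent N(0,1) variables W
# and transform them, so Z = L W has covariance C.
for _ in range(5):
    W = rng.standard_normal(3)
    Z = L @ W
```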

### Theoretical Development

It is a simple matter to produce an N(0,1) random number, and by stacking k N(0,1) random numbers in a column vector, you can obtain a vector W = (W1, W2, ..., Wk)^T with independent standard normal components. The meaning of the terms independence and randomness in the context of a deterministic algorithm required for the generation of these numbers is a little subtle; refer to Knuth (1981, Vol. 2, Chapter 3) for details.

Rather than W, what is required is the generation of a vector Z ~ N_k(0, C), that is,

Z = (Z1, Z2, ..., Zk)^T

with covariance matrix

C_ij = E(Z_i Z_j)

If the covariance matrix is symmetric and positive definite, it has a Cholesky root L such that C can be factored as

C = LL^T

where L is lower triangular. Refer to Ralston and Rabinowitz (1978, Chapter 9, Section 3-3) for details. The vector Z can be generated by the transformation Z = LW. Note that this is where the assumption of a Gaussian SRF is crucial. When W ~ N_k(0, I_k), then Z = LW is also Gaussian. The mean of Z is

E(Z) = L E(W) = 0

and the variance is

Var(Z) = Var(LW) = E(LWW^T L^T) = L E(WW^T) L^T = LL^T = C
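The identity Var(LW) = LL^T = C can be checked numerically. The sketch below uses an arbitrary positive definite C of my own choosing and compares the sample covariance of many transformed draws with C:

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up 2x2 covariance matrix and its Cholesky root.
C = np.array([[2.0, 0.8],
              [0.8, 1.5]])
L = np.linalg.cholesky(C)

# Draw many independent standard normal vectors W and transform them;
# the sample covariance of Z = L W approaches L L^T = C.
W = rng.standard_normal((2, 200_000))
Z = L @ W
sample_cov = np.cov(Z)
```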

Consider now an SRF Z(s), with spatial covariance function C(h). Fix locations s1, s2, ..., sk, and let Z denote the random vector

Z = (Z(s1), Z(s2), ..., Z(sk))^T

with corresponding covariance matrix

(Cz)_ij = C(s_i - s_j)

Since this covariance matrix is symmetric and positive definite, it has a Cholesky root, and the Z(s_i), i = 1, ..., k, can be simulated as described previously. This is how the SIM2D procedure implements unconditional simulation in the zero-mean case. More generally,

Z(s) = μ(s) + ε(s)

with μ(s) being a quadratic form in the coordinates s = (x,y), and ε(s) being an SRF having the same covariance matrix Cz as previously. In this case, μ(s_i), i = 1, ..., k, is computed once and added to the simulated vector for each realization.
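Unconditional simulation with a quadratic-form mean can be sketched as follows. The exponential covariance C(h) = exp(-|h|) and the mean coefficients are invented for illustration; PROC SIM2D takes the actual covariance model and MEAN statement coefficients from its own syntax.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small 4x4 grid of locations s = (x, y).
xs, ys = np.meshgrid(np.linspace(0.0, 3.0, 4), np.linspace(0.0, 3.0, 4))
s = np.column_stack([xs.ravel(), ys.ravel()])        # k x 2, k = 16
x, y = s[:, 0], s[:, 1]

# An assumed exponential covariance C(h) = exp(-|h|), illustrative only.
h = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)
Cz = np.exp(-h)

# Quadratic-form mean mu(s) = b0 + b1 x + b2 y + b3 x^2 + b4 xy + b5 y^2
# with invented coefficients.
b = [1.0, 0.2, -0.1, 0.05, 0.0, 0.02]
mu = b[0] + b[1]*x + b[2]*y + b[3]*x**2 + b[4]*x*y + b[5]*y**2

# Factor once; each realization adds mu to a fresh simulated vector.
Lz = np.linalg.cholesky(Cz)
Z = mu + Lz @ rng.standard_normal(len(s))
```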

For a conditional simulation, the distribution of

Z = (Z(s1), Z(s2), ..., Z(sk))^T

must be conditioned on the observed data. The relevant general result concerning conditional distributions of multivariate normal random variables is the following. Let X ~ N_m(μ, Σ), where

X = (X1^T, X2^T)^T

and

Σ = | Σ11  Σ12 |
    | Σ21  Σ22 |

The subvector X1 is k×1, X2 is n×1, Σ11 is k×k, Σ22 is n×n, and Σ12 is k×n, with k+n = m. The full vector X is partitioned into the two subvectors X1 and X2, and Σ is similarly partitioned into covariances and cross covariances.

With this notation, the distribution of X1 conditioned on X2 = x2 is N_k(μ̃, Σ̃), with conditional mean

μ̃ = μ1 + Σ12 Σ22^{-1} (x2 - μ2)

and conditional covariance

Σ̃ = Σ11 - Σ12 Σ22^{-1} Σ21
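These two formulas can be evaluated directly. The numbers below are arbitrary (k = 2, n = 1), and the solve calls avoid forming Σ22^{-1} explicitly:

```python
import numpy as np

# Conditional distribution of X1 given X2 = x2 for a partitioned
# multivariate normal; all matrices are made-up illustrations.
mu1 = np.array([0.0, 0.0])                    # k x 1
mu2 = np.array([1.0])                         # n x 1
S11 = np.array([[1.0, 0.3],
                [0.3, 1.0]])                  # k x k
S12 = np.array([[0.5],
                [0.2]])                       # k x n
S22 = np.array([[1.0]])                       # n x n
x2 = np.array([1.4])                          # observed value of X2

# Conditional mean: mu1 + S12 S22^{-1} (x2 - mu2).
mu_cond = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)

# Conditional covariance: S11 - S12 S22^{-1} S21.
S_cond = S11 - S12 @ np.linalg.solve(S22, S12.T)
```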

Refer to Searle (1971, pp. 46-47) for details. The correspondence with the conditional spatial simulation problem is as follows. Let the coordinates of the observed data points be denoted s̃1, s̃2, ..., s̃n, with values z̃1, z̃2, ..., z̃n. Let Z̃ denote the random vector

Z̃ = (Z(s̃1), Z(s̃2), ..., Z(s̃n))^T

The random vector Z̃ corresponds to X2, while Z corresponds to X1. Then Z | Z̃ = z̃ ~ N_k(μ̃, C̃) as in the previous distribution. The matrix

C̃ = C11 - C12 C22^{-1} C21

is again positive definite, so a Cholesky factorization can be performed.

The dimension n for Z̃ is simply the number of nonmissing observations for the VAR= variable; the values z̃1, z̃2, ..., z̃n are the values of this variable. The coordinates s̃1, s̃2, ..., s̃n are also found in the DATA= data set, with the variables corresponding to the x and y coordinates identified in the COORDINATES statement. Note that all VAR= variables use the same set of conditioning coordinates; this fixes the matrix C22 for all simulations.

The dimension k for Z is the number of grid points specified in the GRID statement. Since there is a single GRID statement, this fixes the matrix C11 for all simulations. Similarly, C12 is fixed.

The Cholesky factorization C̃ = LL^T is computed once, as is the mean correction

μ̃ = μ1 + C12 C22^{-1} (z̃ - μ2)

Note that the means μ1 and μ2 are computed using the grid coordinates s1, s2, ..., sk, the data coordinates s̃1, s̃2, ..., s̃n, and the quadratic form specification from the MEAN statement. The simulation is now performed exactly as in the unconditional case. A k×1 vector of independent standard N(0,1) random variables is generated and multiplied by L, and μ̃ is added to the transformed vector. This is repeated N times, where N is the value specified for the NR= option.
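The whole conditional procedure can be sketched in a few lines. The exponential covariance, the coordinates, and the observed values below are all invented for illustration, and the zero-mean case is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(a, b):
    # An assumed exponential covariance C(h) = exp(-|h|), illustrative only.
    h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.exp(-h)

grid = np.array([[0.0, 0.0], [0.0, 1.0],
                 [1.0, 0.0], [1.0, 1.0]])      # k = 4 grid points
data_s = np.array([[0.5, 0.5], [0.2, 0.8]])    # n = 2 data coordinates
data_z = np.array([1.1, 0.7])                  # observed values (invented)
mu_grid = np.zeros(len(grid))                  # zero-mean case
mu_data = np.zeros(len(data_s))

C11 = cov(grid, grid)        # grid-grid covariance, k x k
C12 = cov(grid, data_s)      # grid-data cross covariance, k x n
C22 = cov(data_s, data_s)    # data-data covariance, n x n

# Computed once: the mean correction and the Cholesky root of the
# conditional covariance C11 - C12 C22^{-1} C21.
mu_cond = mu_grid + C12 @ np.linalg.solve(C22, data_z - mu_data)
C_cond = C11 - C12 @ np.linalg.solve(C22, C12.T)
L = np.linalg.cholesky(C_cond)

# Repeated once per realization (the NR= count in PROC SIM2D).
realizations = [mu_cond + L @ rng.standard_normal(len(grid))
                for _ in range(10)]
```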

### Computational Details

In the computation of μ̃ and C̃ described in the previous section, the inverse C22^{-1} is never actually computed; an equation of the form

C22 A = B

is solved for A using a modified Gaussian elimination algorithm that takes advantage of the fact that C22 is symmetric with a constant diagonal Cz(0) that is larger than all off-diagonal elements. The SINGULAR= option pertains to this algorithm. The value specified for the SINGULAR= option is scaled by Cz(0) before comparison with the pivot element.
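The same solve-rather-than-invert idea looks like this in outline, with np.linalg.solve standing in for the procedure's modified Gaussian elimination and a made-up matrix exhibiting the constant dominant diagonal:

```python
import numpy as np

rng = np.random.default_rng(4)

# A made-up symmetric matrix with the structure the algorithm exploits:
# constant diagonal Cz(0) larger than every off-diagonal element.
M = np.abs(rng.standard_normal((5, 5)))
C22 = 0.1 * (M + M.T)
np.fill_diagonal(C22, 2.0)                 # Cz(0) = 2.0 on the diagonal
B = rng.standard_normal((5, 3))

# Solve C22 A = B for A directly instead of forming C22^{-1} and
# multiplying; this is cheaper and numerically more stable.
A = np.linalg.solve(C22, B)
```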

#### Memory Usage

For conditional simulations, the largest matrix held in core at any one time depends on the number of grid points and data points. Using the previous notation, the data-data covariance matrix C22 is n×n, where n is the number of nonmissing observations for the VAR= variable in the DATA= data set. The grid-data cross covariance C12 is k×n, where k is the number of grid points. The grid-grid covariance C11 is k×k. The maximum memory required at any one time for storing these matrices is

max(k(k+1), n(n+1) + 2nk) × sizeof(double)

There are additional memory requirements that add to the total memory usage, but usually these matrix calculations dominate, especially when the number of grid points is large.
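A small helper (hypothetical, not part of PROC SIM2D) makes the bound concrete by evaluating the formula above:

```python
def sim2d_matrix_bytes(k, n, sizeof_double=8):
    """Peak matrix storage in bytes from the formula above:
    max(k(k+1), n(n+1) + 2nk) * sizeof(double)."""
    return max(k * (k + 1), n * (n + 1) + 2 * n * k) * sizeof_double

# For example, a 100x100 grid (k = 10,000) conditioned on n = 500 data
# points needs roughly 800 MB for the covariance matrices alone.
print(sim2d_matrix_bytes(10_000, 500))  # → 800080000
```

Because the k(k+1) term grows quadratically in the number of grid points, the grid size usually dominates the memory budget.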
