
Nonlinear Optimization Examples

The eighth element of the *opt* argument specifies
the type of finite difference approximation used to
compute first- or second-order derivatives and whether
the finite difference intervals, *h*, should be computed
by an algorithm of Gill, Murray, Saunders, and Wright (1983).
The value of *opt*[8] is a two-digit integer, *ij*.

- If *opt*[8] is missing or *j*=0, the fast but not very precise forward-difference formulas are used; if *j*=1, the numerically more expensive central-difference formulas are used.
- If *opt*[8] is missing or *i*=0, the finite-difference intervals *h* are based only on the information of *par*[8] or *par*[9], which specify the number of accurate digits used in evaluating the objective function and the nonlinear constraints, respectively. If *i*=1, 2, or 3, the intervals are computed with an algorithm by Gill, Murray, Saunders, and Wright (1983). For *i*=1, the interval is based on the behavior of the objective function; for *i*=2, the interval is based on the behavior of the nonlinear constraint functions; and for *i*=3, the interval is based on the behavior of both the objective function and the nonlinear constraint functions.
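The precision gap between the two formula types can be seen in a small sketch (plain Python rather than PROC IML; the test function and step size are only illustrative):

```python
import math

def forward_diff(f, x, h):
    # Forward-difference formula: (f(x+h) - f(x)) / h; truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Central-difference formula: (f(x+h) - f(x-h)) / (2h); truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# f(x) = exp(x), so the exact derivative at x = 1 is e.
f, x, h = math.exp, 1.0, 1e-4
fwd_err = abs(forward_diff(f, x, h) - math.e)
ctr_err = abs(central_diff(f, x, h) - math.e)
# The central formula needs an extra function evaluation per point
# but is several orders of magnitude more accurate here.
```

This is the trade-off the *j* digit controls: the central formula roughly doubles the evaluation cost in exchange for a much smaller truncation error.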

The forward-difference formulas require the following numbers of additional calls:

- First-order derivatives: *n* additional function calls are needed.
- Second-order derivatives based on function calls only, when the "*grd*" module is not specified (Dennis and Schnabel 1983): for a dense Hessian matrix, *n*+*n*^{2}/2 additional function calls are needed.
- Second-order derivatives based on gradient calls, when the "*grd*" module is specified (Dennis and Schnabel 1983): *n* additional gradient calls are needed.
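As a sketch of the first-order case, a forward-difference gradient needs exactly *n* extra function calls beyond the base-point evaluation (plain Python; the three-variable test function is hypothetical):

```python
import math

calls = 0

def f(x):
    # Hypothetical 3-variable objective; the counter tracks evaluations.
    global calls
    calls += 1
    return x[0] ** 2 + math.sin(x[1]) + x[2] ** 2

def fd_gradient(f, x, h=1e-6):
    # Forward-difference gradient: one f(x) plus n extra calls,
    # one perturbed point per coordinate.
    fx = f(x)
    g = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += h
        g.append((f(xp) - fx) / h)
    return g

g = fd_gradient(f, [1.0, 0.5, -2.0])
# Total evaluations: 1 (base point) + n = 4 for n = 3.
```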

The central-difference formulas require the following numbers of additional calls:

- First-order derivatives: 2*n* additional function calls are needed.
- Second-order derivatives based on function calls only, when the "*grd*" module is not specified (Abramowitz and Stegun 1972): for a dense Hessian matrix, 2*n*+2*n*^{2} additional function calls are needed.
- Second-order derivatives based on gradient calls, when the "*grd*" module is specified: 2*n* additional gradient calls are needed.
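A matching sketch for the central-difference first-order case shows the 2*n* count (plain Python; the two-variable function is hypothetical):

```python
calls = 0

def f(x):
    # Hypothetical 2-variable objective; the counter tracks evaluations.
    global calls
    calls += 1
    return x[0] ** 2 + 3.0 * x[1]

def fd_gradient_central(f, x, h=1e-5):
    # Central-difference gradient: 2n extra calls, two shifted points per
    # coordinate; the base-point value f(x) itself is never needed.
    g = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

g = fd_gradient_central(f, [2.0, 1.0])
# Total evaluations: 2n = 4 for n = 2.
```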

The step sizes *h*_{j}, *j*=1,...,*n*, are defined as follows:

- For the forward-difference approximation of first-order derivatives using only function calls and for second-order derivatives using only gradient calls, *h*_{j} = *η*_{j}^{1/2}(1+|*x*_{j}|).
- For the forward-difference approximation of second-order derivatives using only function calls and for central-difference formulas, *h*_{j} = *η*_{j}^{1/3}(1+|*x*_{j}|).
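These step-size rules can be sketched in plain Python (a sketch only; here *η* is taken as the machine precision, one of the choices the text describes):

```python
import sys

def step_sizes(x, eta=sys.float_info.epsilon, order="first"):
    # h_j = eta**(1/2) * (1 + |x_j|): forward-difference first-order
    #   derivatives from function calls, second-order from gradient calls.
    # h_j = eta**(1/3) * (1 + |x_j|): forward-difference second-order
    #   derivatives from function calls, and all central-difference formulas.
    p = 0.5 if order == "first" else 1.0 / 3.0
    return [eta ** p * (1.0 + abs(xj)) for xj in x]

h_first = step_sizes([0.0, 10.0])
h_second = step_sizes([0.0, 10.0], order="second")
```

Note that the interval grows with |*x*_{j}|, so large parameter values get proportionally larger perturbations.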

If the algorithm of Gill, Murray, Saunders, and Wright
(1983) is not used to compute *η*_{j}, a constant value
*η* = *η*_{j} is used, depending on the value of *par*[8].

- If the number of accurate digits is specified by *par*[8]=*k*_{1}, then *η* is set to 10^{-k_{1}}.
- If *par*[8] is not specified, *η* is set to the machine precision, *ε*.
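The choice of *η* can be sketched as follows (plain Python; the helper name `choose_eta` is hypothetical):

```python
import sys

def choose_eta(k=None):
    # par[8] = k gives the number of accurate digits in the objective
    # function, so eta = 10**(-k); with par[8] unspecified, eta falls
    # back to the machine precision epsilon.
    return 10.0 ** (-k) if k is not None else sys.float_info.epsilon

eta_six_digits = choose_eta(6)   # par[8] = 6
eta_default = choose_eta()       # par[8] not specified
```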

The optimization technique switches automatically from the forward-difference formulas to the more precise central-difference formulas when one of the following two criteria is satisfied:

- The absolute maximum gradient element is less than or equal to 100 times the ABSGTOL threshold.
- The term on the left of the GTOL criterion is less than or equal to max(1E-6, 100×GTOL threshold). The 1E-6 ensures that the switch is performed even if you set the GTOL threshold to zero.
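The switching rule can be written as a simple predicate (plain Python sketch; the argument names are illustrative, not IML identifiers):

```python
def switch_to_central(max_abs_grad, gtol_term, absgtol, gtol):
    # Switch from forward- to central-difference formulas when either
    # criterion holds; the 1e-6 floor keeps the switch reachable even
    # when the GTOL threshold is set to zero.
    return (max_abs_grad <= 100.0 * absgtol) or \
           (gtol_term <= max(1e-6, 100.0 * gtol))
```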

Many applications need considerably more time for computing second-order derivatives than for computing first-order derivatives. In such cases, you should use a quasi-Newton or conjugate gradient technique.

If you specify a vector, *c*, of *nc* nonlinear constraints
with the "*nlc*" module but you do not specify the
"*jacnlc*" module, the first-order formulas can be
used to compute finite-difference approximations of the
*nc* × *n* Jacobian matrix of the nonlinear constraints.
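A forward-difference approximation of that *nc* × *n* Jacobian can be sketched in plain Python (the two constraints below are a hypothetical example, not from the text):

```python
def fd_jacobian(c, x, h=1e-6):
    # Forward-difference Jacobian of a constraint vector c(x):
    # one base evaluation of c plus n perturbed evaluations.
    cx = c(x)
    nc, n = len(cx), len(x)
    J = [[0.0] * n for _ in range(nc)]
    for j in range(n):
        xp = list(x)
        xp[j] += h
        cj = c(xp)
        for i in range(nc):
            J[i][j] = (cj[i] - cx[i]) / h
    return J

def c(x):
    # Hypothetical constraints: c1 = x1*x2 - 1, c2 = x1^2 + x2^2 - 4.
    return [x[0] * x[1] - 1.0, x[0] ** 2 + x[1] ** 2 - 4.0]

J = fd_jacobian(c, [1.0, 2.0])
# Exact Jacobian at (1, 2): [[2, 1], [2, 4]].
```

Each column of the Jacobian reuses the same perturbed point, so all *nc* constraints cost only one extra evaluation of the constraint vector per parameter.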

**Note:** If you are not able to specify analytic
derivatives and if the finite-difference approximations
provided by the subroutines are not good enough to solve
your optimization problem, you may be able to implement
better finite-difference approximations with the "*grd*",
"*hes*", "*jac*", and "*jacnlc*" module arguments.


Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.